\section{Introduction}
In addition to being diffraction- and dispersion-free, Bessel beams carry orbital angular momentum (OAM) \cite{padgett,bliokh,schulze}, making them ideal for many applications, such as high-resolution microscopy \cite{kozawa1}, optical trapping and tweezing \cite{yao}, precision drilling \cite{kozawa2,duocastella} and laser acceleration \cite{salamin-pla}. Opening up the Hilbert space of OAM to information coding makes Bessel beams potential candidates for utility in data transfer and optical communications \cite{dudley}.
The ultra-short (temporally or spatially) and tightly-focused analogue of a non-diffracting and non-dispersing laser pulse is often referred to as a laser bullet \cite{durnin1,durnin2,zhong,trapani,siviloglou}. Such objects have attracted considerable attention over the past decade or so \cite{chong,naidoo,zong,urrutia,mendoza,volke}. For example, Airy-Bessel bullets \cite{chong} and higher-order Poincar\'e sphere beams \cite{naidoo} have been suggested and realized in laboratory experiments.
A new addition to the list of light bullets is the so-called Bessel-Bessel bullet \cite{salamin-oe,salamin-sr}. Analytic expressions for the electric and magnetic fields of a Bessel-Bessel bullet, propagating in an under-dense plasma of plasma frequency $\omega_p$, have recently been derived. The fields stem from the zeroth-order vector potential (SI units and circular-cylindrical coordinates, $r$, $\theta$ and $z$, are used throughout)
\begin{equation}\label{A}
A(r,\theta,\eta,\zeta) = a_0J_l(k_rr) j_0\left(\frac{\pi\zeta}{L}\right) e^{i(\varphi_0+k_0\zeta+l\theta-\alpha\eta)},
\end{equation}
where $\eta = (z+ct)/2$, $\zeta = z-ct$, $c$ is the speed of light in vacuum, $a_0$ is a constant amplitude, and $J_l$ and $j_0$ are the ordinary and spherical Bessel functions of their given arguments, of integer order $l$ and of order zero, respectively. Furthermore, $\varphi_0$ is a constant initial phase and $k_0 = 2\pi/\lambda_0$ is a central wavenumber for the pulse, corresponding to the central wavelength $\lambda_0$. The bullet described by the vector potential (\ref{A}) is assumed to have an axial (spatial) extension $L\sim c\tau$, where $\tau$ is its temporal full-width-at-half-maximum. On the other hand, a waist radius at focus for the pulse, $w_0$, is determined from $w_0=x_{1,l}/k_r$, where $x_{1,l}$ is the first zero of $J_l$. Finally, in (\ref{A})
\begin{equation}
\alpha = \frac{k_r^2+k_p^2}{2k_0};\quad k_p = \frac{\omega_p}{c};\quad \omega_p = \sqrt{\frac{n_0e^2}{m\varepsilon_0}},
\end{equation}
with $n_0$ the ambient electron density of the plasma, $\varepsilon_0$ the permittivity of the vacuum, and $m$ and $-e$ the mass and charge, respectively, of the electron. Equation (\ref{A}) has been obtained \cite{salamin-oe,salamin-sr,esarey} from solving the wave equations satisfied by the scalar and vector potentials with inhomogeneous terms which, in turn, stem from interaction of the laser pulse with an under-dense plasma, assumed to be linear, with the two potentials linked by the Lorentz gauge. Explicit expressions for the electric and magnetic field components, derived from (\ref{A}) are given by Eqs. (20)-(24) in \cite{salamin-sr}. Replacing $j_0$ in the amplitude of the vector potential (\ref{A}) with an Airy function brings to it formal resemblance with the field amplitude of an Airy-Bessel bullet \cite{siviloglou,chong}.
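As a numerical orientation, the scales defined above can be evaluated for the parameters used in the figures below ($\lambda_0 = 1~\mu$m, $n_0 = 10^{20}$ cm$^{-3}$, $w_0 = 0.9\lambda_0$, $l = 1$). The following Python sketch is illustrative only (it is not part of the derivation); it confirms that the plasma is under-dense, i.e., $k_p < k_0$:

```python
import math

# Physical constants (SI, CODATA values)
c    = 2.99792458e8      # speed of light in vacuum, m/s
e    = 1.602176634e-19   # elementary charge, C
m_e  = 9.1093837015e-31  # electron mass, kg
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

# Parameters quoted in the figure captions of the text
lam0 = 1.0e-6            # central wavelength lambda_0, m
n0   = 1.0e20 * 1.0e6    # electron density: 10^20 cm^-3 -> m^-3
w0   = 0.9 * lam0        # waist radius at focus
x11  = 3.8317059702      # first zero of J_1 (the l = 1 case)

k0      = 2.0 * math.pi / lam0                 # central wavenumber
omega_p = math.sqrt(n0 * e**2 / (m_e * eps0))  # plasma frequency
k_p     = omega_p / c                          # plasma wavenumber
k_r     = x11 / w0                             # from w0 = x_{1,l}/k_r
alpha   = (k_r**2 + k_p**2) / (2.0 * k0)

print(f"omega_p   = {omega_p:.3e} rad/s")
print(f"k_p/k_0   = {k_p/k0:.3f}   (under-dense: k_p < k_0)")
print(f"alpha/k_0 = {alpha/k0:.3f}")
```

For these parameters, $\omega_p \approx 5.6\times10^{14}$ rad/s and $k_p/k_0 \approx 0.3$, so the under-dense condition is comfortably satisfied.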
\section{Propagation characteristics}
Some of the key propagation characteristics of a Bessel-Bessel bullet may be highlighted, starting with the phase of the vector potential. Since the Bessel functions in Eq. (\ref{A}) are real, the phase of $A$ is fully accounted for by $\varphi = \varphi_0+k_0\zeta+l\theta-\alpha\eta$. For example, the wavevector of the pulse, in circular cylindrical coordinates, may be obtained from \cite{mcdonald1}
\begin{equation}\label{k}
\bm{k} = \bm{\nabla}\varphi = \left(\frac{l}{r}\right) \hat{\bm{\theta}} +\left(k_0-\frac{\alpha}{2}\right)\hat{\bm{z}},
\end{equation}
where $\hat{\bm{\theta}}$ and $\hat{\bm{z}}$ are unit vectors in the azimuthal and axial directions, respectively. The message of Eq. (\ref{k}) is simple: the normal to a surface of constant phase is not parallel to the direction of propagation; rather, it makes an angle, $\beta$, with $\hat{\bm{z}}$ given by
$\tan\beta = (l/r)/(k_0-\alpha/2)$.
This is equivalent to saying that, apart from the $l = 0$ case, a wavefront is not planar and normal to the direction of propagation (as in the case of a plane wave) but is a helix of fixed radius $r$ and axis along $\hat{\bm{z}}$ \cite{mcdonald1,berry}.
\begin{figure}
\centering
\includegraphics[width=6.8cm]{fig1.pdf}
\begin{picture}(0,0)(0,0)
\put(-148,-10){$x/\lambda_0$}
\put(-52,-10){$x/\lambda_0$}
\put(-205,44){\begin{sideways}$y/\lambda_0$\end{sideways}}
\put(-205,138){\begin{sideways}$y/\lambda_0$\end{sideways}}
\put(-205,232){\begin{sideways}$y/\lambda_0$\end{sideways}}
\put(-205,325){\begin{sideways}$y/\lambda_0$\end{sideways}}
\put(-205,420){\begin{sideways}$y/\lambda_0$\end{sideways}}
\end{picture}
\vskip4mm
\caption{Intensity profiles (top to bottom) of $|E_r/E_0|^2$, $|E_\theta/E_0|^2$, $|E_z/E_0|^2$, $|cB_r/E_0|^2$, and $|cB_\theta/E_0|^2$ in the moving focal plane ($z = ct$, $t = 1$ fs) of a Bessel-Bessel bullet for which $L = 1.6 \lambda_0$, $w_0 = 0.9 \lambda_0$, $\lambda_0 = 1 ~\mu$m, in a plasma of electron density $n_0 = 10^{20}$ cm$^{-3}$. Left column: $l = 1$; right column: $l = 3$. Other parameters used are: $\varphi_0 = 0$, and $k_r = x_{1,l}/w_0$, where $x_{1,l}$ is the first zero of $J_l$. See Figs. \ref{fig3} and \ref{fig4} below for the relative intensities of the various rings displayed here.}
\label{fig1}
\end{figure}
The phase may also be used to derive a dispersion relation. First, an effective frequency is obtained from
\begin{equation}
\omega = -\frac{\partial\varphi}{\partial t} = c\left(k_0+\frac{\alpha}{2}\right).
\end{equation}
Likewise, an effective axial wavenumber follows from
\begin{equation}
k_z = \frac{\partial\varphi}{\partial z} = k_0-\frac{\alpha}{2},
\end{equation}
which agrees with Eq. (\ref{k}). Employing these results, one gets $(\omega/c)^2-k_z^2 = 2k_0\alpha$, and subsequently the dispersion relation
\begin{equation}
\omega = c\sqrt{k_z^2+k_r^2+k_p^2}.
\end{equation}
The group velocity will have an axial component \cite{esarey}
\begin{equation}
v_g = \frac{\partial\omega}{\partial k_z} = c\left[\frac{k_0-\alpha/2}{k_0+\alpha/2}\right],
\end{equation}
while the corresponding phase velocity will be given by
\begin{equation}
v_{ph} = \frac{\omega}{k_z} = c\left[\frac{k_0+\alpha/2}{k_0-\alpha/2}\right].
\end{equation}
Note, at this point, that $v_{ph}v_g = c^2$, and that $v_g < c$, whereas $v_{ph} > c$. These results should strengthen the case for OAM-carrying pulses as potential candidates for application in data transfer and digital communications \cite{dudley,naidoo,milione}.
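The relations just derived can be checked numerically for the figure parameters ($\lambda_0 = 1~\mu$m, $n_0 = 10^{20}$ cm$^{-3}$, $w_0 = 0.9\lambda_0$, $l = 1$). The short Python sketch below (illustrative only) verifies that $\omega$ and $k_z$ satisfy the dispersion relation and that $v_{ph}v_g = c^2$:

```python
import math

# Physical constants (SI)
c    = 2.99792458e8      # m/s
e    = 1.602176634e-19   # C
m_e  = 9.1093837015e-31  # kg
eps0 = 8.8541878128e-12  # F/m

# Figure parameters: lambda_0 = 1 um, n_0 = 1e20 cm^-3, w0 = 0.9*lambda_0, l = 1
lam0  = 1.0e-6
k0    = 2.0 * math.pi / lam0
k_p   = math.sqrt(1.0e26 * e**2 / (m_e * eps0)) / c   # plasma wavenumber
k_r   = 3.8317059702 / (0.9 * lam0)                   # x_{1,1}/w0
alpha = (k_r**2 + k_p**2) / (2.0 * k0)

omega = c * (k0 + alpha / 2.0)            # effective frequency
k_z   = k0 - alpha / 2.0                  # effective axial wavenumber
omega_disp = c * math.sqrt(k_z**2 + k_r**2 + k_p**2)  # dispersion relation

v_g  = c * (k0 - alpha / 2.0) / (k0 + alpha / 2.0)    # group velocity
v_ph = omega / k_z                                    # phase velocity

print(f"v_g/c = {v_g/c:.4f},  v_ph/c = {v_ph/c:.4f}")
print(f"v_g*v_ph/c^2 = {v_g * v_ph / c**2:.12f}")     # -> 1 (algebraically exact)
```

For these parameters $v_g \approx 0.76\,c$ and $v_{ph} \approx 1.32\,c$, and $\omega$ from the two routes agrees to machine precision, since $(k_0+\alpha/2)^2 - (k_0-\alpha/2)^2 = 2k_0\alpha = k_r^2+k_p^2$ identically.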
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{fig2.pdf}
\begin{picture}(0,0)(0,0)
\put(-210,-10){$l = 1$}
\put(-159,-10){$l = -1$}
\put(-99,-10){$l = 3$}
\put(-45,-10){$l = -3$}
\end{picture}
\vskip4mm
\caption{Density plots of the argument of $e^{i\tilde{\varphi}}$, for four values of the index $l$.}
\label{fig2}
\end{figure}
\section{Fields in the moving focal plane}
The time-dependence of the fields given by Eqs. (20)-(24) in \cite{salamin-sr} is not purely of the form $\exp(\pm i\tilde{\omega} t)$, for some single frequency $\tilde{\omega}$. Thus, calculation of time-averaged quantities from those fields ought to start from basic principles, i.e., using the real parts of the fields. This can always be done numerically. Analytically, however, a different approach is needed.
Recall that our equations represent a pulse that propagates with its shape intact, i.e., diffraction-free and dispersion-free. Recall also the interpretation of $\zeta$ as the coordinate of a point within the pulse relative to its centroid (point of maximum intensity), itself regarded as a {\it moving focal point}. With that in mind, all points on the transverse plane through that centroid always have $\zeta\sim0$, which makes the said transverse plane a {\it moving focal plane} \cite{salamin-oe,salamin-sr,esarey}. In the case of a beam with a stationary focus, the power, for example, is usually calculated by integrating the component of the Poynting vector in the direction of propagation over the entire focal plane, and subsequently averaging the result over time. By analogy, the power carried by our short pulse can be calculated using the fields in the moving focal plane, to be obtained from Eqs. (20)-(24) of \cite{salamin-sr} in the limit of $\zeta\to0$. This procedure, if acceptable for calculation of the time-averaged Poynting vector, may equally well be applied to calculate other time-averaged quantities, as will be done shortly below.
From this point onward, arguments of all Bessel functions ($k_rr$) will be suppressed, for convenience. By taking the appropriate limits of Eqs. (20)-(24) of Ref. \cite{salamin-sr}, the electric field components on the moving focal plane of the Bessel-Bessel bullet become
\begin{equation}\label{Er}
E_r = E_0\left(\frac{k_r }{k_0}\right) \left(\frac{\alpha-2k_0}{\alpha+2k_0} \right) \left[\frac{J_{l-1} - J_{l+1}}{2}\right] e^{i\tilde{\varphi}},
\end{equation}
\begin{equation}\label{Etheta}
E_{\theta} = E_0l \left(\frac{k_r }{k_0}\right) \left(\frac{\alpha-2k_0}{\alpha+2k_0}\right) \left[\frac{J_l}{k_rr}\right] e^{i(\tilde{\varphi}+\pi/2)},
\end{equation}
\begin{equation}
\label{Ez} E_z = E_0\left(\frac{4\alpha}{\alpha+2k_0} \right)\left[1-\frac{(\pi/L)^2}{3k_0(\alpha+2k_0)}\right] J_le^{i(\tilde{\varphi}+\pi/2)},
\end{equation}
where $\tilde{\varphi} = \varphi_0+l\theta-\alpha\eta$. The associated magnetic field components, on the other hand, take on the following limiting forms
\begin{equation}
\label{Br} B_r = \frac{E_0}{c} l \left(\frac{k_r }{k_0}\right) \left[\frac{J_l}{k_rr}\right] e^{i(\tilde{\varphi}+\pi/2)},
\end{equation}
\begin{equation}
\label{Btheta} B_{\theta} = \frac{E_0}{c}\left(\frac{k_r }{k_0}\right) \left[\frac{J_{l-1} - J_{l+1}}{2}\right] e^{i(\tilde{\varphi}+\pi)}.
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{fig3-eps-converted-to.pdf}
\begin{picture}(0,0)(0,0)
\put(-118,-13){$r/\lambda_0$}
\put(-48,153){$|E_r/E_0|^2$}
\put(-48,136){$|E_\theta/E_0|^2$}
\put(-48,120){$|E_z/E_0|^2$}
\put(-48,103){$|cB_r/E_0|^2$}
\put(-48,87){$|cB_\theta/E_0|^2$}
\put(-40,71){$Sum$}
\put(-130,130){$(a) ~l = 1$}
\end{picture}
\vskip4mm
\caption{Intensity profiles associated with the field components in the moving focal plane, for the case of $l = 1$, as functions of the radial distance from the focus, using parameters the same as in Fig. \ref{fig1}.}
\label{fig3}
\end{figure}
Note that the component $E_r$ leads both $E_{\theta}$ and $E_z$ by $\pi/2$. The same phase difference exists between $B_{\theta}$ and $B_r$, with $B_r$ leading. Equations (\ref{Er})-(\ref{Btheta}) may be written more compactly, using the expressions found above for $\omega$, $k_z$, and $v_g$.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{fig4-eps-converted-to.pdf}
\begin{picture}(0,0)(0,0)
\put(-118,-13){$r/\lambda_0$}
\put(-48,153){$|E_r/E_0|^2$}
\put(-48,136){$|E_\theta/E_0|^2$}
\put(-48,120){$|E_z/E_0|^2$}
\put(-48,103){$|cB_r/E_0|^2$}
\put(-48,87){$|cB_\theta/E_0|^2$}
\put(-40,71){$Sum$}
\put(-130,130){$(b) ~l = 3$}
\end{picture}
\vskip4mm
\caption{Same as Fig. \ref{fig3}, but for the case of $l = 3$, using parameters the same as in Fig. \ref{fig1}.}
\label{fig4}
\end{figure}
Density plots of the electric and magnetic field intensity profiles are shown in Fig. \ref{fig1}, for $l = 1$ and $l = 3$. The fact that the density plots in the {\it moving focal plane}, at $t = 1$ fs, are identical to their counterparts in the {\it focal plane} at $t = 0$ in Ref. \cite{salamin-sr} is consistent with the earlier conclusion that the bullet propagates undistorted in the under-dense plasma. Note that the profile of a negative-order field component is the same as that of the corresponding positive-order component, due to the relation $J_{-l} = (-1)^l J_l$. This, however, is not the case for the spiral phases shown in Fig. \ref{fig2}, for $l = \pm1$ and $l = \pm3$. Phases corresponding to $+l$ and $-l$ have opposite handedness.
Intensity profiles of all the components given by Eqs. (\ref{Er})-(\ref{Btheta}) together with their sums are shown in Figs. \ref{fig3} and \ref{fig4}, for the cases of $l = 1$ and $l = 3$, respectively, as functions of the radial distance from the moving focus. Relative brightness of the rings displayed in Fig. \ref{fig1} may be better seen by comparing the heights of the corresponding peaks in Figs. \ref{fig3} and \ref{fig4}. Note that ``Sum'' stands for the sum of all the other intensity profiles; it also represents the scaled energy density $u/u_0$, where $u$ is given by Eq. (\ref{u}) below, and $u_0 = \varepsilon_0E_0^2/2$.
\section{Time-averaged densities}
In this section, expressions will be derived for the time-averaged densities of several physical quantities pertaining to a Bessel-Bessel bullet, employing the fields (\ref{Er})-(\ref{Btheta}). Such expressions will ultimately be needed in applications for which the fields may be of utility \cite{mcdonald1,allen1,allen2}.
The field components (\ref{Er})-(\ref{Btheta}) have quasi-harmonic time-dependence. Since, in the moving focal plane, $\zeta = 0$ and $\eta = ct$, the dependence upon time is of the form $e^{-i\omega't}$, i.e., at an effective frequency $\omega' = \alpha c$. Thus, the time-average $\langle X\rangle$ of a quantity $X(t)$, expressible as the product of two quantities, $Y(t)$ and $Z(t)$, will be found from $\langle X \rangle = (YZ^*+Y^*Z)/2$ \cite{jackson}.
\subsection{Energy}
After some algebra, the time-averaged electromagnetic energy density in the ultrashort and tightly-focused pulse may be cast in the form
\begin{eqnarray}
\label{u} \langle u \rangle &=& \frac{1}{2}\varepsilon_0(|E|^2+|cB|^2),\nonumber\\
&=&\frac{u_0}{(\alpha+2k_0)^2}\left\{(\alpha^2+4k_0^2)\left(\frac{k_r}{k_0}\right)^2 \left[J_{l+1}^2+J_{l-1}^2\right]\right.\nonumber\\
& & \left. +16\alpha^2 \left[1-\frac{(\pi/L)^2}{3k_0(\alpha+2k_0)}\right]^2J_l^2\right\} .
\end{eqnarray}
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{fig5-eps-converted-to.pdf}
\begin{picture}(0,0)(0,0)
\put(-117,-13){$r/\lambda_0$}
\put(-49,121){$\langle cp_\theta/u_0\rangle$}
\put(-49,105){$\langle cp_z/u_0\rangle$}
\put(-130,120){$(a) ~l = 1$}
\end{picture}
\vskip4mm
\caption{Time-averaged linear momentum components at points in the moving focal plane, as a function of the radial distance from the focus, for the case of $l = 1$, and employing the same parameters as in Fig. \ref{fig1}.}
\label{fig5}
\end{figure}
In the elaborate algebra leading to Eq. (\ref{u}) the recurrence relation $(2l/\rho) J_l(\rho) = J_{l+1}(\rho)+J_{l-1}(\rho)$ has been used repeatedly. General agreement between this expression and its vacuum, long-pulse, counterpart \cite{mcdonald1} may be seen by letting $k_p\to0$ and $L\to\infty$.
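The recurrence relation quoted above is the standard identity for Bessel functions of integer order. As a quick numerical check, the sketch below (illustrative only) evaluates $J_l$ from its integral representation $J_l(x) = (1/\pi)\int_0^\pi \cos(lt - x\sin t)\,dt$, valid for integer $l$, using only the Python standard library:

```python
import math

def J(l, x, n=2000):
    """Integer-order Bessel function from the integral representation
    J_l(x) = (1/pi) * Integral_0^pi cos(l*t - x*sin(t)) dt (trapezoid rule)."""
    h = math.pi / n
    total = 0.0
    for k in range(n + 1):
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.cos(l * k * h - x * math.sin(k * h))
    return total * h / math.pi

# Check the recurrence (2l/rho) J_l(rho) = J_{l+1}(rho) + J_{l-1}(rho)
worst = 0.0
for l in (1, 2, 3):
    for rho in (0.7, 2.5, 6.3):
        lhs = (2.0 * l / rho) * J(l, rho)
        rhs = J(l + 1, rho) + J(l - 1, rho)
        worst = max(worst, abs(lhs - rhs))
print("largest deviation:", worst)   # negligibly small
```

The trapezoid rule converges very rapidly here because the integrand has vanishing odd derivatives at both endpoints, so the identity is reproduced essentially to machine precision.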
\subsection{Linear momentum}
The electromagnetic linear momentum density is given by $\bm{p} = \varepsilon_0(\bm{E}\times \bm{B})$. In cylindrical coordinates, with unit vectors $\hat{\bm{r}}$, $\hat{\bm{\theta}}$, and $\hat{\bm{z}}$, its time-averaged value is
\begin{eqnarray}\label{p}
\langle \bm{p} \rangle &=& \frac{\varepsilon_0}{2} (\bm{E}\times \bm{B}^*+\bm{E}^*\times \bm{B}),\nonumber\\
&=& \frac{u_0}{c} \left[\frac{\alpha-2k_0}{\alpha+2k_0}\right] \left\{\left(\frac{8l\alpha/k_0}{\alpha-2k_0}\right)\right.\nonumber\\
& &\left.\times
\left[1-\frac{(\pi/L)^2}{3k_0(\alpha+2k_0)}\right] \left[\frac{J_l^2}{r}\right] \hat{\bm{\theta}}\right.\nonumber\\
& &\left.
-\left(\frac{k_r}{k_0}\right)^2\left[J_{l+1}^2+J_{l-1}^2\right]\hat{\bm{z}}\right\}.
\end{eqnarray}
Due to the fact that the fields in (\ref{Er})-(\ref{Btheta}) are not purely transverse, the pulse carries forward, as well as azimuthal, linear momentum, according to Eq. (\ref{p}). Note, however, that only the $z-$component contributes to the integration of $\langle \bm{p} \rangle$ over a plane at fixed $z$. Combined with the absence of a radial component from the linear momentum density expression, this demonstrates that the pulse does not spread transversely.
Variations of $\langle p_\theta\rangle$ and $\langle p_z\rangle$ in the moving focal plane, with the radial distance from the moving focus, for the cases of $l = 1$ and $l = 3$, are shown in Figs. \ref{fig5} and \ref{fig6}, respectively. A peak in these plots marks the radius of the center of a ring of maximum linear momentum, in the corresponding density plot. Note that all density plots would be hollow, apart from that of $\langle p_z\rangle$ of the $l = 1$ case.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{fig6-eps-converted-to.pdf}
\begin{picture}(0,0)(0,0)
\put(-117,-13){$r/\lambda_0$}
\put(-49,121){$\langle cp_\theta/u_0\rangle$}
\put(-49,105){$\langle cp_z/u_0\rangle$}
\put(-130,120){$(b) ~l = 3$}
\end{picture}
\vskip4mm
\caption{Same as Fig. \ref{fig5}, but for the case of $l = 3$, and employing the same parameters as in Fig. \ref{fig1}.}
\label{fig6}
\end{figure}
\subsection{Radiation intensity and power}
The Poynting vector, representing the energy flux density, is $\bm{S} = \varepsilon_0 c^2\bm{E}\times\bm{B} = c^2 \bm{p}$. Hence, the time-averaged electromagnetic energy flux density of the pulse is $\langle \bm{S} \rangle = c^2 \langle \bm{p} \rangle$, which follows from Eq. (\ref{p}). The axial component, $\langle S_z \rangle $, gives the intensity of the pulse, in W/m$^2$, as a function of the radial distance $r$
\begin{equation}\label{intensity}
I(r) = cu_0 \left[\frac{k_0-\alpha/2}{k_0+\alpha/2}\right] \left(\frac{k_r}{k_0}\right)^2 [J_{l+1}^2+J_{l-1}^2].
\end{equation}
A density plot of $I$ in the moving focal plane would exhibit alternating bright and dark rings. In lieu of density plots, however, variations of the scaled intensities in the moving focal plane are shown in Fig. \ref{fig7}, as functions of the radial distance from focus, for the cases corresponding to $l = 0, 1, \cdots, 4$. Here, too, the rings are all hollow, except for the $l = 1$ case.
Finally, one gets an expression for the power carried by the pulse by integrating $I(r)$ over the moving focal plane
\begin{equation}\label{power}
P = 2\pi cu_0 \left[\frac{k_0-\alpha/2}{k_0+\alpha/2}\right] \left(\frac{k_r}{k_0}\right)^2\int_0^{\bar{r}}[J_{l+1}^2+J_{l-1}^2]\,r\,dr,
\end{equation}
in which $\bar{r} \gtrsim w_0$ is a measure of the transverse spatial extension of the pulse.
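The radial integral in Eq. (\ref{power}) is easily evaluated by quadrature. The sketch below (illustrative; the resolution parameters are arbitrary choices) computes $\int_0^{\bar r}[J_{l+1}^2+J_{l-1}^2]\,r\,dr$ for $l = 1$ and the figure parameters, in units of $\lambda_0$, confirming that the enclosed power grows monotonically with the cutoff radius $\bar r$:

```python
import math

def J(l, x, n=1000):
    """Integer-order Bessel J_l(x) from its standard integral representation."""
    h = math.pi / n
    total = 0.0
    for k in range(n + 1):
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.cos(l * k * h - x * math.sin(k * h))
    return total * h / math.pi

# Figure parameters, in units of lambda_0: l = 1, w0 = 0.9
l   = 1
w0  = 0.9
k_r = 3.8317059702 / w0            # x_{1,1}/w0

def radial_integral(rbar, m=200):
    """Integral_0^rbar [J_{l+1}^2 + J_{l-1}^2] r dr (trapezoid rule in r)."""
    h = rbar / m
    total = 0.0
    for k in range(m + 1):
        r = k * h
        w = 0.5 if k in (0, m) else 1.0
        total += w * (J(l + 1, k_r * r)**2 + J(l - 1, k_r * r)**2) * r
    return total * h

I1 = radial_integral(1.0 * w0)     # rbar = w0
I2 = radial_integral(1.5 * w0)     # rbar = 1.5 w0
print(I1, I2)                      # positive, and growing with rbar
```

The integrand is non-negative, so the scaled power increases with $\bar r$; in practice, $\bar r$ only slightly larger than $w_0$ already captures most of the central-ring contribution.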
Note at this point that, like the wavevector (\ref{k}), the lines of flow of the linear momentum and energy flux vectors are not parallel to the direction of propagation, but follow helices of fixed radii \cite{mcdonald1,berry}. With $\langle \bm{p}\rangle = \langle p_\theta\rangle\hat{\bm{\theta}}+\langle p_z\rangle \hat{\bm{z}}$, both vectors make the angle $\gamma$ with the direction of propagation, given by $\tan\gamma = \langle p_\theta\rangle/\langle p_z\rangle$.
\begin{figure}[t]
\centering
\includegraphics[width=7.7cm]{fig7-eps-converted-to.pdf}
\begin{picture}(0,0)(0,0)
\put(-113,-13){$r/\lambda_0$}
\put(-233,59){\begin{sideways}Scaled intensity\end{sideways}}
\put(-49,138){$l = 0$}
\put(-49,121){$l = 1$}
\put(-49,105){$l = 2$}
\put(-49,89){$l = 3$}
\put(-49,73){$l = 4$}
\end{picture}
\vskip4mm
\caption{Scaled intensity $I/cu_0$ in the focal plane as a function of the distance from focus, for the parameters of Fig. \ref{fig1}.}
\label{fig7}
\end{figure}
\subsection{Angular momentum}
In cylindrical coordinates, the position vector of a point within the pulse is $\bm{r} = r\hat{\bm{r}}+z\hat{\bm{z}}$. Thus, the angular momentum density is $\bm{l}= \bm{r}\times\bm{p} =\varepsilon_0 \bm{r}\times(\bm{E}\times\bm{B})$. Hence, the time-averaged angular momentum density may be cast in the following form
\begin{eqnarray}
\label{l} \langle \bm{l} \rangle &=& \frac{\varepsilon_0}{2} \bm{r}\times(\bm{E}\times\bm{B}^*+\bm{E}^*\times\bm{B}),\nonumber\\
&=& u_0 \left[\frac{\alpha-2k_0}{\alpha+2k_0}\right] \left\{\left(\frac{8\alpha}{\alpha-2k_0}\right) \left[1-\frac{(\pi/L)^2}{3k_0(\alpha+2k_0)}\right]
\right.\nonumber\\
& &\left.\times J_l^2 \left[-\left(\frac{l}{\omega_0}\right)\left(\frac{z}{r}\right)\hat{\bm{r}}+\left(\frac{l}{\omega_0}\right)\hat{\bm{z}}\right]\right.\nonumber\\
& &\left. +\left(\frac{k_r}{k_0}\right)^2\left[J_{l+1}^2+J_{l-1}^2\right] \left(\frac{r}{c}\right) \hat{\bm{\theta}}\right\},
\end{eqnarray}
where $\omega_0 = ck_0$. Writing $\langle \bm{l} \rangle = \langle l_r \rangle\hat{\bm{r}}+\langle l_\theta \rangle\hat{\bm{\theta}}+\langle l_z \rangle\hat{\bm{z}}$, only $\langle l_z \rangle$ contributes to the integration of the angular momentum density over a plane of fixed $z$ (perpendicular to the direction of propagation). Thus, $\langle l_z \rangle $ is the time-averaged density of orbital angular momentum about the direction of propagation \cite{mcdonald1,berry}. Note that $\langle l_z \rangle = 0 $ for $l=0$, as expected. It is also worth pointing out the striking resemblance of Eq. (\ref{l}), for the angular momentum of a Bessel-Bessel bullet, to Eq. (8) in Ref. \cite{allen1} for the angular momentum of the Laguerre-Gaussian laser modes \cite{allen2}.
As an example, variation with the radial distance from the moving focus, of the time-averaged components of the angular momentum density, are shown in Fig. \ref{fig8}, for the case of $l = 4$. It can be inferred from this figure that the corresponding density plots would all consist of hollow concentric rings.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{fig8-eps-converted-to.pdf}
\begin{picture}(0,0)(0,0)
\put(-113,-13){$r/\lambda_0$}
\put(-52,148){$\langle l_r\omega_0/u_0\rangle$}
\put(-52,132){$\langle l_\theta\omega_0/u_0\rangle$}
\put(-52,115){$\langle l_z\omega_0/u_0\rangle$}
\put(-132,140){$l = 4$}
\end{picture}
\vskip4mm
\caption{Components of time-averaged angular momentum density, scaled by $u_0/\omega_0$, in the focal plane as a function of the distance from focus, for parameters the same as in Fig. \ref{fig1}.}
\label{fig8}
\end{figure}
To understand the behaviour of the different components away from the focal point, one may use the asymptotic representation
\begin{equation}
J_l(k_rr) \sim \sqrt{\frac{2}{\pi k_rr}} \cos\left(k_rr-\frac{l\pi}{2}-\frac{\pi}{4}\right).
\end{equation}
The oscillations are obviously due to the $\cos$ function. It is also easy to see that, asymptotically, $l_r \sim-r^{-2}$, which explains why $\langle l_r\omega_0/u_0\rangle$ decays quickly to zero away from the focus, while $l_z \sim r^{-1}$ causes $\langle l_z\omega_0/u_0\rangle$ to decay to zero slowly by comparison. Finally, what appears to be an oscillation of $\langle l_\theta\omega_0/u_0\rangle$ between the same maximum and minimum values is due to the fact that $l_\theta$ is asymptotically independent of $r$. The asymptotic behaviour of the quantities shown in Figs. \ref{fig3}-\ref{fig7} may be understood on the basis of considerations similar to the ones just outlined for Fig. \ref{fig8}.
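The quality of this asymptotic representation can be checked directly. The sketch below (illustrative only) compares $J_4$, the order relevant to Fig. \ref{fig8}, with its large-argument form at a few radial distances:

```python
import math

def J(l, x, n=2000):
    """Integer-order Bessel J_l(x) via (1/pi) * Integral_0^pi cos(l*t - x*sin(t)) dt."""
    h = math.pi / n
    total = 0.0
    for k in range(n + 1):
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.cos(l * k * h - x * math.sin(k * h))
    return total * h / math.pi

def J_asym(l, x):
    """Large-argument form: sqrt(2/(pi*x)) * cos(x - l*pi/2 - pi/4)."""
    return math.sqrt(2.0 / (math.pi * x)) * math.cos(x - l * math.pi / 2.0
                                                     - math.pi / 4.0)

l = 4   # the case shown in the angular-momentum figure
for x in (20.0, 50.0, 100.0):
    print(f"x = {x:6.1f}:  J_4 = {J(l, x):+.5f},  asymptotic = {J_asym(l, x):+.5f}")
```

The deviation shrinks with growing argument (the first correction is of relative order $(4l^2-1)/8x$), consistent with the $r^{-1/2}$ envelope used in the asymptotic estimates above.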
\section{Summary and conclusions}
Fields of an ultra-short and tightly-focused laser Bessel pulse have recently been derived analytically, for the first time \cite{salamin-oe,salamin-sr}. The pulse, dubbed a Bessel-Bessel bullet, has been shown to propagate without dispersion or diffraction inside an under-dense plasma. For the derived fields to be of utility in potential applications, they have been supplemented here by further investigation of some of their key propagation characteristics, along with important time-averaged quantities pertaining to them. It has been shown that a Bessel-Bessel bullet, propagating in an under-dense plasma, carries electromagnetic linear and angular momenta. Analytic expressions have been derived for the time-averaged energy density, linear momentum density, energy flux density, and angular momentum density. It has further been shown that the bullet possesses orbital angular momentum about its direction of propagation.
\section*{Acknowledgements}
The author thanks K. Z. Hatsagortsyan for fruitful discussions and a critical reading of the manuscript.
\section{Introduction}
The Majorana modes in the vortices of topological superconductors can be used in quantum computation,
motivating continuing efforts in the search for topological superconductors. \cite{Sato2016} The leading candidate for an intrinsic topological superconductor is Sr$_2$RuO$_4$, which may host triplet $p+ip'$-wave superconductivity.\cite{Maeno2012} In this material, the electron filling is near the van Hove singularity. This singularity is classified as of type-I, in the sense that the van Hove momenta are time-reversal invariant. Because of the proximity to the van Hove singularity, there are strong ferromagnetic fluctuations at low energy. This may either lead to ferromagnetic order, or trigger triplet pairing. At first sight, by doping up to the van Hove level, the logarithmically diverging density of states at the Fermi level would enhance ferromagnetic correlations and possibly triplet pairing as well. However, a pair of opposite time-reversal-invariant momenta corresponds to the same lattice momentum, and hence equal-spin pairing at such momenta is forbidden by the Pauli exclusion principle. This is a destructive effect for triplet pairing. An interesting recent proposal~\cite{Yao2015} is to look for materials in which the van Hove momenta are not time-reversal invariant, so that the above destructive effect is avoided. This kind of van Hove singularity is classified as of type-II.
The single-sheet BC$_3$ is a suitable material with a type-II van Hove singularity (see Fig.\ref{model}). It is a graphene-like material with a layered hexagonal structure.\cite{Tang2013} According to first-principles calculations, \cite{Miyamoto1994} the undoped BC$_3$ is a semiconductor with a band gap of about 0.54 eV. The first and second conduction bands are the $\pi$ and $\pi^\ast$ bands, which cross at $K$ and $K'$ in the Brillouin zone but are isolated from the other bands. A macroscopic uniform sheet of single-crystal BC$_3$ was reported to be available by carbon substitution in a boron honeycomb. \cite{Tanaka2005} Superconductivity has been considered theoretically in terms of electron-phonon coupling,\cite{Cohen2011} but no trace of superconductivity has been found under hole doping.\cite{Ueno2006} However, under electron doping (into the $\pi$ band), strong ferromagnetic fluctuations were predicted and may be attributed to the proximity to the van Hove singularity. \cite{Chen2013} This makes electron-doped BC$_3$ a hopeful candidate for triplet pairing.
Previously, the possible superconductivity was analyzed by renormalization group (RG) in the limit of weak coupling right at the van Hove level, and by the random-phase-approximation-based fluctuation-exchange (FLEX) approximation slightly away from the van Hove level. \cite{Chen2015} Both schemes predict that $p$-wave pairing is the leading instability for weak interactions. Here we ask how it fares as the interaction becomes stronger, so that competing or intertwining orders have to be addressed. We apply the singular-mode functional renormalization group (SM-FRG) \cite{Wang2012,Xiang2012,Yang2013,Wang2013,Wang2016,Liu2017}, which can treat all channels on equal footing.
Our results at weak coupling are consistent with the RG and FLEX results of Ref.\cite{Chen2015}. However, for moderate Hubbard interactions, the ferromagnetic or ferromagnetic-like spin-density-wave order dominates in the immediate vicinity of the van Hove singularity, while $p$-wave pairing is present elsewhere near the singularity. The transition temperature becomes practically sizable (of the order of a Kelvin) only when the local Hubbard interaction is several times stronger than that estimated by first-principles calculations. We also find that a weak nearest-neighbor Coulomb repulsion can enhance the transition temperature significantly. Our results call for a refined estimation of the interaction parameters before one can decide whether BC$_3$ would be of practical interest as a $p$-wave superconductor.
The rest of the paper is organized as follows. We introduce the model and the SM-FRG in Sec.\ref{M&M}. The results are presented and discussed in Sec.\ref{R&D}. Finally, Sec.\ref{SMR} is a summary of this work.
\section{Model and Method} \label{M&M}
The structure of monolayer BC$_3$ is shown in Fig.\ref{model}(a). Every boron atom is connected to three carbon atoms, and every boron hexagon encloses a smaller carbon hexagon. The conduction bands are mainly derived from the $p_z$ orbital of boron, justifying a simplified model for borons on a honeycomb lattice, with the following Hamiltonian,
\begin{eqnarray}
H = &&-\sum_{\< ij \> \sigma}(c^\dag_{i\sigma}t_{ij}c_{j\sigma}+{\rm h.c.})-\mu\sum_{i\sigma}n_{i\sigma}\nonumber\\
&&+U\sum_i n_{i\uparrow}n_{i\downarrow}+V\sum_{\< ij \>\in {\rm NN}}n_i n_j. \label{H}
\end{eqnarray}
where $c^\dag_{i\sigma}$ is the creation operator for an electron with spin $\sigma$ at site $i$, $\<ij\>$ denotes first-, second- and third-neighbor bonds, and $\mu$ is the chemical potential. According to a fit to the first-principles band structure, $t_{1} = 0.62 $ eV, $t_{2} = 0$ and $t_{3}=-0.38 $ eV for the first, second and third neighbors. The onsite Hubbard interaction was estimated as $U\sim 0.7 $ eV, \cite{Chen2015} but for systematics we leave both $U$ and the nearest-neighbor (NN) repulsion $V$ as parameters. Finally, spin-orbit coupling (SOC) may arise from the missing mirror symmetry about the second-neighbor boron-boron bonds, but this SOC is expected to be weak, given the light elements and the relatively long bonds. For this reason, we ignore SOC henceforth. The band dispersion along high-symmetry cuts is plotted in Fig.\ref{model}(b), together with the density of states (DOS) in (c). It is seen that the DOS diverges logarithmically at the van Hove energy level. (The singularity is cut off by the smearing factor used in the numerical calculation of the DOS.) Proximity to such a singularity makes the system susceptible to various instabilities under electron-electron interactions.
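A minimal numerical sketch of the kinetic part of Eq. (\ref{H}) is given below, assuming a standard honeycomb geometry with unit bond length (the coordinates and the placement of the Brillouin-zone corner are illustrative conventions, not taken from the text). With $t_2 = 0$, both the first- and third-neighbor structure factors vanish at $K$, so the two bands cross there:

```python
import cmath
import math

# Hopping parameters fitted in the text (eV); t2 = 0 is omitted.
t1, t3 = 0.62, -0.38
mu = 0.0   # chemical potential, set to zero for this illustration

# Nearest-neighbor bond vectors of a honeycomb lattice with unit bond length
# (an illustrative geometry); the third-neighbor vectors are -2*delta_i.
deltas = [(0.0, 1.0),
          (math.sqrt(3.0) / 2.0, -0.5),
          (-math.sqrt(3.0) / 2.0, -0.5)]

def f(kx, ky):
    """Off-diagonal (A-B sublattice) element of the 2x2 Bloch Hamiltonian."""
    s = 0.0 + 0.0j
    for dx, dy in deltas:
        s += -t1 * cmath.exp(1j * (kx * dx + ky * dy))           # 1st neighbors
        s += -t3 * cmath.exp(-2j * (kx * dx + ky * dy))          # 3rd neighbors
    return s

def bands(kx, ky):
    """Eigenvalues of H(k) = [[-mu, f(k)], [conj(f(k)), -mu]]."""
    a = abs(f(kx, ky))
    return (-mu - a, -mu + a)

K = (4.0 * math.pi / (3.0 * math.sqrt(3.0)), 0.0)  # Brillouin-zone corner
print("|f(K)| =", abs(f(*K)))        # ~0: the two bands cross at K
print("bands at Gamma:", bands(0.0, 0.0))
```

At $K$, the three first-neighbor phases (and likewise the three third-neighbor phases) sum to $1+e^{i2\pi/3}+e^{-i2\pi/3}=0$, giving the band crossing; at $\Gamma$ the splitting is $2|3(t_1+t_3)|$.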
\begin{figure}
\includegraphics[width=8.5cm,trim={7.5cm 0.5cm 7.5cm 1cm},clip]{model.png}
\caption{ (Color online) (a) Structure of the BC$_3$ monolayer. Here $\v a_1$ and $\v a_2$ are basis vectors. Purple (yellow) balls represent carbon (boron) atoms. (b) Band energy $\epsilon$ as a function of momentum $\v k$ along high-symmetry cuts in the Brillouin zone. (c) Density of states.}\label{model}
\end{figure}
In order to treat all possible and competing electronic orders on equal footing, we apply the singular-mode functional renormalization group (SM-FRG). Here we outline the necessary ingredients and notations, leaving technical details to the Appendix. In a nutshell, the idea is to obtain a momentum-resolved running pseudo-potential $\Gamma_{1234}$, as in $(1/2)c_{1\sigma}^\dagger c_{2\sigma'}^\dagger \Gamma_{1234} c_{3\sigma'} c_{4\sigma}$, acting on low-energy fermionic degrees of freedom up to a cutoff energy scale $\Lambda$ (for the Matsubara frequency in our case). Henceforth, a numerical index labels momentum/position/sublattice (but will be suppressed wherever applicable, for brevity). Momentum conservation/translation symmetry is also left implicit. Starting from $\Gamma$ at $\Lambda\rightarrow \infty$ (specified by the bare interactions $U$ and $V$), FRG generates all one-particle-irreducible corrections to $\Gamma$, to arbitrary orders in the bare interactions, as $\Lambda$ decreases. Notice that $\Gamma$ may evolve to be nonlocal and even diverging. To see the instability (diverging) channel, we extract at $\Lambda$, concurrently, the effective interactions in the general charge-density wave (CDW), spin-density wave (SDW) and superconductivity (SC) channels,
\begin{eqnarray}
&& V^{\rm CDW}_{(14)(32)} = 2 \Gamma_{1234} - \Gamma_{1243},\nonumber\\
&& V^{\rm SDW}_{(13)(42)} = - V^{\rm SC}_{(12)(43)} = - \Gamma_{1234}. \label{eq:VX}
\end{eqnarray}
The left-hand sides are understood as matrices with composite indices, describing scattering of fermion bilinears. Since they all originate from $\Gamma$, $V^{\rm CDW/SDW/SC}$ have overlaps but are naturally treated on equal footing. We remark that the FRG flow would be equivalent to ladder or random-phase approximations in the respective channels if the overlaps were ignored in the FRG flow equation. The divergence of the leading attractive (i.e., negative) eigenvalue $S$ of $V^{\rm CDW/SDW/SC}$ decides the instability channel, the associated eigenfunction (which is a matrix in the sublattice basis) and collective momentum describe the order parameter, and the divergence energy scale $\Lambda_c$ is representative of the transition temperature $T_c$. More technical details can be found in Refs.\cite{Wang2012, Xiang2012, Yang2013, Wang2013, Wang2016, Liu2017} and also in the Appendix for self-completeness.
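As a concrete illustration of how the channel matrices with composite indices are obtained from $\Gamma$ and diagonalized for the leading attractive eigenvalue, the following toy sketch works on a small discrete set of labels. The sizes, random entries, and the explicit symmetrization are illustrative assumptions, not part of the actual calculation.

```python
import numpy as np

# Toy sketch: from a vertex tensor Gamma[1,2,3,4] on N discrete labels,
# form the channel matrices with composite indices (Eq. VX) and extract
# the leading attractive (most negative) eigenvalue S and its eigenfunction.
rng = np.random.default_rng(0)
N = 6
Gamma = rng.normal(size=(N, N, N, N))

# V^CDW_(14)(32) = 2 Gamma_1234 - Gamma_1243
A = 2 * Gamma - Gamma.transpose(0, 1, 3, 2)
V_cdw = A.transpose(0, 3, 2, 1).reshape(N * N, N * N)
# V^SDW_(13)(42) = -Gamma_1234
V_sdw = (-Gamma).transpose(0, 2, 3, 1).reshape(N * N, N * N)
# V^SC_(12)(43) = Gamma_1234
V_sc = Gamma.transpose(0, 1, 3, 2).reshape(N * N, N * N)

def leading_mode(V):
    # A physical vertex yields Hermitian channel matrices; this random toy
    # vertex does not, so we symmetrize explicitly before diagonalizing.
    w, v = np.linalg.eigh(0.5 * (V + V.T))
    return w[0], v[:, 0]   # most negative eigenvalue and its eigenfunction

S_sdw, f_sdw = leading_mode(V_sdw)
```

In the real calculation $S$ is monitored during the flow; its divergence at $\Lambda_c$ in a given channel signals the instability, and the eigenfunction gives the order-parameter structure.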
\section{Results and Discussions} \label{R&D}
First, we consider the van Hove filling level $n = 0.49$. This case is of special interest because of the type-II singularity. In the weak-coupling limit, it has been shown that $p$-wave SC wins over ferromagnetic SDW.\cite{Chen2015} We ask how it fares at finite interaction. Fig.\ref{n49} shows the FRG flow of the leading eigenvalues in the SC (blue) and SDW (black) channels for $U=2$ eV. (The CDW channel is much weaker than the SC and SDW channels and will be ignored henceforth.) As the energy scale $\Lambda$ decreases, the SDW channel always dominates over the SC channel. The arrows are snapshots of the collective momentum $\v Q$ (divided by $\pi$) associated with the leading SDW eigenvalue. At low energy scales $\v Q\sim 0$, and the SDW channel diverges at $\Lambda_c\sim 4\times 10^{-3}$ eV. We checked the leading eigenfunction of $V^{\rm SDW}$ and find that it corresponds to site-local spin density. This means the system enters the ferromagnetic SDW state below a transition temperature $T_c\sim \Lambda_c$. The SC channel is triggered attractive as the SDW channel is enhanced in the intermediate energy window, and grows thereafter. The inset shows the pairing gap function (the leading eigenfunction of $V^{\rm SC}$) on the Fermi surface. The crossing points on the Fermi surface are the van Hove momenta. They are not time-reversal invariant, and hence of type-II according to Refs.\cite{Yao2015,Chen2015}. The gap function clearly has $p$-wave symmetry, and it does not vanish at the type-II van Hove momenta. By the point-group symmetry, and also in our numerics, there is another degenerate $p$-wave eigenfunction (not shown). Therefore, the FRG flow reveals the well-known fact that $p$-wave triplet pairing can be triggered by ferromagnetic and ferromagnetic-like spin fluctuations, and implies that if the SDW channel remains strong but does not diverge, $p$-wave SC may become the leading instability, as we show below.
\begin{figure}
\includegraphics[width=8.5cm,trim={0.0cm 0.0cm 0.0cm 0.0cm},clip]{n49.png}
\caption{(Color online) Flow of (the inverse of) leading eigenvalues in the SC (blue) and SDW (black) channels for $n=0.49$ and $U=2$ eV.}\label{n49}
\end{figure}
Next we consider a filling $n = 0.46$ slightly below the van Hove level. For $U=2$ eV, the FRG flow is shown in Fig.\ref{n46}. The flow in the SDW channel is similar to that for $n=0.49$ at higher energy scales, where quasi-particle excitations are not sensitive to the fine features of the Fermi surface, but it saturates at low energy scales, with a nonzero but small $\v Q$, where SC diverges instead. The small $\v Q$ can be attributed to particle-hole scattering between neighboring parts of the Fermi pockets. The inset shows one of the two degenerate gap functions on the Fermi surface, which is still of $p$-wave symmetry. As in the previous case, the SC channel is triggered attractive as ferromagnetic-like (or small-$\v Q$) SDW is enhanced at intermediate energy scales. The reason that the SDW channel saturates is that the deviation from the van Hove singularity, although slight, regularizes the diverging density of states, so that the phase space for low-energy particle-hole excitations diminishes. In contrast, the SC channel, once triggered attractive, can grow with decreasing energy scale by the Cooper mechanism, even in the absence of any singularity in the density of states. This leaves room for SC to overwhelm the SDW channel. From the leading eigenfunction of $V^{\rm SC}$ we find that in the present case the $p$-wave pairing on NN bonds dominates, while that on longer bonds is smaller by more than one order of magnitude. On the other hand, the two degenerate $p$-wave pairings will recombine into a fully gapped $p\pm ip'$-wave pairing in the ordered state to gain energy.
\begin{figure}
\includegraphics[width=8.5cm,trim={0cm 0cm 0.0cm 0.0cm},clip]{n46.png}
\caption{(Color online) Flow of (the inverse of) leading eigenvalues in the SC (blue) and SDW (black) channels for $n = 0.46$ and $U=2$ eV. The inset shows one of the degenerate $p$-wave gap functions (color scale) on the Fermi surface.} \label{n46}
\end{figure}
For comparison, we also consider a filling $n=0.57$ slightly above the van Hove level. The FRG flow is shown in Fig.\ref{n57}. The overall feature is similar to that in Fig.\ref{n46}, except for the Fermi surface topology and the divergence energy scale. The difference in Fermi surface topology leads to a slightly larger inter-pocket scattering momentum, and hence a slightly larger momentum of the leading SDW eigenmode at the divergence scale. In Ref.\cite{Xiang2012} it is observed that proximity to van Hove singularities as well as small-momentum inter-pocket scattering are favorable for triggering triplet pairing. Interestingly, both mechanisms are realized near the type-II van Hove singularity under concern.
\begin{figure}
\includegraphics[width=8.5cm,trim={0cm 0cm 0.0cm 0cm},clip]{n57.png}
\caption{(Color online) Flow of (the inverse of) leading eigenvalues in the SC (blue) and SDW (black) channels for $n = 0.57$ and $U=2$ eV. The inset shows one of the degenerate $p$-wave gap functions (color scale) on the Fermi surface.} \label{n57}
\end{figure}
By systematic calculations for various values of $U$ and $n$, we obtain a phase diagram in the $(U, n)$ parameter space, as shown in Fig.\ref{phasediag}. We see that for weak interaction $U<1$ eV, the system is in the $p$-wave SC state, in agreement with the weak-coupling analysis in Ref.\cite{Chen2015}, but the divergence scale (or transition temperature) is very low. For larger interactions, SDW order is favorable in the immediate vicinity of the van Hove level, with strictly ferromagnetic order right at the van Hove level. Elsewhere near the van Hove level, $p$-wave superconductivity is the leading instability, with a sizable divergence scale. Notice that $\Lambda_c$ depends on $U$ very sensitively: it changes by more than five orders of magnitude as $U$ is increased from $0.5$ eV to $3.5$ eV (see the color scale). The interaction estimated in first-principles calculations is roughly $U\sim 0.7$ eV. This would correspond to a divergence scale $\Lambda_c < 10^{-6}$ eV, questioning the practical interest of the underlying $p$-wave SC. Therefore a more refined estimation of $U$ is needed before one can decide whether the SC state is of practical interest. On the other hand, we have not considered the Coulomb repulsion $V$ on NN bonds so far. A weak $V$ enhances the charge-charge interaction at zero momentum. Since this overlaps in part with the SDW interaction, it could enhance the SDW fluctuations, and SC as a consequence. This effect is shown in Fig.\ref{Tc} for $U=2$ eV. The transition temperature is enhanced significantly from $V=0$ eV (solid line) to $V=0.2$ eV (dashed line), for both SC and SDW orders at the corresponding filling levels. We remark that in the SC phase the pairing amplitude on second-neighbor bonds increases (relative to that on the NN bond) with increasing repulsion $V$ on NN bonds. Since we are material oriented, we do not consider large values of $V$, which would drive the system into the CDW state.
\begin{figure}
\includegraphics[width=8.5cm,trim={0cm 0cm 0cm 0cm},clip]{phasediag.png}
\caption{(Color online) Phase diagram in the $(n, U)$ parameter space. The white solid line is the phase boundary between SC and SDW phases. The dashed line indicates ferromagnetic SDW right at the van Hove filling. The color scale indicates $\log_{10}\Lambda_c$.} \label{phasediag}
\end{figure}
\begin{figure}
\includegraphics[width=8.5cm,trim={0cm 0cm 0cm 0cm},clip]{Tc.png}
\caption{(Color online) Divergence scale (or transition temperature) $\Lambda_c$ versus $n$ for $V=0$ eV (solid line) and $V=0.2$ eV (dashed line), both with $U=2$ eV. Open squares (circles) indicate $p$-wave SC (SDW) order that would emerge below $\Lambda_c$. \label{Tc}}
\end{figure}
\section{Summary} \label{SMR}
We investigated electron instabilities in single-sheet BC$_3$ near the van Hove filling. In the weak-coupling limit $p$-wave SC is favorable but only below a tiny energy scale. For a moderate Hubbard interaction, ferromagnetic-like SDW order dominates in the immediate vicinity of the singularity. Elsewhere near the singularity, $p$-wave superconductivity prevails. A small nearest-neighbor Coulomb repulsion can further enhance the superconductivity. The wide range of the $p$-wave SC regime in the phase diagram is a manifestation of the type-II van Hove singularity. However, the transition temperature becomes practically sizable only if the bare interaction is moderately strong.
\acknowledgments{The project was supported by National Key Research and Development Program of China (under grant No. 2016YFA0300401) and NSFC (under grant Nos. 11574134 and 11604168).}
\section{Appendix}
For self-completeness, here we present the necessary technical details of the SM-FRG as applied in the main text. For brevity we first suppress sublattice and/or orbital labels, to which we will come shortly. Consider the interaction Hamiltonian $H_I=(1/2)c_{1\sigma}^\dagger c_{2\sigma'}^\dagger \Gamma_{1234} c_{3\sigma'} c_{4\sigma}$. Here the numerical index labels single-particle quantum numbers, such as momentum/position, and we leave implicit the momentum conservation/translation symmetry. The spin SU(2) symmetry is guaranteed in the above convention for $H_I$. The idea of FRG is to get the one-particle-irreducible interaction vertex $\Gamma$ for fermions whose energy/frequency is above a scale $\Lambda$. (Thus $\Gamma$ is $\Lambda$-dependent.)
Equivalently, such an effective interaction can be taken as a generalized pseudo-potential for fermions whose energy/frequency is below $\Lambda$. It is useful to define matrix aliases of the rank-4 `tensor' $\Gamma$ via
\begin{eqnarray} \Gamma_{1234}=P_{(12)(43)}=C_{(13)(42)}=D_{(14)(32)}.\end{eqnarray}
Here $P$, $C$ and $D$ are matrices of combined indices, reflecting scattering amplitudes for fermion bilinears in the pairing, crossing and direct channels. Starting from the bare interactions at $\Lambda=\infty$, the interaction vertex flows toward decreasing scale $\Lambda$ as,
\begin{eqnarray} \frac{\partial \Gamma_{1234}}{\partial\Lambda} = &&[D\chi^{ph}(D-C)+(D-C)\chi^{ph}D]_{(14)(32)}\nonumber\\
&&+ [P\chi^{pp}P]_{(12)(43)} - [C\chi^{ph}C]_{(13)(42)},
\label{Eq:dV} \end{eqnarray}
where matrix convolutions are understood within the square brackets, and
\begin{eqnarray} && \chi^{pp}_{(ab)(cd)} = \frac{1}{2\pi}[G_{ac}(\Lambda)G_{bd}(-\Lambda)+(\Lambda\rightarrow -\Lambda)],\nonumber\\
&& \chi^{ph}_{(ab)(cd)} = -\frac{1}{2\pi}[G_{ac}(\Lambda)G_{db}(\Lambda)+(\Lambda\rightarrow -\Lambda)],
\label{Eq:def} \end{eqnarray}
where $G$ is the normal state Green's function, and we used a hard-cutoff in the continuous Matsubara frequency. \\
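To make the alias bookkeeping in \Eq{Eq:dV} concrete, the following is a hypothetical numerical sketch of a single Euler step of the flow, with all single-particle labels collapsed to a small discrete set and the loop kernels $\chi^{pp}$, $\chi^{ph}$ supplied as placeholder matrices. The sizes, random entries, and step size are illustrative assumptions; this is a structural sketch, not a working FRG solver.

```python
import numpy as np

# Structural sketch of one Euler step of the SM-FRG flow equation.
N = 4
rng = np.random.default_rng(1)
Gamma = rng.normal(size=(N, N, N, N))

# Aliases Gamma_1234 = P_(12)(43) = C_(13)(42) = D_(14)(32) as permutations
P_perm, C_perm, D_perm = (0, 1, 3, 2), (0, 2, 3, 1), (0, 3, 2, 1)
C_inv = (0, 3, 1, 2)  # inverse of C_perm; P_perm and D_perm are self-inverse

def alias(G, perm):
    # rank-4 tensor -> matrix with composite bilinear indices
    return G.transpose(perm).reshape(N * N, N * N)

def unalias(M, perm_inv):
    # matrix in a given alias -> rank-4 tensor contribution to Gamma
    return M.reshape(N, N, N, N).transpose(perm_inv)

P, C, D = alias(Gamma, P_perm), alias(Gamma, C_perm), alias(Gamma, D_perm)

# Placeholder loop kernels at the running scale Lambda
chi_pp = rng.normal(size=(N * N, N * N))
chi_ph = rng.normal(size=(N * N, N * N))

# Right-hand side of the flow equation, assembled alias by alias
dGamma = (unalias(D @ chi_ph @ (D - C) + (D - C) @ chi_ph @ D, D_perm)
          + unalias(P @ chi_pp @ P, P_perm)
          - unalias(C @ chi_ph @ C, C_inv))

dLambda = -1e-2                 # flow toward decreasing cutoff
Gamma = Gamma + dGamma * dLambda
```

The matrix convolutions within square brackets in \Eq{Eq:dV} become ordinary matrix products once each term is written in its own alias; the permutations convert the result back to the common tensor $\Gamma$ before the step is taken.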
From $\Gamma$ (or its aliases $P$, $C$ and $D$), we extract at a given scale $\Lambda$ the effective interactions in the general SC/SDW/CDW channels
\begin{eqnarray} (V^{\rm SC},V^{\rm SDW},V^{\rm CDW}) = (P, -C, 2D-C). \label{eq:channel}\end{eqnarray}
They are matrices describing scattering of fermion bilinears in the respective channels. Since they all originate from $\Gamma$, they overlap but are naturally treated on equal footing. The effective interactions can be decomposed into eigenmodes. For example, in the SC channel (with a zero collective momentum),
\begin{eqnarray}
[V^{\rm SC}]_{(\v k,-\v k)(\v k',-\v k')} = \sum_m f_m(\v k)S_m f_m^{*}(\v k'),
\end{eqnarray}
where $S_m$ is the eigenvalue, and $f_m(\v k)$ is the eigenfunction, which can be expanded in terms of lattice harmonics, such as $e^{i\v k\cdot \v r}$ where $\v r$ is the distance between the fermions within a fermion bilinear. We look for the most negative eigenvalue, say $S=\min[S_m]$, with an associated eigenfunction $f(\v k)$. If $S$ diverges at a scale $\Lambda_c$, it signals the instability of the normal state toward a SC state, with a pairing function described by $f(\v k)$. Similar analysis can be performed in the CDW/SDW channels, with the only exception that in general the collective momentum $\v q$ in such channels is nonzero. Since $\v q$ is a good quantum number in the respective channels, one performs the mode decomposition at each $\v q$. There are multiple modes at each $\v q$, but we are interested in the globally leading mode among all $\v q$. In this way one determines both the ordering vector $\v Q$ and the structure of the order parameter by the leading eigenfunction. Finally, the instability channel is determined by comparing the leading eigenvalues in the CDW/SDW/SC channels.\\
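The mode decomposition at each collective momentum, and the selection of the globally leading mode, can be sketched as follows. The momentum grid, the number of retained form factors, and the random placeholder for the channel matrix are all illustrative assumptions.

```python
import numpy as np

# Sketch of the mode analysis: diagonalize the channel matrix at each
# collective momentum q and keep the globally most negative eigenvalue;
# its q gives the ordering vector Q and its eigenvector the order-parameter
# structure in the truncated form-factor basis.
rng = np.random.default_rng(2)
n_form = 5                      # number of truncated lattice harmonics
qs = [(qx, qy) for qx in np.linspace(-np.pi, np.pi, 8)
               for qy in np.linspace(-np.pi, np.pi, 8)]

def channel_matrix(q):
    # placeholder for, e.g., V^SDW(q) in the form-factor basis
    A = rng.normal(size=(n_form, n_form))
    return 0.5 * (A + A.T)      # the physical matrix is Hermitian

S_best, Q, f_Q = 0.0, None, None
for q in qs:
    w, v = np.linalg.eigh(channel_matrix(q))
    if w[0] < S_best:           # most negative = most attractive
        S_best, Q, f_Q = w[0], q, v[:, 0]
# If |S_best| diverges as Lambda decreases, (Q, f_Q) characterize the order.
```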
For systems with multiple sublattices/orbitals, such as the case of two sublattices in the main text, we take them as general orbits, and the only modification to the above formalism is to take orbital-bilinears into account in the form factors. In this case each form factor is a matrix in the generalized orbital basis.\\
In principle, the above procedure is able to capture the most general candidate order parameters. In practice, however, it is impossible to keep all elements of the `tensor' $\Gamma$ in the computation. Fortunately, order parameters are always local or short-ranged, notwithstanding possible long-range correlations between them. For example, the $s$-wave pairing in BCS theory is local, since the gap function is a constant in momentum space. The order parameter in usual Landau theories is assumed to be local. The $d$-wave pairing is nonlocal but short-ranged. The usual CDW/SDW orders are orderings of site-local charges/spins. The valence-bond order lives on bonds but is short-ranged. In fact, if the order parameter is very nonlocal, it is not likely to be stable. The idea is that if it is not an instability at the tree level, it has to be induced by the overlapping channel; but if the induced order parameter is very nonlocal, it must be true that the donor channel has already developed long-range fluctuations and is ready to order first. These considerations suggest that most elements of the `tensor' $\Gamma$ are irrelevant in the RG sense and can be truncated. \Eq{Eq:dV} suggests how this can be done. For fermions, all 4-point interactions are marginal in the RG sense, and the only way a marginal operator could become relevant is through coherent and repeated scattering in a particular channel, in the form of the convolutions in \Eq{Eq:dV}. Therefore, it is sufficient to truncate the internal spatial range within a fermion bilinear, e.g., between 1 and 2, and between 3 and 4, in $P_{(12)(43)}$. This means that the form factors are expanded in a truncated set of lattice harmonics. The setback distance between the two groups is however unlimited (thus the thermodynamic limit is not spoiled). Similar considerations apply to $C$ and $D$. Eventually the same type of truncation can be applied to the effective interactions $V^{\rm CDW/SDW/SC}$. 
Such truncations keep the potentially singular contributions in all channels and their overlaps, underlying the key idea of the SM-FRG.~\cite{Wang2012,Xiang2012,Wang2014} The merits of SM-FRG are: 1) it guarantees hermiticity of the truncated interactions; 2) it is asymptotically exact as the truncation range is enlarged; 3) it respects all underlying symmetries, and in particular it respects momentum conservation exactly; 4) in systems with multiple orbitals or a complex unit cell, it is important to keep the momentum dependence of the Bloch states, both radial and tangential to the Fermi surface, and this is guaranteed in SM-FRG since it works with Green's functions in the orbital basis. These features are important but may be difficult to implement in the more conventional patch-FRG applied in the literature.~\cite{Honerkamp2001, Metzner2012, Platt2013}\\
To check the convergence of the real-space truncation for fermion bilinears discussed above, we define $L_c$ as the maximal distance between the two fermions within a fermion bilinear. We take a sufficiently large $L_c$ such that the results are not sensitive to a further increase of $L_c$. In the main text, we used $L_c$ up to the third-neighbor bond.
\section{Introduction}
\label{sec:Introduction}
With the rapid development of camera and display equipment, image resolution is getting higher and higher, and 4K and 6K resolutions become common. This opens new opportunities in portrait photo post-processing, industrial defect detection, medical diagnosis, \etc. However, ultra high-resolution images also bring challenges to classical image segmentation methods. First, the significant number of input pixels is computationally expensive and GPU memory-hungry. Second, most existing methods up-sample the final prediction 4 to 8 times through interpolation~\cite{yang2018denseaspp, yuan2018ocnet, zhao2017pyramid,chen2018encoder,zhao2018psanet}, without building fine-grained details on output masks.
Previous segmentation refinement methods include those of~\cite{huynh2021progressive, yuan2020segfix, lin2017refinenet, kirillov2020pointrend}. They still target images with 1K$\sim$2K resolutions. Work of~\cite{cheng2020cascadepsp, yang2020meticulous} handles ultra high-resolution refinement based on low-resolution masks generated by classic segmentation algorithms. It utilizes a cascade scheme in the decoder to up-sample intermediate refinement results over several resolution stages until reaching the target resolution. These methods are still time-consuming because they work in a discrete style on predefined resolution stages of the decoder. We instead consider continuity to make decoding more efficient and more friendly to learning the up-sampling. We propose the Continuous Refinement Model~(CRM) to exploit continuity.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{./images/teaser.pdf}
\caption{Coarse mask refinement results. (a) Coarse mask from PSP~\cite{zhao2017pyramid}, (b) refined mask of state-of-the-art \cite{cheng2020cascadepsp}, and (c) refined mask of our proposed CRM. The image is from BIG~(2K$\sim$6K res).
}
\label{fig:teaser_image}
\end{figure}
The coarse mask comes from low-resolution segmentation. Expanding it to the target resolution is similar to a classical super-resolution~(SR) task. Beyond classical SR methods, constructing a continuous local representation has been proposed~\cite{chen2021learning}. We note that utilizing an implicit function~\cite{mildenhall2020nerf} to handle high-resolution segmentation refinement is not trivial. First, the resolution of the training images in our task is around 500, while training images for SR are of 2K resolution. The SR training strategy of down-sampling the input would make our input mask tiny and meaningless. Second, more multi-level semantic features are needed compared with the super-resolution configuration. Third, there exists a resolution gap between training on low resolution and testing on ultra high resolution. Therefore, this task needs specific designs.
To realize continuity in ultra high-resolution segmentation refinement, we first propose the Continuous Alignment Module~(CAM) to align the feature and the refinement target continuously (different from utilizing the cascade scheme in the decoder). In CAM, the coordinates of the feature and the refinement target are transferred into a continuous space. We then align position and feature based on the continuous coordinates. An implicit function combines position information and the aligned latent image feature to predict the segmentation label for the queried pixel. Here, the pixel-wise implicit function models the relationship between continuous position and prediction, and realizes image-aware refinement through the latent feature. Overall, this design is simpler and lighter than the cascade-based decoder, yet generates a more precise refinement mask, as shown in \cref{fig:teaser_image}.
In addition, there is a resolution gap between low-resolution training images and ultra high-resolution testing ones. In cascade-decoder-based methods~\cite{cheng2020cascadepsp, yang2020meticulous}, convolutions always cover a fixed-size neighboring patch at the training resolution, which reduces generalization to other testing resolutions. In contrast, the implicit function in CRM works on pixel-wise extracted features without this bias. Also, in our multi-resolution inference strategy, low-resolution input is inferred first; we then increase the input resolution to generate more details in the refined mask. Working with the multi-resolution inference strategy, CRM realizes stronger generalization ability than previous methods~\cite{cheng2020cascadepsp}. In experiments, our CRM achieves better performance and infers more than twice as fast as the previous state-of-the-art method in the ultra high-resolution segmentation refinement task.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\linewidth]{./images/compare22.pdf}
\hfill
\caption{Structure difference between (a) the cascade-based decoder in the model of~\cite{cheng2020cascadepsp} and (b) our CRM. We can see that CRM is much simpler, which is the basis of our speed advantage.}
\label{fig:CascadePSPandCRM}
\end{figure}
Our main contribution is the following.
\begin{itemize}
\item
We propose a general Continuous Refinement Model~(CRM). It introduces an implicit function that utilizes continuous position information and continuously aligned latent image features in ultra high-resolution segmentation refinement. Without a cascade-based decoder, we effectively reduce computation cost and yet reconstruct more details.
\item
CRM with multi-resolution inference is suitable for using low-resolution training images and ultra high-resolution testing images. Due to the simple design, even with refining from low- to high-resolution, the total inference time is less than half of CascadePSP~\cite{cheng2020cascadepsp}.
\item
In experiments, CRM yields the best segmentation results on ultra high-resolution images. It also helps boost the performance of state-of-the-art panoptic segmentation models without fine-tuning.
\end{itemize}
\section{Related Work}
\subsection{Semantic Segmentation}
Semantic segmentation is to assign a class label to each pixel of an image. FCN~\cite{long2015fully} introduces the deep convolution network into semantic segmentation and achieves remarkable progress; deep convolution networks have since been the dominant solution in this area. Later work includes PSPNet~\cite{zhao2017pyramid}, the DeepLab series methods~\cite{chen2017deeplab, chen2017rethinking, chen2014semantic, chen2018encoder}, and other outstanding work \cite{fu2019dual, yuan2018ocnet, wang2018non, vaswani2017attention, wei2017object, yang2018denseaspp, he2019adaptive, lin2019zigzagnet, gidaris2017detect, islam2017label, li2020spatial}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{./images/pipeline555.pdf}
\hfill
\caption{The general framework of CRM. The upper part is the structure of the model. The lower part is the training and testing process of CRM. From the lower part, we can also see the resolution gap between low-resolution training and high-resolution testing.}
\label{fig:GeneralFramework}
\end{figure*}
Among these methods, output stride (or down-sample ratio) is one point that cannot be ignored. In most semantic segmentation methods, it is set to 4$\times$~\cite{yang2018denseaspp, yuan2018ocnet} or 8$\times$\cite{zhao2017pyramid,chen2018encoder,zhao2018psanet}, which reduces precision. Directly interpolating prediction logits to the target size results in jagged edges and fewer details. In contrast, our proposed CRM continuously aligns features to an arbitrary target refinement resolution, which is more natural for visual instinct and friendly to detail reconstruction.
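The blockiness caused by interpolating low-resolution logits can be seen in a tiny sketch (nearest-neighbor up-sampling, a toy logit map, and the function name are our own illustrative choices):

```python
import numpy as np

# Up-sampling stride-4 logits back to the input size: each low-resolution
# cell smears over a 4x4 block, producing jagged edges on the output mask.
def upsample_nearest(logits, stride=4):
    return np.repeat(np.repeat(logits, stride, axis=0), stride, axis=1)

lowres = np.array([[0.0, 1.0],
                   [1.0, 0.0]])   # a 2x2 toy logit map
full = upsample_nearest(lowres)   # (8, 8): blocky, no fine detail added
```

Bilinear interpolation smooths the block boundaries but likewise cannot add detail beyond the low-resolution prediction, which is the limitation the refinement task addresses.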
\subsection{Segmentation Refinement}
\label{sec:SegmentationRefinement}
The segmentation refinement technique is proposed to improve the quality of image segmentation. In this track, recent work can be categorized into two classes according to the image size: high-resolution~(1K$\sim$2K) or ultra high-resolution~(4K$\sim$6K).
For refinement techniques on images around 1K resolution, segmentation quality is greatly improved. The remaining drawbacks include graphical models adhering to low-level color boundaries~\cite{chen2014semantic, zheng2015conditional}, propagation-based approaches facing computational and memory constraints~\cite{liu2017learning}, and large models being prone to overfitting while shallow refinement networks have limited refinement capability~\cite{lin2017refinenet, kirillov2020pointrend, huynh2021progressive}.
This paper focuses on ultra high-resolution image segmentation refinement on, e.g., 4K images. At this resolution, the above methods would face resource and effectiveness difficulties. Cascade-in-decoder methods~\cite{cheng2020cascadepsp, chen2019collaborative} achieve the state-of-the-art refinement performance on ultra high-resolution images due to their cascade network structure~\cite{he2019bi, sun2013deep, zhao2018icnet, shen2019r} and a global-local patch-based refining pipeline.
However, the heavy cascade structure in the decoder needs down-sampling and cropping patches during inference, which increases cost, loses details, and destroys global context. To solve these problems in ultra high-resolution image segmentation, we propose CRM. Through CAM in CRM, we continuously align the feature map with refinement target simply and elegantly.
The structure difference between cascade-based model~\cite{cheng2020cascadepsp} and our CRM is presented in \cref{fig:CascadePSPandCRM}.
\subsection{Implicit Function for Representation}
\label{sec:relatedImplicitFunction}
Initially, an implicit neural representation is designed to represent an object or a scene in a neural network~(usually a multi-layer perceptron), which maps coordinates to different signals. For example, NeRF~\cite{mildenhall2020nerf} maps a 3D coordinate and a 2D view angle into the RGB and transparency of certain positions from specific views. PixelNeRF~\cite{yu2021pixelnerf} introduces an architecture that conditions a NeRF~\cite{mildenhall2020nerf} on image input in a fully convolutional manner, which realizes scene-aware modeling. In addition, its ``relative camera poses'' idea also inspires research to use relative position information.
As another extension, Semantic-NeRF~\cite{zhi2021place} extends neural radiance fields to encode semantics jointly with appearance and geometry. The intrinsic multi-view consistency and the implicit function's smoothness benefit segmentation by enabling efficient propagation of sparse and noisy labels. There is also work utilizing implicit functions on 2D images~\cite{sitzmann2020implicit, dupont2021coin, chen2021learning,shaham2021spatially, chen2019learning}. To the best of our knowledge, we make the first attempt to introduce implicit functions to segmentation.
\section{Proposed Method}
This section first describes the general framework for the Continuous Refinement Model~(CRM), then illustrates the Continuous Alignment Module~(CAM) and the following implicit function. Finally, we introduce the corresponding inference strategies to exploit continuity in ultra high-resolution.
\subsection{General Framework}
\label{sec:framework}
As illustrated in \cref{fig:GeneralFramework}, following the setting of CascadePSP~\cite{cheng2020cascadepsp}, our proposed CRM takes an image $I \in \mathbb{R}^{3\times H\times W}$ and a coarse segmentation mask $M_{\text{coarse}} \in \mathbb{R}^{1\times H\times W}$ as input. First, $I$ and $M_{\text{coarse}}$ are concatenated as $I_{\text{coarse}} \in \mathbb{R}^{4\times H\times W}$ and represented as a latent embedding $F_{\text{latent}} \in \mathbb{R}^{C\times h\times w}$ by an encoder $E_{\theta}$ as in \cref{eq:cam0}, where $\theta$ denotes the parameters.
\begin{equation}
F_{\text{latent}}=E_{\theta}(I_{\text{coarse}}).
\label{eq:cam0}
\end{equation}
Second, $F_{\text{latent}}$ and position information $P$ are continuously aligned into the target-size feature $F_{\text{cont.}} \in \mathbb{R}^{(C+6) \times H\times W}$ through CAM without explicit up-sampling, as in \cref{eq:cam1}, where $[\cdot,\cdot]$ denotes concatenation.
\begin{equation}
F_{\text{cont.}}=\mathrm{CAM}([P, F_{\text{latent}}]).
\label{eq:cam1}
\end{equation}
Finally, $F_{\text{cont.}}$ passes through an implicit-function-based decoder $D_{\phi}$ and a feature aggregation step, producing the refined mask $M_{\text{refined}}$ as below:
\vspace{-1mm}
\begin{equation}
M_{\text{refined}}(x) = \sum_{\text{z}_{\text{k}} \in N(x)} \frac{\text{w}_{\text{z}_{\text{k}}}}{\sum \text{w}_{\text{z}_{\text{k}}}} D_{\phi}(F_{\text{cont.}}(\text{z}_{\text{k}})),
\label{eq:cam2}
\end{equation}
where $x$ is an aligned point, $N(x)$ denotes the set of $x$'s supporting points $\text{z}_{\text{k}}, k \in \{1, 2, 3, 4\}$, $\text{w}_{\text{z}_{\text{k}}}$ are the aggregation weights~(obtained by swapping the area values of the boxes between $x$ and $\text{z}_{\text{k}} \in N(x)$ symmetrically with $x$ as the center), and $F_{\text{cont.}}(\text{z}_{\text{k}})$ is the feature vector of $\text{z}_{\text{k}}$ on $F_{\text{cont.}}$.
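A minimal sketch of this aggregation, in the spirit of the local ensemble of \cite{chen2021learning} (the function signature and the toy decoder are our own illustrative assumptions): each supporting point is weighted by the area of the box diagonally opposite to it, so with an identity decoder the scheme reduces to bilinear interpolation.

```python
import numpy as np

# Aggregation of the equation above: the prediction at a continuous query
# point x is the normalized, area-weighted sum of the implicit function
# evaluated at the four supporting points on the aligned feature map.
def aggregate(x, y, feat, decoder):
    """x, y: continuous query position in feature-grid units.
    feat: (H, W, C) aligned feature map; decoder: feature vector -> logit."""
    H, W, _ = feat.shape
    i0, j0 = int(np.floor(x)), int(np.floor(y))
    out, wsum = 0.0, 0.0
    for di in (0, 1):
        for dj in (0, 1):
            i = min(max(i0 + di, 0), H - 1)
            j = min(max(j0 + dj, 0), W - 1)
            # weight = area of the box between x and the corner diagonally
            # opposite the supporting point (swapped with x as the center)
            w = abs(x - (i0 + 1 - di)) * abs(y - (j0 + 1 - dj))
            out += w * decoder(feat[i, j])
            wsum += w
    return out / max(wsum, 1e-9)

# toy check: with an identity-like decoder on a linear feature map, the
# result equals bilinear interpolation
feat = np.arange(16.0).reshape(4, 4, 1)
pred = aggregate(1.25, 2.5, feat, lambda f: float(f[0]))  # -> 7.5
```

In CRM the decoder $D_{\phi}$ is a learned implicit function rather than an identity map, so the aggregation blends four learned predictions rather than four raw feature values.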
\subsection{Continuous Alignment Module}
\paragraph{Motivation}
After passing the image encoder, the size of the encoded feature is smaller than the refinement target. Intermediate features or refined results need to be up-sampled progressively in later stages. In previous work~\cite{cheng2020cascadepsp, yang2020meticulous} on ultra high-resolution image segmentation, the cascade scheme seems an indispensable part of the decoder.
Although novel designs alleviate information damage after up-sampling at a specific resolution, it is hard for the overall process to restore more details.
We note that the discrete manner of a cascade-based decoder with predefined up-sampling ratios can be regarded as a constraint on up-sampling, limiting further improvement and reducing generality. In addition, it increases the complexity of the whole framework, as illustrated in \cref{fig:CascadePSPandCRM}. Our proposed Continuous Alignment Module~(CAM) instead utilizes position information and feature alignment to model the continuous deep feature $F_{\text{cont.}}$.
\vspace{-1mm}
\paragraph{Position Information~$P$}
Referring to the NeRF series~\cite{mildenhall2020nerf, yu2021pixelnerf, zhi2021place}, position information is the essential input to the implicit function. The coordinates of the refinement target, $C_{\text{t}}$, are projected to the feature-map coordinates $C_{\text{f}}$. This operation creates continuous coordinates for pixels on feature maps of different resolutions and for various desired inference resolutions, as shown in \cref{sec:Inference}.
The absolute coordinate may vary with the image and feature size. To make our CRM universal for images of arbitrary sizes, $C_{\text{t}}$ and $C_{\text{f}}$ are normalized to the range $[-1, 1]$. After projection, the offset between the points on $C_{\text{t}}$ and their corresponding nearest points on $C_{\text{f}}$ is denoted as $C_\text{r}$. In \cref{fig:GeneralFramework}, $C_{\text{r}}^{i, j}$ represents the offset~(blue arrow) at position $(i, j)$. The relative target coordinate offset $C_\text{r}$, the ratio $r$ between feature and target~\cite{chen2021learning}, and the refinement target position $C_{\text{t}}$ form the position information $P$ as
\vspace{-1mm}
\begin{equation}
P=\left \{C_{\text{r}}, r, C_{\text{t}}\right \}.
\label{eq:position}
\end{equation}
The continuous position information is the basis of continuity in CRM.
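As a concrete sketch of how $P$ can be assembled, the snippet below (NumPy; a hypothetical one-dimensional illustration with helper names of our own choosing, not the actual CRM implementation) normalizes target and feature coordinates to $[-1,1]$, finds the nearest feature point for each target point, and returns $C_{\text{r}}$, $r$, and $C_{\text{t}}$:

```python
import numpy as np

def norm_coords(n):
    # centers of n cells, normalized to the range [-1, 1]
    return (np.arange(n) + 0.5) / n * 2.0 - 1.0

def position_info(n_feat, n_target):
    c_t = norm_coords(n_target)        # refinement target coordinates C_t
    c_f = norm_coords(n_feat)          # feature map coordinates C_f
    # index of the nearest feature cell for each target point
    idx = np.round((c_t + 1.0) / 2.0 * n_feat - 0.5).astype(int)
    idx = np.clip(idx, 0, n_feat - 1)
    c_r = c_t - c_f[idx]               # relative offsets C_r
    r = n_feat / n_target              # feature/target size ratio
    return c_r, r, c_t
```

The offsets $C_{\text{r}}$ are bounded by half a feature cell in normalized units, which is what makes them a stable local input to the implicit function.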
\vspace{-2mm}
\paragraph{Continuous Feature Alignment}
Compared with continuous resolution conversion in SR~\cite{chen2021learning}, $F_{\text{latent}}$ from $E_{\theta}$ needs to be enhanced by fusing global and local information for the segmentation refinement task. For simplicity, we let $F_{\text{latent}}$ denote the enhanced feature. The refinement target position $C_{t}$ can also be regarded as a global feature.
Then, in the same way as for the position information, we align each pixel of the refinement target with $F_{\text{latent}}$. The continuous feature $F_{\text{cont.}}$ is established by concatenating the position information $P$ and the aligned $F_{\text{latent}}$, as shown in \cref{eq:cam1}.
Therefore, compared with discrete resolution conversion, CAM up-samples the feature in a continuous manner. Discrete predefined up-sampling ratios reduce the learning difficulty but constrain the up-sampling process. Our CAM has a greater degree of freedom in this respect, which means a larger optimization space and higher performance potential. The multi-resolution inference in~\cref{sec:Inference} fully exploits the continuity of CAM.
\subsection{Implicit Function in CRM}
After CAM, the implicit function $D_{\phi}$ takes $F_{\text{cont.}}$ as input. The reason for utilizing an implicit function is its impressive ability to process continuous coordinates and reconstruct details~\cite{mildenhall2020nerf, chen2021learning, yu2021pixelnerf, zhi2021place}.
A queried point~(blue point in~\cref{fig:GeneralFramework}) on the target refinement mask is denoted as $x(i, j)$, where $(i, j)$ is its unnormalized position. First, we find its neighbor points $y_{k}, k \in \{1, 2, 3, 4\}$~(green points in~\cref{fig:GeneralFramework}) on the target refinement mask, whose positions are $(i\pm1, j\pm1)$. Next, the nearest points of $y_{k}$, denoted as $z_k$~(red points in~\cref{fig:GeneralFramework}), are selected on the aligned feature map. The points $z_{k}$ are utilized as the supporting points of $x$, represented as $N(x)$.
We then input the feature vector $F_{\text{cont.}}(z_{k})$ of each $z_{k}$ to the implicit function $D_{\phi}$. Finally, we aggregate the implicit function's outputs. The aggregation weights, i.e., the area values~$w_{z_{k}}$, are calculated from the relative coordinate offsets $C_{r}$ in \cref{eq:cam2}. The aggregated output is the final prediction at $(i, j)$.
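The area-based aggregation can be sketched as follows (NumPy; the neighbor ordering and the helper name are our own assumptions). Each supporting point $z_k$ is weighted by the rectangle area spanned by the offset to the \emph{diagonally opposite} point, exactly as in bilinear interpolation:

```python
import numpy as np

def aggregate(outputs, offsets):
    # outputs: (4, C) predictions of D_phi at the supporting points z_k,
    #          ordered as (low-low, high-low, low-high, high-high)
    # offsets: (4, 2) relative offsets C_r from the query x to each z_k
    # The weight of z_k is the area spanned by the offset to the
    # diagonally opposite point; reversing the rows pairs 0<->3 and 1<->2.
    areas = np.abs(offsets[::-1, 0] * offsets[::-1, 1])
    w = areas / areas.sum()
    return (w[:, None] * outputs).sum(axis=0)
```

For a linear field this reduces exactly to bilinear interpolation of the four supporting values.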
\vspace{-2mm}
\paragraph{Analysis} It is well-known that the forward process of CNNs~(e.g., CascadePSP~\cite{cheng2020cascadepsp}) and MLPs~(e.g., CRM) can be regarded as a sequence of operations built on matrix-vector multiplications and nonlinear activations. At initialization, all the weights are sampled from a well-scaled Gaussian distribution. Hence, each layer's feature has almost the same Euclidean norm with high probability (see Cor.~A.10 in~\cite{allen2018convergence}). Namely, for some constant $c$, with probability at least $1-2 \exp \left(-c\varepsilon^{2} m\right)$, we have:
\begin{equation}
\|\phi(A F_{\text{cont.}})\|_{2} \in \qty(1\pm\varepsilon)\qty(\|F_{\text{cont.}}\|_2),
\label{eq:norm}
\end{equation}
where each entry of the matrix $A \in \mathbb{R}^{d\times m}$ is sampled from $\mathcal{N}(0,\frac{1}{m})$, $F_{\text{cont.}}$ is the fixed feature~(same as $F_{\text{cont.}}$ in \cref{eq:cam1}), $\varepsilon \in [0,1]$, $\|\cdot\|_2$ is $\ell_2$-norm, and $\phi: \mathbb{R} \to \mathbb{R}$ is the ReLU activation.
\par
The norm is almost preserved after passing through one layer. However, if we further append a weighted-average operation to $\phi(A F_{\text{cont.}})$, things become interesting.
The appended weighted average can always help to improve the representation ability of the model, i.e.,
\begin{equation}
\operatorname{dim}\left(\sum_{z_{k} \in N(x)} \frac{w_{z_{k}}}{\sum_{k} w_{z_{k}}} \phi(A F_{\text{cont.}}(z_{k}))\right) \geq \operatorname{dim}(\phi(A F_{\text{cont.}})),
\label{eq:dim}
\end{equation}
where $\operatorname{dim}$ is the dimension of space.
A toy example is that, when $F_{\text{cont.}}$ is the $m$-dimensional sphere $\mathcal{S}(m)$, $\phi(A F_{\text{cont.}})$ will concentrate around the sphere $\mathcal{S}(d)$ by the norm-preserving property. However, after combining with the weighted average operator, we can get any points \emph{in} the $d$-dimensional ball $\mathcal{B}(d)$.
Generally, $\operatorname{dim}\qty(\mathcal{B}(d))>\operatorname{dim}\qty(\mathcal{S}(d))$.
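A quick numerical check of the norm-preservation step is given below (NumPy). Note that, as an assumption of this sketch, we use the He scaling $\mathcal{N}(0, 2/m)$ so that the factor removed by the ReLU is absorbed; the constants differ from those of the corollary quoted above, but the concentration behavior is the same:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 64, 4096
x = rng.standard_normal(d)                        # fixed feature F_cont.
A = rng.normal(0.0, np.sqrt(2.0 / m), size=(m, d))  # entries ~ N(0, 2/m)
y = np.maximum(A @ x, 0.0)                        # ReLU activation phi
ratio = np.linalg.norm(y) / np.linalg.norm(x)     # concentrates near 1
```

With $m = 4096$ the ratio deviates from $1$ by only a few percent, illustrating the $(1 \pm \varepsilon)$ bound.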
Returning to our setting, the main difference between CRM and CascadePSP~\cite{cheng2020cascadepsp} lies in the decoder. Take four neighboring points as an example: CRM utilizes an MLP and an area-based average instead of a 2$\times$2 convolution. Therefore, the dimension of CRM's feature space is larger. If the four points all belong to the same class, the influence is small. However, in boundary regions, where the four points belong to different classes, a larger feature space provides more distinguishable features for classification.
From this perspective, we obtain some hints as to why CRM has a stronger boundary-region representation and predicts better details.
\subsection{Training and Inference Strategy}
\paragraph{Training without Cascade}
LIIF~\cite{chen2021learning} proposes an elegant solution for SR with the implicit function.
It uses 2K images as ground truth and can generate low-resolution input images of any size.
In contrast, ultra high-resolution images with precise segmentation annotations are too scarce for training. In addition, high-resolution training is directly limited by GPU memory and batch-size constraints.
Given these challenges, we follow the training setting of CascadePSP~\cite{cheng2020cascadepsp} and use low-resolution images at their initial resolution. $M_{\text{coarse}}$ is generated by morphological perturbation of the provided ground truth mask $M_{\text{gt}}$.
We design the training loss simply on the final prediction $M_{\text{refined}}$, without separate loss functions at different resolution stages~\cite{cheng2020cascadepsp}. Our loss term $L(\theta, \phi)$ is calculated on the refinement target as
\begin{equation}
L(\theta, \phi)=\sum_{i=1}^{4}{w_{\text{i}} \cdot L_{\text{i}}\qty(M_{\text{refined}},M_{\text{gt}})},
\end{equation}
where $L_{i}, i \in \{1,2,3,4\}$, denote the cross-entropy loss, L1 loss, L2 loss, and gradient loss, respectively. $w_{i}$ are their corresponding weights. $(\theta, \phi)$ are the parameters of the encoder $E_{\theta}$ and the decoder $D_{\phi}$. $M_{\text{gt}}$ denotes the ground truth mask.
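A minimal sketch of the combined loss on a single binary mask is given below (NumPy). The default weights $w_i$ and the finite-difference form of the gradient loss are our own assumptions for illustration:

```python
import numpy as np

def refinement_loss(pred, gt, weights=(1.0, 1.0, 1.0, 1.0)):
    # pred, gt: (H, W) arrays; pred holds probabilities, gt binary labels.
    # weights: hypothetical defaults for (cross-entropy, L1, L2, gradient).
    eps = 1e-7
    p = np.clip(pred, eps, 1.0 - eps)
    ce = -np.mean(gt * np.log(p) + (1.0 - gt) * np.log(1.0 - p))
    l1 = np.mean(np.abs(pred - gt))
    l2 = np.mean((pred - gt) ** 2)
    # gradient loss: L1 distance between finite-difference gradients
    gx = np.abs(np.diff(pred, axis=1) - np.diff(gt, axis=1)).mean()
    gy = np.abs(np.diff(pred, axis=0) - np.diff(gt, axis=0)).mean()
    grad = gx + gy
    w = weights
    return w[0] * ce + w[1] * l1 + w[2] * l2 + w[3] * grad
```

A perfect prediction drives all four terms to (near) zero, while a wrong prediction is penalized most strongly by the cross-entropy term.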
Although we train at low resolution, our multi-resolution inference strategy exploits the continuity potential and narrows the gap between training and testing resolutions.
\begin{table*}[!htp]
\centering
\begin{tabular}{lccccc}
\toprule
IoU/mBA &
Coarse Mask &
SegFix\cite{yuan2020segfix} &
MGMatting\cite{mgmatting} &
CascadePSP\cite{cheng2020cascadepsp} &
CRM(Ours) \\ \toprule
FCN-8s~\cite{long2015fully} & 72.39/53.63 & 72.69/55.21 & 72.31/57.32 & 77.87/67.04 & \textbf{79.62}/\textbf{69.47} \\
DeepLabV3+~\cite{chen2018encoder} & 89.42/60.25 & 89.95/64.34 & 90.49/67.48 & \textbf{92.23}/74.59 & 91.84/\textbf{74.96} \\
RefineNet~\cite{lin2017refinenet} & 90.20/62.03 & 90.73/65.95 & 90.98/68.40 & 92.79/74.77 & \textbf{92.89}/\textbf{75.50} \\
PSPNet~\cite{zhao2017pyramid} & 90.49/59.63 & 91.01/63.25 & 91.62/66.73 & 93.93/75.32 & \textbf{94.18}/\textbf{76.09} \\ \hline
Average Improve. & 0.00/0.00 & 0.47/3.30 & 0.73/6.10 & 3.58/14.05 & \textbf{4.01}/\textbf{15.12} \\ \toprule
\end{tabular}
\vspace{-2mm}
\caption{IoU and mBA results on the BIG dataset compared with other mask refinement methods. Coarse masks are from FCN, DeepLabV3+, RefineNet and PSPNet. Best results are noted in \textbf{bold}. Average Improve.\ denotes the average improvement over the coarse mask.}
\label{tab:BIGPerformance}
\end{table*}
\begin{figure}[!t]
\centering
\includegraphics[width=0.98\linewidth]{./images/refine_step.pdf}
\hfill
\caption{Visualization of refinement steps in our inference strategy. From left to right, top to bottom: $M_{\text{coarse}}$, the refined masks $M^{i}_{\text{refined}}, i\in \{1,2,3,4\}$~(the rescale ratios are 0.125, 0.25, 0.5, and 1.0 here), and the overlay of $M^{4}_{\text{refined}}$ on the original image.}
\label{fig:Refinement}
\end{figure}
\paragraph{Inference Strategy}
\label{sec:Inference}
To bridge the resolution gap between low-resolution training~(300$\sim$1K) and ultra high-resolution testing~(2K$\sim$6K), we propose multi-resolution inference to fully exploit CRM's continuous $P$ and aligned $F_{\text{cont.}}$. The lower part of \cref{fig:GeneralFramework} shows the resolution contrast. Due to the continuous property of CAM, for one image, we can generate outputs of the same target ultra high-resolution $M_{\text{refined}}^{i}$ from multi-resolution inputs $R^{i}(I_{\text{coarse}}^{i})$.
In the beginning, inference starts around the resolution of the training images, and the input resolution is gradually increased along the continuous ratio axis $Rs$~(with infinitely many possible rescale ratios), as illustrated in \cref{fig:GeneralFramework}. In particular, we concatenate the original ultra high-resolution image $I$ with the coarse mask $M_{\text{coarse}}$~(initial stage) or the refined mask $M_{\text{refined}}^{i-1}$ from the previous stage. We rescale it by the rescale function $R^{i} \in Rs$ to obtain $I_{\text{coarse}}^{i}$. After refinement, $M_{\text{refined}}^{i}$ is generated and used as $M_{\text{coarse}}^{i+1}$ for the next rescale-ratio stage. This progressive process is illustrated in~\cref{eq:test1,eq:test2,eq:test3}:
\begin{equation}
I_{\text{coarse}}^{0} = [I, M_{\text{coarse}}^{0}],
\label{eq:test1}
\end{equation}
\begin{equation}
M_{\text{refined}}^{i}=D_{\phi}\qty(\mathrm{CAM}\qty(E_{\theta}\qty(R^{i}\qty(I_{\text{coarse}}^{i})))),
\label{eq:test2}
\end{equation}
\begin{equation}
I_{\text{coarse}}^{i+1} = [I, M_{\text{refined}}^{i}],
\label{eq:test3}
\end{equation}
where $R^{i}$ is one rescale function in $Rs$, and the superscript $i$ denotes the refinement stage. For simplicity, \cref{eq:test2} does not include the aggregation. In practice, we select as many $R^{i}$ as required by the target performance or permitted by the available resources. The relation between performance and the number of $R^{i}$ is illustrated in \cref{fig:cont_sample}, and \cref{fig:Refinement} shows an example.
This strategy can also be regarded as a variant of coarse-to-fine operations, where the methods of~\cite{cheng2020cascadepsp, yang2020meticulous} realize it through a cascade in the decoder, and the method of~\cite{huynh2021progressive} through varying window sizes~(256, 512, 1024, and 2048). They could also use this strategy to shrink the gap. Nevertheless, their relatively heavy cascade-based networks and the many forward passes required during inference hinder such usage. Take CascadePSP~\cite{cheng2020cascadepsp} as an example: it uses the whole ResNet-50~\cite{he2016deep} as its backbone, whereas CRM uses ResNet-50 without conv5$\_$x. Moreover, the cascade-based decoder in CascadePSP~(three resolution up-samplings and the corresponding computation) is more costly than CRM's CAM and $D_{\phi}$~(a five-layer MLP). Therefore, even with multi-resolution inference, the whole refinement process of CRM can be more than twice as fast as CascadePSP~\cite{cheng2020cascadepsp}, as shown in~\cref{tab:BIGCost}.
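The progressive loop of \cref{eq:test1,eq:test2,eq:test3} can be sketched as follows. The callable \texttt{refine\_fn} stands in for $D_{\phi}(\mathrm{CAM}(E_{\theta}(R^{i}(\cdot))))$ and is a placeholder of our own, not the actual model:

```python
import numpy as np

def multires_refine(image, coarse_mask, refine_fn,
                    ratios=(0.125, 0.25, 0.5, 1.0)):
    # Progressive multi-resolution inference: at each stage the refined
    # mask M_refined^i becomes the coarse mask M_coarse^{i+1} of the next
    # stage; refine_fn(image, mask, ratio) must return a mask at the
    # full target resolution.
    mask = coarse_mask
    for r in ratios:
        mask = refine_fn(image, mask, r)
    return mask
```

Any fixed set of rescale ratios sampled from the continuous axis $Rs$ can be passed via \texttt{ratios}.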
\section{Experiments}
\label{sec:Experiment}
In this section, we evaluate our CRM and compare it with corresponding state-of-the-art methods on BIG~\cite{cheng2020cascadepsp}, COCO~\cite{lin2014microsoft} and relabeled PASCAL VOC 2012~\cite{everingham2015pascal}. We report the Intersection over Union~(IoU), mean Boundary Accuracy~(mBA)~\cite{cheng2020cascadepsp}, panoptic quality~(PQ)~\cite{kirillov2019panoptic} and average precision~(AP) to measure performance. Then, we present visualizations along with ablation studies to understand the effectiveness of our CRM.
\subsection{Datasets and Methods of Comparison}
For the training datasets, we follow the setting of CascadePSP~\cite{cheng2020cascadepsp}. MSRA-10K~\cite{cheng2014global}, DUT-OMRON~\cite{yang2013saliency}, ECSSD~\cite{shi2015hierarchical}, and FSS-1000~\cite{li2020fss} are merged into the training datasets, consisting of 36,572 images with diverse semantic classes~(\textgreater 1,000 classes). For the testing datasets, CascadePSP~\cite{cheng2020cascadepsp} proposes a high-resolution image segmentation dataset, named BIG, for evaluation at ultra high resolution. The image resolution in BIG ranges from 2K to 6K. To demonstrate that our proposed model is general, we evaluate CRM as an extension of panoptic segmentation~\cite{li2021fully} and entity segmentation~\cite{qi2021open}. We also evaluate CRM on relabeled PASCAL VOC 2012, which is introduced in~\cite{cheng2020cascadepsp}.
We choose CascadePSP~\cite{cheng2020cascadepsp} as the main comparison method for ultra high resolution. MGMatting~\cite{mgmatting} is chosen as a mask-guided matting method and SegFix~\cite{yuan2020segfix} as a high-resolution segmentation refinement method. PanopticFCN~\cite{li2021fully} and Entity Segmentor~\cite{qi2021open} serve as the baselines for panoptic and entity segmentation. Our proposed method performs better in terms of precision and speed in almost all experiments, especially on high-resolution images.
\subsection{Implementation Details}
We implement our model with PyTorch~\cite{paszke2019pytorch}, and use ResNet-50~\cite{he2016deep} without conv5$\_$x as our $E_{\theta}$. For training, we use Adam~\cite{kingma2014adam} with a learning rate of $2.25\times 10^{-4}$. The learning rate is reduced to one-tenth at steps 22,500 and 37,500, out of a total of 45,000 steps. The training input concatenates 224 $\times$ 224 patches cropped from the original images with their corresponding perturbed masks. The perturbed masks are generated by randomly perturbing the ground truth with a random IoU threshold between 0.8 and 1.0.
For evaluation, we select 4 rescale ratios from a continuous range to perform refinement in the experiments. The total inference time of CRM is still much less than half that of CascadePSP~\cite{cheng2020cascadepsp}.
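The step-decay schedule described above can be written as a small helper (the function name is ours; the base learning rate and milestones are those stated in the text):

```python
def lr_at_step(step, base_lr=2.25e-4, milestones=(22500, 37500), gamma=0.1):
    # Step decay: the learning rate drops to one tenth at each
    # milestone over the 45,000 total training steps.
    lr = base_lr
    for m in milestones:
        if step >= m:
            lr *= gamma
    return lr
```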
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.98\linewidth]{./images/results.pdf}
\hfill
\caption{Qualitative comparison between SegFix, CascadePSP and CRM on the coarse masks from FCN, DeepLabV3+, RefineNet and PSPNet. The images are from BIG~(2K $\sim$ 6K). The black-and-white mask in the bottom-left part of the first column is the coarse mask.}
\vspace{-2mm}
\label{fig:vis}
\end{figure*}
\begin{table}[]
\small
\centering
\begin{tabular}{lccc}
\toprule
Method~\footnotesize{(IoU/mBA)} & Time(s) & FLOPs(G) & Params(M) \\ \toprule
CasPSP~\footnotesize{(93.9/75.3)}\cite{cheng2020cascadepsp} & 620 & 26518 & 67.62 \\
CRM~\footnotesize{(94.2/76.1)}& 425 & 2536 & 9.27 \\
CRM*~\footnotesize{(93.9/76.3)} & 259 & 1331 & 9.27 \\ \toprule
\end{tabular}
\vspace{-2mm}
\caption{Comparison of total inference time, FLOPs, and the number of parameters on the BIG dataset. CasPSP denotes CascadePSP, which selects patches for computation. CRM computes on all pixels. CRM* is a computation-friendly version that computes only on the region of interest. Time is recorded on the whole BIG dataset. FLOPs are tested on the same image~(2560$\times$1706).}
\label{tab:BIGCost}
\end{table}
\subsection{Quantitative Results}
In \cref{tab:BIGPerformance} and \cref{tab:BIGCost}, we compare our CRM with CascadePSP~\cite{cheng2020cascadepsp}, SegFix~\cite{yuan2020segfix}, and MGMatting~\cite{mgmatting}. (SegFix and MGMatting perform better on a rescaled image with a downsampling ratio of 0.5.) The results show that CRM performs better and runs faster at high resolution.
All segmentation refinement models are trained on low-resolution images and tested on high-resolution images. SegFix and MGMatting, which lack a special design for the ultra high-resolution images in BIG~\cite{cheng2020cascadepsp}, do not refine as well as the other methods. CascadePSP~\cite{cheng2020cascadepsp} gains more IoU after refinement. Moreover, our CRM produces the highest-quality refinement.
Besides, inference time is essential for the ultra high-resolution task. \cref{tab:BIGCost} shows that CRM takes less than half the inference time of CascadePSP~\cite{cheng2020cascadepsp} on the whole BIG dataset. Its FLOPs and parameter counts are also lower. This advantage is due to the simplicity of CRM.
\begin{table}[]
\small
\centering
\begin{tabular}{lclc}
\toprule
Method & PQ & Method & AP \\ \toprule
PanopticFCN~\cite{li2021fully} & 41.0 & EntitySeg~\cite{qi2021open} & 38.1 \\
PanopticFCN+CRM & 41.8 & EntitySeg+CRM & 38.9 \\ \toprule
\end{tabular}
\vspace{-2mm}
\caption{The performance after extending PanopticFCN and EntitySeg with our CRM, without fine-tuning.}
\label{tab:PanopticPerformance}
\end{table}
The experiments on panoptic segmentation and entity segmentation are illustrated in \cref{tab:PanopticPerformance}. After adding CRM to \cite{li2021fully} and \cite{qi2021open}, their segmentation performance is enhanced.
We also report our performance on relabeled PASCAL VOC 2012 in \cref{tab:relabelPascal}. CRM performs better than SegFix~\cite{yuan2020segfix} and is comparable with CascadePSP~\cite{cheng2020cascadepsp} on IoU, while placing greater emphasis on details.
These quantitative results show CRM's general effectiveness on ultra high-resolution images as well as on low-resolution ones.
\subsection{Qualitative Results}
We show a comparison among CascadePSP~\cite{cheng2020cascadepsp}, SegFix~\cite{yuan2020segfix} and our proposed CRM in \cref{fig:vis}. Our refinement results contain more details. CRM generates matting-style results with only semantic segmentation annotations in training, whereas matting methods benefit from continuous alpha-value supervision. Furthermore, missing parts in the coarse masks can be better reconstructed through CRM.
In addition, we show some visualizations of applying CRM to panoptic segmentation in \cref{fig:panoptic}. We can see that the mask details and the overall segmentation are considerably improved. More results in the supplementary material further demonstrate the effectiveness of CRM and its continuous modeling.
\begin{table}[!t]
\vspace{-4mm}
\small
\centering
\begin{tabular}{lcccc}
\\\toprule
IoU/\underline{mBA} & CM & SF\cite{yuan2020segfix} & CasPSP~\cite{cheng2020cascadepsp} & CRM \\ \toprule
FCN-8s~\cite{long2015fully} & 68.85 & 70.02 & 72.70 & 73.74 \\
& \underline{54.05} & \underline{57.63} & \underline{65.36} & \underline{67.17} \\ \hline
DeepLab & 87.13 & 88.03 & 89.01 & 88.33 \\
V3+~\cite{chen2018encoder} & \underline{61.68} & \underline{66.35} & \underline{72.10} & \underline{72.25} \\ \hline
RefineNet~\cite{lin2017refinenet} & 86.21 & 86.71 & 87.48 & 87.18 \\
& \underline{62.61} & \underline{66.15} & \underline{71.34} & \underline{71.54} \\ \hline
PSPNet~\cite{zhao2017pyramid} & 90.92 & 91.98 & 92.86 & 92.52 \\
& \underline{60.51} & \underline{66.03} & \underline{72.24} & \underline{72.48} \\ \toprule
\end{tabular}
\vspace{-1mm}
\caption{Quantitative comparison on relabeled PASCAL VOC 2012. Due to the limited width, CM represents the coarse mask, SF represents SegFix, and CasPSP denotes CascadePSP.}
\label{tab:relabelPascal}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.98\linewidth]{./images/panoptic.pdf}
\hfill
\vspace{-1mm}
\caption{CRM applied in panoptic segmentation. (a) Input image, (b) coarse panoptic segmentation mask, (c) refined mask by our CRM. The images are from COCO.}
\label{fig:panoptic}
\end{figure}
\subsection{Ablation Study}
\vspace{-1mm}
\paragraph{CRM and Inference Resolutions}
CAM and the implicit function are the key contributions of our work. The rows of \cref{tab:ExistingofCRM} show that CAM and the implicit function enhance performance at every resolution~(the first column lists the rescale ratios applied to $I_{\text{coarse}}^{i}$).
For the inference strategy, we analyze the columns of \cref{tab:ExistingofCRM}. CRM refines a good general mask at low resolution~(IoU increases mainly at low resolutions). As the resolution grows, more details are generated and mBA increases.
\begin{table}[!ht]
\small
\renewcommand*{\arraystretch}{0.9}
\centering
\begin{tabular}{ccc}
\toprule
IoU/mBA & w/o CAM\&Impl. & w CAM\&Impl. \\ \toprule
0.125 & 92.68/63.70 & 93.07/65.61 \\
0.25 & 93.49/69.23 & 93.88/71.41 \\
0.5 & 93.85/73.43 & 94.15/74.95 \\
1.0 & 93.94/75.42 & 94.18/76.09 \\ \toprule
\end{tabular}
\vspace{-2mm}
\caption{The effect of CRM and inference resolutions. Impl. denotes implicit function.}
\label{tab:ExistingofCRM}
\end{table}
\begin{table}[!ht]
\small
\renewcommand*{\arraystretch}{0.9}
\centering
\begin{tabular}{c c c c}
\toprule
CAM & Impl. & IoU & mBA \\ \toprule
$\times$ & $\times$ & 93.94 & 75.42 \\
$\surd$ & $\times$ & 93.99 & 75.93 \\
$\times$ & $\surd$ & 93.96 & 75.55 \\
$\surd$ & $\surd$ & 94.18 & 76.09 \\ \toprule
\end{tabular}
\vspace{-2mm}
\caption{The ablation study about CAM and implicit function.}
\label{tab:ContAlignAndImplicitFunc}
\end{table}
\vspace{-3mm}
\paragraph{CAM and Implicit Function}
The results in \cref{tab:ContAlignAndImplicitFunc} show that CAM and the implicit function are both indispensable parts of CRM. Together, they achieve a synergistic effect.
\vspace{-3mm}
\paragraph{The effect of inference's continuity}
From \cref{fig:cont_sample}, we can see that performance grows with the number of sampled rescale ratios between 0 and 1. More samples mean more continuity in the inference resolutions, which helps improve performance until convergence. Although these sampled rescale ratios differ from those chosen in~\cref{fig:Refinement} and \cref{tab:ExistingofCRM}, the final performance reaches almost the same level.
\begin{figure}[t]
\centering
\includegraphics[width=0.98\linewidth]{./images/cont_sample.pdf}
\hfill
\vspace{-1mm}
\caption{The effect of inference's continuity. The horizontal axis represents the number of uniformly sampled points between 0 and 1. The sampled points are rescale ratios of input.}
\label{fig:cont_sample}
\end{figure}
\section{Conclusion}
\vspace{-1mm}
We have proposed CRM to refine segmentation on ultra high-resolution images.
CRM continuously aligns the feature map with the refinement target, which helps aggregate features for reconstructing details on the high-resolution mask. Besides, our CRM shows its significant generalization potential regarding low-resolution training and ultra high-resolution testing. Experiments show that continuous modeling is promising in terms of performance and speed.
\noindent \textbf{Limitations} We use the configuration of ``low-resolution training and ultra high-resolution testing'' at present. Using ultra high-resolution images to train and test is still resource-consuming. Addressing this challenging problem will be our future work.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction} \label{intro}
The classical way of solving nonstationary boundary value problems applies separate discretizations in space and time.
One promising alternative to this approach is the finite element method with full space-time discretization.
In principle, this method can be very effective for solving hyperbolic and parabolic problems.
Some basic examples of space-time methods can be found in~\cite{steinbach2015space,steinbach2017algebraic,toulopoulos2020space} and the recent survey of Langer and Steinbach~\cite{langer2019space}.
In addition, a representative example of this work is provided by Dumont et al.~\cite{dumont2012space} who developed a space-time finite element method to solve elastodynamics problems.
Since the dynamics problem in~\cite{dumont2012space} involves a three-dimensional spatial body, the authors have constructed four-dimensional meshes.
Dumont et al.~have not developed general, unstructured triangulations of four-dimensional domains.
Rather, they first construct a spatial triangulation and then extrude the domain in the temporal direction to obtain a four-dimensional finite element triangulation.
A similar approach for generating four-dimensional simplex meshes has been employed by Neum{\"u}ller and Karabelas in~\cite{neumuller2019generating}. A detailed survey of four-dimensional structured and unstructured simplicial meshing techniques is provided by Caplan~\cite{caplan2019four} and Frontin et al.~\cite{frontin2020polytopes}. Finally, examples of refinement strategies for high-dimensional simplicial meshes have been developed in the work of Brandts et al.~\cite{brandts2007simplicial}, and Korotov and K\v{r}\'{\i}\v{z}ek~\cite{korotov2014red}.
There have been virtually no efforts to develop four-dimensional hybrid meshes.
However, in the last few decades, there has been interest in the development of hybrid \emph{three-dimensional} meshes~\cite{bergot2010higher,cao2012new}, with some important work being contributed by Yamakawa and coworkers~\cite{yamakawa2011subdivision,yamakawa2003increasing,yamakawa2009converting,yamakawa201088,yamakawa2011automatic}. In~\cite{yamakawa2011automatic}, Yamakawa and Shimada developed a procedure for creating hexahedron dominant meshes in order to model thin-wall structures.
Additionally, Yamakawa et al. described a conforming coupling between hexahedron and tetrahedron meshes by using square pyramid interface elements~\cite{yamakawa2009converting}, and using triangular prism interface elements~\cite{yamakawa2011subdivision}.
The conforming coupling of hexahedra and tetrahedra via pyramids has been discussed independently by Meshkat and Talmor~\cite{meshkat2000generating}, and Devloo et al.~\cite{devloo2019high}.
In addition, Pantano and Averill~\cite{pantano2007penalty} have used a non-conforming, penalty-based procedure to couple independently modeled three-dimensional finite element meshes.
Similar non-conforming techniques using mortar elements have been explored by Maday et al.~\cite{maday1988nonconforming} and Seshaiyer and Suri~\cite{seshaiyer2000hp}.
Furthermore, hybrid non-conforming hexahedral-tetrahedral meshes have been examined by Reberol and L{\'e}vy~\cite{reberol2016low}.
Unfortunately, these non-conforming meshes are incompatible with most standard finite element methods, with the exception of discontinuous Galerkin methods.
Evidently, the most convenient way to unify independently created finite element meshes is through the conforming coupling of the elements in the interface subdomain. This strongly motivates the importance of transitional elements, such as pyramidal and prismatic elements.
Let us briefly review previous efforts to analyze transitional elements.
Evidently, the three-dimensional prismatic elements are relatively straightforward to analyze, as they can be constructed by forming tensor products between line segments and triangles, whereas a similar procedure is impossible for pyramids.
As a result, prismatic elements inherit their interpolation and quadrature procedures from well-established 1D/2D procedures, and researchers have primarily focused their attention on pyramids.
The pioneering work on pyramids was performed by Bedrosian~\cite{bedrosian1992shape} in the early nineties.
He was the first researcher who explained the role of pyramidal elements for coupling different kinds of meshes, and developed the first interpolation and integration procedures.
A number of researchers have followed in Bedrosian's footsteps~\cite{ainsworth2017lowest,bergot2010higher,chan2015comparison,chan2015hp,chan2016orthogonal,chan2016short,coulomb1997pyramidal,gillette2016serendipity}. In particular, Bergot et al.~\cite{bergot2010higher} performed a deep analysis of the interpolation properties of pyramidal elements.
Chan and Warburton extended this work, developing alternative sets of basis functions~\cite{chan2016orthogonal,chan2016short}, analyzing the numerical stability of interpolation points~\cite{chan2015comparison}, and deriving trace inequalities~\cite{chan2015hp}.
Furthermore, Gillette~\cite{gillette2016serendipity} recently used techniques from finite element exterior calculus~\cite{arnold2010finite,arnold2006finite} to construct serendipity-type basis functions on pyramids.
Quadrature formulas on pyramidal elements have been created and analyzed in~\cite{bergot2010higher,nigam2012numerical,witherden2015identification}.
The error estimates arising from the quadrature rules of Bergot et al.~\cite{bergot2010higher} have been improved by Nigam and Phillips~\cite{nigam2012numerical} in order to obtain the classical rate of convergence for second-order boundary value problems. However, the best set of rules to date (in terms of integration strength for a minimum number of points) was identified by Witherden and Vincent in~\cite{witherden2015identification}.
Now, let us return our attention to four-dimensional space. It turns out that the three-dimensional definitions of transitional elements do not extend to higher dimensions.
For example, while the triangular prism provides a conforming interface between the hexahedron and the tetrahedron in three dimensions, its analog in four dimensions (the tetrahedral prism) does \emph{not} provide a conforming interface between the tesseract and the pentatope.
This immediately follows from the fact that the tetrahedral prism has two tetrahedral facets and four triangular prismatic facets, and therefore it cannot conformally interface with the tesseract. In a similar fashion, the cubic pyramid (the four-dimensional analog of the square pyramid) has one hexahedral facet and six square pyramidal facets, and therefore it cannot conformally interface with the pentatope. Fortunately, the lack of transitional elements between tesseracts and pentatopes does not completely prevent the formulation of hybrid meshes in four dimensions. We will show that it is still possible to build hybrid meshes using tesseract, cubic pyramid, and pentatopal elements. We note that some basic geometric properties of these elements are discussed in the work of Coxeter~\cite{coxeter1940regular,coxeter1973regular}, Sommerville~\cite{mclaren1958introduction}, and Zamboj~\cite{zamboj2018sections}.
The main goals of this paper are: i) to develop conforming hybrid four-dimensional meshes, ii) to create an optimal refinement strategy for these meshes, and iii) to develop numerical integration procedures on the elements of these meshes.
The major contributions of the paper are summarized as follows.
First, we identify a four-dimensional conforming coupling between tesseract elements and cubic pyramid elements.
Next, the tesseract and cubic pyramid elements are subdivided, while maintaining conformity.
Evidently, while the tesseracts can be uniformly subdivided into smaller tesseracts, the cubic pyramids cannot be subdivided into smaller cubic pyramids without leaving gaps.
Therefore, in the refinement strategy for the cubic pyramids, we use a combination of congruent cubic pyramids \emph{and} invariant bipentatopes.
A simple two-level refinement tree is obtained.
The theoretical properties of the refinement strategy are thoroughly analyzed. The proposed theoretical results cannot be improved, otherwise the cubic pyramid would be invariant with a one-level refinement tree, which is obviously not the case.
Finally, we conclude our work by developing numerical integration procedures for the cubic pyramid elements.
We note that, while such procedures are already well-established for the tesseract and pentatope elements~\cite{frontin2020polytopes,williams2020family}, they have yet to be developed for cubic pyramid elements.
The format of this paper is as follows.
In Section 2, we introduce some standard notation and terminology.
In Section 3, we formulate new hybrid meshes of tesseract, cubic pyramid, and bipentatope elements, along with a non-degenerate mesh refinement strategy.
In Section 4, we develop theoretical results which govern the hybrid meshes.
In Section 5, we introduce a new set of fully symmetric quadrature rules for cubic pyramid elements.
Finally, in Section 6, we summarize the main conclusions of the paper.
\section{Preliminary concepts}
\subsection{Background and motivation}
The four-dimensional space $\mathbb{R}^{4}$ has unique geometric properties which are shared by no other multidimensional Euclidean space, with the exception of the two-dimensional one.
This fact can be illustrated by comparing the three- and four-dimensional spaces.
It is well-known that the 3-cube cannot be triangulated by standard tetrahedra (i.e.~cube corners)~\cite{korotov2014red}.
Rather, the cube is usually triangulated into five simplices, four of which are standard tetrahedra and one of which is a regular tetrahedron.
Additionally, it is impossible to triangulate the standard tetrahedron with standard tetrahedra.
Instead, there exists a partition of the standard tetrahedron into six standard tetrahedra and one regular tetrahedron~\cite{todorov2013optimal}.
Interestingly enough, these principles do not extend to the four-dimensional case. Petrov and Todorov~\cite{petrov2018stable} have proved that each tesseract can be divided into standard pentatopes (i.e.~tesseract corners), and each standard pentatope can be partitioned into standard pentatopes.
The latter fact means that an arbitrary 4D \emph{canonical} domain can be divided into standard pentatopes~\cite{petrov2018stable}, which are the most convenient elements from a computational point of view.
The authors have tested the six- and eight-dimensional hypercubes for analogous properties, but without success. Therefore, given its useful properties, further efforts to explore and analyze four-dimensional space are certainly warranted.
In what follows, we will introduce some useful definitions which facilitate our subsequent discussions of $\mathbb{R}^{4}$.
\subsection{Definitions}
\begin{df}
A simply connected domain in $\mathbb{R}^{4}$ is called canonical if there exists a conforming partition of the domain into tesseracts (4-cubes).
\end{df}
\begin{df}
Each $n$-dimensional hypercube can be divided into $2n$ $(n-1)$-hypercube pyramids. These pyramids are called canonical pyramids.
\end{df}
\begin{df}
Let $\Omega$ be a nondegenerate polytope in $\mathbb{R}^4$, and let $T_i$, $i=1,2,\ldots,k,$ be $4$-dimensional finite elements. The triangulation
\begin{align*}
\tau=\left\{T_i\subset \mathbb{R}^4 \ | \ \Omega=\bigcup_{i=1}^k T_i\right\},
\end{align*}
of the polytope $\Omega$ is said to be consistent if any two elements $T_i$ and $T_j$, $1\le i<j\le k$, share nothing other than the empty set, a common vertex, a common edge, or a common $\ell$-dimensional facet, $\ell=2,3$.
\end{df}
\begin{df}
The polytope
\begin{align*}
T_{-i}=[t_{1},t_{2},\dots,t_{i-1},t_{i+1},\dots,t_{n+1}],\quad
i=2,\dots,n,
\end{align*}
related to the vertex $t_i$ is obtained from the polytope $T=[t_{1},t_{2},\dots,t_{n+1}]$, $n\in \mathbb{N}$ by removing the vertex $t_i$.
The polytopes $T_{-1}$ and $T_{-(n+1)}$ are defined in a similar fashion as
\begin{align*}
T_{-1}=[t_{2},t_{3},\dots,t_{n+1}],\quad
T_{-(n+1)}=[t_{1},t_{2},\dots,t_{n}].
\end{align*}
\end{df}
\begin{df}
The polytopes $T_1$ and $T_2$ are congruent if one of
them can be obtained from the other by applying an affine transformation
\begin{align*}
T_2 =\underline{b}+cQT_1,
\end{align*}
where $c\not=0$ is a scaling factor, $\underline{b}$ is a translation vector,
and $Q$ is an orthogonal matrix.
\end{df}
\begin{df}
We say that two polytopes $T_1$ and $T_2$ are from the same class if they are congruent.
\end{df}
The class
\begin{align*}
[K]=\{T\subset \mathbb{R}^4 \ | \ T\cong K\},
\end{align*}
consists of all pyramids in $\mathbb{R}^4$ that are equivalent to the pyramid $K$ with respect to the congruence relation.
\begin{df}
A polytope $T$ is said to be invariant with respect to a refinement strategy $\mathcal{A}$ if all elements of $\mathcal{A}T$ belong to $[T]$.
\end{df}
\begin{df}
The degeneracy measure of an arbitrary pyramid
$K = [k_1,k_2,$ $\ldots,$ $k_{n+1}]$ is equal to
\begin{equation*}
\delta(K)= \frac{h(K)\cdot {\rm vol}(\partial K)}{8\cdot {\rm vol}(K)},
\end{equation*}
where $h(K)$ is the diameter of $K$.
\end{df}
Here, we use ${\rm vol}(T)$ and ${\rm vol}(\partial T)$
instead of ${\rm vol}_{4}(T)$ and ${\rm vol}_{3}(\partial T)$ in order to avoid complicated notation.
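To make this definition concrete, $\delta$ can be evaluated directly from vertex coordinates: the 4-volume of a pentatope follows from a determinant, the 3-volumes of its tetrahedral facets from Gram determinants, and $h(K)$ is the largest pairwise vertex distance. The following Python sketch (an illustration we add here, not part of the formal development) computes $\delta$ for the simplex spanned by the origin and the four standard basis vectors of $\mathbb{R}^4$, for which $\delta = 3\sqrt{2} \approx 4.2426$.

```python
import itertools
import numpy as np

def simplex_vol4(verts):
    """4-volume of a 4-simplex: |det of edge vectors| / 4!."""
    M = np.array(verts[1:]) - np.array(verts[0])
    return abs(np.linalg.det(M)) / 24.0

def facet_vol3(verts):
    """3-volume of a tetrahedron embedded in R^4 via the Gram determinant."""
    M = (np.array(verts[1:]) - np.array(verts[0])).T  # 4 x 3 matrix of edges
    return np.sqrt(np.linalg.det(M.T @ M)) / 6.0

def degeneracy(verts):
    """delta(K) = h(K) * vol(boundary of K) / (8 * vol(K)) for a 4-simplex K."""
    verts = [np.asarray(v, dtype=float) for v in verts]
    h = max(np.linalg.norm(u - v) for u, v in itertools.combinations(verts, 2))
    boundary = sum(facet_vol3([v for j, v in enumerate(verts) if j != i])
                   for i in range(5))
    return h * boundary / (8.0 * simplex_vol4(verts))

# sample: simplex spanned by the origin and the standard basis vectors of R^4
corner = [np.zeros(4)] + [np.eye(4)[i] for i in range(4)]
delta = degeneracy(corner)
print(delta)  # 3*sqrt(2) ~ 4.2426
```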
\subsection{Reference elements}
In this section, we introduce a set of convenient reference elements in $\mathbb{R}^4$.
We review the important properties of these elements, including their vertex locations, integration limits, and orthonormal polynomial bases.
In order to prepare our discussion of the reference elements, let us briefly review the definition of an orthonormal basis. In particular, such a basis satisfies the following key property
\begin{equation*}
\int_{\Omega} \psi_{ijkq} \left( \boldsymbol{x} \right) \psi_{rstv} \left( \boldsymbol{x} \right) d \boldsymbol{x} = \delta_{ir} \delta_{js} \delta_{kt} \delta_{qv},
\end{equation*}
where $\delta_{ir}$ is the Kronecker delta.
The orthonormal basis functions of degree $p$ have the form
\begin{equation*}
\psi_{ijkq} \left( \boldsymbol{x} \right) = \zeta_{ijkq} \, \hat{P}_i^{(\alpha_1 ,\beta_1)} \left( a \right) \hat{P}_j^{(\alpha_2 ,\beta_2)} \left( b \right) \hat{P}_k^{(\alpha_3 ,\beta_3)} \left( c \right) \hat{P}_q^{(\alpha_4 ,\beta_4)} \left( d \right) f\left( \boldsymbol{x} \right),
\end{equation*}
where $i + j +k +q \leq p$, $a = a\left(x_1, x_4\right)$, $b = b\left(x_2, x_4\right)$, $c = c\left(x_3, x_4\right)$, $d = d\left(x_4\right)$, and $f = f\left( \boldsymbol{x}\right)$ are functions depending on the element type, $\zeta_{ijkq}$, $\alpha_1 ,\ldots, \alpha_4$ and $\beta_1, \ldots, \beta_4$ are constants, and $\hat{P}_n^{(\alpha ,\beta)}$ are the 1D orthonormal Jacobi polynomials defined as
\begin{equation*}
\hat{P}_n^{(\alpha,\beta)} \left( x_1 \right) = \frac{P_n^{(\alpha,\beta)} (x_1)}{\sqrt{\frac{2^{\alpha+\beta+1}}{2n + \alpha + \beta + 1}\frac{ \left( n + \alpha \right)! \left( n + \beta \right)! }{ n! \left( n + \alpha + \beta \right)!}}} .
\end{equation*}
Here, the functions $P_n^{(\alpha,\beta)}$ are the well-known orthogonal Jacobi polynomials, which themselves are \emph{not} orthonormal. The scale factor under the square root operator provides the desired normalization.
It is convenient to omit the Jacobi polynomial superscripts when $\alpha = \beta = 0$. Consequently, we frequently write $\hat{P}_{n}$ in place of $\hat{P}_{n}^{\left(0,0\right)}$.
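The normalization can be checked numerically: Gauss--Jacobi quadrature integrates polynomials against the weight $(1-x)^{\alpha}(1+x)^{\beta}$ exactly, so the Gram matrix of the first few $\hat{P}_n^{(\alpha,\beta)}$ must reduce to the identity. A minimal Python sketch of this check, using \texttt{scipy.special} (our illustration, not part of the original development):

```python
import numpy as np
from math import gamma, sqrt
from scipy.special import eval_jacobi, roots_jacobi

def phat(n, alpha, beta, x):
    """Orthonormal Jacobi polynomial; gamma() generalizes the factorials."""
    norm2 = (2.0**(alpha + beta + 1) / (2*n + alpha + beta + 1)
             * gamma(n + alpha + 1) * gamma(n + beta + 1)
             / (gamma(n + 1) * gamma(n + alpha + beta + 1)))
    return eval_jacobi(n, alpha, beta, x) / sqrt(norm2)

# Gauss-Jacobi quadrature integrates against (1-x)^alpha (1+x)^beta exactly
alpha, beta = 3, 0
x, w = roots_jacobi(10, alpha, beta)
gram = np.array([[np.sum(w * phat(m, alpha, beta, x) * phat(n, alpha, beta, x))
                  for n in range(5)] for m in range(5)])
print(np.allclose(gram, np.eye(5)))  # True
```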
Now, let us turn our attention to the definitions of the reference elements.
\subsubsection{Tesseract}
Consider the reference tesseract $T^{\ast}$ centered at the origin, having edge length $l[T^{\ast}] = 2$.
The vertices are defined such that
\begin{align*}
T^{\ast} = \Big[ &t_1\left(-1, -1, -1, -1\right), \; t_2\left(-1, 1, -1, -1\right), \; t_3\left(-1, -1, -1, 1\right), \; t_4\left(1, -1, -1, -1\right), \\
&t_5\left(-1, 1, -1, 1\right), \; t_6\left(1, -1, -1, 1\right), \;
t_7\left(1, 1, -1, -1\right), \;
t_8\left(1, 1, -1, 1\right), \\
&t_9\left(-1, -1, 1, 1\right), \;
t_{10}\left(-1, 1, 1, 1\right), \;
t_{11}\left(-1, -1, 1, -1\right), \;
t_{12}\left(1, -1, 1, 1\right), \\
&t_{13}\left(-1, 1, 1, -1\right), \;
t_{14}\left(1, -1, 1, -1\right), \;
t_{15}\left(1, 1, 1, 1\right), \;
t_{16}\left(1, 1, 1, -1\right) \Big],
\end{align*}
as shown in Figure~\ref{tesseract}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm]{Figs/tesseract.eps}
\end{center}
\caption{The reference tesseract $T^{\ast}$.}
\label{tesseract}
\end{figure}
The volume is $\text{vol}\left( T^{\ast} \right) = 16$ and it is straightforward to define bounds of integration
\begin{equation*}
\int_{-1}^1 \int_{-1}^1 \int_{-1}^1 \int_{-1}^1 dx_1 dx_2 dx_3 dx_4 = 16.
\end{equation*}
Trivially, the orthonormal basis inside of the reference tesseract is given by
\begin{equation*}
\psi_{ijkq} \left( \boldsymbol{x} \right) = \hat{P}_i \left( a \right) \hat{P}_j \left( b \right) \hat{P}_k \left( c \right) \hat{P}_q \left( d \right),
\end{equation*}
where $a = x_1$, $b = x_2$, $c = x_3$, and $d = x_4$.
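The tensor-product structure makes a direct numerical verification straightforward: with a tensor Gauss--Legendre rule on $[-1,1]^4$, the Gram matrix of the basis functions up to a fixed total degree is the identity. A brief Python check (illustrative only):

```python
import numpy as np

def phat(n, x):
    """Orthonormal Legendre polynomial on [-1,1]: sqrt((2n+1)/2) * P_n(x)."""
    c = np.zeros(n + 1); c[n] = 1.0
    return np.sqrt((2*n + 1) / 2.0) * np.polynomial.legendre.legval(x, c)

g, w = np.polynomial.legendre.leggauss(6)
# tensor-product nodes and weights over the reference tesseract [-1,1]^4
X = np.stack(np.meshgrid(g, g, g, g, indexing="ij"), axis=-1).reshape(-1, 4)
W = np.stack(np.meshgrid(w, w, w, w, indexing="ij"), axis=-1).reshape(-1, 4).prod(axis=1)

modes = [(i, j, k, q) for i in range(3) for j in range(3)
         for k in range(3) for q in range(3) if i + j + k + q <= 2]
vals = np.array([phat(i, X[:, 0]) * phat(j, X[:, 1])
                 * phat(k, X[:, 2]) * phat(q, X[:, 3])
                 for (i, j, k, q) in modes])
gram = (vals * W) @ vals.T
print(np.allclose(gram, np.eye(len(modes))))  # True
```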
\subsubsection{Cubic pyramid}
Consider the reference cubic pyramid
$K^{\ast}$ centered at the origin, having edge length $l[K^{\ast}] = 2$.
The vertices are defined such that
\begin{align*}
K^{\ast} = \Big[ &k_1\left(-1, -1, -1, -1\right), \; k_2\left(-1, 1, -1, -1\right), \; k_3\left(-1, -1, 1, -1\right), \; k_4\left(1, -1, -1, -1\right), \\
& k_5\left(-1, 1, 1, -1\right), \; k_6\left(1, -1, 1, -1\right), \;
k_7\left(1, 1, -1, -1\right), \;
k_8\left(1, 1, 1, -1\right), \\
&k_9\left(0, 0, 0, 0\right) \Big],
\end{align*}
as shown in Figure~\ref{cubic_pyramid}.
\begin{figure}[h!]
\begin{center}
\includegraphics[trim = 0cm 2.5cm 0cm 2cm, clip,width=10cm]{Figs/cubic_pyramid.eps}
\end{center}
\caption{The reference cubic pyramid $K^{\ast}$.}
\label{cubic_pyramid}
\end{figure}
The volume of the cubic pyramid is $1/8$th that of the tesseract; equivalently, $\text{vol}(K^{\ast}) = \frac{A h}{4} = 2$, where $A = 8$ is the (three-dimensional) volume of the hexahedral base and $h = 1$ is the height of the cubic pyramid.
Based on the vertices of $K^{\ast}$, the integration bounds for the cubic pyramid are
\begin{equation*}
\int_{-1}^{0} \int_{x_4}^{-x_4} \int_{x_4}^{-x_4} \int_{x_4}^{-x_4} \,\mathrm{d}x_1 \, \mathrm{d}x_2\, \mathrm{d}x_3 \, \mathrm{d}x_4 = 2.
\end{equation*}
The orthonormal polynomial basis for the cubic pyramid is found to be
\begin{equation}
\label{eq:cubpb}
\psi_{ijkq} \left( \boldsymbol{x} \right) = \sqrt{2^{\mu_{ijk}+1}} \hat{P}_i \left( a \right) \hat{P}_j \left( b \right) \hat{P}_k \left( c \right) \hat{P}_q^{(\mu_{ijk},0)} \left( d \right) \left(-x_4 \right)^{i+j+k},
\end{equation}
where $a = -\frac{x_1}{x_4}$, $b=-\frac{x_2}{x_4}$, $c = -\frac{x_3}{x_4}$, $d=2x_4 +1$, and $\mu_{ijk} = 2i + 2j + 2k + 3$.
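Equation~\eqref{eq:cubpb} can be verified numerically by collapsing the pyramid onto a tensor-product domain: substituting $x_m = -x_4 u_m$, $m = 1,2,3$, with $u_m \in [-1,1]$ and $x_4 \in [-1,0]$, introduces the Jacobian $(-x_4)^3$. The following Python sketch of this check is our illustration (not part of the original development):

```python
import numpy as np
from math import sqrt
from scipy.special import eval_jacobi

def phat(n, alpha, x):
    """Orthonormal Jacobi polynomial with beta = 0."""
    return eval_jacobi(n, alpha, 0, x) / sqrt(2.0**(alpha + 1) / (2*n + alpha + 1))

def psi(i, j, k, q, x1, x2, x3, x4):
    """Basis function on the reference cubic pyramid."""
    mu = 2*i + 2*j + 2*k + 3
    a, b, c, d = -x1/x4, -x2/x4, -x3/x4, 2*x4 + 1
    return (sqrt(2.0**(mu + 1)) * phat(i, 0, a) * phat(j, 0, b)
            * phat(k, 0, c) * phat(q, mu, d) * (-x4)**(i + j + k))

# collapsed coordinates: x_m = -x4 * u_m (m = 1,2,3), x4 in (-1,0);
# the collapse map has Jacobian (-x4)^3
g, w = np.polynomial.legendre.leggauss(8)
U = np.stack(np.meshgrid(g, g, g, g, indexing="ij"), -1).reshape(-1, 4)
W = np.stack(np.meshgrid(w, w, w, w, indexing="ij"), -1).reshape(-1, 4).prod(1)
x4 = (U[:, 3] - 1) / 2                      # map [-1,1] -> [-1,0], dx4 = du/2
x1, x2, x3 = -x4*U[:, 0], -x4*U[:, 1], -x4*U[:, 2]
J = (-x4)**3 / 2

modes = [(i, j, k, q) for i in range(2) for j in range(2)
         for k in range(2) for q in range(2) if i + j + k + q <= 2]
vals = np.array([psi(i, j, k, q, x1, x2, x3, x4) for (i, j, k, q) in modes])
gram = (vals * (W * J)) @ vals.T
print(np.allclose(gram, np.eye(len(modes))))  # True
```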
\subsubsection{Bipentatope}
Consider the reference bipentatope
$P^{\ast}$ having edge length $l[P^{\ast}] = 1$.
This element is also frequently called the tetrahedral bipyramid.
The vertices are defined such that
\begin{align*}
P^{\ast} = \Big[ &p_1\left(\tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, -\tfrac{1}{2}\right), \; p_2\left(\tfrac{1}{2}, \tfrac{1}{2}, -\tfrac{1}{2}, -\tfrac{1}{2}\right), \; p_3\left(-\tfrac{1}{2}, \tfrac{1}{2}, -\tfrac{1}{2}, -\tfrac{1}{2}\right), \; p_4\left(-\tfrac{1}{2}, \tfrac{1}{2}, \tfrac{1}{2}, -\tfrac{1}{2}\right), \\
&p_5\left(0, 1, 0, -1\right), \; p_6\left(0, 0, 0, -1\right) \Big],
\end{align*}
as shown in Figure~\ref{bipentatope}.
\begin{figure}[h!]
\begin{center}
\includegraphics[trim = 0cm 0.5cm 0cm 0.5cm, clip,width=10cm]{Figs/bipentatope.eps}
\end{center}
\caption{The reference bipentatope $P^{\ast}$.}
\label{bipentatope}
\end{figure}
The reference bipentatope volume is $\text{vol}\left(P^{\ast}\right) = \tfrac{1}{24}$.
In order to facilitate interpolation and integration, the reference bipentatope can be divided into two pentatopes, as shown in Figure~\ref{split_bipentatope}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=7cm]{Figs/pentatopes_inside_bipentatope.eps}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=6cm]{Figs/pentatope_1.eps}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=6cm]{Figs/pentatope_2.eps}
\end{subfigure}
\end{center}
\caption{A subdivision of the reference bipentatope into two pentatopes.}\label{split_bipentatope}
\end{figure}
There exists a convenient mapping from these pentatopes onto the standard pentatope $S^{\ast}$ shown in Figure~\ref{standard_pentatope}.
\begin{figure}[h!]
\begin{center}
\includegraphics[trim = 0cm 0.5cm 0cm 0.5cm, clip,width=10cm]{Figs/standard_pentatope.eps}
\end{center}
\caption{The standard pentatope $S^{\ast}$.}
\label{standard_pentatope}
\end{figure}
We note that $S^{\ast}$ has the following vertices
\begin{align*}
S^{\ast} = &\Big[s_1 \left(1,-1,-1,-1\right), s_2 \left(-1,1,-1,-1\right), s_3\left(-1,-1,1,-1\right), s_4 \left(-1,-1,-1,1\right), \\
&s_5\left(-1,-1,-1,-1\right) \Big].
\end{align*}
The standard pentatope volume is $\text{vol} \left(S^{\ast}\right) = \tfrac{2}{3}$ and the bounds of integration are
\begin{equation*}
\int_{-1}^1 \int_{-1}^{-x_4} \int_{-1}^{-1-x_3 - x_4} \int_{-1}^{-2-x_2 - x_3 -x_4 } dx_1 dx_2 dx_3 dx_4 = \frac{2}{3}.
\end{equation*}
The orthonormal polynomial basis takes the following form
\begin{align}
\nonumber \psi_{ijkq} \left( \boldsymbol{x} \right) = &8 \hat{P}_i \left( a \right) \hat{P}_j^{(2i+1,0)} \left( b \right) \hat{P}_k^{(2i+2j+2,0)} \left( c \right) \hat{P}_q^{(2i+2j+2k+3,0)} \left( d \right) \\
&\times \left( 1-b \right)^{i} \left( 1-c \right)^{i + j} \left( 1-d \right)^{i + j +k}, \label{pent_basis}
\end{align}
where $a = -2\frac{x_1 +1}{x_2 +x_3 +x_4 +1} - 1$, $b=-2\frac{1+x_2}{x_3 +x_4} - 1$, $c = 2\frac{1+x_3}{1-x_4} - 1$, and $d=x_4$.
Lastly, we consider the following rescaled and shifted version of the standard pentatope
\begin{align*}
\mathcal{S} ^{\ast} = \left[
s_1^*(1,0,0,0),s_2^*(0,1,0,0),s_3^*(0,0,1,0),
s_4^*(0,0,0,1),s_5^*(0,0,0,0)\right],
\end{align*}
where by inspection
\begin{align*}
\mathcal{S}^{\ast} = \tfrac{1}{2} S^{\ast} + \left(1,1,1,1\right).
\end{align*}
This reference element has been used by many authors~\cite{brandts2007simplicial,petrov2018stable} due to its mathematical simplicity and convenience. It is frequently referred to as the `cube corner'. Its volume is $\text{vol}\left( \mathcal{S}^{\ast} \right) =\tfrac{1}{24}$ and the bounds of integration are
\begin{equation*}
\int_{0}^1 \int_{0}^{1-x_4} \int_{0}^{1-x_3 - x_4} \int_{0}^{1-x_2 - x_3 -x_4} dx_1 dx_2 dx_3 dx_4 = \frac{1}{24}.
\end{equation*}
The orthonormal polynomial basis on the cube corner can be obtained by a simple linear transformation of the basis in Eq.~\eqref{pent_basis}.
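Before moving on, the volumes quoted in this section can be cross-checked directly from the vertex coordinates, using the determinant formula $\mathrm{vol}(S) = |\det(s_2 - s_1, \ldots, s_5 - s_1)|/4!$ for a pentatope. In the Python sketch below (our illustration), the bipentatope $P^{\ast}$ is split into the two pentatopes $[p_1,p_2,p_3,p_5,p_6]$ and $[p_1,p_3,p_4,p_5,p_6]$, which is one admissible choice of the subdivision in Figure~\ref{split_bipentatope}:

```python
import numpy as np

def simplex_vol4(*verts):
    """4-volume of a pentatope from its five vertices."""
    V = np.array(verts, dtype=float)
    return abs(np.linalg.det(V[1:] - V[0])) / 24.0

# standard pentatope S*
s = [(1,-1,-1,-1), (-1,1,-1,-1), (-1,-1,1,-1), (-1,-1,-1,1), (-1,-1,-1,-1)]
vol_S = simplex_vol4(*s)                    # 2/3

# cube corner (rescaled and shifted standard pentatope)
sc = [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1), (0,0,0,0)]
vol_Sc = simplex_vol4(*sc)                  # 1/24

# reference bipentatope P*, split into two pentatopes sharing a tetrahedron
p1, p2, p3, p4 = (.5,.5,.5,-.5), (.5,.5,-.5,-.5), (-.5,.5,-.5,-.5), (-.5,.5,.5,-.5)
p5, p6 = (0,1,0,-1), (0,0,0,-1)
vol_P = simplex_vol4(p1, p2, p3, p5, p6) + simplex_vol4(p1, p3, p4, p5, p6)  # 1/24

print(vol_S, vol_Sc, vol_P)
```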
\pagebreak
\clearpage
\section{Hybrid meshes of tesseracts, cubic pyramids, and bipentatopes}
In this section, we introduce a set of hybrid meshes by constructing the following domain model.
Let $\Omega$ be a four-dimensional canonical domain. The domain $\Omega$ is partitioned into two subdomains $\Omega_1$ and $\Omega_2$, which are also canonical.
We suppose that $\check{\tau}_0$ and $\hat{\tau}_0$ are tesseract triangulations of $\Omega_1$ and $\Omega_2$ such that all elements of both triangulations form a conforming tesseract triangulation $\tau_0$ of the domain $\Omega$.
We define the boundary layers in the subdomains $\Omega_1$ and $\Omega_2$ by
\begin{align*}
B_1&=\{T\in\check{\tau}_0 \ | \
\dim\left(\partial T\cap\Omega_{12}\right)=3\}, \\[1.5ex]
B_2&=\{T\in\hat{\tau}_0 \ | \
\dim\left(\partial T\cap\Omega_{12}\right)=3\},\quad
\Omega_{12}=\Omega_{1}\cap\Omega_{2}.
\end{align*}
Furthermore, we construct a hybrid mesh so that $\Omega_1$ is partitioned by tesseract elements and $\Omega_2$ is partitioned by cubic pyramid elements.
Our main goal is to establish a conforming coupling between the different kinds of elements in the boundary layers of both subdomains.
Additionally, we need a refinement strategy for both groups of elements that uses as few congruence classes as possible and keeps the degeneracy measure of the pyramidal elements as small as possible.
We restrict ourselves to refinement strategies dividing the edges of each element into two parts. We anticipate that such strategies will be very useful in multigrid and/or adaptive-quadrature procedures.
We will now define a refinement algorithm for creating nested hierarchical hybrid triangulations in five steps:
\begin{itemize}
\item[$\left(r_1\right)$] The tesseract triangulations of $\Omega$, denoted by $\check{\tau}_0$, $\hat{\tau}_0$ and $\tau_0=\check{\tau}_0\cup\hat{\tau}_0$ are constructed. We denote this partition operator by $\mathcal{R}$.
\item[$\left(r_2\right)$] Each element $T$ of $\check{\tau}_{0}$ is uniformly divided into sixteen smaller tesseracts. We denote this partition operator by $\mathcal{E}$. Evidently, this operator's definition is straightforward, and does not need to be explicitly stated.
\item[$\left(r_3\right)$] Each element $T$ of $\hat{\tau}_0$ is divided into eight cubic pyramids by connecting the cubic facets of each element to its centroid $t_0$.
We denote this partition operator by $\mathcal{B}$. It is given explicitly by
%
\begin{align*}
\mathcal{B} T^{\ast}
= \Big\{ &K_{1} [t_{1},t_{2},t_{3},t_{4},t_{5},t_{6},t_{7},t_{8},t_{0}],\
K_{2} [t_{1},t_{3},t_{4},t_{6},t_{9},t_{11},t_{12},t_{14},t_{0}], \\
&K_{3}[t_{9},t_{10},t_{11},t_{12},t_{13},t_{14},t_{15},t_{16},t_{0}],\
K_{4} [t_{2},t_{5},t_{7},t_{8},t_{10},t_{13},t_{15},t_{16},t_{0}], \\
&K_{5}[t_{4},t_{6},t_{7},t_{8},t_{12},t_{14},t_{15},t_{16},t_{0}], \
K_{6} [t_{1},t_{2},t_{3},t_{5},t_{9},t_{10},t_{11},t_{13},t_{0}], \\
&K_{7}[t_{1},t_{2},t_{4},t_{7},t_{11},t_{13},t_{14},t_{16},t_{0}], \
K_{8} [t_{3},t_{5},t_{6},t_{8},t_{9},t_{10},t_{12},t_{15},t_{0}]
\Big\}.
\end{align*}
%
One of the resulting cubic pyramids, $K_7$, is shown in Figure~\ref{cubic_pyramid_in_tesseract}. Note that $K_7$ is congruent to $K^{\ast}$; that is, it is a canonical pyramid.
\item[$\left(r_4\right)$] The elements of $\mathcal{B} \hat{\tau}_0$ are further subdivided by the partition operator $\mathcal{L}$, which is defined below.
\item[$\left(r_5\right)$] Steps 2--4 are performed repeatedly until the desired level of mesh refinement is obtained. These refinement steps are denoted by the operator $\mathcal{H}$.
\end{itemize}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=9cm]{Figs/pyramid_inside_tesseract.eps}
\end{center}
\caption{The cubic pyramid $K_7$ belonging to the decomposition $\mathcal{B}T^{\ast}$.}
\label{cubic_pyramid_in_tesseract}
\end{figure}
Now, let us turn our attention to formulating an explicit definition for the partition operator $\mathcal{L}$
\begin{align*}
\mathcal{L} X=
\begin{cases}
\mathcal{D} X, & \mbox{if } X \;\mbox{is a cubic pyramid}\\
\mathcal{M} X, & \mbox{if } X \; \mbox{is a bipentatope.}
\end{cases}
\end{align*}
It remains for us to define the operators $\mathcal{D}$ and $\mathcal{M}$. The operator $\mathcal{D}$ can be defined as follows
\begin{align*}
\mathcal{D} K^*= \bigg\{\Big\{ &\mathbb{K}_1[k_{1},k_{10},k_{11},k_{12},k_{13},k_{14},k_{15},k_{16},k_{21}], \; \mathbb{K}_2[k_{2},k_{10},k_{13},k_{14},k_{15},k_{17},k_{18},k_{19},k_{20}], \\
&\mathbb{K}_3[k_{4},k_{12},k_{14},k_{15},k_{21},k_{26},k_{27},k_{28},k_{29}], \; \mathbb{K}_4[k_{7},k_{14},k_{15},k_{18},k_{19},k_{27},k_{28},k_{34},k_{35}], \\
& \mathbb{K}_5[k_{3},k_{11},k_{13},k_{15},k_{21},k_{22},k_{23},k_{24},k_{25}], \; \mathbb{K}_6[k_{5},k_{13},k_{15},k_{17},k_{19},k_{22},k_{24},k_{30},k_{31}], \\
& \mathbb{K}_7[k_{6},k_{15},k_{21},k_{23},k_{24},k_{26},k_{28},k_{32},k_{33}], \; \mathbb{K}_8[k_{8},k_{15},k_{19},k_{24},k_{28},k_{30},k_{32},k_{34},k_{36}], \\
& \mathbb{K}_9[k_{9},k_{16},k_{20},k_{25},k_{29},k_{31},k_{33},k_{35},k_{36}], \; \mathbb{K}_{10}[k_{15},k_{16},k_{20},k_{25},k_{29},k_{31},k_{33},k_{35},k_{36}] \Big\}, \\[-0.5ex]
\Big\{ &P_{1}[k_{15},k_{21},k_{23},k_{24},k_{25},k_{33}], \; P_{2}[k_{11},k_{13},k_{15},k_{16},k_{21},k_{25}], \; P_{3}[k_{12},k_{14},k_{15},k_{16},k_{21},k_{29}], \\
& P_{4}[k_{15},k_{21},k_{26},k_{28},k_{29},k_{33}], \;
P_{5}[k_{15},k_{24},k_{28},k_{32},k_{33},k_{36}], \; P_{6}[k_{14},k_{15},k_{27},k_{28},k_{29},k_{35}], \\
& P_{7}[k_{15},k_{19},k_{28},k_{34},k_{35},k_{36}],\; P_{8}[k_{13},k_{15},k_{17},k_{19},k_{20},k_{31}], \;
P_{9}[k_{15},k_{19},k_{24},k_{30},k_{31},k_{36}], \\
& P_{10}[k_{14},k_{15},k_{18},k_{19},k_{20},k_{35}], \;
P_{11}[k_{13},k_{15},k_{22},k_{24},k_{25},k_{31}], \; P_{12}[k_{10},k_{13},k_{14},k_{15},k_{16},k_{20}], \\
& P_{13}[k_{15},k_{19},k_{20},k_{31},k_{35},k_{36}],\; P_{14}[k_{15},k_{28},k_{29},k_{33},k_{35},k_{36}], \;
P_{15}[k_{15},k_{16},k_{21},k_{25},k_{29},k_{33}], \\[-1.2ex]
& P_{16}[k_{13},k_{15},k_{16},k_{20},k_{25},k_{31}], \;
P_{17}[k_{15},k_{24},k_{25},k_{31},k_{33},k_{36}],\; P_{18}[k_{14},k_{15},k_{16},k_{20},k_{29},k_{35}]\Big\} \bigg\}.
\end{align*}
All elements $\mathbb{K}_i$, $i=1,2,\ldots,10$, are cubic pyramids
and all elements $P_j$, $j=1,2,\ldots,18$, are bipentatopes.
The first nine vertices $k_1, k_2, \ldots, k_9$ were defined above in the definition of $K^{\ast}$.
In addition, the remaining vertices $k_{10}, \ldots, k_{36}$ can be defined as follows
\begin{alignat*}{4}
&k_{10} = \tfrac{1}{2}(k_1+k_2), \qquad &&k_{11} = \tfrac{1}{2}(k_1+k_3), \qquad &&k_{12} = \tfrac{1}{2}(k_1+k_4), \qquad &&k_{13} = \tfrac{1}{2}(k_{1}+k_{5}), \\
&k_{14} = \tfrac{1}{2}(k_{1}+k_{7}), \; &&k_{15} = \tfrac{1}{2}(k_{1}+k_{8}), \; &&k_{16} = \tfrac{1}{2}(k_1+k_9), \; &&k_{17} = \tfrac{1}{2}(k_2+k_5), \\
& k_{18} = \tfrac{1}{2}(k_2+k_7), \; &&k_{19} = \tfrac{1}{2}(k_{2}+k_{8}), \; &&k_{20} = \tfrac{1}{2}(k_2+k_9), \; &&k_{21} = \tfrac{1}{2}(k_{3}+k_{4}), \\
& k_{22} = \tfrac{1}{2}(k_3+k_5), \; &&k_{23} = \tfrac{1}{2}(k_3+k_6), \; &&k_{24} = \tfrac{1}{2}(k_{3}+k_{8}), \; &&k_{25} = \tfrac{1}{2}(k_3+k_9), \\
& k_{26} = \tfrac{1}{2}(k_4+k_6), \; &&k_{27} = \tfrac{1}{2}(k_4+k_7), \; &&k_{28} = \tfrac{1}{2}(k_{4}+k_{8}), \; &&k_{29} = \tfrac{1}{2}(k_4+k_9), \\
& k_{30} = \tfrac{1}{2}(k_5+k_8), \; &&k_{31} = \tfrac{1}{2}(k_5+k_9), \; &&k_{32} = \tfrac{1}{2}(k_6+k_8), \; &&k_{33} = \tfrac{1}{2}(k_6+k_9), \\
& k_{34} = \tfrac{1}{2}(k_7+k_8), \; &&k_{35} = \tfrac{1}{2}(k_7+k_9), \; &&k_{36} = \tfrac{1}{2}(k_8+k_9).
\end{alignat*}
The full set of vertices is illustrated in Figure~\ref{cubic_pyramid_with_vertices}.
In addition, a few of the individual elements of $\mathcal{D} K^{\ast}$ are shown in Figure~\ref{sample_elements_cubic_pyramid}.
Finally, several couplings between the cubic pyramid and bipentatope elements are shown in Figure~\ref{coupled_elements_cubic_pyramid}.
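The refinement data above can be checked mechanically. In the Python sketch below (our illustration, anticipating the theoretical results of the next section), each of the ten sub-pyramids of $\mathcal{D}K^{\ast}$ is compared against the half-scale reference pyramid $\frac{1}{2}K^{\ast}$ through its sorted multiset of pairwise vertex distances, a convenient congruence fingerprint for these elements:

```python
import numpy as np
from itertools import combinations

# vertices k1..k9 of the reference cubic pyramid, then the edge midpoints
k = {1: (-1,-1,-1,-1), 2: (-1,1,-1,-1), 3: (-1,-1,1,-1), 4: (1,-1,-1,-1),
     5: (-1,1,1,-1), 6: (1,-1,1,-1), 7: (1,1,-1,-1), 8: (1,1,1,-1),
     9: (0,0,0,0)}
mids = [(1,2),(1,3),(1,4),(1,5),(1,7),(1,8),(1,9),(2,5),(2,7),(2,8),(2,9),
        (3,4),(3,5),(3,6),(3,8),(3,9),(4,6),(4,7),(4,8),(4,9),
        (5,8),(5,9),(6,8),(6,9),(7,8),(7,9),(8,9)]
for idx, (a, b) in enumerate(mids, start=10):
    k[idx] = tuple((np.array(k[a]) + np.array(k[b])) / 2)

pyramids = [
    (1,10,11,12,13,14,15,16,21), (2,10,13,14,15,17,18,19,20),
    (4,12,14,15,21,26,27,28,29), (7,14,15,18,19,27,28,34,35),
    (3,11,13,15,21,22,23,24,25), (5,13,15,17,19,22,24,30,31),
    (6,15,21,23,24,26,28,32,33), (8,15,19,24,28,30,32,34,36),
    (9,16,20,25,29,31,33,35,36), (15,16,20,25,29,31,33,35,36)]

def dist_multiset(pts):
    return np.sort([np.linalg.norm(np.subtract(u, v))
                    for u, v in combinations(pts, 2)])

ref = dist_multiset([0.5 * np.array(k[i]) for i in range(1, 10)])  # K*/2
ok = all(np.allclose(dist_multiset([k[i] for i in ids]), ref) for ids in pyramids)
print(ok)  # True
```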
\begin{figure}[h!]
\begin{center}
\includegraphics[width=9cm]{Figs/cubic_pyramid_with_vertices.eps}
\end{center}
\caption{The cubic pyramid $K^{\ast}$ and the full set of vertices for the subdivision operator $\mathcal{D}K^{\ast}$.}
\label{cubic_pyramid_with_vertices}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=9cm]{Figs/pyramid_inside_pyramid_1.eps}
\includegraphics[width=9cm]{Figs/bipentatope_inside_pyramid.eps}
\end{center}
\caption{The cubic pyramid element $\mathbb{K}_1$ and the bipentatope element $P_{13}$ of the decomposition $\mathcal{D} K^{\ast}$.}
\label{sample_elements_cubic_pyramid}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=9cm]{Figs/pyramid_and_bipent_inside_pyramid_1.eps}
\includegraphics[width=9cm]{Figs/pyramid_and_bipent_inside_pyramid_2.eps}
\end{center}
\caption{Couplings between cubic pyramid and bipentatope elements: $\mathbb{K}_{10}$ and $P_{14}$ (top) and $\mathbb{K}_2$ and $P_{10}$ (bottom).} \label{coupled_elements_cubic_pyramid}
\end{figure}
Next, the operator $\mathcal{M}$ can be defined as follows
\begin{align*}
\mathcal{M} P^*=\bigg\{ &\mathbb{P}_1[p_{5},p_{10},p_{13},p_{16},p_{18},p_{20}],\; \mathbb{P}_2[p_{6},p_{11},p_{14},p_{17},p_{19},p_{20}],\; \mathbb{P}_3[p_{8},p_{10},p_{13},p_{16},p_{18},p_{20}], \\
&\mathbb{P}_4[p_{8},p_{11},p_{14},p_{17},p_{19},p_{20}], \;
\mathbb{P}_5[p_{8},p_{16},p_{17},p_{18},p_{19},p_{20}],\;
\mathbb{P}_6[p_{8},p_{13},p_{14},p_{16},p_{17},p_{20}], \\
&\mathbb{P}_7[p_{8},p_{10},p_{11},p_{18},p_{19},p_{20}], \;
\mathbb{P}_8[p_{8},p_{10},p_{11},p_{13},p_{14},p_{20}], \;
\mathbb{P}_9[p_{3},p_{8},p_{12},p_{15},p_{16},p_{17}],\\ &\mathbb{P}_{10}[p_{4},p_{8},p_{9},p_{15},p_{18},p_{19}], \;
\mathbb{P}_{11}[p_{1},p_{7},p_{8},p_{9},p_{10},p_{11}], \;
\mathbb{P}_{12}[p_{2},p_{7},p_{8},p_{12},p_{13},p_{14}], \\
&\mathbb{P}_{13}[p_{8},p_{15},p_{16},p_{17},p_{18},p_{19}], \;
\mathbb{P}_{14}[p_{8},p_{12},p_{13},p_{14},p_{16},p_{17}], \;
\mathbb{P}_{15}[p_{7},p_{8},p_{10},p_{11},p_{13},p_{14}],\\
& \mathbb{P}_{16}[p_{8},p_{9},p_{10},p_{11},p_{18},p_{19}] \bigg\}.
\end{align*}
All the elements $\mathbb{P}_{j}$, $j = 1, 2, \ldots, 16$, are bipentatopes.
The first six vertices $p_1, p_2, \ldots, p_6$ were defined above in the definition of $P^{\ast}$.
In addition, the remaining vertices $p_7, \ldots, p_{20}$ can be defined as follows
\begin{alignat*}{4}
&p_7 = \tfrac{1}{2}(p_1+p_2), \qquad &&p_8 = \tfrac{1}{2}(p_1+p_{3}), \qquad &&p_9 = \tfrac{1}{2}(p_1+p_4), \qquad &&p_{10} = \tfrac{1}{2}(p_1+p_5), \\
&p_{11} = \tfrac{1}{2}(p_1+p_6), \;
&&p_{12} = \tfrac{1}{2}(p_2+p_3), \;
&&p_{13} = \tfrac{1}{2}(p_2+p_5), \;
&&p_{14} = \tfrac{1}{2}(p_2+p_6), \\
&p_{15} = \tfrac{1}{2}(p_3+p_4), \;
&&p_{16} = \tfrac{1}{2}(p_3+p_5), \;
&&p_{17} = \tfrac{1}{2}(p_3+p_6), \;
&&p_{18} = \tfrac{1}{2}(p_4+p_5), \\
&p_{19} = \tfrac{1}{2}(p_4+p_6),
&&p_{20} = \tfrac{1}{2}(p_5+p_6).
\end{alignat*}
The full set of vertices is illustrated in Figure~\ref{bipentatope_with_vertices}.
In addition, a few of the individual elements of $\mathcal{M} P^{\ast}$ are shown in Figure~\ref{sample_elements_bipentatope}.
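As with $\mathcal{D}K^{\ast}$, the data for $\mathcal{M}P^{\ast}$ can be checked mechanically. The Python sketch below (our illustration) compares each of the sixteen sub-bipentatopes against the half-scale reference bipentatope through its sorted multiset of pairwise vertex distances:

```python
import numpy as np
from itertools import combinations

# vertices p1..p6 of the reference bipentatope, then the edge midpoints
p = {1: (.5,.5,.5,-.5), 2: (.5,.5,-.5,-.5), 3: (-.5,.5,-.5,-.5),
     4: (-.5,.5,.5,-.5), 5: (0,1,0,-1), 6: (0,0,0,-1)}
mids = [(1,2),(1,3),(1,4),(1,5),(1,6),(2,3),(2,5),(2,6),
        (3,4),(3,5),(3,6),(4,5),(4,6),(5,6)]
for idx, (a, b) in enumerate(mids, start=7):
    p[idx] = tuple((np.array(p[a]) + np.array(p[b])) / 2)

bipentatopes = [
    (5,10,13,16,18,20), (6,11,14,17,19,20), (8,10,13,16,18,20),
    (8,11,14,17,19,20), (8,16,17,18,19,20), (8,13,14,16,17,20),
    (8,10,11,18,19,20), (8,10,11,13,14,20), (3,8,12,15,16,17),
    (4,8,9,15,18,19), (1,7,8,9,10,11), (2,7,8,12,13,14),
    (8,15,16,17,18,19), (8,12,13,14,16,17), (7,8,10,11,13,14),
    (8,9,10,11,18,19)]

def dist_multiset(pts):
    return np.sort([np.linalg.norm(np.subtract(u, v))
                    for u, v in combinations(pts, 2)])

ref = 0.5 * dist_multiset([p[i] for i in range(1, 7)])   # distances of P*/2
ok = all(np.allclose(dist_multiset([p[i] for i in ids]), ref)
         for ids in bipentatopes)
print(ok)  # True
```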
\begin{figure}[h!]
\begin{center}
\includegraphics[trim=0cm 0.75cm 0cm 0cm, clip, width=9cm]{Figs/bipentatope_with_vertices.eps}
\end{center}
\caption{The bipentatope $P^{\ast}$ and the full set of vertices for the subdivision operator $\mathcal{M}P^{\ast}$.}
\label{bipentatope_with_vertices}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=9cm]{Figs/bipentatope_inside_bipentatope_1.eps}
\includegraphics[width=9cm]{Figs/bipentatope_inside_bipentatope_4.eps}
\includegraphics[width=9cm]{Figs/bipentatope_inside_bipentatope_2.eps}
\end{center}
\caption{The elements $\mathbb{P}_{2}$, $\mathbb{P}_{4}$, and $\mathbb{P}_{9}$ of the decomposition $\mathcal{M} P^{\ast}$.}\label{sample_elements_bipentatope}
\end{figure}
We conclude this section by summarizing the refinement procedure for obtaining hierarchical triangulations below:
\begin{align*}
\tau_0 = \mathcal{R}\Omega,\quad \tau_0=\check{\tau}_0\cup\hat{\tau}_0;
\end{align*}
\begin{align*}
\hat{\tau}_0^* = \mathcal{B}\hat{\tau}_0,
\quad \tau_0^*=\check{\tau}_0\cup\hat{\tau}_0^*;
\end{align*}
\begin{align*}
\check{\tau}_1=\mathcal{E}\check{\tau}_0,\;
\hat{\tau}_1=\mathcal{L}\hat{\tau}_0^*,
\quad \tau_1=\check{\tau}_1\cup\hat{\tau}_1;
\end{align*}
\begin{equation}\label{t1}
\check{\tau}_2=\mathcal{E}\check{\tau}_1,\;
\hat{\tau}_2=\mathcal{L}\hat{\tau}_1,
\quad \tau_2=\check{\tau}_2\cup\hat{\tau}_2;
\end{equation}
\begin{align*}
\check{\tau}_{k+1}=\mathcal{E}^{k} \check{\tau}_{1},\;
\hat{\tau}_{k+1}=\mathcal{L}^{k} \hat{\tau}_{1},
\quad \tau_{k+1}=\check{\tau}_{k+1}\cup\hat{\tau}_{k+1}.
\end{align*}
In accordance with ($\ref{t1}$), we can explicitly define the partition operator $\mathcal{H} :\tau_1\rightarrow\tau_2$ as follows
\begin{align*}
\mathcal{H} X=
\begin{cases}
\mathcal{E}X, & \mbox{if } X \;\mbox{is a tesseract}\\
\mathcal{L}X, & \mbox{if } X \; \mbox{is a cubic pyramid or bipentatope.}
\end{cases}
\end{align*}
Then
\begin{align*}
\tau_{k+1}=\mathcal{H} ^k\tau_{1}.
\end{align*}
Thus, we have a conforming sequence of successive hybrid finite element triangulations $\left\{\tau_{k}\right\}$.
\pagebreak
\clearpage
\section{Theoretical results}
In this section, we summarize the important theoretical results that govern the hybrid triangulations introduced in the previous section.
\begin{pro}\label{pro}
The hypercube $T^n$ can be divided into $2n$ hypercubic pyramids $K^n_i$ so that all edges of the $i$-th pyramid have the same length.
\end{pro}
\begin{thm}\label{uni}
The four-dimensional space is the only Euclidean space that satisfies Property~\ref{pro}.
\end{thm}
\begin{proof}
For the sake of simplicity, we suppose that
\begin{align*}
T^n=\left[
t_1^n(-a,-a,\ldots,-a),\ldots,
t_{2^n}^n(a,a,\ldots,a)\right]
\end{align*}
has all edges parallel to the coordinate axes. Then, Property~\ref{pro} is valid iff
\begin{align*}
\frac{1}{2}h[T^n]=l[T^n],
\end{align*}
where $l[T^n]$ is the edge length of $T^n$. The latter equality leads to
\begin{align*}
\frac{1}{2}\sqrt{\sum_{k=1}^n \left(2a\right)^2}= 2a\Rightarrow \sqrt{na^2}=2a,
\end{align*}
which is possible iff $n=4$. \qed
\end{proof}
Theorem~\ref{uni} shows that, in this respect, the four-dimensional space is unique among all Euclidean spaces.
\begin{thm}\label{inv}
Consider the cubic pyramids in $\mathcal{D} K^{\ast}$.
All such pyramids are congruent to the reference pyramid, i.e.~$\mathbb{K}_i\subset[K^*],$ $i=1,2,\ldots,10$.
\end{thm}
\begin{proof}
For all elements $\mathbb{K}_i$, $i=1,2,\ldots,10$, of $\mathcal{D} K^*$ it is possible to verify through direct calculation that:
\begin{enumerate}[label=(\roman*)]
\item all edges of $\mathbb{K}_i$ are equal to $1$;
\item the base of $\mathbb{K}_i$ is a cube with edges equal to $1$;
\item the degeneracy measure of $\mathbb{K}_i$ is $\delta(\mathbb{K}_i)= 4.18154$;
\end{enumerate}
from which congruency with the reference pyramid follows. \qed
\end{proof}
\begin{thm}\label{con}
Consider the bipentatopes in $\mathcal{D} K^{\ast}$.
All such bipentatopes are congruent.
\end{thm}
\begin{proof}
It suffices to demonstrate this property for a single pair of bipentatopes chosen from $\mathcal{D} K^*$; the claim then follows by repeating the procedure for all pairwise combinations. With this in mind, we take the two bipentatopes to be $P_{10}$ and $P_{14}$, see Figure~\ref{coupled_elements_cubic_pyramid}.
Any bipentatope can be partitioned into
two pentatopes as shown in Figure~\ref{split_bipentatope}.
We hence begin by dividing the bipentatopes $P_{10}$ and $P_{14}$ into pentatopes, whereby
\begin{align*}
P_{10}=\hat S_{10}
\left[k_{15},k_{18},k_{19},k_{20},k_{35}\right]\cup
\check{S}_{10}\left[k_{14},k_{15},k_{18},k_{20},k_{35}\right], \\
P_{14}=\hat S_{14}
\left[k_{15},k_{28},k_{29},k_{33},k_{36}\right]\cup
\check{S}_{14}\left[k_{15},k_{28},k_{29},k_{35},k_{36}\right].
\end{align*}
Let the generic affine transformations of the pentatopes $\hat S_{10}$ and $\hat S_{14}$ be
\begin{align*}
F_{10}\;:\;\hat S_{10}=A_{10}\mathcal{S}^*+B_{10},\quad
F_{14}\;:\;\hat S_{14}=A_{14}\mathcal{S}^*+B_{14}.
\end{align*}
We map $\hat S_{14}$ onto $\hat S_{10}$ by
\begin{align*}
\hat S_{10}=F_{10}\left(F_{14}^{-1}\left(\hat S_{14}\right)\right)
\Leftrightarrow
F_{14,10}\;:\;\hat S_{10}=A_{14,10}\hat S_{14}+B_{10}-A_{14,10}B_{14},
\end{align*}
where $A_{14,10}$ is the transitional matrix from $\hat S_{14}$ to
$\hat S_{10}$ and $F_{14,10}=F_{10}\circ F_{14}^{-1}$.
We emphasize that $k_{14}=F_{14,10}(k_{35})$, which assures
that
$
P_{10}=F_{14,10}\left(P_{14}\right)$.
The claim that $P_{10}\cong P_{14}$ follows from the fact that the transition matrix
\begin{align*}
A_{14,10}=\left(
\begin{array}{cccc}
-1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & 0 & -1 \\
\end{array}
\right)
\end{align*}
is orthogonal. \qed
\end{proof}
\begin{rem}
Let all bipentatopes from $\mathcal{D} K^*$ be partitioned into
two pentatopes as it is shown in Figure~\ref{split_bipentatope}.
Then all of them belong to $[\hat S_{10}]$ and
$\delta\left([\hat S_{10}]\right)=5.$ Moreover,
the elements of $[\hat S_{10}]$ are invariant. For more details concerning this class of pentatopes, we refer the reader to~\cite{petrov2020properties}.
\end{rem}
\begin{thm}\label{subd}
Consider the bipentatopes in $\mathcal{M} P^{\ast}$.
All such bipentatopes are congruent to the reference bipentatope, i.e.~$\mathbb{P}_j\subset[P^{\ast}],$ $j=1,2,\ldots,16$.
\end{thm}
\begin{proof}
Firstly, we prove that all bipentatopes in $\mathcal{M} P^*$ belong to the same class.
We choose two representative bipentatopes,
$\mathbb{P}_{2}$ and $\mathbb{P}_{4}$ (see Figure~\ref{sample_elements_bipentatope}), from $\mathcal{M} P^*$; the same argument applies to any other pair.
The bipentatope $\mathbb{P}_{2}$ is obtained by the coupling of the pentatopes
\begin{align*}
\hat S_{2}
\left[p_{6},p_{11},p_{14},p_{17},p_{20}\right] \quad \mbox{and} \quad
\check{S}_{2}\left[p_{6},p_{11},p_{17},p_{19},p_{20}\right].
\end{align*}
The second bipentatope is partitioned as follows
\begin{align*}
\mathbb{P}_{4}=\hat S_{4}
\left[p_{8},p_{11},p_{14},p_{17},p_{20}\right]\cup
\check{S}_{4}\left[p_{8},p_{11},p_{17},p_{19},p_{20}\right].
\end{align*}
The pentatope $\hat S_{2}$ is mapped onto $\hat S_{4}$ by the affine transformation
\begin{equation*}
\hat S_{4}=F_{4}\left(F_{2}^{-1}\left(\hat S_{2}\right)\right),
\end{equation*}
where
\begin{align*}
F_{2}\;:\;\hat S_{2}=A_{2}\mathcal{S} ^*+B_{2}, \quad
F_{4}\;:\;\hat S_{4}=A_{4}\mathcal{S} ^*+B_{4}.
\end{align*}
We present the map $F_{2,4}=F_{4}\circ F_{2}^{-1}$
in a matrix form by
\begin{align*}
F_{2,4}\;:\;\hat S_{4}=A_{2,4}\hat S_{2}+B_{4}-A_{2,4}B_{2}.
\end{align*}
The transitional matrix
\begin{align*}
A_{2,4}=\left(
\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 \\
0 & 0 & 1 & 0 \\
0 & -1 & 0 & 0 \\
\end{array}
\right)
\end{align*}
is orthogonal. Additionally, $F_{2,4}$ keeps the facet $\mathbb{P}_{2,-1,-6}$ invariant, which assures $\mathbb{P}_{4}=F_{2,4}\left(\mathbb{P}_{2}\right)$. Therefore, $\mathbb{P}_{2}\cong \mathbb{P}_{4}$.
On the other hand, all edges of the bipentatopes have the same length, equal to $\frac{1}{2}$, i.e.~they are identical.
It remains to prove that $\mathbb{P}_{2}$ belongs to $[P^*]$.
This follows directly from
$A_{2,*}=2I$, where $A_{2,*}$ is the transitional matrix from $\mathbb{P}_{2}$ to $P^*$ and $I$ is the identity matrix.
\qed
\end{proof}
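The orthogonality claims in the two proofs above can be checked mechanically. The following sketch (plain Python, with the matrices transcribed from the text) verifies that $A_{14,10}$ and $A_{2,4}$ satisfy $AA^{T}=I$, i.e.~that the corresponding affine maps are isometries.

```python
# Verify that the transition matrices from the two congruence proofs
# are orthogonal (A A^T = I), hence the affine maps are isometries.

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

def is_orthogonal(A):
    """Check A A^T = I for an integer matrix A."""
    n = len(A)
    P = matmul(A, transpose(A))
    return all(P[i][j] == (1 if i == j else 0)
               for i in range(n) for j in range(n))

# A_{14,10}: maps \hat S_14 onto \hat S_10.
A_14_10 = [[-1,  0, 0,  0],
           [ 0,  0, 1,  0],
           [ 0, -1, 0,  0],
           [ 0,  0, 0, -1]]

# A_{2,4}: maps \hat S_2 onto \hat S_4.
A_2_4 = [[1,  0, 0,  0],
         [0,  0, 0, -1],
         [0,  0, 1,  0],
         [0, -1, 0,  0]]

assert is_orthogonal(A_14_10)
assert is_orthogonal(A_2_4)
```

Since both matrices are signed permutation matrices, orthogonality also follows by inspection; the check above simply automates it.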
\section{Quadrature rules for the cubic pyramid}
\subsection{Methodology}
In this section, we consider the problem of approximating the integral of a function $f(\boldsymbol{x})$ on $K^{\ast}$, the reference cubic pyramid. This can be effectively accomplished via a quadrature rule whence
\begin{equation}\label{eq:moment}
\int_{-1}^{0} \int_{x_4}^{-x_4} \int_{x_4}^{-x_4} \int_{x_4}^{-x_4} f(\boldsymbol{x}) \,\mathrm{d}x_1 \, \mathrm{d}x_2 \, \mathrm{d}x_3 \, \mathrm{d}x_4 \approx \sum_{j=1}^N \omega_j f(\boldsymbol{x}_j),
\end{equation}
where $\{\omega_j\}$ are a set of $N$ weights and $\{\boldsymbol{x}_j\}$ are an associated set of $N$ points---known as abscissa---which together specify a \emph{quadrature rule}. If a rule is capable of exactly computing the integrals of a generic constant and all polynomials in the set
\[
\Xi(p) = \{ \psi_{ijkq}(\boldsymbol{x}) \mid 0 < i + j + k + q \le p \text{ and } i, j, k, q \ge 0 \},
\]
where $\psi_{ijkq}(\boldsymbol{x})$ is as given in Eq.~\eqref{eq:cubpb}, then the rule is said to be of strength $p$. When defining the abscissa of a rule, it is typical to constrain them to be strictly inside of our domain and be arranged symmetrically. This ensures that we never evaluate $f(\boldsymbol{x})$ outside of its domain. In addition, to reduce the likelihood of catastrophic cancellation it is also customary to require the weights to be positive. Evidently if the rule is to integrate a generic constant mode correctly it must necessarily be the case that
\[
\sum_{j=1}^N \omega_j = 2.
\]
Substituting the polynomials from our set into Eq.~\eqref{eq:moment} and enforcing equality, we require for all $\psi \in \Xi(p)$ that
\[
\int_{-1}^{0} \int_{x_4}^{-x_4} \int_{x_4}^{-x_4} \int_{x_4}^{-x_4} \psi(\boldsymbol{x}) \,\mathrm{d}x_1 \, \mathrm{d}x_2 \, \mathrm{d}x_3 \, \mathrm{d}x_4 = 0 = \sum_{j=1}^N \omega_j \psi(\boldsymbol{x}_j),
\]
where in evaluating the integral we have exploited the orthogonality relationship associated with the polynomials in our basis. This can be regarded as a non-linear least squares problem in which we have $|\Xi(p)| + 1$ equations and $5N$ unknowns. However, unlike a typical least squares problem, it is one where we seek a solution with zero residual. Assuming the abscissa are known we remark that the associated weights may be determined through the solution of a \emph{linear} least squares problem. This enables us to reduce the number of degrees of freedom in our non-linear problem to $4N$.
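To illustrate the weight-determination step, consider a deliberately simple hypothetical case (not one of the rules reported below): a strength-1 rule consisting of a single $S_1$-type point $(0,0,0,\delta)$. The odd moments in $x_1,x_2,x_3$ vanish by symmetry, the constant integrates to $2$, and $x_4$ integrates to $-8/5$ over $K^{\ast}$; fixing the abscissa at the centroid $\delta=-4/5$, the weight follows from a linear least squares solve (here via the normal equations, in plain Python).

```python
# Minimal sketch of the weight-determination step: given fixed abscissa,
# the weights solve a *linear* least squares problem against the exact
# moments of the reference cubic pyramid K*.
# Hypothetical strength-1 rule: a single point (0, 0, 0, delta) with one
# unknown weight w.

delta = -4.0 / 5.0          # centroid of K* along x4 (assumed abscissa)

# Basis: {1, x4}; odd moments in x1, x2, x3 vanish by symmetry.
# Exact moments over K*: integral of 1 is 2, integral of x4 is -8/5.
A = [[1.0], [delta]]        # design matrix, one row per basis function
b = [2.0, -8.0 / 5.0]       # exact moments

# Normal equations for a one-parameter least squares problem:
# w = (A^T b) / (A^T A)
ata = sum(a[0] * a[0] for a in A)
atb = sum(a[0] * bi for a, bi in zip(A, b))
w = atb / ata

residual = sum((a[0] * w - bi) ** 2 for a, bi in zip(A, b)) ** 0.5
print(w, residual)   # w ~ 2.0 with ~zero residual: exact for p = 1
```

The zero residual confirms that this one-point rule is exact for strength $p=1$; for the rules in this paper the same linear solve is carried out with the orthonormal basis $\psi_{ijkq}$ and many more points.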
In order to enforce symmetry, it is advantageous to decompose our $N$ abscissa into symmetry orbits. The symmetry orbits of the cubic pyramid are similar to those of the hexahedron, with an additional parameter that accounts for the extent of the pyramid in the fourth dimension. Therefore, one may follow the procedure for generating orbits for the hexahedron (see~\cite{witherden2015identification}), and enumerate all symmetry orbits. Thereafter, upon adding a parameter to account for the fourth dimension, the symmetry orbits of the cubic pyramid are written as follows
\begin{equation*}
\begin{aligned}[c]
S_1 (\delta) &= (0,0,0,\delta),\\
S_2 (\alpha,\delta) &= \chi(\alpha,0,0,\delta),\\
S_3 (\alpha,\delta) &= \chi(\alpha,\alpha,\alpha,\delta), \\
S_4 (\alpha,\delta) &= \chi(\alpha,\alpha,0,\delta), \\
S_5 (\alpha,\beta,\delta) &= \chi(\alpha,\beta,0,\delta), \\
S_6 (\alpha,\beta,\delta) &= \chi(\alpha,\alpha,\beta,\delta), \\
S_7 (\alpha,\beta,\gamma,\delta) &= \chi(\alpha,\beta,\gamma,\delta),
\end{aligned}
\qquad \qquad
\begin{aligned}[c]
|S_1| &= 1, \\
|S_2| &= 6, \\
|S_3| &= 8, \\
|S_4| &= 12, \\
|S_5| &= 24, \\
|S_6| &= 24, \\
|S_7| &= 48,
\end{aligned}
\end{equation*}
subject to the constraints $0 < \alpha,\beta,\gamma \leq - \delta$ and $-1 \leq \delta \leq 0$. Here, the operator $\chi$ returns all possible (unique) signed permutations of the input arguments.
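The action of the operator $\chi$ is straightforward to implement: it produces all unique signed permutations of its first three arguments, leaving the axial coordinate $\delta$ untouched. The sketch below (plain Python; the parameter values are arbitrary placeholders) reproduces the orbit sizes quoted above.

```python
from itertools import permutations, product

def chi(a, b, c, delta):
    """All unique signed permutations of (a, b, c), with delta appended
    as the fixed fourth coordinate (the pyramid axis)."""
    points = set()
    for perm in permutations((a, b, c)):
        for signs in product((1, -1), repeat=3):
            points.add(tuple(s * x for s, x in zip(signs, perm)) + (delta,))
    return sorted(points)

# Arbitrary distinct placeholder values for the orbit parameters.
al, be, ga, de = 0.1, 0.2, 0.3, -0.5

orbits = {
    "S1": [(0.0, 0.0, 0.0, de)],
    "S2": chi(al, 0, 0, de),
    "S3": chi(al, al, al, de),
    "S4": chi(al, al, 0, de),
    "S5": chi(al, be, 0, de),
    "S6": chi(al, al, be, de),
    "S7": chi(al, be, ga, de),
}

print({k: len(v) for k, v in orbits.items()})
# sizes: S1=1, S2=6, S3=8, S4=12, S5=24, S6=24, S7=48
```

Using a set automatically collapses the duplicates that arise when arguments repeat or are zero, which is exactly what reduces $|S_4|$ from $48$ to $12$, and so on.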
Now, let us provide an example of how these symmetry orbits are used. If $N = 156$, then one possible arrangement of points is four $S_1$ orbits, two $S_2$ orbits, one $S_3$ orbit, three $S_4$ orbits, two $S_5$ orbits, and one $S_7$ orbit whence
\[
N = 4|S_1| + 2|S_2| + |S_3| + 3|S_4| + 2|S_5| + |S_7| = 156,
\]
as required. Inspecting the number of arguments for each of the orbits and summing we find that this decomposition has a total of $26$ degrees of freedom. Together these fix the locations of all of our abscissa and hence become the parameters in our non-linear least squares problem. We note here that this represents a substantial decrease compared with an asymmetric rule which has some $4N = 624$ degrees of freedom. Of course, for a given $N$ there are typically a very large number of distinct decompositions into symmetry orbits. For our example of $N = 156$ there are some $8518$ unique decompositions.
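Both counts quoted above can be reproduced by a short brute-force enumeration. The sketch below checks the example decomposition for $N = 156$ and then counts all distinct ways of writing $N = 156$ as a non-negative integer combination of the orbit sizes $(1,6,8,12,24,24,48)$, treating the two size-24 orbit types $S_5$ and $S_6$ as distinct.

```python
# Orbit sizes |S_1|, ..., |S_7| of the cubic pyramid.
SIZES = (1, 6, 8, 12, 24, 24, 48)

# The example decomposition for N = 156:
# four S1, two S2, one S3, three S4, two S5, no S6, one S7.
example = (4, 2, 1, 3, 2, 0, 1)
assert sum(n * s for n, s in zip(example, SIZES)) == 156

def count_decompositions(N, sizes=SIZES):
    """Count non-negative integer solutions of sum_i n_i * sizes[i] = N
    by dynamic programming, one pass per orbit type (so the two
    size-24 types are counted separately)."""
    ways = [0] * (N + 1)
    ways[0] = 1
    for s in sizes:
        for total in range(s, N + 1):
            ways[total] += ways[total - s]
    return ways[N]

print(count_decompositions(156))  # 8518 distinct decompositions
```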
Our overall goal when identifying quadrature rules is to find one with the minimum value of $N$ for a particular strength $p$. Unfortunately, given a $p$ it is not possible to determine $N$ theoretically; instead it must be treated as a hyper-parameter to the overall optimisation process. Similarly, once given an $N$ it is, in general, not possible to know \emph{a priori} which particular symmetric decomposition will yield a rule---if any. Thus, it is also necessary to treat the specific decomposition as a hyper-parameter, too.
We have implemented the aforementioned system into the open source symmetric quadrature rule package Polyquad \cite{witherden2015identification}. Running this modified version of Polyquad on a workstation, we have identified symmetric quadrature rules on the cubic pyramid for $2 \le p \le 12$. The total number of points required for these rules can be seen in Table \ref{tab:rules}.
The abscissa for all of our rules are inside of our reference pyramid, and feature positive weights. The rules themselves are provided in quadruple precision in the electronic supplemental material. An example of the $p = 5$ quadrature rule is provided in Table~\ref{tab:example}.
\begin{table}[h!]
\centering
\caption{Number of points $N$ required for a rule of strength $p$ inside of the cubic pyramid.}
\begin{tabular}{r|rrrrrrrrrrr} \toprule
$p$ & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\
$N$ & 7 & 10 & 21 & 29 & 50 & 65 & 114 & 163 & 251 & 323 & 552 \\
\bottomrule
\end{tabular}
\label{tab:rules}
\end{table}
\begin{table}[h!]
\centering
\caption{The $p = 5$ quadrature rule on the cubic pyramid $K^{\ast}$ with $N = 29$ points. The remaining rules are provided in the electronic supplemental material.}
\begin{tabular}{r|l|l} \toprule
Orbit & Abscissa & Weight \\
\hline
$S_1$ & -0.32846339581882957902277574789024 & 0.084184481885656624083314191434215
\\
\hline
$S_2$ & 0.59942266549153142521578075246447 & 0.11340384654119720346152792514806 \\
& -0.74912930534502345701037962143888 & \\
\hline
$S_2$ & 0.76070275297226284532398968584027 & 0.089004419809134534555110633350815 \\
& -0.95754362470717092670167055158225 & \\
\hline
$S_3$ & 0.43384835705078179649267535015814 & 0.033983660519520996993103370708523 \\
& -0.5766200661792940506119682920933 & \\
\hline
$S_3$ & 0.69719510426558290268820473081072 & 0.05368707948202312148400343648805 \\
& -0.91852788682445245487884982935091 & \\
\bottomrule
\end{tabular}
\label{tab:example}
\end{table}
\subsection{Numerical experiments}
The objective of this section is to numerically validate a subset of the quadrature rules in Table~\ref{tab:rules}. Towards this end, we performed a series of numerical experiments on the rules with strengths $p = 2, 3, \ldots, 9$. These rules are capable of exactly integrating certain monomial functions by construction, as well as approximately integrating certain transcendental functions. In what follows, we limit our focus to: a) weighted combinations of monomial functions, and b) analytic transcendental functions. In addition, we execute all of the numerical experiments in quadruple precision arithmetic in order to demonstrate the full precision of the rules.
\subsubsection{Polynomial integration}
In this section, we evaluate the ability of our quadrature rules to integrate a weighted combination of monomial functions on the reference cubic pyramid $K^{\ast}$. More specifically, we consider the following polynomial function of degree~$m$
\begin{align*}
f_{\text{poly}} \left(\boldsymbol{x}; m\right) = \sum_{r = 0}^{m} \; \sum_{s=0}^{m-r} \; \sum_{t=0}^{m-r-s} \; \sum_{v=0}^{m-r-s-t} C_{rstv} \, x_{1}^{r} x_{2}^{s} x_{3}^{t} x_{4}^{v},
\end{align*}
where the constants $C_{rstv}$ are given by the following formula
\begin{align*}
C_{rstv} = 24\frac{\left(r+1\right)\left(s+1\right)\left(t+1\right)\left(v+1\right)}{\left(m+1\right)\left(m+2\right)\left(m+3\right)\left(m+4\right)}.
\end{align*}
One may observe that this function contains all possible distinct monomials of degree $m$. It turns out that the integral of this function on the domain $K^{\ast}$ can be computed exactly as follows
\begin{align*}
J_{\infty} \left(\boldsymbol{x}; m\right) = \int_{K^{\ast}} f_{\text{poly}} \left(\boldsymbol{x}\right) d\boldsymbol{x} = \sum_{r = 0}^{m} \; \sum_{s=0}^{m-r} \; \sum_{t=0}^{m-r-s} \; \sum_{v=0}^{m-r-s-t} C_{rstv} I_{rstv},
\end{align*}
where
\begin{align*}
I_{rstv} = \int_{K^{\ast}} x_{1}^{r} x_{2}^{s} x_{3}^{t} x_{4}^{v} \, d\boldsymbol{x} = \frac{\left(1+ \left(-1\right)^{r}\right)\left(1+ \left(-1\right)^{s}\right)(1+ \left(-1\right)^{t}) \left(-1\right)^{v}}{\left(1+r\right)\left(1+s\right)\left(1+t\right)\left(r+s+t+v+4\right)},
\end{align*}
is the exact integral of each monomial.
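The closed form for $I_{rstv}$ can be cross-checked numerically. The sketch below evaluates the nested integral over $K^{\ast}$ with a hard-coded five-point Gauss--Legendre rule in each coordinate (exact for polynomials up to degree nine per axis, so exact for the low-degree monomials tested) and compares the result against the formula.

```python
# Cross-check the closed form of I_rstv against direct nested
# Gauss-Legendre integration over the reference cubic pyramid K*:
#   x4 in [-1, 0],  x1, x2, x3 in [x4, -x4].

# Five-point Gauss-Legendre nodes/weights on [-1, 1].
GL = [(-0.9061798459386640, 0.2369268850561891),
      (-0.5384693101056831, 0.4786286704993665),
      ( 0.0,                0.5688888888888889),
      ( 0.5384693101056831, 0.4786286704993665),
      ( 0.9061798459386640, 0.2369268850561891)]

def I_formula(r, s, t, v):
    """Closed-form monomial moment from the text."""
    num = (1 + (-1) ** r) * (1 + (-1) ** s) * (1 + (-1) ** t) * (-1) ** v
    den = (1 + r) * (1 + s) * (1 + t) * (r + s + t + v + 4)
    return num / den

def I_quad(r, s, t, v):
    """Nested Gauss-Legendre evaluation of the same moment."""
    total = 0.0
    for u4, w4 in GL:
        x4 = 0.5 * (u4 - 1.0)   # map [-1,1] -> [-1,0], jacobian 1/2
        h = -x4                  # half-width of [x4, -x4]
        inner = 0.0
        for u1, w1 in GL:
            for u2, w2 in GL:
                for u3, w3 in GL:
                    f = (h * u1) ** r * (h * u2) ** s * (h * u3) ** t * x4 ** v
                    inner += w1 * w2 * w3 * f
        total += 0.5 * w4 * h ** 3 * inner   # jacobian h per inner axis
    return total

for rstv in [(0, 0, 0, 0), (2, 0, 0, 1), (0, 2, 2, 0), (1, 0, 0, 0)]:
    assert abs(I_quad(*rstv) - I_formula(*rstv)) < 1e-12
```

In particular, $I_{0000}=2$ recovers the volume of $K^{\ast}$, and any odd power of $x_1$, $x_2$, or $x_3$ gives zero, as the formula predicts.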
The approximate integral of $f_{\text{poly}}$ can be computed by evaluating the following quadrature formula
\begin{align*}
J_{p} = \sum_{j=1}^{N} \omega_{j} f_{\text{poly}} \left(\boldsymbol{x}_j\right),
\end{align*}
where the number of points $N = N\left(p\right)$, weights $\omega_j = \omega_j \left(p\right)$, and abscissa $\boldsymbol{x}_j = \boldsymbol{x}_j \left(p\right)$ are functions of the quadrature rule strength $p$. Evidently, we obtain exact integration $J_{p} = J_{\infty}$ when $p \geq m$, and approximate integration $J_{p} \approx J_{\infty}$ when $p < m$.
Figure~\ref{polynomial_error} shows the quadrature error produced by integrating $f_{\text{poly}}$ for different values of $m$ and $p$. In accordance with expectations, we obtain exact integration whenever $p \geq m$, to within machine precision ($10^{-32}$).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=9.5cm]{Figs/poly_plot.eps}
\end{center}
\caption{Absolute error in the numerical integration of $f_{\text{poly}}$ over the reference cubic pyramid $K^{\ast}$. The dashed line marks the threshold of machine precision.}\label{polynomial_error}
\end{figure}
\subsubsection{Transcendental integration}
In this section, we evaluate the ability of our quadrature rules to integrate a set of transcendental functions on a family of successively refined meshes. For this purpose, we consider the following functions
\begin{align*}
f_1 \left(\boldsymbol{x}\right) &= \sin\left(\pi x_{1}^{2}\right) \sin\left(\pi x_{2}^{2}\right) \sin\left(\pi x_{3}^{2}\right) \sin\left(\pi x_{4}^{2}\right), \\[1.5ex]
f_2 \left(\boldsymbol{x}\right) &= \exp\left(x_{1}^{2}\right) \exp\left(x_{2}^{2}\right) \exp\left(x_{3}^{2}\right) \exp\left( x_{4}^{2}\right), \\[1.5ex]
f_3 \left(\boldsymbol{x}\right) &= \exp\left( x_1 \right) \exp\left( \tfrac{1}{2} x_2 \right) \exp\left( \tfrac{1}{3} x_3 \right) \exp\left( \tfrac{1}{4} x_4 \right).
\end{align*}
Note that the first two functions are symmetric with respect to the coordinates $x_1, \ldots, x_4$, whereas the last function is asymmetric.
The integral of each transcendental function was computed on the domain $\Omega = \left[0,1\right]^{4}$ as follows
\begin{align*}
J_{\infty} = \int_{\Omega} f_{\text{trans}} \left(\boldsymbol{x}\right) d \boldsymbol{x}.
\end{align*}
The analytical solution of this integral is generally unknown. As a result, the `exact' integration was carried out using a vectorized adaptive integration routine in Matlab. This routine approximates the integral of a smooth function to an arbitrary level of precision by successively subdividing the domain of integration, and leveraging nested Gauss-Kronrod quadrature rules to estimate the integration error on each subinterval. One may consult the work of Shampine~\cite{shampine2008matlab,shampine2008vectorized} for details of the Matlab implementation, and Notaris~\cite{notaris2016gauss} for a general review of Gauss-Kronrod quadrature rules. In our case, we used the adaptive Matlab routine to approximate the integrals to within machine precision.
The transcendental functions above were also integrated using the aforementioned quadrature rules on the cubic pyramid. In order to facilitate this process, the domain $\Omega$ was covered with a uniform mesh of $M^{4}$ tesseract elements where $M$ is a positive integer. Thereafter, each tesseract was subdivided into 8 cubic pyramid elements in accordance with the partition operator $\mathcal{B}$, yielding a total of $N_{e} = 8M^{4}$ cubic pyramid elements. The quadrature rules were transformed (mapped) from the reference cubic pyramid $K^{\ast}$ to the individual cubic pyramids in the mesh via a linear mapping procedure. The approximate integral of each function was then computed in the following fashion
\begin{align*}
J_{p} = \sum_{i=1}^{N_{e}} \sum_{j=1}^{N} \omega_{j}^{K_i} f_{\text{trans}} \left(\boldsymbol{x}_{j}^{K_i} \right),
\end{align*}
where $\omega_{j}^{K_i}$ are the quadrature weights and $\boldsymbol{x}_{j}^{K_i}$ are the point locations in each element $K_i$.
Figures~\ref{f1_error}, \ref{f2_error}, and \ref{f3_error} illustrate the quadrature errors for integrating $f_1$, $f_2$, and $f_3$ on $\Omega$ for mesh parameters $M = 1, 2, \ldots, 10$. In each case, the error converges at a rate of $h^{p+1}$ or higher. In addition, the pairs of even and odd quadrature rules converge at the same rates, e.g.~rules with $p = 2$ and $p = 3$ converge at a rate of $h^4$. Combining these insights, we estimate that the rules will typically converge at a rate of $h^{p+2}$ for even values of $p$, and $h^{p+1}$ for odd values of $p$. The precise reason for this behavior is unknown, although it is likely due to fortunate cancellations of truncation error terms for quadrature rules with even values of $p$.
In addition, we note that increasing the number of quadrature points (by increasing $p$) does not always yield a more accurate result when integrating transcendental functions. For example, the 50 point rule with $p = 6$ sometimes outperforms the 65 point rule with $p = 7$ (see Figure~\ref{f3_error}). This behavior is likely due to a convenient alignment between the symmetry orbits of the $p = 6$ rule and the local topology of the transcendental function $f_3$. We note that this trend is not general, as it does not hold for $f_1$ and $f_2$. In fact, in most cases the higher degree rules outperform the lower degree rules, as expected.
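The convergence rates quoted above can be extracted from error data by a least squares fit of $\log(\text{error})$ against $\log h$ with $h = 1/M$. A minimal sketch of this estimate, applied here to synthetic errors that decay exactly as $h^{4}$ (the rate observed for the $p = 2$ and $p = 3$ rules; the prefactor is arbitrary), is:

```python
import math

def observed_order(errors, Ms):
    """Least squares slope of log(error) vs log(h), with h = 1/M."""
    xs = [math.log(1.0 / M) for M in Ms]
    ys = [math.log(e) for e in errors]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

# Synthetic error data decaying exactly as h^4.
Ms = list(range(1, 11))
errors = [0.37 * (1.0 / M) ** 4 for M in Ms]
print(observed_order(errors, Ms))   # ~4.0
```

Applied to the measured errors behind the figures, the same fit yields the empirical orders discussed in the text.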
\begin{figure}[h!]
\begin{center}
\includegraphics[width=9.5cm]{Figs/f1_plot.eps}
\end{center}
\caption{Absolute value of the error in the numerical integration of $f_1$ over the domain $\Omega$ for mesh parameters $M = 1, 2, \ldots, 10$.}\label{f1_error}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=9.5cm]{Figs/f2_plot.eps}
\end{center}
\caption{Absolute value of the error in the numerical integration of $f_2$ over the domain $\Omega$ for mesh parameters $M = 1, 2, \ldots, 10$.}\label{f2_error}
\end{figure}
\pagebreak
\clearpage
\begin{figure}[h!]
\begin{center}
\includegraphics[width=9.5cm]{Figs/f3_plot.eps}
\end{center}
\caption{Absolute value of the error in the numerical integration of $f_3$ over the domain $\Omega$ for mesh parameters $M = 1, 2, \ldots, 10$.}\label{f3_error}
\end{figure}
\section{Conclusion}
In this work, we have presented a novel refinement strategy for four-dimensional cubic pyramids. Given a cubic pyramid, our method subdivides it into a conforming set of smaller cubic pyramids and bipentatopes. Moreover, all of the edges of each cubic pyramid have the same length, as do the edges of each bipentatope, which corresponds to an optimal configuration. Furthermore, we have also developed and evaluated a new set of polynomial quadrature rules inside of the cubic pyramid. Together, these results open up new pathways for four-dimensional meshing for space-time finite element methods.
For the sake of completeness, we also provided a comprehensive example of a four-dimensional conformal hybrid mesh on a canonical domain. The initial mesh contains tesseract and cubic pyramid elements. The mesh is conformal because the couplings between the tesseracts and the cubic pyramids are consistent, without the need for an interface subdomain. Following the formation of the initial mesh, it is refined by subdividing the tesseract elements in the standard fashion (splitting along planes that are orthogonal to the coordinate axes), and by subdividing the cubic pyramids in the aforementioned novel fashion. We believe that this strategy can be generalized to a much broader class of hybrid meshes, including curved meshes, without significant modifications.
\section*{Declarations}
\subsubsection*{Funding}
The authors did not receive support from any organization for the submitted work.
\subsubsection*{Conflict of interest/Competing interests}
The authors have no conflicts of interest to declare that are relevant to the content of this article.
\subsubsection*{Availability of data and material}
The quadrature rules are available as electronic supplemental material.
\subsubsection*{Code availability}
The Polyquad code is available at https://github.com/PyFR/Polyquad.
\bibliographystyle{spmpsci}
\section{Introduction}
Neutrinoless double-beta ($0\nu2\beta$) decay is a hypothetical
nuclear transformation that changes the lepton number by two
units when a candidate even-even nucleus emits two electrons with no neutrino in the final state. The observation of $0\nu2\beta$ decay would testify lepton number non-conservation and the presence of a Majorana term in neutrino masses, and give information on the neutrino-mass absolute scale along with the ordering of the neutrino-mass eigenstates \cite{Vergados:2012,Barea:2012,Rodejohann:2012}. It
should be stressed that many effects beyond the Standard Model can contribute to the $0\nu2\beta$
decay rate \cite{Deppisch:2012,Pas:2015,Bilenky:2015,DellOro:2016}.
In contrast with the two-neutrino mode ($2\nu2\beta$),
experimentally observed in eleven isotopes with half-lives in
the range 10$^{18}$--10$^{24}$ yr (see reviews
\cite{Saakyan:2013,Barabash:2015,Barabash:2015a} and references
therein) and allowed in the Standard Model, the $0\nu2\beta$ decay has not been detected yet. The most
sensitive experiments give only half-life limits on the level
of $T_{1/2} > 10^{25}-10^{26}$ yr, which correspond to constraints on the
effective Majorana neutrino mass around $\langle m_{\nu}\rangle < 0.1-1$ eV, in the degenerate hierarchy region of the
neutrino mass eigenstates (see reviews
\cite{DellOro:2016,Barabash:2015a,Giuliani:2012,Cremonesi:2014,Sarazin:2015} and
the recent KamLAND-Zen result \cite{Gando:2016}). The goal of the
next-generation $0\nu2\beta$ experiments is to probe the inverted
hierarchy region of the neutrino mass ($\langle m_{\nu}\rangle
\sim 0.05-0.02$~eV). This neutrino mass scale corresponds to
half-lives $T_{1/2}\sim 10^{27}-10^{28}$ yr even for the nuclei
with the highest decay probability
\cite{Vergados:2012,Barea:2012}. The attainment of such a high sensitivity
requires the construction of a detector containing a large number of
$2\beta$ active nuclei ($10^3-10^4$ moles of isotope of interest),
extremely low (ideally zero) radioactive background, high
detection efficiency (obtainable in the calorimetric approach
``source = detector'') and ability to distinguish the effect
searched for (in particular, as high as possible energy
resolution). Taking into account the extremely low decay
probability and the difficulties of the nuclear matrix elements
calculations \cite{Vergados:2012,Barea:2012}, the experimental
program should include a few candidate nuclei.
The technique of low temperature scintillating bolometers looks
very promising to satisfy the above mentioned requirements
\cite{Pirro:2006,Beeman:2012,Artusa:2014}. The nucleus $^{116}$Cd
is one of the most attractive candidates thanks to one of the
highest energy release ($Q_{2\beta}$ = 2813.50(13)~keV
\cite{Rahaman:2011}), comparatively large natural isotopic
abundance ($\delta$ = 7.512(54)\% \cite{Meija:2016}),
applicability of centrifugation for cadmium isotopes enrichment in
a large amount, and availability of cadmium tungstate crystal
scintillators (CdWO$_4$).
Cadmium tungstate crystals are routinely produced on an industrial basis and are among the most radiopure and efficient scintillators, with a long history of applications in low counting experiments to search for double-beta decay
\cite{Danevich:1989,Danevich:1995,Danevich:1996a,Danevich:2003a,Belli:2008}
and investigate rare $\alpha$ \cite{Danevich:2003b} and $\beta$ decays
\cite{Alessandrello:1993,Danevich:1996b,Belli:2007}.
Recently, high-quality radiopure CdWO$_4$ crystal scintillators
were developed from deeply-purified cadmium samples enriched
in the isotopes $^{106}$Cd \cite{Belli:2010} and $^{116}$Cd
\cite{Barabash:2011} with the help of the low-thermal-gradient
Czochralski crystal-growth technique \cite{Grigoriev:2014}. These
enriched scintillators are currently and successfully used in the $0\nu2\beta$ decay
experiments with $^{106}$Cd \cite{Belli:2012,Belli:2016} and
$^{116}$Cd \cite{Poda:2014,Danevich:2016}. Important advantages
of the low-thermal-gradient Czochralski method are a high yield of
the crystal boules ($\approx87\%$) and an acceptable low level of
irrecoverable losses of enriched cadmium ($\approx2\%$). Thus,
production of high quality radiopure cadmium tungstate crystal
scintillators from enriched isotopes is already a well developed technique.
Starting from the early 1990s, CdWO$_4$ was intensively tested, first as a pure bolometer \cite{Alessandrello:1993} and then as a scintillating bolometer, with a high performance in terms of energy resolution, particle discrimination ability and low radioactive background
\cite{Pirro:2006,Gorla:2008,Gironi:2009,Arnaboldi:2010}.
The aforementioned results played a crucial role in including
CdWO$_4$ in the list of the possible candidates for the CUPID project
\cite{CUPID}. In this context, the first bolometric test of an
enriched $^{116}$CdWO$_4$ scintillating bolometer -- here reported -- adds a crucial missing piece of information in view of the full
implementation of the cadmium tungstate technology for $0\nu2\beta$ search. It should be stressed that reproducing the results achieved with materials of
natural isotopic composition with enriched crystal scintillators
is not trivial. Indeed, the procedures of purification of enriched
isotopes and the growth of crystals from enriched materials are
severely constrained by the strong requirements of a high yield in developing ready-to-use crystals and minimal losses of the costly enriched materials.
These requirements may affect negatively the bolometric
performance and the intrinsic background, which need to be specifically studied for bolometers containing enriched isotopes. Among the three candidates that are very attractive for the scintillating
bolometer technology, i.e. $^{100}$Mo, $^{82}$Se and $^{116}$Cd,
positive tests on enriched materials were performed before this
work only in the first two cases~\cite{Barabash:2014,Artusa:2016}.
The results here described on $^{116}$Cd complete the
investigation of these isotopes and enhance the unrivaled merits of the $^{116}$CdWO$_4$ technology.
\section{Test of a $^{116}$CdWO$_4$ scintillating bolometer}
A sample of enriched $^{116}$CdWO$_4$ crystal scintillator
was cut from the wide part of the growth cone
of a 1.9~kg crystal boule~\cite{Barabash:2011} (see Fig.1 in
Ref.~\cite{Barabash:2016}, where the boule and cut parts are
shown). The crystal mass and size are respectively 34.5~g and $28\times27\times 6$ mm, and the isotopic concentration of $^{116}$Cd is 82\%. The light detector (LD) consists of a high-purity germanium wafer ($\oslash$44$\times$0.175 mm) produced by Umicore. The scintillator and the Ge wafer were fixed in individual copper frames by using PTFE pieces and brass / copper screws. The inner
surface of the detector holder was covered by a reflecting foil
(Vikuiti$^{\rm TM}$ Enhanced Specular Reflector Film) to improve the scintillation light
collection. A neutron transmutation doped (NTD) Ge thermistor with
a mass of $\sim 50$~mg was glued on the $^{116}$CdWO$_4$ crystal
by six spots of epoxy (Araldite\textregistered) to register the
temperature pulses induced by the absorption of particles in the
$^{116}$CdWO$_4$ crystal. An approximately three-times-smaller NTD
Ge thermistor was attached to the LD with the aim to reduce the
added heat capacity and to increase the LD sensitivity. Both
bolometers were supplied with a silicon chip on top of which a heavily doped
meander was formed by donor ion implantation. The meander resistance is
stable down to millikelvin temperature and was used as a heater
\cite{Andreotti:2012} to inject periodically fixed amounts of
thermal energy for the detector stabilization. The partially
assembled $^{116}$CdWO$_4$ scintillating bolometer and the LD are
shown in Fig. \ref{fig:detector}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.48\textwidth]{1-enrCWO_small_bolometer.eps}
\caption{Photograph of the 34.5~g $^{116}$CdWO$_4$ scintillating bolometer
assembled on a copper plate covered by a reflecting foil (left) together with
the Ge-based light detector (right). See the text for the details.}
\label{fig:detector}
\end{figure}
The low-temperature tests of the $^{116}$CdWO$_4$ scintillating
bolometer were performed in a cryogenic laboratory of the CSNSM
(Orsay, France) by using a dry high-power dilution
refrigerator~\cite{Mancuso:2014} with a 4~K stage cooled by a
pulse-tube. The sample holder is mechanically decoupled from the
mixing chamber by four springs to reduce the acoustic noise caused
by the cryostat vibrations. The outer vacuum chamber of the
refrigerator is surrounded by a passive shield made of low
radioactivity lead (10~cm minimum thickness) to suppress the
environmental $\gamma$ background. The shield mitigates the
pile-up problem typical for above-ground measurements with
macro-bolometers, given the slow response of these devices (tens
or even hundreds of milliseconds). For the same reason, we have used a
relatively small $^{116}$CdWO$_4$ sample aiming to reduce the counting rate of the environmental
$\gamma$ background.
Low-noise electronics, based on DC-coupled voltage-sensitive amplifiers~\cite{Arnaboldi:2002} and located inside a Faraday
cage, were used in the experiment. The $^{116}$CdWO$_4$ and the LD
NTD sensors were biased with currents of 4.2~nA and 25~nA,
respectively. The bias current was injected through two load
resistors in series with a total resistance of 200~M$\Omega$ for
both channels. The stream data were filtered by a Bessel filter
with a high frequency cut-off at 675~Hz and acquired by a 16~bit
ADC with 10~kHz sampling frequency.
Most of the measurements were performed with the sample holder
temperature stabilized at 18.0 mK. However, the $^{116}$CdWO$_4$
detector was approximately 2~mK warmer because thermal equilibrium between the mixing chamber and the
detector itself was not reached: the scintillating bolometer was mounted on
the mechanically-decoupled holder by means of brass rods, which are non-optimal for thermalization. Therefore, the NTD-Ge-thermistor
resistances ($R_{NTD}$) at the working temperature had a clear
trend to increase. For instance, the resistance of the heat
channel thermistor changed from an initial $\sim 0.4$~M$\Omega$ value to a final $\sim 1$~M$\Omega$ during the two-week background run. It is worth noting
that the sample-holder temperature reached 9.6~mK during a short
test with unregulated temperature, and the corresponding NTD-Ge-thermistor resistance of the $^{116}$CdWO$_4$ bolometer went
quickly up to 1.6~M$\Omega$ with a tendency to further increase.
We expect that a better thermal coupling and operation at lower
temperatures would enable much higher detector performance (see
the next Section). In this regard, we remark that the CUPID experiment
is expected to be performed at $\sim$10~mK base temperature, which
is in fact the value used in Cuoricino and CUORE-0, predecessors
of the CUORE experiment.
We accumulated 59.6 h of data with a $^{232}$Th source (consisting of
a 15.2 g thoriated tungsten rod containing 1\% of Th), and 190.1 h
of background-only measurements, which altogether constitute 249.7
h of live time. The $^{116}$CdWO$_4$ detector was calibrated by means
of the $\gamma$ quanta from the environmental radioactivity
(mainly emitted by $^{214}$Pb and $^{214}$Bi radionuclides from
the $^{238}$U chain) and, in the calibration run, by $\gamma$ quanta
from the $^{232}$Th source (mainly $^{228}$Ac and $^{208}$Tl,
daughters of $^{232}$Th). The rear side of the LD was permanently
irradiated by a weak $^{55}$Fe X-ray source. In addition, an optical
fiber was mounted inside the cryostat to transmit LED light
pulses to the LD every 30~s, which can also be used for
calibration / stabilization purposes.
\section{Results and discussion}
The collected data were processed off-line by applying the optimum
filtering procedure \cite{Gatti:1986} and several
pulse-characterizing parameters were evaluated for each recorded
signal: the pulse amplitude, the rise- ($\tau_R$) and decay-
($\tau_D$) times\footnote{Here the rise-time is defined as the
time interval between 10\% and 90\% of the maximum amplitude of the
signal for the rising edge, while the decay-time corresponds to
the time interval between 90\% and 30\% of the maximum amplitude of
the signal for the decaying edge.}, several pulse-shape
indicators, and the DC baseline level of the pre-triggered part (over
0.15 s). In addition, the energy resolution of the filtered
baseline noise (FWHM$_{Bsl}$) and the amplitude of the signal
($S_{NTD}$) for a given deposited energy were estimated for each
data set ($1-3$ days of measurements). Some of these parameters,
characterizing the performance of the $^{116}$CdWO$_4$
scintillating bolometer and the LD, are given in Table
\ref{tab:performance}.
\begin{table}[!htb]
\caption{Technical data (see the text) for the $^{116}$CdWO$_4$ scintillating bolometer tested above ground at 18.0 mK (stabilized temperature of the sample holder). The $R_{NTD}$ and $S_{NTD}$ parameters correspond to the coldest conditions of the detector obtained at the end of the measurements. $\gamma$($\beta$) events registered by the $^{116}$CdWO$_4$ bolometer in the energy range $0.6-2.7$~MeV and the corresponding scintillation light signals detected by the LD in the energy range $\sim 15-85$~keV were used to evaluate the $\tau_R$ and $\tau_D$ parameters.} \footnotesize \centering
\begin{tabular}{cccccc}
\hline
Detector & $R_{NTD}$ & $S_{NTD}$ & FWHM$_{Bsl}$ & $\tau_R$ & $\tau_D$ \\
~ & M$\Omega$ & nV/keV & keV & ms & ms \\
\hline
LD & 0.12 & 258 & 0.6 & 1.3 & 4.7 \\
$^{116}$CdWO$_4$ & 1.0 & 135 & 1.5 & 5.1 & 28.5 \\
\hline
\end{tabular}
\label{tab:performance}
\end{table}
Taking into account the expected high light yield\footnote{Here we
define the ``light yield'' as the ratio between the light and heat signal
amplitudes (converted into detected energy), which is of course lower than the absolute light yield of CdWO$_4$.} of cadmium tungstate at low temperatures
(e.g., $\sim$17 keV/MeV \cite{Arnaboldi:2010}), we chose a light
detector with relatively modest performance, as is visible
from Table \ref{tab:performance}. Therefore, we were not able to
separate clearly the $^{55}$Fe X-ray doublet (at 5.9 and 6.5 keV)
from the noise due to the poor energy resolution (FWHM$_{Fe55}$
$\approx 0.7$ keV). However, the LD time characteristics ($\tau_R$
and $\tau_D$ of the scintillation signals) are similar to those of
devices instrumented with small-size NTD Ge sensors (e.g., see the
performance of a first batch of six LDs preliminarily tested for the
CUPID-0 detector array with Zn$^{82}$Se scintillating bolometers
\cite{Artusa:2016}).
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{3-run22_cwo_restab_spectrum.eps}
\caption{In the top panel, the energy spectra of $\gamma$($\beta$) events
accumulated by the 34.5 g $^{116}$CdWO$_4$ bolometer in a $\sim 190$~h background run
(blue dotted histogram, Bkg only) and in a $\sim 60$~h calibration run (red solid histogram,
Bkg + $^{232}$Th) performed above ground at CSNSM. The nuclides responsible for the observed peaks are specified. ``D.E.'' and ``S.E.'' labels refer to the double-escape and single-escape peaks related to the 2615~keV full-energy $\gamma$ peak of $^{208}$Tl. In the bottom panel, details of the spectra in the $100-1000$~keV energy range.
\label{fig:spectrum}
\end{figure}
The performance of the $^{116}$CdWO$_4$ bolometer during the tests
is characterised by a high sensitivity $S_{NTD}$ and a quite low baseline noise (see Table \ref{tab:performance}). The heat-pulse profile of the $^{116}$CdWO$_4$ detector, as well as the sensitivity $S_{NTD}$, are similar to those observed in low-temperature tests with CdWO$_4$ bolometers produced from cadmium with natural isotopic composition \cite{Pirro:2006,Alessandrello:1993,Gorla:2008,Gironi:2009,Arnaboldi:2010}. This confirms that CdWO$_4$ is an excellent bolometric material and that the detector performance is not spoiled by the isotopic enrichment in Cd. The energy spectra acquired with the $^{116}$CdWO$_4$ bolometer in background ($\sim 190$~h) and calibration ($\sim 60$~h) runs, shown in Fig.~\ref{fig:spectrum}, contain a number of sharp $\gamma$ peaks; even low-intensity (a few \%) $\gamma$ lines of $^{214}$Bi are clearly visible, which altogether demonstrates the excellent spectrometric performance of the detector. The energy resolution (FWHM) varies from 2.9(1) keV at
242.0 keV ($\gamma$ quanta of $^{214}$Pb) to 8.3(9) keV at 2614.5 keV ($\gamma$ quanta of $^{208}$Tl).
\begin{figure}[htb]
\centering
\includegraphics[width=0.48\textwidth]{2-run22_cwo_scatter-plot_v3.eps}
\caption{Scatter plot of the light-versus-heat signals collected in a $\sim 250$~h
run with the 34.5~g $^{116}$CdWO$_4$ scintillating
bolometer. (Inset) The low-energy part of the scatter plot in the
proximity of the 609.3 keV $\gamma$ peak of $^{214}$Bi. The light-heat
anticorrelation is clearly visible as a negative slope of the
609.3 keV cluster.} \label{fig:scatter}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.48\textwidth]{4-run22_cwo_fwhm_stab_restab_v3.eps}
\caption{Energy resolution (FWHM) of the $^{116}$CdWO$_4$ bolometric detector
after the stabilization of its thermal response by using a heater (blue filled
rectangles). The energy resolution improves by considering light-heat anticorrelation (red open circles). The fits of the data by a function FWHM = $\sqrt{a^2 + (b \times E_{\gamma})^c}$ (where FWHM and energy $E_{\gamma}$ are in keV; $a$, $b$, and $c$ are free parameters) are shown by dashed lines. The dotted lines indicate FWHM = 7.5 keV expected at $Q_{2\beta}$ of $^{116}$Cd.}
\label{fig:fwhm}
\end{figure}
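The resolution curve in Fig.~\ref{fig:fwhm} is fitted with FWHM $= \sqrt{a^2 + (b \times E_{\gamma})^c}$. The sketch below only illustrates this model; the parameter values are hypothetical placeholders, not the fitted ones:

```python
import math

def fwhm_model(e_kev, a, b, c):
    """Energy-resolution model FWHM(E) = sqrt(a^2 + (b*E)^c), all in keV."""
    return math.sqrt(a * a + (b * e_kev) ** c)

# hypothetical parameters for illustration only (not the fit results)
a, b, c = 2.0, 0.004, 1.5
fwhm_at_tl = fwhm_model(2614.5, a, b, c)  # extrapolation to the 2614.5 keV peak
```

At $E_\gamma = 0$ the model reduces to the baseline term $a$, which is why the fitted curves flatten at low energy.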
The LD data were also processed with the trigger records of the
$^{116}$CdWO$_4$ data with an adjusted time difference between
the two channels (due to the longer rise-time of the
$^{116}$CdWO$_4$ heat signals) to search for coincidences. A
scatter plot of the pulse amplitudes of coincident heat and light
signals is shown in Fig.~\ref{fig:scatter}. The structures
visible in this figure are associated with $\gamma$($\beta$) and
cosmic-muon interactions in the scintillator, bulk and/or surface
trace contamination by $\alpha$-radioactive nuclides from the U/Th
chains, nuclear recoils due to ambient neutron scattering on the
nuclei in the $^{116}$CdWO$_4$ crystal, events with a prevailing
interaction in the light detector, and/or pile-up events. An
event-by-event analysis of the population distributed just below
the clusters in the $\alpha$ band demonstrates that these sporadic events
are affected by signal overlapping, which can produce a
single-like event in the heat channel but a clear pile-up in the
light channel because of the much shorter time response of the
latter. The data exhibit anticorrelation between light and heat
signal amplitudes, as illustrated in the inset of Fig.
\ref{fig:scatter}. This feature was already observed in CdWO$_4$
scintillating bolometers based on cadmium with natural isotopic composition and can be used to enhance the energy resolution of the heat channel \cite{Arnaboldi:2010}. The improvement
is shown in Fig. \ref{fig:fwhm}, where the FWHM values of the most
intensive $\gamma$ peaks before and after applying the
anticorrelation correction are presented. It is evident from Fig.
\ref{fig:fwhm} that the achieved improvement is quite modest
(around 10\%) in contrast to the results of
Refs.~\cite{Gorla:2008,Arnaboldi:2010}. This may be explained by
a higher uniformity of the light collection from our smaller sample,
which is expected to make the light-heat anticorrelation less
significant. The energy resolution can be improved
further in an underground cryostat shielded against environmental
$\gamma$ radiation.
\begin{figure}[htb]
\centering
\includegraphics[width=0.48\textwidth]{5-run22_cwo_q-plot.eps}
\caption{The energy dependence of the light yield measured over
$\sim 250$~h with the 34.5 g $^{116}$CdWO$_4$ scintillating bolometer.}
\label{fig:qplot}
\end{figure}
The data of the heat-light coincidences can be transformed into
the so-called {\sl Q-plot} shown in Fig.~\ref{fig:qplot}. The
projection of the points on the y-axis can be used to evaluate
the light yield (LY) for different classes of registered events.
The LY for $\gamma$ quanta, $\beta$ particles and cosmic muons in
the energy interval $0.6-2.7$~MeV is $\sim 31$ keV/MeV; the LY for $\alpha$ particles with energy $4-7$~MeV (with the energy scale determined by a $\gamma$ calibration) is 5.5(1) keV/MeV, while the LY for nuclear recoils is even lower, i.e. 2.6(1) keV/MeV, because of the stronger quenching of the scintillation light for heavier ions (a comprehensive study
of this phenomenon for cadmium tungstate can be found in
Refs.~\cite{Bizzeti:2012,Tretyak:2010}). It is worth noting that
such high LY values have never been reported for CdWO$_4$-based
scintillating bolometers. In particular, roughly twice lower
values were obtained in Ref.~\cite{Arnaboldi:2010} (however, the
crystal used in that study was an order of magnitude larger
in volume). This excellent result is due to the twice-larger
area of the LD, the overall compact geometry of the arrangement
(which enhances the light collection), the high optical transmittance of the
material~\cite{Barabash:2011}, and the low self-absorption of the scintillation photons in our
relatively thin sample. From these LY values, one can also estimate
the quenching factors for $\alpha$'s and nuclear recoils as
0.175(3) and 0.084(3), respectively.
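As a consistency check, the quenching factors follow directly from the ratios of the light yields quoted above; a minimal sketch (the small deviation from the quoted 0.175(3) comes from the rounding of the LY values used here):

```python
# light yields from the Q-plot, in keV/MeV (rounded values from the text)
ly_gamma_beta = 31.0   # gamma quanta, beta particles and cosmic muons
ly_alpha = 5.5         # alpha particles (gamma-calibrated energy scale)
ly_recoil = 2.6        # nuclear recoils

# quenching factor = LY of the particle class relative to the gamma(beta) LY
qf_alpha = ly_alpha / ly_gamma_beta     # ~0.177, compare 0.175(3)
qf_recoil = ly_recoil / ly_gamma_beta   # ~0.084, compare 0.084(3)
```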
To evaluate the discrimination power (DP) between $\gamma(\beta)$
and $\alpha$ event distributions, the LY data shown in Fig.
\ref{fig:qplot} were used within the $2.6-7$~MeV energy range and
the $4-38$~keV/MeV LY interval (cutting most of the pile-up events in
the vicinity of the $\alpha$ clusters). The obtained distributions
were fitted by Gaussian functions to estimate their mean values
($\mu_{\gamma(\beta)}$, $\mu_{\alpha}$) and standard deviations
($\sigma_{\gamma(\beta)}$, $\sigma_{\alpha}$). After defining
$$
{\rm DP} = (\mu_{\gamma(\beta)} - \mu_{\alpha})/\sqrt{\sigma^2_{\gamma(\beta)}+\sigma^2_{\alpha}} \ ,
$$
\noindent as usually done for scintillating bolometers~\cite{Artusa:2014}, we obtain DP = 17(1) in an energy interval which includes $Q_{2\beta}$ of $^{116}$Cd. This high value of the DP, which can be further improved in underground conditions, is compatible with a full suppression of the $\alpha$-induced background in the $0\nu2\beta$ decay ROI of $^{116}$Cd.
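The DP value is straightforward to reproduce once the Gaussian fits are available. In the sketch below the fit parameters are hypothetical placeholders, chosen only to illustrate the DP $\approx 17$ scale, and are not the actual fit results:

```python
import math

def discrimination_power(mu_gb, sigma_gb, mu_a, sigma_a):
    """DP between the gamma(beta) and alpha light-yield distributions,
    defined as the separation of the means in units of the combined width."""
    return (mu_gb - mu_a) / math.sqrt(sigma_gb ** 2 + sigma_a ** 2)

# hypothetical Gaussian-fit parameters (keV/MeV), for illustration only
dp = discrimination_power(31.0, 1.2, 5.5, 0.8)
```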
\begin{figure}[htb]
\centering
\includegraphics[width=0.48\textwidth]{6-run22_cwo_spectrum_alpha.eps}
\caption{Energy spectra of $\alpha$ events accumulated by the 34.5
g $^{116}$CdWO$_4$ bolometer over 250 h of data taking. The energy
scale corresponds to $\gamma$ energy calibration. The bin width is
10~keV. The $\sim 6-7$\% shift of the $\alpha$ peaks from the nominal
$Q_{\alpha}$ values is caused by thermal quenching (see details
in Ref.~\cite{Arnaboldi:2010}).} \label{fig:alphas}
\end{figure}
The radioactive contamination of the $^{116}$CdWO$_4$ crystal was
estimated by using the energy spectrum of the $\alpha$ events,
presented in Fig.~\ref{fig:alphas}. The events were selected under
the condition that the associated LY be below 10 keV/MeV. The
peaks of $^{238}$U, $^{234}$U, and $^{210}$Po were identified in
the data. The $\alpha$ events outside the energy regions expected for U/Th
and their daughters can be explained by a surface pollution of the
$^{116}$CdWO$_4$ detector and/or of the surrounding construction
materials (which did not undergo an accurate cleaning process). We have estimated the specific activities of the identified nuclides, while for the other members of the U/Th chains only limits were obtained by using the procedure
recommended by Feldman and Cousins \cite{Feldman:1998}. The
estimations of the $^{116}$CdWO$_4$ crystal scintillator
radioactive contamination are presented in Table
\ref{tab:rad-cont}. Data on radioactive contamination of the
$^{116}$CdWO$_4$ crystal No.~1 described in Ref.~\cite{Poda:2014} are also reported.
\begin{table}[!htb]
\caption{Radioactive contamination of the $^{116}$CdWO$_4$ crystal
scintillator. Data on the radioactive contamination of the
$^{116}$CdWO$_4$ crystal No.~1 \cite{Poda:2014} are given for
comparison.} \footnotesize \centering
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Chain & Nuclide & \multicolumn{2}{|l|}{Activity (mBq/kg)} \\
\cline{3-4}
~ & (sub-chain) & This work & No.~1 \cite{Poda:2014} \\
\hline
$^{232}$Th & $^{232}$Th & $\leq0.13$ & \\
~ & $^{228}$Th & $\leq0.07$ & $0.031(3)$ \\
$^{238}$U & $^{238}$U & $0.3(1)$ & $0.5(2)$ \\
~ & $^{234}$U & $0.26(9)$ & \\
~ & $^{230}$Th & $\leq0.07$ & \\
~ & $^{226}$Ra & $\leq0.07$ & $\leq 0.005$ \\
~ & $^{210}$Po & $0.23(8)$ & $0.6(2)$ \\
$^{235}$U & $^{235}$U & $\leq0.13$ & \\
\hline
\end{tabular}
\end{center}
\label{tab:rad-cont}
\end{table}
It should be noted that the $^{116}$CdWO$_4$ sample No.~1 was cut
from the same crystal boule; however, our sample was closer to the
beginning of the boule. Therefore, the hint of a lower specific
activity of $^{238}$U and $^{210}$Po in the present sample (see Table \ref{tab:rad-cont}) can be explained by segregation of uranium and lead ($^{210}$Po originating from $^{210}$Pb) during the CdWO$_4$
crystal growth process. As observed in Refs.
\cite{Barabash:2011,Danevich:2013,Poda:2013} the radioactive
contamination of the crystal boule by $^{228}$Th increases along
the boule from the growth cone to the bottom. Besides, the
contamination of the residuals after the crystal growth by
potassium, radium and thorium exceeds the boule contamination
significantly. These features indicate a strong segregation of the
radioactive impurities during the CdWO$_4$ crystal growth process.
Moreover, the radioactive contamination of sample No.~3, cut from
the $^{116}$CdWO$_4$ boule (here we again refer the reader to Fig.~1
in Ref.~\cite{Barabash:2016}), was significantly reduced (in
particular, by one order of magnitude in thorium) after
recrystallization by the low-thermal-gradient Czochralski
method~\cite{Barabash:2016}. These results demonstrate encouraging
prospects for an enriched CdWO$_4$ crystal-scintillator production
with a radiopurity level satisfying the requirements of a
next-generation bolometric experiment.
\section{Conclusions}
A cadmium tungstate crystal scintillator with a mass of 34.5~g, enriched in $^{116}$Cd to 82\%, was tested over $\sim 250$~h at 18 mK as
a scintillating bolometer in an above-ground cryogenic laboratory.
The $^{116}$CdWO$_4$ detector exhibits a high energy resolution
(FWHM $\approx 2-7$~keV for $0.2-2.6$~MeV $\gamma$ quanta), and almost
complete discrimination between $\beta$($\gamma$) and $\alpha$
events (a discrimination power of $\sim 17$ was achieved in the
$2.6-7.0$~MeV region). These remarkable results were obtained in
spite of a significant pile-up effect related to the above-ground
location of the set-up.
We have found that the energy-to-voltage conversion and the time characteristics of the $^{116}$CdWO$_4$ signals are similar to those observed earlier with CdWO$_4$-based bolometers produced from non-enriched material and sharing a similar detector design. The light yield observed in the present investigation (31 keV/MeV for $\gamma$ quanta) is about twice as high as that
given in the literature for CdWO$_4$ scintillating bolometers
thanks to the high optical quality of the enriched scintillator
and an efficient collection of the scintillation light in the detector
module.
The radioactive contamination of the $^{116}$CdWO$_4$ crystal by
$^{238}$U, $^{234}$U, and $^{210}$Po is estimated to be on the
level of $\sim0.3$ mBq/kg each, which is lower than that in the
$^{116}$CdWO$_4$ crystal samples cut from the same crystal boule
farther away from the growth cone (near which the studied sample was
obtained). This observation indicates a segregation of uranium and lead
in the CdWO$_4$ crystal growth process. For other $\alpha$
emitters belonging to the U/Th chains only limits on the level of
$0.07-0.13$~mBq/kg were obtained.
The present work demonstrates that $^{116}$CdWO$_4$ scintillating
bolometers represent one of the most promising technologies for
a next-generation bolometric experiment aiming at exploring the inverted
hierarchy region of the neutrino mass, as discussed in the
CUPID project.
\begin{acknowledgements}
The group from the Institute for Nuclear Research (Kyiv, Ukraine) was supported in part by the IDEATE International Associated Laboratory (LIA). This
research was supported in part by the joint scientific project
``Development of Cd-based scintillating bolometers to search for
neutrinoless double-beta decay of $^{116}$Cd'' in the framework of
the PICS (Program of International Cooperation in Science) of CNRS
in the years 2016--2018. This work was also supported by a public grant overseen by the French National Research Agency (ANR) as part of the ``Investissement d'Avenir'' program, through the IDI 2015 project funded by the IDEX Paris-Saclay, ANR-11-IDEX-0003-02.
\end{acknowledgements}
\section{Introduction}
Game semantics is a versatile paradigm for giving semantics to a wide spectrum of
programming languages~\cite{AM98b,MT16b}.
It is well-suited for studying the observational equivalence of programs and, more generally, the
behaviour of a program in an arbitrary context.
About 20 years ago, it was discovered that the game semantics of a program can sometimes be expressed by a finite automaton or another simple computational model~\cite{GMcC00}.
This led to algorithmic uses of game semantics
for program analysis and verification~\cite{AGMO04,DGL06,GM06,BG08,HO09,HMO12,KMOWW12,MRT15,Dim16,Dim17}.
Thus far, these advances concerned mostly languages without concurrency.
In this work, we consider Finitary Idealized Concurrent Algol ($\fica$) and its fully abstract game semantics~\cite{GM08}.
It is a call-by-name language with higher-order features, side-effects, and concurrency implemented by a parallel composition operator and semaphores.
It is finitary since, as is common in this context, base types are restricted to finite domains.
Quite surprisingly, the game semantics of this language is arguably simpler than that for the language without concurrency.
The challenge comes from algorithmic considerations.
Following the successful approach from the sequential case~\cite{GMcC00,Ong02,Mur04,MW08,CBMO19}, the first step is to find an automaton model abstracting the phenomena appearing in the semantics.
The second step is to obtain program fragments from structural restrictions on the automaton model.
In this paper we take both steps.
We propose \emph{leafy automata}: an automaton model working on nested data.
Data are used to represent pointers in plays, while the nesting of data reflects structural dependencies in the use of pointers.
Interestingly, the structural dependencies in plays boil down to imposing a tree structure on the data.
We show a close correspondence between the automaton model and the game semantics of $\fica$.
For every program, there is a leafy automaton whose traces (data words) represent precisely the plays in the semantics of the program (Theorem~\ref{thm:trans}).
Conversely, for every leafy automaton, there is a program whose semantics consists of plays representing the traces of the automaton (Theorem~\ref{thm:toalgol}).
(The latter result holds modulo a saturation condition we explain later.)
This equivalence shows that leafy automata are a suitable model for studying decidability
questions for $\fica$.
Not surprisingly, due to their close connection to $\fica$, leafy automata turn out to have an undecidable emptiness problem.
We use the undecidability argument to identify the source, namely communication across several unbounded levels, i.e., levels in which nodes can produce an unbounded number of children during the lifetime of the automaton.
To eliminate the problem, we introduce a restricted variant of leafy automata, called \emph{local},
in which every other level is bounded and communication is allowed to cross only one unbounded node.
Emptiness for such automata can be decided via reduction to a number of instances of the Petri net reachability problem.
We also identify a fragment of $\fica$, dubbed \emph{local} $\fica$ ($\sfica$), which maps onto
local leafy automata.
It is based on restricting the distance between semaphore and variable
declarations and their uses inside the term.
This is the first non-rudimentary fragment of $\fica$ for which some verification tasks are decidable.
Overall, this makes it possible to use local leafy automata to analyse $\sfica$ terms and decide associated verification tasks.
\paragraph{Related work} Concurrency, even with only first-order recursion, leads to
undecidability~\cite{Ram00}.
Intuitively, one can encode the intersection of languages of two pushdown automata.
From the automata side, much research on decidable cases has concentrated on bounding interactions between stacks representing different threads of the program~\cite{QR05,TMP09,AGK14}.
From the game semantics side, the only known decidable fragment of $\fica$ is Syntactic Control
of Concurrency (SCC)~\cite{GMO06}, which imposes bounds
on the number of threads in which arguments can be used.
This restriction makes it possible to represent the game semantics of programs by finite automata.
In our work, we propose automata models that correspond to unbounded interactions with
arbitrary $\fica$ contexts; importantly, this remains true even when the terms are restricted to $\sfica$.
Leafy automata are a model of computation over an infinite alphabet.
This area has been explored extensively, partly motivated by applications to database theory, notably XML~\cite{Sch07}.
In this context, nested data first appeared in~\cite{BB07}, where the authors considered shuffle expressions as
the defining formalism. Later on, data automata~\cite{BDMSS11} and class memory automata~\cite{BS10} were adapted to nested data in~\cite{DHLT14,CMO15}.
They are similar to leafy automata in that the automaton is allowed to access states related to previous uses of data values at various depths.
What distinguishes leafy automata is that the lifetime of a data value is precisely defined and follows a question and answer discipline in correspondence with game semantics.
Leafy automata also feature run-time ``zero-tests'', activated when reading answers.
For most models over nested data, the emptiness problem is undecidable.
To achieve decidability, the authors in~\cite{DHLT14,CMO15} relax the acceptance conditions so that the emptiness problem can eventually be recast as a coverability problem for a well-structured transition system.
In~\cite{CHMO15}, this result was used to show decidability of equivalence for a first-order (sequential) fragment of Reduced ML.
On the other hand, in~\cite{BB07} the authors relax the order of letters in words, which leads to an analysis based on semi-linear sets.
Both of these restrictions are too strong to permit the semantics of $\fica$, because of the game-semantic $\wait$ condition,
which corresponds to waiting until all sub-processes terminate.
Another orthogonal strand of work on concurrent higher-order programs is based on higher-order recursion schemes~\cite{Hague13,KI13}. Unlike $\fica$, they feature recursion but the computation is purely functional over a single atomic type~$o$.
\paragraph{Structure of the paper:}
In the next two sections we recall $\fica$ and its game semantics from~\cite{GM08}.
The following sections introduce leafy automata ($\la$) and their local variant ($\sla$),
where we also analyse the associated decision problems and, in particular, show that the non-emptiness problem for $\sla$ is decidable.
Subsequently, we give a translation from $\fica$ to $\la$ (and back) and define a fragment $\sfica$ of $\fica$ which can be translated into $\sla$.
\section{Finitary Idealized Concurrent Algol ($\fica$)}
\label{sec:fica}
Idealized Concurrent Algol~\cite{GM08} is a paradigmatic language combining higher-order with imperative computation in the style of Reynolds~\cite{Rey78}, extended to concurrency with parallel composition ($\parc$) and binary semaphores.
We consider its finitary variant $\fica$ over the finite datatype $\makeset{0,\ldots,\imax}$ ($\imax\ge 0$)
with loops but no recursion.
Its types $\theta$ are generated by the grammar
\[
\theta::=\beta\mid \theta\rarr\theta\qquad\qquad
\beta::=\comt\mid\expt\mid\vart\mid\semt
\]
where
$\comt$ is the type of commands;
$\expt$ that of $\makeset{0,\ldots,\imax}$-valued expressions;
$\vart$ that of assignable variables;
and $\semt$ that of semaphores.
The typing judgments are displayed in Figure~\ref{fig:icatypes}.
$\skipcom$ and $\divcom_\theta$ are constants representing termination and divergence respectively,
$i$ ranges over $\{0,$ $\cdots,$ $\imax\}$,
and $\mathbf{op}$ represents unary arithmetic operations, such as successor or predecessor (since we work over a finite datatype, operations of higher arity can be defined using conditionals).
Variables and semaphores can be declared locally via $\mathbf{newvar}$ and $\mathbf{newsem}$.
Variables are dereferenced using $!M$, and semaphores are manipulated using two (blocking) primitives,
$\grb{s}$ and $\rls{s}$, which grab and release the semaphore respectively.
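To make the grammar above concrete, the types $\theta$ can be modelled as a small algebraic datatype. The sketch below is ours (not part of the original development) and prints arrow types with the usual right-associative convention:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Base:
    name: str  # one of "com", "exp", "var", "sem"

@dataclass(frozen=True)
class Arrow:
    left: "Base | Arrow"
    right: "Base | Arrow"

def show(t):
    """Pretty-print a type, parenthesising only left-nested arrows."""
    if isinstance(t, Base):
        return t.name
    left = show(t.left)
    if isinstance(t.left, Arrow):
        left = f"({left})"
    return f"{left} -> {show(t.right)}"

com, exp = Base("com"), Base("exp")
```

For instance, `show(Arrow(Arrow(exp, com), com))` yields a second-order type, while iterated right nesting stays parenthesis-free, matching the convention used implicitly in the typing rules.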
\begin{figure}[t]
\begin{center}
\AxiomC{$\phantom{\beta}$}
\UnaryInfC{$\Gamma\vdash\skipcom:\comt $}
\DisplayProof\quad
\AxiomC{$\phantom{\beta}$}
\UnaryInfC{$\Gamma\vdash\divcom_\theta:\theta $}
\DisplayProof\quad
\AxiomC{$\phantom{\beta}$}
\UnaryInfC{$\Gamma\vdash i:\expt$}
\DisplayProof\quad
\AxiomC{$\seq{\Gamma}{M:\expt}$}
\UnaryInfC{$\seq{\Gamma}{\arop{M}:\expt}$}
\DisplayProof\\[2ex]
\AxiomC{$\Gamma\vdash M:\comt$}
\AxiomC{$\Gamma\vdash N:\beta$}
\BinaryInfC{$\Gamma \vdash M;N:\beta$}
\DisplayProof\quad
\AxiomC{$\Gamma\vdash M:\comt$}
\AxiomC{$\Gamma\vdash N:\comt$}
\BinaryInfC{$\Gamma \vdash M\parc N:\comt$}
\DisplayProof\\[2ex]
\AxiomC{$\Gamma\vdash M:\expt$}
\AxiomC{$\Gamma\vdash N_1,N_2:\beta$}
\BinaryInfC{$\Gamma\vdash \cond{M}{N_1}{N_2}:\beta$}
\DisplayProof\quad
\AxiomC{$\Gamma\vdash M:\expt$}
\AxiomC{$\Gamma\vdash N:\comt$}
\BinaryInfC{$\Gamma\vdash \while{M}{N}:\comt$}
\DisplayProof\\[2ex]
\AxiomC{$\phantom{\beta}$}
\UnaryInfC{$\Gamma, x:\theta \vdash x: \theta$}
\DisplayProof\quad
\AxiomC{$\Gamma,x:\theta\vdash M:\theta'$}
\UnaryInfC{$\Gamma\vdash\lambda x. M:\theta\rarr\theta' $}
\DisplayProof\quad
\AxiomC{$\Gamma\vdash M:\theta\rarr\theta'$}
\AxiomC{$\Gamma\vdash N:\theta$}
\BinaryInfC{$\Gamma \vdash M N:\theta'$}
\DisplayProof\\[2ex]
\AxiomC{$\Gamma\vdash M:\vart$}
\AxiomC{$\Gamma\vdash N:\expt$}
\BinaryInfC{$\Gamma \vdash M\,\raisebox{0.065ex}{:}{=}\, N:\comt$}
\DisplayProof\quad
\AxiomC{$\Gamma\vdash M:\vart$}
\UnaryInfC{$\Gamma \vdash !M:\expt$}
\DisplayProof\\[2ex]
\AxiomC{$\Gamma\vdash M:\semt$}
\UnaryInfC{$\Gamma \vdash \rls{M}:\comt$}
\DisplayProof\,\,
\AxiomC{$\Gamma\vdash M:\semt$}
\UnaryInfC{$\Gamma \vdash \grb{M}:\comt$}
\DisplayProof\\[2ex]
\AxiomC{$\Gamma, x:\vart\vdash M:\comt,\expt$}
\UnaryInfC{$\Gamma\vdash \newin{x\,\raisebox{0.065ex}{:}{=}\, i}{M:\comt,\expt}$}
\DisplayProof\quad
\AxiomC{$\Gamma,x:\semt\vdash M:\comt,\expt$}
\UnaryInfC{$\Gamma\vdash \newsem{x\,\raisebox{0.065ex}{:}{=}\, i}{M:\comt,\expt}$}
\DisplayProof
\end{center}
\caption{$\fica$ typing rules\label{fig:icatypes}}
\vspace{-4mm}
\end{figure}
The small-step operational semantics of $\fica$ is reproduced in Appendix~\ref{apx:opsem}.
In what follows, we shall write $\divcom$ for $\divcom_\comt$.
\medskip
We are interested in \emph{contextual equivalence} of terms.
Two terms are contextually equivalent if there is no context that can distinguish them with respect to may-termination.
More formally, a term $\seq{}{M:\comt}$ is said to terminate, written $M\!\Downarrow$, if there
exists a terminating evaluation sequence from $M$ to $\skipcom$.
Then \emph{contextual (may-)equivalence} ($\Gamma\vdash M_1\cong M_2$) is defined by:
for all contexts $\ctx$ such that $\seq{}{\ctx[M_i]:\comt}$ ($i=1,2$),
$\ctx[M_1]\!\Downarrow$ if and only if $\ctx[M_2]\!\Downarrow$.
The force of this notion is quantification over all contexts.
Since contextual equivalence becomes undecidable for $\fica$ very quickly~\cite{GMO06}, we will look at
the special case of testing equivalence with terms that always diverge,
e.g. given $\Gamma\vdash M: \theta$, is it the case that
$\seq{\Gamma}{M\cong\divcom_\theta}$?
Intuitively, equivalence with an
always-divergent term means that $\ctx[M]$ will never converge (must diverge) if $\ctx$ uses $M$.
At the level of automata, this will turn out to correspond to the emptiness problem.
\label{ex:verification} In verification tasks, with the above equivalence test, we can check whether uses of $M$ can ever lead to undesirable states. For example, for a given term $\seq{x:\vart}{M:\theta}$, the term
\[
\seq{f:\theta\rarr\comt}{\newin{x\,\raisebox{0.065ex}{:}{=}\, 0}{(f (M)\, ||\, \cond{!x=13}{\,\skipcom\,}{\,\divcom}})}
\]
will be equivalent
to $\divcom$ only when $x$ is never set to $13$ during a terminating execution.
Note that, because of quantification over all contexts, $f$ may use $M$ an arbitrary number of times,
also concurrently or in nested fashion, which is a very expressive form of quantification.
\section{Leafy automata\label{sec:leafy}}
We would like to be able to represent the game semantics of $\fica$ using automata.
To that end, we introduce \emph{leafy automata} ($\la$).
They are a variant of automata over nested data, i.e. a type of automata that read finite sequences of letters of the form $(t,d_0 d_1\cdots d_j)$ ($j\in \N$), where $t$ is a \emph{tag} from a finite set $\Sigma$ and each $d_i$ ($0\le i\le j$) is a \emph{data value} from an infinite set $\D$.
In our case, $\D$ will have the structure of a countably infinite forest and the sequences $d_0\cdots d_j$ will correspond to branches of a tree.
Thus, instead of $d_0\cdots d_j$, we can simply write $d_j$, because $d_j$ uniquely determines its ancestors: $d_0,\dots,d_{j-1}$.
The following definition captures the technical assumptions on~$\D$.
\begin{definition}
$\D$ is a countably infinite set equipped with a function $\predc:\D\rarr\D\cup\{\bot\}$ (the \emph{parent} function) such that the following conditions hold.
\begin{itemize}
\item Infinite branching: $\predc^{-1}(\{d_\bot\})$ is infinite for any $d_\bot\in\D\cup\{\bot\}$.
\item Well-foundedness: for any $d\in\D$, there exists $i\in\N$, called the \emph{level of $d$}, such that $\predc^{i+1}(d)=\bot$.
Level-$0$ data values will be called \emph{roots}.
\end{itemize}
\end{definition}
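To make the technical assumptions concrete, the following minimal Python sketch (our own encoding, not part of the paper) models $\D$ as finite non-empty tuples of integers, with the parent function dropping the last component:

```python
# Sketch: D modelled as finite non-empty tuples of ints; ⊥ encoded as None.

def parent(d):
    """The parent function pred : D -> D ∪ {⊥}."""
    return d[:-1] if len(d) > 1 else None

def level(d):
    """The least i such that pred^{i+1}(d) = ⊥; roots have level 0."""
    return len(d) - 1

_next = {}
def fresh_child(d_bot=None):
    """Infinite branching: an as-yet-unused child of d_bot (a root if None)."""
    n = _next.get(d_bot, 0)
    _next[d_bot] = n + 1
    return (n,) if d_bot is None else d_bot + (n,)
```

Both conditions of the definition hold by construction: every node has infinitely many potential children, and the level of a tuple is one less than its length.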
In order to define configurations of leafy automata, we will rely on finite subtrees of $\D$, whose nodes will be labelled with states.
We say that $T\subseteq \D$ is a subtree of $\D$ iff $T$ is closed ($\forall x \in T \colon \pred{x}\in T\cup\{\bot\}$) and rooted ($\exists!x\in T\colon\pred{x}=\bot$).
Next we give the formal definition of a level-$k$ leafy automaton.
Its set of states $Q$ will be divided into layers, written $Q^{(i)}$ ($0\le i\le k$), which will be used to label level-$i$ nodes.
We will write $Q^{(i_1,\cdots, i_k)}$ to abbreviate $Q^{(i_1)}\times\cdots \times Q^{(i_k)}$, excluding any components $Q^{(i_j)}$ where $i_j < 0$. We distinguish $Q^{(0,-1)} = \{\dagger\}$.
\begin{definition}
A level-$k$ leafy automaton ($k$-$\la$) is a tuple $\Aut=\abra{\Sigma,k,Q,\delta}$, where
\begin{itemize}
\item $\Sigma=\Sigma_\Q+\Sigma_\A$ is a finite alphabet, partitioned into questions and answers;
\item $k\geq 0$ is the level parameter;
\item $Q= \sum_{i=0}^k Q^{(i)}$ is a finite set of states, partitioned into sets $Q^{(i)}$ of level-$i$ states;
\item $\delta=\delta_\Q+\delta_\A$ is a finite transition function, partitioned into question- and answer-related transitions;
\item $\delta_\Q=\sum_{i=0}^k \delta^{(i)}_{\Q}$, where $\delta^{(i)}_{\Q} \subseteq Q^{(0,1,\cdots,i-1)}\times \Sigma_{\Q} \times Q^{(0,1,\cdots,i)}$ for $0\le i\le k$;
\item $\delta_{\A}=\sum_{i=0}^k \delta^{(i)}_{\A}$, where $\delta^{(i)}_\A \subseteq Q^{(0,1,\cdots,i)}\times \Sigma_{\A} \times Q^{(0,1,\cdots,i-1)}$ for $0\le i\le k$.
\end{itemize}
\end{definition}
Configurations of $\la$ are of the form $(D,E,f)$, where $D$ is a finite subset of~$\D$ (consisting of data values that have been encountered so far),
$E$ is a finite subtree of $\D$, and $f:E\rarr Q$ is a level-preserving function, i.e.\ if $d$ is a level-$i$ data value then $f(d)\in Q^{(i)}$.
A leafy automaton starts from the empty configuration $\kappa_0=(\emptyset,\emptyset,\emptyset)$ and proceeds according to $\delta$,
making two kinds of transitions. Each kind manipulates a single leaf: for questions one new leaf is added, for answers one leaf is removed.
Let the current configuration be $\kappa=(D,E,f)$.
\begin{itemize}
\item
On reading a letter $(t,d)$ with $t\in\Sigma_\Q$ and $d\not\in D$ a fresh level-$i$ data value, the automaton adds a new leaf $d$ to the configuration and updates the states on the branch leading to $d$.
So it changes its configuration to $\kappa'=(D\cup\{d\},E\cup\{d\},f')$ provided that $\pred{d}\in E$ and $f'$ satisfies:
\[
(f(\predc^i(d)),\cdots,f(\pred{d}), t, f'(\predc^i(d)),\cdots,f'(\pred{d}),f'(d))\in\delta^{(i)}_{\Q},
\]
$\dom{f'}=\dom{f} \cup\{d\}$, and $f'(x)=f(x)$ for all $x\not\in\{\pred{d},\cdots, \predc^i(d)\}$.
\item On reading a letter $(t,d)$ with $t\in\Sigma_\A$ and $d\in E$ a level-$i$ data value that is a leaf, the automaton deletes $d$ and updates the states on the branch leading to $d$.
So it changes its configuration to $\kappa'=(D,E\setminus\{d\},f')$ where $f'$ satisfies:
\[
(f(\predc^i(d)),\cdots,f(\pred{d}), f(d), t, f'(\predc^i(d)),\cdots,f'(\pred{d}))\in\delta^{(i)}_{\A},
\]
$\dom{f'}=\dom{f}\setminus\{d\}$ and $f'(x)=f(x)$ for all $x\not\in\{\pred{d},\cdots, \predc^i(d)\}$.
\item Initially $D$, $E$ and $f$ are empty; on reading $(t,d)$ with $\dagger \trans{t} q^{(0)} \in \delta^{(0)}_{\Q}$, the automaton proceeds to $\kappa' = (\{d\},\{d\},\{d \mapsto q^{(0)}\})$. Answers at level $0$, which remove the root, are treated symmetrically.
\end{itemize}
In all cases, we write $\kappa\trans{(t,d)}\kappa'$.
Note that a single transition can only change states on the branch ending in $d$. Other parts of the tree remain unchanged.
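The two transition kinds can be prototyped directly. The Python sketch below is our own encoding (names such as `read_letter` are not from the paper): data values are tuples as in the sketch of $\D$, `delta_q[i]` holds triples $((q^{(0)},\dots,q^{(i-1)}), t, (r^{(0)},\dots,r^{(i)}))$ and `delta_a[i]` the corresponding answer triples.

```python
# Configurations are (D, E, f): D, E sets of tuples (data values), f a dict.

def ancestors(d):
    return [d[:j] for j in range(1, len(d))]

def read_letter(cfg, delta_q, delta_a, t, d, question):
    D, E, f = cfg
    i = len(d) - 1                               # level of d
    anc = ancestors(d)
    if question:                                 # add a fresh leaf d
        if d in D or (i == 0 and E) or (i > 0 and d[:-1] not in E):
            return None
        old = tuple(f[x] for x in anc)
        for qs, tag, rs in delta_q.get(i, ()):
            if qs == old and tag == t:
                f2 = {**f, **dict(zip(anc + [d], rs))}
                return (D | {d}, E | {d}, f2)
    else:                                        # remove the leaf d
        if d not in E or any(e[:-1] == d for e in E):
            return None
        old = tuple(f[x] for x in anc + [d])
        for qs, tag, rs in delta_a.get(i, ()):
            if qs == old and tag == t:
                f2 = {x: q for x, q in f.items() if x != d}
                f2.update(zip(anc, rs))
                return (D, E - {d}, f2)
    return None
```

Note that, as in the definition, a question requires $d$ to be globally fresh (membership in $D$), while an answer only requires $d$ to be a current leaf of $E$, and only the states on the branch of $d$ are rewritten.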
\begin{example}
Below we illustrate the effect of $\la$ transitions.
Let $D_1=\{d_0,d_1,d_1'\}$ and $d_2\not\in D_1$.
Let $\kappa_1=(D_1,E_1, f_1)$,
$\kappa_2=(D_1\cup\{d_2\},E_2, f_2)$, $\kappa_3=(D_1\cup\{d_2\}, E_1,f_1)$,
where the trees $E_1, E_2$ are displayed below and node annotations of the form $(q)$ correspond to values
of $f_1, f_2$, e.g. $f_1(d_0)=q^{(0)}$.
\[
\xymatrix@C=1mm@R=2mm{
&&d_0 (q^{(0)})\ar@{-}[ld]\ar@{-}[rd] &\\
E_1,f_1: &d_1' (q) & & d_1 (q^{(1)})\\
}\qquad\qquad
\xymatrix@C=1mm@R=3mm{
&&d_0 (r^{(0)})\ar@{-}[ld]\ar@{-}[rd] &\\
E_2,f_2: &d_1' (q) & & d_1 (r^{(1)})\ar@{-}[d]\\
&&& d_2(r^{(2)})\\}
\]
For $\kappa_1$ to evolve into $\kappa_2$ (on $(t,d_2)$),
we need $(q^{(0)}, q^{(1)}, t, r^{(0)}, r^{(1)}, r^{(2)})\in \delta^{(2)}_\Q$.
On the other hand, to go from $\kappa_2$ to $\kappa_3$ (on $(t,d_2)$),
we want $(r^{(0)},$ $r^{(1)},$ $r^{(2)},$ $t,$ $q^{(0)},$ $q^{(1)})\in \delta^{(2)}_\A$.
\end{example}
\begin{definition}
A \emph{trace} of a leafy automaton $\Aut$ is a sequence
$w=l_1\cdots l_h\in (\Sigma\times\D)^\ast$ such that $\kappa_0\trans{l_1}\kappa_1\dots\kappa_{h-1}\trans{l_h}\kappa_h$
where $\kappa_0=(\emptyset,\emptyset,\emptyset)$.
A configuration $\kappa=(D,E,f)$ is \emph{accepting} if
$E$ and $f$ are empty.
A trace $w$ is accepted by $\Aut$ if there is a non-empty sequence of transitions as above with $\kappa_h$ accepting.
The set of traces (resp. accepted traces) of $\Aut$ is denoted
by $\trace{\Aut}$ (resp. $\lang{\Aut}$).
\end{definition}
\begin{remark}
When writing states, we will often use superscripts $(i)$ to indicate the intended level.
So, $(q^{(0)},\cdots, q^{(i-1)}) \trans{t} (r^{(0)},\cdots, r^{(i)})$
refers to
$(q^{(0)},\cdots, q^{(i-1)}, t,$ $r^{(0)},$ $\cdots, r^{(i)})\in\delta^{(i)}_{\Q}$; similarly for $\delta^{(i)}_{\A}$ transitions.
For $i=0$, this degenerates to $\dagger\trans{t} r^{(0)}$ and $r^{(0)}\trans{t} \dagger$.
\end{remark}
\begin{example}\label{ex:la}
Consider the $1$-$\la$ over $\Sigma_\Q=\{\move{start},\move{inc}\}, \Sigma_\A=\{\move{dec},\move{end}\}$.
Let $Q^{(0)}=\{0\}$, $Q^{(1)}=\{0\}$ and define $\delta$ by:
$\dagger\trans{\move{start}} 0$,\quad $0\trans{\move{inc}} (0,0)$,\quad $(0,0)\trans{\move{dec}} 0$,\quad $0\trans{\move{end}} \dagger$.
The accepted traces of this $1$-$\la$ have the form $(\move{start},d_0)\,\, \big(||_{i=0}^n\, (\move{inc}, d_1^i)\, (\move{dec},d_1^i)\big)\,\, (\move{end},d_0)$, where $||_{i=0}^n$ denotes an arbitrary interleaving of the $n+1$ enclosed pairs,
i.e.\ they are valid histories of a single {non-negative} counter (histories such that the counter starts and ends at $0$). In this case, all traces are simply prefixes of such words.
\end{example}
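The accepted traces of the example can be recognised mechanically. The following Python sketch is our own (tags as strings, data values as opaque hashable names); it enforces both the counter discipline and the freshness of data values:

```python
# Accepts exactly: start, an interleaving of fresh inc/dec pairs, then end,
# i.e. a run of a single non-negative counter beginning and ending at 0.

def counter_accepts(trace):
    used, leaves = set(), set()   # data seen so far / open level-1 leaves
    root, closed = None, False
    for t, d in trace:
        if closed:                                    # nothing after "end"
            return False
        if t == "start":
            if root is not None or d in used: return False
            root = d; used.add(d)
        elif t == "inc":                              # add a fresh leaf
            if root is None or d in used: return False
            leaves.add(d); used.add(d)
        elif t == "dec":                              # remove an open leaf
            if d not in leaves: return False
            leaves.remove(d)
        elif t == "end":                              # root must be a leaf
            if d != root or leaves: return False
            closed = True
        else:
            return False
    return closed
```

The `used` set reflects that questions must carry globally fresh data values, so a decremented "token" can never be re-incremented under the same name.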
\begin{remark}\label{rem:lawork}
Note that, whenever a leafy automaton reads $(t,d)$ ($t\in\Sigma_\Q$)
and the level of $d$ is greater than $0$, then it must have read
a unique question $(t',\pred{d})$ earlier.
Also, observe that an $\la$ trace contains at most two occurrences of the same data value, such that the first is paired with a question and the second is paired with an answer. Because the question and the answer share the same data value, we can think of the answer as answering the question, like in game semantics. Indeed, justification pointers from answers to questions will be represented in this way in Theorem~\ref{thm:trans}. Finally, we note that $\la$ traces are invariant under tree automorphisms of $\D$.
\end{remark}
\begin{lemma}\label{lem:la2-1}
The emptiness problem for $2$-$\la$ is undecidable. For $1$-$\la$, it is reducible to the reachability problem for VASS in polynomial time and there is a reverse reduction in exponential time, so it is decidable in Ackermannian time~\cite{LerouxS19} but not elementary~\cite{CzerwinskiLLLM19}.
\end{lemma}
\begin{proof}
For $2$-$\la$ we reduce from the halting problem on two-counter-machines.
Two counters can be simulated using configurations of the form
\[\xymatrix@C=.5em@R=.5em{
& & &q\ar@{-}[lld]\ar@{-}[rrd] & &\\
& c_1\ar@{-}[ld]\ar@{-}[d]\ar@{-}[rd] & & & & c_2\ar@{-}[ld]\ar@{-}[d]\ar@{-}[rd] \ar@{-}[rrd] &\\
\star & \star & \star & & \star & \star &\star & \star
}\]
where there are two level-$1$ nodes, one for each counter.
The number of children at level $2$ encodes the counter value.
Zero tests can be implemented by removing the corresponding level-$1$ node and creating a new one.
This is possible only when the node is a leaf, i.e., it does not have children at level~$2$.
The state of the 2-counter machine can be maintained at level~$0$, the states at level $1$ indicate the name of the counter, and the level-$2$ states are irrelevant.
The translation from $1$-$\la$ to VASS is straightforward and based on representing $1$-$\la$ configurations by the state at level~$0$ and, for each state at level~$1$, the count of its occurrences. The reverse translation is based on the same idea and extends the encoding of a non-negative counter in Example~\ref{ex:la}; the exponential blow-up is simply due to the fact that vector updates in VASS are given in binary, whereas $1$-$\la$ transitions operate on single branches.\qed
\end{proof}
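The forward reduction can be illustrated concretely. In the following Python sketch (our own encoding, covering only the level-$1$ moves; level-$0$ start/end moves fix the initial and final control states), a $1$-$\la$ configuration is abstracted to a VASS-style pair of the level-$0$ state and a vector counting occurrences of each level-$1$ state:

```python
# Sketch of the 1-LA -> VASS idea: data values are forgotten, and only the
# multiset of level-1 states is kept, as a counter vector.
from collections import Counter

def vass_step(conf, tr):
    """tr = ("Q", q0, t, r0, r1): spawn a leaf in state r1;
       tr = ("A", q0, q1, t, r0): consume a leaf in state q1."""
    q0, counts = conf
    if tr[0] == "Q":
        _, p0, _t, r0, r1 = tr
        if p0 != q0:
            return None
        c = Counter(counts); c[r1] += 1
        return (r0, c)
    _, p0, p1, _t, r0 = tr
    if p0 != q0 or counts[p1] == 0:
        return None                 # a decrement below zero is blocked
    c = Counter(counts); c[p1] -= 1
    return (r0, c)
```

This is sound because level-$1$ transitions read and write only the level-$0$ state and the state of the single leaf involved, so the identity of the data value is irrelevant, exactly as in the proof above.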
\begin{lemma}\label{lem:la1}
$1$-$\la$ equivalence is undecidable.
\end{lemma}
\begin{proof}
We provide a direct reduction from the halting problem for 2-counter machines, where both counters are required to be zero initially as well as finally. The main obstacle is that implementing zero tests as in the proof of the first part of Lemma~\ref{lem:la2-1} is not available because we are restricted to leafy automata with levels $0$ and $1$ only. To overcome it, we exploit the power of the equivalence problem where one of the $1$-$\la$ will have the task not of correctly simulating zero tests but recognising zero tests that are incorrect. The full argument can be found in Appendix~\ref{apx:leafy}.\qed
\end{proof}
\section{From leafy automata to $\fica$}
\label{sec:tofica}
In this section, we show how to represent leafy automata in $\fica$. Let $\Aut=\abra{\Sigma,k,Q,\delta}$ be a leafy automaton.
We shall assume that $\Sigma,Q\subseteq\{0,\cdots,\imax\}$ so that we can
encode the alphabet and states using type $\expt$.
We will represent a trace $w$ generated by $\Aut$ by a play $\play{w}$, which simulates
each transition with two moves, by $O$ and $P$ respectively. The child-parent links in $\D$ will be represented by justification pointers. We refer the reader to Appendix~\ref{apx:tofica} for details. Below we just state the lemma that
identifies the types that correspond to our encoding, where
we write $\theta^{\imax+1}\rarr\beta$ for $\underbrace{\theta \rarr\cdots\rarr\theta}_{\imax+1}\rarr\beta$.
\begin{lemma}
Let $\Aut$ be a $k$-$\la$ and $w\in\trace{\Aut}$. Then $\play{w}$ is a play in $\sem{\theta_k}$,
where $\theta_0=\comt^{\imax+1}\rarr\expt$ and $\theta_{i+1}=(\theta_i\rarr\comt)^{\imax+1}\rarr\expt$ ($i\ge 0$).
\end{lemma}
Before we state the main result, we recall from~\cite{GM08} that strategies corresponding to $\fica$ terms
satisfy a closure condition known as~\emph{saturation}: swapping two adjacent moves in a play belonging
to such a strategy yields another play from the same strategy,
as long as the swap yields a play and it is not the case
that the first move is by O and the second one by P.
Thus, saturated strategies express causal dependencies of P-moves on O-moves.
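This swap closure can be sketched in Python (a simplification of ours: moves carry only polarities and payloads, and justification pointers, which determine whether a swap yields a play, are ignored):

```python
# Saturation closure sketch: a play is a tuple of (polarity, move) pairs.
# An adjacent pair (m, n) may be swapped unless m is an O-move and n a
# P-move, reflecting that P-moves may causally depend on earlier O-moves.

def saturate(plays):
    closed, todo = set(plays), list(plays)
    while todo:
        s = todo.pop()
        for i in range(len(s) - 1):
            m, n = s[i], s[i + 1]
            if not (m[0] == "O" and n[0] == "P"):
                t = s[:i] + (n, m) + s[i + 2:]
                if t not in closed:
                    closed.add(t)
                    todo.append(t)
    return closed
```

Note the asymmetry: a P-move followed by an O-move may be permuted, but an O-move followed by a P-move may not, since that would erase a potential causal dependency.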
Consequently, one cannot expect to find a $\fica$-term such that the corresponding
strategy is the smallest strategy containing $\{\,\play{w}\,|\,w\in \trace{\Aut}\,\}$.
Instead, the best one can aim for is the following result.
\begin{theorem}\label{thm:toalgol}
Given a $k$-$\la$ $\Aut$, there exists a $\fica$ term $\seq{}{M_\Aut:\theta_k}$ such that $\sem{\seq{}{M_\Aut:\theta_k}}$
is the smallest saturated strategy containing $\{\,\play{w}\,|\,w\in \trace{\Aut}\,\}$.
\end{theorem}
\begin{proof}[Sketch]
Our assumption $Q\subseteq\{0,\cdots,\imax\}$ allows us to maintain $\Aut$-states in the memory of $\fica$-terms.
To achieve $k$-fold nesting, we rely on the higher-order structure of the term:
$\lambda f^{(0)}.f^{(0)}(\lambda f^{(1)}.f^{(1)}(\lambda f^{(2)}.f^{(2)}(\cdots \lambda f^{(k)}. f^{(k)})))$.
In fact, instead of the single variables $f^{(i)}$, we shall use sequences
$f^{(i)}_0\cdots f^{(i)}_\imax$, so that a question $t_{\Q}^{(i)}$ read by $\Aut$ at level $i$ can be simulated
by using variable $f^{(i)}_{t_{\Q}^{(i)}}$ (using our assumption $\Sigma\subseteq\{0,\cdots,\imax\}$).
Additionally, the term contains state-manipulating code that enables moves only if they are
consistent with the transition function of $\Aut$.\qed
\end{proof}
\section{Game semantics\label{sec:gs}}
Game semantics for programming languages involves
two players, called Opponent (O) and Proponent (P),
and the sequences of moves made by them can be viewed as interactions between
a program (P) and a surrounding context (O).
In this section, we briefly present
the fully abstract game model for $\fica$ from~\cite{GM08}, which we rely on in the paper.
The games are defined using an auxiliary concept of an arena.
\begin{definition}
An \emph{arena} $A$ is a
triple $\langle{M_A,\lambda_A,\vdash_A}\rangle$ where:
\begin{itemize}
\item $M_A$ is a set of \emph{moves};
\item $\lambda_A:M_A\rarr\makeset{O,P}\times\makeset{Q,A}$
is a function determining for each $m\in M_A$ whether
it is an \emph{Opponent} or a \emph{Proponent move},
and a \emph{question} or an \emph{answer};
we write $\lambda_A^{OP},\lambda_A^{QA}$ for the composite
of $\lambda_A$ with respectively the first and second projections;
\item $\vdash_A$ is a binary relation on $M_A$, called \emph{enabling},
satisfying: if $m\vdash_A n$ for no $m$ then $\lambda_A (n)= (O,Q)$,
if $m\vdash_A n$ then $\lambda_A^{OP}(m)\neq\lambda_A^{OP}(n)$,
and if $m\vdash_A n$ then $\lambda_A^{QA}(m)=Q$.
\end{itemize}
\end{definition}
We shall write $I_A$ for the set of all moves of $A$ which
have no enabler; such moves are called \emph{initial}.
Note that an initial move must be an Opponent question.
In arenas used to interpret base types all questions are initial and P-moves
answering them are detailed in the table below, where $i\in\makeset{0,\cdots,\imax}$.
\[\renewcommand\arraystretch{0.9}\begin{array}{c|c|c||c|c|c}
~\word{Arena}~ & ~\word{O-question}~ & ~\word{P-answers}~ &
~\word{Arena}~ & ~\word{O-question}~ &~\word{P-answers}~\\
\hline
\sem{\comt} & \mrun & \mdone &
\sem{\expt} & \mq & i\\[1ex]
\hline
\sem{\vart} & \mread & i & \sem{\semt}
& \mgrb & \mok \\
& \mwrite{i} & \mok &
& \mrls & \mok
\end{array}
\]
More complicated types are interpreted inductively using
the \emph{product} ($A\times B$)
and \emph{arrow} ($A\Rightarrow B$) constructions, given below.
\[\begin{array}{rcl}
M_{A\times B} &=& M_A+M_B\\
\lambda_{A\times B}&=& [\lambda_A,\lambda_B]\\
\vdash_{A\times B} &=& \vdash_A+\vdash_B\\
\end{array}\qquad
\begin{array}{rcl}
M_{A\Rightarrow B} &=& M_A+M_B\\
\lambda_{A\Rightarrow B}&=& [\abra{\lambda_A^{PO},\lambda_A^{QA}},\lambda_B]\\
\vdash_{A\Rightarrow B} &=& \vdash_A+\vdash_B+\makeset{\,(b,a)\mid b\in I_B\textrm{ and }a\in I_A}\\
\end{array}\]
where $\lambda_A^{PO}(m)= O$ iff $\lambda_A^{OP}(m)=P$.
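The two constructions can be prototyped directly. In this Python sketch (our own encoding: an arena is a triple of a move set, a labelling dict and an enabling set, with disjoint union modelled by tagging moves with `"L"`/`"R"`):

```python
# Arena = (M, lam, en): moves, labelling m -> (OP, QA), enabling pairs.

def initial(A):
    M, lam, en = A
    return {n for n in M if all(y != n for _, y in en)}

def _tagged(side, A):
    M, lam, en = A
    return ({(side, m) for m in M},
            {(side, m): lam[m] for m in M},
            {((side, m), (side, n)) for m, n in en})

def product(A, B):
    MA, lA, eA = _tagged("L", A)
    MB, lB, eB = _tagged("R", B)
    return (MA | MB, {**lA, **lB}, eA | eB)

def arrow(A, B):
    MA, lA, eA = _tagged("L", A)
    MB, lB, eB = _tagged("R", B)
    flip = {"O": "P", "P": "O"}
    lam = {m: (flip[op], qa) for m, (op, qa) in lA.items()}
    lam.update(lB)
    en = eA | eB | {(b, a) for b in initial((MB, lB, eB))
                           for a in initial((MA, lA, eA))}
    return (MA | MB, lam, en)
```

As in the definition, `arrow` flips the O/P polarity on the left component and lets the initial moves of $B$ enable the initial moves of $A$.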
We write $\sem{\theta}$ for the arena corresponding to type $\theta$. Below we draw (the enabling relations of) $A_1=\sem{\comt\rarr\comt\rarr\comt}$
and $A_2=\sem{(\vart\rarr\comt)\rarr\comt}$ respectively,
using superscripts to distinguish copies of the same move
(the use of superscripts is consistent with our future use of tags in Definition~\ref{def:tags}).
\[
\xymatrix@C=1mm@R=1mm{O &&&\mrun\ar@{-}[d]\ar@{-}[ld]\ar@{-}[lld]\\
P &\mrun^2\ar@{-}[d] &\mrun^1\ar@{-}[d] &\mdone\\
O &\mdone^2 &\mdone^1}\qquad\qquad
\xymatrix@C=1mm@R=1mm{O &&&&\mrun\ar@{-}[d]\ar@{-}[ld]\\
P && &\mrun^1\ar@{-}[d]\ar@{-}[ld]\ar@{-}[lld] &\mdone\\
O &\mread^{11}\ar@{-}[d]&\mwrite{i}^{11}\ar@{-}[d] & \mdone^1\\
P &i^{11}& \mok^{11}}
\]
Given an arena $A$, we specify next what it means to be a legal play in $A$.
For a start, the moves that players exchange will have to form
a \emph{justified sequence}, which is a finite sequence of moves
of $A$ equipped with pointers. Its first move is always initial and has no
pointer, but each subsequent move $n$ must have a unique pointer to an
earlier occurrence of a move $m$ such that $m\vdash_A n$. We say that
$n$ is (explicitly) justified by $m$ or, when $n$ is an answer, that
$n$ answers $m$.
If a question does not have an answer in a justified sequence, we say
that it is \emph{pending} in that sequence.
Below we give two justified sequences from $A_1$ and $A_2$ respectively.
\[
\rnode{A}{\mrun}\,\,\rnode{B}{\mrun}^1\justh{B}{A}\,\,\rnode{C}{\mrun}^2\justh{C}{A}\, \,\rnode{D}{\mdone^1}\justn{D}{B}{140}\,\, \rnode{E}{\mdone^2}\justn{E}{C}{140}\, \,\rnode{F}{\mdone}\justn{F}{A}{155}\qquad
\rnode{A}{\mrun} \,\, \rnode{B}{\mrun^1}\justj{B}{A}\,\, \rnode{C}{\mread^{11}}\justh{C}{B}\,\, \rnode{D}{0^{11}}\justf{D}{C}\, \,
\rnode{E}{\mwrite{1}^{11}}\justh{E}{B}\,\, \rnode{F}{\mok^{11}}\justh{F}{E}\,\, \rnode{G}{\mread^{11}}\justn{G}{B}{160} \,\, \rnode{H}{1^{11}}
\justh{H}{G}
\]
Not all justified sequences are valid. In order to constitute a legal
play, a justified sequence must satisfy a well-formedness condition
that reflects the ``static'' style of concurrency of our programming
language: any started sub-processes must end before the parent process terminates.
This is formalised as follows,
where the letters $q$ and $a$ refer to question- and answer-moves
respectively, while $m$ denotes an arbitrary move.
\begin{definition}
The set $P_A$ of \emph{plays over $A$}
consists of the justified sequences $s$ over $A$ that satisfy
the two conditions below.
\begin{description}
\item[FORK]: In any prefix $s'= \cdots\rnode{A}{q} \cdots\rnode{B}{m}\justf{B}{A}$ of $s$, the question $q$ must be pending when $m$ is played.
\item[WAIT]: In any prefix $s'= \cdots\rnode{A}{q} \cdots\rnode{B}{a}\justf{B}{A}$ of $s$, all questions justified by $q$ must be answered.
\end{description}
\end{definition}
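The two conditions can be checked mechanically. In the Python sketch below (our own encoding: a sequence is a list of `(kind, just)` pairs with `kind` in `{"Q","A"}` and `just` the index of the justifying move, `None` for the initial move; arena-level enabling is not modelled):

```python
# is_play checks FORK and WAIT on a justified sequence.

def is_play(seq):
    answered = [False] * len(seq)      # answered[i]: question i was answered
    for k, (kind, just) in enumerate(seq):
        if just is None:
            if k != 0 or kind != "Q":  # only the first move is unjustified
                return False
            continue
        jkind, _ = seq[just]
        if just >= k or jkind != "Q" or answered[just]:
            return False               # FORK: the justifier must be pending
        if kind == "A":
            for i in range(k):         # WAIT: children of `just` answered
                ikind, ijust = seq[i]
                if ikind == "Q" and ijust == just and not answered[i]:
                    return False
            answered[just] = True
    return True
```

On the first justified sequence displayed above ($\mrun\,\mrun^1\,\mrun^2\,\mdone^1\,\mdone^2\,\mdone$) the check succeeds, whereas answering the root while a child question is still open fails WAIT.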
It is easy to check that the justified sequences given above are plays.
A subset $\sigma$ of $P_A$ is \emph{O-complete} if, whenever $o$ is an O-move,
$s\in \sigma$ and $so\in P_A$ imply $so\in\sigma$.
\begin{definition}
A \emph{strategy} on $A$, written $\sigma:A$, is a
prefix-closed O-complete subset of $P_A$.
\end{definition}
Suppose
$\Gamma=\{x_1:\theta_1,\cdots, x_l:\theta_l\}$
and $\seq{\Gamma}{M:\theta}$ is a $\fica$-term.
Let us write $\sem{\seq{\Gamma}{\theta}}$ for the arena $\sem{\theta_1}\times\cdots\times\sem{\theta_l}\Rightarrow\sem{\theta}$.
In~\cite{GM08} it is shown how to assign
a strategy on $\sem{\seq{\Gamma}{\theta}}$ to any $\fica$-term
$\seq{\Gamma}{M:\theta}$.
We write $\sem{\seq{\Gamma}{M}}$ to refer to that strategy.
For example, $\sem{\seq{\Gamma}{\divcom}}=\{\epsilon, \mrun\}$
and $\sem{\seq{\Gamma}{\skipcom}} = \{\epsilon,\mrun,\rnode{A}{\mrun}\, \rnode{B}{\mdone}\justf{B}{A}\}$.
Given a strategy $\sigma$,
we denote by $\comp\sigma$ the set of non-empty \emph{complete} plays of $\sigma$, i.e. those in which all questions have been
answered.
The game-semantic interpretation $\sem{\cdots}$
turns out to provide a fully abstract model
in the following sense.
\begin{theorem}[\cite{GM08}]\label{thm:full}
$\Gamma\vdash M_1\cong M_2$ iff $\comp{\sem{\Gamma\vdash M_1}}=\comp{\sem{\Gamma\vdash M_2}}$.
\end{theorem}
In particular, since we have $\comp{\sem{\seq{\Gamma}{\divcom_\theta}}}=\emptyset$,
$\seq{\Gamma}{M:\theta}$ is equivalent to $\divcom_\theta$ iff
$\comp{\sem{\seq{\Gamma}{M}}}=\emptyset$.
\section{Local $\fica$}
\label{sec:tosla}
In this section we identify a family of $\fica$ terms
that can be translated into $\sla$ rather than $\la$. To achieve boundedness at even levels,
we remove $\mathsf{while}$\footnote{The automaton for $\while{M}{N}$ may repeatedly visit the automata for
$M$ and $N$, generating an unbounded number of children at level $0$ in the process.}.
To achieve restricted communication, we will constrain
the distance between a variable declaration and its use.
Note that in the translation, the application of function-type variables increases $\la$ depth.
So in $\sfica$ we will allow the link
between the binder $\mathbf{newvar}/\mathbf{newsem}\, x$ and each use of $x$ to ``cross'' at most one occurrence of a free variable.
For example, the following terms
\begin{itemize}
\item $\newin{x\,\raisebox{0.065ex}{:}{=}\, 0}{x\,\raisebox{0.065ex}{:}{=}\, 1\, ||\, f(x\,\raisebox{0.065ex}{:}{=}\, 2)}$,
\item $\newin{x\,\raisebox{0.065ex}{:}{=}\, 0}{f(\newin{y\,\raisebox{0.065ex}{:}{=}\, 0}{f(y\,\raisebox{0.065ex}{:}{=}\, 1)\, ||\, x\,\raisebox{0.065ex}{:}{=}\,{!y}})}$
\end{itemize}
will be allowed, but not $\newin{x\,\raisebox{0.065ex}{:}{=}\, 0}{f(f(x\,\raisebox{0.065ex}{:}{=}\, 1))}$.
To define the fragment formally, given a term $Q$ in $\beta\eta$-normal form,
we use a notion of the \emph{applicative depth of a variable $x:\beta$ ($\beta=\vart,\semt$)
inside $Q$}, written $\ade{x}{Q}$ and defined inductively by the table below. The applicative depth is increased whenever a functional
identifier is applied to a term containing~$x$.
\[\begin{array}{lcl}
\textrm{shape of $Q$} && \ade{x}{Q} \\
\hline
x && 1\\
y\, (y\neq x),\, \skipcom,\,\divcom,\, i &\quad & 0\\
\arop{M},\, !M,\, \rls{M},\, \grb{M} && \ade{x}{M}\\
M;N,\, M||N,\, M\,\raisebox{0.065ex}{:}{=}\, N,\,\while{M}{N} & & \max(\ade{x}{M},\ade{x}{N})\\
{\cond{M}{N_1}{N_2}} && \max(\ade{x}{M},\ade{x}{N_1},\ade{x}{N_2})\\
{\lambda y.M}, \newin{/\mathbf{newsem}\,y\,\raisebox{0.065ex}{:}{=}\, i}{M} && \ade{x}{M[z/y]},\ \textrm{where $z$ is fresh}\\
{f M_1\cdots M_k} && 1+ \max(\ade{x}{M_1},\cdots,\ade{x}{M_k})
\end{array}
\]
Note that, in our examples above, the applicative depth of $x$ is $2$ in the first two cases and $3$ in the third.
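The inductive clauses translate directly into code. Here is a toy Python sketch (our own AST encoding, covering only the clauses needed for the examples: variables, the depth-preserving binary constructs, and application of a free identifier):

```python
# ade(x, Q): applicative depth of variable x in a toy term Q.
# Terms: ("var", name) | ("op2", M, N) for ;, ||, := etc. | ("app", f, args)

def ade(x, q):
    tag = q[0]
    if tag == "var":
        return 1 if q[1] == x else 0
    if tag == "op2":                       # depth-preserving constructs
        return max(ade(x, q[1]), ade(x, q[2]))
    if tag == "app":                       # free identifier applied to args
        return 1 + max(ade(x, m) for m in q[2])
    raise ValueError(tag)
```

For instance, modelling $x\,{:}{=}\,1\, ||\, f(x\,{:}{=}\,2)$ as `("op2", ("var","x"), ("app","f",[("var","x")]))` yields depth $2$, while the nested application $f(f(x\,{:}{=}\,1))$ yields depth $3$.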
\begin{definition}[Local $\fica$]
A $\fica$-term $\seq{\Gamma}{M:\theta}$ is \emph{local} if its $\beta\eta$-normal form
does not contain any occurrences of $\mathbf{while}$ and,
for every subterm of the normal form of the shape $\newin{/\mathbf{newsem}\, x\,\raisebox{0.065ex}{:}{=}\, i}{N}$, we have $\ade{x}{N} \le 2$.
We write $\lfica$ for the set of local $\fica$ terms.
\end{definition}
\begin{theorem}\label{thm:trans2}
For any $\lfica$-term $\seq{\Gamma}{M:\theta}$,
the automaton $\clg{A}_M$ obtained from the translation in Theorem~\ref{thm:trans}
can be presented as an $\lla$.
\end{theorem}
\begin{proof}[Sketch]
We argue by induction that the constructions from Theorem~\ref{thm:trans} preserve presentability as an $\lla$.
The case of parallel composition involves running copies of the automata for $M_1$ and $M_2$ in parallel without communication,
with their root states stored as a pair at level $0$. Note, though, that each of the automata transitions independently
of the state of the other automaton.
In consequence, if the automata for $M_1$ and $M_2$ are $\lla$,
so is the automaton for $M_1\, ||\, M_2$.
The branching bound after the construction is the sum of the two bounds for $M_1$ and $M_2$.
For $\seq{\Gamma}{\newin{x\,\raisebox{0.065ex}{:}{=}\, i}{M}}$, because the term is in $\lfica$,
so is $\seq{\Gamma,x:\vart}{M}$ and we have $\ade{x}{M}\le 2$.
Then we observe that in the translation of $\seq{\Gamma,x:\vart}{M:\theta}$ given by Theorem~\ref{thm:trans}, the questions related to $x$
(namely $\mwrite{i}^{(x,\rho)}$ and $\mread^{(x,\rho)}$) correspond to creating leaves at levels $1$ or $3$, while the corresponding answers ($\mok^{(x,\rho)}$ and $i^{(x,\rho)}$ respectively)
correspond to removing such leaves.
In the construction for $\seq{\Gamma}{\newin{x}{M}}$,
such transitions need access to the root (to read/update the current state) and the root is indeed
within the allowable range: in an $\lla$
transitions creating/destroying leaves at level $3$ can read/write at level $0$.
All other transitions (not labelled by $x$) proceed as in $M$ and
need not consult the root for additional information about the current state, as it is propagated. Consequently, if $M$ is represented by an $\lla$ then the interpretation of $\newin{x\,\raisebox{0.065ex}{:}{=}\, i}{M}$ is also an $\lla$. The construction does not affect the branching bound, because
the resultant runs can be viewed as a subset of runs of the automaton for $M$, i.e. those in which reads and writes are related.
For $f M_h\cdots M_1$, we observe that the construction first
creates two nodes at levels $0$ and $1$, and the node at level $1$ is used
to run an unbounded number of copies of (the automaton for) $M_i$.
The copies do not need access to the states stored at levels $0$ and $1$,
because they are never modified when the copies are running.
Consequently, if each $M_i$ can be translated into an $\lla$,
the outcome of the construction in Theorem~\ref{thm:trans} is also an $\lla$. The new branching bound is the maximum over the bounds from $M_1,\cdots, M_h$, because at even levels children are produced as in $M_i$ and
level $0$ produces only one child.
\qed
\end{proof}
\begin{corollary}
For any $\lfica$-term $\seq{\Gamma}{M:\theta}$, the problem of
determining whether $\comp{\sem{\seq{\Gamma}{M}}}$ is empty
is decidable.
\end{corollary}
Theorems~\ref{thm:trans2} and \ref{thm:sla-decidable} imply the above. Thanks to Theorem~\ref{thm:full}, it is decidable if a $\lfica$ term is equivalent to a term that always diverges (cf.\ example on page~\pageref{ex:verification}).
In the case of inequivalence, our results could also be
applied to extract a distinguishing context: first by
extracting the witnessing trace from the argument
underpinning Theorem~\ref{thm:sla-decidable}, and then by feeding it to
the Definability Theorem (Theorem 41 of~\cite{GM08}). This is
a valuable property, given that bugs in the concurrent setting
are difficult to replicate.
\section{Local leafy automata ($\lla$)}
\label{sec:lla}
Here we identify a restricted variant of $\la$
for which the emptiness problem is decidable. We start with a technical definition.
\begin{definition}
A $k$-$\la$ is \emph{bounded} at level $i$ ($0\le i\le k$) if
there is a bound $b$ such that each node at level $i$ can create at most $b$ children during a run.
We refer to $b$ as the \emph{branching bound}.
\end{definition}
Note that we are defining a ``global'' bound on the number of children that a node at level $i$ may create across a whole run, rather than a ``local'' bound on the number of children a node may have in a given configuration.
To motivate the design of $\lla$, we observe that the undecidability argument (for the emptiness problem) for $2$-$\la$ used two consecutive levels ($0$ and $1$) that are not bounded.
For the node at level $0$, this corresponded to the number of zero tests, while an unbounded counter is simulated at level $1$.
In the following we will eliminate consecutive unbounded levels by introducing an alternating pattern of bounded and unbounded levels. Even-numbered levels ($i=0, 2, \ldots$) will be bounded, while odd-numbered levels will be unbounded. Observe in particular that the root (level $0$) is bounded. As we will see later, this alternation reflects the term/context distinction in game semantics: the levels corresponding to terms are bounded, and the levels corresponding to contexts are unbounded.
With this restriction alone, it is possible to reconstruct the undecidability argument for $4$-$\la$, as two unbounded levels may still communicate. Thus we introduce a restriction on how many levels a transition can read and modify.
\begin{itemize}
\item when adding or removing a leaf at an odd level $2i+1$, the automaton will be able to
access levels $2i$, $2i-1$ and $2i-2$; while
\item when adding or removing a leaf at an even level $2i$, the automaton
will be able to access levels $2i-1$ and $2i-2$.
\end{itemize}
In particular, when an odd level produces a leaf, it will not be able to see the previous odd level.
The above constraints mean that the transition functions $\delta^{(i)}_{\Q}, \delta^{(i)}_{\A}$ can be
presented in the more concise form given below.
\[
\delta^{(i)}_{\Q}\subseteq \begin{cases}
Q^{(i-2, i-1)} \times \Sigma_\Q \times Q^{(i-2, i-1, i)}
& \text{if $i$ is even} \\
Q^{(i-3, i-2, i-1)} \times \Sigma_\Q \times Q^{(i-3, i-2, i-1, i)}
& \text{if $i$ is odd}
\end{cases}
\]
\[
\delta^{(i)}_\A \subseteq\begin{cases}
Q^{(i-2, i-1, i)} \times \Sigma_\A \times Q^{(i-2, i-1)}
& \text{if $i$ is even} \\
Q^{(i-3, i-2, i-1, i)} \times \Sigma_\A \times Q^{(i-3, i-2, i-1)}
& \text{if $i$ is odd}
\end{cases}
\]
In terms of the previous notation developed for $\la$,
$(q^{(i-2)}, q^{(i-1)},x,r^{(i-2)}, r^{(i-1)},$ $r^{(i)})\in\delta^{(i)}_\Q$ represents
all tuples of the form $(\vec{q}, q^{(i-2)}, q^{(i-1)}, x, \vec{q}, r^{(i-2)},r^{(i-1)}, r^{(i)})$,
where $\vec{q}$ ranges over $Q^{(0,\cdots,i-3)}$.
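As a concrete reading of this convention, the following Python sketch (our illustration; the encoding of transitions as tuples is ours) expands a concise even-level question transition into the set of full tuples it represents, by ranging over the lower-level states.

```python
from itertools import product

def expand_even_question(concise, lower_levels):
    """concise: (q_im2, q_im1, x, r_im2, r_im1, r_i), as in the text.
    lower_levels: list of the state sets Q^(0), ..., Q^(i-3).
    Returns the represented full transitions (source branch, tag, target branch)."""
    q2, q1, x, r2, r1, ri = concise
    full = set()
    for qs in product(*lower_levels):   # vec q ranges over Q^(0,...,i-3)
        full.add((qs + (q2, q1), x, qs + (r2, r1, ri)))
    return full
```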
\begin{definition}
A level-$k$ \emph{local leafy automaton} ($k$-$\lla$) is a $k$-$\la$ whose transition function admits the above-mentioned
presentation and which is bounded at all even levels.
\end{definition}
\begin{theorem}
\label{thm:sla-decidable}
The emptiness problem for $\lla$ is decidable.
\end{theorem}
\begin{proof}[Sketch]
Let $b$ be a bound on the number of children created by each even-level node during a run.
The critical observation is that, once a node $d$ at even level $2i$ has been created, all subsequent actions of descendants of $d$ access (read and/or write) the states at levels $2i-1$ and $2i-2$ at most $2b$ times. The shape of the transition function dictates that this can happen only when child nodes at level $2i+1$ are added or removed.
In addition, the locality property ensures that the automaton will never access levels $< 2i-2$ at the same time as node $d$ or its descendants.
We will make use of these facts to construct \emph{summaries} for nodes on even levels, which completely describe such a node's lifetime, from its creation as a leaf until its removal, during which it performs at most $2b$ reads-writes of the parent and grandparent states.
A summary is a sequence of quadruples of states: two pairs of states at levels $2i-2$ and $2i-1$. The first pair records the states we expect to find at these levels, while the second records the states to which we update them. Hence a summary at level $2i$ is a complete record of a valid sequence of read-writes and state changes during the lifetime of a node at level $2i$.
We proceed by induction and show how to calculate the complete set of summaries at level $2i$ given the complete set of summaries at level $2i+2$. We construct a program for deciding whether a given sequence is a summary at level $2i$.
This program can be evaluated via Vector Addition Systems with States (VASS). Since we can finitely enumerate all candidate summaries at level $2i$, this gives us a way to compute summaries at level $2i$. Proceeding this way, we finally calculate summaries at level $2$.
At this stage, we can reduce the emptiness problem for the given $\lla$ to a reachability test on a VASS.
The complete argument is given in Appendix~\ref{apx:sla}.
\qed
\end{proof}
Let us remark also that the problem becomes undecidable if we either remove the boundedness restriction or allow transitions to look one level further.
\section{Conclusion and further work}
We have introduced leafy automata, $\la$, and shown that they correspond to the game
semantics of Finitary Idealized Concurrent Algol ($\fica$).
The automata formulation makes combinatorial challenges posed by the equivalence problem explicit.
This is exemplified by a very transparent undecidability proof of the emptiness problem for $\la$.
Our hope is that $\la$ will make it possible to discover interesting fragments of $\fica$ for which some variant of the equivalence problem is decidable.
We have identified one such instance, namely local leafy automata ($\lla$), and a fragment of $\fica$ that can be translated to them.
The decidability of the emptiness problem for $\lla$ implies decidability of a simple instance of the equivalence problem.
This in turn allows us to decide some verification questions, as in the example on page~\pageref{ex:verification}.
Since these types of questions involve quantification over all contexts, the use of a fully-abstract semantics appears essential to solve them.
The obvious line of future work is to find some other subclasses of $\la$ with decidable emptiness problem.
Another interesting target is to find an automaton model for the call-by-value setting, where answers enable questions~\cite{AM97b,HY97}.
It would also be worth comparing our results with abstract machines~\cite{FG13}, the
Geometry of Interaction~\cite{LTY17}, and the $\pi$-calculus~\cite{BHY01}.
\section{From FICA to LA}
\label{sec:tola}
Recall from Section~\ref{sec:gs} that, to interpret base types, game semantics uses moves from the set
\[\begin{array}{rcl}
\moveset &=& M_{\sem{\comt}}\cup M_{\sem{\expt}}\cup M_{\sem{\vart}} \cup M_{\sem{\semt}}\\
&=&\{\, \mrun,\, \mdone,\, \mq,\, \mread,\, \mgrb,\, \mrls,\, \mok\, \}\cup \{\,i,\, \mwrite{i}{}\,|\, 0\le i \le \max\,\}.
\end{array}\]
The game semantic interpretation of
a term-in-context $\seq{\Gamma}{M:\theta}$ is a strategy over the arena $\sem{\seq{\Gamma}{\theta}}$,
which is obtained through product and arrow constructions, starting from arenas corresponding to base types.
As both constructions rely on the disjoint sum, the moves from $\sem{\seq{\Gamma}{\theta}}$ are derived
from the base types present in types inside $\Gamma$ and $\theta$.
To indicate the exact occurrence of a base type from which each move originates, we will annotate elements of $\moveset$ with
a specially crafted scheme of superscripts.
Suppose $\Gamma=\{x_1:\theta_1,\cdots, x_l:\theta_l\}$.
The superscripts will have one of two forms, where $\vec{i}\in\N^\ast$ and $\rho\in\N$:
\begin{itemize}
\item $(\vec{i},\rho)$ will be used to represent moves from $\theta$;
\item $(x_v\vec{i}, \rho)$ will be used to represent moves from $\theta_v$ ($1\le v\le l$).
\end{itemize}
The annotated moves will be written as $m^{(\vec{i},\rho)}$ or $m^{(x_v\vec{i},\rho)}$, where $m\in\moveset$.
We will sometimes omit $\rho$ on the understanding that this represents $\rho=0$.
Similarly, when $\vec{i}$ is omitted, the intended value is~$\epsilon$. Thus, $m$ stands for $m^{(\epsilon,0)}$.
The next definition explains how the $\vec{i}$ superscripts are
linked to moves from $\sem{\theta}$.
Given $X\subseteq \{ m^{(\vec{i},\rho)} \,|\, \vec{i}\in\N^\ast,\,\rho\in\N\}$ and $y\in \N\cup \{x_1,\cdots, x_l\}$,
we let $yX = \{m^{(y\vec{i},\rho)}\,|\, m^{(\vec{i},\rho)}\in X\}$.
\begin{definition}\label{def:tags}
Given a type $\theta$, the corresponding alphabet $\alp{\theta}$ is defined as follows
\[\begin{array}{rcl}
\alp{\beta}&=&\{\, m^{(\epsilon,\rho)}\,|\, m\in M_{\sem{\beta}},\,\rho\in\N\,\}\qquad \beta=\comt,\expt,\vart,\semt\\
\alp{\theta_h\rarr\ldots\rarr\theta_1\rarr\beta}&=& \bigcup_{u=1}^h (u\alp{\theta_u}) \cup \alp{\beta}
\end{array}\]
For $\Gamma=\{x_1:\theta_1,\cdots, x_l:\theta_l\}$,
the alphabet $\alp{\seq{\Gamma}{\theta}}$ is defined to be
$\alp{\seq{\Gamma}{\theta}}=\bigcup_{v=1}^l (x_v \alp{\theta_v}) \cup \alp{\theta}$.
\end{definition}
\begin{example}
The alphabet $\alp{\seq{f:\comt\rarr\comt, x:\comt}{\comt}}$ is
\[\{ \mrun^{(f1,\rho)}, \mdone^{(f1,\rho)}, \mrun^{(f,\rho)}, \mdone^{(f,\rho)}, \mrun^{(x,\rho)},\mdone^{(x,\rho)},\mrun^{(\epsilon,\rho)},\mdone^{(\epsilon,\rho)}\,|\, \rho\in \N \}.\]
\end{example}
To represent the game semantics of terms-in-context, of the form $\seq{\Gamma}{M:\theta}$,
we are going to use \emph{finite subsets} of $\alp{\seq{\Gamma}{\theta}}$ as alphabets in leafy automata.
The subsets will be finite, because $\rho$ will be bounded.
Note that $\alp{\theta}$ admits a natural partitioning into questions and answers, depending on whether the underlying move is a question or answer.
We will represent plays using data words in which the underpinning
sequence of tags will come from an alphabet as defined above.
Superscripts and data are used to represent justification pointers.
Intuitively, we represent occurrences of questions with data values.
Pointers from answers to questions just refer to these values.
Pointers from questions use bounded indexing with the help of~$\rho$.
Initial question-moves do not have a pointer and to represent such questions we simply use $\rho=0$.
For non-initial questions,
we rely on the tree structure of $\D$ and use $\rho$ to indicate the ancestor of the currently read data value that we mean to point at.
Consider a trace $w (t_i,d_i)$ ending in a non-initial question, where $d_i$ is a level-$i$ data value and $i>0$.
In our case, we will have $t_i\in\alp{\seq{\Gamma}{\theta}}$, i.e.
$t_i=m^{(\cdots, \rho)}$.
By Remark~\ref{rem:lawork}, trace $w$ contains unique occurrences of questions
$(t_0,d_0), \cdots, (t_{i-1},d_{i-1})$ such that $\pred{d_j}=d_{j-1}$ for $j=1,\cdots, i$.
The pointer from $(t_i,d_i)$ goes to one of these questions, and we use $\rho$ to represent
the scenario in which the pointer goes to $(t_{i-(1+\rho)},d_{i-(1+\rho)})$.
Pointers from answer-moves to question-moves are represented
simply by using the same data value in both moves (in this case we use $\rho=0$).
We will also use $\epsilon$-tags $\eq$ (question) and $\ea$ (answer), which do not contribute moves to the represented play. Each $\eq$ will always be answered with $\ea$. Note that the use of $\rho,\eq,\ea$ means that several data words may represent the same play (see Examples~\ref{ex:play},~\ref{ex:play2}).
\begin{example}\label{ex:play}
Suppose that $d_0=\pred{d_1}, d_1=\pred{d_2}=\pred{d_2'}, d_2=\pred{d_3}$, $d_2'=\pred{d_3'}$.
Then the data word
$(\mrun,d_0)$ $(\mrun^f,d_1)$ $(\mrun^{f1}, d_2)$ $(\mrun^{f1}, d_2')$ $(\mrun^{(x,2)},d_3)$ $(\mrun^{(x,2)}, d_3')$ $(\mdone^x,d_3)$,
which is short for
$(\mrun^{(\epsilon,0)},d_0)$ $(\mrun^{(f,0)},d_1)$ $(\mrun^{(f1,0)}, d_2)$ $(\mrun^{(f1,0)}, d_2')$ $(\mrun^{(x,2)},d_3)$ $(\mrun^{(x,2)}, d_3')$ $(\mdone^{(x,0)},d_3)$,
represents the play
\medskip
\[\begin{array}{ccccccc}
\rnode{Z}{\mrun} &
\rnode{A}{\mrun^f}\justf{A}{Z} &
\rnode{B}{\mrun^{f1}}\justf{B}{A} &
\rnode{C}{\mrun^{f1}}\justn{C}{A}{140} &
\rnode{D}{\mrun^{x}}\justn{D}{Z}{150} &
\rnode{E}{\mrun^{x}} \justn{E}{Z}{150} &
\rnode{F}{\mdone^x}\justn{F}{D}{140}\\
O &P&O&O&P &P & O.
\end{array}\]
\end{example}
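The decoding of pointers just described can be sketched in Python (our illustration; move tags, $\rho$ values and the predecessor map are encoded as plain data). Applied to the data word of Example~\ref{ex:play}, it recovers the pointers shown above.

```python
def decode_pointers(trace, pred, questions):
    """trace: list of (tag, rho, d) with d a data value; pred maps a data
    value to its tree predecessor (None for level-0 values); questions is
    the set of question tags. Returns, for each position, the index of the
    justifying move (None for initial questions)."""
    seen_q = {}   # data value -> index of the unique question carrying it
    out = []
    for i, (tag, rho, d) in enumerate(trace):
        if tag in questions:
            seen_q[d] = i
            if pred[d] is None:            # level-0 value: initial question
                out.append(None)
            else:
                target = d                 # climb 1 + rho ancestors
                for _ in range(1 + rho):
                    target = pred[target]
                out.append(seen_q[target])
        else:                              # answer: same data value as its question
            out.append(seen_q[d])
    return out
```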
\begin{example}
Consider the $\la$ $\Aut=\abra{\Sigma,3,Q,\delta}$,
where $Q^{(0)}=\{0,1,2\}$, $Q^{(1)}=\{0\}$,
$Q^{(2)}=\{0,1,2\}$, $Q^{(3)}=\{0\}$,
$\Sigma_\Q=\{\mrun,\mrun^f,\mrun^{f1},\mrun^{(x,2)}\}$,
$\Sigma_\A=\{\mdone,\mdone^f,$ $\mdone^{f1},\mdone^x\}$,
and $\delta$ is given by
\[\begin{array}{c}
\dagger\trans{\mrun} 0\qquad
0 \trans{\mrun^f}{(1,0)}\qquad
(1,0) \trans{\mdone^f} 2 \qquad
2 \trans{\mdone} \dagger \qquad
(1,0) \trans{\mrun^{f1}} (1,0,0)\\
(1,0,0)\trans{\mrun^{(x,2)}} (1,0,1,0)\qquad
(1,0,1,0) \trans{\mdone^{(x,0)}} (1,0,2)\qquad
(1,0,2) \trans{\mdone^{f1}} (1,0)
\end{array}\]
Then traces from $\trace{\Aut}$ represent
all plays from $\sigma=\llbracket f:\comt\rarr\comt,\, x:\comt \,\vdash\, f x
\rrbracket$, including the play from Example~\ref{ex:play},
and $\lang{\Aut}$ represents $\comp{\sigma}$.
\end{example}
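The behaviour of $\Aut$ can also be replayed mechanically. Below is a Python sketch (ours, not part of the formal development) of a leafy-automaton step function, with $\delta$ encoded as a map from the states on the branch, paired with a tag, to the updated branch states; running it on the data word of Example~\ref{ex:play} yields the expected configuration.

```python
def la_run(delta, pred, trace):
    """cfg maps existing data values to states. A question on a fresh value d
    adds d as a leaf (reading the states on the branch above it); an answer
    on an existing value removes it, provided it is a leaf."""
    cfg = {}
    for tag, d in trace:
        chain, e = [], d
        while e is not None:               # branch from the root down to d
            chain.insert(0, e)
            e = pred.get(e)
        if d not in cfg:                   # question: add leaf d
            src = tuple(cfg[v] for v in chain[:-1])
            dst = delta[(src, tag)]        # one state longer than src
        else:                              # answer: remove leaf d
            assert not any(pred.get(v) == d for v in cfg), "not a leaf"
            src = tuple(cfg[v] for v in chain)
            dst = delta[(src, tag)]        # one state shorter than src
            del cfg[d]
            chain = chain[:-1]
        for v, s in zip(chain, dst):       # update states along the branch
            cfg[v] = s
    return cfg
```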
\begin{example}\label{ex:play2}
One might wish to represent plays of
$\sigma$ from the previous Example
using data values
$d_0,d_1,d_1',d_1'',d_2,d_2'$ such that
$d_0=\pred{d_1}=\pred{d_1'}=\pred{d_1''}$, $d_1=\pred{d_2}=\pred{d_2'}$,
so that the play from Example~\ref{ex:play} is represented
by
$(\mrun^{(\epsilon,0)},d_0)$ $(\mrun^{(f,0)},d_1)$ $(\mrun^{(f1,0)}, d_2)$ $(\mrun^{(f1,0)}, d_2')$ $(\mrun^{(x,0)},d_1')$ $(\mrun^{(x,0)}, d_1'')$ $(\mdone^{(x,0)},d_1')$.
Unfortunately, it is impossible to construct a $2$-$\la$ that would
accept all representations of such plays. To achieve this,
the automaton would have to make sure that the number of $\mrun^{f1}$s is the same
as that of $\mrun^x$s. Because the former are labelled with level-$2$
values and the latter with incomparable level-$1$ values,
the only point of communication (that could be used for comparison)
is the root. However, the root cannot
accommodate unbounded information, while plays of $\sigma$
can feature an unbounded number of $\mrun^{f1}$s, which could well be consecutive.
\end{example}
Before we state the main result linking $\fica$ with leafy automata, we note some structural properties
of the automata.
Questions will create a leaf, and answers will remove a leaf.
P-moves add leaves at odd levels (questions) and remove leaves at even levels (answers),
while O-moves have the opposite effect at each level.
Finally, when removing nodes at even levels we will not need to check if a node is a leaf.
We call the last property \emph{even-readiness}.
Even-readiness is a consequence of the WAIT condition in the game semantics.
The condition captures well-nestedness of concurrent interactions --
a term can terminate only after subterms terminate.
In the leafy automata setting, this is captured by the requirement that only leaf nodes can be removed, i.e. a node can be removed only if
all of its children have been removed beforehand.
It turns out that, for \emph{P-answers} only,
this property will come for free.
Formally, whenever the automaton arrives at a configuration
$\kappa=(D,E,f)$, where $d\in E$ and there is a transition
\[
(f(\predc^{(2i)}(d)),\cdots,f(\pred{d}), f(d), t, f'(\predc^{(2i)}(d)),\cdots,f'(\pred{d}))\in\delta^{(2i)}_{\A},
\]
then $d$ is a leaf.
In contrast, our automata will not satisfy the same property for O-answers (the environment) and for such transitions it is crucial that
the automaton actually checks that only leaves can be removed.
\begin{theorem}\label{thm:trans}
For any $\fica$-term $\seq{\Gamma}{M:\theta}$, there exists an even-ready leafy automaton $\Aut_M$
over a finite subset of $\alp{\seq{\Gamma}{\theta}}+\{\eq,\ea\}$ such that the set of plays represented by data words from $\trace{\Aut_M}$
is exactly $\sem{\seq{\Gamma}{M:\theta}}$. Moreover,
$\lang{\Aut_M}$ represents $\comp{\sem{\seq{\Gamma}{M:\theta}}}$
in the same sense.
\end{theorem}
\begin{proof}[Sketch]
Because every $\fica$-term can be converted to $\beta\eta$-normal form, we use induction on the structure of such normal forms.
The base cases are: $\seq{\Gamma}{\skipcom:\comt}$ ($Q^{(0)}= \{0\}$;
$\dagger \trans{\mrun} 0$,
$0 \trans{\mdone} \dagger$), $\seq{\Gamma}{\divcom:\comt}$ ($Q^{(0)}= \{0\}$; $\dagger \trans{\mrun} 0$),
and $\seq{\Gamma}{i:\expt}$ ($Q^{(0)}= \{0\}$;
$\dagger \trans{\q} 0$, $0 \trans{i} \dagger$).
The remaining cases are inductive.
When referring to the inductive hypothesis for a subterm $M_i$,
we shall use subscripts $i$ to refer to the automata components,
e.g. $Q_i^{(j)}$, $\trans{\mm}_i$ etc.
In contrast, $Q^{(j)}$, $\trans{\mm}$ will refer to the automaton that is being constructed.
Inference lines $\frac{\qquad}{\qquad}$ will indicate that the transitions listed under the line should be added
to the new automaton provided the transitions listed above the line are present in the automaton obtained via induction hypothesis. We discuss a selection of technical cases below.
\paragraph{$\seq{\Gamma}{M_1|| M_2}$}
In this case we need to run the automata for $M_1$
and $M_2$ concurrently. To this end, their level-$0$ states will be combined ($Q^{(0)} = Q_1^{(0)} \times Q_2^{(0)}$),
but not deeper states ($Q^{(j)}= Q_1^{(j)}+Q_2^{(j)}, 1\le j\le k$).
The first group of transitions activate and terminate the two components respectively:
$\frac{\dagger\trans{\mrun}_1 q_1^{(0)}\qquad \dagger\trans{\mrun}_2 q_2^{(0)}}{\dagger\trans{\mrun}(q_1^{(0)},q_2^{(0)}) }$,
$\frac{q_1^{(0)}\trans{\mdone}_1 \dagger\qquad q_2^{(0)}\trans{\mdone}_2 \dagger}{(q_1^{(0)},q_2^{(0)})\trans{\mdone}\dagger}$.
The remaining transitions advance each component:
$\frac{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad q_2^{(0)}\in Q_2^{(0)}}{((q_1^{(0)},q_2^{(0)}), \cdots, q_1^{(j)}) \trans{\mm} ((r_1^{(0)},q_2^{(0)}),\cdots, r_1^{(j')})}$,
$\frac{q_1^{(0)}\in Q_1^{(0)}\qquad (q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})
}{((q_1^{(0)},q_2^{(0)}), \cdots, q_2^{(j)}) \trans{\mm} ((q_1^{(0)},r_2^{(0)}),\cdots, r_2^{(j')})}$, where $\mm\neq\mrun,\mdone$.
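The lifting of the left component's transitions can be sketched as follows (a Python illustration in our own encoding of transitions as tuples; the symmetric lifting for $M_2$ is analogous):

```python
def lift_left(trans1, Q2_root):
    """trans1: transitions of M1 as (src_states, tag, dst_states) tuples,
    with the level-0 state first; Q2_root: level-0 states of M2."""
    lifted = set()
    for src, tag, dst in trans1:
        for q2 in Q2_root:                 # M2's root component is untouched
            lifted.add((((src[0], q2),) + src[1:], tag,
                        ((dst[0], q2),) + dst[1:]))
    return lifted
```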
\paragraph{$\seq{\Gamma}{\newin{x\,\raisebox{0.065ex}{:}{=}\, i}{M_1}}$}
By~\cite{GM08}, the semantics of this term is obtained from $\sem{\seq{\Gamma,x}{M_1}}$ by
\begin{enumerate}
\item restricting to plays in which the moves $\mread^x$, $\mwrite{n}^x$ are followed immediately by answers,
\item selecting those plays in which each answer to a $\mread^x$-move is consistent
with the preceding $\mwrite{n}^x$-move (or equal to $i$, if no $\mwrite{n}^x$ was made),
\item erasing all moves related to $x$, e.g. those of the form $m^{(x,\rho)}$.
\end{enumerate}
To implement 1., we will lock the automaton after each $\mread^x$- or $\mwrite{n}^x$-move, so that only an answer to that move can be played next. Technically, this will be done by adding an extra bit (lock) to the level-$0$ state.
To deal with 2., we keep track of the current value of $x$, also at level $0$.
This makes it possible to ensure that
answers to $\mread^x$ are consistent with the stored value and that $\mwrite{n}^x$ transitions cause the right change.
Erasing from condition 3 is implemented by replacing all moves with the $x$ subscript with $\eq,\ea$-tags.
Accordingly,
we have $Q^{(0)}=(Q_1^{(0)} + (Q_1^{(0)}\times \{\mathit{lock}\})) \times\{0,\cdots,\imax\}$ and $Q^{(j)} = Q_1^{(j)}$ ($1\le j\le k$).
As an example of a transition, we give the transition related to writing:
$\frac{(q_1^{(0)},\cdots, q_1^{(j)})\trans{\mwrite{z}^{(x,\rho)}}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad 0\le n,z\le \imax}{
((q_1^{(0)},n),\cdots, q_1^{(j)})\trans{\eq} ((r_1^{(0)},\lock, z),\cdots, r_1^{(j')})}$.
\paragraph{$\seq{\Gamma}{f M_h \cdots M_1:\comt}$ with $(f: \theta_h\rarr\cdots\rarr\theta_1\rarr\comt)$}
Here we will need $Q^{(0)} = \{0,1,2\}$, $Q^{(1)}=\{0\}$, $Q^{(j+2)}= \sum_{u=1}^{h} Q_u^{(j)}$ ($0\le j\le k$).
The first group of transitions corresponds to calling
and returning from $f$: $\dagger \trans{\mrun} 0$,\quad $0\trans{\mrun^f} (1,0)$,\quad
$(1,0)\trans{\mdone^f} 2$,\quad $2\trans{\mdone} \dagger$.
Additionally, in state $(1,0)$ we want to enable the environment to spawn an unbounded number of copies of each of $\seq{\Gamma}{M_u:\theta_u}$ ($1\le u\le h$).
This is done through rules that embed the actions of the automata for $M_u$ while (possibly) relabelling the moves in line with our convention for representing moves from game semantics.
Such transitions have the general form
$\frac{ (q_u^{(0)},\cdots, q_u^{(j)}) \trans{m^{(t,\rho)}}_u (q_u^{(0)},\cdots, q_u^{(j')})}{(1,0,q_u^{(0)},\cdots, q_u^{(j)}) \trans{m^{(t',\rho')}} (1,0,q_u^{(0)},\cdots, q_u^{(j')})}$.
Note that this case also covers $f:\comt$ ($h=0$).
More details and the remaining cases are covered in Appendix~\ref{apx:tola}.
In Appendix~\ref{apx:example} we give an example of a term and the corresponding $\la$.
\qed
\end{proof}
\section{Additional material for Section~\ref{sec:lla}}
\subsection{Proof of Theorem \ref{thm:sla-decidable}}
\label{apx:sla-decidable}
{
\newcommand{\ensuremath{q^{(i-2)}}}{\ensuremath{q^{(2i-2)}}}
\newcommand{\ensuremath{q^{(i-1)}}}{\ensuremath{q^{(2i-1)}}}
\newcommand{\ensuremath{q^{(i)}}}{\ensuremath{q^{(2i)}}}
\newcommand{\ensuremath{q^{(i+1)}}}{\ensuremath{q^{(2i+1)}}}
\newcommand{\ensuremath{q^{(i+2)}}}{\ensuremath{q^{(2i+2)}}}
\newcommand{\ensuremath{q^{\prime(i-2)}}}{\ensuremath{q^{\prime(2i-2)}}}
\newcommand{\ensuremath{q^{\prime(i-1)}}}{\ensuremath{q^{\prime(2i-1)}}}
\newcommand{\ensuremath{q^{\prime(i)}}}{\ensuremath{q^{\prime(2i)}}}
\newcommand{\ensuremath{q^{\prime(i+1)}}}{\ensuremath{q^{\prime(2i+1)}}}
\newcommand{\ensuremath{q^{\prime(i+2)}}}{\ensuremath{q^{\prime(2i+2)}}}
\renewcommand{\ensuremath{\mathsf{INT}}}{\mathsf{interrupt}}
\renewcommand{\read}{\mathsf{read}}
\renewcommand{\write}{\mathsf{write}}
\newcommand{\mathit{state}}{\mathsf{state}}
\renewcommand{\r}{\mathsf{r}}
\newcommand{\mapping}[1]{\ensuremath{\{~#1~\}}}
\renewcommand{\and}{\ensuremath{,~}}
\makeatletter
\newcommand{\alist}[1]{%
\begin{aligned}[t] & #1\@ifnextchar\bgroup{\gobblenextarg}{ \end{aligned} }}
\newcommand{\gobblenextarg}[1]{ \\ & #1\@ifnextchar\bgroup{\gobblenextarg}{ \end{aligned} }}
\makeatother
\newcommand{\vassrule}[3]{
\[
\begin{aligned}
\textbf{for all}~~ & #1 \\
\textbf{if}~~ & #2 \\
\textbf{then}~~ & #3
\end{aligned}
\]
}
We continue from the proof sketch that accompanies Theorem \ref{thm:sla-decidable}. Recall that we intend to inductively construct summaries at level $2i$, given summaries at level $2i+2$, by reduction to VASS reachability.
Let us define the \emph{candidate set} $C^{(2i)}$ for layer $2i$ as follows:
\[
C^{(2i)} = Q^{(2i)} \times (Q^{(2i-2, 2i-1)} \times Q^{(2i-2, 2i-1)})^{\leq 2b} \times Q^{(2i)}
\]
A candidate $c = (\alpha^{(2i)}, \mathsf{INT}_1, \cdots, \mathsf{INT}_m, \omega^{(2i)})$ is a \emph{summary} at level $2i$ if and only if there exists a series of transitions such that a node at level $2i$ may go from state $\alpha^{(2i)}$ without any children to state $\omega^{(2i)}$ without any children, performing in between the read-write accesses $\mathsf{INT}_1, \cdots, \mathsf{INT}_m$ in order. We naturally define the \emph{summary set} $S^{(2i)} \subseteq C^{(2i)}$ as the subset of candidates that are summaries at level $2i$.
Constructing the summary set for the deepest level ($2i = k$) is simple: nodes at the bottom-most level may spawn no children by definition and so have no opportunity to change state; ergo the summary set is the identity relation on $Q^{(k)}$: $S^{(k)} = \{~(q,q) ~|~ q \in Q^{(k)}~\}$.
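Under a representation of summaries as (initial state, interrupt sequence, final state) triples, this base case amounts to the following one-liner (a sketch in our notation):

```python
def base_summaries(Q_k):
    # Bottom-level nodes spawn no children, hence perform no interrupts
    # and never change state: every summary is (q, empty sequence, q).
    return {(q, (), q) for q in Q_k}
```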
Assume that we have the complete summary set $S^{(2i+2)}$ for level $2i+2$. We shall compute the summary set $S^{(2i)}$ for level $2i$.
Enumerating the whole candidate set is simple, as every component is bounded. For each candidate $s = (\alpha, \ensuremath{\mathsf{INT}}(s,0), \cdots, \ensuremath{\mathsf{INT}}(s,m), \omega)$, with $m$ interrupt points, we test whether or not it is a summary at level $2i$ in the following~way.
For convenience, let us define $\ensuremath{\mathsf{INT}}(s, r) = ((\ensuremath{q^{(i-2)}}, \ensuremath{q^{(i-1)}}), (\ensuremath{q^{\prime(i-2)}}, \ensuremath{q^{\prime(i-1)}}))$ as the read-write pair in the $r$th interrupt of the candidate $s$ that we are testing. We also define $\read(s,r) = (\ensuremath{q^{(i-2)}}, \ensuremath{q^{(i-1)}})$ and $\write(s,r) = (\ensuremath{q^{\prime(i-2)}}, \ensuremath{q^{\prime(i-1)}})$.
The procedure for determining whether a candidate $s$ is a valid summary is represented here by a nondeterministic program over finite variables and unbounded counters (without zero tests). The program is a set of rules which will be evaluated nondeterministically. Either some run of the program will complete and \texttt{ACCEPT}, or all paths will block when no more rules can be applied. Such programs are known to be reducible to VASS reachability.
The variables of the program are:
\[
\r \in [0,2b] \qquad
\mathit{state} \in Q^{(2i)} \qquad
\mathsf{child}[j] \in Q^{(2i+1)} \cup \{\bot,\top\}
\]
and the unbounded counters are:
\[
\mathsf{grandchildren}[j,s,p] \in \mathbb{N}
\]
such that:
\begin{itemize}
\item $\r$ captures the number of reads-writes performed so far in the evaluation.
\item $\mathit{state}$ captures the state of our level $2i$ node.
\item $\mathsf{child}[j]$ (where $j \in [1,b]$) captures the state of the (at most $b$) children at level $2i+1$. We include additional states $\{\bot, \top\}$ to represent children which have not yet been created, and children which have since been destroyed, respectively.
\item $\mathsf{grandchildren}[j,s,p]$ (where $j \in [1,b]$, $s \in S^{(2i+2)}$, $p \in [0,m]$) are counters for every combination of child, level $2i+2$ summary, and interrupt point along that summary. Note that $p$ also includes zero, to represent that no interrupts have yet been performed.
\end{itemize}
The initial values are:
\[
\r = 0 \qquad
\mathit{state} = \alpha
\qquad \forall j \colon \mathsf{child}[j] = \bot
\qquad \forall j,s,p \colon \mathsf{grandchildren}[j,s,p] = 0
\]
The rules of the program are as follows. When we write $\textbf{for all}$, one rule is created for every possible assignment of the free variables.
\vspace{5mm}
{
\noindent \textbf{Adding a child.} We first ensure that there is an unused child, and then proceed to populate that child with a state. The precise shape of these additions is dictated by $\delta_\Q^{(2i+1)}$. Note that this rule is predicated on the values of $\ensuremath{\mathsf{INT}}(s,r)$ in the summary.
\vassrule
{\alist
{j \in [1,b]}
{(\read(s,r), \ensuremath{q^{(i)}}) \xrightarrow{x_\Q} (\write(s,r), \ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}}) \in \delta_\Q^{(2i+1)} }
}
{\alist
{\mathit{state} = \ensuremath{q^{(i)}}}
{\mathsf{child}[j] = \bot}
}
{\alist
{\mathit{state} = \ensuremath{q^{\prime(i)}}}
{\mathsf{child}[j] = \ensuremath{q^{\prime(i+1)}}}
{\r = \r + 1}
}
}
{
\noindent \textbf{Removing a child.} We move a child at level $2i+1$ into the top state to show that it has been removed. No further operations on grandchildren are permitted.
\vassrule
{\alist
{j \in [1,b]}
{(\read(s,r), \ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}) \xrightarrow{x_\A} (\write(s,r), \ensuremath{q^{\prime(i)}}) \in \delta_\A^{(2i+1)} }
}
{\alist
{\mathit{state} = \ensuremath{q^{(i)}}}
{\mathsf{child}[j] = \ensuremath{q^{(i+1)}}}
}
{\alist
{\mathit{state} = \ensuremath{q^{\prime(i)}}}
{\mathsf{child}[j] = \top}
{\r = \r + 1}
}
}
{
\noindent \textbf{Adding a grandchild.} We add a new leaf as the child of a currently active child. We choose a summary for the node to follow and indicate that it is yet to make any interrupts.
\vassrule
{\alist
{j \in [1,b]}
{(\ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}) \xrightarrow{x_\Q} (\ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}}, \ensuremath{q^{\prime(i+2)}}) \in \delta_\Q^{(2i+2)} }
{s^{(2i+2)} = (\ensuremath{q^{\prime(i+2)}}, \cdots) \in S^{(2i+2)}}
}
{\alist
{\mathit{state} = \ensuremath{q^{(i)}}}
{\mathsf{child}[j] = \ensuremath{q^{(i+1)}}}
}
{\alist
{\mathit{state} = \ensuremath{q^{\prime(i)}}}
{\mathsf{child}[j] = \ensuremath{q^{\prime(i+1)}}}
{\mathsf{grandchildren}[j,s^{(2i+2)},0] \text{ += } 1}
}
}
{
\noindent \textbf{Removing a grandchild.} We may remove a grandchild at level $2i+2$ once it has progressed through all interrupt points (hence reached its final state $\ensuremath{q^{(i+2)}}$). The shape of these removals is dictated by $\delta_\A^{(2i+2)}$.
\vassrule
{\alist
{j \in [1,b]}
{(\ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}, \ensuremath{q^{(i+2)}}) \xrightarrow{x_\A} (\ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}}) \in \delta_\A^{(2i+2)} }
{s^{(2i+2)} = (\cdots, \ensuremath{\mathsf{INT}}(s^{(2i+2)},m), \ensuremath{q^{(i+2)}}) \in S^{(2i+2)}}
}
{\alist
{\mathit{state} = \ensuremath{q^{(i)}}}
{\mathsf{child}[j] = \ensuremath{q^{(i+1)}}}
{\mathsf{grandchildren}[j, s^{(2i+2)}, m] \geq 1}
}
{\alist
{\mathit{state} = \ensuremath{q^{\prime(i)}}}
{\mathsf{child}[j] = \ensuremath{q^{\prime(i+1)}}}
{\mathsf{grandchildren}[j, s^{(2i+2)}, m] \text{ -= } 1}
}
}
{
\noindent \textbf{Progressing a grandchild.} Grandchildren may progress at any time, so long as the current state of our top-level node and the intermediate child align with the read values of the next interrupt in the summary that it is following.
\vassrule
{\alist
{j \in [1,b]}
{n \in [0,2b-1]}
{s^{(2i+2)} = (\cdots, \ensuremath{\mathsf{INT}}(s^{(2i+2)},n), \cdots) \in S^{(2i+2)}}
{((\ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}),(\ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}})) = \ensuremath{\mathsf{INT}}(s^{(2i+2)},n)}
}
{\alist
{\mathit{state} = \ensuremath{q^{(i)}}}
{\mathsf{child}[j] = \ensuremath{q^{(i+1)}}}
{\mathsf{grandchildren}[j, s^{(2i+2)}, n] \geq 1}
}
{\alist
{\mathit{state} := \ensuremath{q^{\prime(i)}}}
{\mathsf{child}[j] := \ensuremath{q^{\prime(i+1)}}}
{\mathsf{grandchildren}[j, s^{(2i+2)}, n] \text{ -= } 1}
{\mathsf{grandchildren}[j, s^{(2i+2)}, n+1] \text{ += } 1}
}
}
The program will \texttt{ACCEPT} if it reaches the following configuration:
\[
\r = m \qquad
\mathit{state} = \omega
\qquad \forall j \colon \mathsf{child}[j] = \top
\qquad \forall j,s,p \colon \mathsf{grandchildren}[j,s,p] = 0
\]
Hence we have reached the end state $\omega$; all $m$ interrupts have been performed; all children have been used; and no grandchildren remain.
The translation of the above system of instructions to a VASS is direct: the current state is the product of all the program variables, and the vector components are the $\mathsf{grandchildren}$ counters. The rules correspond exactly to transitions, with omitted variables permitting any value.
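A minimal sketch of this translation (our own illustrative encoding, not the formal one): each rule becomes a transition labelled by a vector of counter updates, and firing blocks whenever a counter would go negative.

```python
# Sketch (our notation): a VASS transition is (source, delta, target);
# firing requires all counters to remain non-negative, which realises
# decrement guards without any zero test.
def fire(config, transition):
    state, counters = config
    src, delta, dst = transition
    if state != src:
        return None
    updated = tuple(c + d for c, d in zip(counters, delta))
    if any(v < 0 for v in updated):
        return None  # blocked: a counter would drop below zero
    return (dst, updated)

# e.g. the rule "move one grandchild token from slot 0 to slot 1":
t = ("b", (-1, +1), "b")
print(fire(("b", (2, 0)), t))  # ('b', (1, 1))
print(fire(("b", (0, 1)), t))  # None: blocked at zero
```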
}
\section{Additional material for Section~\ref{sec:lla}}
\label{apx:sla}
\subsection{Proof of Theorem \ref{thm:sla-decidable}}
\label{apx:sla-decidable}
We present a proof of decidability of the
emptiness problem for $\lla$, Theorem \ref{thm:sla-decidable}.
There are two main steps in the proof.
The first step uses a notion of summary for some even layer $2i$.
This allows us to restrict an automaton to its first $2i$ layers.
The second step is a method for computing a summary for layer $2i$ from a
summary for layer $2i+2$.
\subsection*{Summaries}
The structure of transitions of $\sla$ provides a notion of a domain for data
values.
The \emph{domain} of a data value $d \in \D$ is the set of data values whose
associated state may be modified by a transition that adds or removes $d$, i.e.,
when reading a letter annotated by $d$.
\[
\dom{d} = \begin{cases}
\set{\predc^2(d), \predc(d), d} & \text{if $d$ is at an even level} \\
\set{\predc^3(d), \predc^2(d), \predc(d), d} & \text{if $d$ is at an odd level}
\end{cases}
\]
Domains give us a notion of independence: Two letters $(t_1, d_1)$, $(t_2, d_2)$ are \emph{independent} if the domains of $d_1$ and $d_2$ are disjoint. We remark that if $w$ is a trace of some $\sla$ then every sequence obtained by permuting adjacent independent letters of $w$ is also a trace of the same $\sla$ ending in the same configuration.
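The domain and independence checks can be sketched directly (a toy model of ours; the dictionaries `parent` and `level`, encoding $\predc$ and the levels, are our own devices):

```python
# Toy model of domains: parent encodes pred(), level gives tree depth.
def dom(parent, level, d):
    out, n = [d], 2 if level[d] % 2 == 0 else 3
    for _ in range(n):                # collect up to 2 or 3 ancestors,
        if d not in parent:           # stopping at the root
            break
        d = parent[d]
        out.append(d)
    return set(out)

def independent(parent, level, d1, d2):
    return dom(parent, level, d1).isdisjoint(dom(parent, level, d2))

# a chain r - a - c - e - g, plus a sibling b of a
parent = {"a": "r", "b": "r", "c": "a", "e": "c", "g": "e"}
level = {"r": 0, "a": 1, "b": 1, "c": 2, "e": 3, "g": 4}
print(independent(parent, level, "a", "b"))  # False: both domains contain r
print(independent(parent, level, "g", "b"))  # True: dom(g) stops at c
```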
Let us fix an $k$-$\sla$ automaton $\ensuremath{\mathcal{A}} = \abra{\Sigma_\ensuremath{\mathcal{A}}, k_\ensuremath{\mathcal{A}}, Q_\ensuremath{\mathcal{A}},
\delta_\ensuremath{\mathcal{A}}}$, and let $b$ be its even-layer bound.
Suppose, on an accepting trace on $\ensuremath{\mathcal{A}}$, we encounter some data value $d$ at
even layer $2i$.
On an accepting trace, the value $d$ occurs twice: the first occurrence corresponds to
adding $d$, the second to deleting $d$. Let
$w$ be the part of the trace in between, and including, these two occurrences of $d$.
We can classify letters $(t',d')$ in $w$ into one of three categories:
\begin{enumerate}
\item \emph{$d$-internal}, when $\dom{d'}$ is included in the subtree rooted at $d$;
\item \emph{$d$-external}, when $\dom{d'}$ is disjoint from the subtree rooted at $d$;
\item \emph{$d$-frontier}, when $\dom{d'}$ contains $d$ and its parent.
\end{enumerate}
Note that these three categories partition the set of all letters in $w$.
The frontier letters are the ones with data value $d$, as well as those whose
data values are children of $d$.
The latter are from layer $2i+1$.
Letters with data values from deeper layers are either $d$-internal or
$d$-external.
At this point we use the branching bound $b$ of the automaton.
The number of children of $d$ is bounded by $b$, and every child of $d$ appears
twice in $w$.
Hence, the number of $d$-frontier letters in $w$ is at most $2b+2$, counting the two letters with~$d$.
The $d$-frontier letters divide $w$ into subwords, giving us a sequence of
transitions:
\begin{equation}
\label{eqn:expanded}
\kappa_1\trans{m_1}\kappa'_1\trans{w_1}\kappa_{2}\trans{m_{2}}\kappa'_{2}\trans{w_2}\dots\kappa_{l}\trans{m_l}\kappa'_l\trans{w_l}\kappa_{l+1}\trans{m_{l+1}}\kappa'_{l+1}
\end{equation}
where $m_1,\dots,m_{l+1}$ are the $d$-frontier letters; $m_1$ adds node $d$ while $m_{l+1}$ deletes $d$.
Configuration $\kappa'_1$ is the first in which $d$ appears in the tree, so $d$
is a leaf node in $\kappa'_1$. Likewise, $\kappa_{l+1}$ is the last configuration in
which $d$ appears, as $d$ is removed by $m_{l+1}$, so $d$ is a leaf node in
$\kappa_{l+1}$.
We now use independence properties.
Every word $w_j$ contains only $d$-internal and $d$-external letters.
Due to independence, $w_j$ is equivalent to some $u_jv_j$, with $u_j$
containing only $d$-internal letters of $w_j$ and $v_j$ containing only the
$d$-external letters of $w_j$.
(Actually $u_1$ and $u_l$ are empty, but we do not need to make a case
distinction in the rest of the argument.)
From here, we can see that the $d$-internal parts
$u_1,\cdots,u_l$ of $w$ only interact with the $d$-external parts at a bounded
number of positions, and those positions exactly correspond to the frontier
transitions $m_2,\cdots,m_l$. Hence, if we could characterize the interactions
that can occur at level $2i$, then we could replace the sequences of transitions
on every $u_j$ by a single short-cut transition.
This would eliminate the need for levels $\geq 2i$ in the automaton.
We introduce a notion of a summary to implement the idea of short-cut
transitions.
A \emph{summary} for level $2i$ is a function $f \colon \set{1,\dots, 2(l+1)} \to Q^{(2i-2)} \times
Q^{(2i-1)}$, for some $l\leq 2b+1$.
Intuitively, from some trace $w$ expanded as in Equation
\ref{eqn:expanded}, we can extract $f$ such that $f(2j-1)$ is a pair of states
labelling $\predc^2(d)$ and $\predc(d)$ in $\kappa_j$, while $f(2j)$ is a pair
of states labelling these nodes in $\kappa'_{j}$.
This is only the intuition because we do not have runs of $\ensuremath{\mathcal{A}}$ at hand to
compute $f$.
To formalise the idea of summaries for a given automaton, we will introduce the notion of a \emph{cut
automaton}. Intuitively, the behaviour of a cut automaton $\ensuremath{\mathcal{A}}^{\downarrow}(2i, f)$ will
represent the behaviours of $\ensuremath{\mathcal{A}}$ contained within some subtree rooted in a
data value at layer $2i$.
The states and transitions of $\ensuremath{\mathcal{A}}^{\downarrow}(2i, f)$ are those of $\ensuremath{\mathcal{A}}$ but lifted
up so that level $2i$ becomes the root level:
\begin{equation*}
\Q^{\downarrow (l-2i)} = \Q^{(l)} \qquad \delta_\Q^{\downarrow (l-2i)}=\delta_\Q^{(l)}\qquad \delta_\A^{\downarrow (l-2i)}=\delta_\A^{(l)}\qquad\text{for $l\geq2i+2$}
\end{equation*}
The two top layers, $0$ and $1$, are special, as just lifting transitions would
make them stick out above the root.
This is also where we use the summary $f$.
\begin{equation*}
Q^{\downarrow(0)}=Q^{(2i)}\times\dom{f}\qquad Q^{\downarrow(1)}=Q^{(2i+1)}
\end{equation*}
The extra component at layer $0$ will be used for layer $1$ transitions.
Before defining transitions we introduce some notation.
For a summary $f$ we write $\maxdom{f}$ for the maximal element in the domain of
$f$.
We use an abbreviated notation for transitions.
If $f(j)=(\ensuremath{q^{(i-2)}},\ensuremath{q^{(i-1)}})$, and $f(j+1)=(\ensuremath{q^{\prime(i-2)}},\ensuremath{q^{\prime(i-1)}})$ then we write
\begin{equation*}
f(j)\trans{a}(f(j+1),\ensuremath{q^{\prime(i)}})\ \text{instead of}\ (\ensuremath{q^{(i-2)}},\ensuremath{q^{(i-1)}})\trans{a}(\ensuremath{q^{\prime(i-2)}},\ensuremath{q^{\prime(i-1)}},\ensuremath{q^{\prime(i)}})\ .
\end{equation*}
Transitions at levels $0$ and $1$ are adaptations of those of levels $2i$ and $2i+1$
in the original automaton.
A node that was at level $2i$ is now the root so it has no predecessors anymore.
The initial and final moves of $\ensuremath{\mathcal{A}}^{\downarrow}(2i, f)$ create and destroy the root.
They use $f$ to predict the states of the predecessors in a corresponding move
of $\ensuremath{\mathcal{A}}$.
\begin{align*}
\delta_\Q^{\downarrow (0)} \text{ contains }& \ \dagger\trans{a}(\ensuremath{q^{\prime(i)}},3)\\
& \text{\quad if there is a transition $f(1)\trans{a} (f(2),\ensuremath{q^{\prime(i)}})$ in $\delta_\Q^{(2i)}$}\\
\delta_\A^{\downarrow (0)} \text{ contains }& \ (q,r)\trans{a}\dagger\\
&\text{\quad if $r=\maxdom{f}-1$ and there is $(f(r),q)\trans{a} f(r+1)$ in $\delta_\A^{(2i)}$}
\end{align*}
Finally, we have transitions that add and delete nodes on level $1$:
\begin{align*}
\text{in } \delta_\Q^{\downarrow (1)} \text{ we have }&
(\ensuremath{q^{(i)}},r) \trans{a} ((\ensuremath{q^{\prime(i)}},r+2),\ensuremath{q^{\prime(i+1)}})\\
&\text{\qquad if } (f(r),\ensuremath{q^{(i)}}) \trans{a}(f(r+1),\ensuremath{q^{\prime(i)}},\ensuremath{q^{\prime(i+1)}}) \in \delta_\Q^{(2i+1)}\\
\text{in } \delta_\A^{\downarrow (1)} \text{ we have }&
((\ensuremath{q^{(i)}},r),\ensuremath{q^{(i+1)}}) \trans{a} (\ensuremath{q^{\prime(i)}},r+2)\\
& \text{\qquad if } (f(r),\ensuremath{q^{(i)}},\ensuremath{q^{(i+1)}}) \trans{a} (f(r+1),\ensuremath{q^{\prime(i)}}) \in \delta_\A^{(2i+1)}
\end{align*}
\renewcommand{\s}{\sigma}
We can now formally define the set of summaries for an even layer $2i$:
\begin{equation*}
\mathit{Summary}(\ensuremath{\mathcal{A}},2i)=\set{f \colon \ensuremath{\mathcal{A}}^{\downarrow}(2i,f) \text{ accepts some trace}}
\end{equation*}
The next step is to define an automaton that uses such a set of summaries.
The idea is that when a node of layer $2i$ is created it is assigned a summary
from the set of summaries.
Then all moves below this node are simulated by consulting this summary.
So we will never need layers below $2i$.
Let $\mathcal{S}$ be a set of summaries at level $2i$.
We will now define $\ensuremath{\mathcal{A}}^{\uparrow}(2i, \mathcal{S})$.
It will be a $(2i+1)$-$\lla$ automaton.
The states and transitions of $\ensuremath{\mathcal{A}}^{\uparrow}(2i, \mathcal{S})$
are exactly the states and transitions of $\ensuremath{\mathcal{A}}$ for levels $0$ to $2i-1$.
The set of states at level $2i$ is
\[
Q^{(2i)} = \set{(f,r) \colon f\in\mathcal{S}, r\in\dom{f} }\ .
\]
So a state at layer $2i$ is a summary function and a \emph{use counter}
indicating the part of the summary that has been used.
For technical reasons we will also need one state at layer $2i+1$. We set
$Q^{(2i+1)}=\set{\bullet}$.
The transitions $\delta_\Q^{\uparrow (2i)}$ and $\delta_\A^{\uparrow (2i)}$ are defined as follows.
\begin{align*}
\text{in } \delta_\Q^{\uparrow (2i)} \text{ we have }& f(1) \trans{a} (f(2),(f,3))&\quad
\text{if } f \in \mathcal{S}\\
\text{in } \delta_\A^{\uparrow (2i)} \text{ we have }& (f(r),(f,r)) \trans{a} f(r+1)&\quad
\text{if } r = \maxdom{f}-1
\end{align*}
These transitions imply that for every node created at level $2i$, the automaton
guesses a summary and sets the summary's use counter to $3$.
It is $3$ and not $1$ because the first two values of $f$ are used for the
creation of the node.
The node can be deleted once this bounded counter value is maximal.
Finally, we define the transitions in $\delta_\Q^{\uparrow (2i+1)}$ and $\delta_\A^{\uparrow (2i+1)}$:
\begin{align*}
\text{In } \delta_\Q^{\uparrow (2i+1)} \text{ we have }& (f(r),(f,r))\trans{a} (f(r+1),(f,r+2),\bullet)\\
& \qquad\text{if } r<\maxdom{f}-1\\
\text{In } \delta_\A^{\uparrow (2i+1)} \text{ we have }& (f(r),(f,r),\bullet)\trans{a} (f(r),(f,r))\\
& \text{if } r=\maxdom{f}-1
\end{align*}
So the automaton creates a child node each time it advances through the summary.
The use counter is increased by $2$ at such a transition.
Once the use counter cannot be increased anymore, $\delta_\A^{\uparrow (2i+1)}$
provides transitions for deleting children at layer $2i+1$.
No other transitions are applicable at this point.
Once there are no children, the root can be removed by a $\delta_\A^{\uparrow(2i)}$ transition.
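As a sanity check on the bookkeeping (a toy trace of ours): creation consumes $f(1),f(2)$, each child-creating transition consumes the next pair, and deletion consumes the final pair, so every entry of the summary is used exactly once.

```python
# Toy walk through the use counter (our illustration): maxdom is the
# even number 2(l+1); the returned list shows which entries of f are
# consumed, and in which order.
def consume(maxdom):
    r, used = 3, [1, 2]          # creation uses f(1), f(2), sets r = 3
    while r < maxdom - 1:        # each child creation uses f(r), f(r+1)
        used += [r, r + 1]
        r += 2
    used += [r, r + 1]           # deletion fires at r = maxdom - 1
    return used

print(consume(8))  # [1, 2, 3, 4, 5, 6, 7, 8]
```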
The next lemma states formally the relation between the two automata we have
introduced and the original one.
Recall that $\ensuremath{\mathcal{A}}^{\downarrow}$ is used to define a set of summaries.
The lemma is proved by stitching runs of $\ensuremath{\mathcal{A}}^{\uparrow}$ and $\ensuremath{\mathcal{A}}^{\downarrow}$.
\begin{lemma}
For every $k$-level automaton $\ensuremath{\mathcal{A}}$ and level $2i<k$,
$\ensuremath{\mathcal{A}}$ accepts a trace iff $\ensuremath{\mathcal{A}}^{\uparrow}(2i,\mathit{Summary}(\ensuremath{\mathcal{A}},2i))$
accepts a trace.
\end{lemma}
The next lemma shows how to use summaries of level $2i+2$ to compute summaries
at level $2i$.
\begin{lemma}\label{lem:summary-step}
Take a summary $f$ of some level $2i$, and consider $\ensuremath{\mathcal{B}}=\ensuremath{\mathcal{A}}^{\downarrow}(2i,f)$.
Then $\ensuremath{\mathcal{B}}$ accepts some trace iff $\ensuremath{\mathcal{B}}^{\uparrow}(2,\mathit{Summary}(\ensuremath{\mathcal{A}},2i+2))$ accepts some trace.
\end{lemma}
\begin{proof}
Follows from $\mathit{Summary}(\ensuremath{\mathcal{A}},2i+2)=\mathit{Summary}(\ensuremath{\mathcal{B}},2)$ and the previous lemma.
\end{proof}
The lemma reduces the task of computing summaries to checking emptiness of
automata with $3$ layers.
In the next subsection we show how to reduce the latter problem to the
reachability problem in VASS.
With this lemma we can compute $\mathit{Summary}(\ensuremath{\mathcal{A}},2i)$ inductively.
Once we compute $\mathit{Summary}(\ensuremath{\mathcal{A}},2)$, we can reduce testing emptiness of
$\ensuremath{\mathcal{A}}^{\uparrow}(2,\mathit{Summary}(\ensuremath{\mathcal{A}},2))$ to VASS reachability.
This turns out to be a degenerate case of computing summaries, so the same
technique applies.
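The overall procedure can be pictured as the following skeleton (schematic only: `candidates` and `vass_test` stand in for the bounded candidate enumeration and the VASS-reachability check, and are instantiated here with toy stand-ins of our own):

```python
# Schematic skeleton of the downward induction (toy stand-ins, not the
# real construction): summaries at level 2i are computed from those at
# level 2i+2, ending with the top-level emptiness test at level 0.
def compute_summaries(k, candidates, vass_test):
    summary = {k + 2: set()}      # below the deepest layer: nothing
    for layer in range(k, -1, -2):
        summary[layer] = {f for f in candidates(layer)
                          if vass_test(layer, f, summary[layer + 2])}
    return summary

# toy instantiation: candidates are labels, and a candidate passes when
# the layer below is non-empty (trivially true at the deepest layer)
cands = lambda layer: {f"s{layer}_{j}" for j in range(layer + 1)}
ok = lambda layer, f, below: layer == 4 or len(below) > 0
result = compute_summaries(4, cands, ok)
print(sorted(result[0]))  # ['s0_0']
```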
\subsection*{Computing summaries}
We compute $\mathit{Summary}(\ensuremath{\mathcal{A}}, 2i)$ assuming that we know $\mathit{Summary}(\ensuremath{\mathcal{A}}, 2i+2)$.
For this we use Lemma~\ref{lem:summary-step}.
We reduce testing emptiness of $\ensuremath{\mathcal{B}}^{\uparrow}(2,\mathit{Summary}(\ensuremath{\mathcal{A}},2i+2))$ from that lemma to VASS
reachability.
Since presenting a VASS directly would be quite unreadable, we present a
nondeterministic program that will use variables ranging
over bounded domains and some fixed set of non-negative counters.
By construction, every counter will be tested for $0$ only at the end of the
computation.
This structure allows us to emulate our nondeterministic program in a VASS, such that acceptance by the program is equivalent to reachability of a particular configuration in the VASS.
We fix a candidate summary $\widehat{f}$ of level $2i$. Observe that the number of summaries at
level $2i$ is bounded, and so it is sufficient to check whether a given
candidate $\widehat{f}$ is a valid summary.
The variables of the program are as follows:
\begin{align*}
\widehat{r}\in~&\dom{\widehat{f}}\\
\mathit{state}\in~&Q^{(2i)}\cup\set{\bot}\\
\mathit{state}[j]\in~&Q^{(2i+1)}\cup\set{\bot,\top} & j \in \set{1,\dots,b}\\
\mathit{children}[j,f,r]\in~&\mathbb{N} &\text{$f$ summary at level $(2i+2)$, $r \in\dom{f}$}
\end{align*}
Intuitively, $\mathit{state}$ and $\widehat{r}$ represent a state from $Q^{(0)}$ of
$\ensuremath{\mathcal{B}}^{\uparrow}(2,\mathit{Summary}(\ensuremath{\mathcal{A}},2i+2))$.
The initial configuration is empty so $\mathit{state}=\bot$.
Variable $\mathit{state}[j]$ represents the state of the $j$-th child of the root.
By boundedness, the root can have at most $b$ children.
Value $\mathit{state}[j]=\bot$ means that the child has not yet been created, and
$\mathit{state}[j]=\top$ that the child has been deleted.
Counter $\mathit{children}[j,f,r]$ indicates the number of children of the $j$-th child of
the root with a particular summary $f$ of level $2i+2$ and usage counter $r$.
Following these intuitions the initial values of the variables are $\widehat{r}=1$,
$\mathit{state}=\bot$, $\mathit{state}[j]=\bot$ for every $j$, and $\mathit{children}[j,f,r]=0$ for
every $j$, $f$ and $r$.
The program $\mathtt{TEST}(\widehat{f})$ we are going to write is a set of rules that are executed
nondeterministically.
Either the program will eventually \texttt{accept}, or it will block with no further rules that can be applied.
We later show that the program has an accepting run for $\widehat{f}$ iff
$\widehat{f}\in\mathit{Summary}(\ensuremath{\mathcal{A}},2i)$. The rules of the program refer to transitions of $\ensuremath{\mathcal{A}}$
and simulate the definition of $\ensuremath{\mathcal{B}}^{\uparrow}(2,\mathit{Summary}(\ensuremath{\mathcal{A}},2i+2))$ from Lemma~\ref{lem:summary-step}.
They are defined as follows.
\makeatletter
\newcommand{\alist}[1]{%
\begin{aligned}[t] & #1\@ifnextchar\bgroup{\gobblenextarg}{ \end{aligned} }}
\newcommand{\@ifnextchar\bgroup{\gobblenextarg}{ \end{aligned} }}{\@ifnextchar\bgroup{\gobblenextarg}{ \end{aligned} }}
\newcommand{\gobblenextarg}[1]{ \\ & #1\@ifnextchar\bgroup{\gobblenextarg}{ \end{aligned} }}
\makeatother
\newcommand{\vassrule}[2]{
\begin{aligned}
\textbf{if}~~ & #1 \\
\textbf{then}~~ & #2
\end{aligned}
}
\paragraph{Initializing the root}
We have a rule
\begin{equation*}
\vassrule{\mathit{state}=\bot}
{\alist
{\mathit{state}=\ensuremath{q^{\prime(i)}}}
{\widehat{r}=3}
}
\end{equation*}
for every transition $\widehat{f}(1)\trans{a} (\widehat{f}(2),\ensuremath{q^{\prime(i)}})$ in $\delta_\Q^{(2i)}$.
\paragraph{Removing the root and accepting.} The program is able to accept when
it has completed all of its interaction with the outside world. Observe that
this is the only time that the counters are tested for zero.
Since this occurs at the end of the program, it can be easily checked by VASS reachability.
\[
\vassrule
{\alist
{\mathit{state}=\ensuremath{q^{(i)}}}
{\widehat{r} = \maxdom{\widehat{f}}-1}
{\forall j \colon \mathit{state}[j] = \top}
{\forall (j,f,r) \colon \mathit{children}[j,f,r] = 0}
}
{\texttt{accept}}
\]
for every $(\widehat{f}(\widehat{r}),\ensuremath{q^{(i)}})\trans{a} \widehat{f}(\widehat{r}+1)$ in $\delta_\A^{(2i)}$.
\paragraph{Adding a node at level $2i+1$.} We ensure that we are in the correct
state and ensure that the summary we are testing aligns with some transition
from the automaton.
\begin{equation*}
\vassrule%
{\alist
{\mathit{state} = \ensuremath{q^{(i)}}}
{\widehat{f}(\widehat{r}) = (\ensuremath{q^{(i-2)}}, \ensuremath{q^{(i-1)}})}
{\widehat{f}(\widehat{r}+1) = (\ensuremath{q^{\prime(i-2)}}, \ensuremath{q^{\prime(i-1)}})}
{\widehat{r} + 2 < \maxdom{\widehat{f}}}
{\exists j \colon \mathit{state}[j] = \bot}
}%
{\alist
{\mathit{state} := \ensuremath{q^{\prime(i)}}}
{\mathit{state}[j] := \ensuremath{q^{\prime(i+1)}}}
{\widehat{r} = \widehat{r} + 2}
}
\end{equation*}
for every transition
\begin{equation*}
(\ensuremath{q^{(i-2)}}, \ensuremath{q^{(i-1)}}, \ensuremath{q^{(i)}}) \xrightarrow{t} (\ensuremath{q^{\prime(i-2)}}, \ensuremath{q^{\prime(i-1)}}, \ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}}) \in \delta_\Q^{(2i+1)}
\end{equation*}
\paragraph{Removing a node at level $2i+1$.} We delete a child according to
some transition from $\delta_\A^{(2i+1)}$. While the zero test (ensuring that child $j$ is
a leaf) is not performed here directly, no further operations will be made on
the children counters of this child, and hence the zero test performed at the end of
the simulation does the job.
\begin{equation*}
\vassrule
{\alist
{\mathit{state} = \ensuremath{q^{(i)}}}
{\widehat{f}(\widehat{r}) = (\ensuremath{q^{(i-2)}}, \ensuremath{q^{(i-1)}})}
{\widehat{f}(\widehat{r}+1) = (\ensuremath{q^{\prime(i-2)}}, \ensuremath{q^{\prime(i-1)}})}
{\widehat{r} + 2 < \maxdom{\widehat{f}}}
{\exists j \colon \mathit{state}[j] = \ensuremath{q^{(i+1)}}}
}
{\alist
{\mathit{state} := \ensuremath{q^{\prime(i)}}}
{\mathit{state}[j] := \top}
{\widehat{r} = \widehat{r} + 2}
}
\end{equation*}
for every transition
\begin{equation*}
(\ensuremath{q^{(i-2)}}, \ensuremath{q^{(i-1)}}, \ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}) \xrightarrow{t} (\ensuremath{q^{\prime(i-2)}}, \ensuremath{q^{\prime(i-1)}}, \ensuremath{q^{\prime(i)}}) \in \delta_\A^{(2i+1)}
\end{equation*}
\paragraph{Adding a node at level $2i+2$.} First, we ensure that there is some
child $j$ where such a node can be appended. We simulate the creation of a child by
nondeterministically choosing a summary and increasing the corresponding
unbounded counter. Index $3$ in $\mathit{children}[j,f,3]$ means that this child is
past the first interaction with its ancestors at levels $2i$ and $2i+1$, which
happened at its creation.
\begin{equation*}
\vassrule
{\alist
{\mathit{state} = \ensuremath{q^{(i)}}}
{\exists j \colon \mathit{state}[j] = \ensuremath{q^{(i+1)}}}
}
{\alist
{\mathit{state} = \ensuremath{q^{\prime(i)}}}
{\mathit{state}[j] = \ensuremath{q^{\prime(i+1)}}}
{\mathit{children}[j,f,3] \text{ += } 1}
{\text{ for some $f \in \mathit{Summary}(\ensuremath{\mathcal{A}},2i+2)$ s.t.}}
{\text{\qquad \qquad \qquad $f(1)=(\ensuremath{q^{(i)}},\ensuremath{q^{(i+1)}})$ and $f(2)=(\ensuremath{q^{\prime(i)}},\ensuremath{q^{\prime(i+1)}})$}}
}
\end{equation*}
\paragraph{Progressing a child at level $2i+2$.} We identify an appropriate
child $j$ which itself has a child in state $(f, r)$. We use the
test $r + 2 < \maxdom{f}$ to ensure that the last interaction of the node is
reserved for deletion of our root node.
\begin{equation*}
\vassrule
{\alist
{\mathit{state} = \ensuremath{q^{(i)}}}
{
\exists (j,f,r) \colon \mathit{state}[j] = \ensuremath{q^{(i+1)}}\\
& \hspace{15mm} \text{ and } f(r) = (\ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}) \\
& \hspace{15mm} \text{ and } f(r+1) = (\ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}})\\
& \hspace{15mm} \text{ and } (r+2) < \maxdom{f} \\
& \hspace{15mm} \text{ and } \mathit{children}[j,f,r] \geq 1
}
}
{\alist
{\mathit{state} := \ensuremath{q^{\prime(i)}}}
{\mathit{state}[j] := \ensuremath{q^{\prime(i+1)}}}
{\mathit{children}[j,f,r+2] \text{ += } 1}
{\mathit{children}[j,f,r] \text{ -= } 1}
}
\end{equation*}
Observe that the test $\mathit{children}[j,f,r] \geq 1$ can be simulated by a VASS because
we have $\mathit{children}[j,f,r] \text{ -= } 1$ in the statement that follows.
\paragraph{Removing a node at level $2i+2$.} We find a child which has completed
its summary to the point that it can now be removed. We use the last values in
$f$ to determine how to remove the node.
\begin{equation*}
\vassrule
{\alist
{\mathit{state} = \ensuremath{q^{(i)}}}
{
\exists (j,f,r) \colon \mathit{state}[j] = \ensuremath{q^{(i+1)}}\\
& \hspace{15mm} \text{ and } f(r) = (\ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}) \\
& \hspace{15mm} \text{ and } f(r+1) = (\ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}})\\
& \hspace{15mm} \text{ and } (r+1) = \maxdom{f}\\
& \hspace{15mm} \text{ and } \mathit{children}[j,f,r] \geq 1
}
}
{\alist
{\mathit{state} := \ensuremath{q^{\prime(i)}}}
{\mathit{state}[j] := \ensuremath{q^{\prime(i+1)}}}
{\mathit{children}[j,f,r] \text{ -= } 1}
}
\end{equation*}
\begin{lemma}
Program $\mathtt{TEST}(\widehat{f})$ accepts iff
$\widehat{f}\in\mathit{Summary}(\ensuremath{\mathcal{A}},2i)$.
\end{lemma}
\begin{proof}
By definition, $\widehat{f}\in\mathit{Summary}(\ensuremath{\mathcal{A}},2i)$ if automaton $\ensuremath{\mathcal{B}}=\ensuremath{\mathcal{A}}^{\downarrow}(2i,\widehat{f})$ accepts a
trace.
By Lemma~\ref{lem:summary-step} this is equivalent to
$\ensuremath{\mathcal{B}}^{\uparrow}(2,\mathit{Summary}(\ensuremath{\mathcal{A}},2i+2))$ accepting some trace.
It can be checked that the instructions of $\mathtt{TEST}(\widehat{f})$ correspond
one-to-one to transitions of $\ensuremath{\mathcal{B}}^{\uparrow}(2,\mathit{Summary}(\ensuremath{\mathcal{A}},2i+2))$.
So an accepting run of $\mathtt{TEST}(\widehat{f})$ can be obtained from a trace accepted by
$\ensuremath{\mathcal{B}}^{\uparrow}(2,\mathit{Summary}(\ensuremath{\mathcal{A}},2i+2))$, and vice versa.
\end{proof}
}
\section{Additional material for Section~\ref{sec:tofica}}
\label{apx:tofica}
\paragraph{Word representation}
Let $\Aut=\abra{\Sigma,k,Q,\delta}$ be a leafy automaton.
We shall assume that $\Sigma,Q\subseteq\{0,\cdots,\imax\}$ so that we can
encode the alphabet and states using type $\expt$.
First we discuss how to assign a play $\play{w}$ to a trace $w$ of $\Aut$.
The basic idea is to simulate each transition with two moves, by $O$ and $P$ respectively.
The child-parent links in $\D$ will be represented by justification pointers.
\begin{itemize}
\item Suppose $w=w' (t,d)$ with $t\in\Sigma_\Q$.
We will represent $(t,d)$ by a segment of the form $\rnode{A}{{\mq}^{{\vec{i}}}}\quad \rnode{B}{\mrun}^{t\vec{i}}\justg{B}{A}$.
If $w'=\epsilon$, we let $\play{w}=\rnode{A}{\mq} \,\,\rnode{B}{\mrun^{t}}\justf{B}{A}$, i.e. $\vec{i}=\epsilon$.
If $w'\neq\epsilon$ then, because $w$ is a trace, $w'$ must contain a unique
occurrence of $(t',\pred{d})$ for some $t'\in\Sigma_\Q$.
Then, if $(t',\pred{d})$ was represented by $\rnode{A}{\mq^{\vec{i'}}} \rnode{B}{\mrun}^{t'\vec{i'}}\justg{B}{A}$ in $\play{w'}$,
we let $\play{w}=\rnode{C}{\play{w'}}\,\,\,\rnode{K}{\rnode{D}{{\mq}}^{1 t'{\vec{i'}}}}\justn{D}{C}{100}\,\,\, \rnode{B}{\mrun}^{t 1 t'\vec{i'}}\justn{B}{K}{100}$,
where $\mq^{1 t'\vec{i'}}$ points at $\mrun^{t' \vec{i'}}$.
\item Suppose $w=w'(t,d)$ with $t\in\Sigma_{\A}$.
Because $w$ is a trace, $w'$ must contain a unique occurrence of $(t',d)$ for some $t'\in\Sigma_\Q$.
If $(t',d)$ is represented by the segment $\rnode{A}{\mq^{\vec{i}}} \rnode{B}{\mrun^{t'\vec{i}}}\justf{B}{A}$
in $\play{w'}$,
we set $\play{w}=\play{w'} \,\,\mdone^{t'\vec{i}}\,\, t^{\vec{i}}$, where the two answer-moves are justified by
$\mrun^{t'\vec{i}}$ and ${\mq^{\vec{i}}}$ respectively.
Because $w$ is a trace, we can be sure that after processing $w'$, $\Aut$ enters a configuration in which $d$ is a leaf.
Thus, the two answers will satisfy the game-semantic $\wait$ condition, and $\play{w}$ will be well-defined.
\end{itemize}
The $\fork$ condition is satisfied for $\play{w}$, because reading an answer removes the corresponding data value from the configuration and, hence, it cannot be used as a justifier afterwards.
In what follows, we write $\theta^n\rarr\beta$ for $\underbrace{\theta \rarr\cdots\rarr\theta}_{n}\rarr\beta$ for $n\in\N$.
The lemma below identifies the types that correspond to our encoding of traces.
\begin{lemma}
Let $N=\imax+1$. Suppose $\Aut$ is a $k$-$\la$ and $w\in\trace{\Aut}$.
Then $\play{w}$ is a play in $\sem{\theta_k}$,
where $\theta_0=\comt^N\rarr\expt$ and $\theta_{i+1}=(\theta_i\rarr\comt)^N\rarr\expt$ ($i\ge 0$).
\end{lemma}
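For instance, unfolding the definition for $k=1$:
\[
\theta_1=(\theta_0\rarr\comt)^N\rarr\expt=\bigl((\comt^N\rarr\expt)\rarr\comt\bigr)^N\rarr\expt\ .
\]
Each additional level thus adds two to the order of the type $\theta_k$.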
\cutout{
Before we state the main result, we recall from~\cite{GM08} that strategies corresponding to $\fica$ terms
satisfy a closure condition known as~\emph{saturation}: swapping two adjacent moves in a play belonging
to such a strategy yields another play from the same strategy,
provided the swap yields a play and it is not the case
that the first move is an O-move and the second one a P-move.
Thus, saturated strategies express causal dependencies of P-moves on O-moves.
Consequently, one cannot expect to find a $\fica$-term such that the corresponding
strategy is the smallest strategy containing $\{\,\play{w}\,|\,w\in \trace{\Aut}\,\}$.
Instead, the best one can aim for is the following result.
}
\subsection{Saturation}
{
The game model~\cite{GM08} of $\fica$ consists of \emph{saturated} strategies only: the saturation
condition stipulates that all possible (sequential) observations of
(parallel) interactions must be present in a strategy: actions of the
environment (O) can always be observed earlier if possible, actions of the
program (P) can be observed later. To formalize this, for any arena
$A$, we define a preorder $\preceq$ on $P_A$ as the least transitive
relation $\preceq$ satisfying
$s\, o\, m\, s'\preceq s\, m\, o\, s'$ and $s\, m\, p\, s'\preceq s\, p\, m\, s'$
for all $s,s'$,
where $o$ and $p$ are an O- and a P-move respectively (in the above pairs of plays
moves on the left-hand-side of $\preceq$ are assumed to have the same justifiers as on the right-hand-side).
\begin{definition}\label{def:sat}
A strategy $\sigma:A$ is \emph{saturated} iff, for all $s,s'\in P_A$,
if $s\in \sigma$ and $s'\preceq s$ then $s'\in\sigma$.
\end{definition}
\begin{remark}\label{rem:causal}
Definition~\ref{def:sat} states that saturated strategies are stable
under certain rearrangements of moves.
Note that $s_0\, p\, o\, s_1\not \preceq s_0\, o\, p\, s_1$, while other move-permutations are allowed.
Thus, saturated strategies express causal dependencies of P-moves on O-moves. This partial-order aspect
is captured explicitly in concurrent games based on event structures~\cite{CCRW17}.
\end{remark}
}
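To make the closure condition concrete, the following executable sketch (not part of the formal development) computes the saturation of a finite set of plays, abstracting each move to its polarity (`'o'` for O, `'p'` for P) and ignoring justification pointers; all names are ours.

```python
def saturate(strategy):
    """Downward-close a set of plays, given as strings over the polarities
    'o' (Opponent) and 'p' (Proponent), under the preorder of Definition:
    from s m o s' one may pass to s o m s' (O-moves move earlier), and
    from s p m s' to s m p s' (P-moves move later)."""
    closed = set(strategy)
    frontier = list(strategy)
    while frontier:
        s = frontier.pop()
        for i in range(len(s) - 1):
            a, b = s[i], s[i + 1]
            # Swap is permitted for 'm o' -> 'o m' (b is O) and for
            # 'p m' -> 'm p' (a is P); 'o p' -> 'p o' is never derivable.
            if b == 'o' or a == 'p':
                t = s[:i] + b + a + s[i + 2:]
                if t not in closed:
                    closed.add(t)
                    frontier.append(t)
    return closed
```

Note that `saturate({'op'})` contains no new plays, matching the remark that $s_0\,p\,o\,s_1\not\preceq s_0\,o\,p\,s_1$.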
\subsection{Proof of Theorem~\ref{thm:toalgol}}
\begin{proof}
Our assumption $Q\subseteq\{0,\cdots,\imax\}$ allows us to maintain $\Aut$-states in the memory of $\fica$-terms.
A question $t_{\Q}^{(i)}$ read by $\Aut$ at level $i$ is represented by the variable $f^{(i)}_{t_{\Q}^{(i)}}$,
the corresponding answers $t_{\A}^{(i)}$ are represented by constants $t_{\A}^{(i)}$ (using our assumption $\Sigma\subseteq\{0,\cdots,\imax\}$).
The level $i$ of the data tree is encoded by the order of the variable
$f^{(i)}_{t_{\Q}^{(i)}}$.
For $0\le i < k$, the variables $f_t^{(i)}$ are meant to have type
$\theta_{k-i-1}\rarr\comt$ and $f_t^{(k)}:\comt$.
This ensures that questions and answers respect the tree structure on data.
To achieve nesting, we rely on the higher-order structure of the term:
$\lambda f^{(0)}.f^{(0)}(\lambda f^{(1)}.f^{(1)}(\lambda f^{(2)}.f^{(2)}(\cdots \lambda f^{(k)}. f^{(k)})))$.
Recall that the semantics of $f M$ consists of an arbitrary number of interleavings of $M$. This feature is used to mimic the fact that a leafy
automaton can spawn unboundedly many offspring.
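The nesting skeleton displayed above can be generated mechanically for any depth $k$; the following throwaway sketch (a plain-string rendering, with hypothetical names) makes the shape explicit.

```python
def nesting(i, k):
    """Render the skeleton lf(0). f(0)(lf(1). f(1)(... lf(k). f(k)))
    as a plain string; call with i = 0."""
    head = 'lambda f{0}. f{0}'.format(i)
    # the innermost variable f(k) is applied to nothing (it has type com)
    return head if i == k else head + '(' + nesting(i + 1, k) + ')'
```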
Finally, instead of single variables $f^{(i)}$, we will actually use sequences
$f^{(i)}_0\cdots f^{(i)}_\imax$, which will be used to induce the right move $\mrun^{t\vec{i}}$ when representing $t\in\Sigma_{\Q}\subseteq \{0,\cdots,\imax\}$. Additionally, the term contains state-manipulating code that enables $P$-moves only if they are
consistent with the transition function of $\Aut$.
To achieve this, every level is equipped with a local variable $X^{(i)}$ of type
$\expt$, so that
states on a single branch are represented by $\vvec{X^{(i)}} = (X^{(0)},\cdots,X^{(i)})$.
Given $\alpha\in\{\Q,\A\}$ and $-1\le j\le k$, we write
$\vvec{r_{\alpha}^{(j)}}$ for a tuple of values $(r_{\alpha}^{(0)},\cdots, r_{\alpha}^{(j)})$ on
the understanding that $\vvec{r_{\alpha}^{(-1)}}=\dagger$. A similar convention will apply to $\vvec{u_{\alpha}^{(j)}}$.
Then we use $\vvec{X^{(i)} [ {u_{\alpha}^{(j')}}/{r_{\alpha}^{(j)}}]}$, where $-1\le j,j' \le i$, as shorthand for $\fica$ code that checks componentwise whether the values of
$\vvec{X^{(j)}}$ equal $\vvec{r_{\alpha}^{(j)}}$ and, if so, updates $\vvec{X^{(j')}}$ to $\vvec{u_{\alpha}^{(j')}}$ (if the check fails, the code should diverge). For $j=-1$ (resp. $j'=-1$),
there is nothing to check (resp. update). All occurrences of $\vvec{X^{(i)} [ {u_{\alpha}^{(j')}}/{r_{\alpha}^{(j)}}]}$ will be protected by a semaphore to ensure mutual exclusion. Consequently,
they will induce exactly the causal dependencies (cf. Remark~\ref{rem:causal})
consistent with sequences of $\Aut$-transitions, i.e. with the shape of $\play{w}$ for some $w\in\trace{\Aut}$.
To select transitions at each stage, we rely on non-deterministic choice $\bigoplus$, which can be encoded in $\fica$\footnote{
$M_1\oplus M_2 =
\newin{X\,\raisebox{0.065ex}{:}{=}\, 0}{((X\,\raisebox{0.065ex}{:}{=}\, 0\, \parc\, X\,\raisebox{0.065ex}{:}{=}\, 1});\cond{!X}{M_1}{M_2})$.}.
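The race underlying the footnote's encoding of $\oplus$ can be mimicked directly with threads; the sketch below (our own rendering, not FICA syntax) lets two parallel writers compete on a fresh variable and branches on the value read afterwards.

```python
import threading
import random

def choice(m1, m2):
    """Sketch of M1 (+) M2 as new X:=0 in ((X:=0 || X:=1); if !X then M1 else M2):
    two writers race on X; the surviving value selects the branch."""
    state = {'X': 0}                     # new X := 0
    writers = [threading.Thread(target=state.__setitem__, args=('X', v))
               for v in (0, 1)]
    random.shuffle(writers)              # model scheduler nondeterminism
    for t in writers:                    # X:=0 || X:=1
        t.start()
    for t in writers:
        t.join()
    return m1() if state['X'] else m2()  # if !X then M1 else M2
```

Either branch may be taken, depending on which write lands last.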
Below we define inductively a family of terms $\seq{}{M_i:\theta_{k-i}}$ ($0\le
i\le k$). Term $M_\Aut$ is then obtained by making a simple change to $M_0$.
For any $0\le i\le k$, let $M_i$ be the term
\[\begin{array}{rl}
\lambda f_0^{(i)}\cdots f_\imax^{(i)}. &\newin{X^{(i)}\,\raisebox{0.065ex}{:}{=}\, 0}{}\\
\bigoplus\limits_{(\vvec{r_{\Q}^{(i-1)}}, {\displaystyle t_{\Q}^{(i)}}, \vvec{u_{\Q}^{(i)}})\in\delta_{\Q}^{(i)}}& \Big( \grb{s}; \vvec{X^{(i)} [ {u_{\Q}^{(i)}}/{r_{\Q}^{(i-1)}}]} ;\rls{s};\quad f_{t_{\Q}^{(i)}}^{(i)}\, M_{i+1}; \\[-4mm]
&\quad \bigoplus_{(\vvec{r_{\A}^{(i)}}, {\displaystyle t_{\A}^{(i)}}, \vvec{u_{\A}^{(i-1)}})\in\delta_{\A}^{(i)}} \big( \grb{s}; \vvec{X^{(i)} [ {u_{\A}^{(i-1)}}/{r_{\A}^{(i)}}]} ;\rls{s}; t_{\A}^{(i)}\big)\Big).
\end{array}\]
We write $M_{k+1}$ for the empty term: this is legitimate because
$f_t^{(k)}:\comt$, so $f_t^{(k)}$ is applied to no argument.
The above term $M_i$ declares a new variable to store the state, and then makes a
non-deterministic choice for question transitions that create data values at
level $i$.
The update of the state is protected by a semaphore.
Then the appropriate $f^{(i)}_t$ is applied to term $M_{i+1}$ that simulates moves of the
automaton on data in the subtree of the freshly created node.
This is followed by the code making a non-deterministic choice over all answer transitions. To define $M_{\Aut}$, it now suffices to declare the semaphore in $M_0$, i.e. given $M_0 = \lambda f_0^{(0)}\cdots f_\imax^{(0)}. {\newin{X^{(0)}\,\raisebox{0.065ex}{:}{=}\, 0}{M}}$
we let $M_\Aut$ be
\[
\lambda f_0^{(0)}\cdots f_\imax^{(0)}. \newsem{s\,\raisebox{0.065ex}{:}{=}\, 0}{\newin{X^{(0)}\,\raisebox{0.065ex}{:}{=}\, 0}{M}}.
\]
\end{proof}
\begin{example}
We illustrate the outcome of the construction from Theorem~\ref{thm:toalgol} for $k=1$.
\[\arraycolsep=1.4pt
\begin{array}{lll}
\lambda f_0^{(0)}\cdots f_\imax^{(0)}. &\multicolumn{2}{l}{\newsem{s\,\raisebox{0.065ex}{:}{=}\, 0}{ \newin{X^{(0)}\,\raisebox{0.065ex}{:}{=}\, 0}{}}}\\
\bigoplus\limits_{(\dagger, {t_{\Q}^{(0)}}, {u_{\Q}^{(0)}})\in\delta_{\Q}^{(0)}}&\multicolumn{2}{l}{ \Bigg( \grb{s}; \, \vvec{X^{(0)} [ {u_{\Q}^{(0)}}/\dagger]};\,\rls{s};}\\
&\multicolumn{1}{r}{\quad f_{t_{\Q}^{(0)}}^{(0)}\,\, \bigg(\,\, \lambda f_0^{(1)}\cdots f_\imax^{(1)}.} & {\newin{X^{(1)}\,\raisebox{0.065ex}{:}{=}\, 0}{}}\\
&\multicolumn{1}{r}{\bigoplus\limits_{({r_{\Q}^{(0)}}, {t_{\Q}^{(1)}}, \vvec{u_{\Q}^{(1)}})\in\delta_{\Q}^{(1)}}}& {\Big( \grb{s}; \, \vvec{X^{(1)} [ {u_{\Q}^{(1)}}/{r_{\Q}^{(0)}}]};\, \rls{s};\,\, f_{t_{\Q}^{(1)}}^{(1)}; }\\
&&{\quad \bigoplus_{(\vvec{r_{\A}^{(1)}}, t_{\A}^{(1)}, {u_{\A}^{(0)}})\in\delta_{\A}^{(1)}}} \big( \grb{s}; \, \vvec{X^{(1)} [ {u_{\A}^{(0)}}/{r_{\A}^{(1)}}]} ;\, \rls{s};\, t_{\A}^{(1)}\big)\Big)\bigg);\\
&\multicolumn{2}{l}{\quad \bigoplus_{({r_{\A}^{(0)}}, t_{\A}^{(0)}, \dagger)\in\delta_{\A}^{(0)}} \big( \grb{s}; \, \vvec{X^{(0)} [ \dagger/{r_{\A}^{(0)}}]} ;\,\rls{s}; \,t_{\A}^{(0)})\Bigg)}\\
\end{array}
\]
where
\[\begin{array}{rcl}
\vvec{X^{(0)} [ {u_{\Q}^{(0)}}/\dagger]} &=& X^{(0)}\,\raisebox{0.065ex}{:}{=}\, u_{\Q}^{(0)}\\
\vvec{X^{(1)} [ {u_{\Q}^{(1)}}/{r_{\Q}^{(0)}}]} &=& \cond{(X^{(0)}=r_{\Q}^{(0)})}{(X^{(0)}\,\raisebox{0.065ex}{:}{=}\, u_{\Q}^{(0)};X^{(1)}\,\raisebox{0.065ex}{:}{=}\, u_{\Q}^{(1)})}{\Omega}\\
\vvec{X^{(1)} [ {u_{\A}^{(0)}}/{r_{\A}^{(1)}}]} &=& \cond{((X^{(0)}=r_{\A}^{(0)}) \wedge (X^{(1)}=r_{\A}^{(1)}))}{(X^{(0)}\,\raisebox{0.065ex}{:}{=}\, u_{\A}^{(0)})}{\Omega}\\
\vvec{X^{(0)} [ \dagger/{r_{\A}^{(0)}}]} &=& \cond{(X^{(0)}=r_{\A}^{(0)})}{\skipcom}{\Omega}\\
\Omega &=&\while{1}{\skipcom}
\end{array}\]
\end{example}
\section{Additional material for Section~\ref{sec:fica}}
\label{apx:opsem}
\subsection{Operational semantics of $\fica$}
{
The operational semantics is defined using a (small-step) transition
relation $\step{s}{M}{s'}{M'}$, where $\mem$ is a set of variable names
denoting active \emph{memory cells} and \emph{semaphore locks}.
$s,s'$ are states, i.e.\ functions $s,s':\mem\rightarrow\makeset{0,\cdots,\imax}$, and $M,M'$ are
terms. We write $s\otimes (v\mapsto i)$ for the state obtained by augmenting $s$ with $(v\mapsto i)$, assuming $v\not\in \dom{s}$.
The basic reduction rules are given in Figure~\ref{fig:os},
where $c$ stands for any language constant ($i$ or $\skipcom$)
and $\widehat{\mathbf{op}}:\{0,\cdots,\imax\}\rarr\{0,\cdots,\imax\}$
is the function corresponding to $\mathbf{op}$.
In-context reduction is given by the schemata:
\begin{center}
\AxiomC{$\mem,v\vdash M[v/x],s\otimes(v\mapsto i)\longrightarrow M',s'\otimes(v\mapsto i') $ \quad $M\neq c$}
\UnaryInfC{$\mem\vdash\newin{x\,\raisebox{0.065ex}{:}{=}\, i}{M},s\longrightarrow \newin{x\,\raisebox{0.065ex}{:}{=}\, i'}{M'[x/v]}, s' $}
\DisplayProof\\[2ex]
\AxiomC{$\mem,v\vdash M[v/x],s\otimes(v\mapsto i)\longrightarrow M',s'\otimes(v\mapsto i') $\quad $M\neq c$}
\UnaryInfC{$\mem\vdash\newsem{x\,\raisebox{0.065ex}{:}{=}\, i}{M},s\longrightarrow \newsem{x\,\raisebox{0.065ex}{:}{=}\, i'}{M'[x/v]}, s' $}
\DisplayProof\\[2ex]
\AxiomC{$\step{s}{M}{s'}{M'}$}
\UnaryInfC{$\step{s}{\mathcal E[M]}{s'}{\mathcal E[M']}$}
\DisplayProof
\end{center}
where reduction contexts $\mathcal E[-]$ are produced by the
grammar:
\[\begin{array}{rcl}
\mathcal E[-] &::=& [-] \mid \mathcal E;N
\mid (\mathcal E\,\parc\, N)
\mid (M\,\parc\, \mathcal E)
\mid {\mathcal E} N
\mid \arop{\mathcal E}
\mid \cond{\mathcal E}{N_1}{N_2}\\
&&\mid {!}\mathcal E
\mid \mathcal E\,\raisebox{0.065ex}{:}{=}\, m
\mid M\,\raisebox{0.065ex}{:}{=}\,\mathcal E
\mid \grb{\mathcal E}
\mid \rls{\mathcal E}.
\end{array}\]
\begin{figure}[t]
\begin{center}
$\begin{array}{rclcrcl}
\astep {s}{\skipcom\parc\skipcom}{s}{\skipcom} &\quad & \astep {s}{\cond{i}{N_1}{N_2}}{s}{N_1},\quad i\neq 0\\
\astep {s}{\skipcom;c}{s}{c} && \astep {s}{\cond{0}{N_1}{N_2}}{s}{N_2}\\
\astep {s}{\arop{i}}{s}{\widehat{\mathbf{op}}(i)} && \astep{s}{(\lambda x.M) N}{s}{M[N/x]}\\
\astep {s}{\newin{x\,\raisebox{0.065ex}{:}{=}\, i}c}{s}{c} && \astep {s\otimes(v\mapsto i)}{{!}v}{s\otimes(v\mapsto i)}{i}\\
\astep {s}{\newsem{x\,\raisebox{0.065ex}{:}{=}\, i}c}{s}{c} &&\astep {s\otimes(v\mapsto i)}{v\,\raisebox{0.065ex}{:}{=}\, i'}{s\otimes(v\mapsto i')}{\skipcom}
\end{array}$
\end{center}
\begin{center}
$\begin{array}{rcl}
\astep {s\otimes(v\mapsto 0)}{\grb v}{s\otimes(v\mapsto 1)}{\skipcom}\\
\astep {s\otimes(v\mapsto i)}{\rls v}{s\otimes(v\mapsto 0)}{\skipcom},\quad i\neq 0\\
\astep {s}{\while{M}{N}}{s}{\cond{M}{(N;\while{M}{N})}{\skipcom}}
\end{array}$
\end{center}
\caption{Reduction rules for $\fica$}
\label{fig:os}
\end{figure}
$\seq{}{M:\comt}$ is said to terminate, written $M\Downarrow$, if
$\emptyset \vdash M,\,\emptyset\longrightarrow^\ast \skipcom,\,\emptyset$.
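The reduction rules of Figure~\ref{fig:os}, restricted to a sequential fragment (no $\parc$, $\lambda$ or semaphores), can be animated directly. The sketch below uses our own tuple encoding of terms and slightly generalises $\skipcom;c\longrightarrow c$ to arbitrary right components; it is an illustration only.

```python
# Terms: ('skip',), ('num', i), ('seq', M, N), ('assign', v, M),
#        ('deref', v), ('if', M, N1, N2), ('while', M, N)
def step(term, s):
    """One small-step reduction over a state dict s (variable -> value)."""
    tag = term[0]
    if tag == 'seq':
        m, n = term[1], term[2]
        if m == ('skip',):
            return n, s                      # skip; N -> N
        m2, s2 = step(m, s)
        return ('seq', m2, n), s2
    if tag == 'deref':
        return ('num', s[term[1]]), s        # !v -> i
    if tag == 'assign':
        v, m = term[1], term[2]
        if m[0] == 'num':
            s2 = dict(s); s2[v] = m[1]
            return ('skip',), s2             # v := i -> skip
        m2, s2 = step(m, s)
        return ('assign', v, m2), s2
    if tag == 'if':
        m, n1, n2 = term[1], term[2], term[3]
        if m[0] == 'num':                    # branch on i != 0
            return (n1 if m[1] != 0 else n2), s
        m2, s2 = step(m, s)
        return ('if', m2, n1, n2), s2
    if tag == 'while':                       # unfold: while M N -> if M (N; while M N) skip
        return ('if', term[1], ('seq', term[2], term), ('skip',)), s
    raise ValueError(term)

def run(term, s):
    """Iterate small steps until skip is reached (termination)."""
    while term != ('skip',):
        term, s = step(term, s)
    return s
```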
\bigskip
Idealized Concurrent Algol~\cite{GM08} also features variable and semaphore constructors,
called $\textbf{mkvar}$ and $\textbf{mksem}$ respectively,
which play a technical role in the full abstraction argument, similarly to~\cite{AM97a}.
We omit them in the main body of the paper, because
they do not present technical challenges, but they are covered in the Appendix for the sake of completeness.
\paragraph{Typing rules}
\[
\AxiomC{$\Gamma\vdash M:\expt\rarr\comt$}
\AxiomC{$\Gamma\vdash N:\expt$}
\BinaryInfC{$\Gamma\vdash \mkvar{M}{N}:\vart$}
\DisplayProof
\quad
\AxiomC{$\Gamma\vdash M:\comt$}
\AxiomC{$\Gamma\vdash N:\comt$}
\BinaryInfC{$\Gamma\vdash \mksem{M}{N}:\semt$}
\DisplayProof
\]
\paragraph{Reduction rules}
\begin{align*}
\step {s&}{(\mkvar{M}{N})\,\raisebox{0.065ex}{:}{=}\, M'}{s}{M M'}\\
\step {s&}{{!}(\mkvar{M}{N})}{s}{N}\\
\step {s&}{\grb {\mathbf{mksem}\,M N}}{s}{M}\\
\step {s&}{\rls {\mathbf{mksem}\,M N}}{s}{N}
\end{align*}
\paragraph{$\eta$ rules for $\vart,\semt$}
\[\begin{array}{rcl}
M &\longrightarrow & \mkvar{(\lambda x^\expt. M\,\raisebox{0.065ex}{:}{=}\, x)}{!M}\\
M &\longrightarrow & \mksem{\grb{M}}{\rls{M}}
\end{array}\]
Using $\mathbf{mkvar}$ and $\mathbf{mksem}$,
one can define $\divcom_\theta$ as syntactic sugar using $\divcom=\divcom_\comt$ only.
\[
\divcom_\theta=\left\{
\begin{array}{lcl}
\divcom && \theta=\comt\\
\divcom;0 && \theta=\expt\\
\mkvar{\lambda x^\expt.\divcom}{\divcom_\expt} && \theta=\vart\\
\mksem{\divcom}{\divcom} & &\theta=\semt\\
\lambda x^{\theta_1}.\divcom_{\theta_2} & & \theta=\theta_1\rarr\theta_2\\
\end{array}\right.
\]
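The case analysis defining $\divcom_\theta$ is directly computable; the sketch below (a plain-text rendering of the terms, with our own notation for arrow types as nested pairs) follows it clause by clause.

```python
def fmt(theta):
    """Render a type; arrow types are right-nested pairs ('->', t1, t2)."""
    if isinstance(theta, str):
        return theta
    _, t1, t2 = theta
    left = fmt(t1) if isinstance(t1, str) else '(' + fmt(t1) + ')'
    return left + '->' + fmt(t2)

def div(theta):
    """Build div_theta from div = div_com only, following the case analysis."""
    if theta == 'com':
        return 'div'
    if theta == 'exp':
        return 'div;0'
    if theta == 'var':
        return 'mkvar(lambda x:exp.div, ' + div('exp') + ')'
    if theta == 'sem':
        return 'mksem(div, div)'
    _, t1, t2 = theta                       # theta = t1 -> t2
    return 'lambda x:' + fmt(t1) + '.' + div(t2)
```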
\section{Additional material for Section~\ref{sec:leafy}}
\label{apx:leafy}
\subsection{Proof of Lemma~\ref{lem:la1}}
We proceed by reducing from the halting problem for deterministic two-counter machines~\cite[pp.~255--258]{Min67}.
The input to the halting problem is a deterministic two-counter machine \\* $\mathcal{C} = (Q_\mathcal{C}, q_0, q_F, T)$, where $Q_\mathcal{C}$ is the set of states, $q_0, q_F \in Q_\mathcal{C}$ are the initial and final states respectively, and $T : Q_\mathcal{C} ~\setminus~\{q_F\} \rightarrow (\move{INC} \cup \move{JZDEC})$ is the step function. Steps in $\move{INC}$ are of the form $(i, q') \in \{1,2\} \times Q_\mathcal{C}$ (increment counter $i$ and go to state $q'$). Steps in $\move{JZDEC}$ are of the form $(i, q', q'') \in \{1,2\} \times Q_\mathcal{C} \times Q_\mathcal{C}$ (if counter $i$ is zero then go to state $q'$; else decrement counter $i$ and go to state $q''$). The question is whether, starting from $q_0$ with both counters zero, $\mathcal{C}$ eventually reaches $q_F$ with both counters zero.
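For reference, the machine model just described can be executed as follows; this sketch (names ours, with a fuel bound standing in for undecidability of halting) checks whether $q_F$ is reached with both counters zero.

```python
def run_counter_machine(q0, qF, T, fuel=10_000):
    """Run a deterministic two-counter machine C = (Q, q0, qF, T).
    T maps a non-final state to ('INC', i, q') or ('JZDEC', i, q', q'').
    Returns True iff qF is reached with both counters zero within fuel steps."""
    q, c = q0, {1: 0, 2: 0}
    for _ in range(fuel):
        if q == qF:
            return c[1] == 0 and c[2] == 0
        move = T[q]
        if move[0] == 'INC':                 # increment counter i, go to q'
            _, i, q2 = move
            c[i] += 1; q = q2
        else:                                # JZDEC: zero test / decrement
            _, i, qz, qnz = move
            if c[i] == 0:
                q = qz
            else:
                c[i] -= 1; q = qnz
    return False                             # no halt within the fuel bound
```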
\bigskip
We first construct a $1$-$\la$ that recognises the language of all data words such that:
\begin{itemize}
\item
the underlying word (i.e., the projection onto the finite alphabet) encodes a path through the transition relation of $\mathcal{C}$ from the initial state to the final state, in other words a pseudo-run where the non-negativity of counters and the correctness of zero tests are ignored;
\item
the occurrences of the letters that encode increments and decrements of $\mathcal{C}$ form pairs labelled by the same level-$1$ data value, with each increment occurring earlier than the matching decrement; since both counters are initially zero, this guarantees that they stay non-negative throughout the pseudo-run and are zero at the end.
\end{itemize}
The second $1$-$\la$ is slightly more complex. It accepts data words that have the same properties as those accepted by the first $1$-$\la$, and in addition:
\begin{itemize}
\item there exists some increment followed by a zero test of the same counter before a decrement with the same data value has occurred, in other words there is at least one incorrect zero test in the pseudo-run.
\end{itemize}
The two sets of accepted traces will be equal if and only if all pseudo-runs that satisfy the initial, non-negativity and final conditions necessarily contain some incorrect zero test, i.e.\ if and only if $\mathcal{C}$ does not halt as required. We give the formal construction below.
\bigskip
\newcommand\Aone{\mathcal{A}_1(\mathcal{C})}
\newcommand\Atwo{\mathcal{A}_2(\mathcal{C})}
The two $1$-$\la$s we construct are $\Aone = \langle \Sigma, 1, Q, \delta_1 \rangle$ and $\Atwo = \langle \Sigma, 1, Q, \delta_2 \rangle$.
The alphabet, $\Sigma = \Sigma_\Q \cup \Sigma_\A$, is defined as follows:
\[
\Sigma_\Q = \{ \move{start}, \move{inc_1}, \move{inc_2}, \move{zero_1}, \move{zero_2} \}
\qquad
\Sigma_\A = \{ \move{end}, \move{dec_1}, \move{dec_2}, \move{zero'_1}, \move{zero'_2} \}
\]
Traces of $\Aone$ and $\Atwo$ represent pseudo-runs of $\mathcal{C}$, i.e.~sequences of steps of the machine. Aside from $\move{start}$ and $\move{end}$, each letter in the trace corresponds to the machine performing either an $\move{INC}$ step ($\move{inc}$), the ``then'' of a $\move{JZDEC}$ step ($\move{zero}$), or the ``else'' of a $\move{JZDEC}$ step ($\move{dec}$). The $\move{zero'}$ transitions are needed to erase the leaves added by $\move{zero}$. Each of $\move{inc}$, $\move{dec}$, $\move{zero}$, $\move{zero'}$ has two variants, which encode $i$, the counter number in the corresponding step. We say that two letters \emph{match} if they have the same data value.
By construction $\Aone$ will accept exactly the traces with the following properties, which correspond to the high-level description of our first $1$-$\la$:
\begin{itemize}
\item The first letter in the trace is $\move{start}$ and the last is a matching $\move{end}$.
\item For each occurrence of $\move{inc_i}$, there is a matching $\move{dec_i}$ later in the trace.
\item For each occurrence of $\move{zero_i}$, there is a matching $\move{zero'_i}$ later in the trace.
\item The letters in the trace (excluding $\move{start}$ and $\move{end}$) form a sequence $(a_0,\ldots,a_{n-1})$; there exists some sequence of states $(s_0,\ldots,s_n) \in Q_\mathcal{C}^{n+1}$ such that for all $i \in \{0,\ldots,n-1\}$, $s_{i+1}$ appears as the second or third component of~$T(s_{i})$, and $a_i$ is a step which may be performed at state $s_i$ (irrespective of counter values).
\end{itemize}
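The first two matching conditions can be checked mechanically; the sketch below (an illustration only, with ASCII letter names of our choosing) verifies them on a trace given as a list of (letter, data value) pairs. It deliberately ignores the state-sequence condition and the tree discipline on data values, which the $1$-$\la$ itself enforces.

```python
def well_matched(trace):
    """Check that the trace starts with start and ends with a matching end,
    and that every inc_i / zero_i is followed by a dec_i / zero'_i
    carrying the same data value."""
    if not trace or trace[0][0] != 'start' or trace[-1][0] != 'end' \
            or trace[0][1] != trace[-1][1]:
        return False
    pairs = {'inc1': 'dec1', 'inc2': 'dec2',
             'zero1': "zero'1", 'zero2': "zero'2"}
    for k, (letter, d) in enumerate(trace):
        if letter in pairs:
            # a matching answer letter must occur later with the same datum
            if not any(l2 == pairs[letter] and d2 == d
                       for l2, d2 in trace[k + 1:]):
                return False
    return True
```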
The state space of the root, $Q^{(0)} = Q_{\mathcal{C}} \times \{\circ, \star, \mathbf{1}, \mathbf{2} \}$, comprises pairs where the first component corresponds to a state of $\mathcal{C}$ and the second tracks an observation of some invalid sequence. The second component is only used in $\Atwo$. We denote the pair at the root by square brackets. The states of the leaves at level 1 are $Q^{(1)} = \bigcup \big\{~ \{ i, 0_{i}, i\star \} ~\big|~ i \in \{1, 2\} ~\big\}$, where $0_{i}$ denotes a temporary leaf generated by $\move{zero_i}$, $i$ denotes a counter, and $i\star$ denotes a counter being observed in $\Atwo$.
The transition function $\delta_1$ of $\Aone$ is defined as follows.
\[
\dagger \trans{\move{start}}_1 [q_0, \circ]
\qquad
[q_F, \circ] \trans{\move{end}}_1 \dagger
\qquad
\frac{q \trans{\move{INC}} (i,q')~\in T}{
[q, \circ] \trans{\move{inc_i}}_1 ([q', \circ], i)
}
\]
\[
\frac{q \trans{\move{JZDEC}} (i, q', q'')~\in T}{
([q, \circ], i) \trans{\move{dec_i}}_1 [q'', \circ]
\qquad
[q, \circ] \trans{\move{zero_i}}_1 ([q', \circ], 0_{i})
}
\qquad
\frac{q \in Q_\mathcal{C}}
{
([q, \circ], 0_{i}) \trans{\move{zero'_i}}_1 [q, \circ]
}
\]
\bigskip
By construction $\Atwo$ accepts exactly those traces of $\Aone$ where at least one $\move{zero_i}$ letter occurs in between an $\move{inc_i}$ letter and the matching letter $\move{dec_i}$. In other words, the ``then'' of a $\move{JZDEC}$ step has been taken while the counter was nonzero. This is not a legal step, and so such a trace does not represent a computation of $\mathcal{C}$. This implements the high-level description of our second $1$-$\la$.
In order to accept a word, $\Atwo$ must change the second component of the root's state from $\star$ to $\circ$. It does this by nondeterministically choosing to observe some $\move{inc}$ transition. From here, it proceeds as in $\Aone$ until either it meets the matching $\move{dec}$, in which case the automaton rejects, or it meets a $\move{zero}$ transition on the same counter, at which point it marks the second component with $\circ$ and proceeds as in $\Aone$.
The transition function $\delta_2$ of $\Atwo$ is defined as follows:
\[
\dagger \trans{\move{start}}_2 [q_0, \star]
\qquad
[q_F, \circ] \trans{\move{end}}_2 \dagger
\qquad
\frac{
q \trans{\move{INC}} (i,q')~\in T
\qquad
x \in \{\circ, \star, \mathbf{1}, \mathbf{2}\}
}{
[q, x] \trans{\move{inc_i}}_2 ([q', x], i)
\qquad
[q, \star] \trans{\move{inc_i}}_2 ([q, \mathbf{i}], i\star)
}
\]
\[
\frac{
q \trans{\move{JZDEC}} (i,q',q'')~\in T
\qquad
x \in \{\circ, \star, \mathbf{1}, \mathbf{2}\}
}{
[q, x] \trans{\move{zero_i}}_2 ([q', x], 0_{i})
\qquad
[q, \mathbf{i}] \trans{\move{zero_i}}_2 ([q', \circ], 0_{i})
}
\qquad
\frac{
q \in Q_\mathcal{C}
\qquad
x \in \{\circ, \star, \mathbf{1}, \mathbf{2}\}
}{
([q, x], 0_{i}) \trans{\move{zero'_i}}_2 [q, x]
}
\]
\[
\frac{
q \trans{\move{JZDEC}} (i,q',q'')~\in T
\qquad
x \in \{\circ, \star, \mathbf{1}, \mathbf{2}\}
}{
([q, x], i) \trans{\move{dec_i}}_2 [q'', x]
\qquad
([q, \circ], i\star) \trans{\move{dec_i}}_2 [q'', \circ]
}
\]
\bigskip
$\Aone$ captures every correctness condition for halting computations of $\mathcal{C}$ except the legality of $\move{zero}$ steps. Hence, $\Atwo$ accepts exactly those accepted traces of $\Aone$ which are \emph{not} halting computations of $\mathcal{C}$, and so $\mathcal{C}$ performs a halting computation if and only if $\trace{\Aone} \neq \trace{\Atwo}$.
\section{Additional material for Section~\ref{sec:tola}}
\label{apx:tola}
\subsection{Proof of Theorem~\ref{thm:trans}}
Because every $\fica$-term can be converted to $\beta\eta$-normal form, we use induction on the structure of such normal forms.
The base cases are:
\begin{itemize}
\item $\seq{\Gamma}{\skipcom:\comt}$: $Q^{(0)}= \{0\}$, $\dagger \trans{\mrun} 0$,\,\, $0 \trans{\mdone} \dagger$;
\item $\seq{\Gamma}{\divcom_\comt:\comt}$: $Q^{(0)}= \{0\}$, $\dagger \trans{\mrun} 0$;
\item $\seq{\Gamma}{\divcom_\theta:\theta}$: $Q^{(0)}= \{0\}$, $\dagger \trans{m} 0$,
assuming $\theta=\theta_l\rarr\cdots\rarr\theta_1\rarr\beta$ and $m$ ranges over
question-moves from $M_{\sem{\beta}}$;
\item $\seq{\Gamma}{i:\expt}$: $Q^{(0)}= \{0\}$, $\dagger \trans{\q} 0$,\,\, $0 \trans{i} \dagger$.
\end{itemize}
Observe that they are clearly even-ready, because only one node is ever created.
The remaining cases are inductive.
Note that we will use $\mm$ to range over $\alp{\seq{\Gamma}{\theta}}+\{\eq,\ea\}$, i.e. not only $M_{\sem{\seq{\Gamma}{\theta}}}$, and
recall our convention that $m\in M_{\sem{\seq{\Gamma}{\theta}}}$ stands for $m^{(\epsilon,0)}$.
When referring to the inductive hypothesis, i.e. the automaton constructed for some subterm $M_i$,
we will use the subscript $i$ to refer to its components, e.g. $Q_i^{(j)}$, $\trans{\mm}_i$ etc.
In contrast, we shall use $Q^{(j)}$, $\trans{\mm}$ to refer to the automaton that is being constructed.
The construction will often use inference lines $\frac{\qquad}{\qquad}$ to indicate that the transitions listed under the line should be added
to the new automaton as long as the transitions listed above the line are present in an automaton given by the inductive hypothesis.
Sometimes we will invoke the inductive hypothesis for several terms, which can provide several automata of different depths.
Without loss of generality, we will then assume that they all have the same depth $k$, because an automaton of lower depth can be
viewed as one of higher depth.
\begin{itemize}
\item $\seq{\Gamma}{\arop{M_1}:\expt}$: $Q^{(j)}=Q_1^{(j)}$ ($0\le j\le k$). In order to interpret unary operators it suffices
to modify transitions carrying the final answer in the automaton for $M_1$. Formally, this is done as follows.
\[
\frac{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')}) \qquad \mm\neq i}{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm} (r_1^{(0)},\cdots, r_1^{(j')})}
\qquad
\frac{q_1^{(0)} \trans{i}_1 \dagger}{q_1^{(0)} \trans{\widehat{\mathbf{op}}(i)} \dagger}
\]
Above, $j$ ranges over $\{-1,0,\cdots, k\}$, so that $(q_1^{(0)}, \cdots, q_1^{(j)})$ can also stand for $\dagger$.
Even-readiness is preserved by the construction, because the
configuration graph of the original automaton is preserved.
\item $\seq{\Gamma}{M_1|| M_2}:\comt$: $Q^{(0)} = Q_1^{(0)} \times Q_2^{(0)}$,
$Q^{(j)}= Q_1^{(j)}+Q_2^{(j)}$ $(1\le j\le k)$.
The first group of transitions activates and terminates the two components, respectively:
\[
\frac{\dagger\trans{\mrun}_1 q_1^{(0)}\qquad \dagger\trans{\mrun}_2 q_2^{(0)}}{\dagger\trans{\mrun}(q_1^{(0)},q_2^{(0)}) }\qquad
\frac{q_1^{(0)}\trans{\mdone}_1 \dagger\qquad q_2^{(0)}\trans{\mdone}_2 \dagger}{(q_1^{(0)},q_2^{(0)})\trans{\mdone}\dagger}.
\]
The remaining transitions allow each component to progress.
\[
\frac{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad q_2^{(0)}\in Q_2^{(0)}\qquad \mm\neq\mrun,\mdone}{((q_1^{(0)},q_2^{(0)}), \cdots, q_1^{(j)}) \trans{\mm} ((r_1^{(0)},q_2^{(0)}),\cdots, r_1^{(j')})}
\]
\[
\frac{q_1^{(0)}\in Q_1^{(0)}\qquad (q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})\qquad \mm\neq\mrun,\mdone
}{((q_1^{(0)},q_2^{(0)}), \cdots, q_2^{(j)}) \trans{\mm} ((q_1^{(0)},r_2^{(0)}),\cdots, r_2^{(j')})}\]
Even-readiness at even levels different from $0$ follows from even-readiness of the automata obtained from the inductive hypothesis (IH), because the construction simply runs them
concurrently without interaction at these levels. For level $0$, we observe that, whenever the root reaches state $(q_1^{(0)},q_2^{(0)})$,
even-readiness of the two automata implies that each of them has removed all nodes below the root, i.e. the root will be a leaf.
\item $\seq{\Gamma}{M_1;M_2}:\comt$: $Q^{(i)} = Q^{(i)}_1 + Q^{(i)}_2$ ($0\le i\le k$).
We let the automaton for $M_1$ run first (except for the final step
$\mdone$):
\[
\frac{\dagger\trans{\mrun}_1 q_1^{(0)}}{\dagger\trans{\mrun}q_1^{(0)}}\qquad
\frac{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad \mm\neq\mdone}{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm} (r_1^{(0)},\cdots, r_1^{(j')})}.
\]
Whenever the automaton $M_1$ can terminate, we pass control to
the automaton for $M_2$ via
\[
\frac{q_1^{(0)}\trans{\mdone}_1 \dagger \qquad
\dagger \trans{\mrun}_2 q_2^{(0)} \qquad
q_2^{(0)} \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})\qquad \mm\neq \mrun}{
q_1^{(0)} \trans{\mm} (r_2^{(0)},\cdots, r_2^{(j')})}
\]
and allow it to continue
\[
\frac{
(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})\qquad \mm\neq \mrun}{(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm} (r_2^{(0)},\cdots, r_2^{(j')})}.
\]
Note that the construction relies crucially on even-readiness
of the automaton for $M_1$, because we move to the automaton for $M_2$
as soon as the automaton $M_1$ arrives at a configuration with level-$0$
state $q_1^{(0)}$ such that $q_1^{(0)}\trans{\mdone}_1 \dagger$.
Thanks to even-readiness, we can conclude that the root will be the only
node in the configuration then and the transition can indeed fire, i.e. $M_1$ is really finished.
Even-readiness of the new automaton follows from the fact that the original automata were even-ready, because we are re-using their transitions (and when the automaton for $M_2$ is active,
that for $M_1$ has not left any nodes).
\item $\seq{\Gamma}{M_1;M_2:\beta}$
The general case is nearly the same as the $\comt$ case presented
above, except that we need to keep track of which initial move has been played
in order to perform the transition to $M_2$ correctly.
This is especially important for $\beta=\vart,\semt$, where there are multiple initial moves.
This extra information will be stored at level $0$, while the automaton
corresponding to $M_1$ is active. Below we present a general construction
parameterized by the set $I$ of initial moves.
The set $I$ is defined as follows.
\begin{itemize}
\item $\beta=\comt$: $I=\{\mrun\}$
\item $\beta=\expt$: $I=\{\mq\}$
\item $\beta=\vart$: $I=\{\mread,\mwrite{0},\cdots, \mwrite{\imax}\}$
\item $\beta=\semt$: $I=\{\mgrb,\mrls\}$
\end{itemize}
\bigskip
States
\[\begin{array}{rcl}
Q^{(0)} &=& (Q^{(0)}_1\times I) + Q^{(0)}_2\\
Q^{(i)} &=& Q^{(i)}_1 + Q^{(i)}_2\qquad (0 < i\le k)
\end{array}\]
Transitions
\[
\frac{\dagger\trans{\mrun}_1 q_1^{(0)}\qquad x\in I}{\dagger\trans{x} (q_1^{(0)},x)}
\]
\[
\frac{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad \mm\neq\mdone\qquad x\in I}{((q_1^{(0)},x), \cdots, q_1^{(j)}) \trans{\mm} ((r_1^{(0)},x),\cdots, r_1^{(j')})}.
\]
\[
\frac{q_1^{(0)}\trans{\mdone}_1 \dagger \qquad
\dagger \trans{x}_2 q_2^{(0)} \qquad
q_2^{(0)} \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})\qquad x\in I\qquad \mm\not\in I}{
(q_1^{(0)},x) \trans{\mm} (r_2^{(0)},\cdots, r_2^{(j')})}
\]
\[
\frac{
(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})\qquad \mm\not\in I}{(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm} (r_2^{(0)},\cdots, r_2^{(j')})}
\]
None of the $M_1;M_2$ cases requires an adjustment of pointers, because the inherited indices are accurate.
\item $\seq{\Gamma}{\newin{x\,\raisebox{0.065ex}{:}{=}\, i}{M_1}}:\beta$.
By~\cite{GM08},
$\sem{\seq{\Gamma}{\newin{x\,\raisebox{0.065ex}{:}{=}\, i}{M_1}}}$ can be obtained by
\begin{itemize}
\item first restricting $\sem{\seq{\Gamma,x}{M_1}}$ to plays in which
the moves $\mread^x$, $\mwrite{n}^x$ are followed immediately by answers,
\item selecting only those plays in which each answer to a $\mread^x$-move is consistent
with the preceding $\mwrite{n}^x$-move (or equal to $i$, if no preceding $\mwrite{n}^x$ was made),
\item erasing all moves related to $x$, e.g. those of the form $m^{(x,\rho)}$.
\end{itemize}
To implement the above recipe, we will lock the automaton after each $\mread^x$- or $\mwrite{n}^x$-move, so that only an answer to that move can be played next.
Technically, this will be done by annotating the level-$0$ state with a $\mathit{lock}$-tag.
Moreover, at level $0$, we will also keep track of the current value of $x$. This will help us ensure that
answers to $\mread^x$ are consistent with the stored value and that $\mwrite{n}^x$ transitions cause the right change.
Eventually, all moves with the $x$ subscript will be replaced with $\eq,\ea$ to model hiding.
Accordingly,
we take $Q^{(0)}=(Q_1^{(0)} + (Q_1^{(0)}\times \{\mathit{lock}\})) \times\{0,\cdots,\imax\}$ and $Q^{(j)} = Q_1^{(j)}$ ($1\le j\le k$).
First, we make sure that the state component is initialised to $i$ and that it can be arbitrary at the very end:
\[
\frac{\dagger \trans{\mm}_1 q_1^{(0)}}{\dagger\trans{\mm} (q_1^{(0)},i)}
\qquad
\frac{q_1^{(0)} \trans{\mm}_1 \dagger\qquad 0\le n\le \imax}{(q_1^{(0)},n)\trans{\mm} \dagger}.
\]
Transitions involving moves other than $\mwrite{z}^x$, $\mok^x$, $\mread^x$, $z^x$ (and the moves handled above) are carried over unchanged,
preserving $n$ (the current value of $x$ recorded at level $0$):
\[
\frac{\begin{array}{lcl}
(q_1^{(0)},\cdots, q_1^{(j)})\trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')}) &\quad & \mm\neq \mread^x, z^x, \mwrite{z}^x,\mok^x\\
&& 0\le j,j'\qquad 0\le n\le \imax\qquad
\end{array}
}{((q^{(0)}_1,n),\cdots, q_1^{(j)})\trans{\mm}((r_1^{(0)},n),\cdots, r_1^{(j')})}.
\]
Transitions using $\mread^x$, $\mwrite{z}^x$ add a lock at level $0$.
The lock can be lifted only if a corresponding answer is played (because of the lock, a unique $\mwrite{z}^x$ or $\mread^x$ will be pending).
Its value must be consistent with the value of $x$ recorded at level $0$.
\[
\frac{(q_1^{(0)},\cdots, q_1^{(j)})\trans{\mwrite{z}^{(x,\rho)}}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad 0\le n,z\le \imax}{
((q_1^{(0)},n),\cdots, q_1^{(j)})\trans{\eq} ((r_1^{(0)},\lock, z),\cdots, r_1^{(j')})}
\]
\[
\frac{(q_1^{(0)},\cdots, q_1^{(j)})\trans{\mread^{(x,\rho)}}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad 0\le n\le \imax}{
((q_1^{(0)},n),\cdots, q_1^{(j)}) \trans{\eq} ((r_1^{(0)},\lock,n),\cdots, r_1^{(j')})}
\]
\[
\frac{(r_1^{(0)},\cdots, r_1^{(j')})\trans{\mok^x}_1 (t_1^{(0)},\cdots, t_1^{(j)})\qquad 0\le n\le \imax}{
((r_1^{(0)},\lock, n),\cdots, r_1^{(j')})\trans{\ea} ((t_1^{(0)},n),\cdots, t_1^{(j)})}
\]
\[
\frac{(r_1^{(0)},\cdots, r_1^{(j')})\trans{n^{x}}_1 (t_1^{(0)},\cdots, t_1^{(j)})\qquad 0\le n\le \imax}{
((r_1^{(0)},\lock, n),\cdots, r_1^{(j')})\trans{\ea} ((t_1^{(0)},n),\cdots, t_1^{(j)})}
\]
As the construction involves running the original automaton and transitions corresponding to P-answers
are not modified, even-readiness follows directly from IH. For the same reason, the indices corresponding
to justification pointers need no adjustment.
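The play-filtering discipline above (each read must be answered with the last written value, with $i$ as the default) also has a direct operational reading. The following Python sketch is purely illustrative; the helper \texttt{consistent\_with\_cell} and its event encoding are our own and not part of the construction:

```python
def consistent_with_cell(events, init):
    """Check that the x-moves of a play behave like a cell initialised to init.

    The lock discipline guarantees that each read/write question is answered
    immediately, so a play restricts to a list of completed operations:
    ('write', n) for a write(n)/ok pair, ('read', n) for a read/n pair.
    """
    value = init
    for kind, payload in events:
        if kind == 'write':
            value = payload                 # ok-answer: update the level-0 value
        elif kind == 'read' and payload != value:
            return False                    # answer inconsistent with stored value
    return True
```

Plays rejected by this check correspond to those discarded in the second step of the recipe above.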
\item The case of $\newsem{x\,\raisebox{0.065ex}{:}{=}\, i}{M_1}$ is similar to $\newin{x\,\raisebox{0.065ex}{:}{=}\, i}{M_1}$. We represent the state of the semaphore using an additional bit at level $0$,
where $0$ means free and $1$ means taken.
We let $Q^{(0)}=(Q_1^{(0)} + (Q_1^{(0)}\times \{\lock\})) \times\{0,1\}$ and $Q^{(j)} = Q_1^{(j)}$ ($1\le j\le k$).
First, we make sure the bit is initialised according to $i$ ($0$ if $i=0$, $1$ otherwise) and can be arbitrary at the very end.
\[
\frac{\dagger \trans{\mm}_1 q_1^{(0)}\qquad i=0}{\dagger\trans{\mm} (q_1^{(0)},0)}\qquad
\frac{\dagger \trans{\mm}_1 q_1^{(0)}\qquad i>0}{\dagger\trans{\mm} (q_1^{(0)},1)}\qquad
\frac{q_1^{(0)} \trans{\mm}_1 \dagger\qquad z\in\{0,1\}}{(q_1^{(0)},z)\trans{\mm} \dagger}
\]
Transitions involving moves other than $\mrls^{(x,\rho)}$, $\mgrb^{(x,\rho)}$ and $\mok^x$
proceed as before, while preserving the state of the semaphore.
\[
\frac{
(q_1^{(0)},\cdots, q_1^{(j)})\trans{\mm}_1(r_1^{(0)},\cdots, r_1^{(j')})\qquad z\in \{0,1\} \qquad \mm\neq \mrls^{(x,\rho)}, \mgrb^{(x,\rho)}, \mok^x
}{((q_1^{(0)},z),\cdots, q_1^{(j)})\trans{\mm}((r_1^{(0)},z),\cdots, r_1^{(j')})}
\]
Transitions using $\mrls^{(x,\rho)}$, $\mgrb^{(x,\rho)}$ proceed only if they are compatible with the current state of the semaphore,
as represented by the extra bit.
At the same time, each time $\mgrb^{(x,\rho)}$ or $\mrls^{(x,\rho)}$ is played,
we lock the automaton so that the corresponding answer can be played next.
The moves are then hidden and replaced with $\eq$ and $\ea$.
\[
\frac{(q_1^{(0)},\cdots, q_1^{(j)})\trans{\mgrb^{(x,\rho)}}_1 (r_1^{(0)},\cdots, r_1^{(j')})}{
((q_1^{(0)},0),\cdots, q_1^{(j)})\trans{\eq}
((r_1^{(0)},\lock,1),\cdots, r_1^{(j')})}\]
\[
\frac{(q_1^{(0)},\cdots, q_1^{(j)})\trans{\mrls^{(x,\rho)}}_1 (r_1^{(0)},\cdots, r_1^{(j')})}{
((q_1^{(0)},1),\cdots, q_1^{(j)})\trans{\eq} ((r_1^{(0)},\lock,0),\cdots, r_1^{(j')})}
\]
\[
\frac{(r_1^{(0)},\cdots, r_1^{(j')})\trans{\mok^x}_1 (t_1^{(0)},\cdots, t_1^{(j)})\qquad z\in\{0,1\}}{
((r_1^{(0)},\lock,z),\cdots, r_1^{(j')})\trans{\ea} ((t_1^{(0)},z),\cdots, t_1^{(j)})}
\]
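The interplay between the extra bit and the $\mgrb$/$\mrls$ moves admits a similar operational reading. Below is a small Python sketch with our own illustrative encoding; \texttt{semaphore\_accepts} is a hypothetical helper, not part of the construction:

```python
def semaphore_accepts(events, init):
    """Check that a sequence of completed grab/release operations respects
    the semaphore bit: grab only when free (0), release only when taken (1).
    init > 0 models a semaphore that starts in the taken state."""
    taken = init > 0
    for ev in events:
        if ev == 'grab':
            if taken:
                return False    # no transition from bit 1 on grab
            taken = True
        elif ev == 'release':
            if not taken:
                return False    # no transition from bit 0 on release
            taken = False
    return True
```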
\item $\seq{\Gamma}{f M_h \cdots M_1:\comt}$ with $(f: \theta_h\rarr\cdots\rarr\theta_1\rarr\comt)\in\Gamma$.
Note that this also covers the case $f:\comt$.
$Q^{(0)} = \{0,1,2\}$, $Q^{(1)}=\{0\}$, $Q^{(j+2)}= \sum_{u=1}^h Q_u^{(j)}$ ($0\le j\le k$).
First we add transitions corresponding to calling and returning from $f$: $\dagger \trans{\mrun} 0$,\quad $0\trans{\mrun^f} (1,0)$,\quad
$(1,0)\trans{\mdone^f} 2$,\quad $2\trans{\mdone} \dagger$.
In state $(1,0)$ we want to enable the environment to spawn an unbounded number of copies of each of $\seq{\Gamma}{M_u:\theta_u}$ ($1\le u\le h$).
This is done through the following rules, which embed the actions of the automata for $M_u$ while relabelling the moves.
\begin{itemize}
\item Moves from $M_u$ corresponding to $\theta_u$ obtain an additional annotation $f u$, as they are now the $u$th argument of
$f:\theta_h\rarr\cdots\rarr\theta_1\rarr\comt$.
\[
\frac{ (q_u^{(0)},\cdots, q_u^{(j)}) \trans{m^{(\vec{i},\rho)}}_u (q_u^{(0)},\cdots, q_u^{(j')})}{(1,0,q_u^{(0)},\cdots, q_u^{(j)}) \trans{m^{(f u \vec{i},\rho)}} (1,0,q_u^{(0)},\cdots, q_u^{(j')})}
\]
Note that above we mean $j,j'$ to range over $\{-1,0,\cdots, k\}$, so that $(q_u^{(0)}, \cdots, q_u^{(j)})$ and
$(q_u^{(0)},\cdots, q_u^{(j')})$ can also stand for $\dagger$.
The pointer structure is simply inherited in this case, but an additional pointer needs to be created to $\mrun^f$ from the old initial move for $M_u$, i.e. $m^{(\epsilon,0)}$,
which did not have a pointer earlier.
Fortunately, because we also use $\rho=0$ in initial moves to represent the lack of a pointer, by copying $0$ now
we indicate that the move $m^{(fu,0)}$ points one level up, i.e. at the new $\mrun^f$ move, as required.
\item The moves from $M_u$ that originate from $\Gamma$,
i.e. moves of the form $m^{(x_v\vec{i},\rho)}$ ($1\le v \le l$), where $(x_v:\theta_v)\in\Gamma$,
need no relabelling except for question moves that should point at the initial move.
These moves correspond to question-tags of the form $m^{(x_v,\rho)}$.
Leaving $\rho$ unchanged in this case would mean pointing at $m^{(fu,0)}$, whereas we need to point at $\mrun$ instead.
To readjust such pointers, we simply add $2$ to $\rho$, and
preserve $\rho$ in other moves.
\[
\frac{ (q_u^{(0)},\cdots, q_u^{(j)}) \trans{m^{(x_v,\rho)}}_u (q_u^{(0)},\cdots, q_u^{(j')})\qquad \textrm{$m$ is a question}}{(1,0,q_u^{(0)},\cdots, q_u^{(j)}) \trans{m^{(x_v,\rho+2)}} (1,0,q_u^{(0)},\cdots, q_u^{(j')})}
\]
\[
\frac{ (q_u^{(0)},\cdots, q_u^{(j)}) \trans{m^{(x_v\vec{i},\rho)}}_u (q_u^{(0)},\cdots, q_u^{(j')})\qquad\textrm{$\vec{i}\neq \epsilon$ or ($\vec{i}=\epsilon$ and $m$ is an answer)}}{(1,0,q_u^{(0)},\cdots, q_u^{(j)}) \trans{m^{(x_v \vec{i},\rho)}} (1,0,q_u^{(0)},\cdots, q_u^{(j')})}
\]
\end{itemize}
The construction clearly preserves even-readiness at level $0$.
For other even levels, this follows directly from IH as we are simply
running copies of the automata from IH.
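The two relabelling rules treat pointer indices differently; the distinction can be summarised by a tiny Python sketch (the \texttt{kind} classification is our own shorthand for the cases analysed above):

```python
def adjust_pointer(kind, rho):
    """Pointer adjustment when the automaton for M_u is embedded as the
    u-th argument of f.  kind is one of:
      'arg'           - a move of theta_u (now relabelled with the f u prefix),
      'free_question' - a question m^{(x_v,rho)} originating from Gamma,
      'free_other'    - any other move originating from Gamma."""
    if kind == 'free_question':
        return rho + 2   # skip the two newly inserted moves, to point at run
    return rho           # inherited indices remain accurate
```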
\item $\seq{\Gamma}{f M_h \cdots M_1:\expt}$. Here we follow the same recipe as for $\comt$ except that the initial and final transitions need to be changed from
\[
\dagger \trans{\mrun} 0\qquad 0\trans{\mrun^f} (1,0)\qquad (1,0)\trans{\mdone^f} 2\qquad 2\trans{\mdone} \dagger
\]
to
\[
\dagger \trans{\mq} 0\qquad 0\trans{\mq^f} (1,0)\qquad (1,0)\trans{i^f} 2^i\qquad 2^i\trans{i} \dagger.
\]
\item $\seq{\Gamma}{f M_h \cdots M_1:\vart}$. Here a slightly more complicated adjustment is needed to account for the two kinds of initial moves.
Consequently, we need to distinguish two copies of $1$, i.e. $1^r$ and $1^w$.
\[
\dagger \trans{\mread} 0\qquad 0\trans{\mread^f} (1^r,0)\qquad (1^r,0)\trans{i^f} 2^i\qquad 2^i\trans{i} \dagger.
\]
\[
\dagger \trans{\mwrite{i}} 0^i\qquad 0^i\trans{\mwrite{i}^f} (1^w,0)\qquad (1^w,0)\trans{\mok^f} 2\qquad 2\trans{\mok} \dagger.
\]
All the other rules allowing for transitions between states of the form $(1,0,\cdots)$ need to be replicated for $(1^r,0,\cdots)$ and $(1^w,0,\cdots)$.
\item $\seq{\Gamma}{f M_h \cdots M_1:\semt}$. This is similar to the previous case.
To account for the two kinds of initial moves, we use states $1^g$ and $1^r$.
\[
\dagger \trans{\mgrb} 0^g\qquad 0^g\trans{\mgrb^f} (1^g,0)\qquad (1^g,0)\trans{\mok^f} 2^g\qquad 2^g\trans{\mok} \dagger
\]
\[
\dagger \trans{\mrls} 0^r\qquad 0^r\trans{\mrls^f} (1^r,0)\qquad (1^r,0)\trans{\mok^f} 2^r\qquad 2^r\trans{\mok} \dagger
\]
All the other rules allowing for transitions between states of the form $(1,0,\cdots)$ need to be replicated for $(1^r,0,\cdots)$ and $(1^g,0,\cdots)$.
\item $\seq{\Gamma}{\lambda x.M_1: \theta_h\rarr \cdots\rarr \theta_1\rarr \beta}$: This is simply dealt with by renaming labels in the automaton for
$\seq{\Gamma,x:\theta_h}{M_1: \theta_{h-1}\rarr\cdots\rarr\theta_1\rarr\beta}$:
tags of the form $m^{(x\vec{i},\rho)}$ must be renamed as $m^{(h\vec{i},\rho)}$.
\item $\seq{\Gamma}{\cond{M_1}{M_2}{M_3}:\beta}$.
This case is similar to $M_1;M_2$ except that $M_1$ is of type $\expt$, so the associated move is $\mq$ rather than $\mrun$.
Moreover, once $M_1$ terminates, the automaton for either $M_2$ or $M_3$ must be activated, as appropriate.
Below, $I$ stands for the set of initial moves of type $\beta$: the initial move $x\in I$ of the composite automaton is recorded at level $0$, so that it can be replayed when the automaton for $M_2$ or $M_3$ is entered.
\bigskip
States
\[\begin{array}{rcl}
Q^{(0)} &=& (Q^{(0)}_1\times I) + Q^{(0)}_2 +Q^{(0)}_3\\
Q^{(i)} &=& Q^{(i)}_1 + Q^{(i)}_2 + Q^{(i)}_3\qquad (0 < i\le k)
\end{array}\]
Transitions
\[
\frac{\dagger\trans{\mq}_1 q_1^{(0)}\qquad x\in I}{\dagger\trans{x} (q_1^{(0)},x)}
\]
\[
\frac{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad \mm\not\in\{0,\cdots,\imax\}\qquad x\in I}{((q_1^{(0)},x), \cdots, q_1^{(j)}) \trans{\mm} ((r_1^{(0)},x),\cdots, r_1^{(j')})}.
\]
\[
\frac{q_1^{(0)}\trans{i}_1 \dagger \qquad i>0 \qquad
\dagger \trans{x}_2 q_2^{(0)} \qquad
q_2^{(0)} \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})\qquad x\in I\qquad \mm\not\in I}{
(q_1^{(0)},x) \trans{\mm} (r_2^{(0)},\cdots, r_2^{(j')})}
\]
\[
\frac{q_1^{(0)}\trans{0}_1 \dagger \qquad
\dagger \trans{x}_3 q_3^{(0)} \qquad
q_3^{(0)} \trans{\mm}_3 (r_3^{(0)},\cdots, r_3^{(j')})\qquad x\in I\qquad \mm\not\in I}{
(q_1^{(0)},x) \trans{\mm} (r_3^{(0)},\cdots, r_3^{(j')})}
\]
\[
\frac{
(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})\qquad \mm\not\in I}{(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm} (r_2^{(0)},\cdots, r_2^{(j')})}
\]
\[
\frac{
(q_3^{(0)}, \cdots, q_3^{(j)}) \trans{\mm}_3 (r_3^{(0)},\cdots, r_3^{(j')})\qquad \mm\not\in I}{(q_3^{(0)}, \cdots, q_3^{(j)}) \trans{\mm} (r_3^{(0)},\cdots, r_3^{(j')})}
\]
None of the cases requires an adjustment of pointers, because the inherited indices are accurate.
Even-readiness follows directly from IH.
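Operationally, the construction behaves like the following Python sketch, in which the three automata are replaced by hypothetical functional stand-ins:

```python
def run_cond(eval_m1, run_m2, run_m3):
    """'if M1 then M2 else M3': evaluate M1 first; a non-zero result
    activates the branch for M2, a zero result activates that for M3."""
    return run_m2() if eval_m1() > 0 else run_m3()
```

For instance, `run_cond(lambda: 13, lambda: 'skip', lambda: 'div')` yields `'skip'`.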
\item $\seq{\Gamma}{\while{M_1}{M_2}:\comt}$:
\bigskip
States
\[
Q^{(j)}= Q_1^{(j)}+Q_2^{(j)}\qquad 0\le j\le k
\]
Transitions
\[
\frac{\dagger\trans{\mq}_1 q_1^{(0)}}{\dagger\trans{\mrun} q_1^{(0)}} \qquad
\frac{q_1^{(0)}\trans{0}_1 \dagger}{ q_1^{(0)} \trans{\mdone} \dagger}
\]
\[
\frac{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad \mm\not \in\{\mq,0,\cdots,\imax\}}{(q_1^{(0)}, \cdots, q_1^{(j)})
\trans{\mm} (r_1^{(0)},\cdots, r_1^{(j')})}
\]
\[
\frac{q_1^{(0)}\trans{i}_1 \dagger \qquad i>0\qquad\dagger \trans{\mrun}_2 q_2^{(0)}\trans{\mm}_2 (r_2^{(0)}, r_2^{(1)})\qquad \mm\neq\mdone}{
q_1^{(0)}\trans{\mm} (r_2^{(0)}, r_2^{(1)})}
\]
\[
\frac{q_1^{(0)}\trans{i}_1 \dagger\qquad i>0 \qquad\dagger \trans{\mrun}_2 q_2^{(0)}\trans{\mdone}_2 \dagger\qquad \dagger\trans{\mq}_1 r_1^{(0)}
\trans{\mm}_1 (u_1^{(0)}, u_1^{(1)}) \qquad \mm\not\in\{0,\cdots,\imax\}}{
q_1^{(0)}\trans{\mm} (u_1^{(0)},u_1^{(1)})}
\]
\[
\frac{q_1^{(0)}\trans{i}_1 \dagger\qquad i>0 \qquad\dagger \trans{\mrun}_2 q_2^{(0)}\trans{\mdone}_2 \dagger\qquad \dagger\trans{\mq}_1 r_1^{(0)}
\trans{0}_1 \dagger }{
q_1^{(0)}\trans{\mdone} \dagger}
\]
\[
\frac{(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})\qquad \mm\not \in\{\mrun,\mdone\}}{(q_2^{(0)}, \cdots, q_2^{(j)})
\trans{\mm} (r_2^{(0)},\cdots, r_2^{(j')})}
\]
\[
\frac{q_2^{(0)}\trans{\mdone}_2 \dagger \qquad\dagger \trans{\mq}_1 q_1^{(0)}\trans{\mm}_1 (r_1^{(0)}, r_1^{(1)})\qquad \mm\not\in\{0,\cdots,\imax\}}{
q_2^{(0)}\trans{\mm} (r_1^{(0)}, r_1^{(1)})}
\]
\[
\frac{q_2^{(0)}\trans{\mdone}_2 \dagger \qquad\dagger \trans{\mq}_1 q_1^{(0)}\trans{0}_1 \dagger}{
q_2^{(0)}\trans{\mdone} \dagger}
\]
\[
\frac{q_2^{(0)}\trans{\mdone}_2 \dagger\qquad\dagger \trans{\mq}_1 q_1^{(0)}\trans{i}_1 \dagger\qquad i>0\qquad \dagger\trans{\mrun}_2 r_2^{(0)}
\trans{\mm}_2 (u_2^{(0)},u_2^{(1)})\qquad \mm\neq\mdone}{
q_2^{(0)}\trans{\mm} (u_2^{(0)}, u_2^{(1)})}
\]
As before, no pointers need adjustment, even-readiness is inherited.
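The scheduling encoded by the chained rules corresponds to the usual loop, as the Python sketch below illustrates (functional stand-ins for the two automata; the \texttt{fuel} parameter is ours, bounding a simulation that may otherwise diverge):

```python
def run_while(eval_m1, run_m2, fuel=1000):
    """'while M1 do M2': evaluate M1; on result 0 emit done, otherwise
    run M2 and re-enter M1, mirroring the chained transition rules."""
    while fuel > 0:
        fuel -= 1
        if eval_m1() == 0:
            return 'done'
        run_m2()
    return None  # fuel exhausted: the loop may diverge
```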
\item $\seq{\Gamma}{!M_1:\expt}$
To model dereferencing, it suffices to explore the plays of the automaton for $M_1$ that start with $\mread$; the initial $\mread$ is then relabelled to $\mq$.
\bigskip
States
\[
Q^{(j)}= Q_1^{(j)}\qquad (0\le j\le k)
\]
Transitions
\[
\frac{\dagger\trans{\mread}_1 q_1^{(0)}}{\dagger \trans{\mq} q_1^{(0)}}\qquad
\frac{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad \mm\neq\mread,\mwrite{i},\mok}{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm} (r_1^{(0)},\cdots, r_1^{(j')})}
\]
Note that the second rule will also handle transitions
with the tag $i$. No pointer readjustment is needed, as the inherited pointers are accurate.
Even-readiness follows from IH.
\item $\seq{\Gamma}{M_1\,\raisebox{0.065ex}{:}{=}\, M_2:\comt}$
For assignment, we first direct the computation into the automaton for $M_2$ and, depending on its final move $i$, continue
in the automaton for $M_1$ as if $\mwrite{i}$ had been played.
This is similar to $M_1;M_2$.
\bigskip
States
\[\begin{array}{rcl}
Q^{(i)} &=& Q^{(i)}_1 + Q^{(i)}_2\qquad (0 \le i\le k)
\end{array}\]
Transitions
\[
\frac{\dagger\trans{\mq}_2 q_2^{(0)}}{\dagger\trans{\mrun} q_2^{(0)}}
\]
\[
\frac{(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})\qquad \mm\not\in\{0,\cdots,\imax\} }{
(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm} (r_2^{(0)},\cdots, r_2^{(j')})}
\]
\[
\frac{q_2^{(0)}\trans{i}_2 \dagger \qquad i\in\{0,\cdots,\imax\}\qquad
\dagger \trans{\mwrite{i}}_1 q_1^{(0)} \qquad
q_1^{(0)} \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad \mm\neq\mok}{
q_2^{(0)} \trans{\mm} (r_1^{(0)},\cdots, r_1^{(j')})}
\]
\[
\frac{
(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad \mm\not\in \{\mread,\mwrite{0},\cdots,\mwrite{\imax},0,\cdots, \imax,\mok\}}{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm} (r_1^{(0)},\cdots, r_1^{(j')})}
\]
\[
\frac{q_1^{(0)}\trans{\mok}_1 \dagger}{ q_1^{(0)}\trans{\mdone} \dagger}
\]
None of the cases requires an adjustment of pointers, because the inherited indices are accurate.
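The flow of control in this construction can be summarised by a short Python sketch (again with hypothetical functional stand-ins for the two automata):

```python
def run_assign(eval_m2, write_m1):
    """'M1 := M2': the automaton for M2 computes a value i; the automaton
    for M1 is then entered as if write(i) had been played, and its final
    ok-answer is relabelled to done."""
    i = eval_m2()
    write_m1(i)
    return 'done'
```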
\item $\seq{\Gamma}{\grb{M_1}:\comt}$: $Q^{(j)}= Q_1^{(j)}$ ($0\le j\le k$). Here we simply need to direct the automaton to perform the same transitions as $M_1$ would, starting from $\mgrb$.
At the same time, $\mgrb$ and the corresponding answer $\mok$ have to be relabelled as $\mrun$ and $\mdone$ respectively.
\[
\frac{\dagger\trans{\mgrb}_1 q_1^{(0)}}{\dagger \trans{\mrun} q_1^{(0)}}\qquad
\frac{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad\mm\neq\mgrb,\mrls,\mok}{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm} (r_1^{(0)},\cdots, r_1^{(j')})}\qquad
\frac{q_1^{(0)}\trans{\mok}_1 \dagger}{q_1^{(0)} \trans{\mdone} \dagger}
\]
\item $\seq{\Gamma}{\rls{M_1}:\comt}$: $Q^{(j)}= Q_1^{(j)}$ ($0\le j\le k$). Here we simply need to direct the automaton to perform the same transitions as $M_1$ would, starting from $\mrls$.
At the same time, $\mrls$ and the corresponding answer $\mok$ have to be relabelled as $\mrun$ and $\mdone$ respectively.
\[
\frac{\dagger\trans{\mrls}_1 q_1^{(0)}}{\dagger \trans{\mrun} q_1^{(0)}}\qquad
\frac{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad\mm\neq\mgrb,\mrls,\mok}{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm} (r_1^{(0)},\cdots, r_1^{(j')})}\qquad
\frac{q_1^{(0)}\trans{\mok}_1 \dagger}{q_1^{(0)} \trans{\mdone} \dagger}
\]
{
\item $\seq{\Gamma}{\mkvar{M_1}{M_2}:\vart}$. Recall that $\seq{\Gamma}{M_1:\expt\rarr\comt}$.
Because we are working with terms in $\beta\eta$-normal form, $M_1=\lambda x^\expt.M_1'$.
For $0\le i\le\imax$, consider $N_i=M_1'[i/x]$, which is of smaller size than $M_1$. Let us apply IH to each $N_i$
and write $Q_{1i}^{(j)}$ and $\trans{}_{1i}$ for the components of the resultant automaton.
Let $Q^{(j)}= \sum_{i=0}^\imax Q_{1i}^{(j)} +Q_2^{(j)}$ ($0\le j\le k$).
In this case, after $\mwrite{i}$ we redirect transitions to the automaton for $N_i$ and, after $\mread$, to the automaton for $M_2$,
relabelling the initial and final moves as appropriate.
\[
\frac{\dagger\trans{\mrun}_{1i} q_{1i}^{(0)}\qquad 0\le i\le \imax}{\dagger \trans{\mwrite{i}} q_{1i}^{(0)}}\qquad
\frac{q_{1i}^{(0)}\trans{\mdone}_{1i} \dagger \qquad 0\le i\le \imax}{q_{1i}^{(0)} \trans{\mok} \dagger}
\]
\[
\frac{(q_{1i}^{(0)}, \cdots, q_{1i}^{(j)}) \trans{\mm}_{1i} (r_{1i}^{(0)},\cdots, r_{1i}^{(j')})\qquad\mm\neq\mrun,\mdone}{(q_{1i}^{(0)}, \cdots, q_{1i}^{(j)}) \trans{\mm} (r_{1i}^{(0)},\cdots, r_{1i}^{(j')})}
\]
\[
\frac{\dagger\trans{\mq}_2 q_2^{(0)}}{\dagger \trans{\mread} q_2^{(0)}}\qquad
\frac{(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})\qquad\mm\neq\mq,i}{(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm} (r_2^{(0)},\cdots, r_2^{(j')})}\qquad
\frac{q_2^{(0)}\trans{i}_2 \dagger}{q_2^{(0)} \trans{i} \dagger}
\]
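The dispatch performed by this construction (writes to the $N_i$, reads to $M_2$) can be sketched in Python with hypothetical functional stand-ins:

```python
def make_bad_variable(run_branch, eval_m2):
    """'mkvar M1 M2' as a 'bad variable': write(i) runs the automaton for
    N_i = M1'[i/x] (its done relabelled to ok), while read runs the
    automaton for M2 (its q and numeric answer relabelled accordingly)."""
    def write(i):
        run_branch(i)
        return 'ok'
    def read():
        return eval_m2()
    return write, read
```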
\item $\seq{\Gamma}{\mksem{M_1}{M_2}:\semt}$. $Q^{(j)}= Q_1^{(j)}+Q_2^{(j)}$ ($0\le j\le k$). In this case, after $\mgrb$ we redirect transitions to the automaton for $M_1$ and, after $\mrls$, to the automaton for $M_2$.
\[
\frac{\dagger\trans{\mrun}_1 q_1^{(0)}}{\dagger \trans{\mgrb} q_1^{(0)}}\qquad
\frac{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm}_1 (r_1^{(0)},\cdots, r_1^{(j')})\qquad\mm\neq\mrun,\mdone}{(q_1^{(0)}, \cdots, q_1^{(j)}) \trans{\mm} (r_1^{(0)},\cdots, r_1^{(j')})}\qquad
\frac{q_1^{(0)}\trans{\mdone}_1 \dagger}{q_1^{(0)} \trans{\mok} \dagger}
\]
\[
\frac{\dagger\trans{\mrun}_2 q_2^{(0)}}{\dagger \trans{\mrls} q_2^{(0)}}\qquad
\frac{(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm}_2 (r_2^{(0)},\cdots, r_2^{(j')})\qquad\mm\neq\mrun,\mdone}{(q_2^{(0)}, \cdots, q_2^{(j)}) \trans{\mm} (r_2^{(0)},\cdots, r_2^{(j')})}\qquad
\frac{q_2^{(0)}\trans{\mdone}_2 \dagger}{q_2^{(0)} \trans{\mok} \dagger}
\]
}
\end{itemize}
\section{Additional material for Section~\ref{sec:tosla}}
\subsection{Proof of Theorem~\ref{thm:trans2}}
We start with a technical lemma that identifies
the level of moves corresponding to free variables
of type $\vart$ and $\semt$.
Given $x:\vart$, moves of the form $\mwrite{i}^{(x,\rho)}$
and $\mread^{(x,\rho)}$ (by P) will be referred to
as the associated questions, while
$\mok^{(x,\rho)}$ and $i^{(x,\rho)}$ (by O) will be called
the associated answers.
We use analogous terminology for $x:\semt$: the associated questions
are $\mgrb^{(x,\rho)}$ and $\mrls^{(x,\rho)}$, while the associated answer is $\mok^{(x,\rho)}$.
\begin{lemma}\label{lem:ade}
Given a $\fica$-term $\seq{\Gamma}{M:\theta}$ in
$\beta\eta$-normal form, let $\Aut_M$ be the automaton produced
by Theorem~\ref{thm:trans}.
For any $x:\vart$ or $x:\semt$ such that $\ade{x}{M}=i$,
the transitions corresponding to
the moves associated with $x$ add/remove leaves
at odd levels $1,3,\cdots, 2i-1$.
\end{lemma}
\begin{proof}
We reason by induction on $M$, inspecting each construction in turn.
For $M\equiv \skipcom,\divcom,i$, the result holds vacuously,
because there are no moves associated with $x$ ($i=0$).
In the following cases, $\ade{x}{M}$ is calculated
by taking the maximum of $\ade{x}{M'}$ for subterms
and the automata constructions
never modify the level of transitions in automata
obtained by IH.
Consequently, the lemma can be established
by appeal to IH:
$M_1||M_2$, $M_1;M_2$, $\cond{M_1}{M_2}{M_3}$,
$\while{M_1}{M_2}$, $!M_1$, $M_1\,\raisebox{0.065ex}{:}{=}\, M_2$,
$\grb{M_1}$, $\rls{M_1}$, $\newin{y}{M_1}$,
$\newsem{y}{M_1}$.
The remaining case is $M\equiv f M_h\cdots M_1$.
\begin{itemize}
\item Note that this case also covers $f\equiv x$, in which
case $\ade{x}{M}=1$ and transitions associated with $x$
involve leaves at level $2\cdot 1-1=1$, as required.
\item If $f\not\equiv x$ then $\ade{x}{M}=1+\max(\ade{x}{M_1},\cdots, \ade{x}{M_h})$.
In this case, the automata construction lowers transitions associated with $x$ by exactly two levels,
so by IH, they will appear at levels $1+2,\cdots, (2i-1)+2$. Note that $(2i-1)+2 = 2(i+1)-1$, i.e. the lemma holds.
\end{itemize}
\end{proof}
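The recurrences used in this proof can be transcribed directly. The Python sketch below uses a toy term encoding of our own (\texttt{('const',)} for terms without $x$-moves, \texttt{('app', f, args)} for applications, \texttt{('op', subterms)} for all remaining constructs); it follows the quoted recurrences literally, so, like them, it does not special-case applications in which $x$ is absent:

```python
def ade(x, term):
    """Depth measure from the proof: applications headed by x contribute 1;
    other applications add 1 to the maximum over their arguments; every
    other construct takes the maximum over its subterms."""
    kind = term[0]
    if kind == 'const':
        return 0
    if kind == 'app':
        f, args = term[1], term[2]
        if f == x:
            return 1
        return 1 + max((ade(x, a) for a in args), default=0)
    return max((ade(x, t) for t in term[1]), default=0)
```

For instance, `ade('x', ('app', 'f', (('app', 'x', ()),)))` evaluates to $2$, matching the shift by two levels in the application case.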
Observe that subterms of $\lfica$ terms are in $\lfica$,
i.e. we can reason by structural induction.
\begin{lemma}
Suppose $\seq{\Gamma}{M:\theta}$ is from $\lfica$.
The automaton $\clg{A}_M$ obtained from the translation in Theorem~\ref{thm:trans}
is presentable as a $\sla$.
\end{lemma}
\begin{proof}
\begin{description}
\item In many cases, the construction merely relabels the given automaton.
Then a simple appeal to the inductive hypothesis will suffice.
The relevant cases are: $!M_1, \arop{M_1}, \rls{M_1}, \grb{M_1}, \lambda x. M_1$.
\item[$M\equiv M_1 || M_2$]
The case of parallel composition involves running copies of $M_1$ and $M_2$ in parallel without communication,
with their root states stored as a pair at level $0$. Note, though, that each of the automata transitions independently
of the state of the other automaton, which means that, if the automata for $M_1$ and $M_2$ are $\sla$,
so is the automaton for $M_1 || M_2$.
The branching bound after the construction is the sum of the two bounds for $M_1$ and $M_2$.
\item [$M\equiv M_1;M_2$] The construction schedules the automaton for $M_1$ first and there is a transition to (a disjoint copy of) the second one only after the configuration of the first automaton consists of the root only. Otherwise the automata never communicate. As the transition from the first to the second automaton happens at the root, it can be captured as a $\sla$ transition. Consequently,
if the automata for $M_1,M_2$ are $\sla$, so is the automaton for $M$. Here the branching bound
is simply the maximum of the bounds for $M_1$ and $M_2$.
The same argument applies to $\cond{M_1}{M_2}{M_3}$, $M_1\,\raisebox{0.065ex}{:}{=}\, M_2$.
\item[$M\equiv \newin{x\,\raisebox{0.065ex}{:}{=}\, i}{M_1}$]
Transitions of the automaton for $M_1$ that are not associated with $x$ are embedded into the automaton
for $M$, except that, at level $0$, the new automaton keeps track of the current value stored in $x$. Because these transitions proceed uniformly, without ever depending on the value stored at the root, this is consistent with $\sla$ behaviour.
For transitions associated with $x$, we note that,
because $M$ is from $\lfica$, we have $\ade{x}{M_1}\le 2$.
By Lemma~\ref{lem:ade}, this means that the transitions
related to $x$ correspond to creating/removing leaves at either level $1$ or $3$.
These transitions need to read/write the root but, because they otherwise concern only leaves at level $1$ or $3$,
they will be consistent with the definition of a $\sla$.
All other transitions (not labelled by $x$) proceed as in $M$ and
need not consult the additional information about the current state
stored in the root (the extra information is simply propagated). Consequently, if $M_1$ is represented by a $\sla$ then the interpretation of $\newin{x\,\raisebox{0.065ex}{:}{=}\, i}{M_1}$ is also a $\sla$. The construction does not affect the branching bound, because
the resultant runs can be viewed as a subset of runs of the automaton for $M_1$, i.e. those in which reads and writes are related.
The case of $M\equiv \newsem{x\,\raisebox{0.065ex}{:}{=}\, i}{M_1}$ is analogous.
\item[$M\equiv f M_h\cdots M_1$]
For $f M_h\cdots M_1$, we observe that the construction first
creates two nodes at levels $0$ and $1$, and the node at level $1$ is used
to run an unbounded number of copies of (the automaton for) $M_i$.
The copies do not need access to the states stored at levels $0$ and $1$,
because they are never modified when the copies are running.
Consequently, if each $M_i$ can be translated into a $\sla$,
the outcome of the construction in Theorem~\ref{thm:trans} is also a $\sla$. The new branching bound is the maximum over bounds from $M_1,\cdots, M_h$, because at even levels children are produced as in $M_i$ and
level $0$ produces only $1$ child.
\end{description}
\end{proof}
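The case analysis combines branching bounds in three ways: sums for parallel composition, maxima for the sequencing-like cases, and maxima over arguments for application. This can be sketched in Python over a toy term encoding of our own:

```python
def branching_bound(term):
    """Combine branching bounds as in the proof: ('leaf', b) carries a known
    bound, ('par', l, r) sums the bounds, ('seq', l, r) takes the maximum,
    and ('app', args) takes the maximum over the arguments."""
    kind = term[0]
    if kind == 'leaf':
        return term[1]
    if kind == 'par':
        return branching_bound(term[1]) + branching_bound(term[2])
    if kind == 'seq':
        return max(branching_bound(term[1]), branching_bound(term[2]))
    if kind == 'app':
        return max((branching_bound(t) for t in term[1]), default=0)
    raise ValueError(kind)
```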
\subsection{Example}\label{apx:example}
Here is a worked example of Theorem \ref{thm:trans} for the term $t = $
\[
\seq{f:\comt\rarr\comt}{\newin{x\,\raisebox{0.065ex}{:}{=}\, 0}{(f (x\,\raisebox{0.065ex}{:}{=}\, 1 || x\,\raisebox{0.065ex}{:}{=}\, 13)\, ||\, \cond{!x=13}{\,\skipcom\,}{\,\divcom})}}
\]
We will show some simple subterms of this term, and then how to combine them using $||$ and introduce $\textbf{newvar}$. We will first construct the sub-automaton representing the following subterm:
\[
f (x\,\raisebox{0.065ex}{:}{=}\, 1 || x\,\raisebox{0.065ex}{:}{=}\, 13)
\]
For convenience we will call this subterm $w$ as in ``write''. The states for $\mathcal{A}(w)$ are as follows:
\begin{align*}
Q_w^{(0)} &= \{0_w, 1_w, 2_w\} & \qquad Q_w^{(1)} &= \{0_w\} \\
Q_w^{(2)} &= \{0_1,1_1,2_1\} \times \{0_{13},1_{13},2_{13}\} & \qquad Q_w^{(3)} &= \{0_1,0_{13}\}
\end{align*}
Note: in the standard construction, the states would not be annotated with the subscripts given. We show them here to emphasise that the union operation performed when combining branches is the \emph{disjoint} union of the states from each side.
The transitions for $\mathcal{A}(w)$ are as follows. In the transitions below, symbolic values (e.g. $u$ or $v$) stand for one transition for every possible value that may appear in their place.
\[
\dagger \trans{\mathsf{run}} 0_w
\qquad
2_w \trans{\mathsf{done}} \dagger
\]
\[
0_w \trans{\mathsf{run}^{(f,0)}} (1_w, 0_w)
\qquad
(1_w, 0_w) \trans{\mathsf{done}^{(f,0)}} 2_w
\]
\[
(1_w, 0_w) \trans{\mathsf{run}^{(f1,0)}} (1_w, 0_w, (0_1, 0_{13}))
\qquad
(1_w, 0_w, (2_1, 2_{13})) \trans{\mathsf{done}^{(f1,0)}} (1_w, 0_w)
\]
\[
(1_w, 0_w, (0_1, v)) \trans{\mwrite{1}^{(x,2)}} (1_w, 0_w, (1_1, v), 0_1)
\qquad
(1_w, 0_w, (1_1, v), 0_1) \trans{\mathsf{ok}^{(x,0)}} (1_w, 0_w, (2_1, v))
\]
\[
(1_w, 0_w, (u, 0_{13})) \trans{\mwrite{13}^{(x,2)}} (1_w, 0_w, (u, 1_{13}), 0_{13})
\qquad
(1_w, 0_w, (u, 1_{13}), 0_{13}) \trans{\mok^{(x,0)}} (1_w, 0_w, (u, 2_{13}))
\]
where $u \in \{0_1,1_1,2_1\}$ and $v\in \{0_{13},1_{13},2_{13}\}$.
\vspace{3mm}
We now do the same for the following term, $r$ (for ``read''):
\[
\cond{!x=13}{\,\skipcom\,}{\,\divcom}
\]
The states for $\mathcal{A}(r)$ are simpler, as this term is shallow.
\[
Q_r^{(0)} = \{ 0_r, 1_r, 2_r^0, \cdots, 2_r^{\imax} \}
\qquad
Q_r^{(1)} = \{ 0_r \}
\]
The transitions for $\mathcal{A}(r)$ are as follows.
\[
\dagger \trans{\mathsf{run}} 0_r
\qquad
2_r^{13} \trans{\mdone} \dagger
\]
\[
0_r \trans{\mread^{(x,0)}} (1_r, 0_r)
\qquad
(1_r,0_r) \trans{z^{(x,0)}} 2_r^{z}
\]
where $z\in\{0,\cdots,\imax\}$.
Observe that only reaching state $2_r^{13}$ (hence, reading the value $13$ from $x$) will allow this automaton to terminate.
\vspace{3mm}
Combining these two automata is relatively simple. We will first apply the procedure for parallel composition ($||$), and then apply the $\textbf{newvar}$ context. See Theorem \ref{thm:trans} for the precise workings of these steps. The final automaton $\mathcal{A}(t)$ for our term $t$ is as follows.
\vspace{1mm}
States:
\begin{align*}
Q^{(0)} = (Q^{\prime (0)}+ Q^{\prime (0)} \times \{lock\}) \times X\\
\text{ where } Q^{\prime (0)} = Q_w^{(0)} \times Q_r^{(0)} \text{ and } X = \{0,\cdots,\imax\}
\end{align*}
\[
Q^{(1)} = Q_r^{(1)} + Q_w^{(1)} \qquad \qquad
Q^{(2)} = Q_w^{(2)} \qquad \qquad
Q^{(3)} = Q_w^{(3)}
\]
\vspace{1mm}
Transitions:
\[
\dagger \trans{\mathsf{run}} ((0_w, 0_r), 0)
\qquad
((2_w, 2_r^{13}), n) \trans{\mathsf{done}} \dagger
\]
\[
((0_w, b), n) \trans{\mathsf{run}^{(f,0)}} (((1_w, b), n), 0_w)
\qquad
(((1_w, b), n), 0_w) \trans{\mathsf{done}^{(f,0)}} ((2_w, b), n)
\]
\begin{align*}
(((1_w, b),n), 0_w)
& \trans{\mathsf{run}^{(f1,0)}} (((1_w, b), n), 0_w, (0_1, 0_{13}))
\\
(((1_w, b), n), 0_w, (2_1, 2_{13}))
& \trans{\mathsf{done}^{(f1,0)}} (((1_w, b),n), 0_w)
\\\\
(((1_w, b), n), 0_w, (0_1, v))
& \trans{\eq} (((1_w, b), lock, 1), 0_w, (1_1, v), 0_1)
\\
(((1_w, b), lock, n), 0_w, (1_1, v), 0_1)
& \trans{\ea} (((1_w, b), n), 0_w, (2_1, v))
\\
(((1_w, b), n), 0_w, (u, 0_{13}))
& \trans{\eq} (((1_w, b), lock, 13), 0_w, (u, 1_{13}), 0_{13})
\\
(((1_w, b), lock, n), 0_w, (u, 1_{13}), 0_{13})
& \trans{\ea} (((1_w, b), n), 0_w, (u, 2_{13}))
\\\\
((a, 0_r), n)
& \trans{\eq} (((a, 1_r), lock, n), 0_r)
\\
(((a, 1_r), lock, n),0_r)
& \trans{\ea} ((a, 2_r^{n}), n)
\end{align*}
where $u \in \{0_1,1_1,2_1\}$, $v\in \{0_{13},1_{13},2_{13}\}$, $a \in \{ 0_w, 1_w, 2_w \}$, $b \in Q_r^{(0)}$ and $n\in\{0,\cdots,\imax\}$.
\section{Proof of Theorem \ref{thm:sla-decidable}}
\label{apx:sla-decidable}
{
\newcommand{\mapping}[1]{\ensuremath{\{~#1~\}}}
\renewcommand{\and}{\ensuremath{,~}}
For every even layer $i = k, k-2, \ldots, 0$, we construct summary sets of the following shape:
\[
S^{(i)}_x \subseteq Q^{(i)} \times (Q^{(i-2, i-1)} \times Q^{(i-2, i-1)})^{\leq 2b} \times Q^{(i)},\quad \text{with elements } (\alpha, s_1, \ldots, s_r, \omega)
\]
where $Q^{(i-2,i-1)}$ abbreviates $Q^{(i-2)}\times Q^{(i-1)}$ and $X^{\leq 2b}$ denotes the set of sequences of elements of $X$ of length at most $2b$, twice the even-layer bound. In words, such a summary indicates that a node at layer $i$ may go from state $\alpha$ to state $\omega$, being child-free at each end, and performing at most $r \leq 2b$ stateful read-writes $\mathsf{INT}_{x,r}$ of the next even layer up. For simplicity, we will assume that every operation on values above is an atomic read-write.
\paragraph* {Inductive step.}
Suppose that we have the summary sets at level $i+2$.
We construct a VASS query for every possible combination of
\begin{itemize}
\item start state $\alpha \in Q^{(i)}$;
\item end state $\omega \in Q^{(i)}$;
\item read value $(r, \ensuremath{q^{(i-2)}}), (r,\ensuremath{q^{(i-1)}})$ in each of the $r \in [0,2b]$ possible instances of reading the state above;
\item write value $(r, \ensuremath{q^{\prime(i-2)}}), (r, \ensuremath{q^{\prime(i-1)}})$ in each of the $r \in [0,2b]$ possible instances of writing to the state above.
\item integer $u \in [0,b]$ of \emph{used} children.
\end{itemize}
The reachability query asks whether we can, from
\[
\left\{
\begin{aligned}
\forall r \in [0,2u-1] \colon (\mathsf{read}, r, \ensuremath{q^{(i-2)}}) = 1, (\mathsf{read}, r, \ensuremath{q^{(i-1)}}) = 1 & \qquad
\alpha = 1 \\
\forall j \in [1,b] \colon (j, \bot) = 1 & \qquad
(c,0) = 1
\end{aligned}
\right\}
\]
all other places zero, reach
\[
\left\{
\begin{aligned}
\forall r \in [0,2u-1] \colon (\mathsf{write}, r, \ensuremath{q^{\prime(i-2)}}) = 1, (\mathsf{write}, r, \ensuremath{q^{\prime(i-1)}}) = 1 & \qquad
\omega = 1 \\
\forall j \in [1, u] \colon (j, \top) = 1 & \qquad
\forall j \in [u+1, b] \colon (j, \bot) = 1\\
(c, 2u) = 1
\end{aligned}
\right\}
\]
all other places zero.
\ad{$(c,2u)$ means the stateful read-write counter is at position $2u$, i.e. $2u$ stateful reads-writes have been performed. This corresponds to creating and destroying $u$ children.}
If the reachability query succeeds, then we add
\[(\alpha, ((\ensuremath{q^{(i-2)}}_0, \ensuremath{q^{(i-1)}}_0),(\ensuremath{q^{\prime(i-2)}}_0, \ensuremath{q^{\prime(i-1)}}_0)) ... ((\ensuremath{q^{(i-2)}}_{2u-1}, \ensuremath{q^{(i-1)}}_{2u-1}),(\ensuremath{q^{\prime(i-2)}}_{2u-1}, \ensuremath{q^{\prime(i-1)}}_{2u-1})), \omega)\]
to the summary set at level $i$.
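Schematically, the inductive step is an enumeration of candidate summaries filtered by reachability queries. A minimal Python sketch (illustrative only: \texttt{vass\_reachable} is a hypothetical stand-in for the VASS reachability oracle built above, and the enumeration of candidate read/write sequences and of $u$ is left abstract):

```python
from itertools import product

def build_summary_set(Q_i, candidate_seqs, vass_reachable):
    """Collect level-i summaries (alpha, interrupt sequence, omega).
    `vass_reachable` is a hypothetical stand-in for the VASS reachability
    query described in the text; `candidate_seqs` enumerates candidate
    read/write sequences of length at most 2b."""
    summaries = set()
    for alpha, omega in product(Q_i, repeat=2):
        for seq in candidate_seqs:
            if vass_reachable(alpha, seq, omega):
                summaries.add((alpha, tuple(seq), omega))
    return summaries
```

On a toy two-state example with a dummy oracle this produces the expected triples; in the actual construction, each oracle call is one reachability instance per combination listed above.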
\paragraph{Places.}
The complete set of places required is as follows:
\vspace{3mm}
\textbf{Read/Write simulation}
\begin{itemize}
\item For each element of $r \in [0,2b]$, we need a gadget representing an atomic read-write, with:
\begin{itemize}
\item A place $(\mathsf{read}, r, q^{(i-2)})$ for each state $q \in Q^{(i-2)}$;
\item A place $(\mathsf{read}, r, q^{(i-1)})$ for each state $q \in Q^{(i-1)}$;
\item A place $(\mathsf{write}, r, q^{(i-2)})$ for each state $q \in Q^{(i-2)}$;
\item A place $(\mathsf{write}, r, q^{(i-1)})$ for each state $q \in Q^{(i-1)}$.
\end{itemize}
\item We also need a counter gadget with $2b+1$ places $(\mathsf{c}, 0), ..., (\mathsf{c}, 2b)$, which tracks how many of the read-writes have been used.
\end{itemize}
\textbf{Level-$i$ state}
\begin{itemize}
\item For each element $q^{(i)} \in Q^{(i)}$, we need one state $q^{(i)}$.
\end{itemize}
\textbf{Children}
\begin{itemize}
\item For each element of $j \in [1,b]$, we need a gadget representing a child of the level-$i$ node, with:
\begin{itemize}
\item A place $(j, q^{(i+1)})$ for each element $q^{(i+1)} \in Q^{(i+1)} \cup \{\bot, \top\}$;
\item A place $(j, q^{(i+2)})$ for each element $q^{(i+2)} \in Q^{(i+2)}$;
\item A place $(j, \mathsf{INT}_{x,n})$ for each ``interrupt point'' $\mathsf{INT}_{x,n}$ in each summary $S^{(i+2)}_x \in S^{(i+2)}$.
\end{itemize}
\end{itemize}
The addition of $\bot$ and $\top$ at level $i+1$ ensures that we do not reuse child gadgets.
\paragraph*{Transitions.}
The transition generators are given below.
\textbf{Adding a child at layer $i+1$:}
\[
\frac
{(\ensuremath{q^{(i-2)}}, \ensuremath{q^{(i-1)}}, \ensuremath{q^{(i)}}) \xrightarrow{x_Q} (\ensuremath{q^{\prime(i-2)}}, \ensuremath{q^{\prime(i-1)}}, \ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}}) \in \delta_Q \qquad j \in [1,b] \qquad r \in [0,2b-1]}
{\left\{\begin{aligned}
(\mathsf{read}, r, \ensuremath{q^{(i-2)}}) \mapsto -1 & \and (\mathsf{write}, r, \ensuremath{q^{\prime(i-2)}}) \mapsto +1 \\
(\mathsf{read}, r, \ensuremath{q^{(i-1)}}) \mapsto -1 & \and (\mathsf{write}, r, \ensuremath{q^{\prime(i-1)}}) \mapsto +1 \\
\ensuremath{q^{(i)}} \mapsto -1 & \and \ensuremath{q^{\prime(i)}} \mapsto +1 \\
(j, \bot) \mapsto -1 & \and (j,\ensuremath{q^{\prime(i+1)}}) \mapsto +1 \\
(c, r) \mapsto -1 & \and (c, r+1) \mapsto +1
\end{aligned} \right\} \in T}
\]
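Read concretely, each instance of a generator is just a vector of token changes over the places. A minimal sketch of the rule above (the place encoding is illustrative, not the one fixed by the construction; state names at different levels are assumed distinct):

```python
from collections import Counter

def add_child_at_i1(q2, q1, qi, q2_, q1_, qi_, qi1_, j, r):
    """Token changes for one instance of the 'adding a child at layer i+1'
    generator.  Place names follow the text: (read, r, q) / (write, r, q)
    gadget places, level-i state places, child-slot places (j, .), and
    counter places (c, r).  Illustrative encoding only."""
    d = Counter()
    d[("read", r, q2)] -= 1;  d[("write", r, q2_)] += 1   # layer i-2 read/write
    d[("read", r, q1)] -= 1;  d[("write", r, q1_)] += 1   # layer i-1 read/write
    d[qi] -= 1;               d[qi_] += 1                 # step the level-i state
    d[(j, "bot")] -= 1;       d[(j, qi1_)] += 1           # activate child slot j
    d[("c", r)] -= 1;         d[("c", r + 1)] += 1        # advance the counter gadget
    return d
```

Since every line consumes one token and produces one, the total token count over these places is preserved, matching the intended invariant of the gadget.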
\textbf{Adding a child at layer $i+2$:}
\[
\frac
{(\ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}) \xrightarrow{x_Q} (\ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}}, \ensuremath{q^{\prime(i+2)}}) \in \delta_Q \qquad j \in [1,b]}
{\left\{\begin{aligned}
\ensuremath{q^{(i)}} \mapsto -1 & \and \ensuremath{q^{\prime(i)}} \mapsto +1 \\
(j, \ensuremath{q^{(i+1)}}) \mapsto -1 & \and (j, \ensuremath{q^{\prime(i+1)}}) \mapsto +1 \\
(j, \ensuremath{q^{\prime(i+2)}}) \mapsto +1
\end{aligned} \right\} \in T}
\]
\textbf{Removing a child at layer $i+1$:}
\[
\frac
{(\ensuremath{q^{(i-2)}}, \ensuremath{q^{(i-1)}}, \ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}) \xrightarrow{x_A} (\ensuremath{q^{\prime(i-2)}}, \ensuremath{q^{\prime(i-1)}}, \ensuremath{q^{\prime(i)}}) \in \delta_A \qquad j \in [1,b] \qquad r \in [0,2b-1]}
{\left\{\begin{aligned}
(\mathsf{read}, r, \ensuremath{q^{(i-2)}}) \mapsto -1 & \and (\mathsf{write}, r, \ensuremath{q^{\prime(i-2)}}) \mapsto +1 \\
(\mathsf{read}, r, \ensuremath{q^{(i-1)}}) \mapsto -1 & \and (\mathsf{write}, r, \ensuremath{q^{\prime(i-1)}}) \mapsto +1 \\
\ensuremath{q^{(i)}} \mapsto -1 & \and \ensuremath{q^{\prime(i)}} \mapsto +1 \\
(j, \ensuremath{q^{(i+1)}}) \mapsto -1 & \and (j,\top) \mapsto +1 \\
(c, r) \mapsto -1 & \and (c, r+1) \mapsto +1
\end{aligned} \right\} \in T}
\]
\textbf{Removing a child at layer $i+2$:}
\[
\frac
{(\ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}, \ensuremath{q^{(i+2)}}) \xrightarrow{x_A} (\ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}}) \in \delta_A \qquad j \in [1,b]}
{\left\{\begin{aligned}
\ensuremath{q^{(i)}} \mapsto -1 & \and \ensuremath{q^{\prime(i)}} \mapsto +1 \\
(j, \ensuremath{q^{(i+1)}}) \mapsto -1 & \and (j, \ensuremath{q^{\prime(i+1)}}) \mapsto +1 \\
(j, \ensuremath{q^{(i+2)}}) \mapsto -1
\end{aligned} \right\} \in T}
\]
\textbf{Simulation of children at layer $i+2$:}
\ad{Allow the child to freely move to the first interrupt point:}
\[
\frac
{(\alpha^{(i+2)}, \mathsf{INT}_{x,0},...,\mathsf{INT}_{x,n}, \omega^{(i+2)}) \in S^{(i+2)}
\qquad j \in [1,b]}
{\left\{\begin{aligned}
(j, \alpha^{(i+2)}) \mapsto -1 \and (j, \mathsf{INT}_{x,0}) \mapsto +1
\end{aligned} \right\} \in T}
\]
\ad{Perform an interrupt then freely move to the next one:}
\[
\frac
{(\alpha^{(i+2)}, \mathsf{INT}_{x,0},...,\mathsf{INT}_{x,n}, \omega^{(i+2)}) \in S^{(i+2)}
\qquad r \in [0, n-1]
\qquad ((\ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}), (\ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}})) = \mathsf{INT}_{x,r}
\qquad j \in [1,b]}
{\left\{\begin{aligned}
\ensuremath{q^{(i)}} \mapsto -1 & \and \ensuremath{q^{\prime(i)}} \mapsto +1 \\
(j,\ensuremath{q^{(i+1)}}) \mapsto -1 & \and (j,\ensuremath{q^{\prime(i+1)}}) \mapsto +1 \\
(j, \mathsf{INT}_{x,r}) \mapsto -1 & \and (j, \mathsf{INT}_{x,r+1}) \mapsto +1
\end{aligned} \right\} \in T}
\]
\ad{Perform the final interrupt:}
\[
\frac
{(\alpha^{(i+2)}, \mathsf{INT}_{x,0},...,\mathsf{INT}_{x,n}, \omega^{(i+2)}) \in S^{(i+2)}
\qquad ((\ensuremath{q^{(i)}}, \ensuremath{q^{(i+1)}}), (\ensuremath{q^{\prime(i)}}, \ensuremath{q^{\prime(i+1)}})) = \mathsf{INT}_{x,n}
\qquad j \in [1,b]}
{\left\{\begin{aligned}
\ensuremath{q^{(i)}} \mapsto -1 & \and \ensuremath{q^{\prime(i)}} \mapsto +1 \\
(j,\ensuremath{q^{(i+1)}}) \mapsto -1 & \and (j,\ensuremath{q^{\prime(i+1)}}) \mapsto +1 \\
(j, \mathsf{INT}_{x,n}) \mapsto -1 & \and (j, \omega^{(i+2)}) \mapsto +1
\end{aligned} \right\} \in T}
\]
\paragraph* {Base case.}
The base case is $i=k$ for even $k$, or $i=k-1$ for odd $k$. This is the same as the inductive case, except that there are no nodes at layer $k+2$, so the VASS construction is slightly easier; no special tricks are needed.
\paragraph* {Level 0.}
Level 0 can be handled the same as any other level, but there will be no read-writes above, so elements of the summary set will be in $Q^{(0,0)}$. In order to use the same inductive mechanism, we can pretend that $Q^{(-1)} = Q^{(-2)} = \{\bot\}$.
}
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-3.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\bfseries}}
\renewcommand\subsubsection{\@startsection{subsubsection}{3}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\it}}
\renewcommand\paragraph{\@startsection{paragraph}{4}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\normalsize\bf}}
\numberwithin{equation}{section}
\def\revise#1 {\raisebox{-0em}{\rule{3pt}{1em}}%
\marginpar{\raisebox{.5em}{\vrule width3pt\
\vrule width0pt height 0pt depth0.5em
\hbox to 0cm{\hspace{0cm}{%
\parbox[t]{4em}{\raggedright\footnotesize{#1}}}\hss}}}}
\newcommand\fnxt[1] {\raisebox{.12em}{\rule{.35em}{.35em}}\mbox{\hspace{0.6em}}#1}
\newcommand\nxt[1] {\\\fnxt#1}
\newcommand{\mt}[1]{\textrm{\tiny #1}}
\def\de#1#2{{\rm d}^{#1}\!#2\,}
\def\De#1{{{\cal D}}#1\,}
\newcommand\topa[2]{\genfrac{}{}{0pt}{2}{\scriptstyle #1}{\scriptstyle #2}}
\def\undertilde#1{{\vphantom#1\smash{\underset{\widetilde{\hphantom{\displaystyle#1}}}{#1}}}}
\def\gsq#1#2{%
{\scriptstyle #1}\square\limits_{\scriptstyle #2}{\,}}
\def\sqr#1#2{{\vcenter{\vbox{\hrule height.#2pt
\hbox{\vrule width.#2pt height#1pt \kern#1pt
\vrule width.#2pt}\hrule height.#2pt}}}}
\def\square{%
\mathop{\mathchoice{\sqr{12}{15}}{\sqr{9}{12}}{\sqr{6.3}{9}}{\sqr{4.5}{9}}}}
\newcommand{\fft}[2]{{\frac{#1}{#2}}}
\newcommand{\ft}[2]{{\textstyle{\frac{#1}{#2}}}}
\def\vev#1{\langle #1 \rangle}
\def\comment#1{{\bf [[#1]]}}
\catcode`\@=12
\begin{document}
\title{\bf Universal relaxation in quark-gluon plasma at strong coupling}
\date{May 19, 2015}
\author{
Alex Buchel$^{1,2,3}$ and Andrew Day$^{1}$ \\[0.4cm]
\it $^1$\,Department of Applied Mathematics, $^2$\,Department of Physics and Astronomy, \\
\it University of Western Ontario\\
\it London, Ontario N6A 5B7, Canada\\
\it $^3$\,Perimeter Institute for Theoretical Physics\\
\it Waterloo, Ontario N2J 2W9, Canada
}
\Abstract{We use the top-down gauge theory/string theory correspondence to compute
relaxation rates in strongly coupled nonconformal gauge theory
plasma. We compare models with different mechanisms of breaking the
scale invariance: ``hard breaking'' (by relevant operators) and ``soft
breaking'' (by marginal operators). We find that the thermalization
time of the transverse traceless fluctuations of the stress-energy
tensor is rather insensitive to the mechanisms of breaking the scale
invariance over a large range of the scale-breaking parameter
$\delta=\frac 13-c_s^2$. We comment on the relevance of the results to
QCD quark-gluon plasma.
}
\makepapertitle
\body
\let\version\@version\@gobble cascading quasinormal modes
\tableofcontents
\section{Introduction}\label{intro}
In a recent paper \cite{Buchel:2015saa} it was pointed out that the equilibration rates
in strongly coupled nonconformal quark-gluon plasma (QGP) are surprisingly
insensitive to the presence of the conformal symmetry breaking scale.
Specifically, considering supersymmetric mass deformations within
${\cal N}=2^*$ gauge theory and using holography \cite{m1,Aharony:1999ti}, the authors computed the
spectra of quasinormal modes for a variety of scalar operators, as well as the energy-momentum tensor. In each case,
the lowest quasinormal frequency, which provides an approximate upper bound on the thermalization time,
was found to be proportional to temperature, up to a pre-factor with only a mild temperature dependence.
Similar results were reported for a conformal plasma with a finite charge density and in the
presence of an external magnetic field \cite{Fuini:2015hba}, as well
as in phenomenological nonconformal holographic models \cite{Janik:2015waa,Ishii:2015gia}.
In this paper we continue the investigation of the equilibration rates in
strongly coupled quark-gluon plasma with a holographic string
theory dual (the top-down models). One drawback of the ${\cal N}=2^*$ model
studied in \cite{Buchel:2015saa} is the fact that the conformal invariance
there is broken quite mildly --- over the range of the supersymmetric
mass deformation parameter $\frac{m}{T}$, the scale invariance is
violated by
\begin{equation}
\max_{\ft mT} \frac{\epsilon-3p}{\epsilon} \approx 20\%\,.
\eqlabel{n2trace}
\end{equation}
In \eqref{n2trace} $\epsilon$ and $p$ are the energy density and the pressure in ${\cal N}=2^*$ plasma.
In contrast, the latest results from the HotQCD Collaboration \cite{Bazavov:2014pvz}
indicate that the analogous quantity in QCD is approximately $50\%$. Furthermore,
it is important to verify how robust the results of \cite{Buchel:2015saa} are
in theories with a different mechanism for breaking the scale invariance.
The ideal model to address these two questions is the Klebanov-Strassler (KS)
cascading gauge theory \cite{ks}. First, the conformal invariance in
KS gauge theory is broken much more strongly \cite{kstalk}; second, while the renormalization group (RG)
flow in ${\cal N}=2^*$ gauge theory is induced by relevant operators, the RG flow in KS gauge theory is induced
by marginal, but not exactly marginal, operators. We compute the lowest quasinormal mode
associated with the transverse traceless fluctuations of the stress-energy tensor in KS gauge theory.
We omit technical details and focus on results only\footnote{The thermodynamics of
KS gauge theory has been studied in \cite{b,kbh1,kbh2,aby,abk,hyd3,ksbh}.}.
In the next section we recall definitions of ${\cal N}=2^*$ gauge theory and
KS gauge theory. We compare the thermodynamics of the two models with
that of the lattice QCD \cite{Bazavov:2014pvz}. In section \ref{quasi}
we present results for the lowest quasinormal mode of the transverse
traceless fluctuations of the stress-energy tensor in KS plasma, and compare them
with the corresponding computations in \cite{Buchel:2015saa}.
Finally, we conclude in section \ref{conclude}.
\section{Thermodynamics of strongly coupled nonconformal plasma from holography}\label{thermo}
The best studied example of the gauge theory/string theory correspondence is that between
the maximally supersymmetric ${\cal N}=4$ $SU(N)$ supersymmetric Yang-Mills theory (SYM)
and string theory in $AdS_5\times S^5$ \cite{m1}. SYM is conformally invariant.
At strong coupling, the energy density and the pressure of equilibrium SYM plasma
at temperature $T$ are given by
\begin{equation}
\epsilon=\frac{3}{8}\pi^2 N^2 T^4\,,\qquad p=\frac{1}{8}\pi^2 N^2 T^4\,.
\eqlabel{n4}
\end{equation}
In what follows we find it convenient to parameterize the thermodynamic potentials of nonconformal plasma with the following conformal symmetry breaking
parameters:
\begin{equation}
\Theta\equiv \frac{\epsilon-3p}{\epsilon}\,,\qquad \delta\equiv \frac 13-c_s^2\,,
\eqlabel{cftbreaking}
\end{equation}
where $c_s$ is the speed of sound waves in the plasma. As we see below, the parameterization \eqref{cftbreaking} allows us to compare different
holographic models with lattice QCD. Note that from \eqref{n4},
\begin{equation}
\Theta\bigg|_{{\cal N}=4}=0\,,\qquad \delta\bigg|_{{\cal N}=4}=0\,.
\eqlabel{n4trace}
\end{equation}
${\cal N}=2^*$ gauge theory is obtained as a mass deformation of ${\cal N}=4$ SYM, where an ${\cal N}=2$ hypermultiplet receives
a mass $m$. This is a {\it relevant} deformation of the conformal SYM, as the renormalization group flow is induced
by bosonic and fermionic mass terms of the hypermultiplet. At large temperatures, {\it i.e.,}\ $\frac{m}{T}\ll 1$,
the thermodynamics of ${\cal N}=2^*$ gauge theory plasma is given by \cite{Buchel:2003ah,Buchel:2004hw}
\begin{equation}
\begin{split}
&\epsilon=\frac{3}{8}\pi^2 N^2 T^4\biggl( 1-\frac 23\frac{\Gamma(3/4)^4}{\pi^4}\frac{m^2}{T^2}+{\cal O}\left(\frac{m^4}{T^4}\ln\frac Tm\right)\biggr)\,,\\
&p=\frac{1}{8}\pi^2 N^2 T^4\biggl( 1-2\frac{\Gamma(3/4)^4}{\pi^4}\frac{m^2}{T^2}+{\cal O}\left(\frac{m^4}{T^4}\ln\frac Tm\right)\biggr)\,,
\end{split}
\eqlabel{n2thermo}
\end{equation}
resulting in
\begin{equation}
\Theta=6\delta +{\cal O}(\delta^2\ln\delta)\,.
\eqlabel{thetan2}
\end{equation}
Klebanov-Strassler cascading gauge theory is ${\cal N}=1$ four-dimensional supersymmetric $SU(K+P)\times SU(K)$
gauge theory with two chiral superfields $A_1, A_2$ in the $(K+P,\overline{K})$
representation, and two fields $B_1, B_2$ in the $(\overline{K+P},K)$ representation.
Perturbatively, this gauge theory has two gauge couplings $g_1, g_2$ associated with
two gauge group factors, and a quartic
superpotential
\begin{equation}
W\sim \mathop{\rm Tr} \left(A_i B_j A_kB_\ell\right)\epsilon^{ik}\epsilon^{j\ell}\,.
\eqlabel{w}
\end{equation}
When $P=0$, the above theory flows in the infrared to a
superconformal fixed point, commonly referred to as Klebanov-Witten (KW)
theory \cite{kw}. At the IR fixed point KW gauge theory is
strongly coupled --- the superconformal symmetry together with
$SU(2)\times SU(2)\times U(1)$ global symmetry of the theory implies
that the anomalous dimensions of the chiral superfields are $\gamma(A_i)=\gamma(B_i)=-\frac 14$, {\it i.e.,}\ non-perturbatively large.
Notice that the superpotential \eqref{w} is {\it marginal} at the fixed point.
When $P\ne 0$, conformal invariance of the above $SU(K+P)\times SU(K)$
gauge theory is broken. It is useful to consider an effective description
of this theory at energy scale $\mu$ with perturbative couplings
$g_i(\mu)\ll 1$. It is straightforward to evaluate NSVZ beta-functions for
the gauge couplings. One finds that while the sum of the gauge couplings
does not run
\begin{equation}
\frac{d}{d\ln\mu}\left(\frac{\pi}{g_s}\equiv \frac{4\pi}{g_1^2(\mu)}+\frac{4\pi}{g_2^2(\mu)}\right)=0\,,
\eqlabel{sum}
\end{equation}
the difference between the two couplings is
\begin{equation}
\frac{4\pi}{g_2^2(\mu)}-\frac{4\pi}{g_1^2(\mu)}\sim P \ \left[3+2(1-\gamma_{ij})\right]\ \ln\frac{\mu}{\Lambda}\,,
\eqlabel{diff}
\end{equation}
where $\Lambda$ is the strong coupling scale of the theory and $\gamma_{ij}$ are anomalous dimensions of operators $\mathop{\rm Tr} A_i B_j$.
Given \eqref{diff} and \eqref{sum} it is clear that the effective weakly coupled description of the $SU(K+P)\times SU(K)$ gauge theory
can be valid only in a finite-width energy band centered about the scale $\mu$. Indeed, extending the effective description both to the UV
and to the IR one necessarily encounters strong coupling in one or the other gauge group factor. As explained
in \cite{ks}, to extend the theory past the strongly coupled region(s) one must perform Seiberg duality \cite{sd}.
It turns out that, in this gauge theory, the Seiberg duality transformation is a self-similarity transformation of the effective description,
so that $K\to K-P$ as one flows to the IR, or $K\to K+P$ as the energy increases. Thus, extension of the effective
$SU(K+P)\times SU(K)$ description to all energy scales involves an infinite sequence --- a {\it cascade} --- of Seiberg dualities,
where the rank of the gauge group is not constant along RG flow, but changes with energy according to \cite{b}
\begin{equation}
K=K(\mu)\sim 2 P^2 \ln \frac \mu\Lambda\,,
\eqlabel{effk}
\end{equation}
at least for $\mu\gg \Lambda$. Since \cite{ks}
\begin{equation}
\gamma_{ij}=-\frac 12+{\cal O}\left(\frac{P^2}{K^2}\right)\,,
\eqlabel{guv}
\end{equation}
the superpotential \eqref{w} is marginal, but {\it not exactly marginal} at the
Klebanov-Witten ultraviolet fixed point of the theory.
Although there are infinitely many duality cascade steps in the UV, there is only a finite number of duality transformations as one
flows to the IR (from a given scale $\mu$). The space of vacua of a generic cascading gauge
theory was studied in detail in
\cite{dks}. In the simplest case, when $K(\mu)$ is an integer multiple of $P$, the cascading gauge
theory confines in the
infrared with a spontaneous breaking of the chiral symmetry $U(1)\to {\mathbb Z}_2$ \cite{ks}.
Here, the full global symmetry of the ground state is $SU(2)\times SU(2)\times {\mathbb Z}_2$.
At large temperatures, {\it i.e.,}\ $\frac{\Lambda}{T}\ll 1$, the thermodynamics of
KS gauge theory plasma is given by
\begin{equation}
\begin{split}
&\epsilon=\frac{243}{256}\frac{\Lambda^4}{\pi^4}e^{\frac{2 K(T)}{P^2}}\left(1+\frac{P^2}{3 K(T)}+{\cal O}\left(\frac{P^4}{K(T)^2}\right)\right)
\,,\\
&p=\frac{81}{256}\frac{\Lambda^4}{\pi^4}e^{\frac{2 K(T)}{P^2}}\left(1-\frac{P^2}{ K(T)}+{\cal O}\left(\frac{P^4}{K(T)^2}\right)\right)\,,
\end{split}
\eqlabel{ksthermo}
\end{equation}
with
\begin{equation}
\frac{dK(T)}{d\ln \frac{T}{\Lambda}}=2P^2 \left(1+{\cal O}\left(\frac{P^2}{K(T)}\right)\right)\,,
\eqlabel{dkdt}
\end{equation}
resulting in
\begin{equation}
\Theta=3\delta +{\cal O}(\delta^2)\,.
\eqlabel{thetaks}
\end{equation}
\begin{figure}[t]
\begin{center}
\psfrag{x}{{$\delta$}}
\psfrag{y}{{$\Theta$}}
\includegraphics[width=5in]{thermo.eps}
\end{center}
\caption{ Parameterization of $\Theta=\frac{\epsilon-3p}{\epsilon}$ with $\delta=\frac 13-c_s^2$ in strongly coupled
gauge theory plasma for QCD (the red dots), ${\cal N}=2^*$ (the solid green line),
and cascading gauge theory (the solid blue line). The dashed red line represents the conformal violation parameter
$\delta$ in QCD at $T=0.3$GeV. The black dot is the $\frac{m}{T}\to \infty$ limit of ${\cal N}=2^*$
thermodynamics. Vertical blue lines represent the phase transitions in cascading gauge theory plasma:
the confinement/deconfinement (dashed) and the chiral symmetry breaking (dotted).} \label{figure1}
\end{figure}
Using results of Table I of \cite{Bazavov:2014pvz} we can reconstruct $\Theta$-vs-$\delta$ for QCD.
These results are presented by red dots in figure \ref{figure1}. The dashed vertical red line
represents the QCD nonconformality parameter $\delta$ at a temperature of\footnote{We choose this as a characteristic temperature for
initializing hydrodynamic codes to model RHIC collisions.} 0.3GeV. QCD data points at temperatures higher than
0.3GeV correspond to weaker breaking of conformal invariance --- they are to the left of the dashed red line.
The solid green line parameterizes ${\cal N}=2^*$ thermodynamics \cite{Buchel:2007vy}.
In the deep infrared, {\it i.e.,}\ $\frac{m}{T}\to \infty$,
${\cal N}=2^*$ thermodynamics reduces to that of the five-dimensional CFT \cite{Buchel:2007mf,HoyosBadajoz:2010td}. The latter
limit is represented by a black dot,
\begin{equation}
\{\delta,\Theta\}\bigg|_{{\rm black\ dot}}=\left\{\frac{1}{12},\frac 14\right\}\,.
\eqlabel{bd}
\end{equation}
The solid blue line parameterizes the thermodynamics of cascading gauge theory
plasma \cite{abk,ksbh}. The vertical blue dashed and dotted lines represent the nonconformality parameter $\delta$
of KS gauge theory at the first-order confinement/deconfinement transition,
\begin{equation}
T_{\rm deconfinement}=0.6141111(3) \Lambda\,,\qquad \delta_{\rm deconfinement}=0.2238(9)\,,
\eqlabel{ksconf}
\end{equation}
and the (perturbative) chiral symmetry breaking phase transition,
\begin{equation}
T_{\chi sB}=0.8749(0) T_{\rm deconfinement}\,,\qquad \delta_{\chi sB}=0.30567(2)\,,
\eqlabel{kscsb}
\end{equation}
correspondingly.
Notice that while the conformal invariance of cascading gauge theory plasma can be broken much more strongly
(especially in the vicinity of the phase transitions) compared to that
of ${\cal N}=2^*$ plasma, this regime is of little relevance to QCD QGP --- in fact, for
QCD temperatures $T\gtrsim 0.3$GeV both ${\cal N}=2^*$ and KS plasma have very similar equations of state,
which moreover compare quite reasonably with the lattice QCD data.
\section{Relaxation in strongly coupled nonconformal plasma from holography}\label{quasi}
\begin{figure}[t]
\begin{center}
\psfrag{x}{{$\delta$}}
\psfrag{y}{{$-{\rm Im\hskip0.1em}\frac{\omega}{2\pi T}$}}
\includegraphics[width=5in]{compare.eps}
\end{center}
\caption{Minus imaginary part of the lowest quasinormal mode at zero spatial momentum of the transverse traceless fluctuations
of the stress-energy tensor in ${\cal N}=2^*$ (the solid green line) and KS (the solid blue line)
gauge theory plasma as a function of $\delta=\frac 13-c_s^2$. The black dot
denotes the lowest quasinormal mode of dimension $\Delta=5$ operator of the effective five-dimensional
CFT in the deep IR of ${\cal N}=2^*$ plasma, see \eqref{bdw}. The dashed red line represents the conformal violation parameter
$\delta$ in QCD at $T=0.3$GeV. Vertical blue lines represent the phase transitions in cascading gauge theory plasma:
the confinement/deconfinement (dashed) and the chiral symmetry breaking (dotted).} \label{figure2}
\end{figure}
\begin{figure}[t]
\begin{center}
\psfrag{x}{{$\frac{q}{2\pi T}$}}
\psfrag{y}{{$\frac{{\rm Im\hskip0.1em}\omega}{{\rm Im\hskip0.1em} \omega_{q=0}}$\ {\rm and}\ $\frac{{\rm Re\hskip0.1em}\omega}{{\rm Re\hskip0.1em} \omega_{q=0}}$ }}
\includegraphics[width=5in]{q.eps}
\end{center}
\caption{Momentum dependence of the lowest quasinormal mode of the transverse traceless fluctuations of the
stress-energy tensor in cascading gauge theory plasma at the ultraviolet fixed point (solid lines),
the deconfinement phase transition (dashed lines), and the chiral symmetry breaking phase transition
(dotted lines). The green/red lines represent the real/minus imaginary parts of the frequencies.
The data is normalized to zero momentum values of the frequencies, see \eqref{w0}. }\label{figure3}
\end{figure}
We now study the effects of conformal symmetry breaking on the thermalization time in
strongly coupled gauge theory plasma by comparing two top-down holographic models: ${\cal N}=2^*$
and KS gauge theory plasma. We focus on relaxation of the transverse traceless fluctuations
of the stress-energy tensor. In the holographic dual they are encoded as quasinormal modes
of helicity-2 graviton polarizations \cite{Kovtun:2005ev}. These fluctuations are always
equivalent to fluctuations of a minimally coupled massless scalar \cite{Buchel:2004qq}.
In figure \ref{figure2} we plot the minus imaginary part of the lowest quasinormal modes at zero spatial momentum of the transverse
traceless fluctuations of the stress-energy tensor in ${\cal N}=2^*$ (the solid green line)
\cite{Buchel:2015saa} and cascading (the solid blue line) gauge theory plasma
as a function of the conformal symmetry breaking parameter $\delta=\frac 13-c_s^2$.
In ${\cal N}=2^*$ gauge theory plasma $\delta\in [0,\frac{1}{12}]$ with the upper limit denoted by the black dot,
representing the imaginary part of the lowest quasinormal mode of dimension $\Delta=5$ operator in the effective
five-dimensional CFT in the IR:
\begin{equation}
\left\{\delta,-{\rm Im\hskip0.1em}\frac{\omega}{2\pi T}\right\}\bigg|_{{\rm black\ dot}}=\left\{\frac{1}{12},1.07735(7)\right\}\,.
\eqlabel{bdw}
\end{equation}
Notice that over the entire range of $\delta$ accessible in ${\cal N}=2^*$, the relaxation rates
of ${\cal N}=2^*$ gauge theory and KS gauge theory are practically identical. This is the basis of the
universality observation for the relaxation rates in strongly coupled nonconformal gauge theory plasma
with a dual holographic description.
In \cite{Buchel:2015saa} it was found that the momentum dependence of the relaxation rates
is rather weak in strongly coupled gauge theory plasma with a holographic dual.
We confirm that observation here by comparing the momentum dependence of the
lowest quasinormal mode of the transverse traceless fluctuations of the stress-energy
tensor in KS plasma for three values of $\delta$: $\delta=0$ (the UV conformal fixed point) (solid lines),
$\delta=\delta_{{\rm deconfinement}}$ (dashed lines), and $\delta=\delta_{\chi sB}$ (dotted lines), see \eqref{ksconf} and \eqref{kscsb}.
In figure \ref{figure3} we plot real (green) and minus imaginary (red) parts of the quasinormal frequencies,
normalized to their zero-momentum values:
\begin{equation}
\begin{split}
&\omega_{q=0}^{\delta=0}=1.5597(3)-i\ 1.3733(4)\,,\\
&\omega_{q=0}^{\delta=\delta_{{\rm deconfinement}}}=1.5825(8)-i\ 0.70783(5)\,,\\
&\omega_{q=0}^{\delta=\delta_{\chi sB}}=1.46632-i\ 0.47044(1)\,.
\end{split}
\eqlabel{w0}
\end{equation}
\section{Conclusion}\label{conclude}
Relaxation rates in strongly coupled gauge theory plasma are encoded in the
lowest quasinormal modes of matter-gravity fluctuations in the corresponding holographic dual.
We studied the dependence of the relaxation rates on the mechanism of breaking the conformal
invariance in top-down holographic models. Specifically, we compared ${\cal N}=2^*$ gauge theory
rates \cite{Buchel:2015saa} with those of cascading gauge theory. In the former,
the conformal invariance is broken by relevant operators, while in the latter it is broken
by marginal (but not exactly marginal) operators. Remarkably, at least for the relaxation of
transverse traceless fluctuations of the stress-energy tensor, the rates are very close.
Additionally, we found very weak momentum dependence of the quasinormal mode frequencies.
All these provide further support for the universality of the relaxation rates in
strongly coupled gauge theories with holographic duals observed in
\cite{Buchel:2015saa,Fuini:2015hba,Janik:2015waa,Ishii:2015gia}.
It is important to emphasize that not all relaxation rates in cascading gauge theory plasma
are roughly proportional to the temperature. For example, in the vicinity of the chiral symmetry
breaking phase transition, the symmetry breaking fluctuations destabilize the system \cite{ksbh}
with the corresponding relaxation rate vanishing precisely at the transition point.
We believe that this subtlety is of little consequence to QCD applications though, as conformal invariance
there is broken much more strongly than in QGP produced at RHIC and LHC (see figure \ref{figure1}).
~\\
\section*{Acknowledgments}
We would like to thank Michal Heller and Pavel Kovtun for valuable discussions.
AB thanks the Galileo Galilei Institute for Theoretical Physics for the hospitality
and the INFN for partial support during the completion of this work.
Research at Perimeter
Institute is supported by the Government of Canada through Industry
Canada and by the Province of Ontario through the Ministry of
Research \& Innovation. AB gratefully acknowledges further support by an
NSERC Discovery grant.
\section{Introduction}
Markov Random Fields (MRFs) provide a useful framework to model high-dimensional probability distributions via an associated dependency graph $\mathbf{G}$, which captures the conditional independence relationships between random variables. Here, the nodes correspond to the random variables; edges represent the conditional independence relationships between these nodes. Any random variable conditioned on the random variables with which it shares an edge is independent of all the remaining random variables.
This `Markov' property has encouraged the adoption of MRFs in a wide variety of fields such as computer vision, finance, biology, and social networks, where MRFs support various inference tasks via popular algorithms such as loopy belief propagation and message passing. For a deeper understanding of these underlying ideas and applications, we refer the reader to \cite{lauritzen1996graphical, koller2009probabilistic, wainwright2008graphical, pearl2014probabilistic}.
The special class of graphical models where the underlying graph is tree-structured is suited for applications where sample-efficient learning and time-efficient inference are required with strong theoretical guarantees. As a result, the problem of learning tree-structured graphical models from data has been well-studied since the 1960s. In the seminal work \cite{chow1968approximating}, the authors propose the Chow-Liu algorithm, which shows that the maximum-weight spanning tree of the complete graph weighted by the empirical mutual information between all pairs of random variables is the maximum-likelihood tree estimate. In practice, it is rare to observe the random variables without noise, as sources of noise are ubiquitous, e.g., errors in sensors or incorrect human labeling. In \cite{nikolakakis2019learning}, the authors present numerous motivating examples from social science, epidemiology, biology, differential privacy, and finance, where noise is present in the observations. Unfortunately, in the face of corruption by unequal noise in the nodes, the Chow-Liu algorithm breaks down. This occurs because the noise in the random variables alters the order of the pairwise mutual informations. The noise also destroys the tree structure by adding fictitious edges. Moreover, as the noise is unknown, the structure of a noisy graphical model could possibly originate from different tree structures. This brings the recoverability of the original tree structure into question.
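The Chow-Liu step described above can be sketched in a few lines of Python. This is a minimal illustration under our own naming (`empirical_mi`, `chow_liu`), not the paper's implementation: it weights the complete graph by empirical pairwise mutual information and extracts a maximum-weight spanning tree via Kruskal's algorithm with union-find.

```python
import numpy as np
from itertools import combinations

def empirical_mi(x, y, k):
    """Empirical mutual information (in nats) of two samples on {0, ..., k-1}."""
    n = len(x)
    joint = np.zeros((k, k))
    for a, b in zip(x, y):
        joint[a, b] += 1.0 / n
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log(joint[mask] / np.outer(px, py)[mask])))

def chow_liu(samples, k):
    """Maximum-weight spanning tree (Kruskal) over empirical pairwise MI."""
    m = samples.shape[1]
    weighted = sorted(
        ((empirical_mi(samples[:, i], samples[:, j], k), i, j)
         for i, j in combinations(range(m), 2)), reverse=True)
    parent = list(range(m))
    def find(u):                      # union-find with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for _, i, j in weighted:
        ri, rj = find(i), find(j)
        if ri != rj:                  # edge joins two components: keep it
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

On clean samples from a chain $X_0 - X_1 - X_2$ the recovered edge set is the chain itself; with unequal per-node noise, the ordering of the empirical mutual informations can flip and the output tree acquires fictitious edges, which is precisely the failure mode discussed above.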
In this paper, we focus on learning the underlying tree-structured graphical model on non-noisy discrete random variables using samples that are corrupted by a $k$-ary symmetric noise channel (where $k$ is the size of the common support of all the random variables). Our work reveals a rich recoverability landscape for MRFs under symmetric noise. We discover that when $k\geq 3$, for a fixed underlying tree structure, the recoverability is determined by the pairwise PMF of the non-noisy random variables. This is in contrast to the Gaussian graphical model and Ising model results (\cite{katiyar2019robust}, \cite{katiyar2020robust}, \cite{tandon2021sga}) where, for a fixed tree structure, edges within a \textit{leaf cluster} (a leaf node, its parent, and its siblings) are never recoverable irrespective of the probability distribution of the non-noisy random variables. We completely characterize the recoverability for $k\geq 2$ by providing the necessary and sufficient conditions for the identifiability of the edges within a \textit{leaf cluster}.
Our contributions can be summarized as follows:
\begin{itemize} [leftmargin=*,noitemsep]\vspace{-5pt}
\item[1.] \textbf{Identifiability Characterization}:
In \textit{Theorem \ref{th:k_ary_iden}}, we completely characterize the recoverability of tree-structured MRFs on support size $k$ when the observations pass through an unknown $k$-ary symmetric noise channel where each node can have a different error probability. We show the identifiability depends on the PMF of the non-noisy random variables, which is unobserved. This dependence can then be translated to the PMF of the noisy random variables, which is observed, yielding the characterization.
We show that for the special class of {\em Symmetric Graphical Models} (as defined in \textit{Section \ref{sec:symmetric}}), for any $k$, the nodes within a \textit{leaf cluster} are unidentifiable. In the other direction, we show for the class of Perturbed Symmetric Graphical Models (details in \textit{Section \ref{sec:pertured_symmetric}}) for $k\geq 4$, the exact tree is identifiable.
\item[2.] \textbf{Algorithm}: We develop an algorithm that recovers the class of candidate trees that can explain the noisy observations. In the identifiable setting, this corresponds to recovering the exact tree. The algorithm is iterative: we recover one edge of the candidate tree per iteration \textit{(Section \ref{sec:algo})}.
\item[3.] \textbf{Sample Complexity Analysis:} We provide novel sample complexity lower bounds and upper bounds (\textit{Section \ref{sec:sample_complexity}}). Our upper bounds are shown to have orderwise tight dependence on the underlying graph parameters: the size of the graph, the edge parameters (related to the underlying conditional PMF), and the noise parameters. The lower bound proof relies on a novel construction of a class of graphical models, including perturbed symmetric graphical models, where part of the \textit{leaf clusters} are identifiable.
\item[4.] \textbf{Experiments:}\footnote{The code containing the implementation of the algorithm is available at \url{https://github.com/ashishkatiyar13/NoisyTreeMRF}} We demonstrate the efficacy of our algorithm via extensive numerical experiments for a variety of trees with different structures, edge parameters, corruption, and support sizes.
\end{itemize}
\section{Related Work}
We divide the related work into three main categories:\\
\textbf{Learning Generic Graphical Models from Non-Noisy Samples:} There exists a rich literature on the problem of learning graphical models on discrete random variables which assume access to non-noisy samples \cite{bresler2014structure, bresler2008reconstruction, bresler2015efficiently, bresler2014hardness, lee2007efficient, klivans2017learning, wu2019sparse, ravikumar2010high}. However, these models do not provide guarantees in the face of noise in the samples.\\
\textbf{Learning Tree-Structured Graphical Models:} The special class of tree-structured graphical models has also been extensively studied, beginning with the classical Chow-Liu algorithm proposed in \cite{chow1968approximating}. The Chow-Liu algorithm's error exponents for Gaussian graphical models and for graphical models on discrete random variables were analyzed in \cite{tan2010learning} and \cite{tan2011large} respectively. Results in \cite{tan2011large} were further refined in \cite{tandon2020exact} under additional assumptions of homogeneity and zero external field in tree-structured Ising models. In \cite{bresler2016learning} the authors approximate the distribution of generic Ising models using tree-structured Ising models. More recently, in \cite{daskalakis2020tree}, the authors provide an algorithm to learn tree-structured Ising models with total variation distance guarantees. In \cite{bhattacharyya2020near}, the authors provide finite sample guarantees for the Chow-Liu algorithm. As these algorithms assume access to non-noisy samples, no performance guarantees can be established when the samples have noise.\\
\textbf{Robust Estimation of Graphical Models:} Robust estimation of graphical models has been studied in multiple prior works but they are unable to resolve our setting. The algorithms in \cite{goel2019learning, lindgren2019robust, hamilton2017information} learn graphical models on discrete random variables without the tree structure assumption but assume access to error probabilities. This is complementary to our setting as we have the tree structure constraint but do not require the knowledge of the error probabilities.
In \cite{tandon2020exact,nikolakakis2019learning,nikolakakis2020information}, the authors study the recovery of trees using noisy samples. Critically, they operate in the restricted regime where the Chow-Liu algorithm converges to the correct tree.
While these results are insightful in their own right, their assumptions are generally violated in our setting, making their results inapplicable.
For Gaussian graphical models and Ising models, the unidentifiability properties are established in \cite{katiyar2019robust} and \cite{katiyar2020robust}, respectively. In \cite{tandon2021sga} the authors extend the results in \cite{katiyar2019robust, katiyar2020robust}, providing better sample complexity results and a more efficient algorithm. The critical limitation of these results is that they do not extend to discrete random variables with support sizes larger than 2 and therefore fail to capture the nuanced identifiability properties demonstrated in our setting.
Finally, our problem can be posed as the latent tree graphical model estimation problem, where the noisy nodes are observed and the non-noisy nodes are latent. Results for learning latent tree graphical models in \cite{pearl1986structuring, chang1996full, choi2011learning}, and {\em independently and concurrently} in \cite{casanellas2021robust}, can be used to recover the underlying tree barring the nodes within leaf clusters. Importantly, these models do not assume any structure on the noise, and thereby, contrived noise models make it impossible to recover nodes within a leaf cluster. As a result, they fail to uncover the possibility of identifiability within a leaf cluster when we consider the natural $k$-ary symmetric channel noise model.
\section{Problem Setup}
Let $\mathbf{X} = [X_1, X_2\dots X_n]$ be the vector of random variables with a common support set, $\mathcal{S} = \{s_1, s_2, \dots s_k\}$ such that their graphical model structure is a tree $T^*$.
The vanilla learning problem is to recover the tree $T^*$ from i.i.d.\ samples of $\mathbf{X}$.
In this paper, we consider the problem of recovering $T^*$ but we do not get to observe samples of $X_i$. Instead, the samples of $X_i$ pass through a $k$-ary symmetric noise channel and we observe the output denoted by $X_i'$, that is,
\begin{equation} \label{eq:noise}
X_i' = \begin{cases}X_i & \text{ w.p. }1 - q_i,\\
U_i & \text{ w.p. } q_i,
\end{cases}
\end{equation}
where $q_i$ is the probability of error for $X_i$ and $U_i$ is a discrete random variable independent of $\mathbf{X}$ and $U_j$ $\forall j\neq i$, distributed uniformly on $\mathcal{S}$. Note that $q_i$ can be unequal for all $X_i$.
The vector of the noisy random variables is denoted by $\mathbf{X'} = [X_1', X_2'\dots X_n']$. Due to the noise in $X_i$, the graphical model of the nodes in $\mathbf{X'}$ is no longer given by $T^*$. In general, \textit{the graphical model on the noisy random variables can be a complete graph}.
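The observation model in Equation \eqref{eq:noise} is straightforward to simulate. The sketch below (function name ours) replaces each symbol independently, with probability $q_i$, by a uniform draw from the support; note that the replacement may coincide with the original symbol, exactly as in Equation \eqref{eq:noise}.

```python
import numpy as np

def k_ary_symmetric_channel(samples, q, k, rng):
    """Pass column i of `samples` (values in {0, ..., k-1}) through a k-ary
    symmetric channel: with probability q[i] the symbol is replaced by an
    independent uniform draw U_i on the support."""
    noisy = samples.copy()
    n, d = samples.shape
    for i in range(d):
        flip = rng.random(n) < q[i]                      # which samples get corrupted
        noisy[flip, i] = rng.integers(0, k, size=flip.sum())
    return noisy
```

Setting $q_i = 0$ returns the samples unchanged, while $q_i \to 1$ drives the marginal of $X_i'$ to uniform on $\mathcal{S}$.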
\paragraph{Matrix PMF and Distance Notation:} We denote the joint PMF matrix for random variables ($X_a$, $X_b$), and ($X_a'$, $X_b'$) by the matrix $P_{a,b}$ and $P_{a',b'}$ respectively, such that:
\begin{equation*}
(P_{a,b})_{i,j} = P(X_a = s_i, X_b = s_j),
(P_{a',b'})_{i,j} = P(X_a' = s_i, X_b' = s_j).
\end{equation*}
The conditional PMF of $X_a$ conditioned on $X_b$ is denoted by the matrix $P_{a|b}$ while the marginal distribution of random variables $X_a$ and $X_a'$ are denoted using diagonal matrices $P_a$ and $P_{a'}$ respectively such that:
\begin{equation*}
(P_{a|b})_{i,j} = P(X_a = s_i|X_b = s_j), (P_a)_{i,i} = P(X_a = s_i), (P_{a'})_{i,i} = P(X_a' = s_i).
\end{equation*}
The information distance metric between a pair of nodes, proposed in \cite{lake1994reconstructing}, is defined as follows:
\begin{equation}\label{eq:dist}
d_{i,j} =-\log\tfrac{|\det(P_{i,j})|}{\sqrt{\det(P_i)\det(P_j)}}, \quad d_{i',j'} =-\log\tfrac{|\det(P_{i',j'})|}{\sqrt{\det(P_{i'})\det(P_{j'})}}.
\end{equation}
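Since $P_i$ and $P_j$ are diagonal, their determinants are products of marginal probabilities, so $d_{i,j}$ is a simple plug-in computation. A standard property of this log-det metric, which is what makes it useful for tree reconstruction, is additivity along a Markov chain: if $X_1\perp X_3|X_2$ then $d_{1,3} = d_{1,2} + d_{2,3}$. A minimal sketch (function name ours):

```python
import numpy as np

def info_distance(P_ij):
    """Information distance d_{i,j} from a joint PMF matrix.  The diagonal
    marginal matrices have determinants equal to products of marginals."""
    P_i, P_j = P_ij.sum(axis=1), P_ij.sum(axis=0)   # marginals of the two nodes
    return -np.log(np.abs(np.linalg.det(P_ij)) / np.sqrt(np.prod(P_i) * np.prod(P_j)))
```

For a chain with joints $P_{1,2}$ and $P_{2,3}$, the joint of the endpoints is $P_{1,3} = P_{1,2}P_2^{-1}P_{2,3}$, and one can verify numerically that $d_{1,3}=d_{1,2}+d_{2,3}$ holds exactly.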
We require the following assumptions that are natural and standard in this line of literature (c.f. \cite{chang1996full,choi2011learning}).
\begin{assumption}\label{ass:pmf}
The probability mass at every support for each non-noisy random variable is bounded away from $0$ : $(P_a)_{i,i}\geq p_{min}>0$.
\end{assumption}
\begin{assumption}\label{ass:distance}
The distance $d_{i,j}$ between adjacent non-noisy random variables is bounded: $0<d_{min}<d_{i,j}<d_{max}$.
\end{assumption}
\begin{assumption}\label{ass:max_error}
The probability of error is upper bounded away from 1: $q_i \leq q_{max} < 1$.
\end{assumption}
Assumption \ref{ass:pmf} ensures that the probability mass at any support is not arbitrarily small for any random variable. The bounds on the distance in Assumption \ref{ass:distance} ensure that no adjacent random variables are duplicates or independent. Assumption \ref{ass:max_error} ensures that the noisy observations are not independent of the underlying random variables. Our sample complexity lower bounds in Section \ref{sec:sample_complexity} show that the problem becomes infeasible if these assumptions are not satisfied.
Lastly, we also formally define a \textit{leaf cluster} as follows:
\begin{definition} \label{def:leafcluster}
The \textbf{leaf cluster} of any leaf node is the set containing that leaf node, its parent node and all its sibling leaf nodes.
\end{definition}
\section{Identifiability Results} \label{sec:identifiability}
In this section, we prove that the identifiability of the underlying tree is determined by the joint PMF of leaf parent pairs. The proof is divided into three parts: (i) prove that the only potential unidentifiability is within the leaf clusters of the tree, (ii) analyze the existence of a valid probability of error for a tree on three nodes, (iii) extend the analysis to a generic tree and arrive at the necessary and sufficient condition for identifiability.
\subsection{Potential unidentifiability is limited to leaf clusters}
For any tree $T^*$, \cite{katiyar2019robust} defined the equivalence class $\mathcal{T}_{T^*}$ to be the set of all the trees obtained by different permutations of nodes within a leaf cluster, and showed that in the Gaussian graphical model setting, $\mathcal{T}_{T^*}$ can be recovered. We show here that with a few new proof ideas, essentially the same is true for graphical models on discrete random variables with general support size $k$:
\begin{lemma}\label{le:lim_unid_gen}
Suppose the random variables in $\mathbf{X}$ form a tree graphical model $T^*$. Given samples from noisy random variables $X_i'$, it is possible to recover the equivalence class $\mathcal{T}_{T^*}$.
\end{lemma}
\textit{Proof Idea.} The proof of this lemma is similar in spirit to \cite{katiyar2019robust} and so we defer the details to Appendix \ref{ap:lemma1}. The proof depends on categorizing groups of 4 nodes as a {\em non-star} when 2 of the nodes lie in one subtree and the remaining 2 nodes lie in a disjoint subtree. The key new element we need for this categorization in the discrete setting for general $k$ is the information distance metric $d_{i,j}$ as defined in \eqref{eq:dist}.\\
\textbf{Remarks:} (i) Lemma \ref{le:lim_unid_gen} is not limited to the $k$-ary symmetric noise channel and holds for any noise channel such that, conditioned on $X_i$, $X_i'$ is independent of $X_j$ for all $j \in [n], j \neq i$, and $X_i$ and $X_i'$ are not independent. This result was independently and concurrently derived in \cite{casanellas2021robust}. (ii) If there are no restrictions on the noise channel, recovering $\mathcal{T}_{T^*}$ is the best we can do. That is, for every tree in $\mathcal{T}_{T^*}$, it is possible to construct a noise model that produces the noisy observations. This analysis, along with the proof of Lemma \ref{le:lim_unid_gen}, is included in Appendix \ref{ap:lemma1}.
\subsection{Error Estimation for a Tree on 3 Nodes}
\paragraph{Additional Notation for $k$-ary Symmetric Channel:}
For each random variable $X_a$, we define a $k\times k$ error matrix $E_a$ as follows:
\begin{equation*}
E_a = (1-q_a)I + \tfrac{q_a}{k}O,
\end{equation*}
where $O$ is a matrix of all ones. Recall that $k$ is the common support size for all the random variables and $q_a$ is the probability of error of $X_a$.\\
We denote the error estimated for node $X_a$ which enforces $X_b\perp X_c|X_a$ by $\Tilde{q}_{a}^{b,c}$ and we also define the matrix $\ind{\Tilde{E}}{a}{b}{c}$ as:
\begin{equation*}
\ind{\Tilde{E}}{a}{b}{c} = (1-\ind{\Tilde{q}}{a}{b}{c})I + \tfrac{\ind{\Tilde{q}}{a}{b}{c}}{k}O.
\end{equation*}
Note that $P_{a',b'}$ and $P_{a,b}$ are related as follows:
\begin{equation}\label{eq:noisy_joint_pmf}
P_{a',b'} = E_aP_{a,b}E_b.
\end{equation}
It is also easy to see that:
\begin{equation}\label{eq:noisy_pmf}
P_{a'} = (1-q_a)P_a + \tfrac{q_a}{k} I.
\end{equation}
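Both relations are easy to verify numerically from the error matrix $E_a$. In the sketch below (function names ours), multiplying a clean joint PMF on both sides as in Equation \eqref{eq:noisy_joint_pmf} reproduces the marginal relation of Equation \eqref{eq:noisy_pmf}:

```python
import numpy as np

def error_matrix(k, q):
    """E_a = (1 - q_a) I + (q_a / k) O for the k-ary symmetric channel."""
    return (1 - q) * np.eye(k) + (q / k) * np.ones((k, k))

def noisy_joint(P_ab, qa, qb):
    """P_{a',b'} = E_a P_{a,b} E_b (E_b is symmetric, so no transpose is needed)."""
    k = P_ab.shape[0]
    return error_matrix(k, qa) @ P_ab @ error_matrix(k, qb)
```

Because each $E_a$ is doubly stochastic, the noisy joint still sums to one, and its row sums satisfy $P_{a'} = (1-q_a)P_a + \tfrac{q_a}{k}I$.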
\paragraph{Error Estimation:}
Suppose there exist 3 nodes such that $X_1\perp X_3|X_2$ and we observe $X_1'$, $X_2'$ and $X_3'$ through a $k$-ary symmetric channel as defined in Equation \eqref{eq:noise}. The conditional independence relationship gives us:
\begin{equation}\label{eq:cond_ind}
P_{1, 3} = P_{1,2}P_2^{-1}P_{2,3}.
\end{equation}
From Equation \eqref{eq:noisy_joint_pmf}, we have $P_{1',3'} = E_1P_{1,3}E_3$, $P_{1',2'} = E_1P_{1,2}E_2$, $P_{2',3'} = E_2P_{2,3}E_3$. From Equation \eqref{eq:noisy_pmf}, we have $P_{2'} = (1-q_2)P_2 + \frac{q_2}{k}I$. By substituting these in Equation \eqref{eq:cond_ind} we get the following quadratic equation with matrix coefficients in noise parameter $q_2$ (details in Appendix \ref{ap:quadratic}):
\begin{equation}\label{eq:err_est_quad}
\begin{aligned}
&\frac{q_2^2}{k^2}(O - kI) - \frac{q_2}{k}(OP_{2'} + P_{2'}O - kP_{2'} - I) +
P_{2',3'} P_{1',3'}^{-1}P_{1',2'}-P_{2'} = 0,
\end{aligned}
\end{equation}
where the $0$ on the RHS is a $k\times k$ matrix of all $0$s.
The key insight here is that Equation \eqref{eq:err_est_quad} depends only on the noisy observations. Therefore, in the absence of knowledge of the conditional independence relation, it can be used as a test to check whether the noisy observations can potentially be explained by $X_1\perp X_3|X_2$.
Precisely, for a graph on 3 nodes $(X_1, X_2, X_3)$, $X_2$ is a potential middle node if we can satisfy Equation~\eqref{eq:err_est_quad} for some noise parameter $q_2 \in [0,q_{max}]$. In other words, $X_2$ is a potential middle node if the following holds, with $\|\cdot\|_F$ denoting the Frobenius norm of a matrix:
\begin{equation}\label{eq:err_est_x}
\begin{aligned}
&\min_{0\leq x\leq q_{max}} \|\frac{x^2}{k^2}(O - kI) - \frac{x}{k}(OP_{2'} + P_{2'}O - kP_{2'} - I) +
P_{2',3'} P_{1',3'}^{-1}P_{1',2'}-P_{2'}\|_F = 0.
\end{aligned}
\end{equation}
This is equivalent to $k^2$ quadratic equations, one per element of the matrix, having a common root which lies between $0$ and $q_{max}$. These equations need not be distinct.
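In practice, the feasibility test can be carried out by evaluating the Frobenius norm of the matrix quadratic over a grid of candidate noise levels in $[0, q_{max}]$ and checking for a near-root. The sketch below is our own illustration (the grid search, tolerance, and function names are choices made here, not the paper's implementation); a polynomial root finder on the $k^2$ scalar quadratics would work equally well:

```python
import numpy as np

def center_residual(x, k, P2n, P12n, P13n, P23n):
    """Frobenius norm of the matrix quadratic at candidate noise level x.
    P2n is the diagonal noisy-marginal matrix of the candidate center X_2;
    the other arguments are the pairwise noisy joint PMF matrices."""
    O, I = np.ones((k, k)), np.eye(k)
    M = ((x**2 / k**2) * (O - k * I)
         - (x / k) * (O @ P2n + P2n @ O - k * P2n - I)
         + P23n @ np.linalg.inv(P13n) @ P12n - P2n)
    return np.linalg.norm(M, 'fro')

def is_potential_center(k, P2n, P12n, P13n, P23n, q_max=0.9, tol=1e-6, grid=2001):
    """X_2 passes the test if the residual has a (near-)root in [0, q_max]."""
    xs = np.linspace(0.0, q_max, grid)
    return min(center_residual(x, k, P2n, P12n, P13n, P23n) for x in xs) < tol
```

When the joint PMFs are exact (infinite-sample) quantities and $X_2$ is the true middle node, the residual vanishes at the true $q_2$; with empirical PMFs, the tolerance must absorb the estimation error.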
\subsection{Extension to a generic tree}
Before presenting the identifiability result, we first establish some notation. Let $\mathcal{L}$ be the set containing all the leaf nodes of the tree-structured graphical model $T^*$. Now, consider the subset of leaf nodes with the following property: the leaf node $X_2$, its parent node $X_1$, and any arbitrary node $X_3$ from the graph have a solution to Equation \eqref{eq:err_est_x}. We label this subset $\mathcal{L}^{sub} \subseteq \mathcal{L}$. $\mathcal{T}_{T^*}^{sub}\subseteq \mathcal{T}_{T^*}$ represents the equivalence class where only leaves in $\mathcal{L}^{sub}$ can exchange positions with their parents.\\
The next theorem completely characterizes the identifiability of the underlying tree for a $k$-ary symmetric noise channel.
\begin{theorem}\label{th:k_ary_iden}
Suppose the random variables in $\mathbf{X}$ form a tree-structured graphical model $T^*$. Let $\mathbf{X}'$ be the observed noisy output after passing $\mathbf{X}$ through a $k$-ary symmetric channel. Then, we show that for any leaf node $X_2 \in \mathcal{L}^{sub}$ and its parent node $X_1$, Equation \eqref{eq:err_est_x} remains unchanged for any arbitrary third node $X_3$ from the graph. Using $\mathbf{X}'$, we can recover $\mathcal{T}_{T^*}^{sub}$. Moreover, for every tree $\Tilde{T}\in\mathcal{T}_{T^*}^{sub}$, there exist random variables $\Tilde{\mathbf{X}}$ and $k$-ary symmetric channels such that the graphical model of $\Tilde{\mathbf{X}}$ is $\Tilde{T}$ and the $k$-ary channel output is $\mathbf{X}'$.
\end{theorem}
\textit{Proof Idea:} As the unidentifiability is only between the nodes within a \textit{leaf cluster}, the key idea is to study a subset of 3 nodes comprising a leaf parent pair and an arbitrary third node. It is clear that Equation \eqref{eq:err_est_x} has a solution when the parent node is the middle node. Whenever Equation \eqref{eq:err_est_x} does not have a solution for a given node as the candidate center node, we can rule out the possibility of that node being a parent node. We further show that when a solution exists for a leaf node as the candidate center node, we can construct a tree where the parent node exchanges position with the leaf node. The details are presented in Appendix \ref{ap:k_ary_iden_proof}.
\subsection{Examples} \label{sec:examples}
In this section, we do not assume access to $q_{max}$ and analyze the solution to Equation \eqref{eq:err_est_x} with the constraint $0<x<1$. The extension to the setting $0<x<q_{max}$ is straightforward: we reject any solution $x>q_{max}$. We first prove that symmetric graphical models are unidentifiable. Next, we present perturbed symmetric graphical models that are unidentifiable for $k=3$ but are identifiable for $k\geq4$. Finally, we show that our analysis recovers the existing results for $k=2$.
\paragraph{Symmetric graphical models:}\label{sec:symmetric} Symmetric graphical models are a class of graphical models where the marginals of all the random variables are uniform on the support and the conditional PMF matrix $P_{a|b}$ for random variables $X_a$, $X_b$ that have an edge between them, takes the following form:
$$
P_{a|b} = P_{b|a} = \alpha_{a,b}I+(1-\alpha_{a,b})\tfrac{O}{k}.
$$
Recall that $O$ is the matrix of all ones. The bounds on the distance in Assumption \ref{ass:distance} enforce $\exp{(-d_{max}/(k-1))}<\alpha_{a,b}<\exp{(-d_{min}/(k-1))}$.
\begin{theorem}\label{th:symmetric}
Suppose the random variables in $\mathbf{X}$ form a tree graphical model $T^*$. Let $X_2$ be any leaf node and $X_1$ be its parent node. If $P_1 = P_2 = \frac{I}{k}$ and $P_{2|1} = \alpha_{2,1}I+(1-\alpha_{2,1})\frac{O}{k}$ such that $\exp{(-d_{max}/(k-1))}<\alpha_{2,1}<\exp{(-d_{min}/(k-1))}$, then Equation \eqref{eq:err_est_x} has a solution.
\end{theorem}
The proof is included in Appendix \ref{ap:symmetric}. Since Equation \eqref{eq:err_est_x} has a solution for every leaf node $X_2$ as the candidate center node, using Theorem \ref{th:k_ary_iden}, we conclude that symmetric graphical models are unidentifiable.
\paragraph{Perturbed symmetric graphical models:}\label{sec:pertured_symmetric}
We first define a $k\times k$ perturbation matrix $\Delta_{a,b}$. For a given offset $0<c_{a,b}<k$, the term in the $(i,j)$ position of $\Delta_{a,b}$ is:
$$
\Delta_{a,b}(i,j) = \left\{\begin{array}{rl}
\delta_{a,b}, & \text{for } j = ((i-1+c_{a,b})\mod k) + 1\\
0, & \text{o/w}.
\end{array}\right.
$$
In the perturbed symmetric model, the marginals continue to be uniform on the support but the conditional PMF matrix $P_{a|b}$ for adjacent $X_a$ and $X_b$ is modified to:
$$
P_{a|b} = (\alpha_{a,b}-\delta_{a,b})I+(1-\alpha_{a,b})\tfrac{O}{k}+\Delta_{a,b}.
$$
Here $\alpha_{a,b}$ and $\delta_{a,b}$ are chosen such that Assumption \ref{ass:distance} is satisfied. We find that perturbed symmetric graphical models are unidentifiable for $k = 3$ but become identifiable for $k\geq 4$.
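The construction is mechanical; in the 0-indexed sketch below (our own naming) the offset rule $j = ((i-1+c_{a,b}) \bmod k)+1$ becomes $j = (i + c_{a,b}) \bmod k$. Since the cyclic shift places exactly one $\delta_{a,b}$ in every row and every column, the resulting conditional PMF is doubly stochastic, consistent with uniform marginals:

```python
import numpy as np

def perturbed_symmetric_conditional(k, alpha, delta, c):
    """P_{a|b} = (alpha - delta) I + (1 - alpha) O / k + Delta, where Delta
    places mass delta at cyclic offset c (0-indexed: column (i + c) mod k)."""
    Delta = np.zeros((k, k))
    for i in range(k):
        Delta[i, (i + c) % k] = delta
    return (alpha - delta) * np.eye(k) + (1 - alpha) * np.ones((k, k)) / k + Delta
```

The parameter values in the check below are arbitrary choices satisfying the stated constraints, not values taken from the paper.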
\begin{theorem}\label{th:perturbed_symmetric}
Suppose the random variables in $\mathbf{X}$ form a tree graphical model $T^*$. Let $X_2$ be any leaf node and $X_1$ be its parent node. Suppose $P_1 = P_2 = \frac{I}{k}$ and $P_{2|1} = (\alpha_{a,b}-\delta_{a,b})I+(1-\alpha_{a,b})\frac{O}{k}+\Delta_{a,b}$ such that $|\delta_{a,b}|>0$, $\alpha_{a,b}\neq\delta_{a,b}$, and $\alpha_{a,b}$, $\delta_{a,b}$ satisfy the distance bounds in Assumption \ref{ass:distance}. Then, Equation \eqref{eq:err_est_x} has a solution for $k=3$, but does not have a solution for $k\geq 4$.
\end{theorem}
\textit{Proof Idea.} The proof for $k\geq 4$ relies on lower bounding the Frobenius norm of the quadratic away from 0. In conjunction with Theorem \ref{th:k_ary_iden}, this implies that the exact tree is identifiable when $k\geq4$. For $k=3$, we explicitly calculate the solution to Equation \eqref{eq:err_est_x}. Note that, for $k=3$ the class of symmetric and perturbed symmetric graphical models together comprise all the joint PMF matrices that are circulant. In fact, for $k=3$, when the marginals are uniformly distributed, the joint PMF matrix being circulant is a necessary and sufficient condition for unidentifiability. These details are presented in Appendix \ref{ap:perturbed_symmetric}.
\paragraph{Unidentifiability when $k = 2$:} We now discuss the unidentifiability for $k=2$.
\begin{lemma}\label{le:bin_sol}
Suppose the random variables in $\mathbf{X}$ have support size $k=2$ and they form a tree graphical model $T^*$. The random variables in $\mathbf{X}$ pass through a binary symmetric channel with positive probability of error and we observe $\mathbf{X}'$. For any 3 nodes $(X_1, X_2, X_3)$, Equation \eqref{eq:err_est_x} always has a valid solution.
\end{lemma}
The proof of Lemma \ref{le:bin_sol} is in Appendix \ref{ap:bin_sol}. Corollary \ref{cor:ising} recovers the unidentifiability results of \cite{katiyar2020robust}.
\begin{corollary}\label{cor:ising}
When the random variables in $\mathbf{X}$ have a support size of 2 and all the parents of leaf nodes have non-zero noise, we have $\mathcal{T}_{T^*}^{sub} = \mathcal{T}_{T^*}$.
\end{corollary}
\section{Algorithm}\label{sec:algo}
In this section, we present the algorithm to recover a tree from $\mathcal{T}_{T^*}^{sub}$ given samples corrupted by a $k$-ary symmetric noise channel as inputs. \\
\textbf{Key Idea:}
The algorithm to recover the tree is an iterative one. During an iteration, we have an active set of nodes which are guaranteed to form a subtree. At each iteration, we find a leaf parent pair in the subtree, record that edge, and remove the leaf node from the active set of nodes. The algorithm to recover the tree structure is presented in Algorithm \ref{alg:find_tree}.
\begin{figure}
\centering
\begin{minipage}{0.65\textwidth}
\begin{algorithm}[H]
\caption{Recover Tree Structure}\label{alg:find_tree}
\textit{Input}: Pairwise noisy distributions, $P'_{i,j}$ $\forall{i,j} \in [n]$\\
\textit{Output}: List of edges, $Edges$
\begin{small}
\begin{algorithmic}[1]
\Procedure{FindTree}{$P'_{i,j}$ $\forall{i,j} \in [n]$}
\State $ActiveSet \gets \{1, 2, \dots n\}$, $Edges \gets \{\}$, $Parents\gets \{\}$
\While{$|ActiveSet| > 2$}
\State $leaf,parent \gets $ \textsc{GetLeafParent}($P'_{i,j}$, $ActiveSet$, $\dots$\\
\hspace{18em} $Edges$, $Parents$)
\State{$ActiveSet \gets ActiveSet\setminus leaf$}
\State{$Edges\gets Edges\cup (leaf,parent)$}
\State{$Parents\gets Parents\cup parent$}
\EndWhile
\State $Edges\gets Edges\cup(ActiveSet[0],ActiveSet[1])$\\
\Return $Edges$
\EndProcedure
\end{algorithmic}
\end{small}
\end{algorithm}
\end{minipage}~\hfill
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{algo_step.jpg}
\caption{(a) If the node $z$ lies between $l$ and $r$, $l$ becomes $z$, hence getting closer to $r$. (b) If the node $r$ lies between $l$ and $z$, both $l$ and $r$ shift towards the right with $l$ becoming $r$ and $r$ becoming $z$.}
\label{fig:alg_step}
\end{minipage}
\vspace{-2pt}
\end{figure}
\\
\textbf{Finding a leaf parent pair:}
We next describe the algorithm to find a leaf parent pair. We maintain two nodes: a left node $l$ and a right node $r$. The idea is to move both nodes towards the right until $r$ is a leaf node and $l$ is its parent node. To do this, we consider a third node $z$ and perform the following operations:
\vspace{-0.7pc}
\begin{enumerate}[leftmargin = *]
\item If the center node in $(l,r,z)$ is $z$, we shift node $l$ to node $z$,
\item If the center node in $(l,r,z)$ is $r$, we shift node $l$ to node $r$ and node $r$ to node $z$.
\end{enumerate}
\vspace{-0.7pc}
This is illustrated in Figure \ref{fig:alg_step}. Finding the center node can be done by checking the feasibility of Equation \eqref{eq:err_est_x} for different candidate center nodes.
If Equation \eqref{eq:err_est_x} has a solution for more than one node, we use an alternative method which uses the 3 nodes in conjunction with different fourth nodes. These 4 nodes are categorized as star/non-star to arrive at the center node. While performing the test for the center node, we only consider nodes with pairwise distances smaller than $4d_{max} + 3\eta_{max}$. Here $\eta_{max}$ is an upper bound on the distance between a clean node and its noisy counterpart. For given $p_{min}$ and $q_{max}$ from Assumptions \ref{ass:pmf} and \ref{ass:max_error} respectively, $\eta_{max} = (1-k)\log (1-q_{max}) - 0.5k \log (kp_{min})$ (details in Appendix \ref{ap:algo}).
This makes it easy to adapt the algorithm for the finite sample setting.
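The sliding procedure can be sketched abstractly by hiding the center-node test behind an oracle `center(a, b, c)`; in the actual algorithm this oracle is realized by the feasibility test of Equation \eqref{eq:err_est_x}, falling back to the star/non-star test. The sketch below is illustrative, not the paper's exact pseudocode:

```python
def get_leaf_parent(nodes, center):
    """Slide a (left, right) pair rightwards; `center(a, b, c)` returns
    which of the three nodes separates the other two in the tree."""
    l, r = nodes[0], nodes[1]
    for z in nodes:
        if z in (l, r):
            continue
        c = center(l, r, z)
        if c == z:          # z lies between l and r: move l up to z
            l = z
        elif c == r:        # r lies between l and z: shift the pair right
            l, r = r, z
        # if c == l, node z hangs off the far side of l; keep the pair
    return r, l             # on termination r is a leaf and l its parent
```

On a path graph $0-1-2-3-4$, with the oracle returning the middle node by position, the procedure returns the leaf $4$ with parent $3$.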
\paragraph{Finite sample algorithm:}
The finite sample version of the algorithm uses the empirical estimate of the joint PMF of the random variables to test for the center node given a set of three nodes. We only perform the test for nodes whose empirical distances are small, to avoid a sample complexity exponential in the diameter of the graph. For the test of the center node, which checks for the existence of a solution to Equation \eqref{eq:err_est_x} using empirical PMF estimates, we need the following additional assumption:
\begin{assumption}\label{ass:fin_sample_err_est}
When Equation \eqref{eq:err_est_x} does not have a solution, we have the following inequality:
\begin{align*}
\min_{0\leq x<q_{max}}& \|\frac{x^2}{k^2}(O - kI) - \frac{x}{k}(OP_{2'} + P_{2'}O - kP_{2'} - I) +
P_{2',3'} P_{1',3'}^{-1}P_{1',2'}-P_{2'}\|_F > t_0
\end{align*}
\end{assumption}
This assumption ensures that when Equation \eqref{eq:err_est_x} does not have a solution for a leaf node $X_2$ as a center node, it can be detected in the presence of perturbations due to finite samples.
In Appendix \ref{ap:algo}, we provide the details of the algorithm including finding the center node, and necessary modifications for executing the algorithm using finite samples. In addition, we also include the pseudocode and the proof of correctness of the algorithm.
\vspace{-5pt}
\paragraph{Insights into the input parameters of the algorithm:}
The algorithm in its vanilla form requires $d_{min},~d_{max},~ q_{max}, p_{min}$ and $t_0$ in addition to the noisy samples as inputs. While the dependence on the knowledge of $q_{max}$ is necessary, it is possible to obtain estimates of bounds on $d_{min}$ and $d_{max}$ using the noisy samples. This comes at the cost of higher sample complexity. The dependence on $t_0$ can also be avoided at the cost of higher time complexity. This is detailed as follows:
\vspace{-5pt}
\begin{itemize}[leftmargin = *, noitemsep]
\item The upper bound on $d_{max}$ is denoted by $\Tilde{d}_{max}$. It is defined as $\Tilde{d}_{max} = \max_i\min_{j\neq i}d_{i'j'}$. This bound can potentially be loose by $2\eta_{max}$.
\item If the ground truth is such that $d_{min} - 2\eta_{max} > 0$ then a lower bound on $d_{min}$, denoted by $\Tilde{d}_{min}$, can be defined as $\Tilde{d}_{min} = \min_i\min_{j\neq i}d_{i'j'} - 2\eta_{max}$. This bound can also be loose by $2\eta_{max}$.
\item If $p_{min}$ and $q_{max}$ are such that $p_{min}>q_{max}$, then a valid lower bound on $p_{min}$ is $\min_i(P_{a'})_{i,i} - q_{max}$, which can potentially be loose by $q_{max}$.
\item In the absence of the knowledge of $t_0$, we can use the star/non-star test for finding the center node among three nodes as long as no two nodes belong to the same \textit{leaf cluster}. This increases the time complexity of finding the center node from $\mathcal{O}(1)$ to $\mathcal{O}(n)$. Once we get nodes within the same \textit{leaf cluster}, the potential center node with the minimum objective function in Equation \eqref{eq:err_est_x} is chosen as the center node.
\end{itemize}
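The distance bounds in the first two items can be sketched as follows; the function name is illustrative, and the $2\eta_{max}$ slack follows the discussion above.

```python
import numpy as np

def distance_bounds(D_noisy, eta_max):
    """Sketch: estimate bounds on d_min and d_max from the matrix of
    pairwise empirical distances d_{i'j'} between noisy nodes."""
    D = np.array(D_noisy, dtype=float)       # copy, so the caller's matrix is untouched
    np.fill_diagonal(D, np.inf)              # exclude self-distances
    nearest = D.min(axis=1)                  # min_{j != i} d_{i'j'}
    d_max_ub = nearest.max()                 # \tilde d_max = max_i min_{j != i} d_{i'j'}
    d_min_lb = nearest.min() - 2 * eta_max   # \tilde d_min (valid when positive)
    return d_min_lb, d_max_ub
```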
\vspace{-10pt}
\section{Sample Complexity Results}\label{sec:sample_complexity}\vspace{-5pt}
In this section, we provide both sample complexity upper bounds and sample complexity lower bounds for recovering the tree using our algorithm in the presence of corrupted samples.
\begin{theorem}[\textbf{Sample Complexity Upper Bound}]\label{th:ub}
Suppose the random variables in $\mathbf{X}$ form a tree graphical model $T^*$ and we observe $\mathbf{X}'$ such that Assumptions \ref{ass:pmf}, \ref{ass:distance}, \ref{ass:max_error} and \ref{ass:fin_sample_err_est} are satisfied. Then, the finite sample Algorithm \ref{alg:find_tree} correctly recovers $\mathcal{T}_{T^*}^{sub}$ with probability at least $1-\delta$ if the number of samples $N$ satisfies
\begin{small}
\begin{align*}
N = \mathcal{O}\Bigg(\max\Bigg\{&\tfrac{k^2\exp(8d_{\max})}{(1-q_{max})^{6(k-1)}(0.9p_{min}^{2.5})^{2k}(1-\exp{(-2d_{min})})^2(k-1)^{2(k-1)}}\Bigg. \Bigg.,\\
&\Bigg.\Bigg.\tfrac{k \exp(16d_{\max})}{t_0^2 (1-q_{max})^{12(k-1)}(0.9p_{min}^{2.5})^{4k}(k-1)^{4(k-1)}}\Bigg\}\log\left(\tfrac{2nk(n-1)}{\delta}\right)\Bigg)
\end{align*}
\end{small}
\end{theorem}
In the unidentifiable setting, since Equation \eqref{eq:err_est_x} always has a solution, our algorithm finds more than one candidate center node and therefore resorts to the star/non-star test for finding the center node. In the sample complexity, the second term in the $\max$ comes from the quadratic test and can therefore be dropped. As a result, since learning only $\mathcal{T}_{T^*}$ is an easier problem, the sample complexity has a better dependence on $d_{max}, q_{max}$ and $p_{min}$.
\begin{theorem}[\textbf{Sample Complexity Lower Bound}]\label{th:lb}
Suppose the random variables in $\mathbf{X}$ form a tree graphical model $T^*$ and we observe $\mathbf{X}'$ such that Assumptions \ref{ass:pmf}, \ref{ass:distance}, \ref{ass:max_error} and \ref{ass:fin_sample_err_est} are satisfied. Then any algorithm that correctly recovers $\mathcal{T}_{T^*}^{sub}$ with probability at least $1-\delta$ requires $N$ samples where
$$
N = \Omega\left(\tfrac{\exp\left(\tfrac{2d_{\max}}{k-1}\right)}{(k-1)(1-q_{\max})^2\left(1-\exp\left(-\tfrac{d_{\min}}{k-1}\right)\right)} (1- \delta) \log(n)\right)
$$
Furthermore, for $k \geq 4$, $0< t_0 \leq \tfrac{k}{10}\exp(-2\tfrac{d_{\max}}{k-1})$, we additionally have
$$
N = \Omega\left(\max_{d\in \{d_{\max}, d_{\min}\}}\exp\left(-\tfrac{2d}{k-1}\right)\left(1-\exp\left(-\tfrac{d}{k-1}\right)\right) \tfrac{k(1- \delta) \log(n)}{ t_0^2}\right)
$$
\end{theorem}
\vspace{-5pt}
We note that our lower bounds on the sample complexity show that certain dependences on the problem parameters cannot be improved order-wise.
Firstly, the dependence on the graph size scales as $\Theta(\log(n))$, which is standard in graphical model learning. We observe that the sample complexity scales as ${\exp(\Theta(d_{\max}))}$ as a function of $d_{max}$. Furthermore, for small enough $t_0$ and support size $4$ or more, the dependence on $t_0$, the lower bound for the quadratic term $Q(x)$, scales as $\Theta(\frac{1}{t_0^2})$, highlighting the significance of the term $Q(x)$ in the recovery of MRFs under the unknown symmetric noise model.
Our lower bound proof for $t_0$ dependence in the (partially) identifiable case uses a family of $(n+1)$ star graphs with $n$ edges each, where one graph is a perturbed symmetric graphical model (Section \ref{sec:pertured_symmetric}), and for the other graphs we select one edge each and replace the conditional PMF with the one from a symmetric model.
Thus, the equivalence class $\mathcal{T}_{T^*}^{sub}$ for each graph in the family is unique. For the lower bounds in the unidentifiable scenario, we generalize the construction in \cite{tandon2021sga} to $k>2$ support size using symmetric graphical models. Our derivation for KL divergence for symmetric graphical model, and perturbed symmetric graphical models used in the lower bound proofs can be of independent interest.\vspace{-10pt}
\section{Experiments}\label{sec:exp}
In this section, we present experiments demonstrating the efficacy of our algorithm (the code can be found at \url{https://github.com/ashishkatiyar13/NoisyTreeMRF}). We first consider the $k = 2$ setting and demonstrate that our algorithm considerably outperforms the algorithm in \cite{tandon2021sga}. Next, we showcase the performance of our algorithm for the $k = 4$ setting with the perturbed symmetric model. As discussed in Section \ref{sec:examples}, the exact tree is identifiable in this scenario. \vspace{-8pt}
\begin{figure}
\begin{subfigure}{\textwidth}
\centering
\includegraphics[scale = 0.27]{our_vs_sga_chain.pdf}
\caption{Chain Graph}
\label{fig:our_vs_sga_chain}
\end{subfigure}
\newline
\begin{subfigure}{\textwidth}
\centering
\includegraphics[scale = 0.27]{our_vs_sga_star.pdf}
\caption{Star Graph}
\label{fig:our_vs_sga_star}
\end{subfigure}
\caption{For both chain and star graphs, our algorithm outperforms SGA for 4 different settings - (i) $\rho_{max} = 0.6, q_{max} = 0.4$, (ii) $\rho_{max} = 0.6, q_{max} = 0.0$, (iii) $\rho_{max} = 0.8, q_{max} = 0.4$, (iv) $\rho_{max} = 0.8, q_{max} = 0.0$}
\label{fig:our_vs_sga}
\end{figure}
\subsection{Support size, $k = 2$ (Unidentifiable setting):}\vspace{-5pt}
In this part, we compare the performance of our algorithm for chain and star graphs to that of SGA proposed in \cite{tandon2021sga}. We use the exact same settings as in \cite{tandon2021sga} and demonstrate that we outperform SGA. \\
For chain graphs, the nodes are labeled $X_1$ to $X_{12}$ from left to right. The star graphs have $X_1$ as the center node and $X_2,\dots, X_{12}$ are leaf nodes connected to $X_1$.\vspace{-3pt}
\paragraph{Setting:} (i) Number of nodes = 12. (ii) Correlation of all the adjacent nodes = $\rho$. (iii) Alternate nodes have maximum noise ($q_i = 0$ if $i~\%~2 = 0$, $q_i = q_{max}$ if $i~\%~2 = 1$). (iv) Assume access to $\rho$. (v) Number of iterations = 1000 \\
For both, chain graphs and star graphs, we vary $\rho$ in $\{0.6, 0.8\}$ and $q_{max}$ in $\{0, 0.4\}$.
We would like to point out that $q_{max}$ is defined differently in our setting and in SGA; $q_{max}$ in our setting is twice the SGA's $q_{max}$. The final results are presented in Figures \ref{fig:our_vs_sga_chain} and \ref{fig:our_vs_sga_star} respectively. \vspace{-10pt}
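The data generation in this setting can be sketched as follows. This is a hypothetical sketch: treating $q_i$ directly as the flip probability of a noisy node is an assumption of this illustration (as noted above, the convention differs from SGA's by a factor of two).

```python
import numpy as np

def sample_noisy_chain(n_nodes, rho, q_max, n_samples, rng):
    """Sketch of the k = 2 chain setting: +/-1 variables with correlation
    rho between adjacent nodes; alternate (odd-indexed) nodes are
    observed through a symmetric flipping channel."""
    X = np.empty((n_samples, n_nodes), dtype=int)
    X[:, 0] = rng.choice([-1, 1], size=n_samples)
    for i in range(1, n_nodes):
        # keeping the previous value w.p. (1 + rho)/2 yields correlation rho
        keep = rng.random(n_samples) < (1 + rho) / 2
        X[:, i] = np.where(keep, X[:, i - 1], -X[:, i - 1])
    noisy = X.copy()
    for i in range(n_nodes):
        if i % 2 == 1:                       # q_i = q_max on odd nodes, else 0
            flip = rng.random(n_samples) < q_max
            noisy[:, i] = np.where(flip, -noisy[:, i], noisy[:, i])
    return X, noisy
```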
\begin{figure}
\centering
\includegraphics[scale = 0.4]{random_graph.jpg}
\caption{Randomly generated graph used for algorithm evaluation.}
\label{fig:random_graph}
\end{figure}
\subsection{Support size, $k = 4$ (Identifiable Setting):} \vspace{-5pt}
In this part, we study the impact of $\delta$ on the performance of the algorithm for different graphs. We executed the algorithm on many randomly generated graphs, and it converges to the correct output. We report the results for 3 different graph structures - star, chain and one of the many randomly generated graphs (Figure \ref{fig:random_graph}).
\paragraph{Setting}:
(i) Number of nodes = 7.\\
(ii) Graph Shape = \{Chain, Star, Random\}\\
(iii) Distance of all the adjacent nodes = $\exp(-0.7)$. \\
(iv) Error probability is uniformly sampled from $[0,0.2]$.\\
(v) $\delta\in \{0.00, 0.02, 0.04\}$\\
(vi) Assume access to $q_{max}$, $d_{min}$ but not to $d_{max}$, $t_0$.\\
(vii) Number of iterations = 100
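The symmetric noise channel used in this setting can be sketched as follows; replacing a corrupted symbol uniformly among the other $k-1$ values is an assumption of this illustration.

```python
import numpy as np

def symmetric_channel(X, q, k, rng):
    """Sketch: each observation is replaced, with probability q, by a
    value drawn uniformly from the other k - 1 symbols in {0, ..., k-1}."""
    X = np.asarray(X)
    corrupt = rng.random(X.shape) < q
    shift = rng.integers(1, k, size=X.shape)   # shift in 1..k-1, never 0
    return np.where(corrupt, (X + shift) % k, X)
```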
\begin{figure}
\centering
\includegraphics[scale = 0.2]{app_k_4_delta.pdf}
\caption{Comparing the performance of our
algorithm and Chow-Liu over different values of $\delta_{i,j}\in \{0.00, 0.02, 0.04\}$ and different graph shapes - chain, star, random. Setting: $d_{min} = d_{max} = \exp(-0.7)$, $q_{max} = 0.2$, $\#$ of nodes$=7$. For both algorithms, we provide results for two cases: i) when the exact underlying tree is recovered, ii) when a tree from the equivalence class is recovered.}
\label{fig:app_k_4_delta}
\end{figure}
\paragraph{Takeaways:}
\begin{enumerate}[leftmargin=*,noitemsep]\vspace{-5pt}
\item We witness the transition from unidentifiability to identifiability. When $\delta = 0$, the exact graph cannot be recovered and hence the exact recovery fraction remains low regardless of the number of samples. Higher $\delta$ leads to faster convergence to the correct graph.
\item Learning a tree from the equivalence class requires much fewer samples.
\item For the given noise model with randomly selected error probabilities, Chow-Liu remains in the equivalence class for a significant number of realizations of the star shape. However, it lags considerably behind our algorithm.
\item Chow-Liu has high error for complete recovery.
\end{enumerate}
We also perform extensive experiments where we evaluate the impact of the probability of error and the distance between adjacent nodes and present the results in Appendix \ref{ap:exps}.
\vfill
\pagebreak
\section{Introduction}
One of the most straightforward ways to numerically solve time-dependent hyperbolic partial differential equations (PDEs) is to use the Method Of Lines (MOL), in which the spatial and temporal parts of the PDE are discretized separately. Typically, a discretization of the space is performed in the first step that approximates the PDE by a system of ordinary differential equations (ODEs), for which, afterwards, a chosen numerical integration is used \cite{shu_essentially_1998,leveque_finite_2004}.
Quite naturally, the first candidates to obtain numerical solutions of ODEs are explicit methods that deliver numerical solutions without the need to solve any algebraic system of equations. Such approximations require a careful choice of discretization steps not only due to accuracy requirements but also for stability reasons. The second requirement is specific to numerical methods and, if not followed, unstable behaviors of numerical solutions can occur even for well-posed problems. For many types of PDEs and the methods for their numerical solutions, such stability requirements are well understood, typically formulated in the form of a stability condition on the choice of time steps. If the known stability restriction does not limit the choice of discretization steps more than the accuracy requirement, the usage of explicit methods is well justified.
Nevertheless, in several cases such restrictions can be too demanding or simply hard to follow, and consequently implicit time discretization methods for MOL are also considered, see, e.g., \cite{arbogast2020third, puppo_quinpi_2022}. Such methods are receiving increasing attention, especially for models that describe several dynamic processes with different characteristic speeds, of which only those with slow or moderate speed are of practical interest. In such cases, the terms related to the processes with the fastest speed are treated implicitly. As an example we mention here only so-called ``all Mach number'' solvers of Euler equations that are treated, e.g., with fully implicit methods \cite{barsukow2017numerical} or IMEX methods \cite{avgerinos_linearly_2019,boscheri_second_2020,zeifang2020novel} using also relaxation methods \cite{berthon_all_2020,thomann_implicit_2021}.
The implicit methods do not explicitly define the values of numerical solution, but instead formulate algebraic systems to be fulfilled by the discrete numerical values. Therefore, to find the numerical solution, a system of algebraic equations must be solved. Clearly, this is the main price to be paid by the usage of implicit methods that must be well justified, especially if the algebraic systems are nonlinear. Consequently, methods are demanded that simplify the task of (nonlinear) algebraic solvers, and that is the main motivation of our study.
Implicit numerical integrators can be used analogously to the explicit methods applied to the system of ODEs obtained with MOL for PDEs, see, e.g., Diagonally Implicit Runge-Kutta (DIRK) methods \cite{puppo_quinpi_2022}, adaptive Runge-Kutta methods \cite{arbogast2020third}, implicit multi-rate methods \cite{carciopolo2019conservative} and so on. Other types of temporal discretization methods offer a coupled treatment of both discretization methods. We mention explicit methods based on Taylor series expansions like the Lax-Wendroff procedure
\cite{qiu_finite_2003,zorio_approximate_2017,carrillo2019compact,carrillo2021lax}, a local time-space DG discretization \cite{dumbser_finite_2008,han2021dec}, and two-, or even multi-derivative implicit schemes \cite{gottlieb_high_2022}, see also the review paper \cite{seal_high-order_2014}. Concerning the methods based on Taylor series, several authors have recognized that it can be advantageous to discretize each replaced term in the Taylor expansion with different spatial discretizations \cite{qiu_finite_2003, seal_high-order_2014, tsai_two-derivative_2014, li_two-stage_2019,carrillo2019compact,zeifang_two-derivative_2021}. A natural question arises whether such an approach can be used to simplify the algebraic equations resulting from implicit or implicit-explicit time discretization of hyperbolic problems.
In the context of nonconservative advection equation, it has been recognized that a special coupling of temporal and spatial discretization can result in second order accurate numerical schemes that are unconditionally stable and that result in simpler algebraic systems than the ones obtained with fully implicit schemes \cite{frolkovic2016semi, frolkovic2018semi}. Such schemes were introduced under the abbreviation IIOE (``Inflow-Implicit/Outflow-Explicit'') finite volume method for the scalar advection equation in \cite{mikula2014inflow} and later successfully applied in, e.g., \cite{frolkovic2015semi, hahn2019iterative, ibolya2020numerical}. The IIOE scheme in \cite{mikula2014inflow} resembles the so-called ``angle derivative'' type methods for constant velocity advection; see the discussion and references in \cite{mccartin_method_2005}.
As the notation ``Inflow-Implicit and Outflow-Explicit'' suggests, spatial and temporal discretizations are aware of each other and proposed in a coupled way. In \cite{frolkovic2018semi,frolkovic_semi-implicit_2021} it is shown that the schemes can be derived by considering and approximating mixed spatial-temporal derivatives in the Taylor expansion of the solution when using the Lax-Wendroff (LW) procedure only partially. The idea of not using LW to completely replace the time derivatives with the space derivatives using PDE is used also in different contexts in \cite{duraisamy_implicit_2007,carrillo2019compact,carrillo2021lax}.
In contrast to level set advection equations, where the solution is supposed to be continuous, hyperbolic problems allow for discontinuous solutions. For this type of problem, the numerical scheme must be conservative to approximate correctly the movement of shock waves, and the scheme must be nonlinear even for linear problems if a higher resolution than that of the first order scheme with non-oscillatory numerical solutions is required. In this context, we introduce a novel semi-implicit method for some representative models of hyperbolic systems extending the approach for linear problems in \cite{frolkovic2018semi,frolkovic_semi-implicit_2021} with some ideas from \cite{duraisamy_implicit_2007,lozano_implicit_2021} for nonlinear problems.
For scalar PDE the proposed method involves a free parameter, for which the method is always second order accurate in time and space for smooth solutions. In the case of discontinuous solutions we propose a predictor-corrector procedure to find solution dependent values of the parameter for which the scheme is Total Variation Diminishing (TVD). For the case of Courant number larger than one, in general, additional limiting must be used, which we propose in a form of flux-corrected transport scheme \cite{deLuna2021maximum,kuzmin2022bound}. The system of algebraic equations is solved efficiently with only one forward and one backward sweep using the fast sweeping method \cite{lozano_implicit_2021}. In this method, a scalar algebraic equation is solved for each grid point that is nonlinear only due to the nonlinearity of the model (if nonlinear). In the case of hyperbolic systems we express the second order correction part of the scheme in characteristic variables as suggested, e.g., in \cite{leveque_finite_2004,duraisamy_implicit_2007}.
The paper is organized as follows. In Section \ref{sec1} we present the method for the scalar case with details for the linear advection equation given in Section \ref{sec2}. In Section \ref{sec3} we present algorithmic details of the high-resolution method for the scalar nonlinear case, and in Section \ref{sec4} we give additional details for hyperbolic systems. The Section \ref{sec5} on numerical experiments presents several test examples that illustrate the properties of the method. Finally, in Section \ref{sec-conc} we make some concluding remarks and in Appendix \ref{sec-app} we give some technical details.
\section{Scalar conservation laws}
\label{sec1}
In this section, we aim to solve numerically the scalar nonlinear hyperbolic equation written in the form
\begin{equation}
\label{cl}
u_t + f(u)_x = 0 \,, \quad u(x,0) = u^0(x) \,, \,\, x \in R \,, \, t > 0 \,,
\end{equation}
where $u=u(x,t)$ is the unknown function with initial values prescribed by a given $u^0$ and $f$ is a given flux function.
To discretize (\ref{cl}) we follow the approach of conservative finite difference methods as described, e.g., in \cite{shu_essentially_1998,qiu_finite_2003,lozano_implicit_2021}. For that purpose we use the standard notation for grid nodes $x_i$, $i=0,1,2\ldots,I$ with a uniform step $\Delta x \equiv x_i-x_{i-1}$ and discrete times $0=t^0<t^1<\ldots$ with $\Delta t \equiv t^{n+1}-t^n$, $n=0,1,\ldots,N$ where the integers $I$ and $N$ are given. Furthermore, $x_{i+1/2}=x_i+\Delta x/2 $, $f_i^n := f(u_i^n)$, etc. Our aim is to find approximations $u_i^{n+1} \approx u(x_i,t^{n+1})$. To do so, we follow the standard form of conservative schemes,
\begin{equation}
\label{step0}
u_i^{n+1} + \frac{\Delta t}{\Delta x} \left( F_{i+1/2}^{n+1} - F_{i-1/2}^{n+1} \right) = u_i^n \,,
\end{equation}
where the numerical fluxes will be defined by a numerical flux function.
To propose the numerical flux function, we use the approach of the fractional step scheme presented in \cite{lozano_implicit_2021}. There, the flux function $f$ is split into the sum of two functions having nonnegative and nonpositive derivatives,
\begin{equation}
\label{split}
f = f^+ + f^- \,, \quad \frac{df^+}{du} \ge 0 \,,\,\, \frac{df^-}{du}\le 0 \,, \,\, u \in R \,.
\end{equation}
One choice for (\ref{split}) is analogous to the Lax-Friedrichs vector splitting,
\begin{equation}
\label{lf}
f^+(u) := \frac{1}{2} \left( f(u) + \alpha u\right) \,, \quad f^-(u) := \frac{1}{2} \left( f(u) - \alpha u\right) \,,
\end{equation}
where the parameter $\alpha$ is fixed at the maximum value of $|f'(u)|$ over the relevant values of $u$.
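A minimal sketch of the splitting (\ref{lf}) for the Burgers flux $f(u)=u^2/2$ follows; estimating $\alpha$ by sampling $f'$ numerically on a grid is a choice of this illustration, not part of the method.

```python
import numpy as np

def lax_friedrichs_splitting(f, u_range, n=2001):
    """Sketch of (lf): f^{+/-}(u) = (f(u) +/- alpha*u)/2 with
    alpha = max |f'(u)| estimated numerically over u_range."""
    u = np.linspace(u_range[0], u_range[1], n)
    alpha = np.max(np.abs(np.gradient(f(u), u)))
    fp = lambda v: 0.5 * (f(v) + alpha * v)   # nondecreasing part, df+/du >= 0
    fm = lambda v: 0.5 * (f(v) - alpha * v)   # nonincreasing part, df-/du <= 0
    return fp, fm, alpha

# Burgers flux on u in [-1, 1]: alpha is approximately 1
fp, fm, alpha = lax_friedrichs_splitting(lambda u: 0.5 * u**2, (-1.0, 1.0))
```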
Having the splitting, the simplest variant of the fractional step method consists of two partial steps with the first step given by solving the algebraic equations,
\begin{equation}
\label{step1}
u_i^{n+1} + \frac{\Delta t}{\Delta x} F_{i+1/2}^{+,n+1} = u_i^{n} + \frac{\Delta t}{\Delta x} F_{i-1/2}^{+,n+1} \,, \,\, i=1,2,\ldots,I \,,
\end{equation}
where the numerical flux $F_{1/2}^{+,n+1}$ shall be determined from the boundary conditions.
The second step is given by the solution of
\begin{equation}
\label{step2}
u_i^{n+1} - \frac{\Delta t}{\Delta x} F_{i-1/2}^{-,n+1} = u_i^{n} - \frac{\Delta t}{\Delta x} F_{i+1/2}^{-,n+1} \,, \,\, i=I-1,I-2,\ldots,0 \,,
\end{equation}
where the values $u_i^n$ in (\ref{step2}) are equal to the values $u_i^{n+1}$ of the first fractional step (\ref{step1}), which we do not distinguish in the notation. Again, the value $F_{I-1/2}^{-,n+1}$ in (\ref{step2}) is determined from the boundary conditions. The numerical fluxes in (\ref{step1}) and (\ref{step2}) are given in \cite{lozano_implicit_2021} by the first order accurate upwind approximation,
\begin{equation}
\label{F}
F_{i+1/2}^{+,n+1} = f_{i}^{+,n+1} \,, \quad
F_{i-1/2}^{-,n+1} = f_{i}^{-,n+1} \,.
\end{equation}
In what follows, we propose a second order and a high-resolution extension of the numerical fluxes in (\ref{F}).
The most important advantage of the proposed method (\ref{step1}) - (\ref{F}) is that each algebraic equation contains only the single unknown $u_i^{n+1}$. The main disadvantage of (\ref{F}) is the low order accuracy that we aim to improve here. Note that in our numerical experiments we use the fractional step method in the first order accurate form (\ref{step1}) - (\ref{step2}), for higher order extensions see a discussion in \cite{lozano_implicit_2021}.
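To make the single-unknown structure concrete, here is a sketch of one forward sweep of (\ref{step1}) with the first order fluxes (\ref{F}) for linear advection $f(u)=\bar v u$ with $\bar v>0$ (so $f^-\equiv 0$ and the backward sweep is trivial); the function name and boundary handling are choices of this illustration.

```python
import numpy as np

def implicit_upwind_step(u, C, inflow):
    """One forward sweep of the first order implicit upwind scheme:
    (1 + C) u_i^{n+1} = u_i^n + C u_{i-1}^{n+1}.
    Each equation has the single unknown u_i^{n+1}, so one pass in
    increasing i solves the implicit system exactly; no CFL restriction
    is needed, so C may exceed 1."""
    new = np.empty_like(u)
    new[0] = inflow                        # boundary value u_0^{n+1}
    for i in range(1, len(u)):
        new[i] = (u[i] + C * new[i - 1]) / (1.0 + C)
    return new
```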
The numerical flux functions in our semi-implicit method take the following parametric form,
\begin{eqnarray}
\label{F2a}
F_{i+1/2}^{+,n+1} = f_{i}^{+,n+1} -
\frac{l_{i}}{2} \left( (1-\omega_{i}) (f_{i}^{+,n+1} - f_{i+1}^{+,n})
+ \omega_{i} (f_{i-1}^{+,n+1} - f_{i}^{+,n}) \right) \,, \\[1ex]
\label{F2b}
F_{i-1/2}^{-,n+1} = f_{i}^{-,n+1} -
\frac{l_{i}}{2}\left( (1-\omega_{i}) (f_{i}^{-,n+1} - f_{i-1}^{-,n})
+ \omega_{i} ( f_{i+1}^{-,n+1} - f_{i}^{-,n}) \right)\,,
\end{eqnarray}
where the parameters $\omega_i \in [0,1]$ and $l_i \in [0,1]$ shall be chosen. The parameters are different in (\ref{F2a}) and (\ref{F2b}) (in fact, also in each time step), which we do not emphasize in the notation. For a fixed value of $\omega_i \equiv \bar \omega$ and $l_i \equiv 1$ the method is second order accurate for smooth solutions if either $f \equiv f^+$ or $f \equiv f^-$, see the Appendix. In the case of the linear advection equation, the scheme is unconditionally stable for $\omega_i \ge 0$, with no restriction on the choice of $\Delta t$ due to stability, see the proof in \cite{frolkovic_semi-implicit_2021}.
Note that replacement of (\ref{F}) by the definitions (\ref{F2a}) and (\ref{F2b}) again results in a fully upwinded form in the implicit part of the schemes (\ref{step1}) and (\ref{step2}) for any particular choice of parameters. Consequently, the left hand sides of (\ref{step1}) and (\ref{step2}) contain again only single unknown values $u_i^{n+1}$ if computed in the order defined in (\ref{step1}) and (\ref{step2}).
Any constant choice of the parameters $\omega_i$ gives a scheme with a fixed stencil that, for non-smooth problems, can produce numerical solutions with unphysical oscillations that do not diminish under grid refinement. To suppress such behavior, we define variable values of $\omega_i$ depending on the numerical solution, which results in a nonlinear numerical scheme even for the linear advection equation. In what follows, we propose such a dependence of $\omega$ in (\ref{step1}) - (\ref{step2}) on $u_i^{n+1}$ to obtain a Total Variation Diminishing (TVD) scheme \cite{leveque_finite_2004,toro_riemann_2009}.
\section{Linear advection equation}
\label{sec2}
For clarity of presentation, we derive the nonlinear numerical scheme for the simplest case of the linear advection equation with positive constant velocity $\bar v$, when $f(u) \equiv f^+(u)= \bar v u$. The Courant number is denoted by
\begin{equation}
C = \frac{\bar v \Delta t}{\Delta x} \,.
\nonumber
\end{equation}
Our aim is to propose a function $\omega=\omega(r)$ such that $\omega_i=\omega(r_i)$ and
\begin{equation}
\label{r}
r_{i} = \frac{u_{i-1}^{n+1} - u_{i}^n}{u_{i}^{n+1} - u_{i+1}^n} \,,
\end{equation}
if $u_{i}^{n+1} \neq u_{i+1}^n$. As $r_i$ in (\ref{r}) depends on the unknown value $u_i^{n+1}$, the resulting scheme will be nonlinear even for the linear advection equation.
Using (\ref{r}) and $l_i \equiv 1$ we can express the numerical fluxes as follows,
\begin{eqnarray}
\label{upsim}
F_{i-1/2}^{+,n+1} = \bar v \left( u_{i-1}^{n+1} -\frac{1}{2} \left( 1-\omega_{i-1} + \omega_{i-1} r_{i-1} \right) (u_{i-1}^{n+1} - u_{i}^{n}) \right)\,, \\[1ex]
\label{upsip}
F_{i+1/2}^{+,n+1} = \bar v \left( u_{i}^{n+1} -\frac{1}{2 r_i} \left( 1-\omega_{i} + \omega_{i} r_i \right) (u_{i-1}^{n+1} - u_{i}^{n}) \right)\,,
\end{eqnarray}
if $r_i \neq 0$ (that we suppose for the rest of this derivation and comment later). Denoting
\begin{equation}
\nonumber
\Psi_{i} = 1-\omega_{i} + \omega_{i} r_{i} \,,
\end{equation}
one can express the numerical fluxes in (\ref{upsim}) and (\ref{upsip}) using the form
\begin{eqnarray*}
\label{upsimlim}
F_{i-1/2}^{+,n+1} = \bar v \left( u_{i-1}^{n+1} -\frac{1}{2} \Psi_{i-1} (u_{i-1}^{n+1} - u_{i}^{n}) \right)\,, \\[1ex]
\label{upsiplim}
F_{i+1/2}^{+,n+1} = \bar v \left( u_{i}^{n+1} -\frac{1}{2} \Psi_i (u_{i}^{n+1} - u_{i+1}^{n}) \right) = \bar v \left( u_{i}^{n+1} -\frac{1}{2} \frac{\Psi_i}{r_i} (u_{i-1}^{n+1} - u_{i}^{n}) \right).
\end{eqnarray*}
The values $\Psi_i$ can be viewed as the so-called flux limiters \cite{leveque_finite_2004,toro_riemann_2009,duraisamy_implicit_2007,puppo_quinpi_2022}, when the scheme (\ref{step1}) can be written formally in the form,
\begin{eqnarray}
\label{steppsi}
u_i^{n+1} - u_i^n + C \left( u_i^{n+1} - u_{i-1}^{n+1} -
\frac{1}{2} \left(\frac{\Psi_{i}}{r_{i}} - \Psi_{i-1}\right) (u_{i-1}^{n+1} - u_{i}^n) \right) = 0 \,.
\end{eqnarray}
Now, using
\begin{equation}
\nonumber
\label{triv}
u_{i-1}^{n+1} - u_{i}^n = (u_i^{n+1} - u_{i}^n) - (u_i^{n+1} - u_{i-1}^{n+1}) \,,
\end{equation}
the scheme (\ref{steppsi}) can be written in the form
\begin{eqnarray}
\label{tvdscheme}
\left( 1 - \frac{C}{2} \left(\frac{\Psi_{i}}{r_{i}} - \Psi_{i-1}\right)\right)(u_i^{n+1} - u_i^n) + \\[1ex]\nonumber
C \left( 1 + \frac{1}{2} \left(\frac{\Psi_{i}}{r_{i}} - \Psi_{i-1}\right) \right) (u_i^{n+1} - u_{i-1}^{n+1}) = 0 \,.
\end{eqnarray}
If the coefficients multiplying $(u_i^{n+1} - u_i^n)$ and $(u_i^{n+1} - u_{i-1}^{n+1})$ in (\ref{tvdscheme}) are nonnegative, then the scheme is TVD \cite{duraisamy_implicit_2007,puppo_quinpi_2022}.
In what follows, we propose $\omega=\omega(r)$ so that this property is fulfilled for $C\le 1$. For larger Courant numbers, to preserve the TVD property, we have to consider $l_i \in [0,1]$, see later their definition inspired by flux-corrected type methods \cite{kuzmin2022bound,deLuna2021maximum}.
\begin{rmk}
\label{rem-spec}
To derive (\ref{tvdscheme}) we have supposed, among others, that $u_{i-1}^{n+1} \neq u_i^n$. As we show later, the case $u_{i-1}^{n+1} = u_i^n$ can happen only if $\omega_{i-1}=0$, when the scheme (\ref{step1}) takes the simpler form,
\begin{eqnarray}
\label{steppsispec1}
u_i^{n+1} - u_i^n + C \left( u_i^{n+1} - u_{i-1}^{n+1} -
\frac{1}{2} (1-\omega_{i}) (u_i^{n+1}-u_{i+1}^n) \right) = 0 \,.
\end{eqnarray}
To fulfill the TVD property, we choose $\omega_{i}=1$ in (\ref{steppsispec1}) which results in the first order scheme. If by chance the result of (\ref{steppsispec1}) is $u_i^{n+1}=u_{i+1}^n$, then $\omega_i$ in (\ref{steppsispec1}) can be arbitrarily chosen, so we set $\omega_{i}=0$.
\end{rmk}
To have positive coefficients in (\ref{tvdscheme}) we require the following,
\begin{eqnarray}
\label{ineq1}
-1 \le \Psi_{i-1} \le 2 \,, \\[1ex]
\label{ineq2}
\Psi_{i-1} - 2 \le \frac{\Psi_{i}}{r} \le \Psi_{i-1} + 2 \,,
\end{eqnarray}
where the inequalities in (\ref{ineq2}) must be satisfied for an arbitrary nonzero $r \in R$. Note that the inequalities in (\ref{ineq1}) are, in fact, required to fulfill (\ref{ineq2}) for two special cases that can occur: $\Psi_{i}=0$ and $\Psi_{i}=r$. Note that for accuracy reasons, we require $\Psi(1)=1$ \cite{leveque_finite_2004,duraisamy_implicit_2007,puppo_quinpi_2022}.
One of the simplest choices is used in \cite{duraisamy_implicit_2007}, where
\begin{equation}
\nonumber
\Psi(r) = \left \{
\begin{array}{lr}
r & |r| \le 1 \\[1ex]
1 & |r| > 1
\end{array}
\right . \,,
\end{equation}
or, equivalently,
\begin{equation}
\label{minmodomega}
\omega(r) = \left \{
\begin{array}{lr}
1 & |r| \le 1 \\[1ex]
0 & |r| > 1
\end{array}
\right . \,.
\end{equation}
In the case of fully explicit or fully implicit schemes, the choice (\ref{minmodomega}) can be viewed as the second order ENO reconstruction \cite{shu_essentially_1998,duraisamy_implicit_2007}.
We propose the function $\omega=\omega(r)$ following a strategy of modified ENO schemes \cite{shu_numerical_1990}. That is, we assume that a preferable constant value $\bar \omega \in (0,1]$ of $\omega$ is chosen, to be used in (\ref{F2a}) for each $\omega_i$ whenever the TVD property is not destroyed. In what follows, we choose $\bar \omega = 1$, which gives the upwinded form of the fluxes in (\ref{F2a}) - (\ref{F2b}) that we prefer for larger Courant numbers.
In particular, we define
\begin{equation}
\label{omega1}
\omega(r) = \left \{ \begin{array}{lr}
\frac{1}{r-1} & 2 \le r \\[1.5ex]
\frac{2}{1-r} & r \le -1 \\[1.5ex]
1 & -1 \le r \le 2
\end{array} \right.
\end{equation}
or, equivalently,
\begin{equation}
\label{psi1}
\Psi(r) = \left \{ \begin{array}{lr}
2 & 2 \le r \\[1.5ex]
-1 & r \le -1 \\[1.5ex]
r & -1 \le r \le 2 \,.
\end{array} \right.
\end{equation}
Clearly, if $\Psi_{i-1}$ and $\Psi_i$ are defined by (\ref{psi1}), then the inequalities (\ref{ineq1}) - (\ref{ineq2}) are fulfilled and the scheme is TVD.
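Numerically, the limiter (\ref{psi1}) and the weight (\ref{omega1}) can be sketched as follows; this is a minimal illustration, and the vectorized form and the tolerance handling near $r=1$ are choices of this sketch.

```python
import numpy as np

def psi(r):
    """The limiter (psi1): Psi(r) = r clipped to [-1, 2]."""
    return np.clip(r, -1.0, 2.0)

def omega(r):
    """Equivalent weight (omega1) via Psi = 1 - omega + omega*r,
    i.e. omega = (Psi(r) - 1)/(r - 1) for r != 1, and omega(1) = 1."""
    r = np.asarray(r, dtype=float)
    denom = np.where(np.isclose(r, 1.0), 1.0, r - 1.0)
    return np.where(np.isclose(r, 1.0), 1.0, (psi(r) - 1.0) / denom)
```

Note that $\Psi(r)/r \in (0,1]$ for every nonzero $r$, which is what keeps the coefficients in (\ref{tvdscheme}) under control.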
Next, we comment on how to solve the nonlinear algebraic equations (\ref{step1}) with (\ref{F2a}) and (\ref{omega1}). We propose an iterative predictor-corrector procedure. First, we predict the value of $u_i^{n+1}$ by $u_i^{n+1,0}$, obtained, e.g., from the first order scheme or from the second order scheme with $\omega_i$ fixed at some chosen value; see the discussion in the section on numerical experiments.
Suppose that some predicted value $u_i^{n+1,k} \approx u_i^{n+1}$ for $k\ge 0$ is available; then we define the value of $r_i=r_i^k$ by replacing $u_i^{n+1}$ with $u_i^{n+1,k}$ in (\ref{r}). Similarly, the value $\omega_i=\omega_i^k$ from (\ref{omega1}) or $\Psi_i=\Psi_i^k$ from (\ref{psi1}) is obtained. Now, solving the linear algebraic equation for the unknown $u_i^{n+1}$
\begin{eqnarray}
\label{steppsik}
u_i^{n+1} + C u_i^{n+1} = u_i^n + C \left( u_{i-1}^{n+1} + \frac{1}{2} \left(\frac{\Psi_{i}^k}{r_{i}^k} - \Psi_{i-1}\right) (u_{i-1}^{n+1} - u_{i}^n) \right) \,,
\end{eqnarray}
one obtains the corrected value $u_i^{n+1,k+1}$. If the difference $|u_i^{n+1,k+1}-u_i^{n+1,k}|$ is acceptably small, set $u_i^{n+1}:=u_i^{n+1,k+1}$.
If $u_i^{n+1} \neq u_i^{n+1,k}$, one has to choose between the flux $F_{i+1/2}^{n+1}$ that preserves the TVD property, i.e., the one corresponding to (\ref{steppsik}),
\begin{equation}
\label{consflux}
F_{i+1/2}^{n+1} = C \left(u_i^{n+1} - \frac{1}{2}\left((1-\omega_i^k) ( u_i^{n+1,k}-u_{i+1}^{n}) + \omega_i^k (u_{i-1}^{n+1}-u_i^n)\right)\right)
\end{equation}
or the locally conservative flux,
\begin{equation}
\label{tvdflux}
F_{i+1/2}^{n+1} = C \left(u_i^{n+1} - \frac{1}{2}\left((1-\omega_i^k) ( u_i^{n+1}-u_{i+1}^{n}) + \omega_i^k (u_{i-1}^{n+1}-u_i^n)\right)\right) \,.
\end{equation}
In our numerical experiments, we prefer the conservative one, especially if only one corrector step is used; consequently, a small violation of the TVD property can occur in general.
Note that to find the numerical solution in each time step we have to visit each point in the grid only once, and we compute the value $u_i^{n+1}$ explicitly for the linear advection equation in each corrector step (very often only a single one).
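For the linear advection equation, the whole predictor-corrector step can be sketched in a few lines. The code below is our own illustration under simplifying assumptions (unit advection speed, $C\le 1$ so that the factors $l_i$ introduced later are not needed, compactly supported data, one predictor and one corrector step); it is not the authors' implementation.

```python
import math

def psi(r):
    """Limiter (psi1): identity on [-1, 2], clipped outside."""
    return min(2.0, max(-1.0, r))

def psi_over_r(r):
    """Psi(r)/r with the removable singularity at r = 0 filled in
    (Psi(r) = r on [-1, 2], hence Psi(r)/r = 1 there)."""
    return 1.0 if -1.0 <= r <= 2.0 else psi(r) / r

def first_order_step(u, C):
    """Implicit first-order upwind step for u_t + a u_x = 0 (a > 0),
    solved explicitly by one left-to-right sweep; boundary cells fixed."""
    v = u[:]
    for i in range(1, len(u) - 1):
        v[i] = (u[i] + C * v[i - 1]) / (1.0 + C)
    return v

def high_resolution_step(u, C, eps=1e-12):
    """One predictor (first order) plus one corrector solving (steppsik)
    with limiter values frozen at the predicted solution."""
    p = first_order_step(u, C)               # predictor u^{n+1,0}
    v = u[:]
    psi_prev = 1.0                           # default value of Psi_{i-1}
    for i in range(2, len(u) - 2):
        dup = v[i - 1] - u[i]                # upwind difference Delta^up
        ddw = p[i] - u[i + 1]                # downwind difference Delta^dw
        if abs(ddw) <= eps:                  # r -> infinity: keep defaults
            por, psi_i = 0.0, 1.0
        else:
            r = dup / ddw
            por, psi_i = psi_over_r(r), psi(r)
        term = por - psi_prev                # Psi_i/r_i - Psi_{i-1}
        v[i] = (u[i] + C * (v[i - 1] + 0.5 * term * (v[i - 1] - u[i]))) / (1.0 + C)
        psi_prev = psi_i
    return v

# demo: advect a compactly supported bump with unit speed, C = 0.8
I, C, steps = 400, 0.8, 150
h = 1.0 / I
x = [(i + 0.5) * h for i in range(I)]
u2 = [math.exp(-200.0 * (xi - 0.25) ** 2) for xi in x]
u1 = u2[:]
for _ in range(steps):                       # travelled distance = steps*C*h = 0.3
    u2 = high_resolution_step(u2, C)
    u1 = first_order_step(u1, C)
exact = [math.exp(-200.0 * (xi - 0.55) ** 2) for xi in x]
err2 = h * sum(abs(a - b) for a, b in zip(u2, exact))
err1 = h * sum(abs(a - b) for a, b in zip(u1, exact))
```

Since each corrector update above is a convex combination of $u_i^n$ and $u_{i-1}^{n+1}$ whenever $C\le 1$, no new extrema are created in this setting.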
Finally, we have to treat the case where $C>1$. For that purpose, we introduced in (\ref{F2a}) and (\ref{F2b}) the factors $l_i \in [0,1]$ in the spirit of flux corrected transport schemes. In particular, instead of (\ref{tvdscheme}) we obtain
\begin{eqnarray}
\label{ltvdscheme1}
\left( 1 - \frac{C}{2} \left(\frac{l_{i}\Psi_{i}}{r_{i}} - l_{i-1}\Psi_{i-1}\right)\right)(u_i^{n+1} - u_i^n) + \\[1ex]\nonumber
C \left( 1 + \frac{1}{2} \left(\frac{l_{i}\Psi_{i}}{r_{i}} - l_{i-1}\Psi_{i-1}\right) \right) (u_i^{n+1} - u_{i-1}^{n+1}) = 0 \,.
\end{eqnarray}
Clearly, if $l_{i}=l_{i-1}=1$, we obtain the original (uncorrected) scheme. To have positive coefficients in (\ref{ltvdscheme1}) if $C>1$, we have to require more restrictive inequalities than (\ref{ineq1}) and (\ref{ineq2}), namely,
\begin{equation}
\label{ineqe}
-\frac{1}{C} < l_{i-1} \Psi_{i-1} \le 2 \,, \quad -2 + l_{i-1} \Psi_{i-1} \le \frac{l_{i} \Psi_{i}}{r} \le \frac{2}{C} + l_{i-1} \Psi_{i-1} \,.
\end{equation}
Therefore, we define
\begin{equation}
\label{omegae}
\omega_{i} = \left \{ \begin{array}{lr}
\frac{1}{r-1} & 2 \le r \\[1.5ex]
\frac{1+C}{C (1-r)} & r \le -\frac{1}{C} \\[1.5ex]
1 & \hbox{otherwise}
\end{array} \right.
\end{equation}
or equivalently
\begin{equation}
\label{l}
\Psi_{i} = \left \{ \begin{array}{lr}
2 & 2 \le r \\[1.5ex]
-1/C & r \le -\frac{1}{C} \\[1.5ex]
r & \hbox{otherwise}
\end{array} \right. \,.
\end{equation}
Finally,
\begin{equation}
\label{li}
l_i = \min \left \{ 1 , \max \left \{ 0, \frac{r_i}{\Psi_i}\left(\frac{2}{C}+l_{i-1}\Psi_{i-1}\right)\right \} \right \} \,.
\end{equation}
Using (\ref{omegae}) - (\ref{li}), one obtains the inequalities in (\ref{ineqe}) for arbitrary $C\ge 1$.
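A small sketch (our own, with hypothetical helper names) of the clipping in (\ref{l}) and (\ref{li}); the loop spot-checks that the clipped product $l_i\Psi_i/r_i$ respects the upper bound in (\ref{ineqe}):

```python
def psi_C(r, C):
    """Limiter (l) for Courant number C >= 1: 2 for r >= 2,
    -1/C for r <= -1/C, and r otherwise."""
    if r >= 2.0:
        return 2.0
    if r <= -1.0 / C:
        return -1.0 / C
    return r

def l_factor(r, C, l_prev, psi_prev):
    """Correction factor (li); the removable case Psi_i = 0 (r = 0) is guarded."""
    p = psi_C(r, C)
    if p == 0.0:
        return 1.0
    return min(1.0, max(0.0, (r / p) * (2.0 / C + l_prev * psi_prev)))

# spot check: l_i * Psi_i / r_i obeys the upper bound in (ineqe)
C, l_prev, psi_prev = 4.0, 1.0, 1.0
for r in (-5.0, -0.5, 0.5, 3.0, 10.0):
    l = l_factor(r, C, l_prev, psi_prev)
    assert 0.0 <= l <= 1.0
    assert l * psi_C(r, C) / r <= 2.0 / C + l_prev * psi_prev + 1e-12
```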
Before formulating the method with the semi-implicit scheme for the scalar case in the next section, we comment briefly on the case of a nonlinear flux function $f$. If, for example, $f'(u)\ge 0$ for $u \in R$, then we generalize (\ref{ltvdscheme1}) to the form
\begin{eqnarray}
\label{ltvdschemenl}
\left( 1 - \frac{1}{2} \frac{f_i^{n+1}-f_i^n}{u_i^{n+1}-u_i^{n}} \left(\frac{l_{i}\Psi_{i}}{r_{i}} - l_{i-1}\Psi_{i-1}\right)\right)(u_i^{n+1} - u_i^n) + \\[1ex]\nonumber
\frac{f_i^{n+1}-f_{i-1}^{n+1}}{u_i^{n+1}-u_{i-1}^{n+1}} \left( 1 + \frac{1}{2} \left(\frac{l_{i}\Psi_{i}}{r_{i}} - l_{i-1}\Psi_{i-1}\right) \right) (u_i^{n+1} - u_{i-1}^{n+1}) = 0 \,,
\end{eqnarray}
where the indicators $r_i$ are now determined by
\begin{equation}
\label{rnl}
r_{i} = \frac{f_{i-1}^{n+1} - f_{i}^n}{f_{i}^{n+1} - f_{i+1}^n} \,.
\end{equation}
The important difference from (\ref{tvdscheme}) is that the constant Courant number $C$ is replaced in (\ref{rnl}) by nonlinear terms. However, if an estimate of the maximal Courant number $C$ is available, the TVD property is preserved.
In the next section, we describe the details of the semi-implicit scheme for the scalar (nonlinear) hyperbolic equation including some algorithmic aspects.
\section{The semi-implicit high resolution scheme}
\label{sec3}
For simplicity, we suppose that the solution $u$ of (\ref{cl}) has a compact support, so we can set
\begin{equation}
\nonumber
\label{compactl}
u_0^{n+1} = u_0^n \,, \,\, u_1^{n+1} = u_1^{n} \,,
\end{equation}
and
\begin{equation}
\label{compactr}
\nonumber
u_I^{n+1} = u_I^n \,, \,\, u_{I-1}^{n+1} = u_{I-1}^{n} \,.
\end{equation}
Furthermore, we suppose that Courant numbers $C^+ \ge 0$ and $C^- \ge 0$ are available such that
\begin{equation}
\label{CN}
\nonumber
\frac{\tau}{h} \max_u \frac{d}{du} f^+(u) \le C^+ \quad \hbox{and} \quad \frac{\tau}{h} \min_u \frac{d}{du} f^-(u) \ge -C^- \,.
\end{equation}
Moreover, $\epsilon>0$ denotes a small enough constant. The default values for each time step, if not recomputed, are $\omega_i=0$, $l_i=1$, and $\Psi_i=1$.
The scheme (\ref{step1}) using (\ref{F2a}) for $i=2,3,\ldots,I-2$ or (\ref{step2}) using (\ref{F2b}) for $i=I-2,I-3,\ldots,2$ is then iteratively solved at the $n$-th time step as follows.
\vspace{2ex}
\begin{enumerate}
\item Compute
\begin{equation}
\label{fupw} \nonumber
\Delta^{up} = f_{i-1}^{+,n+1} - f^{+,n}_i \quad \hbox{or} \quad
\Delta^{up} = f_{i+1}^{-,n+1} - f_i^{-,n} \,.
\end{equation}
If $|\Delta^{up}|\le \epsilon$ then set $\omega_i=1$ and solve the algebraic equation (\ref{step1}) or (\ref{step2}) for the unknown $u_i^{n+1}$. Continue with step 1 for $i+1$ or $i-1$.\\
\item If $|\Delta^{up}| > \epsilon$ then set an initial guess $\mathrm{u}^{0} \approx u^{n+1}_i$, e.g., by solving for $\mathrm{u}$
\begin{equation}
\label{fo1} \nonumber
\mathrm{u} + \frac{\tau}{h} f^+(\mathrm{u}) = u_i^n + \frac{\tau}{h} F^{+,n+1}_{i-1/2}
\end{equation}
or
\begin{equation}
\label{fo2}\nonumber
\mathrm{u} - \frac{\tau}{h} f^-(\mathrm{u}) = u_i^n - \frac{\tau}{h} F^{-,n+1}_{i+1/2} \,.
\end{equation}
\item For the value $\mathrm{u}^k \approx u_i^{n+1}$ for some $k\ge 0$ compute
\begin{equation}
\label{fdw}\nonumber
\Delta^{dw,k} = f^+(\mathrm{u}^k) - f_{i+1}^{+,n} \quad \hbox{or} \quad
\Delta^{dw,k} = f^-(\mathrm{u}^k) - f_{i-1}^{-,n} \,.
\end{equation}
If $|\Delta^{dw,k}|\le \epsilon$ then proceed with step 5.
\item If $|\Delta^{dw,k}| > \epsilon$ then compute
\begin{equation}
\label{main}\nonumber
r^k = \frac{\Delta^{up}}{\Delta^{dw,k}} \,,
\end{equation}
and
\begin{eqnarray}
\label{omeganew}
\omega_i^k = \left \{ \begin{array}{lr}
\frac{1}{r^k-1} & 2 \le r^k \\[1.5ex]
\frac{1+C}{C (1-r^k)} & r^k \le -\frac{1}{C} \\[1.5ex]
1 & \hbox{otherwise} ,
\end{array} \right.
\end{eqnarray}
with $C=\max\{1,C^+\}$ or $C=\max\{1,C^-\}$, respectively. Furthermore,
\begin{equation}
\label{psiM}\nonumber
\psi_i^k = 1 - \omega_i^k + \omega_i^k r^k \,
\end{equation}
and if $\psi_i^k \neq 0$ then
\begin{equation}
\label{lnewl}\nonumber
l_i^k = \min \left\{ 1 , \max \left\{ 0, \frac{r_i^k}{\psi_i^k}\left(\frac{2}{C}+l_{i-1}\psi_{i-1}\right)\right\} \right\} \,,
\end{equation}
or
\begin{equation}
\label{lnewr}\nonumber
l_i^k = \min \left\{ 1 , \max \left\{ 0, \frac{r_i^k}{\psi_i^k}\left(\frac{2}{C}+l_{i+1}\psi_{i+1}\right)\right\} \right\} \,.
\end{equation}
\item Having set all the $k$-th estimates of the input parameters, we solve the algebraic equation (\ref{step1}) or (\ref{step2}) and denote its solution by $u_i^{n+1,k+1}$. If a chosen stopping criterion is fulfilled, we set $u_i^{n+1}=u_i^{n+1,k+1}$ and proceed with step 1 for $i+1$ or $i-1$. If not, we proceed with step 3.\\[0ex]
\end{enumerate}
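Steps 2 and 5 require solving a scalar algebraic equation of the form $u + \lambda f(u) = \mathrm{rhs}$ with $\lambda=\tau/h$. One standard possibility, not prescribed by the algorithm above, is a Newton iteration; the helper names below are our own.

```python
def solve_cell(rhs, lam, f, df, u0=0.0, tol=1e-12, maxit=50):
    """Newton iteration for the scalar cell equation u + lam * f(u) = rhs,
    as it arises, e.g., in step 2 with lam = tau/h and f = f^+; df is f'."""
    u = u0
    for _ in range(maxit):
        g = u + lam * f(u) - rhs
        if abs(g) < tol:
            break
        u -= g / (1.0 + lam * df(u))   # g'(u) = 1 + lam * f'(u) > 0 for f' >= 0
    return u

# usage: Burgers-type flux f(u) = u^2/2 on u >= 0, manufactured so that u = 1
u_star = solve_cell(rhs=2.0, lam=2.0, f=lambda u: 0.5 * u * u, df=lambda u: u)
```

Since $f^+$ is nondecreasing, $1+\lambda (f^+)'(u)>0$ and the Newton step is well defined.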
We note that to improve accuracy, one can replace $C^+$ in (\ref{omeganew}) by $C_i^k$ defined by
\begin{equation}
\label{Cnew}\nonumber
C_{i}^k = \left \{ \begin{array}{lr}
\frac{f^+(u_i^{n+1,k})-f^+(u_i^{n})}{u_i^{n+1,k}-u_i^{n}} & u_i^{n+1,k}\neq u_i^{n} \\[2ex]
\frac{d}{du}f^+(u_i^{n+1,k}) & u_i^{n+1,k} = u_i^{n}
\end{array} \right.
\end{equation}
and analogously for $C^-$.
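The local estimate (\ref{Cnew}) is a simple divided difference with a derivative fallback; a minimal sketch (our own naming):

```python
def local_courant(f, df, u_new, u_old):
    """Divided-difference speed estimate as in (Cnew); falls back to the
    derivative when the two states coincide."""
    if u_new != u_old:
        return (f(u_new) - f(u_old)) / (u_new - u_old)
    return df(u_new)

# Burgers-type flux: the divided difference equals the mean of the two states
f = lambda u: 0.5 * u * u
df = lambda u: u
```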
\section{Hyperbolic systems}
\label{sec4}
Concerning systems of hyperbolic equations, one has to take the steps defined for the scalar case in the previous sections for each component of the system. Similarly to the experience reported in \cite{leveque_finite_2004,duraisamy_implicit_2007}, we prefer to express the second order update of the numerical fluxes with the help of characteristic variables and characteristic speeds (the eigenvalues). In what follows, we explain some details.
We solve the system for ${\bf f} : R^m \rightarrow R^m$
\begin{equation}
\label{sys}\nonumber
\partial_t {\bf u} +\partial_x {\bf f}({\bf u}) = 0 \,,
\end{equation}
where we suppose that the Jacobian ${\bf f}'({\bf u})$ has only nonnegative real eigenvalues $\lambda^p$, $p=1,2,\ldots,m$. Systems with nonpositive eigenvalues are treated analogously, and the general case can be solved using the fractional step method as explained before in (\ref{step1}) - (\ref{step2}) \cite{lozano_implicit_2021}.
Let the columns of the matrix $R=R({\bf u})$ be given by the eigenvectors ${\bf r}^p$, $p=1,2,\ldots,m$. Due to hyperbolicity, the matrix $R$ is regular for each considered value of ${\bf u}$.
Let ${\bf u}$ be the last estimate of ${\bf u}^{n+1}_i$ and let $R^{-1}$ be the inverse matrix of $R({\bf u})$. We express the terms in the second order update of the semi-implicit scheme (\ref{F2a}) as a linear combination of the eigenvectors, namely
\begin{equation}
\label{ev}\nonumber
\boldsymbol{\alpha}_i = R^{-1} \cdot \left( {\bf f}_i^{k,n+1} - {\bf f}_{i+1}^n \right) \,, \quad
\boldsymbol{\beta}_i = R^{-1} \cdot \left( {\bf f}_{i-1}^{n+1} - {\bf f}_{i}^n \right) \,.
\end{equation}
The idea is that the weights in ${\bf w}_i =(w^1_i,w^2_i,\ldots,w^m_i)$ and ${\bf l}_i=(l^1_i,l^2_i,\ldots,l^m_i)$ are now associated with the coefficients ${\boldsymbol \alpha}_i$ and ${\boldsymbol \beta}_i$, so the fluxes in (\ref{F2a}) take the form,
\begin{eqnarray}
\label{sisys2}
{\bf F}_{i+1/2}^{n+1} = {\bf f}_i^{n+1} - \frac{1}{2}
\sum_p l_i^p \left( (1-w^p_i) \alpha^p_i + w_i^p \beta^p_i \right) {\bf r}^p \,.
\end{eqnarray}
Having the form (\ref{sisys2}), the high-resolution form of the scalar case is used for each component of the system with the indicators defined by
\begin{equation}
\label{rsys}
r^p_i = \frac{\beta^p_i}{\alpha^p_i} \,, \,\, p=1,2,\ldots,m \,.
\end{equation}
Furthermore, the Courant numbers $C^+$ in (\ref{omeganew}) are replaced by the corresponding values of the eigenvalues for each component.
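The decomposition (\ref{ev}) and the flux correction (\ref{sisys2}) can be illustrated on a hypothetical $2\times 2$ example of our own (the matrix, states and weights below are not taken from the text). With fully upwinded weights $w^p=l^p=1$, the correction reconstructs the upwind flux difference exactly:

```python
# Hypothetical linear system f(q) = A q with A = [[2, 1], [1, 2]].
# Its eigenpairs are lambda = 3 with r^1 = (1, 1) and lambda = 1 with
# r^2 = (1, -1), so R and its inverse are known in closed form.
R    = [[1.0, 1.0], [1.0, -1.0]]
Rinv = [[0.5, 0.5], [0.5, -0.5]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

# flux differences entering (ev); the numbers are made up for the illustration
df_dw = [0.2, 0.1]     # f_i^{k,n+1} - f_{i+1}^n
df_up = [0.3, -0.2]    # f_{i-1}^{n+1} - f_i^n
alpha = matvec(Rinv, df_dw)
beta  = matvec(Rinv, df_up)

# flux correction of (sisys2) with fully upwinded weights w^p = 1 and l^p = 1
corr = [0.0, 0.0]
for p in range(2):
    c = (1.0 - 1.0) * alpha[p] + 1.0 * beta[p]
    corr[0] += c * R[0][p]
    corr[1] += c * R[1][p]
```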
\section{Numerical experiments}
\label{sec5}
In what follows, we illustrate the numerical resolution achieved by the proposed semi-implicit high-resolution scheme for several standard test problems.
When computing examples for the Burgers equation with $f(u)=u^2/2$, we use the approach of \cite{lozano_implicit_2021}, where the splitting (\ref{split}) is obtained by
\begin{equation}
\label{bsplit}\nonumber
f^+(u):=\frac{1}{2}\left( f(u)+\left|u\right|\frac{u}{2}\right) \,, \quad
f^-(u):=\frac{1}{2}\left( f(u)-\left|u\right|\frac{u}{2}\right) \,.
\end{equation}
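A quick numerical check of this splitting (our own sketch): the two parts sum to $f$, $f^+$ is nondecreasing and $f^-$ is nonincreasing on sampled states.

```python
def f(u):       return 0.5 * u * u
def f_plus(u):  return 0.5 * (f(u) + abs(u) * u / 2.0)
def f_minus(u): return 0.5 * (f(u) - abs(u) * u / 2.0)

us = [i / 10.0 for i in range(-30, 31)]
for u in us:
    assert abs(f_plus(u) + f_minus(u) - f(u)) < 1e-14   # consistency
for a, b in zip(us, us[1:]):
    assert f_plus(b) >= f_plus(a) - 1e-14               # f^+ nondecreasing
    assert f_minus(b) <= f_minus(a) + 1e-14             # f^- nonincreasing
```

In fact, $f^+(u)=u^2/2$ for $u\ge 0$ and vanishes for $u<0$, and symmetrically for $f^-$.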
\subsection{Linear advection}
\label{ex03}
To illustrate the TVD property, we solve the test example \cite{balsara2000monotonicity,borges_improved_2008} with non-smooth solutions for advection with constant unit speed. The initial condition consists of four different segments: a Gaussian, a triangle, a square wave and a semi-ellipse; see \cite{balsara2000monotonicity,borges_improved_2008} for a complete definition. The problem is solved with an integer Courant number, and the numerical solutions are shifted backward after each time step to return to the initial position. To compute the predicted value $u_i^{n+1,0}$, the scheme with $\omega_i^0=0$ is used and only one correction step is computed.
For a visual comparison, see Figure \ref{fig:lin}, where a clear improvement with respect to the first order scheme can be seen. Moreover, no over- or undershoots with a magnitude larger than rounding errors are observed.
\begin{figure}
\centering
\includegraphics[width=6.0cm]{lin_N=500.pdf}
\includegraphics[width=6.0cm]{lin_N=1000.pdf}
\caption{The comparison of the exact (orange) and numerical solutions obtained with the first order method (green) and the high-resolution method (blue) for the example in Section \ref{ex03}. The left picture is obtained for $I=500$ and the right one for $I=1000$ after $125$ and $250$ time steps, respectively. The Courant number is always $4$.}
\label{fig:lin}
\end{figure}
\subsection{The smooth solution of Burgers equation}
\label{ex01}
\begin{figure}
\centering
\includegraphics[width=6.0cm]{sin-I=80-w=1-C=4.5-T=1.pdf}
\includegraphics[width=6.0cm]{sin-I=160-w=1-C=4.5-T=1.pdf}
\caption{The comparison of the exact (orange) and numerical solutions obtained with the first order method (green) and the second order method (blue) with $\omega = 1$ for the example in Section \ref{ex01}. The left picture is obtained for $I=80$ and the right one for $I=160$ after $20$ and $40$ time steps, respectively. The maximal Courant number is always $4.5$.}
\label{fig:nsin}
\end{figure}
In this example, we test the method for the fixed choice of $\omega \equiv 1$ in the case of a smooth solution of Burgers equation. Namely, we define
\begin{equation}
\label{sininit}\nonumber
f(u) = \frac{u^2}{2} \,, \quad u(x,0) = 1+\frac{1}{8} \sin(2\pi x) \,, \,\, x \in [0,1] \,,
\end{equation}
and we solve the equation for $t \in [0,1]$. The exact solution is computed numerically using the method of characteristics by solving the algebraic equations for $u=u(x_i,t^n)$
$$
u = 1 + \frac{1}{8} \sin (2\pi (x_i-u t^n)) \,.
$$
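Since the right-hand side is a contraction in $u$ for $t\le 1$ (its derivative is bounded by $2\pi t/8 < 1$), this algebraic equation can be solved, e.g., by a simple fixed-point iteration; the sketch below is our own.

```python
import math

def exact_u(x, t, tol=1e-14, maxit=200):
    """Fixed-point iteration for u = 1 + sin(2*pi*(x - u*t))/8."""
    u = 1.0
    for _ in range(maxit):
        un = 1.0 + math.sin(2.0 * math.pi * (x - u * t)) / 8.0
        if abs(un - u) < tol:
            return un
        u = un
    return u
```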
In Figure \ref{fig:nsin}, the exact solution and the numerical solutions obtained with the first and the second order method are compared at the final time for two grids.
The global $l_1$ discrete error in time and space
\begin{equation}
\label{l1}
E_I^N := h \tau \sum \limits_{i=0}^I \sum \limits_{n=1}^N |u_i^n - u(x_i,t^n)|
\end{equation}
equals $E_{80}^{20}=9.09\cdot 10^{-4}$ on the coarse grid, and the EOC (the Experimental Order of Convergence) equals $2.08$ for $I=160$ and $2.17$ for $I=320$.
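For reference, the EOC between two grids refined by a factor of two is computed as $\log(E_{2h}/E_{h})/\log 2$; a minimal helper (our own) with a synthetic second order example:

```python
import math

def eoc(err_coarse, err_fine, refinement=2.0):
    """Experimental order of convergence from two errors on grids
    refined by the factor `refinement`."""
    return math.log(err_coarse / err_fine) / math.log(refinement)
```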
\subsection{Slowly moving shock of Burgers equation}
\label{ex02}
\begin{figure}
\centering
\includegraphics[width=6.0cm]{slow-I=20-T=1-N=20.pdf}
\includegraphics[width=6.0cm]{slow-I=40-T=1-N=40.pdf}
\caption{The comparison of the exact (orange) and numerical solutions obtained with the first order method (green) and the high-resolution method (blue) for the example in Section \ref{ex02}. The left picture is obtained for $I=20$ and the right one for $I=40$ after $40$ and $80$ time steps, respectively. The maximal Courant number is always $10$.}
\label{fig:slow}
\end{figure}
Inspired by \cite{lozano_implicit_2021}, we present numerical solutions of a Riemann problem with a slowly moving shock. The initial discontinuity of the piecewise constant function is placed at $x=-0.5$, with the left value $u_L=20$ and the right value $u_R=-18$. Consequently, the shock speed equals $1$. We present the comparison of the first order and the high-resolution scheme at $t=1$ in Figure \ref{fig:slow} for two rather coarse meshes with $I=20$ and $I=40$, with the time step $\tau=h/20$ corresponding to the maximal Courant number equal to $10$. One can observe a significantly improved approximation of the shock speed for the numerical solutions obtained with the high-resolution scheme when compared with the first order scheme.
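The stated shock speed follows from the Rankine-Hugoniot condition for the Burgers flux, $s=(f(u_L)-f(u_R))/(u_L-u_R)=(u_L+u_R)/2$; a one-line check (our own):

```python
def burgers_shock_speed(uL, uR):
    """Rankine-Hugoniot speed for f(u) = u^2/2: s = (uL + uR) / 2."""
    return 0.5 * (uL + uR)
```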
\subsection{Burgers equation with interacting shock and rarefaction}
\label{excomplex}
\begin{figure}
\centering
\includegraphics[width=6.0cm]{complex40.pdf}
\includegraphics[width=6.0cm]{complex80.pdf}\vspace{.3cm}
\includegraphics[width=6.0cm]{complex120.pdf}
\includegraphics[width=6.0cm]{complex160.pdf}
\caption{The comparison of exact solutions (orange) with numerical solutions obtained with the first order method (green) and the high-resolution method (blue) for the example in Section \ref{excomplex}. The first row presents the results at $t=0.5$ (left) and $t=1$ (right) and the second row at $t=1.5$ and $t=2$. The number of mesh points is $I=640$, the maximal Courant number is $4$ and the numerical solutions are obtained with only one corrector step, with the predictor using the first order scheme.}
\label{fig:complex}
\end{figure}
\begin{table}[ht]
\begin{center}
\begin{tabular}{ c c l l l l }
\hline
$I$ & $N$ & $E_I^N$ & EOC & $E_I^N$ & EOC \\
\hline
160 & 40 & 0.0102 & - & 0.0374 & - \\
320 & 80 & 0.00564 & 0.85 & 0.0235 & 0.67 \\
640 & 160 & 0.00314 & 0.84 & 0.0144 & 0.71 \\
1280 & 320 & 0.00175 & 0.84 & 0.00870 & 0.73 \\
\hline
\end{tabular}
\end{center}
\caption{Numerical errors with the Experimental Order of Convergence (EOC) for the example in Section \ref{excomplex}. The third and fourth columns are for the high-resolution method, the fifth and sixth ones for the first order method.}
\label{tab}
\end{table}
The last example of Burgers equation is taken from \cite{lozano_implicit_2021} and contains typical features of solutions for Riemann problems. The initial condition is given by
\begin{equation}
\label{initcomplex}
\nonumber
u(x,0) = \left \{
\begin{array}{lr}
1 & 0.3 < x < 0.6\\
-0.2 & \hbox{otherwise}
\end{array}
\right .
\end{equation}
and the exact solution can be found in \cite{lozano_implicit_2021}. At time $t=1$, the end points of a rarefaction and a shock wave merge, and the solution evolves further with a triangular profile. In Figure \ref{fig:complex}, one can see that the first order scheme approximates the exact solution at $t=1$ with a visibly larger error than the high-resolution scheme. This is probably the reason why the position of the shock for $t>1$ is significantly better approximated with the high-resolution method. Both methods converge to the exact solution with respect to the error defined in (\ref{l1}), see Table \ref{tab}.
\subsection{Linear hyperbolic system}
\label{exls}
\begin{figure}
\centering
\includegraphics[width=6.0cm]{obr800x00.pdf}\vspace{.3cm}
\includegraphics[width=6.0cm]{obr400x15.pdf}
\includegraphics[width=6.0cm]{obr400x40.pdf}\vspace{.3cm}
\includegraphics[width=6.0cm]{obr800x15.pdf}
\includegraphics[width=6.0cm]{obr800x40.pdf}
\caption{The comparison of exact solutions $q_1$ (green) and $q_2$ (red) with numerical solutions (blue and orange, respectively) obtained with the high-resolution method for the example in Section \ref{exls}. The first row shows the initial condition, the second row is for $I=400$ and $t=0.15$ (the first column) and $t=0.4$ (the second column), and the third row is for $I=800$ and the analogous times. The constant Courant number is $10$ and the numerical solutions are obtained with only one corrector step.}
\label{fig:linsys}
\end{figure}
To test the method for systems of conservation laws, we begin with a simple linear system having a constant matrix,
\begin{equation}
\label{exlins}\nonumber
{\bf f}={\bf f}({\bf q})=\textbf{f}(q_1,q_2) = A \cdot {\bf q} \,, \quad
A = \left (\begin{array}{cc}
1.1 & -0.9 \\
-0.9 & 1.1
\end{array}
\right) \,.
\end{equation}
The matrix has positive eigenvalues $2$ and $0.2$. The initial condition consists of rectangular profiles, see the first row in Figure \ref{fig:linsys}.
The example is computed with Courant number $10$, so only the slowly moving waves can be expected to be well resolved by the numerical solutions. The predicted values are computed with the second order scheme using $\omega=0$, and afterwards only one corrector step is used. One can clearly see that the numerical solutions do not contain visible oscillations and that the contact discontinuities are well resolved for the slowly moving waves and smeared for the fast moving ones.
\subsection{Shallow water equation}
\label{exsw1}
\begin{figure}
\centering
\includegraphics[width=6.0cm]{obrh400x1.pdf}
\includegraphics[width=6.0cm]{obrh400x2.pdf}\vspace{.3cm}
\includegraphics[width=6.0cm]{obrh800x1.pdf}
\includegraphics[width=6.0cm]{obrh800x2.pdf} \vspace{.5cm}
\includegraphics[width=6.0cm]{obru400x1.pdf}
\includegraphics[width=6.0cm]{obru400x2.pdf}\vspace{.3cm}
\includegraphics[width=6.0cm]{obru800x1.pdf}
\includegraphics[width=6.0cm]{obru800x2.pdf}
\caption{The comparison of numerical solutions obtained with the first order method (orange) and the high-resolution method (blue) for the example in Section \ref{exsw1}. The first column is for $t=1$, the second one for $t=2$. The first row compares $h$ for $I=400$, the second one $h$ for $I=800$, the third one $h u$ for $I=400$ and the fourth one $h u$ for $I=800$. The maximal Courant number is always $6.21$.}
\label{fig:shalow1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=6.0cm]{comp1.pdf}
\includegraphics[width=6.0cm]{comp2.pdf}\vspace{.6cm}
\includegraphics[width=6.0cm]{comp3.pdf}
\includegraphics[width=6.0cm]{comp4.pdf}
\caption{The comparison of numerical solutions obtained with the first order method (orange) and the high-resolution method (blue) at $t=1$ (left) and $t=2$ (right). The first order method was computed with $I=800$ and the high-resolution one with $I=200$. The maximal Courant number is always $6.21$.}
\label{fig:shalow2}
\end{figure}
Finally, we test the method on the simple example \cite{leveque_finite_2004} of the nonlinear shallow water system consisting of two equations
\begin{eqnarray}
\label{exsweh}\nonumber
\partial_t h + \partial_x (h u) = 0 \,, \quad h(x,0) = 1 + 0.4 \exp\left(-5 (x-5)^2\right) \,,\\[1ex]
\label{exsweh2}\nonumber
\partial_t (h u) + \partial_x (h u^2 + 0.5 h^2) = 0 \,, \quad u(x,0) = 0 \,,
\end{eqnarray}
for $x \in [0,10]$ and $t \in [0,2]$. The system is discretized in the conservative variables $(h,h u)$ using the Lax-Friedrichs splitting (\ref{lf}) with $\alpha=1.3$. A comparison of the results at $t=1$ and $t=2$ for the first order method \cite{lozano_implicit_2021} and the high-resolution method is given in Figure \ref{fig:shalow1} for two fine grids. The maximal Courant number is around $6.21$, the predicted values are computed with the second order semi-implicit method using $\omega=0$, and only one corrector step is used. In Figure \ref{fig:shalow1}, one can see a significantly improved resolution of the shock and rarefaction waves when comparing the high-resolution method with the first order accurate one. The results closely resemble those presented in \cite{leveque_finite_2004}.
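The splitting (\ref{lf}) is not reproduced in this section; assuming the usual Lax-Friedrichs form $f^\pm({\bf q})=\frac12({\bf f}({\bf q})\pm\alpha{\bf q})$, a small sketch (our own) checks its consistency and that $\alpha=1.3$ dominates the characteristic speeds $u\pm\sqrt{h}$ for states near the initial data:

```python
import math

def flux(q):
    """Shallow water flux for conservative variables q = (h, hu), with g = 1."""
    h, hu = q
    return [hu, hu * hu / h + 0.5 * h * h]

ALPHA = 1.3   # splitting parameter used in the text

def flux_plus(q):
    return [0.5 * (fc + ALPHA * qc) for fc, qc in zip(flux(q), q)]

def flux_minus(q):
    return [0.5 * (fc - ALPHA * qc) for fc, qc in zip(flux(q), q)]

# sanity checks on sampled states (h, u) near the initial data
for h, u in ((0.8, 0.0), (1.0, 0.1), (1.4, -0.1)):
    q = [h, h * u]
    fp, fm, f0 = flux_plus(q), flux_minus(q), flux(q)
    assert all(abs(fp[k] + fm[k] - f0[k]) < 1e-14 for k in range(2))
    assert abs(u) + math.sqrt(h) <= ALPHA   # alpha dominates |u| + sqrt(h)
```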
To make the difference in resolution even clearer, we compare in Figure \ref{fig:shalow2} the results obtained on a coarse grid with the high-resolution method and the results obtained by the first order accurate method on a twice uniformly refined grid, which still do not reach the quality of the high-resolution method.
\section{Conclusion}
\label{sec-conc}
We have presented a semi-implicit conservative finite difference method for hyperbolic problems in the one-dimensional case. The method shares the advantageous properties of the first order accurate implicit method. Namely, the method is unconditionally stable for the linear advection equation, and non-oscillatory numerical solutions are obtained explicitly after two sweeps of the fast sweeping method. In the case of nonlinear scalar hyperbolic PDEs, one has to solve a single nonlinear algebraic equation per grid point, with the nonlinearity due only to the nonlinear flux function. All these properties are preserved in the proposed high-resolution (TVD) method, which is second order accurate if the solution is smooth. Although the TVD limiters depend on the single unknown per grid point, the nonlinearity can typically be resolved with one predictor and one corrector step for each algebraic equation. The method is applied successfully to a linear system of hyperbolic PDEs and to the shallow water equations by expressing the second order correction terms in the scheme using characteristic variables and speeds.
The proposed semi-implicit methods can be used and extended for problems where, up to now, fully implicit or explicit-implicit schemes have appeared useful. Apart from accuracy requirements, the method imposes no formal restrictions on the choice of time steps $\Delta t$ for stability reasons. A possible restriction on $\Delta t$ due to slow or no convergence of the nonlinear algebraic solver is shared with the first order accurate implicit method. We plan to extend the method analogously to \cite{titarev2005weno,puppo_quinpi_2022} with high-order WENO type spatial reconstruction and a Lax-Wendroff type of time discretization.
\section{Appendix}
\label{sec-app}
In what follows, we motivate the form of the fluxes (\ref{F2a}) - (\ref{F2b}) for the semi-implicit method.
Let $u$ be a sufficiently smooth solution of (\ref{cl}) with a smooth flux function $f \equiv f^+$. The first order accurate scheme takes the form
\begin{equation}
\label{foscheme}
u_i^{n+1} - u_i^n + \frac{\Delta t}{\Delta x} \left(
f_i^{n+1} - f_i^n\right) = 0 \,.
\end{equation}
Using finite Taylor series, we can express the dominant error term of the scheme (\ref{foscheme}) as
\begin{equation}
\label{E}
\frac{\Delta t \Delta x}{2} \partial_{xx} f(u(x_i,t^{n+1})) - \frac{\Delta t^2}{2} \partial_{tx} f(u(x_i,t^{n+1})) \,.
\end{equation}
Note that we keep the mixed derivative in (\ref{E}), so we follow the Lax-Wendroff procedure of replacing every time derivative using the PDE $\partial_t u = - \partial_x f$ only partially \cite{duraisamy_implicit_2007,carrillo2019compact,frolkovic2018semi,frolkovic_semi-implicit_2021}.
Now applying the following approximations in (\ref{E})
\begin{equation}
\label{Eappr}
\Delta x \partial_x f(u(x_i,t^{n+1})) - \Delta t \partial_{t} f(u(x_i,t^{n+1})) \approx f_i^{n+1} - f_{i-1}^{n+1} - (f_i^{n+1} - f_i^n)
\end{equation}
and the parametric approximation (analogously for $(x_{i-1},t^{n+1})$)
\begin{equation}
\label{par}
\Delta x \partial_x f(u(x_i,t^n)) \approx (1-\omega) (f_{i+1}^n - f_i^n) + \omega (f_i^n - f_{i-1}^n) \,,
\end{equation}
the numerical fluxes (\ref{F2a}) are recovered for $l_i \equiv 1$.
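A quick numerical check (our own sketch) of the parametric approximation (\ref{par}): divided by $\Delta x$, it approximates $\partial_x f$ with first order accuracy for general $\omega$ and, as expected, with second order accuracy for the central choice $\omega=1/2$.

```python
import math

def weighted_diff(fn, x, dx, w):
    """(1 - w)*(f(x+dx) - f(x)) + w*(f(x) - f(x-dx)), cf. Eq. (par)."""
    return (1.0 - w) * (fn(x + dx) - fn(x)) + w * (fn(x) - fn(x - dx))

f, dfdx, x0 = math.sin, math.cos, 0.7
for w, expected_order in ((0.0, 1), (0.5, 2)):
    e1 = abs(weighted_diff(f, x0, 1e-2, w) / 1e-2 - dfdx(x0))
    e2 = abs(weighted_diff(f, x0, 5e-3, w) / 5e-3 - dfdx(x0))
    rate = math.log(e1 / e2) / math.log(2.0)   # observed convergence rate
    assert abs(rate - expected_order) < 0.2
```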
\bibliographystyle{siamplain}
\section{Introduction} \label{sec:intro}
\begin{figure}
\centering
\includegraphics[keepaspectratio, scale=0.5]{figures/cc-comparison-L1-abort-rate.pdf}
\caption{Abort rate of long-running update transaction (L1) in BoMB workload.}
\label{fig:cc-comparison-L1-abort-rate}
\end{figure}
Transaction processing has been used for applications and workloads in various industries.
Concurrency control is the core of transaction processing. Various concurrency control protocols have been proposed to take advantage of recent architectural evolution such as many-core processors and large memory capacity~\cite{Tu13,Wang16,Yu16,Wang17,Lim17}, and they have achieved high performance and scalability.
Existing concurrency control protocols do not assume a certain type of heterogeneous workload in which a long-running update transaction and multiple types of short transactions are mixed.
Such heterogeneous workloads exist, for example, in OLTP systems for manufacturing industries. The system there runs a transaction that builds up a tree structure based on an item master and an item component master, referred to as a Bill of Materials, or BoM, and calculates product costs and requirements~\cite{jde}. This transaction (referred to as the L1 transaction; see Section~\ref{sec:bomb} for details) is a long-running update transaction because it must read a large number of records and write the results. In addition to L1, the system runs a short transaction (called the S1 transaction) that updates the raw material cost referred to in the calculation of the product cost, and a short transaction (called the S2 transaction) that uses the product cost in other applications. Handling these concurrent transactions that interfere with each other remains challenging.
Because existing concurrency control protocols are not designed for such heterogeneous workloads, it is common for companies to process long transactions at night when online short transactions do not occur~\cite{Bog14}. However, this workaround is sometimes infeasible. The freshness and accuracy of the product costs, which are kept by the long transactions, are essential since the product costs are used as input for optimal production planning in manufacturing resource planning (MRP)~\cite{Wight81}, especially budgeting and demand planning. Generally, an accuracy of 98\% in BoM composition and 95\% in inventory is required to obtain accurate results in MRP because input errors accumulate from pile-up calculations~\cite{Sheldon07,jde}. Meanwhile, the cost of raw materials and the components of items, which are the basis of product costing, can frequently change due to supply chain disruptions caused by disasters and infectious diseases\footnote{Supply chain disruptions and price fluctuations have occurred due to the extensive damage caused by the Great East Japan Earthquake and the resulting nuclear power plant accident, as well as lockdowns to prevent the spread of COVID-19~\cite{Matsumoto20,Mahajan21,scm-disruption}.}.
Therefore, there is a need for on-demand product costing not at night but during the day when online short transactions occur~\cite{Commsoft,Makersite}.
What happens if modern concurrency control protocols try to handle such heterogeneous workloads? The abort rate of the L1 transaction is shown in Figure~\ref{fig:cc-comparison-L1-abort-rate} when the L1, S1, and S2 transactions run concurrently using state-of-the-art protocols. The horizontal axis is the number of products to be costed in one transaction, which corresponds to the length of the L1 transaction. As shown in the figure, none of the existing protocols can commit the long transaction, or if they can, the success rate is less than one percent. OCC protocols such as Silo~\cite{Tu13} and TicToc~\cite{Yu16} abort the L1 transaction because the S1 transaction updates the cost of raw materials before L1 is completed. MOCC~\cite{Wang16}, which combines OCC and the advantages of a lock-based scheme, rarely avoids aborts even with pessimistic behavior.
ERMIA~\cite{Wang17} and Cicada~\cite{Lim17} cannot commit any L1 transactions because they interfere with concurrent S2 transactions. The details are described in Section~\ref{sec:soa-protocols}.
Existing lock-based~\cite{Guo21} and deterministic~\cite{Thomson12,Fan19} approaches can handle this workload under a certain condition, namely that BoMs do not change before and after the L1 transaction, as shown in Figure~\ref{fig:cc-comparison-L1}. We call such a fixed BoM \textit{a static BoM}. Even though these approaches can commit the L1, they suffer from performance degradation of the S1, which must wait for the L1. More importantly, our target BoMs must often be updated by another transaction that changes the composition of a product to respond dynamically to supply chain disruptions. We call such a BoM \textit{a dynamic BoM}. Deterministic approaches can no longer commit the L1 when handling a dynamic BoM. Even if they use reconnaissance queries~\cite{Thomson12} to know BoM trees in advance, they cannot guarantee that the trees are not changed without additional application-level assistance, e.g., stopping transactions that modify BoMs. Such an application-level workaround is not the direction of our goal.
In this paper, we first propose Oze, a concurrency control protocol that can handle heterogeneous workloads that include long and short transactions. Oze generates serializable schedules using a multi-version serialization graph (MVSG)~\cite{Bernstein83}. MVSGT~\cite{Hadzilacos85} and MVSGA~\cite{Hadzilacos88} are conventional graph-based approaches that generate serializable schedules in large scheduling spaces: multi-version conflict serializability (MCSR) and multi-version view serializability (MVSR). However, these protocols are rather theoretical, and, to the best of our knowledge, no implementations are available. In addition, they assume centralized graph management, which can neither benefit from many cores nor achieve high scalability. We present a decentralized implementation of Oze that takes advantage of many-core environments by using a logically single graph maintained on each record.
Second, we present a new benchmark, BoMB (Bill of Materials Benchmark), which reproduces the heterogeneous workload described above. TPC-C~\cite{tpcc} and TPC-E~\cite{tpce} are widely used as de facto standard benchmarks for OLTP systems; however, neither includes long-running update transactions. In contrast, BoMB's target application is a cost management system for manufacturing, which consists of six transactions: calculating product costs (L1), updating raw material costs that are used by L1 (S1), posting journal vouchers based on the calculated product cost (S2), changing a product (S3), changing a raw material of a product (S4), and changing a product quantity (S5). The transactions of BoMB are designed and implemented on CCBench~\cite{Tanabe20}, a benchmarking platform for concurrency control protocols for in-memory database systems, to allow fair comparison and evaluation of various protocols.
Third, we evaluate Oze with modern concurrency control protocols using BoMB. Experimental results show that Oze keeps the abort rate of the long-running update transactions at zero while reaching up to 1.7 Mtpm for short transactions with near-linear scalability, whereas state-of-the-art protocols cannot commit the long transaction or experience performance degradation in the throughput of short transactions.
The rest of this paper is organized as follows. First, in Section~\ref{sec:bomb}, we define the workload and its database in BoMB. Next, we describe the design of the Oze protocol in Section~\ref{sec:design} and the implementation of Oze for multi-core systems in Section~\ref{sec:implementation}. In Section~\ref{sec:eval}, we evaluate several protocols using BoMB. In Section~\ref{sec:related}, we describe related work. Finally, we conclude this paper in Section~\ref{sec:conclusion}.
\section{BoM Benchmark} \label{sec:bomb}
This section describes an overview, database schema, and workload of BoM Benchmark (BoMB). We also show how and why existing protocols cannot effectively handle the BoMB workload.
\subsection{Background and Overview}
\begin{figure}
\centering
\includegraphics[keepaspectratio, scale=0.5]{figures/workload.pdf}
\caption{System and workload overview.}
\label{fig:workload}
\end{figure}
We use a food manufacturing company that produces bread nationwide as our reference when designing the BoMB workload. In the target food manufacturing industry, the supply chain can be disrupted for various reasons, such as climate change, and the prices of raw materials often change as well. Therefore, it is necessary to accurately gauge manufacturing costs and schedule an optimal production plan and supply chain. To reflect such needs, the BoMB workload assumes a system capable of on-demand inventory control, cost control, and production planning. Figure~\ref{fig:workload} shows an overview of the target system and workload. We assume that the system consists of an MRP system that manages products and the resources needed for manufacturing and a perpetual inventory management system that continuously manages the inventory. The MRP consists of cost management, budgeting, demand planning, and supply chain management (SCM) modules, and each module accesses one database. The cost management module generates the most complicated workload among these modules due to its long-running update transactions. Thus, for BoMB, we focus on emulating the workload of the cost management module.
The BoMB workload has six transactions that are directly related to product costing and its input/output: L1 and S1--S5 transactions.
L and S stand for "long" and "short," respectively. All of these transactions commonly occur in manufacturing industries~\cite{Younus10,OpenBOM} and apply widely beyond bakeries.
\subsection{Tables and Parameters} \label{sec:table-and-params}
\begin{table}[t]
\caption{BoMB parameters.}
\label{tab:params}
\begin{tabular}{lcl}
\toprule
Parameters & Default & Description \\
\midrule
factories & 8 & Number of factories \\
product-types & 72,000 & Number of product types \\
material-types & 198,000 & Number of material types \\
\begin{tabular}{@{}l@{}}raw-material-\\types\end{tabular} & 75,000 & \begin{tabular}{@{}l@{}}Number of\\raw material types\end{tabular} \\
\begin{tabular}{@{}l@{}}material-trees-\\per-product\end{tabular} & 5 & \begin{tabular}{@{}l@{}}Number of material trees\\per product\end{tabular} \\
material-tree-size & 10 & \begin{tabular}{@{}l@{}}Number of materials\\in a material tree\end{tabular} \\
\begin{tabular}{@{}l@{}}raw-materials-\\per-leaf\end{tabular} & 3 & \begin{tabular}{@{}l@{}}Number of raw materials\\in a leaf material\end{tabular} \\
target-products & 100 & \begin{tabular}{@{}l@{}}Number of products\\manufactured in a factory\end{tabular} \\
target-materials & 1 & \begin{tabular}{@{}l@{}}Number of raw materials\\for update\end{tabular} \\
\bottomrule
\end{tabular}
\end{table}
BoMB uses the seven tables shown below. The underlined attribute is the primary key. Note that \texttt{INT16}, \texttt{INT32}, and \texttt{INT64} are integers of 16, 32, and 64 bits, respectively. Adjustable parameters for BoMB are shown in Table~\ref{tab:params}. The default values are set on the basis of the actual values of the referenced bread manufacturer; they would change depending on the industry.
\texttt{\textbf{factory}(\underline{id} INT32, name VARCHAR)}: The modeled company operates multiple factories, and the \texttt{factory} table manages the list of those factories. The number of factories is set by the parameter \texttt{factories}.
\texttt{\textbf{item}(\underline{id} INT32, name VARCHAR, type INT16)}: The \texttt{item} table manages the name of items with their type: product, material, or raw material. The \texttt{item} table stores the total records of products (\texttt{product-types}), materials (\texttt{\seqsplit{material-types}}), and raw materials (\texttt{\seqsplit{raw-material-types}}).
\texttt{\textbf{product}(\underline{factory\_id} INT32, \underline{item\_id} INT32, quantity DOUBLE)}:
The \texttt{product} table manages the manufactured products and their quantity in each factory. When performing cost accounting, it is used to obtain the products currently in production at the factory.
\texttt{\textbf{bom}(\underline{parent\_item\_id} INT32, \underline{child\_item\_id} INT32, quan\-ti\-ty DOUBLE)}: The \texttt{bom} table manages a list of (intermediate and raw) materials and the quantities of each needed to manufacture a product. Specifically, it stores the parent item ID, child item ID, and quantity in each record and hierarchically represents BoM trees. Details of the structure of BoM trees and product costing using this table are described in Section~\ref{sec:bom-tree}.
\texttt{\textbf{material-cost}(\underline{factory\_id} INT32, \underline{item\_id} INT32,\\stock\_quantity DOUBLE, stock\_amount DOUBLE)}: The \texttt{\seqsplit{material-cost}} table manages the cost of raw materials, the stock quantity, and the amount of the raw materials for each factory and item.
\texttt{\textbf{result-cost}(\underline{factory\_id} INT32, \underline{item\_id} INT32, cost DOUBLE)}: The \texttt{result-cost} table contains the latest cost calculation results for each product in each factory.
\texttt{\textbf{journal-voucher}(\underline{voucher\_id} INT64, date DATE, debit INT32, credit INT32, amount DOUBLE, description VARCHAR)}: Because the cost calculation result is used for each module, e.g., budgeting, demand planning, and SCM, it is created as a journal voucher as needed and stored in this table.
\subsection{BoM Tree} \label{sec:bom-tree}
\begin{figure}
\centering
\includegraphics[keepaspectratio, scale=0.5]{figures/bom.pdf}
\caption{Example of BoM tree.}
\label{fig:bom-tree}
\end{figure}
The \texttt{bom} table lists the intermediate and raw materials required to make a particular product, together with their quantities. It can be logically expressed in a tree structure. An example of a BoM tree is shown in Figure~\ref{fig:bom-tree}. The product consists of several major materials (hereafter, "root materials" for convenience). For example, in the production of sandwiches, the major materials correspond to bread and the ingredients inside (e.g., tuna salad). Each root material is made from multiple materials. In the example of bread, the material is dough, and the raw materials are flour, yeast, etc.
To support BoMs in other manufacturing industries~\cite{Younus10,OpenBOM}, such as the aircraft and robotics industries, we introduce parameters for BoM trees: the number of root materials, the number of materials that make up each root material tree (\texttt{\seqsplit{material-tree-size}}), and the number of raw materials in each leaf material (\texttt{\seqsplit{raw-materials-per-leaf}}).
When starting the benchmark, the BoM trees are initialized as follows. (1) Select a set of materials with size \texttt{\seqsplit{material-tree-size}}. (2) Select the root material from them. (3) Add the remaining materials as child nodes to random tree nodes. (4) Add \texttt{\seqsplit{raw-materials-per-leaf}} raw materials to each leaf of the tree. Raw materials are randomly selected from the \texttt{\seqsplit{raw-material-types}} raw materials. (5) After generating trees until the \texttt{\seqsplit{material-types}} materials are exhausted, assign \texttt{\seqsplit{material-trees-per-product}} trees to each product. Although the tree structure is randomly configured by default for versatility, skew may be introduced depending on the target industry.
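For concreteness, steps (1)--(4) of this initialization for a single tree can be sketched as follows (an illustrative Python fragment, not part of the CCBench implementation; the names and data shapes are our own):

```python
import random

def build_material_tree(materials, raw_materials, raws_per_leaf, rng):
    """Steps (1)-(4): grow one random tree over a fixed set of materials,
    then hang raw materials under every leaf."""
    root, *rest = materials                   # step (2): pick the root material
    children = {m: [] for m in materials}
    nodes = [root]
    for m in rest:                            # step (3): attach to a random existing node
        parent = rng.choice(nodes)
        children[parent].append(m)
        nodes.append(m)
    for m in materials:                       # step (4): raw materials under each leaf
        if not children[m]:
            children[m] = rng.sample(raw_materials, raws_per_leaf)
    return root, children

rng = random.Random(42)
materials = list(range(100, 110))             # material-tree-size = 10
raws = list(range(1000, 1030))
root, tree = build_material_tree(materials, raws, 3, rng)
```

Step (5) would then repeat this over disjoint material sets and assign the resulting trees to products.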
The product cost is calculated using the BoM tree as follows. (1) Determine the products to be costed. (2) Refer to the \texttt{bom} table, recursively acquire the materials that comprise the product, and construct a BoM tree. Each tree node holds the item ID, the list of child item IDs, the unit price, and the required quantity. (3) Obtain the \texttt{stock\_quantity} and \texttt{stock\_amount} for each raw material which is a leaf node of the BoM tree, and set the unit cost calculated from them. (4) Recursively call the \texttt{calculate\_cost()} function shown in Algorithm~\ref{alg:cost} from the root node of the BoM tree.
\begin{algorithm}[t]
\caption{Calculate cost of product}\label{alg:cost}
\Function{calculate\_cost()}{
\If{is\_leaf()}{
\Return{unit\_cost*quantity}
}
subtotal = 0 \\
\For{child \textbf{in} children}{
subtotal += child->calculate\_cost()
}
\Return{subtotal*quantity}
}
\end{algorithm}
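Algorithm~\ref{alg:cost} translates directly into a runnable sketch (Python is used here only for illustration; the node layout and the example numbers are our own assumptions):

```python
class Node:
    """One BoM tree node: item id, required quantity, unit cost (leaves only), children."""
    def __init__(self, item_id, quantity, unit_cost=0.0, children=None):
        self.item_id = item_id
        self.quantity = quantity
        self.unit_cost = unit_cost
        self.children = children or []

    def calculate_cost(self):
        if not self.children:                 # leaf: a raw material
            return self.unit_cost * self.quantity
        subtotal = sum(c.calculate_cost() for c in self.children)
        return subtotal * self.quantity       # intermediate material or root

# hypothetical dough: 2 units, each from 0.5 kg flour (cost 3.0) and 0.01 kg yeast (cost 20.0)
dough = Node(1, 2, children=[Node(2, 0.5, 3.0), Node(3, 0.01, 20.0)])
cost = dough.calculate_cost()   # (0.5*3.0 + 0.01*20.0) * 2 = 3.4
```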
\begin{figure*}
\begin{minipage}{0.33\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.38]{figures/cc-comparison-L1-stale-rate.pdf}
\subcaption{Rate of invalid records in L1 read set.}
\end{minipage}
\begin{minipage}{0.33\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.38]{figures/cc-comparison-L1-ermia-delay-rate.pdf}
\subcaption{Rate of delay in L1 (ERMIA).}
\end{minipage}
\begin{minipage}{0.33\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.38]{figures/cc-comparison-L1-cicada-delay-rate.pdf}
\subcaption{Rate of delay in L1 (Cicada).}
\end{minipage}
\caption{Detailed analysis of aborted transactions.}
\label{fig:cc-analysis}
\end{figure*}
\subsection{Transactions}
\textbf{L1 (update-product-cost): }
The L1 transaction is a long transaction that builds the BoM tree described in Section~\ref{sec:bom-tree} and calculates the product cost. First, it selects one factory at random and obtains the \texttt{item\_id} of all products manufactured at the factory and their \texttt{quantity} by referring to the \texttt{product} table. Then, it builds a BoM tree for a product, calculates the cost, and writes the result to \texttt{cost} of the \texttt{result-cost} table. It repeats these steps for all products; the number of products (\texttt{\seqsplit{target-products}}) specifies how many products are currently manufactured at each factory. This parameter determines the length of the long transaction. When the L1 transaction is executed with the default values in Table~\ref{tab:params}, it reads/scans about 20,000 records in total and writes 100 records.
\textbf{S1 (update-material-cost): }
The S1 transaction is a short transaction that changes the cost of raw materials. First, it selects a factory and a raw material uniformly at random and uses them as keys to read a record from the \texttt{material-cost} table. Then, it adds or subtracts an arbitrary value to/from the current \texttt{stock\_quantity} of the record and writes it back. Depending on the application, updates may occur all at once across multiple factories and raw materials, so the number of raw materials to be updated (\texttt{\seqsplit{target-materials}}) is configurable. By default, an S1 transaction performs a single read-modify-write on one record.
\textbf{S2 (issue-journal-voucher): }
The S2 transaction is a short transaction that creates a journal voucher based on the calculated product cost. It selects a factory uniformly at random and scans the \texttt{\seqsplit{result-cost}} table to obtain the \texttt{cost} of each product in the factory. Then it calculates the \texttt{amount} from the cost and production volume (given as an input) for each product. Finally, it inserts the journal vouchers (new records) into the \texttt{journal-voucher} table with the \texttt{debit} as the product and the \texttt{credit} as the work in process. The number of records inserted is determined by \texttt{\seqsplit{target-products}}.
\textbf{S3 (change-product): }
The S3 transaction is a short transaction that replaces an old product with a newly developed one. It selects a product from a factory uniformly at random and deletes the product. It then determines a unique item ID for the new product and chooses root materials randomly according to the number \texttt{\seqsplit{material-trees-per-product}}. Item IDs can be cached; they can be retrieved in advance and excluded from the transaction's read set. Finally, new records with the chosen item ID are inserted into the \texttt{bom} table.
\textbf{S4 (change-raw-material): }
The S4 transaction is a short transaction that replaces a raw material of a product with a different one due to changes in purchasing conditions (e.g., changing a flour X to X'). It selects a record that consists of a material and a raw material from the \texttt{bom} table and a raw material from the \texttt{item} table uniformly at random. Then it deletes the old record and inserts a new record with the chosen raw material. \texttt{bom} records and item IDs can be cached as in the S3 transaction above.
\textbf{S5 (change-product-quantity): }
The S5 transaction is a short transaction that updates a manufacturing quantity of a product in a factory as a result of demand planning. It selects a factory and a product uniformly at random and then updates the record in the \texttt{product} table with a given value of quantity.
\textbf{Regulation for Execution.}
BoMB can be run with two settings according to the target BoM characteristics: static and dynamic BoM. For a static BoM, BoMB runs the L1, S1, and S2 transactions. For a dynamic BoM, it additionally runs the S3, S4, and S5 transactions that modify the \texttt{product} and \texttt{bom} tables. Note that BoMB requires at least one thread for each transaction type to keep issuing requests so that all (three or six) types of transactions are executed concurrently as a mixed workload. To generate the workload, it may seem desirable to control the request ratio of each transaction type, as predefined in TPC-C. However, if no long transaction can be committed, either all threads end up running long transactions while retrying, or short transactions are stalled to maintain the specified ratio. In that case, the complicated dependencies between long and short transactions no longer occur. Since this is not the workload we intend to model, we instead dedicate at least one thread to generating each type of transaction to ensure that interference occurs.
\textbf{Measurements.}
What we want to measure with BoMB is how reliably long-running update transactions can be committed and how many online short transactions can be committed concurrently. Therefore, BoMB uses the throughput and abort rate of each type of transaction as its measurement items.
\subsection{BoMB with State-of-the-art Concurrency Control Protocols} \label{sec:soa-protocols}
OCC protocols such as Silo~\cite{Tu13} and TicToc~\cite{Yu16} struggle to handle the BoMB workload. They verify that the read records are not updated to confirm whether a transaction can be committed (i.e., read validation). Since an S1 transaction updates the raw material cost with a high probability before the L1 transaction commits, the L1 repeatedly aborts in the validation phase. Figure~\ref{fig:cc-analysis}(a) shows the rate of records that have already been updated (i.e., invalid records) in the read set of L1 at the validation phase\footnote{In the original protocol, the transaction will be aborted when an updated record is found, but we check the entire read set to calculate the invalid record rate.}. The x-axis is the number of products handled by the L1 transaction; operations scale with this number. It is difficult to commit the L1 because the number of invalid records increases as the number of products increases. MOCC combines OCC with a pessimistic scheme using locks and a hotspot counter. However, since not all records are treated pessimistically, invalid records still remain.
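The validation check described in the footnote can be sketched as follows (an illustrative Python fragment; the record keys and TID values are hypothetical, and a real OCC implementation would abort at the first mismatch rather than scan the whole read set):

```python
def invalid_record_rate(read_set, current_tid):
    """Silo-style read validation, modified as in the footnote: scan the whole
    read set and report the fraction of records whose TID changed since read."""
    invalid = sum(1 for rec, observed in read_set if current_tid[rec] != observed)
    return invalid / len(read_set)

# an L1 read two raw-material costs; an S1 then bumped the TID of "yeast"
current_tid = {"flour": 7, "yeast": 4}
read_set = [("flour", 7), ("yeast", 3)]
rate = invalid_record_rate(read_set, current_tid)   # 0.5: the L1 must abort
```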
Timestamp adjustments used in some OCC variants~\cite{Boksenbaum87,Lee93,Kwok96} do not contribute to the completion of L1, with or without priority setting. Although an S1 can update records while protecting those already read by an L1, an L1 scheduled in the order L1 < S1 has no room for timestamp adjustment once another S1 updates a record the L1 is about to read.
MVCC, which is used by ERMIA~\cite{Wang17} and Cicada~\cite{Lim17}, holds multiple versions of records so that a reader can use older versions even if the record is updated by concurrent writers. MVCC achieves high throughput because reads are not hindered even in highly contended workloads, which single-version OCC struggles to handle.
MVCC uses two timestamps: the write timestamp, which indicates when the version became valid, and the read timestamp, which indicates how long the version is valid.
Both ERMIA and Cicada update the read timestamp (known as the high watermark in ERMIA) of a record when an S2 transaction, which reads the record in the \texttt{result-cost} table, is committed. Meanwhile, when an L1 transaction tries to update the same record after the S2 transaction updates the read timestamp of a version of the record, the L1 aborts as a false positive.
Figures~\ref{fig:cc-analysis}(b) and (c) show the extent to which the L1 transaction is actually late. The x-axis is the same as in Figure~\ref{fig:cc-analysis}(a), and the y-axis on the left side is the delay rate, i.e., the ratio of the delayed time to the transaction execution time. Let $t_b$, $t_r$, and $t_e$ be the begin timestamp, the observed minimum read timestamp that caused the abort, and the end timestamp, respectively.
The delay rate is then calculated as $(t_e - t_r) / (t_e - t_b)$ and must be zero for the transaction to commit. The y-axis on the right side is the abort rate. Neither protocol finishes in time because the S2 updates the read timestamp at a relatively early point as the duration of the L1 grows.
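As a worked example of this metric (a minimal sketch; the timestamps are hypothetical):

```python
def delay_rate(t_b, t_r, t_e):
    """(t_e - t_r) / (t_e - t_b): the fraction of the transaction's execution
    that ran past the read timestamp that forced the abort; 0 means in time."""
    return (t_e - t_r) / (t_e - t_b)

# an L1 running from t=0 to t=100 whose critical version's read timestamp
# was advanced by an S2 at t=20: 80% of the execution was already too late
rate = delay_rate(t_b=0, t_r=20, t_e=100)   # 0.8
```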
\section{Oze Design} \label{sec:design}
The basic idea of Oze is to allow fully-precise ordering of transactions with a 3-phase protocol using a multi-version serialization graph. In this section, we describe these designs and the correctness of the protocol. A decentralized implementation of Oze that exploits modern many-core architecture is presented in Section~\ref{sec:decentralized}.
\subsection{Graph-based Precise Ordering} \label{sec:order-based-cc}
Timestamp-based optimistic protocols such as OCC and MVCC provide efficiency, but as a trade-off they sacrifice the scheduling space that could be obtained by managing the precise order with a serialization graph.
Even though MVCC is known to have a large scheduling space, the existing protocols~\cite{Wang17, Lim17} cannot commit the long transactions in BoMB, as shown in Figures~\ref{fig:cc-comparison-L1-abort-rate} and \ref{fig:cc-analysis}. The S2 transaction has a dependency on the record read by the L1 transaction but no dependency on the write record. Nevertheless, the existing protocols have to abort the L1 with a false positive because they simplify the dependency using the magnitude of the timestamps. In contrast, Oze tracks such dependencies without omission by using the MVSG~\cite{Bernstein83} and handles all transactions of BoMB concurrently in a large scheduling space.
MVSG is a directed acyclic graph that has edges for the reads-from relationships between transactions and for the version orders. That is, when there is a transaction $T_i$ that reads $x$ written by transaction $T_j$ ($w_j(x_j) r_i(x_j)$), an edge is added from $T_j$ to $T_i$. In addition, if there is another transaction $T_k$ that writes $x$ and the version order is $x_j \ll x_k$, then an anti-dependency edge is added from $T_i$ to $T_k$ so that $T_i$ will read the latest version in the equivalent monoversion schedule; i.e., it does not break the view of $T_i$. If the version order is $x_k \ll x_j$, then the edge is added from $T_k$ to $T_j$.
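The two edge rules can be written down mechanically; a minimal Python sketch (the input shapes are our own choice, not Oze's data structures):

```python
def mvsg_edges(reads_from, version_order):
    """Build MVSG edges from reads-from pairs and a per-item version order.
    reads_from: {(item, writer_txn): [reader_txns]}
    version_order: {item: [writer_txns, ordered old -> new]}"""
    edges = set()
    for (item, j), readers in reads_from.items():
        order = version_order[item]
        for i in readers:
            edges.add((j, i))                     # reads-from edge T_j -> T_i
            for k in order:
                if k == j:
                    continue
                if order.index(j) < order.index(k):
                    edges.add((i, k))             # x_j << x_k: anti-dependency T_i -> T_k
                else:
                    edges.add((k, j))             # x_k << x_j: edge T_k -> T_j
    return edges

# T_1 writes x_1, T_3 reads it, T_4 later writes x_4 with x_1 << x_4
edges = mvsg_edges({("x", 1): [3]}, {"x": [1, 4]})
```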
Finding all possible version orders that make a multi-version serializable schedule is an NP-complete problem~\cite{Bernstein83, Papadimitriou84}. Oze simplifies the problem by ordering a transaction before or after the version read by the concurrent transactions and accepting false positives to obtain a solution in such a vast search space efficiently. Oze first tries postposing the version so that the newer version in chronological order becomes the newer version in the serialization order. Here, postposing of $T_k$ means selecting the order $x_j \ll x_k$ when $w_j(x_j)$ and $r_i(x_j)$ precede, as in the previous example. If postposing breaks serializability (i.e., a cycle occurs in the MVSG), then Oze uses a technique called \textit{order forwarding}, which tries preposing ($x_k \ll x_j$) to expand the scheduling space.
With order forwarding, Oze changes the version order while avoiding breaking the concurrent transactions' view, i.e., as long as the forwarding transaction does not interrupt the writer of the version and its readers. Order forwarding enables Oze to schedule transactions in the MVSR space, which is larger than the space generated by the MVSGT protocol~\cite{Hadzilacos85}.
\subsection{3-Phase Order Construction Protocol} \label{sec:3phase-design}
We design an optimistic multi-version protocol suitable for in-memory DBMSs that reduces interference among concurrent transactions from both memory accesses and record locks. Figure~\ref{fig:protocol} shows an overview of Oze, which consists of the following three phases.
\begin{figure}
\centering
\includegraphics[keepaspectratio, scale=0.55]{figures/protocol.pdf}
\caption{Overview of Oze protocol \textmd{-- The solid boxes are explained in Section~\ref{sec:design}, and the dashed boxes are explained in Section~\ref{sec:implementation}.}}
\label{fig:protocol}
\end{figure}
\textbf{Local Ordering: }
Oze executes a transaction while reading committed versions and writing new ones in the thread-local area. When reading, the transaction selects the latest committed version from the version list sorted by the serialization order, and the reads-from edge is added to the graph. If the graph is still acyclic (i.e., serializable), the transaction proceeds to the next step. If the graph has a cycle, it selects and checks successively older versions until it finds one that does not create a cycle.
The transaction adds anti-de\-pen\-den\-cy edges from itself to the writers of newer versions than the selected one.
When writing, the transaction only puts the new version of the record in the local write set to reduce unnecessary interferences on the records and the graph. Only the reads-from relationship is reflected on the graph in the local ordering phase. At this point, the order is partially and tentatively determined; selecting version orders is deferred to the global ordering phase.
\textbf{Global Ordering: }
In the global ordering phase of Oze, a transaction determines version orders that can guarantee serializability. This phase is performed after accepting a commit request or finishing the execution of transaction logic. The transaction selects a version order for each record in the write set and adds a corresponding edge. As mentioned in Section~\ref{sec:order-based-cc}, the transaction first tries the postposing version order, and if it makes a cycle, it tries preposing by order forwarding. After confirming there is no cycle, it adds a pending version (that cannot be read at this point) to the record while maintaining the serialization order. If the version orders can be determined while keeping the graph acyclic for all write records, the transaction proceeds to the finalizing phase for committing.
\textbf{Finalizing: }
For a transaction that passed the global ordering phase without aborting, Oze changes the status of versions written by the transaction to \textit{committed}.
\subsection{Example} \label{sec:oze-example}
\begin{figure}
\centering
\includegraphics[keepaspectratio, scale=0.42]{figures/order-forwarding.pdf}
\caption{Example of order forwarding.}
\label{fig:order-forwarding}
\end{figure}
The following example describes the behavior of the Oze protocol. Consider schedule $s$, which consists of read, write, and commit operations from transactions $T_1$ to $T_4$. The notation corresponds to the multi-version full schedule in the literature~\cite{Hadzilacos85}.
\begin{equation*}
s = w_1(x_1) w_2(y_2) c_1 c_2 r_3(x_1) r_4(y_2) w_3(y_3) c_3 w_4(x_4) c_4
\end{equation*}
First, after committing $T_1$ and $T_2$, $T_3$ and $T_4$ read $x_1$ and $y_2$, respectively, as indicated by the blue arrows in Figure~\ref{fig:order-forwarding}. The edges for reads-from (dependency edges) are added. Next, $T_3$ verifies whether $y_2 \ll y_3$ (postposing) can be chosen as the version order of $y$ when committing after writing $y_3$. Specifically, as shown by the solid red arrow in the figure, the Oze protocol adds the anti-dependency edge from $T_4$ to $T_3$ and checks whether there is a cycle. In this example, there is no cycle, and $T_3$ is committed.
In contrast, when committing after writing $x_4$, $T_4$ creates a cycle if it tries to add a postposing edge from $T_3$ to $T_4$ (i.e., the version order $x_1 \ll x_4$) due to $T_3$'s read of $x_1$. Thus, it puts a preposing edge from $T_4$ to $T_1$ to try the other version order $x_4 \ll x_1$, as shown by the dashed red arrow in the figure. This order forwarding keeps the graph acyclic, so $T_4$ can also be committed. The final serialization order is $T_2 < T_4 < T_1 < T_3$, which differs from the chronological order. Note that Oze can still ensure \textit{linearizability}~\cite{Herlihy90} by introducing an epoch and restricting the forwarding within the epoch, as detailed in Section~\ref{sec:implementation}.
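The cycle check that drives this choice can be replayed on the example's edge set (an illustrative Python sketch; the graph representation is ours, not Oze's internal one):

```python
def has_cycle(edges, nodes):
    """Three-color DFS cycle check over an explicit edge set."""
    adj = {n: [] for n in nodes}
    for a, b in edges:
        adj[a].append(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    def visit(n):
        color[n] = GRAY
        for m in adj[n]:
            if color[m] == GRAY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in nodes)

# reads-from edges T1->T3 and T2->T4, plus T4->T3 from T3's commit (y_2 << y_3)
base = {(1, 3), (2, 4), (4, 3)}
postposing = base | {(3, 4)}   # x_1 << x_4 would add T3 -> T4: cycle with T4 -> T3
preposing = base | {(4, 1)}    # order forwarding chooses x_4 << x_1: T4 -> T1
```

With the postposing edge the graph contains the cycle $T_3 \rightarrow T_4 \rightarrow T_3$; with the preposing edge it stays acyclic.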
\subsection{Correctness}
\begin{theorem}
If the Oze scheduler works correctly, i.e., if it outputs schedule $s$ and computes version order $\ll$, then $(s, \ll)$ is multi-version view serializable.
\end{theorem}
\begin{proof}
Let $G(s)$ be the multi-version serialization graph produced by the scheduler after having output $s$. $G(s)$ is acyclic, so let $s_r$ be any serial schedule of the transactions in $s$ in which the order of the transactions is compatible with the edges of $G(s)$.
Let $r_i(x_j)$ be any reads-from relation in $(s, \ll)$. Then $j \rightarrow i$ is an edge in $G(s)$, so $T_j$ comes before $T_i$ in $s_r$. For any other transaction $T_k$ which writes $x$, there must be either $k \rightarrow j$ or $i \rightarrow k$ as an edge of $G(s)$ since the scheduler decides a version order based on the following protocol:
(1) For $r_i(x_j)$, the scheduler adds an edge $i \rightarrow k$ for each skipped version $x_k$ in the local ordering phase.
(2) For $w_k(x_k)$, the scheduler first tries to add an edge $i \rightarrow k$ in the global ordering phase. If there exists a cycle, the scheduler tries to add $k \rightarrow j$ instead of $i \rightarrow k$ by order forwarding optimization.
Hence, $T_k$ is not between $T_j$ and $T_i$ in $s_r$. Therefore, $T_i$ reads $x$ from $T_j$ in $s_r$ as well, so $(s, \ll)$ is multi-version view serializable.
\end{proof}
\section{Oze Implementation} \label{sec:implementation}
We describe a centralized implementation of Oze with a single MVSG, followed by a decentralized one in Sections~\ref{sec:decentralized} and \ref{sec:parallel-validation}.
\subsection{Data Structure}
\begin{figure}
\centering
\includegraphics[keepaspectratio, scale=0.38]{figures/data-structure-compact.pdf}
\caption{Data structures in Oze.}
\label{fig:data-structure}
\end{figure}
\textbf{Transaction: }
Each transaction worker thread has a transaction ID (\textit{txid}) that is assigned at the beginning and a read/write set. The transaction ID consists of an epoch, worker thread ID, and local counter. We introduce the epoch to ensure linearizability and facilitate garbage collection. Like Silo, a dedicated thread increments the global epoch at regular intervals, and each worker thread refers to it. The read set stores pointers to the read versions with their keys, and the write set stores the written versions with their keys.
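A possible packing of such a transaction ID is sketched below (the bit widths are our assumption; the paper fixes no particular layout):

```python
EPOCH_BITS, WORKER_BITS, COUNTER_BITS = 32, 10, 22   # assumed split of a 64-bit id

def make_txid(epoch, worker_id, counter):
    """Pack (epoch, worker thread id, local counter); comparing two ids
    then orders them by epoch first, as the epoch-based design requires."""
    assert worker_id < (1 << WORKER_BITS) and counter < (1 << COUNTER_BITS)
    return (epoch << (WORKER_BITS + COUNTER_BITS)) | (worker_id << COUNTER_BITS) | counter

def epoch_of(txid):
    return txid >> (WORKER_BITS + COUNTER_BITS)

txid = make_txid(epoch=5, worker_id=3, counter=42)
```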
\textbf{Record: }
The structure of the database record is shown in Figure~\ref{fig:data-structure}. The record has a linked list of versions with transactions' serialization order; specifically, it has a pointer (\textit{record.latest}) to the last version of the list. Each version has \textit{txid} of the version creator, the pointer to the previous version, and the state. In addition, the decentralized implementation described in Section~\ref{sec:decentralized} holds a pointer to the multi-version serialization graph (MVSG, or simply "graph") managed on a per-record basis.
\textbf{Graph: }
The structure of the graph in Oze is shown in Figure~\ref{fig:data-structure}. The graph is represented as a map whose key is \textit{txid} and whose value is the node of the graph (\textit{txnode}). Each node has three lists of \textit{txids} and the transaction's read set.
The first is \textit{read\_follower}, a list of transactions that read the version written by the transaction, which corresponds to the reads-from edges from the follower's perspective. The second is \textit{write\_follower}, a list of transactions that write a version newer than the version written by the transaction, which corresponds to the version order edges. The third is \textit{from}, a list of transactions that have any edges pointed from the transaction. Note that Figure~\ref{fig:data-structure} holds a graph for each record, as in the decentralized implementation, whereas a centralized implementation shares a single graph using the same structure.
\subsection{Read and Write in Local Ordering Phase} \label{sec:read-write}
\begin{algorithm}[t]
\caption{Local ordering phase (read and write)}\label{alg:readwrite}
\Function{read(txn, record)}{
ver = record.latest \\
\While{ver}{
graph[ver.txid].read\_follower.add(txn.txid) \\
\If{is\_acyclic(graph)}{
break
}
graph[ver.txid].read\_follower.remove(txn.txid) \\
graph[txn.txid].write\_follower.add(ver.txid) \\
ver = ver.next
}
\eIf{ver}{
txn.read\_set.add((record, ver)) \\
graph[txn.txid].read\_set.add(record, ver) \\
}{
abort()
}
}
\Function{write(txn, record)}{
ver = create\_version(txn.txid) \\
txn.write\_set.add((record, ver)) \\
}
\end{algorithm}
As described in Section~\ref{sec:3phase-design}, Oze executes the local ordering phase, global ordering phase, and finalizing phase in that order. We describe the read and write protocol in the local ordering phase below.
Lines 1--14 of Algorithm~\ref{alg:readwrite} show Oze's read protocol. The read protocol is protected by a single global latch to access the MVSG exclusively. In Oze, a transaction first searches the version list of the record from the latest version to find a \textit{readable version}, a committed version that can be read without creating a cycle. If the transaction cannot find a readable version even after reaching the oldest version, it aborts itself. If there is a readable version, the transaction adds its \textit{txid} to the \textit{read\_follower} in the \textit{txnode} of the transaction that wrote the selected version, and adds the \textit{txids} of the writers of the skipped versions to the \textit{write\_follower} in its own \textit{txnode} (Lines 3--9).
The record and the version are also added to the local read set and the read set on the graph, respectively (Lines 11--12).
When writing, a transaction in Oze creates a version and stores it in the local write set associated with the record (Lines 16--17).
It processes the write set later, in the global ordering phase, to reduce unnecessary interference with the records and the graph.
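The version search in the read protocol can be sketched as follows, modeling the MVSG as a plain adjacency map from \textit{txid} to its set of followers. The DFS-based \texttt{is\_acyclic} is an assumption of this sketch; the paper does not prescribe a particular cycle-check algorithm.

```python
def is_acyclic(edges):
    """edges: txid -> set of follower txids (an edge u -> v means u precedes v).
    Three-color DFS; returns False as soon as a back edge (cycle) is found."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def dfs(v):
        color[v] = GRAY
        for w in edges.get(v, ()):
            c = color.get(w, WHITE)
            if c == GRAY or (c == WHITE and not dfs(w)):
                return False
        color[v] = BLACK
        return True

    return all(color.get(v, WHITE) != WHITE or dfs(v) for v in list(edges))

def find_readable(versions, reader, edges):
    """versions: newest-first list of (writer_txid, version). Mirrors Lines 3--9:
    tentatively add the reads-from edge; if that closes a cycle, undo it and
    record instead that the reader precedes the skipped version's writer."""
    for writer, ver in versions:
        edges.setdefault(writer, set()).add(reader)   # reads-from edge: writer -> reader
        if is_acyclic(edges):
            return ver                                # readable version found
        edges[writer].discard(reader)                 # undo and skip this version
        edges.setdefault(reader, set()).add(writer)   # reader -> writer (version order)
    return None                                       # reached the oldest version: abort

# The reader (txid 3) already precedes the writer of v2 (txid 2), so reading v2
# would close a cycle; the reader skips it and reads v1 instead.
edges = {3: {2}}
assert find_readable([(2, "v2"), (1, "v1")], 3, edges) == "v1"
```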
\subsection{Global Ordering Phase} \label{sec:commit}
\begin{algorithm}[t]
\caption{Global ordering phase}\label{alg:commit}
\tcp{\textbf{decided:} List of txns ordered before committing txn (empty when starting function)}
\Function{ordering(txn)}{
\For{(rec, v) \textbf{in} txn.write\_set}{
readers = find\_readers(graph, rec) \\
\For{r \textbf{in} readers}{
graph[r.txid].write\_follower.add(txn.txid) \\
}
\eIf{is\_acyclic(graph)}{
decided.add(readers) \\
rec.insert\_version([]) \\
}{
followers = find\_followers(graph, rec, readers) \\
\For{r \textbf{in} readers}{
\If{r \textbf{not in} decided}{
graph[r.txid].write\_follower.remove(txn.txid) \\
}
}
\For{f \textbf{in} followers}{
\eIf{txn.txid.epoch == f.txid.epoch}{
graph[txn.txid].write\_follower.add(f.txid) \\
}{
abort() \\
}
}
\eIf{is\_acyclic(graph)}{
rec.insert\_version(followers) \\
}{
abort() \\
}
}
}
}
\end{algorithm}
In the global ordering phase of Oze, a transaction verifies whether serializability can be guaranteed while determining the order of transactions. Algorithm~\ref{alg:commit} shows the global ordering protocol. The centralized implementation uses a single global latch to access the MVSG exclusively throughout the entire ordering function.
For each record \textit{rec} and version \textit{v} in the local write set, the transaction first gets \textit{readers}, a list of transactions reading the record, from the graph and adds postposing edges; the verifying transaction itself is placed behind in the serialization order (Lines 4--5). If there is no cycle, it adds \textit{readers} to \textit{decided} to remember that their order is already fixed and inserts the version as the latest one (Lines 6--8).
If there is a cycle, the transaction tries a different version order: it finds \textit{followers} that should be ordered after \textit{txn} itself based on \textit{readers} (Line 10). Specifically, it lists followers by checking each reader's read set to observe which transaction's version it is reading. Then, it removes the current version order edges from \textit{readers} to \textit{txn} itself, except for the transactions whose order is already fixed, and adds new preposing edges from \textit{txn} to each of the \textit{followers} (Lines 13 and 16).
This edge replacement is equivalent to searching for another version order; we name it order forwarding because it forwards the transaction ahead in the serialization order. The order forwarding itself can be performed across epochs. However, Oze limits it within the same epoch to guarantee linearizability and simplify graph cleaning (described in Section~\ref{sec:gc}) and aborts transactions if forwarding occurs across epochs (Lines 15--18).
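A sketch of this edge replacement under the same adjacency-map model follows. Here \texttt{has\_cycle} uses Kahn's algorithm, an illustrative choice, and the epoch check of Lines 15--18 is omitted for brevity.

```python
def has_cycle(edges):
    """Kahn's algorithm: the graph has a cycle iff a topological sort
    cannot consume every node."""
    indeg = {}
    for u, vs in edges.items():
        indeg.setdefault(u, 0)
        for v in vs:
            indeg[v] = indeg.get(v, 0) + 1
    queue = [u for u, d in indeg.items() if d == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in edges.get(u, ()):
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen < len(indeg)

def order_forward(edges, txn, readers, followers):
    """First try postposing txn behind its readers; on a cycle, undo those
    version order edges and prepose txn ahead of the followers instead."""
    for r in readers:                             # postposing edges: reader -> txn
        edges.setdefault(r, set()).add(txn)
    if not has_cycle(edges):
        return "postposed"
    for r in readers:                             # undo the rejected choice
        edges[r].discard(txn)
    for f in followers:                           # preposing edges: txn -> follower
        edges.setdefault(txn, set()).add(f)
    return "forwarded" if not has_cycle(edges) else "abort"

assert order_forward({}, "T1", {"T2"}, set()) == "postposed"
# An existing edge T1 -> T2 makes postposing cyclic, so T1 forwards itself ahead:
assert order_forward({"T1": {"T2"}}, "T1", {"T2"}, {"T2"}) == "forwarded"
```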
\subsection{Decentralized Graph Management} \label{sec:decentralized}
Protocols that use centralized graphs, such as MVSGT~\cite{Hadzilacos85} and MVSGA~\cite{Hadzilacos88}, have to take a single global latch whenever the graph is updated, so they do not benefit from recent many-core architectures. Thus, the decentralized Oze manages the MVSG on a per-record basis instead of using a single centralized graph and validates transactions optimistically to achieve better performance and scalability. Specifically, it uses loosely synchronized MVSGs by propagating the orders that each transaction has decided to the MVSGs on each read/write record and the related records.
The key differences from the naive Oze protocol are \textit{read()} and \textit{ordering()}.
In \textit{read()} of the decentralized Oze, a transaction first merges its local MVSG into the MVSG on the record.
The process of selecting the version and adding the reads-from edge is the same as in Lines 2--14 of Algorithm~\ref{alg:readwrite} except that the merged graph is used instead of the global MVSG. After selecting a version, it merges the graph on the record into the local one. \textit{write()} is the same as the centralized implementation because it does not access the MVSG.
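The two-way merge in the decentralized read can be sketched as follows, again modeling each MVSG as an adjacency map from \textit{txid} to its follower set (an assumption of this sketch; the actual nodes carry the three lists described earlier).

```python
def merge(dst, src):
    """Merge MVSG src into dst in place. Both graphs are adjacency maps from
    txid to follower set; a plain union of edges suffices because both
    describe (partial views of) the same serialization order."""
    for txid, followers in src.items():
        dst.setdefault(txid, set()).update(followers)
    return dst

record_graph = {"T1": {"T2"}}   # the record has seen the order T1 < T2
local_graph = {"T2": {"T3"}}    # the transaction has decided T2 < T3

merge(record_graph, local_graph)   # before reading: push local orders to the record
assert record_graph == {"T1": {"T2"}, "T2": {"T3"}}

merge(local_graph, record_graph)   # after selecting a version: pull record orders back
assert local_graph == {"T2": {"T3"}, "T1": {"T2"}}
```

The pull after version selection is what lets a later record visit observe orders decided at earlier records.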
\begin{algorithm}[t]
\caption{Global ordering in decentralized Oze}\label{alg:opt-commit}
\tcp{\textbf{done:} List of records already processed (empty at beginning)}
\tcp{\textbf{target:} List of records to be propagated (empty as well)}
\Function{ordering(txn)}{
\For{(record, v) \textbf{in} write\_set}{
merge(record.graph, txn.graph) \\
\texttt{(Omitted)} \tcp{ Same as lines 3--22 in Algorithm~\ref{alg:commit}}
add\_target(txn, record.graph, target, done) \\
merge(txn.graph, record.graph) \\
done.add(record) \\
}
target.insert(records in read\_set) \\
\While{! target.is\_empty()}{
record = target.pop() \\
merge(record.graph, txn.graph) \\
\If{! is\_acyclic(graph)}{
abort() \\
}
add\_target(txn, record.graph, target, done) \\
merge(txn.graph, record.graph) \\
done.add(record) \\
}
}
\Function{add\_target(txn, graph, target, done)}{
followers = get\_all\_followers(txn, graph) \\
\For{follower \textbf{in} followers}{
\For{(rec, v) \textbf{in} graph[follower].read\_set}{
\If{rec \textbf{not in} done}{
target.add(rec) \\
}
}
}
}
\end{algorithm}
The difference in the global ordering phase is shown in Algorithm~\ref{alg:opt-commit}. The ordering process for the written records is the same as the protocol described in Algorithm~\ref{alg:commit}, except that the graphs are merged (Lines 3 and 6) and the records to which the orders are propagated are collected (\textit{add\_target} in Line 5). After ordering the write records,
a verifying transaction repeats the propagation for each record in the target, including the records in the read set, merging graphs, checking for cycles, and listing additional propagation targets until the target is empty (Lines 9--16). To obtain the additional targets, the transaction recursively tracks the graph to find all of its followers and adds their read records (Lines 17--22).
Oze propagates the orders so that concurrent transactions do not select orders that are inconsistent with each other (i.e., that create a cycle). Such an inconsistency would appear as a write skew anomaly if the propagation were omitted.
For example, consider a schedule $s = r_1(x_0)$ $r_2(y_0)$ $w_1(y_1)$ $w_2(x_2)$ $c_1$ $c_2$ that causes a typical write skew in the literature~\cite{Berenson95}. Since $T_1$ and $T_2$ write $y$ and $x$, which the other transaction reads, the two transactions visit $y$ and $x$, respectively, and select a version order in the global ordering phase. The orders $T_1 < T_2$ and $T_2 < T_1$ are then written for $x$ and $y$, since postposing on each record alone creates no cycle. If Oze committed without resolving these inconsistent choices, the write skew would occur.
To avoid this, Oze propagates and verifies the previously determined orders on each record in the read set (Lines 11--12). If $T_1$ writes $T_2 < T_1$ on $x$ before $T_2$ writes its selected order ($T_1 < T_2$) on the same $x$, then $T_2$, which comes later, detects a cycle and aborts.
That is, Oze is a first-come-first-win protocol. Note that both $T_1$ and $T_2$ might be aborted if they simultaneously write their choices to $y$ and $x$, respectively.
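The schedule above can be replayed on two per-record adjacency maps to see the first-come-first-win behavior. The replay below is an illustration of the propagation, not the implementation, and \texttt{has\_cycle} is an ordinary topological-sort check.

```python
def has_cycle(edges):
    """Kahn's algorithm: the graph has a cycle iff a topological sort fails."""
    indeg = {}
    for u, vs in edges.items():
        indeg.setdefault(u, 0)
        for v in vs:
            indeg[v] = indeg.get(v, 0) + 1
    queue = [u for u, d in indeg.items() if d == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for v in edges.get(u, ()):
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen < len(indeg)

graph_x, graph_y = {}, {}  # per-record MVSGs as adjacency maps

# T1 validates first: T2 read y0 and T1 wrote y1, so T1 postposes itself
# behind T2 on record y (the order T2 < T1) ...
graph_y.setdefault("T2", set()).add("T1")
assert not has_cycle(graph_y)

# ... and propagates the decided order to the record it read (x).
for u, vs in graph_y.items():
    graph_x.setdefault(u, set()).update(vs)

# T2 validates later: T1 read x0 and T2 wrote x2, so T2 tries T1 < T2 on x.
graph_x.setdefault("T1", set()).add("T2")
assert has_cycle(graph_x)   # T2 detects the cycle and aborts: no write skew
```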
\textbf{Correctness Sketch: }
Not only in the above case of two records and two transactions but also with $n$ records and $n$ or more transactions, the decentralized Oze prevents any cycles, such as write skews. As has been theoretically established in SSI~\cite{Fekete05, Cahill09}, the essence of such cycles is a series of anti-dependencies. Hence, using a similar idea, Oze can guarantee serializability by finding propagation targets based on the anti-dependencies even though the MVSG is managed in a decentralized manner. If the validating transaction (say, $T$) creates a cycle that includes $T$ itself, the cycle must be due to a transaction that chooses an order following $T$; i.e., it must involve a transaction that writes a record that $T$ reads, as long as $T$ is still uncommitted. Thus, Oze never creates a cycle, because the propagation guarantees that the transactions contained in $T$'s \textit{write\_follower}, and their followers, never precede $T$. In other words, Oze guarantees serializability for the following reasons, provided it can propagate the orders to the records read by $T$'s subsequent transactions (Lines 17--22) without creating a cycle: (1) If such a transaction is in-flight, the record is always revisited in its global ordering phase, so it will be aborted if it precedes $T$ and creates a cycle. (2) If such a transaction is already in the global ordering phase, $T$ can confirm that it does not precede $T$ (no cycle), since its read/write records have already been fixed.
\textbf{Complexity: }
The dominant factor in the complexity of the decentralized Oze protocol is graph processing, such as merging and cycle checking. The time complexity of both processes is $O(|V| + |E|)$, where $|V|$ and $|E|$ are the numbers of nodes and edges in a graph\footnote{Precisely, the cycle check requires fewer nodes and edges, since it suffices to check only the nodes that follow the committing transaction.}, respectively. The graph processing must be done for each of the write records ($W$) and the propagation records ($P$), which are the read records and the records found in the global ordering phase. Therefore, as a whole, the time complexity of the decentralized Oze protocol is $O((|W| + |P|)(|V| + |E|))$.
\subsection{Parallel Validation} \label{sec:parallel-validation}
In the decentralized Oze, the cost of selecting, propagating, and verifying the version order in the global ordering phase increases, especially for a long transaction with many records to read and write. In addition, there is a risk that the graph keeps growing because garbage collection runs less frequently while such long transactions are processed, so that validating the graph never finishes. Thus, Oze performs the global ordering phase in parallel using multiple threads for fast validation.
The part that can be parallelized is the propagation phase (Lines 8--15 in Algorithm~\ref{alg:opt-commit}) after the version order has been decided (Lines 2--6 in Algorithm~\ref{alg:opt-commit}). If the validation threads (referred to as validators) do not know each other's version-order decisions, they cannot prevent MVSG cycles via propagation. Therefore, we only parallelize the propagation after determining the version orders.
\subsection{Garbage Collection} \label{sec:gc}
This section describes the garbage collection (GC) of MVSG and record versions. In SGT~\cite{Casanova80},
incoming edges are never added to a committed transaction; similarly, we can delete MVSG nodes that will never take part in a future cycle if we guarantee that no additional incoming edges will arrive.
However, in Oze, incoming edges can be added to a committed transaction in two situations.
The first case can occur in the read protocol. As mentioned in Section~\ref{sec:read-write}, when selecting a version to read, a reader transaction may add edges to the transactions (\textit{write\_follower}) that wrote the skipped version.
The second case can occur in order forwarding; when preposing a transaction, it adds edges to the \textit{write\_follower} transactions as mentioned in Section~\ref{sec:commit}.
Oze uses epochs to guarantee that no more incoming edges will be added to a node, and then deletes it.
An epoch that satisfies this condition is called the \textit{reclamation\_epoch} ($e_r$). $e_r$ is the minimum of the worker threads' local epochs, minus one. We can prevent transactions from adding incoming edges that create a cycle by prohibiting (1) reading versions in $e_r$ or before (except the latest version) and (2) order forwarding to versions in $e_r$ or before.
Algorithm~\ref{alg:clean} shows the graph cleaning algorithm. First, for each \textit{txnode} at or before $e_r$, it checks whether any follower is newer than $e_r$ (Lines 4--11); this prevents removing a \textit{txnode} read by an in-progress transaction. If there is such a follower, or if there is an incoming edge (i.e., \textit{from} is not empty), the \textit{txnode} is excluded from the GC targets (Lines 12--13). If the \textit{txnode} is a GC target, its transaction ID is deleted from all followers (i.e., the incoming edge is removed on the follower side), and then the node is deleted from the graph (Lines 14--16).
For GC of versions, we remove versions in $e_r$ or before, except for the latest one (in the serialization order). We need to check the MVSG and keep versions that transactions are still reading.
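The epoch rule for version GC can be sketched as follows; the helper names are illustrative, and the check against the MVSG for in-flight readers is omitted.

```python
def reclamation_epoch(local_epochs):
    # e_r is the minimum of the worker threads' local epochs, minus one.
    return min(local_epochs) - 1

def prune_versions(version_epochs, e_r):
    """version_epochs: epochs of a record's versions, newest-first.
    Remove versions in e_r or before, keeping the newest of them, which a
    transaction reading as of e_r may still need."""
    kept, old_kept = [], False
    for e in version_epochs:
        if e > e_r:
            kept.append(e)          # newer than e_r: always kept
        elif not old_kept:
            kept.append(e)          # newest version at or before e_r: kept
            old_kept = True
    return kept                     # everything older is reclaimed

e_r = reclamation_epoch([5, 6, 5])              # workers' local epochs -> e_r = 4
assert prune_versions([6, 5, 4, 3, 2], e_r) == [6, 5, 4]
```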
\begin{algorithm}[t]
\caption{Graph cleaning in Oze}\label{alg:clean}
\tcp{\textbf{followers:} List of txns ordered after committing txn}
\Function{clean(graph, reclamation\_epoch)}{
\For{txnode \textbf{in} graph}{
followers.clear() \\
\If{txnode.txid.epoch $>$ reclamation\_epoch}{
continue \\
}
followers.add(txnode.read\_follower) \\
followers.add(txnode.write\_follower) \\
keep = false \\
\For{f \textbf{in} followers}{
\If{f.epoch > reclamation\_epoch}{
keep = true; break \\
}
}
\If{keep \textbf{or} txnode.from.size() $>$ 0}{
continue \\
}
\For{f \textbf{in} followers}{
graph[f].from.remove(txnode.txid) \\
}
graph.remove(txnode.txid)
}
}
\end{algorithm}
\section{Evaluation} \label{sec:eval}
\subsection{Experimental Setup}
We evaluate Oze first with BoMB, which reproduces our target application workloads, and then report the performance of Oze on two standard benchmarks: TPC-C~\cite{tpcc} and YCSB~\cite{Cooper10}. All experiments are performed using CCBench~\cite{Tanabe20}, a benchmark platform for various concurrency control protocols. We compare Oze with Silo~\cite{Tu13}, MOCC~\cite{Wang16}, TicToc~\cite{Yu16}, ERMIA~\cite{Wang17}, Cicada~\cite{Lim17} and D2PL. D2PL is a 2PL-based protocol that mimics deterministic behavior, such as in Calvin~\cite{Thomson12}, and is only used for the static BoMB evaluation. D2PL first sorts all the accessed keys and then locks them in that order.
We use the original implementation of CCBench but modify it to support each workload efficiently; for instance, we unify the read-write interface and the abstract record format. We also implement Oze and D2PL in C++ as a new protocol on CCBench.
The evaluation environment consists of a single server with four Intel\textregistered{} Xeon\textregistered{} Platinum 8176 CPUs with 2.10 GHz processors and forty-eight DDR4-2666 32 GB DIMMs (1.5 TB in total). Each CPU has 28 physical cores and supports hyper-threading.
Similar to previous work~\cite{Guo21}, we evaluate all protocols in two modes: \textit{one-shot} and \textit{interactive}. The \textit{one-shot} mode simulates situations in which all the necessary parameters are given at the beginning of a transaction and there is no interaction between the client and the database server. The \textit{interactive} mode simulates situations in which the client executes transaction logic and sends requests
to the database server. We emulate the interactive mode by inserting a 1-ms delay immediately after each request in the transaction logic of each workload.
\subsection{Experiments on BoMB}
We first evaluate how each protocol can handle the BoMB workload using \textit{static BoM} (i.e., L1, S1 and S2 transactions only and BoM trees never change). Then, we run all six transactions of \textit{dynamic BoM} only with Oze and D2PL based on the result of the static case.
\subsubsection{L1 Transaction Runablity}
We run the L1, S1 and S2 transactions with one thread each for 1 minute while varying the number of target products from 20 to 100 and measure the throughput and abort rate of each type of transaction.
We use the default parameters of BoMB shown in Table~\ref{tab:params}, except for the varying number of products. Even with a high abort rate, a protocol may probabilistically commit the L1 transaction given enough retries, so we repeat the above ten times and calculate the \textit{success rate}, the fraction of trials in which the L1 committed at least once. Figure~\ref{fig:cc-comparison-L1} shows the success rate of the L1, and Figure~\ref{fig:cc-comparison-L1-abort-rate} in Section~\ref{sec:intro} shows the average abort rate over these ten trials.
\begin{figure}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.45]{figures/cc-comparison-L1-success-rate-bar.pdf}
\subcaption{One-shot mode.}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.45]{figures/cc-comparison-L1-success-rate-interactive-bar.pdf}
\subcaption{Interactive mode.}
\end{minipage}
\caption{Success rate of L1 with static BoM. Only Oze, D2PL (and MOCC partially) can handle it.}
\label{fig:cc-comparison-L1}
\end{figure}
\begin{figure}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.45]{figures/cc-comparison-S1-throughput-1shot.pdf}
\subcaption{One-shot mode.}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.45]{figures/cc-comparison-S1-throughput-interactive.pdf}
\subcaption{Interactive mode.}
\end{minipage}
\caption{S1 throughput with static BoM. From L1 perspective, protocols other than Oze, D2PL and MOCC are for reference.}
\label{fig:cc-comparison-S1}
\end{figure}
\textbf{One-shot:} For a large number of target products, no protocol other than Oze, MOCC and D2PL can commit the L1 transaction. Silo and TicToc abort almost all of the L1 transactions due to read validation failures in the commit phase, because the S1 transactions update the cost of raw materials. Surprisingly, the multi-version protocols ERMIA and Cicada can hardly commit the L1 transaction even with 20 products. As discussed in Section~\ref{sec:soa-protocols}, with the benefit of multi-versioning, both ERMIA and Cicada can build the BoM trees and calculate the costs without being hindered by the S1. However, both protocols almost always abort the L1 due to false positives, since the writes of the costing results conflict with the reads of S2.
As for MOCC, the L1 can be committed only through the following fragile behavior. First, the L1 aborts when MOCC notices in its validation phase that a read record has already been updated by the S1. The transaction then retries and acquires the lock for the record that caused the abort. If the S1 happens to update the same record again at this moment, the L1 can be committed, since the S1 waits for the lock in its validation phase. Though the success rate decreases as the number of trials within the 1-minute window drops, this can still happen with a larger number of products. Note that MOCC can easily abort the L1 if there are other concurrent S1 transactions.
As for D2PL, its deterministic behavior allows it to acquire all locks without deadlocks and thus commit the L1.
Oze with 32 validators can commit the L1 perfectly for up to 140 products, but beyond that, it does not commit at all within the 1 minute. This is not due to aborts but to the ordering phase taking a long time to complete as the size of the graph grows.
\textbf{Interactive:} The difference from the one-shot result is the behavior of MOCC and of Oze with a single validator. For MOCC, since the number of trials for each type of transaction decreases due to the long read phase, accessing the same record consecutively, as in the one-shot mode, is less likely to occur. As a result, MOCC cannot commit the L1 even with 40 products. Regarding Oze, the local and global ordering phases are shortened because the graph grows more slowly than in the one-shot mode, owing to fewer concurrent S1 transactions. Thus, Oze with a single validator can commit the L1 with a greater number of products.
\begin{figure}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/oze-S1-scale-throughput-1shot.pdf}
\subcaption{One-shot mode.}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/oze-S1-scale-throughput-interactive.pdf}
\subcaption{Interactive mode.}
\end{minipage}
\caption{S1 scalability with static BoM.}
\label{fig:oze-S1-wscale}
\end{figure}
\begin{figure}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/oze-L1-vscale-throughput-1shot.pdf}
\subcaption{One-shot mode.}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/oze-L1-vscale-throughput-interactive.pdf}
\subcaption{Interactive mode.}
\end{minipage}
\caption{Validation scalability with static BoM.}
\label{fig:oze-L1-vscale}
\end{figure}
\subsubsection{Short Transaction Throughput}
Figure~\ref{fig:cc-comparison-S1} shows the average throughput of the short transaction (S1) during the above experiment. Note that directly comparing short transaction throughput is not always appropriate from the viewpoint of handling heterogeneous workloads, because no protocol except Oze, MOCC and D2PL (under certain conditions) can commit the long transaction at all.
\textbf{One-shot:} While the existing protocols process roughly 50 to 100 Mtpm in the one-shot mode, Oze and D2PL process only a few Mtpm. The throughput of Oze with 32 validators decreases as the number of products increases. This is because the graph on each record, which the short transactions also handle, remains large for a long time without GC as the processing time of the L1 transactions increases.
For D2PL, short transactions that conflict with long transactions must wait to acquire locks, resulting in performance degradation with a larger number of products.
\textbf{Interactive:}
Unlike the one-shot mode, all protocols other than D2PL are almost comparable in the interactive mode.
Because communication delay is added to the latency, the overhead of concurrency control (i.e., the performance difference) is small.
However, only D2PL experiences significant performance degradation because the longer the L1 transaction continues to hold locks, the more the S1 transaction is affected by those locks.
\subsubsection{Scalability Analysis of Oze} \label{sec:eval-scalability}
We evaluate the scalability of Oze by varying the number of threads for the short transactions. We increase the number of threads for the S1 and S2 transactions up to 32 while keeping the L1 transaction on a single thread. Figure~\ref{fig:oze-S1-wscale} shows the throughput of the S1 transaction with 50 and 100 \texttt{target-products}. Note that we used 32 threads as validators and confirmed that the L1 transaction could be committed in all cases.
In the one-shot mode, Oze shows near-linear scalability when the number of threads is small. The effect of parallelism diminishes as the number of threads increases. As the size of the graphs on records grows with more S1 transactions, the latency of the long transaction that accesses those records gradually becomes higher. The growth of the graph size also affects the short transactions themselves, since garbage collection is not triggered during the long transaction's execution.
The result of the interactive mode shows higher scalability than that of the one-shot mode while hiding the inserted delay because the size of the graph grows more gradually.
\subsubsection{Parallel Validation}
Figure~\ref{fig:oze-L1-vscale} shows the throughput of the L1 transaction when increasing the number of validators from 1 to 64.
In the one-shot mode, Oze cannot commit the L1 transaction with a single validator or even a small number of validation threads for 100 products, but parallel validation makes committing the L1 transaction possible, and its throughput increases with more threads. In the interactive mode, the effect of parallel validation is smaller because the inserted delay extends the local ordering phase. Note that in both cases the benefit diminishes as the number of threads increases, because parallel validation incurs an overhead when merging the resulting graph of each validator and checking the acyclicity of the graph.
\begin{figure}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/bomb-full-throughput-1shot.pdf}
\subcaption{One-shot mode.}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/bomb-full-throughput-interactive.pdf}
\subcaption{Interactive mode.}
\end{minipage}
\caption{Oze and D2PL throughput with dynamic BoM.}
\label{fig:bomb-full}
\end{figure}
\subsubsection{Dynamic Setting}
Figure~\ref{fig:bomb-full} shows the throughput of Oze and D2PL on dynamic BoM with a varying number of products. For Oze, each transaction runs with a single worker thread and 32 validators. For D2PL, we omit the L1 throughput since D2PL cannot commit the L1 at all. D2PL issues reconnaissance queries~\cite{Thomson12} to determine the input records, which are locked and validated at actual execution time. However, D2PL always fails to lock or validate them since the inputs change dynamically. In contrast, Oze can handle all types of transactions. Note that the overall throughput decreases as the number of products increases, because the graph grows during the L1 transaction. In particular, the S2 throughput drops significantly when an S2 transaction finds the L1 transaction in the graph, which causes a large number of propagations.
\begin{figure}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/cc-comparison-tpcc-throughput-1shot.pdf}
\subcaption{One-shot mode.}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/cc-comparison-tpcc-throughput-interactive.pdf}
\subcaption{Interactive mode.}
\end{minipage}
\caption{TPC-C-NP throughput.}
\label{fig:tpcc}
\end{figure}
\subsection{Experiments on TPC-C}
We compare Oze with the other concurrency control protocols on the TPC-C benchmark. We run only the New-Order and Payment transactions, in equal ratio, since they account for a large percentage of the full TPC-C transaction mix.
Figure~\ref{fig:tpcc} shows the throughput of each protocol with varying numbers of threads and warehouses. The throughput of Oze in the one-shot mode is an order of magnitude lower, but the throughput in the interactive mode is comparable to the other protocols and scales almost linearly, because the overhead of concurrency control is masked by the delay. Unfortunately, we see no advantage in using Oze for workloads such as TPC-C, where long and short transactions do not mix at the same time. We could switch between Oze and other protocols if such mixing is guaranteed to occur only at certain times.
\subsection{Experiments on YCSB}
We present an empirical evaluation with YCSB to understand the details of Oze's protocol behavior. We run YCSB-A (50\% reads and 50\% pure writes) on 100 million records with 100-byte payloads. The number of Oze worker threads is 28, and all workers use single-threaded validation. Figure~\ref{fig:ycsb} shows the throughput and the average graph size with (a) a varying number of operations per transaction and (b) varying skew. Note that the graph size is the average number of nodes in cycle checking.
As shown in the analysis of computational complexity in Section~\ref{sec:decentralized}, the throughput is almost inversely proportional to the product of the number of operations (i.e., write records and propagated records) and the graph size when there is no skew. When the skew exceeds 0.7, the graph size increases dramatically, and the throughput drops by an order of magnitude.
Figure~\ref{fig:runtime-analysis} shows the average latency required to process each major function of Oze. Both Figures~\ref{fig:runtime-analysis} (a) and (b) show that the percentage of propagation becomes larger compared to read processing (e.g., cycle checking) and version ordering (i.e., determining version orders) as the graph size increases. Therefore, reducing the graph size with more sophisticated garbage collection is one of the major challenges for a successor of Oze.
\begin{figure}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/ycsb-op.pdf}
\subcaption{No skew.}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/ycsb-skew.pdf}
\subcaption{16 operations.}
\end{minipage}
\caption{YCSB-A throughput with graph size.}
\label{fig:ycsb}
\end{figure}
\begin{figure}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/ycsb-runtime-analysis-lat-op.pdf}
\subcaption{No skew.}
\end{minipage}
\begin{minipage}{0.49\linewidth}
\centering
\includegraphics[keepaspectratio, scale=0.46]{figures/ycsb-runtime-analysis-lat-skew.pdf}
\subcaption{16 operations.}
\end{minipage}
\caption{Runtime analysis of Oze.}
\label{fig:runtime-analysis}
\end{figure}
\section{Related Work} \label{sec:related}
\textbf{Benchmark: }
TPC-C~\cite{tpcc} is the de facto standard benchmark for OLTP systems that simulates a warehouse-centric order processing application. TPC-E~\cite{tpce} simulates the activity of processing brokerage trades. Though TPC-C and TPC-E provide realistic workloads, both lack long transactions with update operations, which BoMB provides. TPC-EH~\cite{Wang17} is a variant of TPC-E and has a read-mostly (i.e., write-included) long transaction. However, there is no transaction that reads the result (Asset-History) written by the read-mostly long transaction, which corresponds to the S2 transaction in BoMB. YCSB~\cite{Cooper10} provides synthetic workloads comprised of homogeneous operations to benchmark cloud services.
\textbf{Lock-based Protocols: }
In 2PL variants using timestamp-based priority~\cite{Rosenkrantz78,Corbett13,Guo21}, a long transaction can commit when it has the smallest timestamp. Subsequent short transactions that conflict with the long transaction must then either wait until the long one commits or abort.
Altruistic locking~\cite{Salem94} enables transactions to \textit{donate} objects that they have locked, permitting other transactions to access the donated objects before they are unlocked.
Once short transactions (e.g., S1) accept a donation, they must wait for another donation to access another record, regardless of whether long transactions use it, which increases the waiting time.
\textbf{Protocols for Highly-Contended Workloads: }
ERMIA~\cite{Yu14} and Cicada~\cite{Lim17} keep multiple versions to handle highly-contended workloads. MOCC~\cite{Wang16} and ACC~\cite{Tang17} are hybrid protocols that switch between an optimistic and a pessimistic scheme to avoid starvation. Commit-time updates and timestamp splitting~\cite{Huang20} avoid the high contention using the database schema and the workload knowledge.
A batching and reordering scheme~\cite{Ding18} is for contended workloads with flexibility. While highly-contended workloads have been explored, heterogeneous workloads with long-running update transactions, such as BoMB, have been less discussed.
\textbf{Deterministic Databases: }
Calvin~\cite{Thomson12} executes transactions while acquiring locks based on the pre-determined total order. This behavior is the same as D2PL and would not work for dynamic BoM. Ocean Vista’s functor-based execution~\cite{Fan19} and Aria’s batch-based execution~\cite{Lu20} do not require the read-set in advance. When handling the BoMB workload, the throughput of short transactions can be bounded by the L1 throughput if short transactions read the functors placed by an L1 transaction (Ocean Vista) or if a batch contains an L1 transaction (Aria).
\section{Conclusion} \label{sec:conclusion}
We proposed Oze, a new concurrency control protocol that exploits a large scheduling space using a fully precise multi-version serialization graph in a decentralized manner. We also proposed a new OLTP benchmark, BoMB, based on a use case in an actual manufacturing company. Experiments using BoMB showed that Oze keeps the abort rate of the long-running update transaction at zero while reaching up to 1.7 Mtpm for short transactions with near-linear scalability, whereas state-of-the-art protocols cannot commit the long transaction or experience performance degradation in short transaction throughput.
\begin{acks}
This work is partially supported by a project, JPNP16007, commissioned by the New Energy and Industrial Technology Development Organization (NEDO).
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{\label{intro} Introduction}
In recent years the study of the full three-dimensional dynamical
nucleon structure in polarized high-energy collisions has witnessed impressive progress
(see e.g.~Refs.~\cite{D'Alesio:2007jt,Barone:2010zz} for recent reviews).
Motivated by several experimental results on spin and azimuthal asymmetries,
a class of partonic, transverse momentum dependent, distribution and fragmentation
functions (nowadays largely known as TMDs for short) have been introduced and analyzed.
In high energy hadronic processes where two energy scales play a role (a large perturbative scale
and a small transverse momentum scale) the usual leading-twist QCD collinear factorization schemes,
making use of the corresponding collinear parton distribution (PDFs) and fragmentation (FFs) functions,
often fail to describe several puzzling experimental measurements on spin asymmetries.
When a small transverse scale is involved one needs to take care more accurately
of the intrinsic motion of constituent partons inside parent hadrons.
Typical examples are: the low transverse momentum distribution of dilepton pairs
in Drell-Yan (DY) processes and the corresponding asymmetries in the azimuthal distribution
of the observed pair~\cite{Tangerman:1994eh,Boer:1999mm,Anselmino:2002pd};
the low transverse momentum spectrum of hadrons produced in the current region
in semi-inclusive deeply inelastic scattering (SIDIS)~\cite{Mulders:1995dh,Boer:1997nt,Anselmino:2011ch};
the azimuthal asymmetries in the correlations of two leading hadrons (typically pions) observed in opposite
jets produced in $e^+e^-$ collisions~\cite{Boer:1997mf,Anselmino:2007fs}.
Despite the theoretical and experimental difficulties associated with the study of these
reactions, they offer a unique opportunity to learn about the hadron structure in
the transverse directions (with respect to the usual light-cone one).
{}From an historical perspective, the first sizable single spin asymmetries were observed
in single inclusive hadron production at large values of the Feynman variable,
$x_F=p_L/p_L^{\rm max}\simeq 2p_L/\sqrt{s}$, and moderately large transverse momentum
in polarized hadronic collisions.
However, the theoretical study of this process is made difficult by the fact that
there is no small transverse momentum scale. Intrinsic transverse momenta
of partons are integrated out in the observable. This complicates the treatment of
such a (higher-twist) asymmetry, since several possible effects are mixed up and it is not obvious
how to disentangle them. Moreover, a TMD factorization scheme similar to that developed for the reactions
discussed above (DY, SIDIS, $e^+e^-$ annihilations) has never been proved and,
at least for double inclusive jet and/or hadron production processes,
like the one studied here, there are clear indications
that factorization may be broken (for a recent discussion, see e.g.~Ref.~\cite{Rogers:2013zha}
and references therein).
Quite recently, it has been suggested to study azimuthal asymmetries in hadronic collisions
by looking at the azimuthal distribution of leading hadrons (pions or kaons) inside a
large transverse momentum jet inclusively produced in polarized
proton-proton collisions~\cite{Yuan:2007nd,D'Alesio:2010am}.
Although also in this case a proof of TMD factorization is not available (if one takes
into account intrinsic motion in the initial colliding hadrons), the observables considered
are rather similar to those measured in SIDIS. In particular, leading-twist
asymmetries appear and different contributions (like the Sivers or Collins effects) can be
disentangled by taking appropriate moments of the azimuthal distributions, much in the
same way adopted in the SIDIS or DY cases.
The detailed analysis of this process can be of crucial relevance, when compared with
analogous studies in the DY and SIDIS cases, for the theoretical and phenomenological
understanding of the process dependence and the universality properties of the
Sivers distribution and the validation of the expected universality of the Collins
fragmentation function. Other TMDs can also be tested in the same way.
In this review we summarize recent results concerning the study of
the Sivers and Collins azimuthal asymmetries in the distribution of leading pions
inside a jet in $p^\uparrow p\to {\rm jet}\, \pi\, X$ processes.
After a short description of the TMD theoretical approach adopted, the so-called generalized parton
model (GPM)~\cite{D'Alesio:2004up,Anselmino:2005sh}, we present
a selection of interesting results involving the Sivers and Collins effects,
that are expected to be the dominant contributions to the single spin asymmetries considered here.
We will also discuss in some detail an extension of the GPM~\cite{Gamberg:2010tj},
named colour gauge invariant (CGI) GPM,
including colour gauge factors in the approach,
and its application to the study of the process dependence of the
Sivers distribution~\cite{D'Alesio:2011mc}. Such a process dependence is indeed expected in perturbative QCD, due to the essential
role played by initial- and final-state interactions among active partons and parent
hadrons in generating these single-polarized observables.
Finally, we will shortly summarize other recent attempts to study the process dependence
of the Sivers distribution (and other TMD PDFs and FFs) in different processes and adopting different
theoretical approaches. Hopefully, the combined phenomenological
analysis of several reactions and observables will help in clarifying
essential theoretical issues crucial for a full understanding of these interesting
phenomena in the realm of QCD.
\section{\label{kinematics} Kinematics}
\begin{figure*}[t]
\includegraphics[angle=0,width=0.8\textwidth]{pion-jet-kinem-2.ps}
\caption{Kinematics for the
process $A(p_A;S)+B(p_B)\to {\rm jet}(p_{\rm j})+\pi(p_\pi)+X$ in the
center-of-mass frame of the two incoming hadrons, $A$ and $B$.
\label{fig-kinem} }
\end{figure*}
We consider the process
\begin{equation}
A (p_A; S) \,+\, B (p_B)\, \to \, {\rm jet}(p_{\rm j})\,
+\pi(p_\pi)\, +\, X\,,
\end{equation}
where $A$ and $B$ are two spin-1/2 hadrons carrying momenta $p_A$ and
$p_B$ respectively. One of the two hadrons, $A$, is in a pure transverse
spin state described by the four-vector $S$ ($S^2=-1$ and $p_A\cdot S =0$), while $B$ is unpolarized.
We work mainly in the center-of-mass (c.m.) frame of $A$ and $B$,
where
$s = (p_A+p_B)^2$ is the total energy squared, and, as depicted in Fig.~\ref{fig-kinem}, $A$ moves along the positive direction of the $\hat{\bm{Z}}_{\rm cm}$ axis. The production plane containing the colliding beams and the observed jet
is taken as the $(XZ)_{\rm cm}$ plane,
with $(\bm{p}_{\rm j})_{X_{\rm cm}}>0$. In this frame the four-momenta of the
particles and the spin vector $S$ are given by
\begin{eqnarray}
p_A &=& \frac{\sqrt{s}}{2}(1,0,0,1)\,, \qquad S \,= \,S_T \,= \,(0,\cos\phi_{S},\sin\phi_{S},0)\,,\nonumber\\
p_B &=& \frac{\sqrt{s}}{2}(1,0,0,-1)\, ,\nonumber\\
p_{\rm j} & = & (E_{\rm j}, p_{{\rm j}T},0,p_{{\rm j}L})\, =\,
E_{\rm j}(1,\sin\theta_{\rm j},0,\cos\theta_{\rm j}) \, = \,
p_{{\rm j}T}(\cosh \eta_{\rm j},1,0,\sinh \eta_{\rm j})\,, \nonumber\\
p_{\pi} &=&
E_{\pi}(1,\sin\theta_{\pi}\cos\phi_\pi,\sin\theta_{\pi}\sin\phi_\pi,\cos\theta_{\pi})\,,
\label{4mom-cm}
\end{eqnarray}
where all masses have been neglected and $\eta_{\rm j}$ denotes the jet
(pseudo)rapidity, $\eta_{\rm j} = -\log[\tan(\theta_{\rm j}/2)]$.
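The equivalence of the angle- and rapidity-based parameterizations of the jet four-momentum in Eq.~(\ref{4mom-cm}) can be checked numerically. The following short Python sketch (with arbitrarily chosen illustrative values of $\theta_{\rm j}$ and $E_{\rm j}$, not taken from the text) verifies that $E_{\rm j}(1,\sin\theta_{\rm j},0,\cos\theta_{\rm j})$ and $p_{{\rm j}T}(\cosh\eta_{\rm j},1,0,\sinh\eta_{\rm j})$ coincide once $\eta_{\rm j}=-\log[\tan(\theta_{\rm j}/2)]$:

```python
import numpy as np

# Hypothetical jet kinematics (illustrative values only)
theta_j = 0.4          # polar angle of the jet in the c.m. frame
E_j = 50.0             # jet energy in GeV

eta_j = -np.log(np.tan(theta_j / 2.0))   # jet pseudorapidity
p_jT = E_j * np.sin(theta_j)             # jet transverse momentum

# Two equivalent parameterizations of the massless jet four-momentum
p_angle = E_j * np.array([1.0, np.sin(theta_j), 0.0, np.cos(theta_j)])
p_rapid = p_jT * np.array([np.cosh(eta_j), 1.0, 0.0, np.sinh(eta_j)])

assert np.allclose(p_angle, p_rapid)
```

The check works because $\cosh\eta_{\rm j}=1/\sin\theta_{\rm j}$ and $\sinh\eta_{\rm j}=\cos\theta_{\rm j}/\sin\theta_{\rm j}$ for the above definition of $\eta_{\rm j}$.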
At leading order in perturbative QCD, the reaction proceeds
via the partonic hard scattering subprocesses $ab\to cd$, where the outgoing
parton $c$ fragments into the observed hadronic jet.
For the partonic
momenta in the hadronic c.m.\ frame, one has
\begin{eqnarray}
p_a &=& \left(x_a\frac{\sqrt{s}}{2}+\frac{{k}_{\perp a}^2}{2x_a\sqrt{s}},
k_{\perp a}\cos\phi_a,k_{\perp a}\sin\phi_a,
x_a\frac{\sqrt{s}}{2}-\frac{{k}_{\perp a}^2}{2x_a\sqrt{s}}\right)\,,\nonumber\\
p_b &=& \left(x_b\frac{\sqrt{s}}{2}+\frac{{k}_{\perp b}^2}{2x_b\sqrt{s}},
k_{\perp b}\cos\phi_b,k_{\perp b}\sin\phi_b,
-x_b\frac{\sqrt{s}}{2}+\frac{{k}_{\perp b}^2}{2x_b\sqrt{s}}\right)\,,\nonumber\\p_c & \equiv & p_{\rm j}\,,
\label{4mom-cm-par}
\end{eqnarray}
where $k_{\perp a,b} = \vert \bm{k}_{\perp a,b} \vert$. Here we have
introduced the variables $x_{a,b}$ and $\bm{k}_{\perp a,b}$,
which are, respectively, the light-cone
momentum fractions and the intrinsic transverse momenta of the incoming partons $a$ and $b$. From Eqs.~(\ref{4mom-cm}) and (\ref{4mom-cm-par}) one can calculate the partonic Mandelstam variables:
\begin{eqnarray}
\hat s = (p_a+p_b)^2 & = & x_a x_b s \left[1 - 2 \left(\frac{k_{\perp a}
k_{\perp b}}{x_ax_b s}\right) \cos(\phi_a-\phi_b) +
\left(\frac{k_{\perp a} k_{\perp b}}{x_a x_b s}\right)^2
\right] \,,\\
\hat t = (p_a-p_c)^2 & = & - x_a E_{\rm j}\sqrt{s}\,\left[ 1-\cos\theta_{\rm j} -
2 \left(\frac{k_{\perp a}}{x_a\sqrt{s}}\right) \sin\theta_{\rm j}\cos\phi_a +
\left(\frac{ k_{\perp a}}{x_a\sqrt{s}}\right)^2 (1+\cos\theta_{\rm j}) \right] = \nonumber\\
& = & -x_a p_{{\rm j}T}\sqrt{s} \left[ e^{-\eta_{\rm j}} -
2\left(\frac{k_{\perp a}}{x_a\sqrt{s}}\right)\cos\phi_a +
\left(\frac{k_{\perp a}}{x_a\sqrt{s}}\right)^2 e^{\eta_{\rm j}} \right] \,, \\
\hat u = (p_b-p_c)^2 & = & - x_b E_{\rm j}\sqrt{s}\,\left[ 1+\cos\theta_{\rm j} -
2 \left(\frac{k_{\perp b}}{x_b\sqrt{s}}\right) \sin\theta_{\rm j}\cos\phi_b +
\left(\frac{ k_{\perp b}}{x_b\sqrt{s}}\right)^2 (1-\cos\theta_{\rm j}) \right] = \nonumber\\
& = & -x_b p_{{\rm j}T}\sqrt{s} \left[ e^{\eta_{\rm j}} -
2\left(\frac{k_{\perp b}}{x_b\sqrt{s}}\right)\cos\phi_b +
\left(\frac{k_{\perp b}}{x_b\sqrt{s}}\right)^2 e^{-\eta_{\rm j}} \right] \,,
\end{eqnarray}
with the condition $\hat s + \hat t + \hat u = 0$ giving an additional constraint.
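As a numerical cross-check (a sketch with arbitrarily chosen kinematics, not part of the original analysis), one can build the four-momenta of Eqs.~(\ref{4mom-cm}) and (\ref{4mom-cm-par}) explicitly and verify that the closed-form expressions above reproduce the invariants $\hat s=(p_a+p_b)^2$ and $\hat t=(p_a-p_c)^2$:

```python
import numpy as np

def mdot(p, q):
    # Minkowski product with metric (+,-,-,-)
    return p[0] * q[0] - np.dot(p[1:], q[1:])

# Hypothetical kinematics (illustrative values only)
rs = 200.0                      # sqrt(s) in GeV
x_a, x_b = 0.3, 0.2             # light-cone momentum fractions
kTa, kTb = 1.2, 0.8             # intrinsic transverse momenta in GeV
pha, phb = 0.7, 2.1             # azimuthal angles phi_a, phi_b
p_jT, eta_j = 15.0, 1.0         # jet transverse momentum and rapidity

def parton(x, kT, phi, sign):
    # massless incoming parton with intrinsic transverse momentum
    A, B = x * rs / 2.0, kT**2 / (2.0 * x * rs)
    return np.array([A + B, kT * np.cos(phi), kT * np.sin(phi), sign * (A - B)])

p_a = parton(x_a, kTa, pha, +1)
p_b = parton(x_b, kTb, phb, -1)
p_c = p_jT * np.array([np.cosh(eta_j), 1.0, 0.0, np.sinh(eta_j)])

# invariant definitions
s_hat = mdot(p_a + p_b, p_a + p_b)
t_hat = mdot(p_a - p_c, p_a - p_c)

# closed-form expressions quoted in the text
r = kTa * kTb / (x_a * x_b * rs**2)
s_form = x_a * x_b * rs**2 * (1.0 - 2.0 * r * np.cos(pha - phb) + r**2)
ra = kTa / (x_a * rs)
t_form = -x_a * p_jT * rs * (np.exp(-eta_j) - 2.0 * ra * np.cos(pha)
                             + ra**2 * np.exp(eta_j))

assert np.isclose(s_hat, s_form) and np.isclose(t_hat, t_form)
```

Note that with these off-shell-free (massless) momenta the sum $\hat s+\hat t+\hat u$ equals $p_d^2$, so it vanishes only once the on-shell delta function for the unobserved parton $d$ is enforced.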
The helicity frame of the fragmenting parton $c$ has axes denoted by $\hat{\bm{x}}_{\rm j}$ , $\hat{\bm{y}}_{\rm j}$, $\hat{\bm{z}}_{\rm j}$, with $\hat{\bm{z}}_{\rm j}$ along the direction of motion of $c$. It
can be reached from the hadronic c.m.\ frame by performing a simple rotation
by the angle $\theta_{\rm j}$ around $\hat{\bm{Y}}_{\rm cm}\equiv \hat{\bm{y}}_{\rm j}$, as can be seen from Fig.\ \ref{fig-kinem}. Hence, in this frame,
\begin{eqnarray}
\tilde{p}_c &=& \tilde{p}_{\rm j} = E_{\rm j}(1,0,0,1)\nonumber\\
\tilde{p}_\pi &=& \left(E_\pi,\bm{k}_{\perp\pi},\sqrt{E_\pi^2-\bm{k}_{\perp\pi}^2}\,\right) =
\left(E_\pi,{k}_{\perp\pi}\cos\phi_\pi^H,{k}_{\perp\pi}
\sin\phi_\pi^H,\sqrt{E_\pi^2-\bm{k}_{\perp\pi}^2}\,\right)\,,
\label{4mom-H}
\end{eqnarray}
with $\phi_\pi^H$ being the azimuthal angle of the pion three-momentum around the jet axis, as measured in the fragmenting parton helicity frame. From Eq.\ (\ref{4mom-H}), one can obtain the expression for the light-cone momentum
fraction of the pion,
\begin{equation}
z = \frac{\tilde{p}_\pi^+}{\tilde{p}_c^+}\equiv \frac{\tilde{p}_\pi^+}{\tilde{p}_{\rm j}^+} = \frac{\tilde{p}^0_\pi + \tilde{p}^3_\pi}{\tilde{p}^0_{\rm j} + \tilde{p}^3_{\rm j}} =
\frac{E_\pi+\sqrt{E_\pi^2-\bm{k}_{\perp\pi}^2}}{2E_{\rm j}}\, .
\label{z-def}
\end{equation}
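Definition (\ref{z-def}) behaves as expected in limiting cases; for a collinear pion ($\bm{k}_{\perp\pi}=0$) it reduces to the energy fraction $z=E_\pi/E_{\rm j}$, while a nonzero $k_{\perp\pi}$ slightly reduces $z$. A minimal numerical sketch (with hypothetical values of $E_\pi$, $k_{\perp\pi}$ and $E_{\rm j}$):

```python
import math

def z_frac(E_pi, k_perp, E_jet):
    # light-cone momentum fraction of the pion, Eq. (z-def)
    return (E_pi + math.sqrt(E_pi**2 - k_perp**2)) / (2.0 * E_jet)

# collinear limit: k_perp -> 0 gives z = E_pi / E_jet
assert math.isclose(z_frac(10.0, 0.0, 25.0), 10.0 / 25.0)
# a small k_perp only mildly reduces z (illustrative values)
assert z_frac(10.0, 0.5, 25.0) < 10.0 / 25.0
```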
By writing down explicitly the three-momentum of the pion $\bm{p}_\pi$
in the parton $c$ helicity frame and in the hadronic c.m.~frame respectively,
\begin{eqnarray}
\bm{p}_\pi &=& k_{\perp\pi}\cos\phi_\pi^H\hat{\bm{x}}_{\rm j}+
k_{\perp\pi}\sin\phi_\pi^H\hat{\bm{y}}_{\rm j}+
\sqrt{E_\pi^2-\bm{k}_{\perp\pi}^2}\,\hat{\bm{z}}_{\rm j}\nonumber\\
&=& \Bigl[\,k_{\perp\pi}\cos\phi_\pi^H\cos\theta_{\rm j}+
\sqrt{E_\pi^2-\bm{k}_{\perp\pi}^2}\sin\theta_{\rm j}\,\Bigr]\hat{\bm{X}}_{\rm cm}+
k_{\perp\pi}\sin\phi_\pi^H\hat{\bm{Y}}_{\rm cm} \label{pi-H-cm}\\
&& \qquad + ~\Bigl[\,-k_{\perp\pi}\cos\phi_\pi^H\sin\theta_{\rm j}+
\sqrt{E_\pi^2-\bm{k}_{\perp\pi}^2}\cos\theta_{\rm j}\,\Bigr]\hat{\bm{Z}}_{\rm cm}\,,\nonumber
\end{eqnarray}
one finds that the intrinsic transverse momentum of the pion in the hadronic c.m.~frame can be written as
\begin{equation}
\bm{k}_{\perp\pi} = k_{\perp\pi}\cos\phi_\pi^H\cos\theta_{\rm j}\hat{\bm{X}}_{\rm cm}+
k_{\perp\pi}\sin\phi_\pi^H\hat{\bm{Y}}_{\rm cm}-
k_{\perp\pi}\cos\phi_\pi^H\sin\theta_{\rm j}\hat{\bm{Z}}_{\rm cm}\,.
\label{kpi-cm}
\end{equation}
Therefore, denoting by $\phi_{k}$ the azimuthal angle of $\bm{k}_{\perp\pi}$,
\emph{as measured in the hadronic c.m.~frame}, one obtains
\begin{equation}
\tan\phi_{k} = \frac{\tan\phi_\pi^H}{\cos\theta_{\rm j}}~.
\label{tan-phik}
\end{equation}
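Relation (\ref{tan-phik}) follows directly from the rotation in Eq.~(\ref{pi-H-cm}) and can be verified numerically. The sketch below (with hypothetical values of $k_{\perp\pi}$, $\phi_\pi^H$ and $\theta_{\rm j}$) builds $\bm{k}_{\perp\pi}$ in the hadronic c.m.\ frame, Eq.~(\ref{kpi-cm}), and compares the two azimuthal angles:

```python
import numpy as np

# Hypothetical pion/jet kinematics (illustrative values only)
k_perp = 0.6                 # |k_{perp pi}| in GeV
phi_H = 1.1                  # azimuthal angle around the jet axis
theta_j = 0.9                # jet polar angle in the c.m. frame

# components of k_{perp pi} in the hadronic c.m. frame, Eq. (kpi-cm)
kX = k_perp * np.cos(phi_H) * np.cos(theta_j)
kY = k_perp * np.sin(phi_H)
kZ = -k_perp * np.cos(phi_H) * np.sin(theta_j)

# azimuthal angle of k_{perp pi} as measured in the c.m. frame
phi_k = np.arctan2(kY, kX)

# check Eq. (tan-phik): tan(phi_k) = tan(phi_H) / cos(theta_j)
assert np.isclose(np.tan(phi_k), np.tan(phi_H) / np.cos(theta_j))
```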
In Ref.~\cite{Yuan:2007nd}, where only forward jet production was considered,
azimuthal asymmetries were given in terms of $\phi_k$ (named $\phi_h$ there).
In this kinematical configuration, $\cos\theta_{\rm j} \to 1$ and the angles
$\phi_\pi^H$ and $\phi_k$ become practically identical. On the other hand, for central-rapidity jets ($\theta_{\rm j}=\pi/2$), Eq.~(\ref{tan-phik}) gives $\phi_k=\pm\pi/2$ independently of $\phi_\pi^H$, implying that
azimuthal asymmetries expressed as a function of $\phi_k$ would be artificially
suppressed. For this reason, $\phi_\pi^H$ has to be considered the
physically relevant angle in the present analysis.
\section{\label{GPM} The generalized parton model}
The single transversely polarized cross section for the process $p(S) + p\to {\rm jet}+ \pi + X$ has been calculated in the GPM framework, using the helicity
formalism, in Ref.~\cite{D'Alesio:2010am},
to which we refer for further details. Its final expression has the following
general structure,
\begin{eqnarray}
2{\rm d}\sigma(\phi_{S},\phi_\pi^H) &\sim & {\rm d}\sigma_0
+{\rm d}\Delta\sigma_0\sin\phi_{S}+
{\rm d}\sigma_1\cos\phi_\pi^H+ {\rm d}\sigma_2\cos2\phi_\pi^H+
{\rm d}\Delta\sigma_{1}^{-}\sin(\phi_{S}-\phi_\pi^H)
\nonumber\\
&& \qquad +~{\rm d}\Delta\sigma_{1}^{+}\sin(\phi_{S}+\phi_\pi^H)
+{\rm d}\Delta\sigma_{2}^{-}\sin(\phi_{S}-2\phi_\pi^H)+
{\rm d}\Delta\sigma_{2}^{+}\sin(\phi_{S}+2\phi_\pi^H)\,,
\label{d-sig-phi-SA}
\end{eqnarray}
where, as discussed in Section~\ref{kinematics}, $\phi_\pi^H$ is the azimuthal angle of the pion three-momentum around the jet axis and $\phi_S$ is the azimuthal angle of the spin polarization vector $S$ of the polarized proton,
as measured in the hadronic c.m.\ frame. The numerator of the related single spin asymmetry is given by
\begin{eqnarray}
{\rm d}\sigma(\phi_{S},\phi_\pi^H)-
{\rm d}\sigma(\phi_{S}+\pi,\phi_\pi^H)
& \sim & {\rm d}\Delta\sigma_0\sin\phi_{S}+
{\rm d}\Delta\sigma_{1}^{-}\sin(\phi_{S}-\phi_\pi^H)+
{\rm d}\Delta\sigma_{1}^{+}\sin(\phi_{S}+\phi_\pi^H)\nonumber\\
&&+ \;{\rm d}\Delta\sigma_{2}^{-}\sin(\phi_{S}-2\phi_\pi^H)+
{\rm d}\Delta\sigma_{2}^{+}\sin(\phi_{S}+2\phi_\pi^H)\,,
\label{num-asy-gen}
\end{eqnarray}
while for the denominator we have
\begin{equation}
{\rm d}\sigma(\phi_{S},\phi_\pi^H)+
{\rm d}\sigma(\phi_{S}+\pi,\phi_\pi^H)
\equiv 2{\rm d}\sigma^{\rm unp}(\phi_\pi^H) \sim
{\rm d}\sigma_0 + {\rm d}\sigma_1\cos\phi_\pi^H+
{\rm d}\sigma_2\cos2\phi_\pi^H\,.
\label{den-asy-gen}
\end{equation}
The various terms contributing to the cross section in Eq.~(\ref{d-sig-phi-SA})
are explicitly given by convolutions of different TMD parton distribution
and fragmentation functions with hard scattering (polarized) cross sections.
For example, if we keep only the leading
contributions after integrating over the intrinsic transverse momenta of the
initial partons, the symmetric term in Eq.~(\ref{den-asy-gen}) is given by
\begin{eqnarray}
{\rm d}\sigma_0 \,\equiv\, {E_{\rm j}}\,\frac{{\rm d}\sigma_0}{{\rm d}^3 {\bm p}_{\rm j}\, {\rm d} z\, {\rm d}^2 {\bm k}_{\perp \pi}}
& = & \frac{2\alpha_s^2}{s} \sum_{a,b,c,d} \int \frac{{\rm d} x_a}{x_a}\,{\rm d}^2\bm{k}_{\perp a}
\int \frac{{\rm d} x_b}{x_b}\,{\rm d}^2{\bm k}_{\perp b} \,
\delta(\hat s+\hat t+\hat u)\,
H^U_{ab\to cd}(\hat s,\hat t,\hat u) \nonumber \\
&&\quad \,\times f_{a/A}(x_a, {\bm k}_{\perp a}^2) \, f_{b/B}(x_b, {\bm k}_{\perp b}^2)\, D_{1}^c(z, {\bm k}_{\perp \pi}^2)\,,
\label{unp}
\end{eqnarray}
where $H^U_{ab\to cd}(\hat s,\hat t, \hat u)$ is the unpolarized squared hard scattering amplitude for the partonic process $a\, b\to c\, d$, related
to the elementary cross section as follows:
\begin{equation}
\frac{{{\rm d}\hat\sigma}_{ab\to cd}}{{\rm d}\hat t} = \frac{\pi\alpha_s^2}{\hat s^2}\,
H^U_{ab\to cd}~.
\end{equation}
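As a concrete illustration of this relation (using a standard leading-order QCD result, not a formula specific to Ref.~\cite{D'Alesio:2010am}), for the channel $qq'\to qq'$ with distinct flavours one has $H^U = \tfrac{4}{9}(\hat s^2+\hat u^2)/\hat t^2$. A minimal sketch evaluating the corresponding elementary cross section:

```python
import math

def H_U_qqprime(s_hat, t_hat, u_hat):
    # colour/spin-averaged |M|^2 / g^4 for q q' -> q q' (distinct flavours);
    # standard LO QCD result, quoted here purely as an illustration
    return (4.0 / 9.0) * (s_hat**2 + u_hat**2) / t_hat**2

def dsigma_dt(s_hat, t_hat, alpha_s=0.3):
    # elementary cross section d(sigma-hat)/d(t-hat) = pi alpha_s^2 / s^2 * H^U
    u_hat = -s_hat - t_hat           # massless 2 -> 2 kinematics: s + t + u = 0
    return math.pi * alpha_s**2 / s_hat**2 * H_U_qqprime(s_hat, t_hat, u_hat)

# at 90 degrees in the partonic c.m. frame, t = u = -s/2 and H^U = 20/9
s_hat = 400.0
assert math.isclose(H_U_qqprime(s_hat, -s_hat / 2, -s_hat / 2), 20.0 / 9.0)
```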
By $f_{a/A}(x_a, {\bm k}_{\perp a}^2)$ and $f_{b/B}(x_b, {\bm k}_{\perp b}^2)$ we denote the unpolarized TMD distributions for parton $a$ inside hadron $A$ and for parton $b$ inside
hadron $B$, respectively, while $D_{1}^c(z,\bm{k}_{\perp\pi}^2)$ is
the unintegrated fragmentation function for the unpolarized parton $c$ that
fragments into a pion. The term containing the $\sin\phi_S$ modulation in Eq.~(\ref{num-asy-gen})
is related to the Sivers effect,
\begin{eqnarray}
{\rm d}\Delta\sigma_0\sin\phi_S \,\equiv\, {E_{\rm j}}\,\frac{{\rm d}\Delta\sigma^{(\rm{Sivers})}}
{{\rm d}^3 {\bm p}_{\rm j}\, {\rm d} z\, {\rm d}^2 {\bm k}_{\perp \pi}}
& = & \frac{2\alpha_s^2}{s} \sum_{a,b,c,d} \int \frac{{\rm d} x_a}{x_a}\,{\rm d}^2\bm{k}_{\perp a}
\int \frac{{\rm d} x_b}{x_b}\,{\rm d}^2{\bm k}_{\perp b} \,
\delta(\hat s+\hat t+\hat u)\,
H^U_{ab\to cd}(\hat s,\hat t,\hat u) \nonumber \\
&&\quad \,\times \Big ( -\frac{k_{\perp a}} {M} \Big ) f_{1T}^{\perp a}(x_a, {\bm k}_{\perp a}^2) \cos\phi_a
\, f_{b/B}(x_b, {\bm k}_{\perp b}^2)\,
D_{1}^c(z, {\bm k}_{\perp \pi}^2)\sin\phi_S\,,
\label{sivers}
\end{eqnarray}
where $M$ is the proton mass and $f_{1T}^{\perp a}(x_a, \bm{k}_{\perp a}^2)$
the Sivers function, also denoted as
$\Delta^{N}\!f_{a/p^\uparrow}=-2(k_\perp/M)f_{1T}^{\perp a}$~\cite{Bacchetta:2004jz}.
Notice that, for a direct comparison with the CGI~GPM approach,
in this review we adopt the so-called Amsterdam notation~\cite{Mulders:1995dh,Boer:1997nt}
instead of the usual GPM notation~\cite{Anselmino:2005sh,D'Alesio:2007jt}.
The term containing the $\sin(\phi_S-\phi_\pi^H)$ modulation in Eq.~(\ref{num-asy-gen}) corresponds
to the Collins effect,
\begin{eqnarray}
{\rm d}\Delta\sigma_1^-\sin(\phi_S-\phi_\pi^H) & \equiv & {E_{\rm j}}\,\frac{{\rm d}\Delta\sigma^{(\rm{Collins})}}
{{\rm d}^3 {\bm p}_{\rm j}\, {\rm d} z\, {\rm d}^2 {\bm k}_{\perp \pi}}
\, = \, \frac{2\alpha_s^2}{s} \sum_{a,b,c,d} \int \frac{{\rm d} x_a}{x_a}\,{\rm d}^2\bm{k}_{\perp a}
\int \frac{{\rm d} x_b}{x_b}\,{\rm d}^2{\bm k}_{\perp b} \,
\delta(\hat s+\hat t+\hat u)\,
H^U_{ab\to cd}(\hat s,\hat t,\hat u) \nonumber \\
& & \times\, h_{1}^a(x_a, {\bm k}_{\perp a}^2) \cos(\phi_a-\psi)\, f_{b/B}(x_b, {\bm k}_{\perp b}^2)
\frac{k_{\perp \pi}}{z M_\pi}H_{1}^{\perp c}(z, {\bm k}_{\perp \pi}^2)
\,d_{NN}(\hat s,\hat t,\hat u) \,
\sin(\phi_{S}-\phi_\pi^H)\,,
\label{collins}
\end{eqnarray}
where the Collins fragmentation function of the struck quark $c$,
$H_{1}^{\perp c}(z,\bm{k}_{\perp\pi}^2)$
(or $\Delta^N\!D_{\pi/c^\uparrow}=2(k_{\perp\pi}/zM_\pi)H_{1}^{\perp c})$,
is convoluted with the unintegrated
transversity distribution, $h_1^a(x_a, \bm{k}_{\perp a}^2)$, that is the
distribution of transversely polarized quarks in a transversely polarized
hadron. In Eq.\ (\ref{collins}), $M_\pi$ is the pion mass,
$d_{NN}$ is the spin transfer asymmetry for the partonic process
$a^\uparrow b\to c^\uparrow d$,
\begin{equation}
d_{NN} = \frac{\sigma^{a^\uparrow b\to c^\uparrow d} - \sigma^{a^\uparrow b\to c^\downarrow d}}{\sigma^{a^\uparrow b\to c^\uparrow d} + \sigma^{a^\uparrow b\to c^\downarrow d}} \,,
\end{equation}
and $\psi$ the corresponding azimuthal phase~\cite{D'Alesio:2010am}.
In order to single out the different contributions to the polarized cross section, we introduce the following average values of the circular
functions of $\phi_{S}$ and $\phi_\pi^H$ appearing in Eq.~(\ref{d-sig-phi-SA}),
\begin{equation}
\langle\,W(\phi_{S},\phi_\pi^H)\,\rangle(\bm{p}_{\rm j},z,k_{\perp\pi})=
\frac{\int{\rm d}\phi_{S}\,{\rm d}\phi_\pi^H\,
W(\phi_{S},\phi_\pi^H)\,{\rm d}\sigma(\phi_{S},\phi_\pi^H)}
{\int{\rm d}\phi_{S}\,{\rm d}\phi_\pi^H{\rm\, d}\sigma(\phi_{S},\phi_\pi^H)}\,.
\label{average}
\end{equation}
For single-spin asymmetries it is preferable to define azimuthal moments,
in analogy with the SIDIS case,
\begin{eqnarray}
A_N^{W(\phi_{S},\phi_\pi^H)}(\bm{p}_{\rm j},z,k_{\perp\pi})
&=&
2\,\frac{\int{\rm d}\phi_{S}\,{\rm d}\phi_\pi^H\,
W(\phi_{S},\phi_\pi^H)\,[{\rm d}\sigma(\phi_{S},\phi_\pi^H)-
{\rm d}\sigma(\phi_{S}+\pi,\phi_\pi^H)]}
{\int{\rm d}\phi_{S}\,{\rm d}\phi_\pi^H\,
[{\rm d}\sigma(\phi_{S},\phi_\pi^H)+
{\rm d}\sigma(\phi_{S}+\pi,\phi_\pi^H)]}\,,
\label{gen-mom}
\end{eqnarray}
with $W(\phi_{S},\phi_\pi^H)$ now being one of the angular modulations in
Eq.\ (\ref{num-asy-gen}). In the following we will focus mainly on
the two observables that are the most relevant from the phenomenological point of view: the Collins and the Sivers contributions to $A_N$, namely
$A_N^{\sin(\phi_S-\phi_\pi^H)}$ and $A_N^{\sin\phi_S}$.
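The projection in Eq.~(\ref{gen-mom}) can be illustrated with a toy model: for a cross section of the form (\ref{d-sig-phi-SA}) with known coefficients (keeping only a subset of the modulations for brevity), the orthogonality of the angular functions implies, e.g., $A_N^{\sin(\phi_S-\phi_\pi^H)}={\rm d}\Delta\sigma_1^-/{\rm d}\sigma_0$. A minimal numerical sketch (with arbitrarily chosen coefficients) recovering this by direct integration:

```python
import numpy as np

# Toy coefficients of Eq. (d-sig-phi-SA) (hypothetical values)
sig0, sig1, sig2 = 1.0, 0.15, 0.05
dsig0, dsig1m, dsig1p = 0.08, 0.20, 0.04

def dsigma(phi_S, phi_H):
    # toy version of 2 dsigma(phi_S, phi_H), Eq. (d-sig-phi-SA)
    return (sig0 + dsig0 * np.sin(phi_S)
            + sig1 * np.cos(phi_H) + sig2 * np.cos(2 * phi_H)
            + dsig1m * np.sin(phi_S - phi_H)
            + dsig1p * np.sin(phi_S + phi_H))

# uniform grid over both azimuthal angles
phi = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
S, H = np.meshgrid(phi, phi, indexing="ij")

W = np.sin(S - H)                                  # Collins-like weight
num = np.mean(W * (dsigma(S, H) - dsigma(S + np.pi, H)))
den = np.mean(dsigma(S, H) + dsigma(S + np.pi, H))
A_N = 2.0 * num / den

assert np.isclose(A_N, dsig1m / sig0)
```

The $\phi_S\to\phi_S+\pi$ difference isolates the polarized terms of Eq.~(\ref{num-asy-gen}), and the $\sin(\phi_S-\phi_\pi^H)$ weight then projects out the Collins coefficient alone.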
\section{Phenomenological results}
Here as well as in the following sections we review some phenomenological
implications of the TMD generalized parton model approach
for the $p^\uparrow p\to {\rm jet}\,\pi\,X$ and $p^\uparrow p\to {\rm jet}\,X$ processes
in kinematical configurations accessible at RHIC by the STAR and PHENIX experiments.
We consider both central ($\eta_{\rm j}=0$) and forward
($\eta_{\rm j}=3.3$) (pseudo)rapidity configurations,
at c.m.~energies $\sqrt{s} =$ 200 GeV and 500 GeV.
A more detailed account and additional phenomenological results are given in Ref.~\cite{D'Alesio:2010am}.
Preliminary STAR results at $\sqrt{s}=200$ GeV for the Collins azimuthal asymmetry in the process
$p^\uparrow p \to {\rm jet}\,\pi^{\pm}\,X$ in the mid-rapidity region~\cite{Fatemi:2012ry}
and for the Collins and Sivers azimuthal asymmetries in $p^\uparrow p \to {\rm jet}\,\pi^{0}\,X$
at forward rapidities~\cite{Poljak:2011vu} are also available.
A phenomenological analysis of these results in the GPM approach, with proper account of all
jet kinematical cuts, is in progress and will be presented elsewhere~\cite{dalesio:2013prg}.
In the sequel TMD parton distribution and
fragmentation functions are parameterized with a simplified
functional form in which the dependences on
the parton light-cone momentum fraction and on the transverse
momentum are completely factorized.
Notice however that kinematical constraints due to usual
parton model requirements (implemented in numerical calculations)
effectively lead to correlations between the light-cone
momentum fraction and the transverse momentum, particularly at
very small and very large ($\to 1$) momentum fractions
(for more details, see e.g.~appendix A of Ref.~\cite{D'Alesio:2004up}).
Moreover, we assume a Gaussian-like flavour-independent
shape for the transverse momentum component.
Preliminary lattice QCD calculations seem to support the validity of this
assumption, see e.g.~Ref.~\cite{Hagler:2009ni}.
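Concretely, the factorized ansatz typically used in this framework reads $f_{a/A}(x,\bm{k}_\perp^2)=f_{a/A}(x)\,e^{-k_\perp^2/\langle k_\perp^2\rangle}/(\pi\langle k_\perp^2\rangle)$, with the Gaussian normalized so that integrating over $\bm{k}_\perp$ recovers the collinear distribution. A short sketch (with a hypothetical width of the typical size used in such fits) checking the normalization numerically:

```python
import numpy as np

# Hypothetical Gaussian width (values of this order appear in typical fits)
kT2_avg = 0.25   # <k_perp^2> in GeV^2

def gauss_kT(kT2):
    # normalized transverse-momentum factor of the factorized TMD ansatz
    return np.exp(-kT2 / kT2_avg) / (np.pi * kT2_avg)

# integral over d^2 k_perp = 2 pi * int kT dkT, should give 1
kT = np.linspace(0.0, 5.0, 200001)
integrand = 2.0 * np.pi * kT * gauss_kT(kT**2)
norm = float(np.sum(0.5 * (integrand[1:] + integrand[:-1])) * (kT[1] - kT[0]))

assert np.isclose(norm, 1.0)
```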
Concerning the parameterizations of the quark transversity and
Sivers distributions, and of the quark Collins functions,
we will consider two sets:
SIDIS~1 \cite{Anselmino:2005ea, Anselmino:2007fs} and
SIDIS~2 \cite{Anselmino:2008sga,Anselmino:2008jk}.
The set SIDIS~1 includes the $u$, $d$ quark Sivers functions of Ref.~\cite{Anselmino:2005ea},
the $u$, $d$ quark transversity distributions and the favoured
and disfavoured Collins FFs of Ref.~\cite{Anselmino:2007fs}.
The Kretzer
set~\cite{Kretzer:2000yf} for collinear pion FFs was used.
Instead, the set SIDIS~2 includes the $u$, $d$, and sea-quark
Sivers functions of Ref.~\cite{Anselmino:2008sga} and the updated set
of the $u$, $d$ quark transversity distributions and of the favoured
and disfavoured Collins FFs of Ref.~\cite{Anselmino:2008jk}.
In this case, the DSS set~\cite{deFlorian:2007aj} for collinear pion and kaon FFs was adopted.
In both cases, for the usual collinear parton distributions, the LO
unpolarized set GRV98~\cite{Gluck:1998xa} and the corresponding
longitudinally polarized set GRSV2000~\cite{Gluck:2000dy} (needed in order to
implement the Soffer bound~\cite{Soffer:1994ww} for the transversity distribution) were adopted.
Notice that quite recently, updated parameterizations of the transversity distribution
and of the Collins function within the GPM approach have been released~\cite{Anselmino:2013vqa}.
Since they are qualitatively similar to those adopted in Ref.~\cite{D'Alesio:2010am},
for ease of comparison they will not be used in the following.
Since the jet transverse momentum
(the hard scale in the process) covers a significant range,
one should properly take into account the QCD evolution
of all TMDs.
On the other hand, a formal proof of TMD factorization for such processes
is still missing and the study of TMD evolution is at present at an early stage.
Therefore, we tentatively take into account proper evolution
with scale, at leading order, for the usual collinear PDFs and FFs,
while keeping the transverse momentum component of all TMDs
fixed.
The study of the formal aspects and the related phenomenology of the correct
QCD evolution with scale of TMD PDFs and FFs has received a lot of attention
quite recently. Several papers have investigated proper TMD evolution equations for the Sivers
function and their phenomenological implications, see e.g.~Refs.~\cite{Aybat:2011zv,Aybat:2011ge,Aybat:2011ta,Anselmino:2012aa,Boer:2013zca,Sun:2013dya,Echevarria:2012pw}.
The TMD evolution of the helicity and transversity parton distributions has been
considered e.g.~in Ref.~\cite{Bacchetta:2013pqa}.
No information is available yet on the TMD evolution of the Collins fragmentation
functions.
In all cases considered, $\bm{k}_{\perp\pi}$ is integrated out and, since we are interested in leading particles inside the jet, we present results obtained
integrating over the light-cone momentum fraction of the observed hadron, $z$,
in the range $z\geq 0.3$. Different choices, according to the kinematical cuts of interest in specific experiments,
can be easily implemented in the numerical calculations.
We have considered first, for $\pi^+$ production only,
an extreme scenario in which the effects of
all TMD functions are over-maximized. By this we mean that all TMDs
are maximized in size by saturating their natural positivity bounds.
The transversity distribution has been fixed at the initial scale
by saturating the Soffer bound~\cite{Soffer:1994ww} and then we let it
evolve. Moreover, the relative signs of
all active partonic contributions are chosen so that they
sum up additively. In this way we set
an upper bound on the absolute value of any of the effects playing
a potential role in the azimuthal asymmetries.
Therefore, all effects that are negligible or even
marginal in this scenario may be directly discarded in subsequent
refined phenomenological analyses. See Ref.~\cite{D'Alesio:2010am} for a more detailed discussion.
As a second step in our study we consider, for both neutral and charged pions,
only the dominant contributions, that is the Collins and the Sivers effects,
involving TMD functions for which parameterizations
are available from independent fits to other spin and azimuthal
asymmetries data in SIDIS and $e^+e^-$ processes (the SIDIS~1
and SIDIS~2 sets discussed above).
\subsection{The Collins asymmetries \label{sec:res-coll}}
The Collins fragmentation function contributes to two of the
azimuthal moments defined in Eq.\ (\ref{gen-mom}), namely $A_N^ {\sin(\phi_S+\phi_\pi^H)}$ and $A_N^ {\sin(\phi_S-\phi_\pi^H)}$. In $A_N^ {\sin(\phi_S+\phi_\pi^H)}$ it is convoluted with two different terms:
\begin{equation}
A_N^ {\sin(\phi_S+\phi_\pi^H)}\sim
\left [ h_{1 T}^{\perp q}(x_a,\bm{k}_{\perp a}^2)\otimes f_1(x_b,\bm{k}_{\perp b}^2) +
f_{1 T}^{\perp}(x_a,\bm{k}_{\perp a}^2) \otimes h_1^{\perp q}(x_b,\bm{k}_{\perp b}^2) \right ]
\otimes H_1^{\perp q}(z,\bm{k}_{\perp \pi}^2)~.
\label{eq:Coll1}
\end{equation}
The first term is related to the so-called pretzelosity distribution $h_{1 T}^{\perp q}$, while the second one, which enters also in the expression for $A_N^ {\sin(\phi_S-\phi_\pi^H)}$, involves in the convolution the Sivers and
Boer-Mulders ($h_1^{\perp q}$) functions.
\begin{figure}[t]
\begin{center}
\includegraphics[angle=0,width=0.4\textwidth]{asy_coll_par_SIDIS1.eps}
\includegraphics[angle=0,width=0.4\textwidth]{asy_coll_par_SIDIS2.eps}
\caption{The Collins asymmetry $A_N^{\sin(\phi_{S}-\phi_\pi^H)}$ for the process $p^\uparrow \, p\to {\rm jet}\,
\pi \, X$, as a function of $p_{{\rm j} T}$, at fixed value of the rapidity $\eta_{\rm j}$ and c.m.~energy
$\sqrt{s}= 200$ GeV. Estimates are obtained by adopting the parameterizations
SIDIS~1 (left panel)
and SIDIS~2 (right panel). The dotted vertical line delimits the region $x_F \approx 0.3$,
beyond which the currently available parameterizations for the
quark transversity distributions, extracted from SIDIS data, are affected
by large uncertainties.
\label{asy-an-coll-par200} }
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[angle=0,width=0.4\textwidth]{asy_coll_par_SIDIS1_500.eps}
\includegraphics[angle=0,width=0.4\textwidth]{asy_coll_par_SIDIS2_500.eps}
\caption{The same as for Fig.~\ref{asy-an-coll-par200}, but at c.m.~energy
$\sqrt{s}= 500$ GeV.
\label{asy-an-coll-par500} }
\end{center}
\end{figure}
As described above, and in more detail in Ref.~\cite{D'Alesio:2010am},
it has been checked that the upper bound of this asymmetry is always negligible, hence it will not
be considered again in the following. Same conclusions hold for the
Collins-like azimuthal moment $ A_N^ {\sin(\phi_S+2 \phi_\pi^H)}$
originating from the fragmentation of linearly polarized gluons, which has a
structure similar to Eq.~(\ref{eq:Coll1}), with quarks replaced by
gluons.
The azimuthal asymmetry $A_N^{\sin(\phi_{S}- \phi_\pi^H)}$ is
dominated by a convolution of the transversity distribution and the Collins
fragmentation function,
\begin{equation}
A_N^{\sin(\phi_S-\phi_\pi^H)} \sim h_1^q(x_a,\bm{k}_{\perp a}^2) \otimes
f_1(x_b,\bm{k}_{\perp b}^2) \otimes H_1^{\perp\, q}(z,\bm{k}_{\perp \pi}^2)\,,
\label{eq:Coll2}
\end{equation}
see Eq.\ (\ref{collins}).
A similar expression holds for its gluonic counterpart $A_N^{\sin(\phi_{S}- 2 \phi_\pi^H)}$. Their upper bounds turn out to be sizeable, at least in some kinematic domains \cite{D'Alesio:2010am}.
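The symbol $\otimes$ in Eq.~(\ref{eq:Coll2}) denotes a convolution over the partons' intrinsic transverse momenta. With the Gaussian $k_\perp$ ansatz commonly adopted in these phenomenological extractions, such convolutions remain Gaussian, with the widths adding. A minimal numerical sketch of this property (plain Python; the widths below are toy values, not the fitted parameters):

```python
import math

def gauss2d(kx, ky, mean_kt2):
    # Normalized 2D Gaussian: f(k) = exp(-k_perp^2/<k_perp^2>) / (pi <k_perp^2>)
    return math.exp(-(kx * kx + ky * ky) / mean_kt2) / (math.pi * mean_kt2)

def convolve_at(qx, a, b, kmax=4.0, n=200):
    # Midpoint-rule evaluation of (f x g)(q) = int d^2k f(k) g(q - k),
    # with q chosen along the x axis
    dk = 2.0 * kmax / n
    total = 0.0
    for i in range(n):
        kx = -kmax + (i + 0.5) * dk
        for j in range(n):
            ky = -kmax + (j + 0.5) * dk
            total += gauss2d(kx, ky, a) * gauss2d(qx - kx, -ky, b)
    return total * dk * dk

a, b = 0.25, 0.20  # toy <k_perp^2> values in GeV^2, purely illustrative
q = 0.5            # GeV
numeric = convolve_at(q, a, b)
analytic = math.exp(-q * q / (a + b)) / (math.pi * (a + b))  # widths add: a + b
```

This additivity of the Gaussian widths is what makes the $\otimes$ structure of expressions such as Eq.~(\ref{eq:Coll2}) analytically tractable in the fits.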
In Figs.~\ref{asy-an-coll-par200} and~\ref{asy-an-coll-par500} we show our
estimates for $A_N^{\sin(\phi_{S}- \phi_\pi^H)}$ at the RHIC energies $\sqrt{s} = 200$ GeV
and 500 GeV respectively, as a function of the transverse momentum of the jet, $p_{{\rm j}T}$, and at fixed jet rapidity ($\eta_{\rm j}=3.3$). These results have been obtained by adopting the parameterizations SIDIS~1 and SIDIS~2.
Notice that while the results of Fig.~\ref{asy-an-coll-par200} are taken from Ref.~\cite{D'Alesio:2010am},
those of Fig.~\ref{asy-an-coll-par500} are presented here for the first time.
Our prediction of an almost vanishing asymmetry for neutral
pions, confirmed very recently by preliminary data at $\sqrt{s} = 200$ GeV from the STAR Collaboration~\cite{Poljak:2011vu},
is a consequence of the comparable size and the opposite sign, in both parameterizations, of the favoured (e.g.~$u\to\pi^+$) and disfavoured (e.g.~$d\to\pi^+$) Collins fragmentation functions. In fact, because of isospin invariance,
the Collins function for neutral pions is given by half the sum
of the fragmentation functions for charged pions, and hence turns out to be
very small.
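This isospin cancellation can be made concrete with toy numbers (the values below are hypothetical, chosen only to mimic the comparable-size, opposite-sign pattern found in the fitted functions):

```python
# Hypothetical favoured (u -> pi+) and disfavoured (d -> pi+) Collins FF
# values at some fixed (z, k_perp): comparable size, opposite sign
H_fav = 0.40
H_disf = -0.35

# Isospin invariance: the pi0 Collins function is half the sum of the
# charged-pion ones, and is therefore strongly suppressed
H_pi0 = 0.5 * (H_fav + H_disf)
```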
In addition, further cancellations among quark contributions are due to the
relative opposite sign of the transversity distribution for $u$ and $d$
flavours. Concerning charged pions, the two parameterizations give comparable
results only in the kinematic domain where the Feynman variable
$x_F=2 p_{{\rm j} L}/\sqrt{s}$ is equal to or smaller than the value
$x_F \approx 0.3$, denoted by the dotted vertical lines in
Figs.~\ref{asy-an-coll-par200} and~\ref{asy-an-coll-par500} (notice the different scales used in the
two panels). This corresponds to the Bjorken $x$ region covered by
the SIDIS data that have been used to determine the available
parameterizations for the transversity distributions. Extrapolation beyond
$x_F \approx 0.3$, where transversity is not constrained,
leads to completely different estimates at large $p_{{\rm j} T}$,
as shown in the figures.
Based on these considerations, in a recent paper~\cite{Anselmino:2012rq}
(to which we refer for more details) a different and complementary analysis
(denoted as ``scan procedure'') has been performed.
The large $x$ behaviour of the quark transversity distribution
is mainly controlled by the parameters $\beta_q$ ($q=u,\, d$) in the factor
$(1-x)^{\beta_q}$ of the parametrization~\cite{D'Alesio:2010am}, which are basically unconstrained
by SIDIS data. Therefore, starting from a reference fit (with a given total $\chi^2$, $\chi^2_0$)
to updated SIDIS and $e^+e^-$ data (which, although based on the same
collinear PDFs and FFs, differs slightly from the SIDIS~1 set),
the following procedure has been implemented:
first, we fix $\beta_u$ and $\beta_d$ within the range $[0,4]$, in discrete steps of $0.5$, for a total of 81
different $\{\beta_u,\beta_d\}$ configurations;
second, for each of these $\{\beta_u,\beta_d\}$ pairs, we perform a new fit of the
remaining parameters and evaluate the corresponding total $\chi^2$.
Only those configurations with a $\Delta\chi^2=\chi^2-\chi^2_0$ less than
a statistically significant reference value (see Ref.~\cite{Anselmino:2012rq} for further details)
have been kept. In practice, in this case all 81 configurations fulfil the selection criterion, reinforcing the conclusion
that presently available SIDIS data do not constrain the large $x$
behaviour of the TMD transversity distribution.
For a given process of interest and the related azimuthal asymmetries,
like e.g.~the inclusive particle production in polarized $pp$ collisions
studied in this review (in particular in the large $x_F$ region),
the final step of the scan procedure consists in taking the full envelope of the
asymmetry values generated by all the selected configuration sets.
This envelope gives an estimate of the uncertainty in the asymmetry calculation
due to the limited $x_B$ range covered by SIDIS data and the consequent indeterminacy
in the large $x$ behaviour of the quark transversity distribution.
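The scan procedure just described can be summarized algorithmically. In the sketch below, `refit` and `asymmetry` are placeholder callables standing in for the actual fitting machinery and asymmetry computation; the toy stand-ins only illustrate the grid, the $\Delta\chi^2$ selection and the envelope:

```python
import itertools

def scan_procedure(chi2_0, delta_chi2_max, refit, asymmetry):
    """Grid over (beta_u, beta_d), refit the remaining parameters,
    keep the statistically acceptable configurations, and return the
    envelope (scan band) of the predicted asymmetry."""
    betas = [0.5 * i for i in range(9)]                     # [0, 4] in steps of 0.5
    kept = []
    for beta_u, beta_d in itertools.product(betas, betas):  # 81 configurations
        params, chi2 = refit(beta_u, beta_d)                # new fit with betas fixed
        if chi2 - chi2_0 <= delta_chi2_max:                 # Delta chi^2 selection
            kept.append(params)
    values = [asymmetry(p) for p in kept]
    return min(values), max(values)

# Toy stand-ins: every configuration fits equally well (in practice all 81
# pass the selection), and the asymmetry depends only on the betas here.
refit = lambda bu, bd: ((bu, bd), 100.0)
asym = lambda p: 0.05 * (4.0 - p[0]) - 0.02 * p[1]
band = scan_procedure(chi2_0=100.0, delta_chi2_max=9.0, refit=refit, asymmetry=asym)
```

In the actual analysis each `refit` call is itself a full minimization over the remaining fit parameters, which is the computationally expensive step.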
As an example, in Fig.~\ref{asy-an-coll-scan-500} we show the resulting scan bands for the
Collins azimuthal asymmetry $A_N^{\sin(\phi_{S}- \phi_\pi^H)}$ for neutral and
charged pions at the RHIC c.m.~energy $\sqrt{s}=500$ GeV, as a function of the jet
transverse momentum and at fixed jet pseudorapidity, $\eta_{\rm j}=3.3$ (that is, the same
kinematical configuration as in Fig.~\ref{asy-an-coll-par500}).
\begin{figure}[t]
\begin{center}
\includegraphics[angle=0,width=0.4\textwidth]{asy_coll_scan_500.eps}
\caption{
Scan bands (that is, the envelope of possible values) for the
Collins azimuthal asymmetry
$A_N^{\sin(\phi_{S}-\phi_\pi^H)}$ for the process $p^\uparrow \, p\to {\rm jet}\,
\pi \, X$, as a function of $p_{{\rm j} T}$, at fixed value of the pseudorapidity,
$\eta_{\rm j}=3.3$ and c.m.~energy $\sqrt{s}= 500$ GeV.
The shaded bands are generated following the scan procedure explained in the text
(see Ref.~\cite{Anselmino:2012rq} for more details).
\label{asy-an-coll-scan-500} }
\end{center}
\end{figure}
It is clear from this plot how the uncertainty on the asymmetry grows as $p_{{\rm j} T}$
(and consequently $x_F$) increases. This information is complementary to, and integrates,
the indications obtained by comparing the results of the specific SIDIS~1 and SIDIS~2 sets in
Figs.~\ref{asy-an-coll-par200} and~\ref{asy-an-coll-par500}.
It is also clear that future measurements of the Collins
asymmetries for charged pions in $p^\uparrow p \to {\rm jet}\, \pi\, X$ processes
would be very helpful in delineating
the large $x$ behaviour of the quark transversity distributions.
We point out that in the central rapidity region these asymmetries are much
smaller. Nevertheless, they are currently under
active investigation by the STAR Collaboration \cite{Poljak:2011vu,Fatemi:2012ry}.
Finally, analogous estimates for the azimuthal moment
$A_N^{\sin(\phi_{S}- 2 \phi_\pi^H)}$ cannot be provided, since
the underlying TMD gluon distribution and fragmentation functions
are still completely unknown.
\subsection{The Sivers asymmetries}
\begin{figure}[t]
\begin{center}
\includegraphics[angle=0,width=0.35\textwidth]{asy_siv_par_pip.eps}
\hspace*{-20pt}
\includegraphics[angle=0,width=0.35\textwidth]{asy_siv_par_pi0.eps}
\hspace*{-20pt}
\includegraphics[angle=0,width=0.35\textwidth]{asy_siv_par_pim.eps}
\caption{The Sivers asymmetry $A_N^{\sin\phi_{S}}$
for the process $p^\uparrow \, p\to {\rm jet}\, \pi \, X$, as a function
of $p_{{\rm j} T}$, at fixed value of the rapidity $\eta_{\rm j}$ and c.m.\ energy $\sqrt{s}= 200$ GeV. Estimates for the quark contribution are obtained
by adopting the
parametrization sets SIDIS~1 and SIDIS~2. The gluon Sivers function is
assumed to be positive and to saturate an updated version of the
bound in Ref.~\cite{Anselmino:2006yq}. The dotted vertical line delimits the region $x_F\approx 0.3$, beyond which the currently available
parameterizations for the
quark Sivers function, extracted from SIDIS data, are affected by large
uncertainties.
\label{asy-an-siv-par200} }
\end{center}
\end{figure}
In analogy to Eqs.\ (\ref{eq:Coll1}) and (\ref{eq:Coll2}), the azimuthal
moment $A_N^ {\sin\phi_S}$ can be written
schematically as
\begin{equation}
A_N^ {\sin\phi_S}\sim f_{1 T}^{\perp}(x_a,\bm{k}_{\perp a}^2) \otimes
f_1(x_b,\bm{k}_{\perp b}^2) \otimes D_1(z,\bm{k}_{\perp \pi}^2)\,,
\label{eq:Siv}
\end{equation}
{\it i.e.}\ as a convolution of the Sivers function for the parton inside
the transversely polarized proton with the unpolarized TMD
distribution and fragmentation functions of the two other active partons
in the hard scattering.
The explicit expression for the
numerator of the asymmetry is given in Eq.~(\ref{sivers}). Both the quark and
gluon Sivers functions contribute to this observable, and in principle
these contributions cannot be separated. Nevertheless, it should be possible to disentangle
the two terms by looking at particular kinematic domains in which only
one of them is expected to be sizeable and to dominate the asymmetry~\cite{D'Alesio:2010am}.
In Fig.~\ref{asy-an-siv-par200} $A_N^{\sin\phi_{S}}$ is presented,
for both neutral and charged pions, at the c.m.~energy $\sqrt{s}=200$ GeV and
in the forward rapidity region
($\eta_{\rm j}=3.3$), as a function of $p_{{\rm j} T}$.
The quark Sivers contribution is estimated adopting the SIDIS~1 and
SIDIS~2 parameterizations, which give comparable results only in the $p_{{\rm j} T}$ region where they are constrained by SIDIS data (see, as for the case of
the Collins asymmetry, the dotted vertical line). The almost unknown gluon
Sivers function is tentatively taken to be positive and to saturate an updated version
of the bound calculated in Ref.~\cite{Anselmino:2006yq} by analyzing PHENIX
data for transverse single spin asymmetries for the process
$p^{\uparrow}\,p\to \pi^0\,X$, with the neutral pion being produced in the
central rapidity region.
Clearly, the measurement of $A_N^{\sin\phi_{S}}$ at large $p_{{\rm j} T}$,
where the role of the gluon Sivers function becomes negligible,
could be quite helpful in discriminating between the SIDIS~1 and SIDIS~2
parameterizations and constraining the large $x$ behaviour of the $u$, $d$ quark
Sivers functions.
The present analysis can be extended to the transverse single
spin asymmetry $A_N^{\sin\phi_S}$ for inclusive jet production in
$p^\uparrow \,p\to {\rm jet}\,X$, by simply integrating the results for
the process $p^\uparrow \,p\to {\rm jet}\,\pi\,X$ over the pion phase space.
In this case, in the general structure of the asymmetry in
Eq.~(\ref{num-asy-gen}), only the
$\sin\phi_S$ modulation will be present, since all the mechanisms related to
the fragmentation process cannot play a role. The numerator of
$A_N^{\sin\phi_S}$ will be given by Eq.~(\ref{sivers}),
in which the fragmentation function $D_{1}^c(z,\bm{k}^2_{\perp \pi})$
is replaced by $\delta(z-1)\,\delta^2(\bm{k}_{\perp \pi})$. As already done
for jet-pion production, we have checked explicitly that,
for the kinematic configurations under study, all possible contributions
other than the Sivers one are numerically irrelevant and can therefore be
safely neglected.
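Explicitly, with this replacement the $z$ and $\bm{k}_{\perp \pi}$ integrations become trivial, and the numerator takes the jet-level form (shown here schematically, with the same notation as the pion-jet expression; see Eq.~(\ref{sivers}) for the exact conventions)
\begin{eqnarray}
{E_{\rm j}}\, \frac{{\rm d}\Delta\sigma^{(\rm Sivers)}}{{\rm d}^3 {\bm p}_{\rm j}}
& = & \frac{2\, \alpha_s^2}{s} \sum_{a,b,c,d} \int \frac{{\rm d} x_a}{x_a}\,{\rm d}^2\bm{k}_{\perp a}\,
\int \frac{{\rm d} x_b}{x_b}\,{\rm d}^2{\bm k}_{\perp b} \,\delta(\hat s+\hat t+\hat u)\,H^{U}_{ab\to cd}(\hat s,\hat t,\hat u)
\nonumber \\
&&\qquad\times
\Big ( -\frac{k_{\perp a}}{M}\Big)f_{1T}^{\perp a}(x_a, {\bm k}_{\perp a}^2) \cos\phi_a
\, f_{b/B}(x_b, {\bm k}_{\perp b}^2)\, \sin\phi_{S} \,. \nonumber
\end{eqnarray}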
In Fig.~\ref{asy-siv-jet-200} we present our
results for $A_N^{\sin\phi_S}$ for inclusive jet production
at the c.m.\ energy $\sqrt{s}=200$ GeV,
as a function of $p_{{\rm j} T}$, at the fixed rapidities $\eta_{\rm j}=0$
(left panel) and $\eta_{\rm j} = 3.3$ (right panel).
As before, they have been obtained utilizing the parameterizations SIDIS~1
and SIDIS~2 for the quark Sivers functions and an updated version of the
bound presented in Ref.~\cite{Anselmino:2006yq} for the gluon Sivers
function (taken to be positive). Predictions in the forward rapidity region
are very similar to those for jet-neutral pion production shown in the
central panel of Fig.~\ref{asy-an-siv-par200},
where the gluon component dominates only at very low values of $p_{{\rm j} T}$
and decreases quickly as $p_{{\rm j} T}$ increases. On the other hand,
in the central rapidity region, the gluon component is always larger than
the quark one, the latter being practically negligible.
A measurement of $A_N^{\sin\phi_S}$ in this kinematic
domain would therefore be ideal to probe the gluon Sivers function \cite{D'Alesio:2010am,Adamczyk:2012qj}.
Results for RHIC kinematics at $\sqrt{s}=500$ GeV will be discussed in the next section.
Notice that the scan procedure discussed in section~\ref{sec:res-coll} and in Ref.~\cite{Anselmino:2012rq}
for the Collins effect in the process $p^\uparrow p\to h\,X$, and for the large $x$ behaviour of
the transversity distribution, can also be applied, for the same process, to the Sivers
asymmetry and, correspondingly, to the large $x$ behaviour of the Sivers distribution, see Ref.~\cite{Anselmino:2013rya}.
We will present some new results obtained utilizing the scan procedure for the Sivers azimuthal
asymmetry $A_N^{\sin\phi_S}$ in $p^\uparrow p\to {\rm jet}\,\pi\,X$ and $p^\uparrow p\to {\rm jet}\,X$ processes
in the next section.
\begin{figure*}[t]
\includegraphics[angle=0,width=0.4\textwidth]{asy_siv_par_jet_eta0.eps}
\includegraphics[angle=0,width=0.4\textwidth]{asy_siv_par_jet.eps}
\caption{The Sivers asymmetry $A_N^{\sin\phi_{S}}$
for the process $p^\uparrow \, p\to {\rm jet}\, X$, as a function
of $p_{{\rm j} T}$, at fixed value of the rapidity $\eta_{\rm j}$ and
c.m.\ energy $\sqrt{s}= 200$ GeV. Estimates for the quark contribution
are obtained
by adopting the parametrization sets SIDIS~1 and SIDIS~2. The gluon Sivers
function is assumed to be positive and to saturate an updated version of the
bound in Ref.~\cite{Anselmino:2006yq}. The dotted vertical line delimits
the region $x_F\approx 0.3$, beyond which the currently available
parameterizations for the
quark Sivers function, extracted from SIDIS data, are affected by large
uncertainties.
\label{asy-siv-jet-200} }
\end{figure*}
\section{A study of the process dependence of the Sivers function}
In the GPM approach adopted so far, TMD distribution and fragmentation
functions are assumed to be universal.
In particular, the Sivers function
in Eq.~(\ref{sivers}) is taken to be the same as the one extracted from SIDIS~\cite{Anselmino:1994tv, D'Alesio:2010am},
\begin{equation}
f_{1T}^{\perp a}(x_a, \bm{k}_{\perp a}^2) \equiv f_{1T}^{\perp a, \rm SIDIS}(x_a, \bm{k}_{\perp a}^2).
\end{equation}
There is at present a broad consensus on the universality of the Collins
fragmentation function (which, however, must be verified phenomenologically),
at least for processes where QCD factorization has been proven.
On the contrary, several naively time-reversal odd (T-odd) TMD distributions
crucially depend on initial and/or final state interactions (embedded via gauge links)
between the struck partons and the soft remnants in the process.
Recently the azimuthal asymmetries for the distribution of leading pions
inside jets have been studied allowing for the process dependence of the quark
Sivers function \cite{D'Alesio:2011mc} within the framework of the so-called
colour gauge invariant GPM \cite{Gamberg:2010tj}. In the CGI~GPM the
existence of a nonzero Sivers function in a transversely polarized hadron is
due to the effects of initial (ISIs) and final (FSIs) state interactions
between the struck parton and the spectator remnants from the polarized proton.
These interactions depend on the particular process considered and make
the Sivers function non-universal. The typical example is provided by the
predicted opposite sign of the quark Sivers functions in SIDIS, where only
FSIs are present, and in the DY process, in which only ISIs can be active.
\begin{figure}[t]
\includegraphics[angle=0,width=0.35\textwidth]{asy_jet_pip_500.eps}
\hspace*{-20pt}
\includegraphics[angle=0,width=0.35\textwidth]{asy_jet_pi0_500.eps}
\hspace*{-20pt}
\includegraphics[angle=0,width=0.35\textwidth]{asy_jet_pim_500.eps}
\caption{The quark contribution to the Sivers asymmetry
$A_N^{\sin\phi_{S}}$ in the GPM and CGI~GPM approaches for the process $p^\uparrow \, p\to {\rm jet}\, \pi\,X$,
as a function of $p_{{\rm j} T}$, at fixed value of the rapidity $\eta_{\rm j}$ and c.m.\ energy $\sqrt{s}= 500$ GeV.
Estimates are obtained
by adopting the parametrization sets SIDIS~1 and SIDIS~2. The dotted
vertical line delimits the region $x_F\approx 0.3$, beyond which the
currently available parameterizations for the quark Sivers function,
extracted from SIDIS data, are affected by large uncertainties.}
\label{fig1}
\end{figure}
The colour factor structure of the Sivers function for the reaction under study,
involving hadrons in both the initial and the final states,
is more complicated because both ISIs and FSIs contribute. Eq.~(\ref{sivers}) then has to be replaced by
\begin{eqnarray}
{E_{\rm j}}\, \frac{{\rm d}\Delta\sigma^{(\rm Sivers)}}{{\rm d}^3 {\bm p}_{\rm j}\, {\rm d} z\, {\rm d}^2 {\bm k}_{\perp \pi}}
& = & \frac{2\, \alpha_s^2}{s} \sum_{a,b,c,d} \int \frac{{\rm d} x_a}{x_a}\,{\rm d}^2\bm{k}_{\perp a}\,
\int \frac{{\rm d} x_b}{x_b}\,{\rm d}^2{\bm k}_{\perp b} \,\delta(\hat s+\hat t+\hat u)\,H^{U}_{ab\to cd}(\hat s,\hat t,\hat u)
\nonumber \\
&&\qquad\times
\Big ( -\frac{k_{\perp a}}{M}\Big)f_{1T}^{\perp a, ab\to cd}(x_a, {\bm k}_{\perp a}^2) \cos\phi_a
\, f_{b/B}(x_b, {\bm k}_{\perp b}^2)\, D_{1}^c(z, {\bm k}_{\perp \pi}^2) \sin\phi_{S} \,,
\label{process}
\end{eqnarray}
in which a {\it process-dependent} Sivers function denoted as $f_{1T}^{\perp a, ab\to cd}$ is used.
The resulting colour factors, $C_I$ ($C_{F_c}$), for initial (final) state interactions determine the proper Sivers function to be used for each of
the different partonic scattering processes $a\, b\to c\, d$.
They are the same as
the ones calculated in Ref.~\cite{Gamberg:2010tj} for single inclusive hadron production using a one-gluon exchange approximation. Finally, the process dependence of the Sivers function can be absorbed into the squared hard partonic
scattering amplitude $H^{U}_{ab\to cd}$, that is
\begin{equation}
f_{1T}^{\perp a, ab\to cd} \,H^{U}_{ab\to cd} \equiv f_{1T}^{\perp a, \rm SIDIS} \,H^{\rm Inc}_{ab\to cd}\,,
\label{HInc}
\end{equation}
where the new hard function $H^{\rm Inc}_{ab\to cd}$ has been introduced.
Details on the connection between the CGI~GPM and the twist-three
collinear formalism~\cite{Qiu:1991pp, Kouvaris:2006zy}, suggested by Eq.~(\ref{HInc}),
can be found in Ref.~\cite{Gamberg:2010tj}.
\begin{figure}[t]
\includegraphics[angle=0,width=0.35\textwidth]{asy_siv_scan_pip_500.eps}
\hspace*{-20pt}
\includegraphics[angle=0,width=0.35\textwidth]{asy_siv_scan_pi0_500.eps}
\hspace*{-20pt}
\includegraphics[angle=0,width=0.35\textwidth]{asy_siv_scan_pim_500.eps}
\caption{
Scan bands (that is, the envelope of possible values) for the
quark contribution to the Sivers asymmetry
$A_N^{\sin\phi_{S}}$ in the GPM and CGI~GPM approaches, for the process $p^\uparrow \, p\to {\rm jet}\, \pi\,X$,
as a function of $p_{{\rm j} T}$, at fixed value of the rapidity $\eta_{\rm j}$ and c.m.~energy $\sqrt{s}= 500$ GeV.
The shaded bands are generated following the scan procedure explained in the text
(see Refs.~\cite{Anselmino:2012rq,Anselmino:2013rya} for more details).}
\label{asy-siv-scan-500}
\end{figure}
Since our aim is to study the process dependence of the quark Sivers function,
we analyze pion-jet production in the forward rapidity region, where possible
contributions from sea-quark and gluon Sivers functions are expected to be
negligible. This assumption is supported by studies of SSAs in SIDIS~\cite{Anselmino:2008sga} and in $pp\to \pi\, X$
processes at central rapidities~\cite{Anselmino:2006yq, Adler:2005in, Wei:2011nt} and by the analysis performed in Ref.~\cite{Brodsky:2006ha}. Our results are shown in Fig.~\ref{fig1},
where $A_N^{\sin\phi_{S}}$, integrated over $\bm{k}_{\perp\pi}$ and $z$ ($z\ge 0.3$), is plotted as a function of the jet transverse momentum $p_{{\rm j} T}$ at fixed jet rapidity $\eta_{\rm j}=3.3$, for the RHIC energy $\sqrt{s}=500$ GeV.
The solid and dotted lines represent our predictions in the GPM formalism
using the two available sets, SIDIS~1 and SIDIS~2 respectively, for the quark Sivers function, while the dashed and dot-dashed lines describe the
analogous predictions in the CGI~GPM formalism. As one can easily see, the results obtained with and without inclusion of colour gauge factors are comparable in size but have {\em opposite signs} \cite{D'Alesio:2011mc}, in close analogy to the DY case. The reason is that, at forward rapidity, the dominant channel is
$qg\to qg$, where the final quark is identified with the observed jet, for which
the effects of ISIs/FSIs lead to
\begin{equation}
H^{\rm Inc}_{qg\to qg}\sim -\frac {N_c^2+2}{N_c^2-1}\,\frac{\hat s^2}{\hat t^2}
\end{equation}
in the CGI~GPM, while
\begin{equation}
H^{U}_{qg\to qg}\sim \frac{2\hat s^2} {\hat t^2}
\end{equation}
in the GPM. Moreover, as already pointed out in the previous section, our
estimates obtained adopting the two different parameterizations SIDIS~1 and
SIDIS~2 are similar only in the region $x_F \le 0.3$, corresponding to
$p_{{\rm j} T}\le 5.5$ GeV at $\sqrt{s}=500$ GeV. Therefore this is the
optimal kinematic region to test directly the process dependence of the
Sivers function:~the measurement of a sizable asymmetry for
$p_{{\rm j} T}\le 5.5$ GeV could easily discriminate between the two
different approaches and probe the universality properties of the Sivers
function. At the c.m.\ energy $\sqrt s = 200$ GeV our predictions would be
qualitatively similar to those presented in Fig.~\ref{fig1},
though almost twice as large. However, the range of $p_{{\rm j} T}$
covered would be narrower, $p_{{\rm j} T}\le 6.5$~GeV, and $x_F \le 0.3$
would correspond to $p_{{\rm j} T}\le 2.5$~GeV.
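Two of the numerical statements above can be checked directly. For $qg\to qg$, the relative factor between the CGI~GPM and GPM hard functions is $-(N_c^2+2)/[2(N_c^2-1)] = -11/16 \simeq -0.69$ for $N_c=3$, making the comparable-size, opposite-sign pattern explicit; and the quoted correspondence between $x_F$ and $p_{{\rm j} T}$ follows, for a light jet (where rapidity and pseudorapidity coincide), from $p_{{\rm j} L} = p_{{\rm j} T}\sinh\eta_{\rm j}$. A minimal sketch (the small difference from the quoted $2.5$~GeV at $\sqrt{s}=200$ GeV presumably reflects rounding):

```python
import math
from fractions import Fraction

# Relative factor between the CGI GPM and GPM hard functions for qg -> qg,
# with the observed jet identified with the final quark:
#   H_inc ~ -(Nc^2 + 2)/(Nc^2 - 1) * s^2/t^2,   H_U ~ 2 s^2/t^2
Nc = 3
ratio = Fraction(-(Nc**2 + 2), Nc**2 - 1) / 2
print(ratio)  # -11/16: opposite sign, comparable magnitude

# Correspondence between x_F and p_jT for a (nearly) massless jet:
#   p_jL = p_jT sinh(eta_j)  and  x_F = 2 p_jL / sqrt(s)
def pjT_at(xF, sqrt_s, eta):
    return xF * sqrt_s / (2.0 * math.sinh(eta))

pjT_500 = pjT_at(0.3, 500.0, 3.3)  # ~5.5 GeV, as quoted for sqrt(s) = 500 GeV
pjT_200 = pjT_at(0.3, 200.0, 3.3)  # ~2.2 GeV at sqrt(s) = 200 GeV
```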
As already discussed in the previous section for the Collins azimuthal asymmetry,
the scan procedure introduced in Refs.~\cite{Anselmino:2012rq,Anselmino:2013rya}
offers different and more complete information. In fact, it gives the envelope of all
possible values of $A_N^{\sin\phi_S}$ coming
from parameterizations of the Sivers function leading to good fits of the SIDIS
data on the analogous asymmetry.
Therefore, in Fig.~\ref{asy-siv-scan-500} we present the analogue of Fig.~\ref{fig1},
obtained using new results from the Sivers scan procedure.
These plots confirm the conclusions drawn from Fig.~\ref{fig1}: the low-intermediate $p_{{\rm j} T}$
region is the most interesting for a discrimination between the GPM and CGI~GPM approaches.
As soon as $p_{{\rm j} T}$ grows beyond $4$--$6$ GeV the two scan bands start overlapping
and we lose predictive power. For this reason, we cut our plots at $p_{{\rm j} T}=9$ GeV, although
the kinematical limit is larger (see Fig.~\ref{fig1}).
Clearly, the most favourable situation seems to be that of the $\pi^-$,
for which the asymmetry is larger and the scan bands for the GPM and CGI~GPM cases are
well separated up to $p_{{\rm j} T}\simeq 5$ GeV.
\begin{figure}[t]
\begin{center}
\includegraphics[angle=0,width=0.4\textwidth]{asy_siv_par_jet_500.eps}
\hspace*{0pt}
\includegraphics[angle=0,width=0.4\textwidth]{an-andy-kre-siv-uvdv-scan-GPM-CGI.eps}
\caption{
Left panel: The quark contribution to the Sivers asymmetry
$A_N^{\sin\phi_{S}}$ in the GPM and CGI~GPM approaches for the process $p^\uparrow \, p\to {\rm jet}\,X$,
as a function of $p_{{\rm j} T}$, at fixed value of the rapidity $\eta_{\rm j}=3.3$ and c.m.\ energy $\sqrt{s}= 500$ GeV.
Estimates are obtained by adopting the parametrization sets SIDIS~1 and SIDIS~2. The dotted
vertical line delimits the region $x_F\approx 0.3$, beyond which the
currently available parameterizations for the quark Sivers function,
extracted from SIDIS data, are affected by large uncertainties.
Right panel: Scan bands (that is, the envelope of possible values) for the
quark contribution to the Sivers asymmetry
$A_N^{\sin\phi_{S}}$ in the GPM and CGI~GPM approaches, for the process $p^\uparrow \, p\to {\rm jet}\,X$,
as a function of $x_F$, at fixed value of the rapidity $\eta_{\rm j}=3.25$ and c.m.~energy $\sqrt{s}= 500$ GeV.
The shaded bands are generated following the scan procedure explained in the text
(see Refs.~\cite{Anselmino:2012rq,Anselmino:2013rya} for more details).}
\label{asy-an-siv-jet-par500}
\end{center}
\end{figure}
Finally, we also consider single inclusive jet production in proton-proton
scattering. Data for this observable are now available and
have been presented in Refs.~\cite{Bland:2013pkt,Nogach:2012sh}.
The results obtained for $A_N^{\sin\phi_{S}}$ are plotted in
Fig.~\ref{asy-an-siv-jet-par500}. In the left panel, we show $A_N^{\sin\phi_{S}}$
for the $p^\uparrow p\to {\rm jet}\,X$ process, as a function of $p_{{\rm j} T}$ and
fixed pseudorapidity, $\eta_{\rm j}=3.3$, at RHIC c.m.~energy $\sqrt{s}=500$ GeV.
The results look very similar to those for the
case of neutral pion-jet production, shown in the central panel of
Fig.~\ref{fig1}.
In the right panel of Fig.~\ref{asy-an-siv-jet-par500} we compare
the GPM and CGI~GPM scan bands for the Sivers asymmetry $A_N^{\sin\phi_{S}}$
with recent results by the A$_N$DY Collaboration~\cite{Bland:2013pkt,Nogach:2012sh},
shown as a function of $x_F$, at fixed pseudorapidity
$\eta_{\rm j}=3.25$ and $\sqrt{s}=500$ GeV.
As expected, beyond $x_F\sim 0.3$, since the $u$, $d$ quark Sivers functions
are poorly constrained by present SIDIS data, the scan bands become larger and overlap
almost completely. Therefore, at this stage we cannot draw any conclusion by looking solely
at these results. Only the first and the last A$_N$DY data points seem to favour
the CGI~GPM and the GPM approach, respectively, but much more work is needed.
See also Ref.~\cite{Gamberg:2013kla} for a similar study comparing the GPM
and collinear twist-three results.
\section{\label{sec-other-tests}
Other tests of the process dependence of the TMD functions}
In this section we present a short overview of other possible tests of
the process dependence of TMD
parton distribution and fragmentation functions proposed in the literature.
Due to lack of space, we will not thoroughly cover all aspects of the subject, limiting
ourselves to a discussion of the most interesting phenomenological tests.
A detailed treatment may be found in the original papers quoted in the bibliography.
Basically, all these phenomenological studies try to compare predictions for
spin asymmetries coming from different formalisms (like the collinear twist-three,
the GPM and CGI GPM approaches) in kinematical situations where
typically only one of the many possible effects dominates. If the predictions
of the various approaches are very different (in particular, in sign) then
interesting phenomenological investigations can be performed.
In Ref.~\cite{Bacchetta:2007sz} it was proposed to study a
weighted asymmetry in the azimuthal distribution of photon-jet pairs in the
polarized process $p^\uparrow p\to \gamma\, {\rm jet}\, X$.
It was shown that for specific kinematical configurations reachable
at RHIC, the asymmetry is dominated by the quark Sivers effect, making
its interpretation much clearer. Moreover, predictions
coming from gluonic-pole cross sections~\cite{Bacchetta:2005rm},
directly related to the Wilson lines preserving colour gauge invariance
and leading to process dependent effects,
are almost opposite to those of the generalized parton model.
Therefore, experimental tests of these results offer an interesting alternative
way to investigate the process dependence of the Sivers function and the
predicted relative sign difference in SIDIS and Drell-Yan processes.
As we have already discussed in the previous section,
in Ref.~\cite{Gamberg:2010tj} Gamberg and Kang
have discussed a modified version of the generalized parton model, the
colour gauge invariant GPM. Assuming, as in the GPM, the validity of factorization
for single inclusive particle production in hadronic collisions, this approach
includes the process dependence of TMDs by taking into account initial and
final state interactions between the struck parton and the parent hadron remnants.
Once more, these interactions arise from appropriate, process-dependent colour gauge links.
It was also shown that the CGI GPM is in close connection with the collinear twist-three
approach. The phenomenological implications of the CGI GPM for the process dependence
of the Sivers effect in $p^\uparrow p\to\pi^0, \gamma + X$ reactions were investigated.
Once again, the main result is that the transverse single spin asymmetry due to
the quark Sivers contribution has a similar size but opposite sign with respect to the
original GPM that assumes the universality of TMDs.
Applications of the approach to pion-jet production were discussed in the previous section and in more
detail in Ref.~\cite{D'Alesio:2011mc}.
The study of the universality and process dependence of the Sivers function is
of relevance also in the context of the so-called ``sign mismatch'' issue
for the collinear twist-three approach~\cite{Kang:2011hk}.
Since in this formalism factorization has been proven for both
SIDIS processes and single inclusive particle production
in hadronic collisions at large energy scales, the multi-parton soft correlation
functions involved are universal and process independent.
On the other hand, factorization holds also for the TMD approach
in SIDIS, for large $Q^2$ and small transverse momentum of the final hadron.
It has been shown that there is a common region of validity of these two
approaches, and this allows one to derive a relation between the twist-three
quark-gluon correlation function and the first $\bm{k}_\perp$ moment
of the TMD Sivers function.
However, if one uses this relation from SIDIS processes for the
calculation in the twist-three approach of $A_N$ in $p^\uparrow p\to\pi^0, \gamma + X$
processes, one finds results opposite in sign with respect to those obtained
by directly fitting, in the same approach, the RHIC data for $p^\uparrow p\to\pi^0\, X$.
In Ref.~\cite{Kang:2012xf} the authors have explored the possibility
of escaping this sign-mismatch problem for the twist-three approach
by accounting for nodes of the quark Sivers function
(either in its $x$ or $\bm{k}_\perp$ dependence).
They found that by allowing for a single node in the quark Sivers function
one is not able to ``cure'' the sign mismatch problem and explain both the STAR
and BRAHMS $A_N$ data for $p^\uparrow p\to \pi\,X$ reactions.
However, one must not forget that the Sivers effect is not the
only possible contribution to $A_N$.
In fact, it may be that the Sivers effect gives a subdominant
contribution, and the asymmetry is mainly due to the Collins effect
in the fragmentation sector.
To investigate this eventuality it is crucial to collect experimental information
for processes like, e.g., $p^\uparrow p\to\gamma\, X$ and $p^\uparrow p\to {\rm jet}\, X$
where fragmentation in the final state is absent.
As we have seen, quite recently the A$_N$DY Collaboration at
RHIC~\cite{Bland:2013pkt,Nogach:2012sh} has presented preliminary results for
$A_N(p^\uparrow p\to {\rm jet}\, X)$ at forward rapidity and c.m.~energy $\sqrt{s}=500$ GeV.
Gamberg, Kang and Prokudin~\cite{Gamberg:2013kla} have performed a new fit of the Sivers
function using HERMES and COMPASS data on the $A_N^{\sin(\phi_h-\phi_S)}$ asymmetry.
Then, using this information and the relation between the twist-three quark-gluon
correlation function and the first $\bm{k}_\perp$ moment of the Sivers function
discussed above, they have estimated the spin asymmetry for $p^\uparrow p\to {\rm jet}\, X$
in the collinear twist-three approach, comparing it with A$_N$DY data.
They found, taking into account that the large $x$ behaviour of the Sivers
function is poorly constrained by present SIDIS data, that their estimate is consistent
with experimental data and there is in fact no strong sign mismatch problem,
contrary to the case of pion single spin asymmetries discussed above.
Kang and Qiu~\cite{Kang:2009bp} have proposed to probe
the (modified) universality of the quark and gluon Sivers functions,
that is the change of sign between the Sivers functions in SIDIS and DY processes,
by studying the transverse single spin asymmetry $A_N$ for $W$ production and
inclusive lepton production from $W$ decays in polarized proton-proton collisions at RHIC energies.
Although the lepton asymmetry is diluted by the $W$ decays, its size can reach several percent
over a large range of lepton rapidity at RHIC.
Therefore this process can offer an additional phenomenological test of the predicted
sign change of the Sivers function. Moreover, because of the weak interaction,
it can provide unique information, with respect to the DY case, on the flavour
dependence and the functional form of the Sivers function.
Let us finally add some comments on the process dependence of the T-odd TMD fragmentation
functions, like the Collins function and the so-called ``polarizing'' fragmentation
function~\cite{Mulders:1995dh,Boer:1997nt,Anselmino:2000vs}.
They have been shown to be universal by several authors,
see e.g.~Refs.~\cite{Collins:2004nx,Meissner:2008yf,Yuan:2007nd,Gamberg:2010uw,Yuan:2009dw}.
Testing universality phenomenologically in the fragmentation sector is as important
as the tests for the modified universality of the Sivers functions discussed above.
However, for the Collins function, the study of its universality is made difficult by its
chiral-odd nature. In any physical observable it will always appear coupled to
another chiral-odd object, either in the distribution or in the fragmentation
sector. As well-known examples, the Collins function couples to the TMD
transversity distribution in SIDIS and in the pion-jet production process considered
in detail here. It couples to another Collins FF in $e^+e^-\to h_1 h_2 \,X$ reactions.
Therefore, relative signs among these coupled chiral-odd functions are
difficult to determine and require the study of different observables involving
additional chiral-odd functions.
Based on these considerations, the authors of Ref.~\cite{Boer:2010ya}
have suggested studying the universality of the polarizing fragmentation
functions and test factorization by looking at the transverse polarization of
$\Lambda$ hyperons in SIDIS processes and $e^+e^-$ annihilations.
They found that, despite the large uncertainties in these functions,
definite signs for the hyperon polarization in different processes
can be obtained, possibly allowing for a robust test of universality
in this sector.
\section{\label{sec-conclusions} Conclusions}
In recent years, impressive progress has been made in the theoretical understanding of the origin
of the sizable azimuthal and spin asymmetries measured by several experiments in polarized
hadronic processes at large energy scales. The crucial role of colour gauge invariance, and of
the proper account of gauge links (Wilson lines) also in the transverse plane with
respect to the usual light-cone direction, has been emphasized and investigated in depth.
Several processes and polarized observables, for which factorization may not hold and
universality can be broken, have been recognized.
However, it is always difficult to assess the actual relevance and size of
process-dependent terms and factorization-breaking effects, both for ongoing experiments
and for those planned in the near future.
Clearly, theoretical, more formal, developments must be complemented by corresponding detailed
phenomenological analyses. These can be of great help and valuable guidance for further
theoretical progress in this field.
In this review we have discussed, in the framework of the so-called generalized
parton model, the phenomenological relevance and usefulness of the reaction
$p^\uparrow p\to{\rm jet}\,\pi\,X$ for the study of the process dependence of the
TMD PDFs and FFs, in particular for the Sivers distribution and the Collins fragmentation function.
We have shown how the study of this process can well complement information coming from SIDIS, Drell-Yan and
$e^+e^-$ annihilations, particularly for the knowledge of the large $x$ behaviour of the TMD quark
transversity distributions and of the quark Sivers functions.
We have also briefly summarized additional phenomenological tests, formulated within various
theoretical approaches, recently suggested in the literature for the study of
the universality properties and the process dependence of the TMDs.
\begin{acknowledgments}
We acknowledge financial support from the European Community under the FP7
``Capacities - Research Infrastructures'' programme (HadronPhysics3, Grant Agreement 283286).
U.D.~and F.M.~acknowledge partial support by Italian Ministero dell'Istruzione,
dell'Universit\`{a} e della Ricerca Scientifica (MIUR) under Cofinanziamento PRIN 2008.
U.D. is grateful to the Department of Theoretical Physics II of the Universidad Complutense
of Madrid for the kind hospitality extended to him during the completion of this work.
\end{acknowledgments}
\section{Introduction}
\label{sec:Introduction}
Since the first discovery of the expansion of the Universe more than 90 years ago
\citep{1927ASSB...47...49L,1929PNAS...15..168H}, the Hubble constant $H_0$ characterizing
its current expansion rate has been of great interest to astronomers. In the last decade, however,
a significant mismatch has emerged between several early-time and local measurements of $H_0$
(see \citealt{2019NatAs...3..891V,2021CQGra..38o3001D} for recent reviews). The latest value
of $H_0$ ($=73.2\pm1.3$ km $\rm s^{-1}$ $\rm Mpc^{-1}$; \citealt{2021ApJ...908L...6R})
measured from local Type Ia supernovae (SNe Ia), calibrated by the Cepheid distance ladder, is
in $4.2\sigma$ tension with that inferred from {\it Planck} cosmic microwave background (CMB)
observations interpreted in the context of the standard $\Lambda$CDM model ($H_{0}=67.4\pm0.5$
km $\rm s^{-1}$ $\rm Mpc^{-1}$; \citealt{2020A&A...641A...6P}). If the unknown systematics
cannot be responsible for the discrepancy, the Hubble tension may imply new physics beyond
$\Lambda$CDM \citep{Melia2020,2020PhRvD.102b3518V}.
In order to resolve the Hubble tension, more independent methods of measuring $H_0$ are required.
For example, the age of the oldest stellar populations in our galaxy can provide an independent
local determination of $H_0$
\citep{2015ApJ...808L..35T,2019JCAP...03..043J,2020JCAP...12..002V,2021PhRvD.103j3533B,2021MNRAS.505.2764B}.
But most age measurements use objects at higher redshifts, which can also constrain other
cosmological parameters (e.g., \citealt{1995Natur.376..399B,1995GReGr..27.1137K,1996Natur.381..581D,
1999ApJ...521L..87A,2000MNRAS.317..893L,2002ApJ...573...37J,2003ApJ...593..622J,2004PhRvD..70l3501C,
2005MNRAS.362.1295F,2005PhRvD..71l3001S,2006PhLB..633..436J,2006PhRvD..73l3530P,2007A&A...467..421D,
2009PhLB..679..423D,2011PhLB..699..239D,2010PhLB..693..509S,2014A&A...561A..44B,2015AJ....150...35W,
2017JCAP...03..028R,2020MNRAS.496..888N,2021ApJ...908...84V}). Very recently, \cite{2021arXiv210510421V}
used the age estimates of high-redshift (up to $z\sim8$) old astrophysical objects (OAO) to derive
an upper limit on $H_0$ by requiring that all OAO at any $z$ be younger than the age of the
Universe at that redshift. Their study shed some light on the ingredients needed to resolve the
Hubble tension, but to constrain $H_0$ in this manner, one has to assume a background cosmology.
Assuming the validity of $\Lambda$CDM at late times, \cite{2021arXiv210510421V} found a 95\%
confidence-level upper limit of $H_0<73.2$ km $\rm s^{-1}$ $\rm Mpc^{-1}$, marginally consistent
with that measured using the local distance ladder.
Of direct relevance to the principal aim of this paper is the fact that, unlike the cosmic distance
ladder methods that rely on the distances of primary or secondary indicators, the age measurements
of distant objects are independent of each other. The age-redshift relationship of high-$z$ OAO may
therefore provide a whole new perspective on one of the frontier issues of modern cosmology,
i.e., the spatial curvature of the Universe. Knowing whether the Universe is open, closed, or flat
is crucial for a complete understanding of its evolution and the nature of dark energy
\citep{2006JCAP...12..005I,2007JCAP...08..011C,2007PhRvD..75d3520G,2008JCAP...12..008V}. A significant
deviation from zero spatial curvature would have far-reaching consequences for the inflationary paradigm
and its underlying physics \citep{2005ApJ...633..560E,2006PhRvD..74l3507T,2007ApJ...664..633W,2007PhLB..648....8Z,Melia2020}.
Although a spatially flat universe ($\Omega_{k}=0$) is strongly favored by most of the current
cosmic probes, especially by the {\it Planck} 2018 CMB observations
\citep{2020A&A...641A...6P},\footnote{Some recent studies show that the {\it Planck} 2015 CMB
anisotropy data support a mildly closed Universe (see \citealt{2019Ap&SS.364...82P,2019ApJ...882..158P}
and references therein).} these curvature determinations
are based on the pre-assumption of a particular cosmological model (e.g., $\Lambda$CDM).
But there is a strong degeneracy between the curvature parameter and the dark-energy equation
of state, so it would be better to measure the purely geometric quantity $\Omega_{k}$ from the
data using a model-independent method. A non-exhaustive set of references attempting to constrain
the value of $\Omega_{k}$ in a model-independent way includes \cite{2006ApJ...637..598B},
\cite{2007JCAP...08..011C}, \cite{2010PhRvD..81h3537S},
\cite{2014ApJ...789L..15L,2016ApJ...833..240L,2018ApJ...854..146L,2018NatCo...9.3833L,2019ApJ...887...36L,
2019ApJ...873...37L,2020MNRAS.491.4960L}, \cite{2014PhRvD..90b3012S}, \cite{2015PhRvL.115j1301R},
\cite{2016PhRvD..93d3517C}, \cite{2016ApJ...828...85Y}, \cite{2017JCAP...01..015L}, \cite{2017ApJ...839...70L},
\cite{2017JCAP...03..028R}, \cite{2017ApJ...847...45W,2020ApJ...898..100W,2021MNRAS.501.5714W},
\cite{2017ApJ...838..160W}, \cite{2017ApJ...834...75X}, \cite{2018JCAP...03..041D}, \cite{2018ApJ...868...29W},
\cite{2018MNRAS.477L.122W}, \cite{2018ApJ...856....3Y}, \cite{2019PhRvL.123w1101C},
\cite{2019PDU....24..274C,2019NatSR...911608C,2021arXiv211200237C},
\cite{2019PhRvD..99h3514L}, \cite{2019ApJ...881..137R}, \cite{2019PhRvD.100b3530Q,2019MNRAS.483.1104Q},
\cite{2020MNRAS.496..708L}, \cite{2020ApJ...897..127W,2020ApJ...888...99W}, \cite{2020ApJ...889..186Z},
\cite{2021MNRAS.506L...1D}, \cite{2021MNRAS.500.2227J}, \cite{2021ApJ...908...84V}, \cite{2021MNRAS.504.3092Y},
and \cite{2021EPJC...81...14Z}.
In this paper, we broaden the base of support for the age measurements of high-$z$ OAO by demonstrating
their usefulness in testing the late-time expansion history and arbitrating the Hubble tension in
different cosmological models. Further, we propose a new model-independent method of determining the
spatial curvature by combining the OAO age-$z$ data with SNe Ia luminosity distances. Using a
polynomial fitting technique, we reconstruct a continuous age-$z$ function representing the
discrete age measurements of OAO without the pre-assumption of any specific cosmological model. The
time-redshift derivative $dt/dz$ can then be approximately obtained by differentiating the age-$z$
function. Then, $dt/dz$ can be transformed into the curvature-dependent luminosity distance
$D_{L}(\Omega_{k};\;z)$ according to the geometric relation derived from the
Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) metric. Finally, by carrying out the joint maximum
likelihood analysis on the polynomial fitting and the observed differences between
$D_{L}(\Omega_{k};\;z)$ and the curvature-independent luminosity distances inferred from SNe Ia,
one can simultaneously constrain the curvature parameter $\Omega_{k}$, the polynomial coefficients,
and the SN nuisance parameters in a model-independent way.
The paper is arranged as follows. In \S~\ref{sec:HT}, we briefly describe the age-redshift
test, and then constrain $H_0$ in different cosmological models. In \S~\ref{sec:OmegaK},
we introduce the methodology of measuring $\Omega_{k}$ using OAO age-$z$ and SN Ia data, and
then present the results of our analysis. We summarize our main conclusions in
\S~\ref{sec:summary}.
\section{Exploration on the Hubble Tension}
\label{sec:HT}
\subsection{The Age-redshift Test}
The theoretical age of the Universe at redshift $z$ is given as
\begin{equation}
t\left(z\right)=\int^{\infty}_{z}\frac{dz'}{\left(1+z'\right)H\left(z',\;\boldsymbol{\theta}\right)}\;,
\label{eq:tU}
\end{equation}
where $H(z,\;\boldsymbol{\theta})$ is the Hubble parameter and $\boldsymbol{\theta}$ stands for the parameters of
the specific cosmological model. All of our analysis in this paper is based on this
expression, which is derived from the FLRW metric. In so doing, we restrict our attention to
the spacetime predicted in the context of general relativity only, though we shall consider possible
model variations consistent with this constraint as prescribed via the choice of stress-energy tensor
in Einstein's equations, which are characterized by the specific model parameters $\boldsymbol{\theta}$.
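For concreteness, Equation~(\ref{eq:tU}) is easily evaluated numerically. The following is a minimal Python sketch for flat $\Lambda$CDM; the function name is ours, and the default parameter values are simply the {\it Planck} 2018 best fits quoted above:

```python
import numpy as np
from scipy.integrate import quad

# 1 km s^-1 Mpc^-1 = 1/977.79 Gyr^-1 (unit conversion for H0)
GYR_PER_KMSMPC = 977.79

def age_of_universe(z, H0=67.4, Om=0.315):
    """Age t(z) in Gyr for flat LambdaCDM, via Eq. (1):
    t(z) = int_z^infty dz' / [(1+z') H(z')]."""
    H0_gyr = H0 / GYR_PER_KMSMPC  # H0 in Gyr^-1
    integrand = lambda zp: 1.0 / ((1.0 + zp) * H0_gyr
                                  * np.sqrt(Om * (1.0 + zp)**3 + 1.0 - Om))
    t, _ = quad(integrand, z, np.inf)
    return t
```

With the {\it Planck} parameters this returns $t(0)\approx13.8$ Gyr, and it makes the $t\propto1/H_0$ scaling discussed below explicit: raising $H_0$ to the SH0ES value shrinks the age at every redshift.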
The age $t_{{\rm obj}, i}$ of an object (e.g., a passive galaxy or quasar) at redshift $z_{i}$
is defined as the difference between the age of the Universe at $z_{i}$ and that when the object
was formed at redshift $z_{f}$. Given that no object was born at the Big Bang
($z_{f}\rightarrow\infty$), the age of the Universe at any redshift should always be greater
than or equal to the age of the oldest astrophysical object (OAO) at the same redshift, i.e.,
$t(z_{i})\geq t_{{\rm obj}, i}$. The difference between $t(z_{i})$ and $t_{{\rm obj}, i}$,
which we denote by $\tau_{\rm inc}$, represents the `incubation' time, or delay factor,
and accounts for the amount of time elapsed since the Big Bang to the formation of the object.
Equation~(\ref{eq:tU}) shows that the age of the Universe at any given redshift is inversely
proportional to the Hubble constant $H_{0}\equiv H(z=0)$. An upper limit on $H_{0}$ can
therefore be obtained by requiring that the Universe be at least as old as the oldest objects
at the corresponding redshifts \citep{2021arXiv210510421V}. If the value of $H_{0}$ is too
high, then we face the awkward situation in which the Universe is younger than the oldest objects
it contains at a given redshift. In Equation~(\ref{eq:tU}), $t(z)$ receives most of its
contribution at late times ($z\leq10$), and is scarcely sensitive to pre-recombination physics.
Therefore, consistency between the high-$z$ upper limits on $H_{0}$ and the local $H_{0}$
measurements offers a stringent test of late-time and/or local new physics, potentially
suggesting the necessity for the latter to operate together with early-time new physics
to completely address the Hubble tension \citep{2020PhRvD.102j3525K,2021CQGra..38r4001K,
2021arXiv210602532K,2021ApJ...912..150D,2021CmPhy...4..123J,2021ApJ...920..159L,
2021PhRvD.104f3524V,2021arXiv210510421V}.
Using a combination of galaxies and high-$z$ quasars, \cite{2021arXiv210510421V} constructed
an age-redshift diagram of OAO up to $z\sim8$. Most of their galaxy data come from the Cosmic
Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS) observing program
\citep{2011ApJS..197...35G}, and the remaining galaxy data are from the observations of 32
old passive galaxies in the redshift range $0.117\leq z \leq 1.845$ \citep{2005PhRvD..71l3001S}.
For high-$z$ quasars, they considered the following observations: 7,446 quasars from SDSS DR7
in the range $3\leq z \leq5$ \citep{2011ApJS..194...45S}, 50 quasars detected by the GNIRS
spectrograph in the range $5.5\leq z \leq6.5$ \citep{2019ApJ...873...35S}, 15 quasars detected
by Pan-STARRS1 in the range $6.5\leq z \leq7.0$ \citep{2017ApJ...849...91M}, and 9 of the most
distant quasars ever discovered in the range $7.0\leq z \leq7.642$
\citep{2011Natur.474..616M,2018Natur.553..473B,2018ApJ...869L...9W,2021ApJ...907L...1W,2019ApJ...872L...2M,
2019ApJ...883..183M,2019AJ....157..236Y,2020ApJ...897L..14Y}.
Basically, the ages of the CANDELS galaxies were estimated by fitting the photometric
spectral energy distribution, whereas for the quasars a specific growth model of black hole seeds
developed by \cite{2017ApJ...850L..42P} was adopted.
Applying severe quality cuts to
these observations, and selecting only those objects which are among the oldest ones within
each redshift bin, \cite{2021arXiv210510421V} compiled a final catalog of 114 OAO with reliable
redshift and age measurements, in which 61 OAO are galaxies and the other 53 are quasars. We
adopt this high-$z$ OAO catalog covering the redshift range $0< z < 8$ for our assessment of
the $H_0$ limits. Figure~\ref{f1} shows the age measurements as a function of redshift for
these 114 OAO. In this plot, we also illustrate the dependence of the Universe's age $t(z)$
(estimated using flat $\Lambda$CDM with a fixed matter density $\Omega_{\rm m}=0.3$) on the
value of the Hubble constant $H_0$.
\begin{figure}
\vskip-0.1in
\centerline{\includegraphics[keepaspectratio,clip,width=0.5\textwidth]{f1.eps}}
\caption{Age-redshift diagram for 114 OAO, including 61 galaxies (black points) and 53 quasars
(orange points). The solid curves show the age of the Universe as a function of redshift in
flat $\Lambda$CDM with a fixed $\Omega_{\rm m}=0.3$, but an adjustable $H_0$. The violet dashed
curve shows the inferred result when fitting solely the age-redshift data of the 61 galaxies
using a third-order polynomial.}
\label{f1}
\end{figure}
\subsection{Upper Limits on $H_0$}
We are now in a position to use the selected 114 age measurements of OAO as a function of redshift
to derive upper limits on $H_0$. Given the observed data $\mathbf{D}$ (with the OAO ages at
redshifts $z_i$ being $t_{{\rm obj}, i}\pm\sigma_{t_{{\rm obj}, i}}$; see solid points in
Figure~\ref{f1}) and some prior knowledge about the hypothetical models (for which the parameters
are denoted by the vector $\boldsymbol{\theta}$), the posterior probability distributions of the
free parameters can be modeled through the half-Gaussian (log-)likelihood
\citep{2021arXiv210510421V}:
\begin{equation}\label{eq:halfGaussian}
\ln{\mathcal L}\left(\boldsymbol{\theta}\mid\mathbf{D}\right) = -\frac{1}{2}\sum_{i}^{114} \left\lbrace \begin{array}{ll}
\Delta_{i}^{2}\left(\boldsymbol{\theta}\right)/\sigma_{t_{{\rm obj}, i}}^{2}~~~~~{\rm if}~~\Delta_{i}\left(\boldsymbol{\theta}\right)<0\\
0~~~~~~~~~~~~~~~~~~~~~~~{\rm if}~~\Delta_{i}\left(\boldsymbol{\theta}\right)\geq0\;,\\
\end{array} \right.
\end{equation}
where $\Delta_{i}\equiv t\left(\boldsymbol{\theta},\;z_{i}\right)-t_{{\rm obj}, i}$ is defined
as the age of the Universe minus the age of the $i$-th OAO at redshift $z_i$. The expression
in Equation~(\ref{eq:halfGaussian}) is based on the fact that: $a)$ since the Universe must not
be younger than its oldest inhabitants, parameters for which the Universe is younger than the
OAO (i.e., $\Delta_{i}(\boldsymbol{\theta})<0$) are exponentially unlikely, i.e., the
younger the Universe is relative to the OAO, the worse the fit; $b)$ parameters for which the
Universe is older than the OAO (i.e., $\Delta_{i}(\boldsymbol{\theta})\geq0$) are equally
likely, and cannot be distinguished solely on the basis of the OAO age.
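The piecewise form of Equation~(\ref{eq:halfGaussian}) translates into a few lines of code. A minimal sketch (the function name is ours; $t_{\rm model}$ would be evaluated from Eq.~\ref{eq:tU} at the OAO redshifts):

```python
import numpy as np

def ln_likelihood(t_model, t_obj, sigma_t):
    """Half-Gaussian log-likelihood of Eq. (2): objects with
    Delta_i = t_model - t_obj < 0 (Universe younger than the object)
    are penalized; Delta_i >= 0 contributes nothing."""
    delta = np.asarray(t_model, float) - np.asarray(t_obj, float)
    chi2 = np.where(delta < 0.0, (delta / np.asarray(sigma_t, float))**2, 0.0)
    return -0.5 * chi2.sum()
```

A likelihood of this form can be passed directly to an MCMC sampler such as EMCEE, with $t_{\rm model}$ recomputed for each proposed parameter vector $\boldsymbol{\theta}$.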
To calculate model predictions for the age $t(z)$ in Equation~(\ref{eq:tU}), we need an
expression for $H(z,\;\boldsymbol{\theta})$. As the cosmic expansion rate within the context
of specifically selected models is significantly different, it is interesting to examine the
upper limits on $H_{0}$ derived from the OAO ages using different background cosmologies.
Here we discuss how these limits are obtained for $\Lambda$CDM, the Einstein-de Sitter universe,
and the $R_{\rm h}=ct$ universe.
\begin{figure}
\centerline{\includegraphics[keepaspectratio,clip,width=0.5\textwidth]{f2.eps}}
\vskip-0.1in
\caption{1D and 2D marginalized posterior distributions with the $1-2\sigma$ contours
for $H_{0}$, $\Omega_{\rm m}$, and the incubation time $\tau_{\rm inc}$, using the
114 age-redshift data shown in Figure~\ref{f1} and the same priors on $H_{0}$,
$\Omega_{\rm m}$, and $\tau_{\rm inc}$ as \cite{2021arXiv210510421V}.}
\label{f2}
\end{figure}
\begin{itemize}
\item $\Lambda$CDM
\end{itemize}
In flat $\Lambda$CDM, the Hubble parameter is well approximated by
\begin{equation}
H^{{\rm \Lambda CDM}}\left(z,\;\boldsymbol{\theta}\right)=H_{0}\left[\Omega_{\rm m}
\left(1+z\right)^{3}+\Omega_{\Lambda}\right]^{1/2}\;,
\end{equation}
where $\Omega_{\Lambda}=1-\Omega_{\rm m}$ is the cosmological constant energy density.
Note that we ignore the contribution from radiation, which is negligible compared to that
of matter and dark energy in the late-time expansion history. The analysis of the OAO ages
provides a valuable consistency test: if we trust the data, a disagreement between our upper
limit on $H_{0}$ and the value measured from the local distance ladder may indicate new
physics beyond $\Lambda$CDM, at least in the late-time expansion history.
For the basic $\Lambda$CDM model, the free parameters to be constrained are
$\boldsymbol{\theta}=\{H_{0},\;\Omega_{\rm m}\}$. We adopt the Python Markov chain Monte
Carlo (MCMC) module, EMCEE \citep{2013PASP..125..306F}, to explore the posterior probability
distributions of these parameters. Note that \cite{2021arXiv210510421V} expressed
$\Delta_{i}$ in Equation~(\ref{eq:halfGaussian}) as
$\Delta_{i}\equiv t\left(\boldsymbol{\theta},\;z_{i}\right)-t_{{\rm obj}, i}-\tau_{\rm inc}$,
and modeled the incubation time $\tau_{\rm inc}$ as a prior distribution derived by
\cite{2019JCAP...03..043J}, based on the assumption that the formation redshift $z_f$ for the
oldest observed galaxies is $z_{f}>11$. After marginalizing over $H_{0}$, $\Omega_{\rm m}$,
and $z_f$, this approach yields a prior peaked at $\tau_{\rm inc}\approx0.1-0.15$ Gyr,
which \cite{2021arXiv210510421V} labeled as J19 and adopted its fitting function
provided in Appendix G of \cite{2020JCAP...12..002V}.
For the quasars, \cite{2021arXiv210510421V} fixed
$\tau_{\rm inc}=t(z_{f}=20)$, under the assumption that
they were all seeded at redshift $z_{f}\sim20$. In their baseline analysis,
\cite{2021arXiv210510421V} set flat priors on $H_{0}\in[40,\;100]$ km $\rm s^{-1}$
$\rm Mpc^{-1}$ and $\Omega_{\rm m}\in[0.2,\;0.4]$, the J19 prior on $\tau_{\rm inc}$ for
the galaxies, and fixed $\tau_{\rm inc}=t(z_{f}=20)$ for the quasars. To verify the
reliability of our calculations, we have carried out a parallel analysis with the same
priors on $H_{0}$, $\Omega_{\rm m}$, and $\tau_{\rm inc}$ to ensure that our results
are consistent with each other. Figure~\ref{f2} shows the joint $H_{0}-
\Omega_{\rm m}-\tau_{\rm inc}$ posterior distributions obtained from the baseline
analysis suggested by \cite{2021arXiv210510421V}. Our 95\% confidence-level upper limit on
the reduced Hubble constant $h_{0}\equiv H_{0}/$(100 km $\rm s^{-1}$ $\rm Mpc^{-1}$)
$<0.732$ (all quoted upper limits will hereafter be at the 95\% confidence level)
is the same as that obtained by \cite{2021arXiv210510421V}. Our methodology can thus
reliably incorporate the constraints of \cite{2021arXiv210510421V}, producing results
consistent with their analysis.
As one can see from Equations~(\ref{eq:tU}) and (\ref{eq:halfGaussian}), however, the
inclusion of $\tau_{\rm inc}$ clearly results in a more stringent, less conservative limit
on $H_0$, depending on one's choice of the initial conditions. In addition, the derived
$\tau_{\rm inc}$ distribution from \cite{2019JCAP...03..043J} depends (though only weakly)
on the assumed $\Lambda$CDM cosmology. In order to be as conservative as possible, and
to provide the most reliable upper limits, we choose not to introduce
$\tau_{\rm inc}$ in Equation~(\ref{eq:halfGaussian}). For the rest of this section, we
shall therefore begin by conservatively constraining $H_0$ without the inclusion of
this incubation time.
\begin{figure}
\centerline{\includegraphics[keepaspectratio,clip,width=0.45\textwidth]{f3.eps}}
\vskip-0.1in
\caption{1D and 2D marginalized posterior distributions with the $1-2\sigma$ contours
for the parameters $H_{0}$ and $\Omega_{\rm m}$ in flat $\Lambda$CDM, using
the 114 age-redshift data shown in Figure~\ref{f1}. Different colored contours correspond to
different priors on $\Omega_{\rm m}$: flat prior $\Omega_{\rm m}\in[0.2,\;0.4]$
(red contours) and Gaussian prior $\Omega_{\rm m}=0.315\pm0.007$ (blue contours).}
\label{f3}
\end{figure}
In our analysis, we choose wide flat priors for $H_{0}\in[0,\;150]$ km $\rm s^{-1}$
$\rm Mpc^{-1}$ and $\Omega_{\rm m}\in[0.2,\;0.4]$. The 1D marginalized posterior
distributions and 2D plots of the $1-2\sigma$ confidence regions for these two parameters,
constrained by the 114 age-redshift data, are displayed in Figure~\ref{f3} (red contours).
These contours show that, whereas $\Omega_{\rm m}$ is not as well constrained, we can set an
upper limit on $H_{0}$, whose 95\% confidence-level value is $h_{0}<0.755$. This is roughly
consistent with its latest local measurement ($h_{0}=0.732\pm0.013$;
\citealt{2021ApJ...908L...6R}). To explore the impact of a $\tau_{\rm inc}$ prior,
\cite{2021arXiv210510421V} also analyzed the data without its inclusion, i.e., by setting
$\tau_{\rm inc}=0$ Gyr, finding in this case that $h_{0}<0.791$, which differs somewhat
from our result ($h_{0}<0.755$). The difference appears to be due to the fact that
\cite{2021arXiv210510421V} set a narrower prior on $h_{0}\in[0.4,\;1]$, while we put
$h_{0}\in[0,\;1.5]$. The relatively low values of $H_{0}$ are equally favored by the
half-Gaussian likelihood (Eqn.~\ref{eq:halfGaussian}).
If one insists on using $\Lambda$CDM as the background cosmology, the version that appears to
be consistent with the majority of observations is spatially flat, with a scaled matter
density $\Omega_{\rm m}\approx0.3$ (e.g., \citealt{2015PhRvD..92l3516A,2018ApJ...859..101S,2020A&A...641A...6P}).
Nevertheless, a peculiarity of the often-made comparison between the measurements
of $H_0$ at low and high redshifts in this model is that $H_0$ is constrained on its own for the
former, but only in concert with other parameters, particularly $\Omega_{\rm m}$, for the latter.
Thus, to investigate how our results may be affected by the priors for these other concordance
parameters, we sample the limits imposed on $H_0$ using alternate values of the matter density.
First, we adopt the Gaussian prior $\Omega_{\rm m}=0.315\pm0.007$ from {\it Planck}
\citep{2020A&A...641A...6P}. The resulting constraints are shown as blue contours in
Figure~\ref{f3}. In this case, the $\Omega_{\rm m}$ posterior unsurprisingly follows its
Gaussian prior, and $h_{0}$ is constrained to be $h_{0}<0.706$, representing a $2\sigma$
tension with its locally measured value. But it is important to note that it agrees with
the {\it Planck} inference ($h_{0}=0.674\pm0.005$; \citealt{2020A&A...641A...6P}).
This is very interesting because, in this case, the outcomes for $H_{0}$ and $\Omega_{\rm m}$
are mutually consistent between {\it Planck} and the OAO age-redshift data.
Next, we explore the impact of an $\Omega_{\rm m}$ prior by fixing its value to be
0.1, 0.3, 0.5, 0.7, and 0.9, respectively. The outcome of each case is presented in
Table~\ref{table1}. One can see that the inferred upper limit on $H_{0}$ does depend
quite significantly on $\Omega_{\rm m}$. That is, some of the impact of adjusting $H_{0}$
for the fits is mitigated by corresponding changes to $\Omega_{\rm m}$. And since the local
measurement of $H_{0}$ does not require $\Omega_{\rm m}$, while {\it Planck} uses both,
the tension between the two measurements may be due in part to the use of
$\Omega_{\rm m}$ in the latter, but not the former.
\begin{table}
\centering \caption{The 95\% confidence-level upper limits on $H_{0}$ with different $\Omega_{\rm m}$ priors}
\begin{tabular}{lccccc}
\hline
\hline
$\Omega_{\rm m}$ (fixed) & 0.1 & 0.3 & 0.5 & 0.7 & 0.9 \\
$H_{0}$ / [km $\rm s^{-1}$ $\rm Mpc^{-1}$] & $<112.4$ & $<72.2$ & $<56.6$ & $<47.9$ & $<42.4$ \\
\hline
\end{tabular}
\label{table1}
\end{table}
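The trend in Table~\ref{table1} follows directly from the fact that, at fixed $\Omega_{\rm m}$ in flat $\Lambda$CDM, $t(z)\propto1/H_{0}$, so the bound $t(z)\geq t_{\rm obj}$ from a single object inverts analytically. A sketch of this scaling (an illustrative single-object version; function name and reference value of $H_0$ are ours):

```python
import numpy as np
from scipy.integrate import quad

GYR_PER_KMSMPC = 977.79  # 1 km s^-1 Mpc^-1 = 1/977.79 Gyr^-1

def h0_max(z, t_obj, Om):
    """Upper limit on H0 (km/s/Mpc) implied by one object of age
    t_obj (Gyr) at redshift z, in flat LambdaCDM. Since t(z) = A(Om, z)/H0,
    the condition t(z) >= t_obj gives H0 <= A/t_obj, i.e.
    H0_max = H0_ref * t(z; H0_ref, Om) / t_obj for any reference H0_ref."""
    H0_ref = 70.0
    integrand = lambda zp: 1.0 / ((1.0 + zp) * (H0_ref / GYR_PER_KMSMPC)
                                  * np.sqrt(Om * (1.0 + zp)**3 + 1.0 - Om))
    t_ref, _ = quad(integrand, z, np.inf)  # age at z for H0_ref
    return H0_ref * t_ref / t_obj
```

Since the dimensionless age factor decreases monotonically with $\Omega_{\rm m}$, so does the inferred ceiling on $H_0$, reproducing the behaviour seen in Table~\ref{table1}.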
\begin{itemize}
\item The $R_{\rm h}=ct$ universe
\end{itemize}
The expansion rate in the $R_{\rm h}=ct$ universe \citep{2003eisb.book.....M,2007MNRAS.382.1917M,2013A&A...553A..76M,2012MNRAS.419.2579M,2015AJ....150...35W,Melia2020},
is given as
\begin{equation}
H^{R_{\rm h}=ct}\left(z,\;\boldsymbol{\theta}\right)=H_{0}\left(1+z\right)\;.
\end{equation}
The $R_{\rm h}=ct$ cosmology has only one free parameter, i.e., $\boldsymbol{\theta}=\{H_{0}\}$.
Here we also set a flat prior on $H_{0}\in[0,\;150]$ km $\rm s^{-1}$ $\rm Mpc^{-1}$.
The results of fitting the 114 age-redshift data with this cosmology are shown in the left panel
of Figure~\ref{f4}. We find an upper limit of $h_{0}<0.861$ at the 95\% confidence-level,
in good agreement with the locally measured $H_{0}$.
\begin{itemize}
\item Einstein-de Sitter
\end{itemize}
The Einstein-de Sitter universe is characterized by a cosmic fluid containing only matter.
In this model, $H_{0}$ is the sole free parameter, i.e., $\boldsymbol{\theta}=\{H_{0}\}$,
and the Hubble rate is expressed as
\begin{equation}
H^{\rm EdS}\left(z,\;\boldsymbol{\theta}\right)=H_{0}\left(1+z\right)^{3/2}\;.
\end{equation}
With the flat prior on $H_{0}\in[0,\;150]$ km $\rm s^{-1}$ $\rm Mpc^{-1}$, we find that
a low upper limit of $h_{0}<0.401$ is required in order to ensure the Universe is older than
the OAO (see the right panel of Figure~\ref{f4}). The Einstein-de Sitter universe can thus be
safely excluded, given that this inferred upper limit on $H_{0}$ is in $25.5\sigma$ tension with
the locally measured $H_{0}$.
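Both of these one-parameter models admit closed-form ages when their $H(z)$ is inserted into Equation~(\ref{eq:tU}): $t(z)=1/[H_{0}(1+z)]$ for $R_{\rm h}=ct$ and $t(z)=2/[3H_{0}(1+z)^{3/2}]$ for Einstein-de Sitter. A sketch (function names ours):

```python
GYR_PER_KMSMPC = 977.79  # converts 1/H0 from (km/s/Mpc)^-1 to Gyr

def age_rhct(z, H0):
    """R_h = ct universe: t(z) = 1 / [H0 (1+z)], in Gyr."""
    return GYR_PER_KMSMPC / (H0 * (1.0 + z))

def age_eds(z, H0):
    """Einstein-de Sitter: t(z) = 2 / [3 H0 (1+z)^{3/2}], in Gyr."""
    return 2.0 * GYR_PER_KMSMPC / (3.0 * H0 * (1.0 + z)**1.5)
```

At the locally measured $H_{0}\approx73.2$ km $\rm s^{-1}$ $\rm Mpc^{-1}$, Einstein-de Sitter yields $t_0\approx8.9$ Gyr, far younger than the oldest local objects, which is why such a low upper limit on $H_0$ results; the $R_{\rm h}=ct$ age, $t_0\approx13.4$ Gyr, faces no such difficulty.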
\begin{figure}
\begin{center}
\includegraphics[width=0.23\textwidth]{f4a.eps}
\includegraphics[width=0.23\textwidth]{f4b.eps}
\vskip-0.1in
\caption{1D posterior distributions of the Hubble constant $H_{0}$ in the $R_{\rm h}=ct$
universe (left panel) and the Einstein-de Sitter universe (right panel), constrained by the
114 age-redshift data.}
\label{f4}
\vskip-0.2in
\end{center}
\end{figure}
\section{A Cosmology-independent Estimate of the Spatial Curvature}
\label{sec:OmegaK}
In this section, we obtain the curvature-dependent luminosity distance to the OAO, based
on their age measurements, and estimate the spatial curvature constant by comparing it
with the empirically derived distance modulus of SNe Ia.
\subsection{Curvature-dependent distance from the age of OAO}
In the FLRW spacetime, the luminosity distance $D_{L}(z)$ may be written using the first
derivative of the age, $t$, of the Universe,
\begin{eqnarray}
D_{L}\left(z\right)&=& \nonumber\frac{c}{H_{0}} \frac{\left(1+z\right)}{\sqrt{|\Omega_{k}|}}\;\times\\
& &{\rm sinn}\left\{H_{0}\sqrt{|\Omega_{k}|}\int_{z}^{0}\left(1+z'\right)\frac{dt}{dz'}dz'\right\}\;.
\label{eq:DL}
\end{eqnarray}
In this expression, sinn is $\sinh$ when $\Omega_{k}>0$ and $\sin$ when $\Omega_{k}<0$. For
a flat universe with $\Omega_{k}=0$, the right-hand side of this expression simplifies to
the form $(1+z)c$ times the integral. Thus, if one can access the quantity $dt/dz$ at the
required redshift, without pre-assuming a particular cosmological model, one may reconstruct
the line-of-sight comoving distance $D_{C}(z)=c\int_{z}^{0}\left(1+z'\right)\frac{dt}{dz'}dz'$
along with the curvature-dependent luminosity distance, $D_{L}(\Omega_{k};\;z)$. The idea
that $dt/dz$ may be obtained from the age-redshift measurements of old objects has been
suggested on various occasions \citep{2002ApJ...573...37J,2017GReGr..49..150J,
2017JCAP...03..028R}.
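Once $dt/dz$ is available as a smooth function, Equation~(\ref{eq:DL}) can be evaluated directly. A minimal sketch (function name and unit conventions are ours; $dt/dz$ is assumed to be supplied in Gyr, and is negative):

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458      # speed of light [km/s]
GYR_PER_KMSMPC = 977.79  # 1 km s^-1 Mpc^-1 = 1/977.79 Gyr^-1

def lum_distance(z, dtdz, H0, Ok):
    """Curvature-dependent D_L in Mpc from Eq. (5).
    dtdz(z) is a callable returning dt/dz in Gyr."""
    H0_gyr = H0 / GYR_PER_KMSMPC
    # int_z^0 (1+z') dt/dz' dz'  [Gyr]; positive because dt/dz < 0
    I, _ = quad(lambda zp: (1.0 + zp) * dtdz(zp), z, 0.0)
    if Ok > 0:   # open: sinn -> sinh
        x = H0_gyr * np.sqrt(Ok) * I
        return (C_KM_S / H0) * (1.0 + z) * np.sinh(x) / np.sqrt(Ok)
    if Ok < 0:   # closed: sinn -> sin
        x = H0_gyr * np.sqrt(-Ok) * I
        return (C_KM_S / H0) * (1.0 + z) * np.sin(x) / np.sqrt(-Ok)
    # flat limit: D_L = c (1+z) * I, with c in Mpc/Gyr
    return C_KM_S * (1.0 + z) * I / GYR_PER_KMSMPC
```

For a fixed $dt/dz$ curve, an open geometry ($\Omega_k>0$) yields a larger $D_L$ and a closed one a smaller $D_L$ than the flat case, which is precisely the leverage that the comparison with SN Ia distances exploits.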
Since we are primarily interested in the derivative $dt/dz$ and the present age of the Universe
from other observations, and much less so in the incubation time $\tau_{\rm inc}$, we choose to
directly fit the original estimated ages $t_{\rm obj}(z)$ of the OAO. Taking $\tau_{\rm inc}$
to be constant, one may see that $t(z)$ differs from $t_{\rm obj}(z)$ by just a constant. That is,
\begin{equation}
t\left(z\right)=t_{\rm obj}\left(z\right)+\tau_{\rm inc}\;.
\label{eq:tz}
\end{equation}
In principle, we may use the age-redshift data of all 114 OAO up to $z\sim8$ compiled by
\cite{2021arXiv210510421V} to estimate $dt/dz$. However, this catalog includes two different
kinds of source, viz., 61 galaxies and 53 quasars, each of which has its own distinct
incubation time. For this analysis, we therefore employ only the ages of the 61 galaxies
distributed over the redshift interval $0.001\leq z \leq 6.689$ to estimate $dt/dz$.
The advantage of solely using galaxies is the relative uniformity of the sample.\footnote{
Note, however, that there is no guarantee that all the galaxies constitute a
homogeneous sample either. This analysis should perhaps be carried out for each sub-sample
separately. But since the current sub-sample size is admittedly small, we use all the
galaxies for our analysis.} The originally estimated ages, $t_{\rm obj}$, of the 61
galaxies are indicated as a function of redshift by the black points in Figure~\ref{f1}.
In our analysis, we construct the age function $t_{\rm obj}(z)$ in a cosmology-independent
way by fitting a third-order polynomial, with the initial condition
$t_{\rm obj}(z\rightarrow\infty)=0$, to the age-redshift data. To mitigate the convergence
problem that the polynomial fit encounters at high redshifts, we recast the $t_{\rm obj}(z)$
function in the form of the $y$-redshift, defined by the relation $y=z/(1+z)$. In this way,
the age in $z\in[0,\;\infty)$ is mapped into $y\in[0,\;1]$, so that the polynomial fit is
well behaved throughout the redshift range from our local Universe to the Big Bang. This
polynomial is then expressed as
\begin{equation}
t_{\rm obj}\left(y\right)=a_{0}+a_{1}y+a_{2}y^{2}+a_{3}y^{3}\;,
\label{eq:polynomial}
\end{equation}
where $a_{1}$, $a_{2}$, and $a_{3}$ are three free parameters (all in units of Gyr). With
the initial condition $t_{\rm obj}(z\rightarrow\infty)=t_{\rm obj}(y=1)=0$, it is easy to
identify $a_{0}\equiv-a_{1}-a_{2}-a_{3}$. For $z=0$, Equation~(\ref{eq:tz}) simplifies to
$t_{0}=a_{0}+\tau_{\rm inc}$. Once we have the inferred value of $a_{0}$ and know the
present age of the Universe $t_{0}$, we can also estimate $\tau_{\rm inc}$.
As we assume $\tau_{\rm inc}$ to be constant, we have $\frac{dt_{\rm obj}}{dz}=\frac{dt}{dz}$.
Thus, by differentiating the polynomial (Eqn.~\ref{eq:polynomial}), we obtain
\begin{equation}
\frac{dt}{dz}=\frac{a_{1}}{\left(1+z\right)^{2}}+\frac{2a_{2}z}{\left(1+z\right)^{3}}+\frac{3a_{3}z^{2}}{\left(1+z\right)^{4}}\;.
\label{eq:dtdz}
\end{equation}
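Both the polynomial and its derivative transcribe directly into code; the short sketch below (function and variable names are our own) checks the chain-rule result against a finite difference, using the galaxy + corrected SN coefficients as illustrative values.

```python
import numpy as np

def t_obj(z, a1, a2, a3):
    """Polynomial age model of Eq. (eq:polynomial) in y = z/(1+z),
    with a0 = -(a1 + a2 + a3) enforcing t(y=1) = 0."""
    y = z / (1.0 + z)
    return -(a1 + a2 + a3) + a1 * y + a2 * y**2 + a3 * y**3

def dtdz(z, a1, a2, a3):
    """Closed form of Eq. (eq:dtdz), obtained via dy/dz = 1/(1+z)^2."""
    return (a1 / (1.0 + z)**2 + 2.0 * a2 * z / (1.0 + z)**3
            + 3.0 * a3 * z**2 / (1.0 + z)**4)

# central finite-difference cross-check at an arbitrary redshift
a = (-14.21, -5.37, 6.76)        # illustrative coefficients [Gyr]
z, h = 1.5, 1e-6
numeric = (t_obj(z + h, *a) - t_obj(z - h, *a)) / (2.0 * h)
assert abs(numeric - dtdz(z, *a)) < 1e-5
```

Note that at $z=0$ the derivative reduces to $a_1$ alone, a useful spot check.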
Then, the curvature-dependent luminosity distance can be derived by substituting
Equation~(\ref{eq:dtdz}) into (\ref{eq:DL}), i.e.,
\begin{eqnarray}
D_{L}\left(z\right)&=& \nonumber\frac{c}{H_{0}} \frac{\left(1+z\right)}{\sqrt{|\Omega_{k}|}}\;{\rm sinn}\{H_{0}\sqrt{|\Omega_{k}|}\\
& & \times \int_{z}^{0}\left[\frac{a_{1}}{1+z'}+\frac{2a_{2}z'}{\left(1+z'\right)^{2}}+\frac{3a_{3}z'^{2}}{\left(1+z'\right)^{3}} \right]dz'\}\;.
\label{eq:DL_age}
\end{eqnarray}
We can further obtain the reconstructed distance modulus
$\mu_{\rm age}(\Omega_{k},\;a_1,\;a_2,\;a_3;\;z)$ using the age-redshift data:
\begin{equation}
\mu_{\rm age}\left(\Omega_{k},\;a_1,\;a_2,\;a_3;\;z\right)=5\log_{10}\left[\frac{D_{L}\left(z\right)}{\rm Mpc}\right]+25\;.
\label{eq:mu_age}
\end{equation}
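Equation~(\ref{eq:mu_age}) is the standard distance-modulus definition; as a trivial sketch (the function name is ours):

```python
import math

def distance_modulus(D_L_mpc):
    """mu = 5 log10(D_L / Mpc) + 25, so that mu = 0 at D_L = 10 pc."""
    return 5.0 * math.log10(D_L_mpc) + 25.0

assert abs(distance_modulus(1e-5)) < 1e-9        # 10 pc
assert abs(distance_modulus(10.0) - 30.0) < 1e-9
```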
\subsection{Distance from observations of SNe Ia}
By comparing the curvature-dependent luminosity distance $D_{L}(\Omega_{k},\;z)$ derived from
the age-redshift data with the empirically-derived luminosity distance (at similar redshifts)
we can obtain a model-independent measurement of $\Omega_{k}$. For the latter, we use the
Pantheon sample, currently the largest compilation of SNe Ia, consisting of 1,048 objects
in the redshift range $0.01<z<2.3$ \citep{2018ApJ...859..101S}.
The observed distance modulus of each SN is given as
\begin{equation}
\mu_{\rm SN}=m_{B}+\alpha x_{1}-\beta {\mathcal C}-M^{\star}_{B}\;,
\label{eq:mu_SN}
\end{equation}
where $m_{B}$ is the observed $B$-band apparent magnitude, $x_{1}$ is the light-curve stretch
factor, and ${\mathcal C}$ is the SN color at maximum brightness. The absolute $B$-band
magnitude $M^{\star}_{B}$ is correlated with the host galaxy mass $M_{\rm stellar}$ via
a simple step function \citep{2014A&A...568A..22B,2018ApJ...859..101S}:
\begin{equation}\label{HSFR}
M^{\star}_{B} = \left\lbrace \begin{array}{ll} M_{B}+\Delta_{M}~~~~~~~~~{\rm for}~~~M_{\rm stellar}>10^{10}M_{\odot}\\
M_{B}~~~~~~~~~~~~~~~~~~{\rm otherwise}\;, \\
\end{array} \right.
\end{equation}
where $\Delta_{M}$ corresponds to a distance correction based on $M_{\rm stellar}$. Note that
$\alpha$, $\beta$, $M_{B}$, and $\Delta_{M}$ are nuisance parameters that need to be constrained
simultaneously with the cosmological parameters. As such, the derived SN distance is typically
dependent on the chosen cosmology. To avoid this, \cite{2017ApJ...836...56K} introduced an
approximate method called BEAMS with Bias Corrections (BBC) to correct those expected biases
and simultaneously fit for the SN nuisance parameters. The BBC fit produces a bin-averaged
Hubble diagram of SNe Ia, and then the nuisance parameters $\alpha$ and $\beta$ are constrained
by fitting to a reference cosmological model with fixed values of the matter density
$\Omega_{\rm m}$ and equation-of-state of dark energy $w$. Within each redshift bin,
the local shape of the Hubble diagram is assumed to be well described by the reference
cosmological model. If there are sufficient redshift bins, the fitted parameters $\alpha$
and $\beta$ will converge to consistent values \citep{2011ApJ...740...72M,2017ApJ...836...56K}.
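The Tripp-style standardization of Equation~(\ref{eq:mu_SN}) and the host-mass step of Equation~(\ref{HSFR}) transcribe directly into code; the fragment below is a minimal sketch with our own function and argument names.

```python
def absolute_magnitude(M_B, Delta_M, M_stellar):
    """Host-mass step of Eq. (HSFR): SNe in hosts more massive than
    1e10 solar masses receive the offset Delta_M."""
    return M_B + Delta_M if M_stellar > 1e10 else M_B

def mu_sn(m_B, x1, color, alpha, beta, M_B_star):
    """Standardized distance modulus of Eq. (mu_SN)."""
    return m_B + alpha * x1 - beta * color - M_B_star
```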
With the BBC method, \cite{2018ApJ...859..101S} report the corrected apparent magnitudes
$m_{\rm corr}=m_{B}+\alpha x_{1}-\beta {\mathcal C} -\Delta_{M}+\Delta_{B}$ for all the SNe,
where $\Delta_{B}$ is the added distance correction. Given these corrected apparent magnitudes,
we just need to subtract the absolute magnitude $M_{B}$ from $m_{\rm corr}$ to derive the observed
distance moduli:
\begin{equation}
\mu_{\rm SN}=m_{\rm corr}-M_{B}\;.
\label{eq:mu_SNcorr}
\end{equation}
The caveat with this approach, however, is that the format assumes all cosmological models
are nested, which is not true in general. This formulation may be used approximately for
various versions of $\Lambda$CDM, but not for other models, such as $R_{\rm h}=ct$, whose
luminosity distance does not depend on parameters such as $\Omega_{k}$. The upshot is
that the results we report below pertain specifically to $\Lambda$CDM, not necessarily to
other FLRW models, or to models based on alternative theories of gravity.
Even within the context of $\Lambda$CDM, however, there may still be some residual model
dependence, so to test how serious this limitation might be, we take the following approach.
The inferred values of $\alpha$ and $\beta$ in the BBC method are valid only for the
reference model. We therefore consider two different cases: first, the determination
of $\alpha$ and $\beta$ is assumed to be independent of the model, and we directly use
those corrected apparent magnitudes reported by \cite{2018ApJ...859..101S} for our purpose;
second, we carry out a parallel analysis of the uncorrected SN magnitudes by re-constraining
$\alpha$ and $\beta$ as nuisance parameters, and we compare the results.
\subsection{Analysis and results}
We constrain all of the free parameters via a joint analysis involving the galaxy age and SN Ia
data. The final log-likelihood sampled by the Python MCMC module EMCEE is a sum of the separate
likelihoods of the galaxy ages and SNe Ia:
\begin{equation}
\ln\left({\mathcal L}_{\rm tot}\right) = \ln\left({\mathcal L}_{t_{\rm obj}}\right) + \ln\left(\mathcal{L}_{\rm SN}\right)\;,
\end{equation}
where
\begin{equation}
\ln\left({\mathcal L}_{t_{\rm obj}}\right) = -\frac{1}{2}\sum_{i=1}^{61}\frac{\left[t_{{\rm obj}, i}^{\rm obs}-t_{\rm obj}^{\rm fit}\left(a_1,\;a_2,\;a_3;\;z_{i}\right)\right]^{2}}{\sigma_{t_{{\rm obj}, i}}^{2}}
\label{eq:Lage}
\end{equation}
and
\begin{equation}
-2 \ln\left(\mathcal{L}_{\rm SN}\right) = \Delta \hat{\boldsymbol{\mu}}^{T} \cdot {\bf Cov}^{-1} \cdot \Delta \hat{\boldsymbol{\mu}}\;.
\label{eq:Lsne}
\end{equation}
In Equation~(\ref{eq:Lage}), $\sigma_{t_{{\rm obj}, i}}$ is the uncertainty of the $i$-th age measurement
$t_{{\rm obj}, i}^{\rm obs}$ and $t_{\rm obj}^{\rm fit}\left(a_1,\;a_2,\;a_3;\;z_{i}\right)$ is obtained from
Equation~(\ref{eq:polynomial}). In Equation~(\ref{eq:Lsne}),
$\Delta \hat{\mu}=\hat{\mu}_{\rm SN}(M_{B};\;z)-\hat{\mu}_{\rm age}(\Omega_{k},\;a_1,\;a_2,\;a_3;\;z)$
is the data vector, defined by the difference between the distance modulus $\mu_{\rm SN}$ of SNe Ia
(Eqn.~\ref{eq:mu_SNcorr}) and the constructed distance modulus $\mu_{\rm age}$ from the galaxy
age-redshift data (Eqn.~\ref{eq:mu_age}), and $\bf{Cov}$ is a full covariance matrix that
contains both statistical and systematic uncertainties of SNe. Note that in the SN likelihood
estimation, there is a degeneracy between $H_0$ and $M_{B}$. We therefore adopt a fiducial
$H_0=70$ km $\rm s^{-1}$ $\rm Mpc^{-1}$ for the sake of constraining $M_{B}$. In this case,
the free parameters are: the spatial curvature parameter $\Omega_{k}$, the three polynomial
coefficients ($a_1$, $a_2$, $a_3$), and the SN absolute magnitude $M_{B}$.
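The two likelihood pieces of Equations~(\ref{eq:Lage}) and (\ref{eq:Lsne}) can be sketched as follows (our own variable names; in practice this sum is what the MCMC sampler evaluates at each step):

```python
import numpy as np

def ln_like_ages(t_obs, sigma_t, t_fit):
    """Gaussian age likelihood of Eq. (eq:Lage)."""
    return -0.5 * np.sum(((t_obs - t_fit) / sigma_t) ** 2)

def ln_like_sne(mu_sn, mu_age, cov):
    """SN likelihood of Eq. (eq:Lsne) with the full covariance matrix."""
    dmu = mu_sn - mu_age
    return -0.5 * dmu @ np.linalg.solve(cov, dmu)

def ln_like_total(t_obs, sigma_t, t_fit, mu_sn, mu_age, cov):
    """Joint log-likelihood of the galaxy-age and SN data sets."""
    return (ln_like_ages(t_obs, sigma_t, t_fit)
            + ln_like_sne(mu_sn, mu_age, cov))
```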
\begin{figure*}
\vskip-0.1in
\centerline{\includegraphics[keepaspectratio,clip,width=0.8\textwidth]{f5.eps}}
\vskip-0.1in
\caption{1D and 2D marginalized posterior distributions with the $1-2\sigma$ contours
for the cosmic curvature $\Omega_{k}$, the polynomial coefficients ($a_1$, $a_2$, $a_3$),
and the SN absolute magnitude $M_{B}$, based on the joint analysis of the galaxy age and corrected SN magnitude data.
The vertical solid lines represent the median parameter values, whereas the vertical dashed lines
indicate $\pm1\sigma$ deviations from their respective means.}
\label{f5}
\end{figure*}
\begin{table*}
\centering \caption{Constraints on all parameters with different choices of data}
\begin{tabular}{lccccccccc}
\hline
\hline
Data & $\Omega_{k}$ & $a_1$ & $a_2$ & $a_3$ & $M_B$ & $\alpha$ & $\beta$ & $\Delta_{M}$ & $\sigma_{\rm int}$ \\
& & (Gyr) & (Gyr) & (Gyr) & & & & & \\
\hline
galaxy + corrected SN & $0.43^{+0.27}_{-0.27}$ & $-14.21^{+1.44}_{-1.46}$ & $-5.37^{+0.91}_{-0.95}$ & $6.76^{+1.15}_{-1.13}$ & $-19.40^{+0.23}_{-0.21}$ & -- & -- & -- & -- \\
galaxy + uncorrected SN & $0.59^{+0.18}_{-0.17}$ & $-15.28^{+1.53}_{-1.54}$ & $-3.78^{+0.71}_{-0.74}$ & $6.02^{+1.06}_{-1.05}$ & $-19.48^{+0.23}_{-0.21}$ & $0.132^{+0.005}_{-0.005}$ & $2.595^{+0.057}_{-0.056}$ & $0.052^{+0.009}_{-0.009}$ & $0.079^{+0.006}_{-0.006}$ \\
\hline
\end{tabular}
\label{table2}
\end{table*}
The 1D marginalized posterior distributions and 2D regions with $1-2\sigma$ contours corresponding
to these five free parameters, constrained by the galaxy ages and corrected SN magnitudes, are
presented in Figure~\ref{f5}. These contours show that, at the $1\sigma$ confidence level, the
inferred parameter values are $\Omega_{k}=0.43^{+0.27}_{-0.27}$, $a_1=-14.21^{+1.44}_{-1.46}$,
$a_2=-5.37^{+0.91}_{-0.95}$, $a_3=6.76^{+1.15}_{-1.13}$, and $M_{B}=-19.40^{+0.23}_{-0.21}$. The
corresponding results for the galaxy + corrected SN data are summarized in Table~\ref{table2}.
With this approach, we find that the spatial geometry of the Universe is marginally
consistent with spatial flatness at a $1.6\sigma$ level of confidence.
As noted earlier, our procedure allows us to determine the inferred value of $a_{0}$ along with
the best-fit polynomial coefficients, i.e., $a_{0}\equiv-a_{1}-a_{2}-a_{3}=12.82\pm2.06$ Gyr.
Considering the present age of the Universe as inferred from {\it Planck} in the context of
flat $\Lambda$CDM ($t_{0}=13.80\pm0.02$ Gyr; \citealt{2020A&A...641A...6P}), we can further
estimate the incubation time as $\tau_{\rm inc}=t_{0}-a_{0}=0.98\pm2.06$ Gyr.
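The quoted uncertainty follows from simple quadrature propagation, under the assumption that the two age estimates are independent:

```python
import math

a0, sig_a0 = 12.82, 2.06   # Gyr, from the polynomial fit (a0 = -a1-a2-a3)
t0, sig_t0 = 13.80, 0.02   # Gyr, Planck flat-LCDM age of the Universe

tau_inc = t0 - a0                     # incubation time
sig_tau = math.hypot(sig_a0, sig_t0)  # errors added in quadrature
assert abs(tau_inc - 0.98) < 1e-9
assert abs(sig_tau - 2.06) < 5e-4     # Planck error is negligible here
```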
Next, to investigate how sensitive our results of $\Omega_{k}$ are to the choice of corrected
SN magnitudes provided by the Pantheon team, we also perform a (parallel) comparative analysis
of the galaxy + uncorrected SN data by simultaneously constraining the nuisance parameters along
with $\Omega_{k}$. The likelihood function of SNe now becomes
\begin{equation}
\mathcal{L}_{\rm SN} = \prod_{i=1}^{1048}\frac{1}{\sqrt{2\pi}\sigma_{{\rm stat},i}}
\exp\left(-\frac{{\Delta \mu_{i}}^{2}}{2\sigma_{{\rm stat},i}^{2}}\right)\;,
\label{eq:Lsne2}
\end{equation}
where
$\Delta \mu_{i}=\mu_{\rm SN}(\alpha,\;\beta,\;M_{B},\;\Delta_{M};\;z_{i})-\mu_{\rm age}(\Omega_{k},\;a_1,\;a_2,\;a_3;\;z_{i})$
is the difference between the distance modulus $\mu_{\rm SN}$ of SN Ia (Eqn.~\ref{eq:mu_SN})
and the distance modulus $\mu_{\rm age}$ constructed from the galaxy age-redshift data
(Eqn.~\ref{eq:mu_age}), and $\sigma_{{\rm stat},i}$ is the statistical uncertainty of each SN,
given by the expression
\begin{equation}
\begin{split}
\sigma_{{\rm stat},i}^{2}=\sigma^{2}_{m_{B},i}+\alpha^{2}\sigma^{2}_{x_{1},i}+\beta^{2}\sigma^{2}_{\mathcal{C},i}\qquad\qquad\qquad~~~~~\\
+2\alpha C_{m_{B}\,x_{1},\,i}-2\beta C_{m_{B}\,\mathcal{C},\,i}-2\alpha\beta C_{x_{1}\,\mathcal{C},\,i}\qquad~\\
+\sigma^{2}_{\mu-z,i}+\sigma^{2}_{{\rm lens},i}+\sigma^{2}_{\rm int}\;.\qquad\qquad\qquad~~~~~~
\end{split}
\label{eq:sigstat}
\end{equation}
Here, $\sigma_{{m_{B}},i}$, $\sigma_{x_{1},i}$, and $\sigma_{{\mathcal{C}},i}$ stand for the
uncertainties of the peak magnitude and light-curve parameters of the $i$-th~SN, the terms
$C_{m_{B}\,x_{1},\,i},\;C_{m_{B}\,\mathcal{C},\,i}$, and $C_{x_{1}\,\mathcal{C},\,i}$ represent
the covariances among $m_{B},\;x_{1},\;\mathcal{C}$ for the $i$-th~SN, $\sigma_{{\rm lens},i}$
is the uncertainty from stochastic gravitational lensing, and $\sigma_{\rm int}$ is the unknown
intrinsic uncertainty. The dispersion $\sigma_{\mu-z,i}=5\sqrt{\sigma_{z_{\rm pec}}^{2}+
\sigma_{z_{i}}^{2}}/\left(z_{i}\ln10\right)$ accounts for the uncertainty from the peculiar
velocity uncertainty $\sigma_{z_{\rm pec}}$ and redshift measurement uncertainty
$\sigma_{z_{i}}$ in quadrature. We follow \cite{2018ApJ...859..101S} in using
$c\sigma_{z_{\rm pec}}=240$ km $\rm s^{-1}$, as well as $\sigma_{{\rm lens},i}=0.055z_{i}$.
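The per-SN variance of Equation~(\ref{eq:sigstat}) assembles as in the sketch below (our own argument names; the peculiar-velocity default and lensing term follow the values just quoted):

```python
import numpy as np

def sigma_stat_sq(z, sig_z, sig_mB, sig_x1, sig_C, C_mx, C_mC, C_xC,
                  alpha, beta, sig_int, sigma_zpec=240.0 / 299792.458):
    """Statistical variance of Eq. (eq:sigstat) for a single SN."""
    light_curve = (sig_mB**2 + alpha**2 * sig_x1**2 + beta**2 * sig_C**2
                   + 2.0 * alpha * C_mx - 2.0 * beta * C_mC
                   - 2.0 * alpha * beta * C_xC)
    sig_mu_z = 5.0 * np.sqrt(sigma_zpec**2 + sig_z**2) / (z * np.log(10.0))
    sig_lens = 0.055 * z
    return light_curve + sig_mu_z**2 + sig_lens**2 + sig_int**2
```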
Only the statistical uncertainties are considered since the six-parameter systematic
covariance matrices ($m_{B}$, $x_1$, $\mathcal C$, $m_{B}\mathcal C$, $x_{1}m_{B}$,
$x_{1}\mathcal C$) are not available in \cite{2018ApJ...859..101S}. In this case, the
free parameters are the curvature parameter $\Omega_{k}$, the three polynomial coefficients
($a_1$, $a_2$, $a_3$), and the SN nuisance parameters ($\alpha$, $\beta$, $M_{B}$,
$\Delta_{M}$, $\sigma_{\rm int}$). These nine parameters are constrained to be
$\Omega_{k}=0.59^{+0.18}_{-0.17}$, $a_1=-15.28^{+1.53}_{-1.54}$, $a_2=-3.78^{+0.71}_{-0.74}$,
$a_3=6.02^{+1.06}_{-1.05}$, $\alpha=0.132^{+0.005}_{-0.005}$, $\beta=2.595^{+0.057}_{-0.056}$,
$M_{B}=-19.48^{+0.23}_{-0.21}$, $\Delta_{M}=0.052^{+0.009}_{-0.009}$, and
$\sigma_{\rm int}=0.079^{+0.006}_{-0.006}$, which are displayed in Figure~\ref{f6} and
summarized in Table~\ref{table2}. The comparison between lines 1 and 2 of Table~\ref{table2}
suggests that simply using the corrected SN magnitudes introduces a non-negligible disparity
in the results. The value of $\Omega_{k}$ inferred from the galaxy + corrected SN data
represents a $0.5\sigma$ tension with that measured from the galaxy + uncorrected SN data.
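The quoted $0.5\sigma$ figure corresponds to the naive comparison sketched below (treating the two partially correlated measurements as independent and symmetrizing the asymmetric errors, both of which are simplifying assumptions):

```python
import math

ok_corr, sig_corr = 0.43, 0.27        # galaxy + corrected SN
ok_uncorr, sig_uncorr = 0.59, 0.175   # galaxy + uncorrected SN (+0.18/-0.17 averaged)

tension = abs(ok_corr - ok_uncorr) / math.hypot(sig_corr, sig_uncorr)
assert abs(tension - 0.5) < 0.01
```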
\begin{figure*}
\vskip-0.1in
\centerline{\includegraphics[keepaspectratio,clip,width=1.0\textwidth]{f6.eps}}
\vskip-0.1in
\caption{Same as Figure~\ref{f5}, but now showing the constraints for the parameters
$\Omega_{k}$, $a_1$, $a_2$, $a_3$, $\alpha$, $\beta$, $M_{B}$, $\Delta_{M}$, and
$\sigma_{\rm int}$ based on the joint analysis of the galaxy age and uncorrected SN
magnitude data.}
\label{f6}
\end{figure*}
\section{Summary and Discussion}
\label{sec:summary}
In this work, we have used the age measurements of 114 OAO (including 61 galaxies and 53 quasars)
in the redshift range $0\lesssim z\lesssim 8$ to constrain the late-time cosmic expansion history
and explore the Hubble tension in several cosmological models. Owing to the age of the Universe
at any redshift being inversely proportional to the Hubble constant $H_0$, the requirement that
the Universe be older than the OAO it contains at any redshift provides an upper limit to $H_0$.
Assuming the validity of flat $\Lambda$CDM at late times, and setting wide flat priors on $H_0$
and $\Omega_{\rm m}$, we have obtained $H_0<75.5$ km $\rm s^{-1}$ $\rm Mpc^{-1}$ at the 95\%
confidence level, roughly consistent with local $H_0$ measurements. However, if a Gaussian prior
of $\Omega_{\rm m}=0.315\pm0.007$ informed by {\it Planck} is used, then the 95\% confidence
level upper limit on $H_0$ turned out to be $H_0<70.6$ km $\rm s^{-1}$ $\rm Mpc^{-1}$,
representing a $2\sigma$ tension with the locally measured value. It is compatible with
the {\it Planck} inference, however. This is interesting because, in this scenario,
$H_{0}$ and $\Omega_{\rm m}$ are mutually consistent between {\it Planck} and the OAO
age-redshift data. We found that the inferred upper limit on $H_{0}$ does depend quite
significantly on $\Omega_{\rm m}$. Since the local measurement of $H_{0}$ does not require
$\Omega_{\rm m}$, while {\it Planck} uses both, we conclude that the Hubble tension between
the two measurements may be due in part to the use of $\Omega_{\rm m}$ in one case
and not the other.
Besides $\Lambda$CDM, we also discussed how the $H_{0}$ limits may be obtained for
$R_{\rm h}=ct$ and Einstein-de Sitter. The $R_{\rm h}=ct$ universe fits the age-redshift
data with an upper limit of $H_0<86.1$ km $\rm s^{-1}$ $\rm Mpc^{-1}$. By comparison,
the Einstein-de Sitter universe fits the same data with an upper limit of $H_0<40.1$ km
$\rm s^{-1}$ $\rm Mpc^{-1}$. Obviously, Einstein-de Sitter fails to pass the cosmic age test,
because the inferred upper limit to $H_{0}$ in this model represents a $25.5\sigma$ tension
with the locally measured $H_{0}$. Our overall results affirm the idea that cosmic ages are
an extremely valuable probe in the quest towards uncovering the nature of the Hubble tension.
We have also proposed a novel method of estimating the spatial curvature, avoiding possible
biases introduced by the pre-assumption of a specific cosmological model. To perform our
analysis, we have considered the following cosmological data: 61 age measurements of galaxies
and 1,048 SNe Ia from the Pantheon compilation. Based on the geometric relation in the FLRW
metric, we have shown the possibility of obtaining the curvature-dependent luminosity distance
from a best-fit polynomial to the age-redshift data of old objects. By comparing this
curvature-dependent luminosity distance with the empirical luminosity distance inferred
from SNe Ia, we obtained a somewhat model-independent estimate of the curvature parameter
$\Omega_{k}$ based on the parametrization in $\Lambda$CDM.
\cite{2018ApJ...859..101S} applied the BBC method to determine the SN nuisance parameters and
reported the corrected apparent magnitudes for all the Pantheon SNe. Combining the age-redshift
measurements of galaxies with these corrected SN magnitudes, we have placed limits
simultaneously on the cosmic curvature $\Omega_{k}$, the polynomial coefficients
($a_1$, $a_2$, $a_3$), and the SN absolute magnitude $M_{B}$. This analysis suggests that
the curvature parameter is constrained to be $\Omega_{k}=0.43^{+0.27}_{-0.27}$, which is
marginally compatible with zero. That is, the spatial geometry of the Universe is marginally
consistent with spatial flatness at the $1.6\sigma$ level.
As the inferred values
of the SN nuisance parameters in the BBC method may depend on the reference cosmological
model, even within the context of $\Lambda$CDM, we also carried out this type of analysis
using the combined galaxy + uncorrected SN data sets by simultaneously constraining the curvature
parameter $\Omega_{k}$, the polynomial coefficients ($a_1$, $a_2$, $a_3$), and the SN nuisance
parameters ($\alpha$, $\beta$, $M_{B}$, $\Delta_{M}$, $\sigma_{\rm int}$). In this case, we
found that the constraint is $\Omega_{k}=0.59^{+0.18}_{-0.17}$. The value of $\Omega_{k}$
changes slightly, by about $0.5\sigma$, when the SN nuisance parameters are re-constrained
along with the cosmology, implying that simply using the corrected SN magnitudes would
introduce a non-negligible disparity in the results.
Such deviations from zero are not yet compelling enough to initiate a detailed investigation
of their implications. We point out, however, that there are several rather essential
consequences, should such an outcome be realized. First, spatial flatness is assumed to
be an indicator of inflation \citep{1981PhRvD..23..347G}. If the Universe is not spatially flat
after all, this would cast serious doubt on the possibility that inflation could have
happened. At the very least, it would require major modifications to most of
the inflation potentials proposed thus far. Note, however, that a de Sitter expansion does
not necessarily require spatial flatness. As such, several attempts have been
made to create an inflationary scenario with negative spatial curvature, leading to
an open Universe. This may happen, e.g., in the context of quantum tunnelling-induced
false vacuum decay \citep{1994PhRvD..50.5252R,1994ApJ...432L...5R,1995PhRvD..52.1837R,
1995PhRvD..52.3314B,2012JCAP...06..029K}.
Second, if it turns out that $\Omega_{k}$ is definitely positive, the Universe
must also have net positive energy density \citep{Melia2020}. This would
be very alarming in the context of a quantum-fluctuation origin for the Big Bang, since
it would rule out a `creation from nothing' scenario, in which all the laws of physics,
initial conditions and all the structure appeared as a quantum fluctuation at $t=0$ with
no pre-history. It would at the very least imply a pre-existing vacuum prior to the
expansionary event. Even so, one would then need to contend with the very serious
problem of how a Universe with the known value of Planck's constant and such an enormous
amount of energy could have lived long enough to classicalize and evolve into the
large-scale structure we see today \citep{Melia2020}.
There are good philosophical, if not empirical, reasons for believing that $\Omega_{k}$
must be zero. But we cannot yet make that claim without at least some doubt, certainly
not based on the analysis of the oldest astronomical objects in the Universe that we
have carried out in this paper.
\begin{acknowledgments}
We are grateful to Sunny Vagnozzi for systematically assembling the high-$z$ OAO catalog and
sharing it with us. JJW would like to thank Yan-Mei Han for her infinite patience while part
of this project was carried out at home during the Nanjing lockdown caused by the Covid-19
pandemic. This work is partially supported by the National Natural Science Foundation of China
(grant Nos.~11725314, U1831122, and 12041306), the Youth Innovation Promotion
Association (2017366), the Key Research Program of Frontier Sciences (grant No.
ZDBS-LY-7014) of Chinese Academy of Sciences, the Major Science and Technology
Project of Qinghai Province (2019-ZJ-A10), the China Manned Space Project (CMS-CSST-2021-B11),
and the Guangxi Key Laboratory for Relativistic Astrophysics.
We are also grateful to the anonymous referee for helpful comments.
\end{acknowledgments}
The Crab is the prototypical Pulsar Wind Nebula (PWN), characterized by a center-filled synchrotron nebula that is powered by a magnetized wind of charged particles emanating from a centrally located pulsar formed during the supernova explosion \citep{Weiler1978}. Due to its brightness, proximity of $\sim$2 kpc, and well-known explosion date of 1054 AD, the Crab is the best studied object of its kind, and the literature is rich with hundreds of publications and reviews describing its properties across all energy bands (see e.g. \citet{Hester2008} and \citet{Buhler2014} for a recent review).
Detailed images from the \textit{Hubble Space Telescope} \citep{Hester1995} and \textit{Chandra} \citep{Weisskopf2000} have revealed the nebula's morphological complexities. In the optical band the remnant measures $\sim$3$^{\prime}$\ across its longest axis with thermal filaments composed of ejecta from the explosion confining the synchrotron nebula. In X-rays the remnant is considerably smaller and shows both torus and jet structures. The symmetry axis is tilted at about $27^\circ$ to the plane of the sky with the NW edge closer to the observer \citep{Ng2004}. The jet emerges toward the observer to the SE, and a less collimated structure (the counter-jet) extends away to the NW. \citet{Ng2004} fit simple models to the morphology to obtain a radial flow velocity through the torus of $0.550 \pm 0.001 c$, and at these velocities relativistic beaming brightens the nearside (NW) of the torus.
The wind likely terminates at a shock zone about 10$^{\prime\prime}$\ from the pulsar \citep{Weisskopf2000}, and the post shock particles and magnetic fields propagate non-relativistically either by diffusion \citep{Gratton1972,Wilson19722} or advection \citep{Rees1974,KC19841,KC19842} to the edge of the remnant, emitting their energy as synchrotron radiation. Both the process of particle acceleration in relativistic shocks and the transport of particles and fields downstream, occur in other astrophysical settings such as gamma-ray bursts and jets in active galaxies, and studying them in a relatively nearby spatially resolved source, may have application beyond the understanding of the Crab and PWNe in general.
From radio to TeV, the emission from the Crab nebula is non-thermal and peaks in the optical through X-ray. At radio wavelengths the nebula's integrated emission is a power-law spectrum with index ($S_\nu \propto \nu^\alpha$) $\alpha = -0.299 \pm 0.009$ \citep{Baars1977}. In the optical, the synchrotron spectrum is steeper with a gradual turnover occurring somewhere between 10 and 1000 $\mu$m \citep{Marsden1984}. In the X-ray band the Crab nebula+pulsar photon index is $\Gamma \sim 2.1$ and further softens above 100\,keV to $\sim$2.23 \citep{Jourdain2009}. The Crab nebula component alone likewise softens above 100\,keV to $\sim$2.14 up to 300\,keV \citep{Pravdo1997} and softens further to $2.227 \pm 0.013$ between 0.75 and 30\,MeV \citep{Kuiper2001} where it continues to soften until 700\,MeV, beyond which the inverse Compton component sets in \citep{Meyer2010}. The pulsed spectrum follows a different spectral energy distribution (SED) with a more complex spectral evolution \citep{Kuiper2001,Weisskopf2011,Kirsch2006}.
In GeV $\gamma$-rays the Crab has been observed to flare rapidly \citep{Tavani2011,Abdo2011} roughly once a year for a duration of $\sim$10 days, with a flux increase of a factor of $\sim$5. The origins of these flaring episodes are currently not understood, but due to the rapidity of the flares and the fast cooling time of synchrotron radiation, the flare emission is presumed to be of synchrotron origin rather than inverse Compton or bremsstrahlung \citep{Abdo2011}.
No single model has yet been successfully devised to explain the properties of the nebula across all energy bands. The ratio of wind flow times to the particle radiative lifetimes is a strongly model-dependent parameter tied to the mechanism of energy transport and is observable as an energy-dependent nebular size. The prediction of both diffusion and advection models is a decrease in size of the nebula with increasing photon energy due to higher-energy electrons dying out sooner than lower-energy ones. In X-rays the expectation is for a diffusion-driven nebula to be smaller than one that is advection dominated. \citet{Ku1976} first collected broadband evidence confirming shrinkage of the Crab nebula, and this limited data set pointed towards a combination of diffusive and advective electron transport.
The spatial dependence of the broad-band spectrum provides another observational constraint on models. Detailed \textit{Chandra} observations \citep{Mori2004} show that the torus spectrum is roughly uniform with a steepening at the edge, in line with predictions made by \citet{KC19841,KC19842}, and indicates that for X-ray-emitting particles, the transport in the Crab nebula appears to be dominated by advection rather than diffusion.
\begin{table}
\caption{Observations Log}
\centering
\begin{tabular}{lcccl}
\hline
Obs ID & Date & \footnote{Effective exposure time corrected for dead-time.}Exposure & off-axis angle & Comment \\
& & seconds & arcminutes & \\
\hline
\hline
10013022002\footnote{Subset used for deconvolution analysis.} & 2012-09-20 & 2592 & 1.5/2.0 & Crab22 \\
10013022004$^b$ & 2012-09-21 & 2347 & 1.5/2.0 & Crab22 \\
10013022006$^b$ & 2012-09-21 & 2587 & 1.5/2.0 & Crab22 \\
10013031002$^b$ & 2012-10-25 & 2507 & 2.0/2.6 & Crab31 \\
10013032002$^b$ & 2012-11-04 & 2595 & 1.2/1.3 & Crab32 \\
10013034002 & 2013-02-14 & 988 & 0.8/1.5 & Crab34 \\
10013034004 & 2013-02-14 & 5720 & 0.7/0.9 & Crab34 \\
10013034005 & 2013-02-15 & 5968 & 0.9/0.6 & Crab34 \\
10013037002 & 2013-04-03 & 2679 & 2.0/2.4 & Crab37 \\
10013037004 & 2013-04-04 & 2796 & 2.9/3.5 & Crab37 \\
10013037006 & 2013-04-05 & 2944 & 2.9/3.5 & Crab37 \\
10013037008 & 2013-04-18 & 2814 & 3.0/3.7 & Crab37 \\
80001022002$^b$ & 2013-03-09 & 3917 & 1.5/1.8 & CrabToo \\
10002001002$^b$ & 2013-09-02 & 2608 & 1.72/2.09 & CrabSci \\
10002001004$^b$ & 2013-09-03 & 2386 & 1.73/2.26 & CrabSci \\
10002001006$^b$ & 2013-11-11 & 14260 & 1.08/1.28 & CrabSci \\
\hline
\end{tabular}
\label{obsid}
\end{table}
Above 10 keV, observations with the ability to spatially resolve the Crab nebula have been limited to one-dimensional scanning techniques and lunar occultation, which have limited signal-to-noise and lack true two-dimensional imaging \citep{Pelling1987}. The \textit{NuSTAR} high-energy X-ray focusing mission has the ability to make sensitive imaging observations of diffuse sources above 10~keV for the first time. \textit{NuSTAR's} two co-aligned telescopes operate from 3 -- 78 keV, and have an imaging resolution of 18$^{\prime\prime}$\ FWHM and $\sim1$$^{\prime}$\ Half Power Diameter \citep{Harrison2013}. \textit{NuSTAR} can image the Crab above 10 keV with sufficient resolution to investigate the spectral and spatial properties of the nebula.
In Section \ref{phaseaveraged} we present the global properties of the pulsar and nebula. In Section \ref{flaring} we discuss the \textit{NuSTAR} observations during the $\gamma$-ray flaring on the 9th of March 2013. In Section \ref{phaseresolved} we present phase-resolved spectroscopy of the Crab in the energy range 3 -- 78 keV, tracing the spectrum of the pulsar as a function of its phase. In Section \ref{spatialspectral} we present spectral maps of the inner 100$^{\prime\prime}$\ of the nebula, using pulse-off phase intervals, and in Section \ref{size} we present deconvolved \textit{NuSTAR} images to investigate the physical size of the Crab as a function of energy. In the discussion section we review and summarize the new findings.
\section{Observations and Data Reduction}
\textit{NuSTAR} has two co-aligned telescopes with corresponding focal planes FPMA and FPMB. Each focal plane consists of four hybrid hard X-ray pixel detectors with a total field of view of 12.5$^{\prime}$$\times 12.5$$^{\prime}$. The detectors are denoted by numbers 0 through 3, with the optical axis placed in the inner corner of detector 0. \textit{NuSTAR} observed the Crab multiple times throughout 2012 and 2013 as part of the instrument calibration campaign. The observations span a wide range of off-axis angles, some of which were deemed too far off-axis for the analysis presented here, and Table \ref{obsid} lists the chosen subset. All of these observations have the pulsar within 4$^{\prime}$\ of the optical axis and located on detector 0, with a total elapsed live-time corrected exposure of 59.7 ks. The accuracy of the absolute source location can vary due to the thermal environment with $3\sigma$ offsets typically on the order of 8$^{\prime\prime}$\ \citep{Harrison2013}, but because of the very peaked pulsar emission it is straightforward to register all observations to the pulsar location.
Despite being a bright target ($\sim$250 cts s$^{-1}$ per FPM with a dead time of $\sim50\%$), no special processing or pile-up corrections are needed since the focal plane detectors have a triggered readout with a short (1~$\mu$s) preamplifier shaping time \citep{Harrison2013}. The data were reduced using the NuSTARDAS pipeline version v1.3.0 and CALDB version 20131223 with all standard settings. We extracted spectra using standard scripts provided by the NuSTARDAS pipeline.
The background for the Crab only becomes important at energies above 60\,keV where the internal detector background dominates over sky background. Since the internal background varies from detector to detector, ideally one would extract background from the same detector as the source. Unfortunately the brightness and extent of the Crab precludes extracting a local background from the same detector. We therefore simulated the backgrounds using the \textit{nuskybgd} tool \citep{Wik2014} and created a master background for each observation for a 200$^{\prime\prime}$\ extraction radius. To test for fluctuations in the background, we ran numerous realizations fitting a typical representative spectrum. We found that the fits were not sensitive to the background fluctuations.
In the text and tables all errors are reported at 90\% confidence, and all fits are performed with XSPEC\footnote{http://heasarc.gsfc.nasa.gov/xanadu/xspec/} using Cash statistics \citep{Cash1979} unless otherwise stated. Spectra shown are rebinned for display purposes only.
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{crabfit80pix.pdf}
\end{center}
\caption{Ratio of the data to the best-fit model for all the Crab observations listed in Table \ref{obsid}. The structured residuals, such as those around 20 -- 25\,keV are related to the calibration process (piece-wise linear spline interpolation), but are small ($\sim$2\%), typically broad and at energies not to be mistaken for emission lines.}
\label{canonical}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{cumulativecrab.pdf}
\end{center}
\caption{Spectra and fit of FPMA assuming a power-law index of $\Gamma=2.1$. The normalizations of the curves have been scaled to illustrate how the index of the Crab softens with increasing extraction radius until at an extraction radius of 200$^{\prime\prime}$\ the power-law index of 2.1 is recovered.}
\label{cumulative}
\end{figure}
\section{Data analysis}
\subsection{Phase Averaged Spectroscopy}\label{phaseaveraged}
The spatially integrated spectrum of the Crab nebula + pulsar in the 1 -- 100\,keV X-ray band has been well-described by a power-law with photon index $\Gamma\sim 2.1$ (\textit{RXTE, BeppoSAX, EXOSAT, INTEGRAL}/JEM\_X) \citep{Kirsch2005}. Above 100\,keV the hard X-ray instruments (\textit{INTEGRAL}/SPI/ISGRI, \textit{CGRO}) measure a softer index of $\Gamma \sim$2.20 -- 2.25, and below 10\,keV instruments with CCD detectors measure a harder spectrum. This hardening in CCD X-ray instruments comes about from photon pile-up, and although models exist for these instruments to deal with the pile-up, the Crab usually still challenges the models and requires special non-standard reductions. In addition, it is common practice for these instruments to excise the piled-up regions, which removes part of the integrated spectrum; the result can thus technically no longer be directly compared to other instruments where this excision has not occurred. The non piled-up instruments covering the 1 -- 100\,keV band agree on a photon index $\Gamma = 2.1 \pm 0.02$ and have not measured any curvature in the Crab spectrum across this band.
Over the 16 years {\em RXTE} was operational and regularly monitoring the Crab, the spectral index showed peak-to-peak variations of $\Delta \Gamma \sim 0.025$ \citep{Shaposhnikov2012}, perhaps due to magnetosonic waves in the nebula (e.g., \citet{Spitkovsky2004}). This variation is consistent with the observed spread between instruments, but it is slow, and on average over the 16 years the Crab remained at $\Gamma = 2.1$. Because the average index covers several instruments and the deviations from it are small, \textit{NuSTAR} has calibrated the effective area against a Crab index of $\Gamma = 2.1$.
A total of 39 Crab observations, spanning off-axis angles from 0 -- 7$^{\prime}$, went into adjusting the effective area ancillary response files (ARF) of \textit{NuSTAR}. This was done using a piece-wise linear spline interpolation as a function of energy and off-axis angle. Cross-calibration campaigns on 3C\,273 and PKS2155-304 have been used to confirm that the ARF adjustments have not introduced a systematic offset, and well-known power-law sources, such as Cen A, have been used to confirm that they still appear as power-laws. We used the quasar 3C\,273 to calibrate the N$_H$ column and derived a value of N$_H= (2 \pm 2) \times 10^{21}$ cm$^{-2}$ for the Crab. At 3\,keV \textit{NuSTAR} is only very marginally sensitive to N$_H$ columns of $10^{21}$ cm$^{-2}$, which is the reason behind the large error on the N$_H$ column. We have frozen the column to the above value for all fits, employing Wilms abundances \citep{Wilms2000} and Verner cross-sections \citep{Verner1996}. An extensive discussion on the choice of N$_H$ and effective area calibration can be found in \citet{Madsen2014}.
Multiple instruments have registered a decline of $\sim 7\%$ in the Crab flux across the 15 -- 50\,keV band over a 2 year period \citep{Wilson2011}, corresponding to a flux decay of 3.5\% per year over the period it has been observed. We set the Crab normalization to 8.5 to optimize (and minimize) the cross-calibration constants between concurrent X-ray observatories (\textit{Chandra, Swift, Suzaku, XMM-Newton}). In what follows, however, the absolute flux, or variations in it, has no influence on the results.
We combine all the observations from Table \ref{obsid}, excluding those denoted by ``Crab34'' that have the central region of the source falling on the gap between detectors in a way that complicates flux correction. Figure \ref{canonical} shows the ratio of data to the best-fit power-law model with $\Gamma = 2.0963 \pm 0.0004$ and a cross-normalization between FPMA and FPMB of 1.002. The structured residuals, such as those around 20 -- 25\,keV, are related to the calibration process and are small ($\sim$2\%), typically broad, and at energies not to be mistaken for emission lines.
It is important to note that in the subsequent analysis, we are focusing on changes relative to the above measured spectrum. The data quality is high, and we can measure slight changes within the spectrum with great accuracy. This is not to be mistaken for our knowledge of the \textit{absolute} value of the spectral index. Individual fits to the 39 Crab observations yield an average measured photon-index of $\Gamma = 2.1$ with a $1\sigma$ spread of $\pm 0.01$. However, this error on our absolute measurement is not relevant to our analysis since we are only concerned with relative changes.
To illustrate the range of spectral change in the Crab's integrated spectrum as a function of radius, we extract all events within circles of radius: 12$^{\prime\prime}$, 25$^{\prime\prime}$, 50$^{\prime\prime}$, 75$^{\prime\prime}$, 150$^{\prime\prime}$\ and 200$^{\prime\prime}$\ centered on the Crab pulsar and subtract the simulated master background. Figure \ref{cumulative} shows the ratio of these spectra to the canonical power-law model $\Gamma=2.1$, where the relative normalizations have been set equal at 3\,keV. As already discussed, this progressive softening of the index is predicted by theory and has been measured at all energies (up to 10\,keV) where it is possible to spatially resolve the Crab.
\begin{figure}
\begin{center}
\includegraphics[width=0.55\textwidth, viewport=100 50 600 700]{profileDTA.pdf}
\end{center}
\caption{Top: The pulse profile (P = 33 ms) of module A is shown as a solid line, the live-time curve as a dashed line, and the boundaries of the extracted phase bins as vertical dotted lines. The minimum of the live-time curve occurs just after the peak, and the oscillatory pattern is due to the dead time of 2.5 ms. Middle: Live-time corrected pulse profile. Bottom: Power-law indices averaged between the 17$^{\prime\prime}$\ and 50$^{\prime\prime}$\ regions, shown as diamonds for $\Gamma_1$ and crosses for $\Gamma_2$. The histogram shows the relative normalization of the pulsed emission to the nebular emission ($N_\mathrm{pulse}/N_\mathrm{nebula}$) within each bin.}
\label{pulseprofile}
\end{figure}
\subsection{Crab flaring}\label{flaring}
In addition to the long time scale flux variations observed in the X-ray band, the Crab is known to flare in GeV $\gamma$-rays. Observations with \textit{Agile} \citep{Tavani2011} and \textit{Fermi} \citep{Abdo2011} in the 0.1 -- 1 GeV range have observed flares roughly once a year, with typical durations of $\sim10$\,days and a flux increase of about a factor of 5. The radiation is thought to be of synchrotron origin due to the rapidity of the flares and the fast electron cooling time, as opposed to Bremsstrahlung or inverse-Compton emission, which have cooling timescales of the order of $10^6-10^7$\,years \citep{Abdo2011}. The emission region is further thought to be Doppler boosted towards the observer \citep{Buehler2012}, and causality arguments suggest the flares originate from a very small region, most likely inside the termination shock zone. However, the spatial resolution of current $\gamma$-ray instruments is not sufficient to resolve the inner parts of the nebula. \citet{Weisskopf2013} analyzed \textit{Chandra}, Keck and VLA data during the 2011 April flare to look for such a counterpart, but none of these instruments found conclusive evidence of a change in the Crab emission.
On the 9th of March 2013 \textit{NuSTAR} triggered an observation during one such flaring episode \citep{Mayer2013}. This observation is labeled `CrabToO' in Table \ref{obsid} and had a duration of $\sim$16\,ks and an effective exposure of $\sim$4\,ks after taking dead time, SAA passages, and occultation into account. {\em NuSTAR} caught the Crab at the tail end of the flaring and did not detect any spectral variation or flux change beyond the level that can be expected from calibration ($\pm 5$\% in flux and $0.01$ in spectral index).
\subsection{Phase Resolved Spectroscopy}\label{phaseresolved}
\subsubsection{Pulsar Spectrum}
The Crab pulsar has a 33~ms period and a pulse profile that is double peaked in all energy bands. In the X-rays the primary peak is higher than the secondary, and the off-pulse period spans about 30\% of the phase (see Figure \ref{pulseprofile}, middle panel). The Crab ephemeris is routinely calculated by the Jodrell Bank Observatory\footnote{http://www.jb.man.ac.uk/pulsar/crab.html} and we used the closest ephemeris entry for each observation to obtain the pulse profile.
For the phase resolved spectroscopy, all observations listed in Table \ref{obsid} were combined for FPMA, and for FPMB ``Crab34'' was excluded due to proximity of the pulsar to the chip gap. The Ancillary Response Files (ARFs) of the different observations were combined, while the same detector response matrix (RMF) could be used since the pulsar was on detector 0 for all observations. We barycenter corrected event times using the {\tt barycorr} routine that is part of the HEASOFT {\tt FTOOL} library, and corrected the \textit{NuSTAR} clock for thermal drifts.
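The epoch-folding step can be sketched as follows. This is an illustrative stand-in, not the HEASOFT tools actually used; the function name and the ephemeris parameters (t0, f0, f1) are hypothetical placeholders.

```python
import numpy as np

def fold_events(t, t0, f0, f1=0.0, nbins=13):
    """Fold barycenter-corrected event times (s) into pulse-phase bins.

    t0 : epoch of phase zero (s); f0 : spin frequency at t0 (Hz);
    f1 : frequency derivative (Hz/s).  Hypothetical helper, not the
    HEASOFT routine actually used for this analysis.
    """
    dt = t - t0
    phase = (f0 * dt + 0.5 * f1 * dt ** 2) % 1.0  # Taylor-expanded phase model
    counts, edges = np.histogram(phase, bins=nbins, range=(0.0, 1.0))
    return counts, edges

# toy example: fold events of a 33 ms pulsar into the 13 phase bins used here
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 20000))
counts, edges = fold_events(t, t0=0.0, f0=1.0 / 0.033)
```

In the real analysis the ephemeris entries come from the Jodrell Bank monthly tables and the event times from {\tt barycorr}-corrected, clock-drift-corrected event lists.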
The resulting raw (not live-time corrected) pulse profile is shown in the top panel of Figure~\ref{pulseprofile} (solid line). The pulse profile exhibits a distinct oscillatory pattern due to dead time effects associated with the 2.5~msec event readout \citep{Harrison2013}. The decreased probability of detecting an event just after pulse peak results in a pattern of damped oscillations with a period of $\sim$3.4~msec and causes the live-time fraction to vary throughout the pulse period as shown by the dashed curve. To correct for this effect we use the ``PRIOR" column in the event list, which specifies the elapsed time since the prior event. In the absence of events vetoed by the active anti-coincidence shield, this column would accurately reflect the true elapsed live-time, but the standard operating mode does not downlink vetoed events, so that in general adding the PRIOR column would not yield the proper live-time. However, for source count-rates significantly higher than the veto rate the error becomes negligible, and for the Crab we determine it is only $\sim$0.3\%, by comparing the summed PRIOR column to the live-time reported by the instrument once per second.
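The PRIOR-based live-time correction amounts to summing elapsed times per phase bin. The sketch below assumes, as argued above, that vetoed events are negligible; the function name and the 13-bin layout are illustrative, not the actual NuSTARDAS implementation.

```python
import numpy as np

def livetime_corrected_profile(phase, prior, nbins=13):
    """Live-time corrected pulse profile.

    phase : pulse phase of each event in [0, 1)
    prior : PRIOR column, elapsed time (s) since the previous event.
    Summing PRIOR over the events in a phase bin approximates the
    live-time accumulated in that bin, valid when vetoed events are
    rare relative to source counts (here a ~0.3% error).
    """
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    counts = np.bincount(bins, minlength=nbins)
    live = np.bincount(bins, weights=prior, minlength=nbins)
    return counts / live  # corrected count rate per phase bin

# toy example: uniform phases with a constant 1 ms PRIOR per event
rng = np.random.default_rng(2)
rate = livetime_corrected_profile(rng.uniform(0, 1, 10000),
                                  np.full(10000, 1e-3))
```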
We used the live-time curve to adjust the net exposure time of each phase, and for each phase bin we extracted events for both 17$^{\prime\prime}$\ and 50$^{\prime\prime}$\ circular regions centered on the pulsar. The reason for picking two extraction regions is to test the stability of the fitting procedure. The nebular component will change as a function of the extraction region size, while the pulsar component should not, providing a cross check on how well the two components are distinguished. To avoid the statistical pitfalls of subtracting a large component, we decided to fit the two components together, presuming that each phase bin can be decomposed into the un-pulsed nebula, constant throughout all phases, and the pulsar.
We first investigate what spectral models to use by fitting the three phase bins 10 -- 12 corresponding to the pulse-off interval. We found that a power-law yields an inadequate fit ($\chi^2_\mathrm{red}$ = 1.92 for 1076 degrees of freedom (dof)) and that a broken power-law provides a much better fit ($\chi^2_\mathrm{red}$ = 1.05 for 1074 dof). We also tried out a \texttt{logpar} model and an exponential \texttt{cutoff} model, but both failed to fit the high energy tail of the spectrum and under-predicted the flux. Although we recognize that it is difficult to infer physical meaning from a broken power-law, we chose to continue with this representation because it gives a simple and intuitive understanding of the shape of the spectrum.
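The broken power-law form referred to here, XSPEC's bknpower, is simple enough to write down explicitly. The sketch below shows only the model shape, with illustrative parameter values close to the pulse-off nebula fit; it is not a reimplementation of the XSPEC fitting machinery.

```python
import numpy as np

def bknpower(E, norm, g1, g2, Eb):
    """Broken power-law photon spectrum: norm * E**-g1 below the break
    energy Eb (keV), steepening to index g2 above it, with the
    normalization of the upper branch fixed by continuity at Eb."""
    E = np.asarray(E, dtype=float)
    return np.where(E < Eb,
                    norm * E ** (-g1),
                    norm * Eb ** (g2 - g1) * E ** (-g2))

# illustrative parameters close to the 17'' pulse-off nebula fit
E = np.geomspace(3.0, 78.0, 200)   # NuSTAR band, keV
f = bknpower(E, norm=1.0, g1=1.92, g2=2.00, Eb=8.3)
```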
We use the best fit broken power-law to investigate the shape of the pulsar component during the peaks in phase bins 2 and 8. Freezing the broken power-law model to what we derived for phase bins 10 -- 12, we fit the pulsar component with a power-law and a broken power-law respectively. Once again the broken power-law clearly provides a better fit; for phase bin 2 the power-law fit has $\chi^2_\mathrm{red}$=1.19 for 806 dof and broken power-law $\chi^2_\mathrm{red}$=1.00 for 809 dof. On this basis we chose to model with two independent broken power-laws. We fit a total of 26 spectra (13 phase bins for FPMA and FPMB) in XSPEC between 3 -- 78 keV.
We observe degeneracies between the models when we do not constrain the normalization of the pulsar component during phase bins 10 -- 12. We resolve this by limiting the pulsar contribution during these phases to a small fraction of its level during the pulse-on phases. According to \citet{Weisskopf2011}, the ratio of the 0.3 -- 3.8\,keV flux between the primary pulse peak and pulse-off is about a factor of 100, and we applied this restriction to the relative pulsar normalization in the model for these phase bins. We coupled the break energy, BE, between all phase bins for the pulsar and nebula respectively. Table \ref{phaseresolvedtable} shows the best-fit parameters for the pulsar spectrum for both extraction regions in the top half for all phase bins except 10 -- 12. The parameters for the two extraction regions agree for most bins within 1$\sigma$ and all of them within 2$\sigma$. The bottom half lists the best-fit parameters for the nebula in the two extraction regions, and as anticipated they differ, softening with increasing radius while maintaining the same break energy. The bottom panel of Figure~\ref{pulseprofile} shows the averaged power-law index for each phase bin of the two extraction regions for $\Gamma_1$ and $\Gamma_2$ respectively. We have omitted the parameters for phase bins 10 -- 12 since the pulsar component could not be properly constrained during these periods. The average $\Delta\Gamma = \Gamma_2 - \Gamma_1$ across the phase bins, excluding bins 10 -- 12, is $0.27 \pm 0.09$.
\subsubsection{Spatially Integrated Phase Resolved Spectrum}
We measured the phase-averaged spectrum from a 200$^{\prime\prime}$\ region in Section \ref{phaseaveraged} and found $\Gamma = 2.0963 \pm 0.0004$. At smaller radii we find that the pulsar and the nebula can both be characterized independently by broken power-laws. It may seem strange that the superposition of apparent broken and un-broken power-laws should sum up to a simple power-law. To show how the Crab achieves this, we investigate here the phase-resolved total spectrum (nebula+pulsar) using the same phase bins as defined in the previous section. We use a broken power-law where required by the data and a power-law otherwise; the fit results are listed in Table \ref{80pixfits}.
Figure \ref{pulseprofile80pix} shows the ratio of the data to the phase-averaged power-law fit. The curves have been scaled for clarity to show how the Crab roughly decomposes into four components: 1st pulse, bridge emission, 2nd pulse, and pulse-off (nebula). Although both pulse peaks have spectra that steepen ($\Gamma_2 \sim 2.047 - 2.099$), the spectra remain harder than the phase-averaged value. The bridge emission is best approximated by a power-law with an index close to the phase-averaged value ($\Gamma = 2.086 \pm 0.002$), while the off-pulse nebula emission is significantly softer and mildly broken ($\Gamma_1 = 2.123 \pm 0.003$, $\Gamma_2 = 2.134^{+0.01}_{-0.004}$). The sum of nebula and pulsed emission thus approximately conspires to mimic the phase-averaged power-law, explaining why as a whole we observe a power-law spectrum.
Finally, in Figure \ref{psr_p_pwn} we show the spectral energy distribution for (1) the phase averaged pulsar + nebula, (2) the nebula (phase bins 10 -- 12), (3) pulsar + nebula (phase bins 1 -- 9, 13), and (4) the pulsar (phase bins 1 -- 9, 13) with nebula subtracted. The fit parameters for (1) are $\Gamma = 2.0963 \pm 0.0004$, (2) $\Gamma = 2.0865 \pm 0.0004$, (3) nebula alone as recorded in Table \ref{80pixfits}, and (4) for the pulsar alone $\Gamma_1 = 1.72 \pm 0.02$, $\Gamma_2 = 1.72 \pm 0.02$ and BE = $10.5 \pm 1.6$.
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{crab80pixphase.pdf}
\end{center}
\caption{Ratio of the spectra of the 13 phase bins, extracted from a 200$^{\prime\prime}$\ region centered on the pulsar, to the best-fit model with $\Gamma = 2.1$.}
\label{pulseprofile80pix}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.47\textwidth]{psr_p_pwn.pdf}
\end{center}
\caption{Spectral energy distribution of the pulsar + nebula over all phase bins, nebula (phase bins 10 -- 12), pulsar + nebula (phase bins 1 -- 9, 13), and pulsar (phase bins 1 -- 9, 13) alone with nebula subtracted.}
\label{psr_p_pwn}
\end{figure}
\begin{table*}
\caption{Phase resolved fits to extraction regions 17$^{\prime\prime}$\ and 50$^{\prime\prime}$}
\centering
\begin{tabular}{l|ccc|ccc}
\hline
\multicolumn{1}{l}{} & \multicolumn{3}{c}{\texttt{Tbabs(bkn+bkn)}} &
\multicolumn{3}{c}{\texttt{Tbabs(bkn+bkn)}} \\
\multicolumn{1}{l}{} & \multicolumn{3}{c}{17$^{\prime\prime}$: $\chi^2_\mathrm{red}$=1.005 (for 15884 degrees of freedom)} &
\multicolumn{3}{c}{50$^{\prime\prime}$: $\chi^2_\mathrm{red}$=1.018 (for 23752 degrees of freedom)} \\
\multicolumn{1}{l}{} & \multicolumn{3}{c}{Pulsar} & \multicolumn{3}{c}{Pulsar} \\
\multicolumn{1}{l}{} & \multicolumn{3}{c}{\texttt{bknpower}} & \multicolumn{3}{c}{\texttt{bknpower}} \\
\hline
Phase & $\Gamma_1$ & $\Gamma_2$ & BE\footnote{Break energy (BE) component coupled for all phase bins.} (keV) & $\Gamma_1$ & $\Gamma_2$ & BE$^\mathrm{a}$ (keV) \\
\hline
1 (0-0.07) &1.64$\pm$0.04 & 2.01$\pm$0.02 & 11.7$\pm$0.6 & 1.66$\pm$0.04 & 2.01$\pm$0.02 & 13.1$\pm$0.4 \\
2 (0.07-0.14) &1.80$\pm$0.01 & 2.02$\pm$0.02 &. & 1.83$\pm$0.01 & 2.05$\pm$0.02 &. \\
3 (0.14-0.21) &1.52$\pm$0.04 & 1.76$\pm$0.05 &. & 1.55$\pm$0.04 & 1.77$\pm$0.05 &. \\
4 (0.21-0.28) &1.39$\pm$0.06 & 1.62$\pm$0.07 &. & 1.28$\pm$0.06 & 1.64$\pm$0.07 &. \\
5 (0.28-0.35) &1.30$\pm$0.05 & 1.70$\pm$0.06 &. & 1.29$\pm$0.05 & 1.74$\pm$0.06 &. \\
6 (0.35-0.42) &1.42$\pm$0.03 & 1.75$\pm$0.04 &. & 1.42$\pm$0.03 & 1.76$\pm$0.04 &. \\
7 (0.42-0.49) &1.57$\pm$0.02 & 1.93$\pm$0.03 &. & 1.61$\pm$0.02 & 1.85$\pm$0.03 &. \\
8 (0.49-0.56) &1.66$\pm$0.02 & 1.92$\pm$0.03 &. & 1.71$\pm$0.02 & 1.91$\pm$0.03 &. \\
9 (0.56-0.63) &1.66$\pm$0.08 & 1.97$\pm$0.12 &. & 1.83$\pm$0.08 & 1.95$\pm$0.12 &. \\
13 (0.9-1.0) & 1.76$^{+0.2}_{-0.08}$ & 1.81$\pm$0.10 & . & 1.96$\pm$0.08 & 2.11$\pm$0.10 &. \\
\hline
\multicolumn{7}{c}{}\\
\hline
\multicolumn{1}{l}{} & \multicolumn{3}{c}{Nebula} & \multicolumn{3}{c}{Nebula} \\
\multicolumn{1}{l}{} & \multicolumn{3}{c}{\texttt{bknpower}} & \multicolumn{3}{c}{\texttt{bknpower}} \\
\hline
Phase & $\Gamma_1$ & $\Gamma_2$ & BE (keV) & $\Gamma_1$ & $\Gamma_2$ & BE (keV) \\
\hline
10-12\footnote{Nebula component only.} (0.63-0.9) & 1.92$\pm$0.01 & 2.00$\pm$0.01 & 8.3$\pm$0.5 & 1.99$\pm$0.01 & 2.09$\pm$0.01 & 8.3$\pm$0.2\\
\hline
\end{tabular}
\label{phaseresolvedtable}
\end{table*}
\begin{table}
\caption{Phase resolved Pulsar+Nebula fits to 200$^{\prime\prime}$\ extraction region.}
\centering
\begin{tabular}{l|lll}
\hline
Phase & $\Gamma_1$ & $\Gamma_2^\mathrm{a}$ & BE\footnote{If $\Gamma_2$ and the break energy (BE) are not given, the best fit was a power-law.} (keV)\\
\hline
1 (0-0.07) & 2.090 $\pm 0.003$ & 2.110 $\pm 0.006$ & 11 $\pm 3$ \\
2 (0.07-0.14) & 2.041 $\pm 0.003$ & 2.099$^{+0.006}_{-0.01}$ & 12.3 $\pm 1.2$ \\
3 (0.14-0.21) & 2.081 $\pm 0.002$ & - & - \\
4 (0.21-0.28) & 2.090 $\pm 0.002$ & - & - \\
5 (0.28-0.35) & 2.086 $\pm 0.002$ & - & - \\
6 (0.35-0.42) & 2.064 $\pm 0.002$ & - & - \\
7 (0.42-0.49) & 2.030 $\pm 0.002$ & 2.047$^{+0.006}_{-0.01}$ & 13.7 $\pm 3$ \\
8 (0.49-0.56) & 2.034 $\pm 0.003$ & 2.056 $\pm 0.005$ & 10 $\pm 1.5$ \\
9 (0.56-0.63) & 2.120 $\pm 0.002$ & - & - \\
10 (0.63-0.7) & 2.120 $\pm 0.004$ & 2.138$^{+0.004}_{-0.01}$ & 10 $\pm 4$ \\
11 (0.7-0.8) & 2.123 $\pm 0.003$ & 2.134$^{+0.004}_{-0.01}$ & 10 $\pm 4$ \\
12 (0.8-0.9) & 2.126 $\pm 0.002$ & - & - \\
13 (0.9-1.0) & 2.124 $\pm 0.002$ & - & - \\
\hline
\end{tabular}
\label{80pixfits}
\end{table}
\subsection{Spatially Resolved Spectroscopy of the Nebula}\label{spatialspectral}
\begin{figure}
\includegraphics[width=0.47\textwidth]{crab22_22A.pdf}
\caption{Spectrum for off-pulse bins 10 -- 12 extracted from map location ($\Delta$Ra,$\Delta$Dec)=(0,0) of FPMA, which coincides with the pulsar location. The black and red data sets are the same, but the models differ. The red curve shows the best-fit broken power-law, with $\Gamma_1=1.91 \pm 0.01$, $\Gamma_2=2.03\pm0.01$, and E$_{break}=8.7\pm0.9$\,keV. The black curve is a power-law with index $\Gamma = \Gamma_2$ and the normalization scaled such that the high-energy parts match.}
\label{bknfit}
\end{figure}
\begin{figure*}
\includegraphics[width=9cm, viewport=0 200 600 725]{contourMapsB_5.pdf}
\includegraphics[width=9cm, viewport=0 200 600 725]{contourMapsB_14.pdf}
\includegraphics[width=9cm, viewport=0 200 600 725]{contourMapsB_8.pdf}
\includegraphics[width=9cm, viewport=0 200 600 725]{contourMapsB_17.pdf}
\caption{Maps of the break energy and of $\Delta\Gamma = \Gamma_2-\Gamma_1$ for FPMB during pulse off. Left panels: contours are the \textit{NuSTAR} intensity levels, and the cross marks the pulsar location. Right panels: contours are from \textit{Chandra}.}
\label{contour1}
\end{figure*}
\begin{figure*}
\includegraphics[width=9cm, viewport=0 200 600 725]{contourMapsB_9.pdf}
\includegraphics[width=9cm, viewport=0 200 600 725]{contourMapsB_18.pdf}
\includegraphics[width=9cm, viewport=0 200 600 725]{contourMapsB_10.pdf}
\includegraphics[width=9cm, viewport=0 200 600 725]{contourMapsB_19.pdf}
\caption{Maps of the power-law indices $\Gamma_{<6keV}$ and $\Gamma_2$ for FPMB during pulse off. Left panels: contours are the \textit{NuSTAR} intensity levels, and the cross marks the pulsar location. Right panels: contours are from \textit{Chandra}.}
\label{contour2}
\end{figure*}
\begin{figure}
\includegraphics[width=9cm, viewport=0 200 600 725]{CrabModel3D1.pdf}
\includegraphics[width=9cm, viewport=0 200 600 725]{CrabModel3D3.pdf}
\caption{Top: Input power-law index map for the simulations. For more details see description in text. Bottom: \textit{Chandra} intensity map used to set the 2 -- 10\,keV flux level for the simulations.}
\label{contour3}
\end{figure}
\begin{figure*}
\includegraphics[width=9cm, viewport=0 200 600 725]{SimcontourMapsB14.pdf}
\includegraphics[width=9cm, viewport=0 200 600 725]{SimcontourMapsB18.pdf}
\includegraphics[width=9cm, viewport=0 200 600 725]{SimcontourMapsB17.pdf}
\includegraphics[width=9cm, viewport=0 200 600 725]{SimcontourMapsB19.pdf}
\caption{Simulated model. Left panels: maps of the break energy and of $\Delta\Gamma = \Gamma_2-\Gamma_1$. Right panels: power-law maps of $\Gamma_{<6keV}$ and $\Gamma_2$. Contours are from \textit{Chandra}.}
\label{contour4}
\end{figure*}
From the phase-averaged spectra shown in Figure \ref{cumulative}, it is evident that the Crab spectral index changes quite dramatically across the face of the remnant. This is in part due to the harder pulsar spectrum mixing with the softer nebula. When restricting to phase bins 10 -- 12 only, the change is less dramatic, going from $\Gamma_1/\Gamma_2 = 1.92/2.00$ at 17$^{\prime\prime}$\ and 1.99/2.09 at 50$^{\prime\prime}$\ (Table \ref{phaseresolvedtable}) to 2.12/2.14 at 200$^{\prime\prime}$\ (Table \ref{80pixfits}). Softening of the spectra with increasing radius is predicted by theory and has been observed in G21.5-0.9 \citep{Nynka2014} and 3C 58 \citep{Slane2004}, as well as in the Crab at lower energies by \textit{Chandra} \citep{Weisskopf2000} and \textit{XMM-Newton} \citep{Kirsch2006}. Due to mixing of spectra at different radial locations by the \textit{NuSTAR} PSF, it is not straightforward to measure the true (unmixed) index as a function of radius, and a more careful analysis is required to confirm and quantify the effect.
The theoretical predictions apply to the nebula, and we chose to analyze the spatial variation during phase bins 10 -- 12, when the pulsar is off. We slide a 24.5$^{\prime\prime}$$\times$24.5$^{\prime\prime}$\ box across the inner 100$^{\prime\prime}$\ to obtain high S/N spectra, incrementing the box along RA and Dec in steps of 2.45$^{\prime\prime}$, the size of a {\em NuSTAR} projected sky pixel. At each step we calculate the responses for the box center and fit a broken power-law spectrum, scaling the background from the simulated master background. The fitting was performed with XSPEC using Cash statistics.
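The sliding-box extraction can be sketched with a summed-area table, which makes each window sum O(1). This is a simplified illustration on a counts image with hypothetical pixel sizes (10 px of 2.45 arcsec each giving the 24.5 arcsec box); the response generation and spectral fitting performed per box in the actual analysis are omitted.

```python
import numpy as np

def sliding_box_map(image, box=10, step=1):
    """Total counts in a box x box pixel window slid across a counts
    image in increments of `step` pixels.  With 2.45 arcsec sky pixels,
    box=10 corresponds to the 24.5 arcsec window used in the text and
    step=1 to the 2.45 arcsec increment."""
    ny, nx = image.shape
    # cumulative-sum (summed-area) table: each window sum costs O(1)
    c = np.zeros((ny + 1, nx + 1))
    c[1:, 1:] = np.cumsum(np.cumsum(image, axis=0), axis=1)
    ys = np.arange(0, ny - box + 1, step)
    xs = np.arange(0, nx - box + 1, step)
    out = np.empty((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            out[i, j] = (c[y + box, x + box] - c[y, x + box]
                         - c[y + box, x] + c[y, x])
    return out
```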
In general a broken power-law adequately describes the data. However, in some cases the fit found either $\Gamma_1=\Gamma_2$ or a break energy less than 5\,keV; in these cases we reverted to a single power-law model. At the edges where the torus transitions into the extended nebula, the spectra become complex and can no longer be represented by a broken power-law either; some show negative breaks, and for these edge regions we therefore characterized the spectrum with a simple single power-law, setting $\Gamma = \Gamma_2$.
In the following we switch to a coordinate system relative to the pulsar location ($\Delta$Ra,$\Delta$Dec)=(0,0). To illustrate the data quality and the level of deviation from a power-law in the center of the remnant, Figure \ref{bknfit} shows the ratio of the spectrum extracted from FPMA for one grid point (24.5$^{\prime\prime}$$\times$ 24.5$^{\prime\prime}$) with the center at ($\Delta$Ra,$\Delta$Dec)=(0,0) to the model. The red curve shows the best fit broken power-law, for which we find $\Gamma_1=1.91 \pm 0.01$, $\Gamma_2=2.03\pm0.01$, and E$_{break}=8.7\pm0.9$\,keV. The black curve shows a single power-law where we have fixed the index $\Gamma = \Gamma_2 = 2.03\pm0.01$ and normalization such that the spectra match at E$ > 10$~keV.
Simply by eye it is clear that these spectra strongly deviate from a power-law, softening above 10\,keV. We emphasize that the effect of PSF mixing is to obscure the true underlying spectra; it cannot create the appearance of a broken power-law out of a superposition of pure power-laws.
We now present the spatially resolved maps. Figures \ref{contour1} and \ref{contour2} show maps of (1) the break energy, (2) $\Delta\Gamma = \Gamma_2 - \Gamma_1$, (3) $\Gamma_{<6keV}$ (the photon index at energies $<6$\,keV), and (4) $\Gamma_2$. In general the errors in individual boxes are $(\Gamma_1, \Gamma_2) \sim \pm 0.015$, and the error in the break energy (where there is one) is $\pm$1\,keV. The left panels show the maps overlaid with \textit{NuSTAR} intensity contours and the right panels with \textit{Chandra} contours to show details of the spatial structures. The pulsar is represented by a cross and located in all images at $(\Delta$Ra,$\Delta$Dec) = (0,0). Although we show the map for FPMA for ease of presentation, the maps from FPMB are identical within errors.
Figure \ref{contour1} shows that inside the remnant the break energy is roughly constant, with an average value of $\sim$9\,keV, decreasing towards the NW and increasing at the SE edge. The $\Delta\Gamma$ map shows that the highest values of $\Delta\Gamma$ follow the curvature of the forward edge of the torus. It is interesting that the largest value is not found at the location of the highest intensity, but rather slightly more north, on the edge of the \textit{Chandra} intensity contour. Comparing to the break energy map, the break energy is on average lower where $\Delta\Gamma$ is high.
Figure \ref{contour2} shows the low-energy spectral index, $\Gamma_{<6keV}$. The 6~keV energy was chosen because it is always below the break energy. The greatest change in morphology in this map occurs along the forward edge of the torus, where the spectrum softens rapidly. Along a line from the pulsar towards the NW corner, the radial spectrum softens faster above the break than below. This holds at the majority of azimuthal angles around the torus, but not in the jet direction. Neither the jet region nor the counter-jet region has a measurable spectral break. The SE corner appears to have a negative $\Delta\Gamma$, but this is likely the effect of the PSF scattering the harder torus high-energy spectrum into the softer nebula.
There is no easy way to spatially disentangle spectral components that have been mixed by the PSF (the Appendix provides technical details of the \textit{NuSTAR} PSF). To interpret the spectral map, we therefore compose a model of the nebula, forward fold it through the \textit{NuSTAR} response, and compare it to the data. We create a 2-D spectral model based on the analysis of \citet{Mori2004} of the Crab with \textit{Chandra}. That data set shows spectral variations on arcsecond scales, but we follow the authors' observation that the remnant can be represented by six components: (1) a halo (shown in dark red in Figure \ref{contour3}) with $\Gamma = 3.0$, (2) a cap (yellow) $\Gamma=2.5$, (3) a skirt (light blue) $\Gamma = 2.1$, (4) a torus (dark blue) $\Gamma = 1.9$, (5) a jet (medium blue) $\Gamma = 2.0$, and (6) a center (black) $\Gamma = 1.6$. We assume that this spectral model holds up to our measured break. Based on the \textit{NuSTAR} analysis, we further assume that the torus has an average spectral break at 9\,keV with $\Delta\Gamma=0.25$, and that the spectrum of the center breaks at the same energy, but with $\Delta\Gamma=0.1$. We arrived at this smaller $\Delta\Gamma=0.1$ through a series of iterations, since this component is maximally obscured and difficult to determine from the \textit{NuSTAR} data. The center and the torus are the only two components that we allow to have a break; all other regions are presumed to have power-law spectra. We used a \textit{Chandra} image of the Crab, shown in Figure \ref{contour3}, to set the normalization of the map in the 2 -- 10\,keV band. The spectra are completely defined by this set of parameters.
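The six-component parameterization just described can be captured compactly. The sketch below encodes only the component indices and the assumed 9 keV breaks from the text; the region masks, the Chandra-based flux normalization, and the forward folding through the NuSTAR response are omitted, and all names are illustrative.

```python
import numpy as np

# Six-component spectral model from the text: only the torus and the
# center are allowed to break (at 9 keV); all others are pure power-laws.
COMPONENTS = {            # name: (Gamma_1, delta_Gamma above the break)
    "halo":   (3.0, 0.0),
    "cap":    (2.5, 0.0),
    "skirt":  (2.1, 0.0),
    "torus":  (1.9, 0.25),
    "jet":    (2.0, 0.0),
    "center": (1.6, 0.1),
}

def model_spectrum(component, E, norm=1.0, Eb=9.0):
    """Photon spectrum of one component: power-law below the break
    energy Eb, steepened by delta_Gamma above it, continuous at Eb."""
    g1, dg = COMPONENTS[component]
    g2 = g1 + dg
    E = np.asarray(E, dtype=float)
    return np.where(E < Eb,
                    norm * E ** (-g1),
                    norm * Eb ** (g2 - g1) * E ** (-g2))
```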
We propagated this model through the \textit{NuSTAR} PSF and responses and analyzed the output in exactly the same manner as the real data. Figure \ref{contour4} shows the resulting maps. Macroscopically they reproduce the {\em NuSTAR} observations very well. There are some discrepancies, but these can be explained by the simplicity of our model, which does not include the low-energy variations observed by \citet{Mori2004} or the break energy variations seen by \textit{NuSTAR}. However, these simulations still address some important questions. It is clear that the torus spectrum has to steepen with energy, and that one has to use an approximately constant spectral index in both $\Gamma_1$ and $\Gamma_2$ all the way out to the edge, where there is a rapid transition to a softer, unbroken spectrum. We tested this hypothesis by creating a model with a linearly changing spectral index as a function of radius, as has been observed for G21.5-0.9 \citep{Safi2001} and 3C\,58 \citep{Slane2004}. The resulting maps do not match the observations and create spectral slopes that are far too soft in the interior. The spectrum of the core must be hard in order to reproduce the \textit{NuSTAR} results. \citet{Mori2004} did not find a hard center of the remnant below 10\,keV, but their observations suffered pile-up in the center, and they could not measure the spectrum in the inner $\sim$10$^{\prime\prime}$. The origin of the hard component seen by {\em NuSTAR} could be the neutron star itself, which according to \citet{Weisskopf2011} has a spectral index of $\Gamma\sim1.9\pm0.4$. Because the simulations are driven by the \textit{Chandra} flux map, the location of the pulsar has no counts because it was excised, which may explain why instead of a point source our simulations required an extended, hard core, distributing the missing flux over a large region. Finally, neither the cap nor the jet appears to steepen with energy.
It is pertinent to question whether dust scattering could have anything to do with the slope below 10\,keV. Unless the N$_H$ column inside the remnant is larger than the galactic column by a factor of 10, the interstellar dust extinction has little effect above 3\,keV \citep{Seward2006}. Indeed the fact that the break energy and $\Delta\Gamma$ trace out features of the remnant strongly indicates that it must be intrinsic to the source.
\subsection{Spatial Extent of the Nebula}\label{size}
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{deconvolved.pdf}
\end{center}
\caption{Maximum likelihood deconvolved images of FPMA in six different energy bands shown in a square root stretch during pulse off. The shrinking of the NW counter-jet is easily observable and the morphological changes of the NE torus can also be seen.}
\label{deconvolved}
\end{figure*}
The size of the Crab remnant shrinks as a function of energy because the radiative lifetimes of outward-propagating electrons are shorter for high-energy than for low-energy particles. This effect is often referred to as `synchrotron burn-off'. To investigate the radial extent of the Crab as a function of energy we deconvolved the {\em NuSTAR} maps using a maximum likelihood method. The deconvolution procedure is sensitive to artifacts, such as detector gaps and the variation of the signal-to-noise ratio with position in the map; the stronger the source is relative to the background, the better the deconvolution performs. The PSF is relatively constant near the optical axis (the difference in the HPD between off-axis angles of 1$^{\prime}$\ and 2$^{\prime}$\ is less than 1$^{\prime\prime}$), but becomes azimuthally distorted at large off-axis angles. This, however, does not become noticeable until about 3$^{\prime}$\ off-axis, where the difference between the major and minor axes of the PSF is $\sim$2\%. To minimize these effects we selected a subset of the observations in Table \ref{obsid}, marked with ``b'', at off-axis angles less than 2$^{\prime}$\ and well away from the detector gaps. At these off-axis angles we can construct a time-weighted average PSF and combine the images to yield a more robust result than individually deconvolving short segments.
To remove the contamination of the pulsar we only use photons falling in phase bins 10 -- 12 and deconvolve the Crab in the following energy bands: 3 -- 5, 5 -- 6, 6 -- 8, 8 -- 12, 12 -- 20, 20 -- 35, and 35 -- 78\,keV. Prior to combining the images, we vignetting-correct them using the effective area evaluated at the area-weighted average energy of each band.
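The area-weighted average energy used for the vignetting correction can be computed as an effective-area- and spectrum-weighted mean over each band. A minimal sketch, in which the effective-area curve and the $\Gamma=2.1$ Crab-like index are illustrative placeholders rather than the real \textit{NuSTAR} ARF:

```python
import numpy as np

def weighted_mean_energy(e_lo, e_hi, area_fn, gamma=2.1, n=500):
    """Mean energy of a band, weighted by effective area times a
    power-law photon spectrum (Gamma ~ 2.1 is Crab-like; the area
    curve is a made-up placeholder, not the real NuSTAR ARF)."""
    E = np.linspace(e_lo, e_hi, n)
    w = area_fn(E) * E**(-gamma)
    return float((E * w).sum() / w.sum())

# Toy effective area: flat 900 cm^2 below 10 keV, declining above.
toy_area = lambda E: 900.0 * np.minimum(1.0, (E / 10.0)**-0.8)

for band in [(3, 5), (8, 12), (35, 78)]:
    print(band, round(weighted_mean_energy(*band, toy_area), 2))
```

Because the weighting spectrum falls steeply, the weighted mean sits below the band midpoint, which is why the correction energy is not simply the band center.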
Deconvolution with a maximum likelihood method is iterative and, if not performed with care, can introduce artifacts due to over-deconvolution. During the deconvolution process we checked the relative size between the selected iteration steps (20, 30, 40 and 50) within each energy band, and saw that the size difference remained largely unchanged after 30 iterations. For the highest energy band, however, we started to observe artifacts after 50 iterations, and we therefore estimate that 40 iterations is a safe number to use for all energy bands.
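The iterative maximum likelihood scheme referred to above is, in its most common form, the Richardson--Lucy algorithm. The sketch below shows the bare iteration on a toy blurred point source; the Gaussian PSF and image sizes are made up for illustration, whereas the real analysis used the measured \textit{NuSTAR} PSF.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=40):
    """Plain Richardson-Lucy maximum-likelihood deconvolution.
    n_iter=40 mirrors the iteration count adopted in the text;
    too many iterations amplify noise into artifacts."""
    est = np.full_like(image, image.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode='same')
        ratio = image / np.maximum(conv, 1e-12)
        est *= fftconvolve(ratio, psf_mirror, mode='same')
    return est

# Toy test: blur a point source with a Gaussian "PSF" and recover it.
y, x = np.mgrid[-12:13, -12:13]
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[32, 32] = 100.0
blurred = fftconvolve(truth, psf, mode='same')
sharp = richardson_lucy(blurred, psf, n_iter=40)
```

The iteration multiplicatively re-weights the estimate by the back-projected data/model ratio, which conserves flux while progressively concentrating it, exactly the behavior that makes over-iteration risky on noisy data.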
Because of the nature of the deconvolution process, it is difficult to assign an error to the resulting size. Ideally, a model of the source would be forward folded and compared to the actual image, but for the Crab, where such a model is not well known and the morphology of the source is complex, this approach is not feasible. Forward folding the deconvolved image with the PSF and comparing to the raw image is another option, but it only serves to identify gross errors. It is therefore not possible to assign an error to the absolute size in the deconvolved images. Fortunately, we are interested in a rate of change as a function of energy, and we can investigate the error of the deconvolution as a function of energy by deconvolving several strong \textit{NuSTAR} point sources. These should have a constant radial extent as a function of energy, and any discrepancy is assumed to be the error introduced by the deconvolution. We used the same energy bands as for the Crab images, and found an evolution in the deconvolved point source sizes as a function of energy. The discrepancy between the highest and lowest energy bands is $\sim$1.5$^{\prime\prime}$, and we have conservatively assumed a 2$^{\prime\prime}$\ 1$\sigma$ error in the relative size of the remnant between the energy bands.
Figure \ref{deconvolved} shows the resulting images for FPMA. The major components observed by \textit{Chandra} are also seen by \textit{NuSTAR}, and morphological changes as a function of energy are clearly visible. Figure \ref{sizecontour} plots the contours of the HWHM, defined as the point where the intensity falls to half the value measured at the pulsar location, and shows that the magnitude of the shrinkage is position angle (PA) dependent. Figure \ref{deconvprofile} shows the profiles of two perpendicular slices, extracted by bi-linear interpolation. The intensity map is offset from the pulsar due to the 27$^\circ$ torus inclination and the beaming of the leading edge of the torus, but we normalized the curves at the pulsar location, which places the peak intensity towards the NW for low energies. The offset decreases with increasing energy and is gone by 20\,keV. The NE and SW sides of the torus plane therefore have somewhat different morphologies close to the pulsar, and it can also be seen that the jet and counter-jet directions evolve quite differently: the HWHM of the counter-jet and cap area falls off rapidly with energy, while that of the forward jet falls much more slowly.
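Profiles through an arbitrary position angle can be extracted by bi-linear interpolation with a routine like the following sketch; the toy elliptical image and the pixel coordinates are placeholders for the actual deconvolved maps, and the torus-plane/jet-axis angles would be substituted for the generic angle used here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def slice_profile(image, center, angle_deg, half_len, n=201):
    """Bilinear-interpolated intensity profile through `center`
    along position angle `angle_deg` (coordinates in pixels).
    order=1 in map_coordinates selects bilinear interpolation."""
    t = np.linspace(-half_len, half_len, n)
    a = np.deg2rad(angle_deg)
    rows = center[0] + t * np.sin(a)
    cols = center[1] + t * np.cos(a)
    return t, map_coordinates(image, [rows, cols], order=1)

# Toy elliptical "nebula" to slice through.
y, x = np.mgrid[0:128, 0:128]
img = np.exp(-(((x - 64) / 20.0)**2 + ((y - 64) / 10.0)**2))
t, prof = slice_profile(img, (64, 64), angle_deg=0.0, half_len=50)
```

Normalizing each profile at the pulsar pixel, as done in the text, then makes the energy-dependent shoulders directly comparable.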
We fitted the HWHM of the profiles as a function of energy using a power-law $kE^{-\gamma}$. Figure \ref{fits} shows the fits for FPMA for the two sides of both the jet axis and the torus plane, and Table \ref{tablefits} summarizes the fit values for both FPMA and FPMB. The flux of the jet is much smaller than that of the torus, and what we measure along the SE is most likely not the jet, but rather the edge of the torus. The rates of change in the NE, SE and SW have similar slopes but different intensities, indicating that the shrinkage occurs in the plane of the torus. The projection effect of the tilted plane would in this way explain the smaller magnitude observed along the SE edge. The NW edge of the torus has a line of sight that also includes the NW cap and counter-jet, obscuring the true shrinkage of the torus in this direction. We know this area has the softest spectrum, and its rate of shrinkage, $\gamma$, is twice as large as that of the torus.
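Fits of this $kE^{-\gamma}$ form can be reproduced with a standard least-squares routine. The sketch below fits synthetic HWHM points generated from the NE-torus best-fit values and the adopted 2$^{\prime\prime}$ error; the data points themselves are fabricated for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def hwhm_model(E, k, gamma):
    """HWHM(E) = k * E**-gamma, the form fitted in the text."""
    return k * E**(-gamma)

# Band-center energies (keV) and illustrative HWHM values (arcsec)
# generated from the NE-torus best fit (k ~ 49, gamma ~ 0.094) plus
# small perturbations; these numbers are synthetic, not measured.
E = np.array([4.0, 5.5, 7.0, 10.0, 16.0, 27.5, 56.5])
hwhm = 49.0 * E**(-0.094) + np.array([0.5, -0.8, 0.3, -0.2, 0.9, -0.5, 0.1])
err = np.full_like(E, 2.0)   # the adopted 2" 1-sigma error

popt, pcov = curve_fit(hwhm_model, E, hwhm, p0=(50.0, 0.1), sigma=err)
k_fit, g_fit = popt
```

With the 2$^{\prime\prime}$ errors propagated through `sigma`, the covariance matrix `pcov` gives uncertainties of the same order as those quoted in Table \ref{tablefits}.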
\begin{table}
\centering
\caption{Nebula HWHM}
\begin{tabular}{ccc|cc}
\hline
&\multicolumn{2}{c}{FPMA}&\multicolumn{2}{c}{FPMB} \\
\hline
Axis & $\gamma$ & k & $\gamma$ & k\\
\hline
NE Torus & 0.094$\pm$0.018 & 49(1) & 0.079$\pm$0.017 & 50(1)\\
SW Torus & 0.060$\pm$0.020 & 38(1) & 0.085$\pm$0.020 & 42(1)\\
SE Jet & 0.083$\pm$0.062 & 13(1) & 0.014$\pm$0.056 & 12(1)\\
NW Jet & 0.245$\pm$0.029 & 46(2) & 0.192 $\pm$0.027 & 42(2)\\
\hline
\label{tablefits}
\end{tabular}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth,viewport=0 0 570 570]{contourmap.pdf}
\end{center}
\caption{Image of the HWHM contours for energy bands: 3 -- 5, 5 -- 6, 6 -- 8, 8 -- 12, 12 -- 20, 20 -- 35, and 35 -- 78\,keV.}
\label{sizecontour}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth,viewport=50 200 550 620]{contoursizeA_ML403.pdf}
\includegraphics[width=0.5\textwidth,viewport=50 200 550 620]{contoursizeA_ML402.pdf}
\end{center}
\caption{Energy dependent profiles of module A interpolated along the torus plane and jet axis of the deconvolved images shown in Figure \ref{sizecontour} in the direction of the arrows. Top: Profile along torus plane. The right shoulder corresponds to the SW. Bottom: Profile along jet axis. The right shoulder corresponds to the NW counter-jet and shows a very clear shrinking as a function of energy, while the left shoulder corresponds to the SE jet and shows almost none at all.}
\label{deconvprofile}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.55\textwidth,viewport=100 350 600 650]{contoursizeA_ML407.pdf}
\end{center}
\caption{Half-width at half-maximum (HWHM), in arcseconds. The HWHM is measured from the pulsar location along the torus and jet axes for either side. From top to bottom: NE torus plane, SW torus plane, NW jet axis, SE jet axis.}
\label{fits}
\end{figure}
\section{Discussion and Conclusion}
\citet{Gratton1972} and \citet{Wilson19722} developed the first analytic models attempting to explain the observational properties of the Crab. They constructed a model of the nebula assuming that electrons and positrons produced by the pulsar are transported away by diffusion, with energy lost through synchrotron radiation. The size and spectral shape of the nebula in the radio and optical are well explained by the model, but the authors did not consider the details of the production of the electrons and positrons by the pulsar, or their relationship to the pulsar properties, such as its magnetic field.
\citet{Rees1974} and \citet{KC19841, KC19842} constructed a steady state spherically symmetric magnetohydrodynamic (MHD) model of the Crab with a toroidal magnetic field, which links the pulsar to the nebula. A highly relativistic pulsar wind is terminated by a strong MHD shock, which decelerates the flow to non-relativistic speeds. Downstream the approximately adiabatic flow continues to decelerate, carrying its frozen-in magnetic field out to the supernova ejecta that confines the synchrotron nebula. The model successfully describes the integrated spectrum from optical to X-rays where the diffusion model fails. However, it does not reproduce the radio spectrum, nor reproduce the spatial variations of the optical spectrum across the remnant. The model predicts a constant spectrum that steepens only at the edge of the remnant, but this has not been met by observations in the optical, which instead show a monotonically increasing spectral index \citep{Veron1993, Temim2006}. Similar discrepancies have been observed in X-rays for the other two young PWN, 3C\,58 and G21.5-0.9 \citep{Nynka2014,Slane2004,Safi2001}. This motivated \citet{Tang2012} to re-investigate the diffusion driven models from \citet{Gratton1972} and \citet{Wilson19722}. They found that a combination of diffusion and advection can reproduce the observed spectral index variations observed in radio and optical for the Crab, and in X-rays for the two other sources. However, diffusion under-predicts the size of the Crab in X-rays. A combination of advection and diffusion increases the predicted half-light radius of the Crab at 10 keV from 20$^{\prime\prime}$\ in the pure diffusion case to 25$^{\prime\prime}$\ for the best fit combination of advection and diffusion. This is still too small, and the conclusion of \citet{Tang2012} is that due to the compact size of the Crab in X-rays, the advection time scale dominates over the diffusion time scale for X-ray producing particles. 
Detailed \textit{Chandra} maps of the Crab nebula by \citet{Mori2004} have shown that the spectral index, within some small-scale variations, is constant for most of the nebula and steepens abruptly at the edge. This supports the idea of an advection, rather than diffusion, dominated X-ray nebula.
The first measurements of the extent of the X-ray Crab as a function of energy from 2 -- 12~keV were made in 1964 using a lunar occultation method \citep{Bowyer1964}. \citet{Ku1976} later combined the results of several experiments \citep{Palmieri1975, Ricker1975, Fukada1976, Ku1976}, and found the size to vary as $\propto\nu^{-\gamma}$ with $\gamma = 0.148 \pm 0.012$, an effect they attributed to a combination of diffusive and advective transport of electrons in the nebula. At this time the advective model of KC84 had not yet been constructed, but KC84 concluded in their paper that these results agree fairly well with an advective electron transport. These rates were obtained from multiple experiments and observatories along a narrow range of position angles through the Crab, and they did not probe any azimuthally dependent information.
With \textit{NuSTAR} we have measured the energy dependent size at all position angles at energies from 3 -- 78\,keV. The spatial dependence of the HWHM along the edge of the torus plane is well fit by a power-law with an average of $\gamma=0.08 \pm 0.03$. This is in good agreement with the prediction of $\gamma \sim 1/9$ made by \citet{KC19842}. The rate of change of the jet could not be measured since the torus flux dominates, but judging from the tail of the profile in the bottom panel of Figure \ref{deconvprofile}, it does appear as if the jet shrinks more swiftly outside the torus as a function of energy. The size of the counter-jet as a function of energy clearly follows a different rate, $\gamma=0.22\pm0.03$, suggesting the energy transport is not the same as in the torus. In other band passes this region is found to be different as well, and it is the only part of the remnant where the synchrotron nebula extends beyond the edge of the visible filaments. It is purported that here the shock velocity is larger than for the rest of the remnant, and the post-shock cooling time longer than the age of the Crab \citep{Sankrit1997}.
Similar values of $\gamma \sim 0.2$ have been found for the nebula remnants MSH\,15--52 \citep{An2014} and G21.5-0.9 \citep{Nynka2014}, two PWN observed with \textit{NuSTAR} and covering the same energy band. The authors conclude that both remnants appear to be diffusion dominated. Both of these remnants are more than twice as distant ($\sim$5 kpc) and have distinct morphological differences from the Crab. MSH\,15--52 is not a spherically symmetric remnant, but highly irregular and dominated by a powerful jet. G21.5-0.9 is spherically symmetric, but the inner portions of the remnant are not resolved and there is no apparent torus or jet. This suggests that for the Crab we are probing the inner workings of the PWN, namely the torus, where advection processes dominate, while in the counter-jet region diffusion processes dominate, similar to what is found in MSH\,15--52 and G21.5-0.9.
Using a map of the average spectral index distribution from \textit{Chandra} as the basis, we have shown through simulations that the torus must have a near constant spectrum as a function of radius. If we presume that the wind expands radially along the torus plane, then the transition from a flat profile to a steep one occurs at a radius of approximately 50$^{\prime\prime}$. This is consistent with the KC84 model, which predicts a flat profile in an advection dominated MHD flow that transitions at $\sim 50$$^{\prime\prime}$\ if the termination shock is of size 10$^{\prime\prime}$. In this geometry, the energy dependent rate of shrinkage we have found is also consistent with KC84. We confirm that the torus is well-described by a portion of a spherical outflow, with relatively little diffusion or turbulent mixing, and conclude that, at least in the X-ray band, advection dominates the transport.
What was not anticipated is that in the \textit{NuSTAR} band there is a very clear spectral steepening inside the remnant. Our analysis has revealed the spatially resolved nebula is not well fit by a power-law across the band 3 -- 78~keV, but rather needs a model with a spectral steepening. We chose a broken power-law as an adequate representation and presented maps of the spectral variation that show a steepening of $\Delta\Gamma \sim 0.25$, which appears to be limited to the torus of the nebula with an approximate break energy at $\sim9$\,keV. We confirm this with simulations in order to address any ambiguities resulting from spatial smearing by the PSF. We also conclude that the jet and counter-jet regions do not appear to have a similar spectral steepening.
The integrated spectrum of the nebula has been known to turn over above 100\,keV, transitioning from $\Gamma \sim 2.1$ to $\sim$2.14. The gradual break must come about from the shrinking of the nebula and the steepening of the torus spectrum. There is no immediate physical interpretation of the torus spectrum. It is possible the steepening could result from the projection of emission from electron populations of different synchrotron ages, but a similar steepening has also been seen by \textit{NuSTAR} in the spatially integrated spectrum of G21.5-0.9 \citep{Nynka2014}, and since the two remnants are morphologically different, this calls the geometrical argument into question and might suggest the steepening is connected to the injection spectrum itself. In either case the steepening of the torus spectrum should not come as a surprise. Hard X-ray instruments have measured the photon index above 100\,keV to be $2.140 \pm 0.001$ \citep{Pravdo1997}, while \textit{Chandra} measures the average photon index of the torus to be $\sim$1.9 \citep{Mori2004}. This is a softening of $\Delta\Gamma \sim 0.25$, which matches nicely the value found in our maps. Using the rate of burn-off for the NE side from Table \ref{tablefits}, the HWHM radius of the Crab is just $\sim$30$^{\prime\prime}$\ at 100\,keV, restricting the source of the steepening to the innermost regions of the remnant. It is therefore not only likely but necessary for the torus to steepen in the \textit{NuSTAR} band in order to bridge the gap between soft X-ray and $\gamma$-ray observations.
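The $\sim$30$^{\prime\prime}$ HWHM at 100\,keV quoted above follows directly from extrapolating the NE-torus fit of Table \ref{tablefits}; a one-line numerical check:

```python
# Burn-off extrapolation: HWHM(E) = k * E**-gamma with the
# NE-torus values from the table (FPMA: k ~ 49", gamma ~ 0.094).
k, gamma = 49.0, 0.094          # arcsec, dimensionless

def hwhm(E_keV):
    return k * E_keV**(-gamma)

print(round(hwhm(100.0), 1))    # ~31.8 arcsec, i.e. ~30" at 100 keV
```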
We performed phase resolved spectroscopy of the Crab on several different length scales from the inner 17$^{\prime\prime}$\ out to 200$^{\prime\prime}$. The pulsed spectrum is best represented by a steepened spectrum with a break energy of $\sim$10\,keV. As found previously by \citet{Weisskopf2011} and \citet{Willingale2001}, the index below 10\,keV, $\Gamma_1$, shows spectral evolution as a function of phase; the secondary pulse is harder than the primary and the hardest index occurs during the bridge emission between the first and secondary pulse. We find that $\Gamma_2$ traces $\Gamma_1$ with an average $\Delta\Gamma = 0.27 \pm 0.09$. There are indications of this steepening in RXTE \citep{Pravdo1997} observations, but it was not properly quantified. \citet{Kuiper2001} present phase resolved pulsed spectra by combining \textit{BeppoSAX}, \textit{CGRO}, and \textit{GRIS} data and found that from 0.1\,keV up to 10\,GeV it could be fit using three components: (1) a power-law, (2) a modified power-law (\texttt{logpar}) for the first pulse, and (3) a modified power-law for the bridge emission. They found acceptable fits by combining these three models, and we attempted to fit with the same combination. While we were able to find statistically acceptable fits, the models proved degenerate in our narrower band-pass without the $\gamma$-ray spectrum to constrain them.
Finally, \textit{NuSTAR} participated in a ToO observation of the Crab during the flaring on the 9th of March 2013. No flux change was detected between 3 -- 78\,keV beyond what is expected from calibration uncertainties, which are $\pm 5$\% in flux and $0.01$ in spectral index, thus placing an upper limit on the hard X-ray variability due to $\gamma$-ray flares.
\acknowledgments
This work was supported under NASA Contract No.
NNG08FD60C, and made use of data from the NuSTAR mission,
a project led by the California Institute of Technology,
managed by the Jet Propulsion Laboratory, and funded by the
National Aeronautics and Space Administration. We thank
the NuSTAR Operations, Software and Calibration teams for
support with the execution and analysis of these observations.
This research has made use of the NuSTAR Data Analysis
Software (NuSTARDAS) jointly developed by the ASI Science
Data Center (ASDC, Italy) and the California Institute
of Technology (USA).
\section{Appendix}
The \textit{NuSTAR} point spread function has a sharp core (FWHM = 18$^{\prime\prime}$) but broad, extended wings. It is energy dependent: the half power diameter shrinks by a few arcseconds between 3 and 10\,keV, while above 10\,keV the PSF remains constant.
As shown by the encircled-energy curve of the 10\,keV PSF in Figure \ref{eefplot}, 30\% of the photons of a point source are located within a radius of 20$^{\prime\prime}$, 50\% within 35$^{\prime\prime}$\ and 80\% within 70$^{\prime\prime}$. This means that for an extended source like the Crab, which has most of its flux contained within an ellipse of 100$^{\prime\prime}$$\times$150$^{\prime\prime}$, the extended wings will redistribute roughly 60\% of the flux from a central volume element over the rest of the remnant. This causes the true spectral distribution to be mixed, and even with detailed knowledge of the PSF and effective area, it is unfortunately not possible to disentangle these mixed spectral distributions by any method of deconvolution.
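The encircled-energy numbers quoted above can be interpolated to estimate the scattered fraction. A minimal sketch, using linear interpolation between the quoted points as a crude stand-in for the full tabulated EEF; the $\sim$25$^{\prime\prime}$ neighborhood radius is an illustrative choice consistent with the $\sim$60\% redistribution figure:

```python
import numpy as np

# Encircled-energy points quoted in the text for the 10 keV PSF.
radius_arcsec = np.array([0.0, 20.0, 35.0, 70.0])
enclosed_frac = np.array([0.0, 0.30, 0.50, 0.80])

def eef(r):
    """Linear interpolation of the encircled-energy fraction
    (a crude stand-in for the tabulated NuSTAR EEF curve)."""
    return float(np.interp(r, radius_arcsec, enclosed_frac))

# Fraction of a volume element's flux scattered outside a ~25-arcsec
# neighborhood, i.e. redistributed over the rest of the remnant.
print(round(1.0 - eef(25.0), 2))   # -> 0.63, consistent with "roughly 60%"
```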
One approach to tackle this mixing is to subdivide the extended source into smaller regions and calculate the cross-correlation functions between the regions for both the ARFs and PSFs, as was done in \citet{Wik2014}. The resulting spectra and responses must then be fitted simultaneously. This works well for large sources with slowly changing spectra, but for smaller sources with rapidly changing spectra, where the number of regions could run into the hundreds, this becomes cumbersome. Another way is to forward fold a two-dimensional spectral model through the optics response, carefully taking into account the PSF, and compare to the actual data.
Based on the fine structure we observe in the spectral maps, we have chosen the latter method.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\textwidth]{EEFplot.pdf}
\end{center}
\caption{\textit{NuSTAR} encircled energy curve at 10 keV.}
\label{eefplot}
\end{figure}
\section{Introduction}
\label{}
Among the many ideas developed around the use of liquid noble gases, the Liquid Argon Time Projection Chamber~\cite{intro1,Aprile:1985xz} certainly represented one of the most challenging and appealing designs. The technology was proposed as a tool for uniform and high accuracy imaging of massive detector volumes. The operating principle of the LAr TPC is based on the fact that in highly purified LAr ionization tracks can be transported undistorted by a uniform electric field over distances of the order of meters. Imaging is provided by wire planes placed at the end of the drift path, continuously sensing and recording the signals induced by the drifting electrons. Liquid Argon is an ideal medium since it provides high density and excellent ionization and scintillation yields, and is intrinsically safe, cheap, and readily available anywhere as a standard by-product of the liquefaction of air. Non-destructive readout of ionization electrons by charge induction allows the signal of electrons crossing subsequent planes with different wire orientations to be detected. This provides several projective views of the same event, allowing for space point reconstruction and precise calorimetric measurement. The feasibility of this technology has been demonstrated by the extensive ICARUS R\&D program, which included proof-of-principle studies on small LAr volumes, covering LAr purification methods, readout schemes and electronics, as well as studies with several prototypes of increasing mass addressing purification technology, collection of physics events, pattern recognition, long duration tests and readout. The largest of these prototypes had a mass of 3 tons of LAr~\cite{3tons,Cennini:ha} and was continuously operated for more than four years, collecting a large sample of cosmic-ray and gamma-source events.
Furthermore, a smaller device with 50 l of LAr~\cite{50lt} was exposed to the CERN neutrino beam, demonstrating the high recognition capability of the technique for neutrino interaction events. The realization of the 600 ton ICARUS detector culminated with its full test carried out at surface during the summer 2001~\cite{t600paper}. This test demonstrated that the LAr TPC technique can be operated at the kton scale with a drift length of 1.5~m.
\section{Liquid Argon TPC in a magnetic field}
The bubble-chamber-like reconstruction capability of the liquid Argon TPC simultaneously provides (1) a tracking device with unbiased imaging and reconstruction, and (2) full sampling calorimetry. The detector is fully active, homogeneous and isotropic. The resolution is very good, both for energy (calorimetry) and for angular reconstruction (tracking). The possibility of complementing these features with a magnetic field has been considered and would open new capabilities \cite{Rubbia:2001pk,Rubbia:2004tz,Bueno:2001jd}: (a) charge discrimination, (b) momentum measurement of particles escaping the detector ($e.g.$ high energy muons), (c) very precise kinematics, since the measurements are multiple scattering dominated (e.g. $\Delta p/p\simeq 4\%$ for a track length of $L=12\ m$ and a field of $B=1T$).
The orientation of the magnetic field can be chosen such that the bending direction is in the direction of the drift where the best spatial resolution is achieved. The magnetic field is hence perpendicular to the electric drift field. The Lorentz angle is expected to be very small in liquid (e.g. $\approx 30 mrad$ at $E=500\ V/cm$ and $B=0.5T$). Embedding the volume of argon into a magnetic field should therefore not alter the imaging properties of the detector and the measurement of the bending of charged hadrons or penetrating muons would allow a precise determination of the momentum and a determination of their charge.
The required magnetic field for charge discrimination for a path $x$ in liquid Argon \cite{Rubbia:2004tz} is given by the bending
\begin{equation}
b\approx \frac{x^2}{2R}=\frac{0.3B(T)(x(m))^2}{2p(GeV)}
\end{equation}
and the multiple scattering contribution:
\begin{equation} MS\approx \frac{0.02(x(m))^{3/2}}{p(GeV)}
\end{equation}
At low momenta, we can safely neglect the contribution from the position measurement error given
the readout pitch and drift time resolution.
The momentum determination resolution is then given by:
\begin{equation}
\frac{\Delta p}{p} \approx \frac{0.13}{B(T)(x(m))^{1/2}}
\end{equation}
and the statistical significance for charge separation can be written as ($b^\pm$ are the bending
for positive and negative charges):
\begin{equation}
sig\approx \frac{b^+-b^-}{MS}\approx \frac{2b}{MS} \approx 15B(T)(x(m))^{1/2}
\end{equation}
For example, with a field of 0.55~T, the charge of tracks of 10~cm can be separated
at $2.6\sigma$.
The requirement for a $3\sigma$ charge discrimination can be written as: $b^+-b^- = 2b > 3MS$, which implies a field strength:
\begin{equation}
B\geq \frac{0.2(T)}{\sqrt{x(m)}}
\end{equation}
For long penetrating tracks like muons, a field of $0.1T$ allows the charge to be discriminated for tracks longer than 4 meters. This corresponds, for example, to a muon momentum threshold of 800~MeV/c. Hence, performance is very good, even at very low momenta. Unlike for muons or hadrons, the early showering of electrons makes their charge identification difficult. The track length usable for charge discrimination is limited to a few radiation lengths, after which the showers make the recognition of the parent electron more difficult. In practice, charge discrimination requires high fields: $x=1X_0 \rightarrow B>0.5T$, $x=2X_0 \rightarrow B>0.4T$, $x=3X_0 \rightarrow B>0.3T$. From simulations, we found that the determination of the charge of electrons with energies in the range between 1 and 5 GeV is feasible with good purity, provided the field strength is in the range of 1~T. Preliminary estimates show that these electrons exhibit an average curvature sufficient to achieve electron charge discrimination better than $1\%$ with an efficiency of 20\%~\cite{Bueno:2001jd}. Further studies are on-going.
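The formulas above are easy to evaluate numerically; the following sketch reproduces the quoted numbers (2.6$\sigma$ for a 10~cm track at 0.55~T, $\Delta p/p \approx 4\%$ for $L=12$~m at 1~T, and $B \geq 0.1$~T for a 4~m track):

```python
import math

def bending(B_T, x_m, p_GeV):
    """Track bending b = 0.3*B*x^2 / (2*p), in metres."""
    return 0.3 * B_T * x_m**2 / (2.0 * p_GeV)

def mult_scatt(x_m, p_GeV):
    """Multiple-scattering displacement ~ 0.02 * x^{3/2} / p."""
    return 0.02 * x_m**1.5 / p_GeV

def significance(B_T, x_m):
    """Charge-separation significance 2b/MS = 15*B*sqrt(x)."""
    return 15.0 * B_T * math.sqrt(x_m)

def dp_over_p(B_T, x_m):
    """Momentum resolution 0.13 / (B*sqrt(x))."""
    return 0.13 / (B_T * math.sqrt(x_m))

# Numbers quoted in the text:
print(round(significance(0.55, 0.10), 1))   # -> 2.6 sigma, 10 cm at 0.55 T
print(round(dp_over_p(1.0, 12.0), 3))       # -> 0.038, ~4% for L=12 m, B=1 T
print(round(0.2 / math.sqrt(4.0), 2))       # -> 0.1 T for a 4 m track
```

Note that the momentum $p$ cancels in the significance, which is why the charge-separation requirement depends only on $B$ and the track length.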
An R\&D programme to investigate a LAr TPC in a magnetic field was initiated. The goal was to study the drift properties of free electrons in LAr in the presence of a magnetic field and to prove that the imaging capabilities are not affected. The test programme included (1) checking the basic imaging in B-field (2) measuring traversing and stopping muons (3) test charge discrimination (4) check Lorentz angle. We report here on preliminary results obtained. A complete report is in preparation~\cite{lafthesis}.
\begin{figure}[tb]
\begin{center}
\epsfig{file=Allezusammen-b.eps,width=0.95\textwidth} \caption{\label{magnet}
Cut through the
magnet with the LAr cryostat containing the drift chamber. The
cryostat consists of three cylinders: the purified LAr container
(red), the LN$_2$ bath (blue) and the vacuum insulation (green)}
\label{fig:setup}
\end{center}
\end{figure}
\begin{figure}[tb]
\begin{center}
\epsfig{file=chamber.eps,width=0.45\textwidth} \epsfig{file=driftchamber_open.eps,width=0.45\textwidth} \caption{(left) Cut through the
drift chamber (right) View into
the assembled drift chamber with the cathode plate removed.}
\label{fig:laff1}
\end{center}
\end{figure}
\section{Experimental Setup}
The experimental setup (see Figure~\ref{fig:setup}) was custom built for this test.
We have first designed and assembled a liquid Argon TPC with a width of 300~mm, a height of 150~mm and a maximal
drift length of 150~mm (see Figure~\ref{fig:laff1}). Its dimensions were chosen to fit in the recycled
SINDRUM I magnet\footnote{The magnet was kindly lent to us by PSI, CH-5232 Villigen,
Switzerland and was brought from PSI to ETH/Z\"urich.},
which allows the chamber to be tested in a maximal field of 0.55~T.
At the maximal field the DC current is 850~A corresponding to a power consumption of 220~kW. The
electrical power and the water cooling circuit necessary to operate the magnet in the
laboratory had to be specially installed by ETH.
The cryostat is made of three concentric cylinders: the innermost, with a diameter of 250~mm, contains the purified LAr with the drift chamber; the second cylinder is a LN$_2$ bath kept under an absolute pressure of 2.7~bar in order not to freeze the LAr at about 1~bar, and is wrapped with 25 layers of superinsulation; the outermost cylinder provides the insulation vacuum.
The drift chamber consists of a rectangular cathode, 27 field shaping electrodes spaced
by 5~mm and the three detector planes. The first two detector planes are wire chambers
with the wires oriented at $\pm 60^o$ to the vertical; the stainless steel wires have a
diameter of $100~\mu m$ and a pitch of 2~mm. The wire chambers are operated at potentials such that
they are transparent to the drifting electrons and only pick up an induced signal from
the electron cloud passing the planes. The third plane is a PCB with horizontal strips
with a width of 1~mm and a pitch of 2~mm on which the drift electrons are collected. The
wires and the strips are connected through 3~m long twisted pair cables to the
feedthroughs on a flange, from the feedthroughs the signals are connected through 20~cm
cables to the analog boards (CAEN-V791) of the ICARUS readout electronics; the VME-like
analog board contains the low-noise preamplifiers for 32 channels and two multiplexed 10
bit FADC running at 40~MHz; each wire (strip) is sampled with 2.5~MHz. The readout
electronics works as a continuous wave form digitizer: the digitized data are stored in a
buffer, large enough to contain the data of a time interval of about 1~ms for each
channel; the maximal drift time during this run was about 150~$\mu$s. When a trigger
occurs, the filling of the buffer is stopped and the data are transferred to a PCI card in
a PC; the PCI card is read out with a LabView program. The high voltage, up to a maximal
value of 22.5~kV, is applied to the cathode and is distributed through a resistor chain to
the field-shaping electrodes in order to produce a homogeneous electric drift field
(horizontal and perpendicular to the solenoid axis).
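As a cross-check of the readout figures quoted above (illustrative arithmetic only; all constants are taken from the text itself): two multiplexed FADCs at 40~MHz shared among 32 channels give 2.5~MHz per channel, so a buffer covering about 1~ms must hold roughly 2500 samples per channel, i.e. several full drift windows.

```python
# Back-of-the-envelope check of the readout figures quoted in the text.
FADC_RATE_HZ = 40e6        # each multiplexed 10-bit FADC runs at 40 MHz
CHANNELS_PER_FADC = 16     # 32 channels shared by two FADCs
BUFFER_WINDOW_S = 1e-3     # the buffer covers about 1 ms per channel
MAX_DRIFT_S = 150e-6       # maximal drift time during this run

rate_per_channel_hz = FADC_RATE_HZ / CHANNELS_PER_FADC   # 2.5 MHz, as quoted
samples_per_channel = rate_per_channel_hz * BUFFER_WINDOW_S
drift_windows_buffered = BUFFER_WINDOW_S / MAX_DRIFT_S
```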
Two different triggers were used. To trigger on cosmic ray muons passing through
the magnet, plastic scintillators mounted on top and at the bottom of the magnet were
used; the trigger counters also define the time $t_0$ of the event, needed to determine
the drift time from the chamber signals. The sum output of the chamber signals makes it
possible to trigger on low-energy events, e.g. muons stopping in the chamber; in this case
the scintillators on top of the chamber are used to define the $t_0$.
\section{Results}
In November 2004 the setup was ready for a first test. Before filling with LAr, the
cylinder containing the drift chamber was pumped for four weeks; the final vacuum
was better than $5 \cdot 10^{-6}$~mbar. After cooling down for a few days with
LN$_2$, the cylinder was filled through a purification cartridge with LAr; the same LAr
filling was used during the whole three weeks of data taking without recirculating it
through the purification cartridge. Starting the tests without magnetic field and
triggering with the scintillator counters, clean cosmic ray tracks were immediately
observed at a drift field of 500~V/cm. About 100 passing-through muons per hour were
stored. From the observation of a decreasing collected drift charge with increasing drift
time, the mean lifetime of the free electrons in LAr was estimated to be about
100~$\mu$s; it did not decrease significantly during the whole period of the run. \\
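The lifetime estimate quoted above relies on the standard exponential attenuation of the drifting charge by attachment to electronegative impurities, $Q(t)=Q_0\,e^{-t/\tau}$. A minimal sketch of this relation (the function name and the evaluation are ours, for illustration only):

```python
import math

def collected_fraction(drift_time_s, lifetime_s):
    """Surviving fraction of drifting ionization charge, assuming
    exponential attachment losses Q(t) = Q0 * exp(-t / tau)."""
    return math.exp(-drift_time_s / lifetime_s)

# With the ~100 us lifetime quoted in the text, charge drifting for the
# maximal ~150 us of this run is attenuated to about 22% of its initial value.
f_max_drift = collected_fraction(150e-6, 100e-6)
```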
After a few days of commissioning, the magnetic field was turned on to the maximal value
of 0.55~T. The signal-to-noise ratio of the chamber signals did not change significantly
with the magnet on. Figure~\ref{fig:eve}
shows the raw data (from the collection plane) of
events with the magnetic field turned on and with a drift field of 300~V/cm; the
intensity of the black color is a measure of the collected charge. The horizontal axis
corresponds to the drift time and the vertical axis to the wire number. The figures show
the two-dimensional projection of tracks in the plane perpendicular to the magnetic
field. Combining clusters with equal drift time from two (or three) planes allows the
three-dimensional reconstruction of the track\cite{lafthesis}. These events are
interpreted as cosmic muons either crossing the detector or stopping and then decaying
in it. Delta-rays or converted $e^+e^-$ pairs are also easily identifiable.
\section{Conclusions}
We have built a small LAr test TPC and operated it for the first time in a magnetic field
(0.55~T) perpendicular to the electric drift field. The quality of cosmic ray tracks is
not significantly degraded with the B-field turned on. The setup will be used to study
the drift properties of electrons in LAr in a magnetic field; a measurement of the
Lorentz angle is foreseen. A complete description of this work is in preparation\cite{lafthesis}.
\begin{figure}[tb]
\begin{center}
\epsfig{file=run1273event44.eps,height=5cm} \epsfig{file=run1314event38.eps,height=5cm}
\epsfig{file=run1293event90.eps,height=5cm} \epsfig{file=run1493event15.eps,height=5cm}
\epsfig{file=run1412event78.eps,height=5cm} \epsfig{file=run1364event68.eps,height=5cm}
\epsfig{file=run1464event35.eps,height=5cm} \epsfig{file=run1409event63.eps,height=5cm}
\caption{Eight examples of real events collected with the liquid Argon TPC prototype immersed in a magnetic field of 0.55~T.
The horizontal axes correspond to the time coordinate and the vertical axes are the wire coordinate. }
\label{fig:eve}
\end{center}
\end{figure}
\section*{Acknowledgements}
We thank the ETHZ, Abteilung Bauten, for providing us the necessary power and cooling infrastructure to operate the SINDRUM magnet at ETHZ. We are also indebted to the INFN Padova group who has cordially lent us the readout electronics necessary for the measurements. In particular, we thank Sandro Centro (INFN Padova) for his support. We thank
P.~Picchi and F.~Pietropaolo for useful discussions.
This work was supported by ETH/Z\"urich and Swiss National Science Foundation.
\section{Introduction}
In the last few years, much effort has been devoted to the study of
statistical properties of scalar quantities advected by random flows with
short memory. Remarkable progress in understanding intermittency and anomalous
scaling has been achieved \cite{K94,GK95,CFKL95,SS95} for the Kraichnan
model \cite{K94} of passive scalar advection by random, Gaussian,
incompressible and white-in-time velocity fields. A crucial property of the
model is that equal-time correlation functions obey closed equations of motion.
Analytical treatments are thus feasible, and a general
mechanism for intermittency has been identified. Its source has been found
in zero modes of the operators governing the Eulerian dynamics of $N$-point
correlation functions \cite{GK95,CFKL95,SS96}.\\
Concerning numerical studies of the Kraichnan model, efficient Lagrangian methods
have recently been proposed \cite{FMV98,GPZ98}; thanks to them, both the limit of
vanishing intermittency corrections, for which perturbative
predictions are available \cite{GK95,SS95}, and the non-perturbative
region have been successfully investigated \cite{FMV98,FMNV98b}.
A compressible generalization of the Kraichnan model has been recently
proposed \cite{CKV97,GV98,AA98} and the existence of very different behaviors for
the Lagrangian trajectories, depending on the degree of compressibility, has
been shown analytically \cite{CKV97,GV98}.
For weak compressibility, the well-known
direct cascade of the passive scalar energy takes place. This is
associated, from a Lagrangian point
of view, with the explosive separation of initially close trajectories
\cite{FMNV98b,BGK97},
a feature characterizing the direct energy cascade for the incompressible
Kraichnan model as well. On the contrary, when the compressibility is strong enough,
particles collapse: both a non-intermittent inverse cascade of tracer energy, exciting
the large scales, and a suppression of the short-scale dissipation occur \cite{GV98}.
The relation between intermittency and compressibility is
the main issue of the present short communication.\\
As already highlighted \cite{CKV97,GV98}, because compressibility inhibits
the separation between Lagrangian trajectories,
the resulting scalar transport slows down and scaling
properties may be affected.
Our remark here is that the slowing down of Lagrangian separations plays an essential role in
characterizing intermittency in the direct cascade regime.
This can be easily grasped from the following considerations.
In the direct cascade regime, typical trajectories are stretched, whereas
contractions are rare and thus affect only the
extreme tails of the pdf of scalar differences.
Furthermore, within a Lagrangian framework, scalar correlations are
essentially governed by the time spent by particles with their
mutual distances smaller than the integral scale of the problem.
The stretching process, typical of the direct energy cascade, is thus
intermittent because contracted trajectories cause strong fluctuations
of the time needed to reach the integral scale.
When compressibility is present, even if weakly, trapping effects are
amplified due to the slowing
down of Lagrangian separations. It then follows that
the dynamical role of collapsing trajectories increases for increasing
compressibility, and the same should happen for the intermittency.
It is worth noting that the trapping mechanism, enhanced by the
compressibility, works in the same direction as that induced
by lowering the spatial dimension $d$: it is indeed observed perturbatively
\cite{CFKL95} that reducing $d$ increases intermittency,
a fact corroborated by numerical evidence \cite{FMNV98b} comparing results of
the incompressible Kraichnan model in two and three dimensions.
These considerations will be here quantitatively supported by numerical
simulations.
\vspace{3mm}
The compressible generalization of the
Kraichnan model is governed by the equation (for the Eulerian dynamics)
\begin{equation}
\label{fp}
\partial_t\theta(\bbox{r},t)+\bbox{v}(\bbox{r},t)\cdot\nabla\,
\theta(\bbox{r},t)=\kappa\nabla^2\theta(\bbox{r},t)+f(\bbox{r},t) ,
\end{equation}
where, as for the incompressible case, the velocity and the forcing
are zero mean, Gaussian independent processes, both homogeneous, isotropic
and white-in-time. The velocity is self-similar, with the 2-point correlation
function:
\begin{equation}
\label{2-point-v}
\langle v_{\alpha}(\bbox{r},t)v_{\beta}(\bbox{r}',t') \rangle =
\delta(t-t')\,\left[ d^0_{\alpha\beta} -d_{\alpha\beta}(\bbox{r}- \bbox{r}')
\right] ,
\end{equation}
where $d_{\alpha\beta}(\bbox{r})$, the so-called {\it eddy-diffusivity}, is fixed by isotropy and scaling behavior
along the scales:
\begin{eqnarray}
&&d_{\alpha\beta}(\bbox{r})=\nonumber\\
&&r^{\xi}\left\{\left[A+(d+\xi-1)B\right]
\delta_{\alpha\beta} + \xi \left[ A-B\right]\frac{r_{\alpha}r_{\beta}}{r^2}
\right\} ,
\label{eddydiff}
\end{eqnarray}
where $d$ is the dimension of the space.\\
The degree of compressibility is controlled by the ratio
$\wp\equiv {\cal C}^2/{\cal S}^2$, where ${\cal S}^2\equiv
A+(d-1)B\propto\langle(\nabla\bbox{v})^2\rangle$ and
${\cal C}^2\equiv A\propto\langle(\nabla\cdot\bbox{v})^2\rangle$;
it satisfies the inequality $0\leq \wp \leq 1$.\\
The statistics of the forcing term is defined by the 2-point correlation
function
\begin{equation}
\label{2-point-f}
\langle f(\bbox{r},t)f(\bbox{r}',t') \rangle =
\delta(t-t')\,\chi(|\bbox{r}- \bbox{r}'|) ,
\end{equation}
where $\chi$ is chosen nearly constant for distances $|\bbox{r}- \bbox{r}'|$
smaller than the integral scale $L$ and rapidly decreasing for $|\bbox{r}- \bbox{r}'| \gg L$.\\
It is worth remarking that equation (\ref{fp}) physically describes the
evolution of a tracer, that is a quantity which is conserved
along the Lagrangian trajectories in absence of diffusivity and forcing.
To characterize the advection of a density, one should consider
the equation
\begin{equation}
\label{density}
\partial_t\rho(\bbox{r},t)+\nabla \cdot \left( \bbox{v}(\bbox{r},t)
\rho(\bbox{r},t) \right) =\kappa\nabla^2\rho(\bbox{r},t)+f(\bbox{r},t) ,
\end{equation}
which in the ideal case ($\kappa=0$, $f=0$) enjoys the conservation
of the total mass.
The density advection equation also has a wide range of physical applications and
deserves a detailed study of its own, as well as a specific numerical approach.
Hereafter we shall limit ourselves
to the case of tracer advection
ruled by (\ref{fp}).
Exploiting the $\delta$-correlation in time, equations for the even scalar
correlations in the stationary state (odd correlations being trivially zero)
can be deduced \cite{GK96}; for the generic $N$-point correlation function $C_N^{\theta}\equiv\langle \theta(r_1)\cdots\theta(r_N) \rangle$ the expression reads:
\begin{equation}
{\cal M}_N\,C_N^{\theta}=
\sum_{i<j} \chi\left( \frac{r_{ij}}{L}\right)
\langle \theta(r_1)
\smash{\mathop{\dots\dots}_{\hat{i}\ \ \hat{j}}}
\theta(r_N) \rangle \;,
\label{eqclosed}
\end{equation}
with $r_{ij}\equiv r_i-r_j$, and ${\cal M}_N$ is the differential operator
given by:
\begin{eqnarray}
&&{\cal M}_N=\nonumber\\
&& \sum_{1\leq n < m \leq N} d_{\alpha\beta}(\bbox{r}_n - \bbox{r}_m)
\nabla_{r_{n{\alpha}}} \nabla_{r_{m{\beta}}} - \kappa\sum_{1\leq n\leq N}
\nabla^2_{r_n} .
\label{emme}
\end{eqnarray}
As for the incompressible case, this model has a Gaussian limit for $\xi\to 0$,
and the perturbative expansion at small $\xi$'s can be done as in
Ref.~\cite{GK95}. Accordingly, the calculation performed in the weakly
compressible case (i.e. $\wp < d/\xi^2$),
corresponding to the direct cascade regime, leads
(see Ref.~\cite{GV98})
to the following expression for the intermittent correction $\Delta_N^{\theta}$ to the
normal scaling exponent $(2-\xi)N/2$ of the $N$-point structure function
$S_N^{\theta}(r) = \langle [\theta(\bbox{r}) -\theta(\bbox{0})]^N\rangle
\propto r^{(2-\xi)N/2-\Delta_N^{\theta}}$:
\begin{equation}
\Delta_N^{\theta}=\frac{N(N-2)(1+2\wp)}{2(d-2)} \xi + O(\xi^2) .
\label{perturb}
\end{equation}
The perturbative approach gives thus a first clue that compressibility
works to enhance intermittent corrections.
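To make the trend concrete, one can simply evaluate (\ref{perturb}) for increasing $\wp$. The following short snippet (ours, purely illustrative of the first-order formula) does so for $d=3$, $N=4$:

```python
def delta_first_order(N, d, xi, p):
    """First-order correction Delta_N = N(N-2)(1+2p) xi / (2(d-2)),
    valid perturbatively for small xi; p is the compressibility degree."""
    return N * (N - 2) * (1 + 2 * p) * xi / (2 * (d - 2))

# d = 3, N = 4, xi = 0.1: the anomaly grows linearly with the
# compressibility degree, doubling between p = 0 and p = 0.5.
anomalies = [delta_first_order(4, 3, 0.1, p) for p in (0.0, 0.5, 1.0)]
```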
We are however interested in checking that this is a general and robust
feature associated with compressibility, and thus that it is
present for generic $\xi$.
This problem is not accessible by
perturbative techniques; numerical methods are generally needed
to investigate it.
With this purpose in mind, we have developed a new
Lagrangian numerical method
(a different viewpoint with respect to the one in Ref.~\cite{FMV98}),
where the strategy is now
formulated in terms of a {\em first exit time} problem \cite{gaw}.
The method consists in the Monte Carlo simulation of
Lagrangian trajectories according to the stochastic differential equation
\begin{equation}
\dot {\bbox{r}}_n = \bbox{v}(\bbox{r}_n,t)+\sqrt{2\kappa}\dot{w}_n \;,
\end{equation}
where the $w_n$ are independent Wiener processes.
The evolution of the probability $ P_{N}(t,\bbox{x}|t_0,\bbox{x}_0) $ that
the $N$ Lagrangian tracers have a configuration
$\bbox{x}=(\bbox{r}_1,\ldots,\bbox{r}_N)$ at time
$t$ given their initial configuration $\bbox{x}_0$ at time
$t_0$ is ruled by
the Fokker-Planck equation
\begin{equation}
\frac{\partial}{\partial t} P_{N}(t,\bbox{x}|t_0,\bbox{x}_0)+
{\cal M}^{\star}_{N}(\bbox{x}) P_{N}(t,\bbox{x}|t_0,\bbox{x}_0) = 0 \;,
\label{2.1}
\end{equation}
where the operator ${\cal M}^{\star}_{N}$ is the adjoint of (\ref{emme}).
As a consequence of (\ref{2.1}) the probability
obeys also the backward Kolmogorov equation
\begin{equation}
\frac{\partial}{\partial t_0} P_{N}(t,\bbox{x}|t_0,\bbox{x}_0) +
{\cal M}_{N}(\bbox{x}_0) P_{N}(t,\bbox{x}|t_0,\bbox{x}_0) = 0 \;.
\label{2.11}
\end{equation}
We now introduce the Green function
\begin{equation}
G(\bbox{x},\bbox{x}_0)=
\int_{t_0}^{\infty} dt \; P_N(t,\bbox{x}|t_0,\bbox{x}_0)\;,
\label{eq:2.2}
\end{equation}
which enjoys the following properties
\begin{eqnarray}
{\cal M}^{\star}_{N}(\bbox{x}) G(\bbox{x},\bbox{x}_0) &=&
-\delta(\bbox{x}-\bbox{x}_0) \; ,\\
{\cal M}_{N}(\bbox{x}_0) G(\bbox{x},\bbox{x}_0) &=&
-\delta(\bbox{x}-\bbox{x}_0) \; .
\label{eq:2.21}
\end{eqnarray}
Let us define the characteristic size of a configuration of $N$ particles
as $R(\bbox{x})
=[(\sum_{i<j} |\bbox{r}_i-\bbox{r}_j|^2)/(N(N-1)/2)]^{1/2}$ .
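This characteristic size is elementary to evaluate; a minimal sketch (the function name is ours):

```python
import itertools, math

def configuration_size(points):
    """R(x): root-mean-square pair distance of an N-particle configuration;
    `points` is a sequence of coordinate tuples in any dimension d."""
    pairs = list(itertools.combinations(points, 2))
    mean_sq = sum(
        sum((a - b) ** 2 for a, b in zip(p, q)) for p, q in pairs
    ) / len(pairs)
    return math.sqrt(mean_sq)
```

The absorbing boundary is then imposed on the single scalar $R(\bbox{x})$ rather than on each particle separately.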
We now impose Dirichlet (absorbing) boundary conditions
at $R(\bbox{x})=L \gg R(\bbox{x}_0)$,
and compute numerically the first exit time from
the volume of configuration space limited by the boundary, which is
expressed in terms of the Green function as (see e.g. \cite{Risken})
\begin{equation}
T_L(\bbox{x}_0)=\int_{R(x)<L} dx\; G(\bbox{x},\bbox{x}_0).
\end{equation}
A trivial consequence of the property (\ref{eq:2.21}) is that
\begin{equation}
{\cal M}_{N}(\bbox{x}_0) T_L(\bbox{x}_0) = -1,
\end{equation}
an equation whose
structure resembles that of (\ref{eqclosed}); indeed we can
conclude, similarly to what happens for correlation functions
(e.g.\cite{GK95,CFKL95}),
that $T_L(\bbox{x}_0)$ must amount to the sum of an
inhomogeneous solution plus a linear combination of
zero modes $f_j$ of the operator ${\cal M}_{N}$:
\begin{equation}
T_L(\bbox{x}_0)=\sum_j C_j L^{\gamma-\sigma_j} f_j(\bbox{x}_0)
+ \mbox{inhomog. term}\;,
\label{2.5}
\end{equation}
where the explicit dependence on $L$ has been extracted
taking advantage of the scaling properties of ${\cal M}_{N}$,
$\sigma_j$ is the scaling exponent of the zero mode $f_j$ and
$C_j$ is a constant independent of $L$.
Among the non-trivial zero modes $f_j$,
only those functions which depend on all
the coordinates can contribute to the $N$-th order structure function.
We would like to extract this contribution leaving aside all the others:
it is easy to realize that this can be achieved by taking
a linear combination of the
exit times with different initial conditions.
This operation also removes the inhomogeneous term.
If we denote by $\nabla_i(\bbox{\rho})$ the operator acting on functions
of the $N$ particle coordinates as
$\nabla_i(\bbox{\rho})F(\bbox{r}_1,\ldots,\bbox{r}_i,\ldots,\bbox{r}_N)
=F(\bbox{r}_1,\ldots,\bbox{r}_i+\bbox{\rho},\ldots,\bbox{r}_N)-
F(\bbox{r}_1,\ldots,\bbox{r}_i,\ldots,\bbox{r}_N)$
we will have
\begin{equation}
\Sigma_N(L)=\prod_i \nabla_i(\bbox{\rho}) T_L(\bbox{x}_0) \propto
L^{\gamma-\zeta_N}
\label{2.6}
\end{equation}
where $\zeta_N=(2-\xi)N/2-\Delta^{\theta}_N$ is the scaling exponent of the
structure function $S_N^{\theta}(r)\sim r^{\zeta_N}$.
Whenever $\bbox{x}_0=\bbox{0}$, due to the symmetry of the
$f_j$'s under exchanges of particles coordinates,
the expression for $\Sigma_N(L)$ takes a
simple form, which,
for example, for $N=4$ reads as
$
\Sigma_4(L)=
2\, T_L(\bbox{0},\bbox{0},\bbox{0},\bbox{0})-
8\, T_L(\bbox{\rho},\bbox{0},\bbox{0},\bbox{0})+
6\, T_L(\bbox{\rho},\bbox{\rho},\bbox{0},\bbox{0})
$.
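The coefficients $2,-8,6$ follow from expanding the product of difference operators by inclusion-exclusion and identifying, via the exchange symmetry of the $f_j$'s together with translation invariance, a configuration with $k$ particles shifted to $\bbox{\rho}$ with one with $N-k$ shifted. A short check of this combinatorics (ours, illustrative only):

```python
from math import comb

def sigma_coefficients(N):
    """Coefficients of the exit times T_L in Sigma_N(L), grouped by the
    number k of particles placed at rho (the rest staying at 0).
    Expanding prod_i nabla_i(rho) gives a signed sum over the subsets of
    shifted particles; symmetry identifies k shifted with N - k shifted."""
    coeffs = {}
    for n_shifted in range(N + 1):
        k = min(n_shifted, N - n_shifted)   # symmetric representative
        coeffs[k] = coeffs.get(k, 0) + (-1) ** (N - n_shifted) * comb(N, n_shifted)
    return coeffs
```

For $N=4$ this reproduces the combination $2,-8,6$ quoted above; for $N=2$ it gives $\Sigma_2(L)=2T_L(\bbox{0},\bbox{0})-2T_L(\bbox{\rho},\bbox{0})$.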
Summarizing: the numerical method consists
in the Monte Carlo simulation of the Lagrangian trajectories
of $N$ particles advected by a rapidly changing velocity field,
according to the Fokker-Planck equation (\ref{2.1});
average first exit times outside a volume of size $L$ are computed
for different arrangements of the initial conditions, and then
linearly combined according to (\ref{2.6}) in order to extract
the scaling exponent $\zeta_N$.
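The core of the method, the Monte Carlo estimation of mean first exit times, can be illustrated on a toy diffusion (ours; it uses plain Brownian motion rather than the Kraichnan eddy-diffusivity, chosen so that the exit-time equation $\kappa T''=-1$, with $T'(0)=0$ and $T(L)=0$, is solvable by hand):

```python
import math, random

def mean_exit_time_mc(kappa=0.5, L=1.0, r0=0.0, n_paths=400, dt=1e-3, seed=1):
    """Monte Carlo estimate of the mean first exit time from [0, L) for the
    toy diffusion dr = sqrt(2*kappa) dW, reflected at 0 and absorbed at L.
    The exact answer, from kappa * T'' = -1 with T'(0) = 0, T(L) = 0, is
    T(r0) = (L**2 - r0**2) / (2 * kappa)."""
    rng = random.Random(seed)
    step = math.sqrt(2.0 * kappa * dt)
    total = 0.0
    for _ in range(n_paths):
        r, t = r0, 0.0
        while r < L:                                  # absorbing boundary at L
            r = abs(r + step * rng.gauss(0.0, 1.0))   # reflect at the origin
            t += dt
        total += t
    return total / n_paths

t_mc = mean_exit_time_mc()   # exact value with these defaults: 1.0
```

In the actual computation the same logic runs for $N$ particles in $d$ dimensions, with correlated velocity increments drawn according to the eddy-diffusivity (\ref{eddydiff}) and the exit condition imposed on $R(\bbox{x})$.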
As a final remark, the numerical method here employed
can be viewed as a merging of the two Lagrangian methods
introduced by Frisch, Mazzino and Vergassola in Ref.~\cite{FMV98}
and by Gat and Zeitak in Ref.~\cite{GZ97}.
Namely, it borrows from the first one the idea of subtracting
exit times of different initial conditions to extract the
only zero mode that contributes to the structure functions,
while it inherits from the second the spirit of working with
particle configurations (shapes).
The advantages of the present method with respect to \cite{FMV98}
mainly reside in the evaluation of first exit times rather than
of residence times, a fact which substantially reduces the computational cost.
We present the numerical results obtained for the scaling of the
fourth-order structure function
$S_4(r;L)\equiv \langle(\theta(\bbox{r}) - \theta(\bbox{0}))^4\rangle$ in three dimensions. As previously mentioned, when the dimension $d$ of the space is lowered, fluctuations increase and, as a consequence, the number of realizations needed to obtain clean scaling grows as well; the addition of compressibility further enhances this effect.
For the first numerical experiments with the new method, we have thus opted
for $d=3$.\\
The method has been tested performing the analysis
of the incompressible
limit $\wp=0$ for different values of $\xi$:
the anomaly $\Delta_4^{\theta}=2\zeta_2 - \zeta_4 $
has always been found to be compatible with the results presented in
Refs.~\cite{FMV98,FMNV98b}. The computation of
$\Sigma_2(L)$ -- which can be evaluated
analytically -- has provided another stringent test of the method.\\
Varying the degree of compressibility $\wp$, we have studied
in the direct cascade regime the connection between
the slowing down of Lagrangian trajectories and intermittency at the two
distinct values
$\xi=0.75$ and $\xi=1.1$.
Notice that for these two values of $\xi$, the condition ($\wp < d/\xi^2$)
for the direct cascade
of energy to take place \cite{GV98} is verified for
the entire range of values $0\leq \wp \leq 1$ of the compressibility.
Several motivations account for this choice.
First of all,
we avoided the region of $\xi$ close to $0$
($\gamma\rightarrow 2$), where capturing the subdominant anomalous
exponents is numerically expensive and, furthermore, the results are
known from the perturbative expansion. Second,
when $\xi$ is close to $2$ ($\gamma \rightarrow 0$), non-local
effects are very strong and
the range of values of $\wp$ (i.e. $\wp < d/\xi^2$)
pertaining to the direct cascade is narrower.
Figures~\ref{fig1} and \ref{fig2} show the behavior of $\Sigma_4(L)$
for the two values of $\xi$ under consideration and for different values of $\wp$;
all curves display a fairly good power-law scaling.
According to the relation
(\ref{2.6}) the scaling exponent is
$\gamma-\zeta_4= -\gamma + \Delta_4^{\theta}$, so that the curves become
flatter and flatter as the anomaly grows.
It is thus evident from our results that when compressibility increases,
the intermittent correction to the normal scaling grows as well.
\begin{figure}
\centerline{\psfig{file=scaling_xi0.75.eps,width=7cm}}
\caption{A log-log plot of $\Sigma_4(L)$ for
$\xi=0.75$. (a): $\wp=0$; (b): $\wp=0.25$; (c): $\wp=0.5$; (d): $\wp=0.75$.
Separation $\rho=2.7\times 10^{-2}$,
diffusivity $\kappa=2.3\times 10^{-5}$,
number of realizations ranging from $20\times 10^6$
(case (a)) to $30\times 10^6$ (case (d)).
Solid lines represent the best fit power laws.}
\label{fig1}
\end{figure}
\begin{figure}
\centerline{\psfig{file=scaling_xi1.1.eps,width=7cm}}
\caption{As in Fig.~1, for $\xi=1.1$ and diffusivity $\kappa=2.5\times
10^{-3}$.}
\label{fig2}
\end{figure}
Notice that the ratio between $\Sigma_4$
and the dominant contribution to each term of the sum
scales as $L^{-\zeta_4}$. As a
consequence, small values of $\xi$ (which correspond to
large values of $\zeta_4$) require
a larger amount of statistics to
make the subdominant contribution emerge.
This is why the scaling region
for $\xi=0.75$ is smaller than that for $\xi=1.1$.
Finally, our results are summarized in Fig.~\ref{fig3} which shows
the anomaly $2\zeta_2 - \zeta_4$ {\it vs} the
compressibility factor $\wp$ for $\xi=0.75$
(squares joined by a dot-dashed line) and $\xi=1.1$
(circles joined by a dashed line).
As in Ref.~\cite{FMV98}, the error
bars are obtained
by analyzing the fluctuations of local scaling exponents over octave
ratios of values for $L$, a method which gives a very conservative estimate
of the errors. The effectiveness of the first exit time computation is somewhat
balanced by the need for a huge number of realizations to achieve
a satisfactory statistical convergence. This drawback is particularly
visible for large $L$, where the signal is rather noisy.
\begin{figure}
\centerline{\psfig{file=curva_anomalie.eps,width=7cm}}
\caption{The anomaly $2\zeta_2 - \zeta_4$ for the fourth-order structure function, for $\xi=0.75$
(squares joined by a dot-dashed line) and $\xi=1.1$ (circles joined by a dashed line).}
\label{fig3}
\end{figure}
In conclusion, we have shown in the context of the Kraichnan compressible
model
that there is a tight relationship between intermittency of passive scalar
statistics and compressibility of the advecting velocity field. This result
can be easily understood from the Lagrangian viewpoint.
Intermittency arises whenever the particles experience long
periods of inhibited separation: since compressible flows are characterized
by the presence of trapping regions, an enhancement of intermittency can be
reasonably expected. The validity of this
argument has been assessed by means of a numerical Lagrangian method.
We acknowledge innumerable discussions
on the subject matter with M.~Vergassola.
Simulations were performed in the framework
of the SIVAM project of the Observatoire de la C\^ote d'Azur. Part of
them were performed using the computing facilities of CINECA.
\section{Introduction}\label{sec:Introduction}
We work over the complex number field $\mathord{\mathbb C}$.
By a \emph{variety},
we mean a reduced irreducible quasi-projective scheme.
The fundamental group $\pi_1(V)$ of a variety $V$
is the topological fundamental group of the analytic space
underlying $V$.
The composition of paths is read from left to right;
that is, for paths $\alpha: I:=[0, 1]\to V$ and $\beta: I\to V$,
we define $\alpha\beta: I\to V$ only when $\alpha(1)=\beta (0)$.
\par
For a subset $S$ of a group $G$,
we denote by $\gen{S}$ the subgroup of $G$ generated by the elements of $S$.
Let a group $\Gamma$ act on $G$ from the right.
Then the subgroup
$$
N_\Gamma:=\gen {\;\shortset{g\sp{-1} g\sp\gamma}{g\in G, \gamma\in \Gamma}\;}
$$
of $G$ is normal, because
$h\sp{-1} (g\sp{-1} g^\gamma) h =((gh)\sp{-1} (gh)\sp\gamma) (h\sp{-1} h\sp\gamma)\sp{-1}$.
We then put
$$
G/\hskip -2.2pt/\hskip 1pt \Gamma :=G/N_\Gamma,
$$
and call $G/\hskip -2.2pt/\hskip 1pt \Gamma$ the \emph{Zariski-van Kampen quotient }of $G$ by $\Gamma$.
\par
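As an elementary illustration of this quotient (a standard toy computation, not taken from the results below):

```latex
For instance, let $G=\gen{g}\cong\mathord{\mathbb Z}$ and let the
non-trivial element $\gamma$ of $\Gamma=\mathord{\mathbb Z}/2\mathord{\mathbb Z}$
act by $g\sp\gamma=g\sp{-1}$. Then
$(g^n)\sp{-1}(g^n)\sp\gamma=g^{-2n}$ for every $n$, so that
$$
N_\Gamma=\gen{g^2}, \qquad
G/\hskip -2.2pt/\hskip 1pt \Gamma \;\cong\; \mathord{\mathbb Z}/2\mathord{\mathbb Z}.
$$
```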
\bigskip
Let $\shortmap{f}{X}{Y}$ be a dominant morphism
from a smooth variety $X$ to a smooth variety $Y$
with a connected general fiber.
There exists a non-empty Zariski open subset $Y\sp{\circ} \subset Y$
such that $f$ is locally trivial in the $\CCC^\infty$-category over $Y\sp{\circ}$.
We put $X\sp{\circ}:=f\sp{-1} (Y\sp{\circ})$,
and denote by $\shortmap{f\sp{\circ}}{X\sp{\circ}}{Y\sp{\circ}}$ the restriction of $f$ to $X\sp{\circ}$.
We choose a base point $b\in Y\sp{\circ}$, put $F_b:=f\sp{-1} (b)$,
and choose a base point $\tilde{b}\in F_b$.
We investigate the kernel of
the homomorphism
$$
\map{\iota_*}{\pi_1 (F_b, \tilde{b})}{\pi_1 (X, \tilde{b})}
$$
induced by the inclusion $\iota: F_b\hookrightarrow X$.
The classical Zariski-van Kampen theorem,
which started from~\cite{vanKampen},
describes $\operatorname{\rm Ker}\nolimits (\iota_*)$ in terms of the monodromy action of $\pi_1 (Y\sp{\circ}, b)$ on
$\pi_1 (F_b, \tilde{b})$
\emph{under the assumption that a cross-section of $f$ passing through $\tilde{b}$ exists}.
(See~\cite{MR0366922} for an account of the proof.)
The cross-section plays a double role;
one is to define the monodromy action of $\pi_1 (Y\sp{\circ}, b)$ on
$\pi_1 (F_b, \tilde{b})$,
and the other is to prevent $\pi_2 (Y)$
from contributing to $\operatorname{\rm Ker}\nolimits (\iota_*)$.
However, the cross-section rarely exists in applications.
If we do not have any cross-section,
then the monodromy of $\pi_1(Y\sp{\circ}, b)$ on $\pi_1 (F_b)$ is not well-defined,
and moreover $\pi_2 (Y)$ may contribute to $\operatorname{\rm Ker}\nolimits (\iota_*)$.
(See Example~\ref{example:L}.)
\par
In this paper,
we give a generalization of Zariski-van Kampen theorem~(Theorem~\ref{thm:ZvK}),
which describes $\operatorname{\rm Ker}\nolimits (\iota_*)$
under weaker conditions on the existence of the cross-section.
Informally,
our theorem
states that,
if there exists a cross-section over a subspace of $Y$ whose $\pi_2$ surjects onto $\pi_2(Y)$,
then,
under additional assumptions on the singular fibers of $f$,
$\operatorname{\rm Ker}\nolimits (\iota_*)$ is generated by the monodromy relations arising from the \emph{lifted monodromy},
which is defined as follows.
\par
Since $\shortmap{f\sp{\circ}}{X\sp{\circ}}{Y\sp{\circ}}$ is locally trivial,
the groups $\pi_1 (f\sp{-1} (f(x)), x)$ form a locally constant system on $X\sp{\circ}$
when $x$ moves on $X\sp{\circ}$,
and hence
$\pi_1 (X\sp{\circ}, \tilde{b})$ acts on $\pi_1 (F_b, \tilde{b})$ from the right
in a natural way.
We denote this action by
\begin{equation}\label{eq:mu}
\map{\mu}{\pi_1 (X\sp{\circ}, \tilde{b})}{\operatorname{\rm Aut}\nolimits(\pi_1 (F_b, \tilde{b}))},
\end{equation}
and call $\mu$ the \emph{lifted monodromy}.
\par
\medskip
Combining our main result with Nori's lemma~\cite{MR732347} (see Proposition~\ref{prop:nori}),
we obtain the following:
\begin{corollary}\label{cor:RRReq}
Suppose that the following three conditions are satisfied:
\begin{itemize}
\item[\cond{C1}]
the locus $\operatorname{\rm Sing}\nolimits (f)$ of critical points of $f$ is of codimension $\ge 2$
in $X$,
\item[\cond{C2}]
there exists a Zariski closed subset $\Xi_0$ of $Y$ with codimension $\ge 2$
such that
$F_y:=f\sp{-1} (y)$ is non-empty and irreducible
for any $y\in Y\setminus \Xi_0$, and
\item[\rlap{\cond{Z}}\phantom{\cond{C2}}]
there exist a subspace $Z\subset Y$ containing $b$
and a continuous cross-section $s_Z: Z\to f\sp{-1} (Z)$
of $f$ over $Z$ satisfying
$s_Z(Z)\cap \operatorname{\rm Sing}\nolimits (f)=\emptyset$ and $s_Z(b)=\tilde{b}$
such that
the inclusion $Z\hookrightarrow Y$ induces
a surjection $\pi_2 (Z, b)\mathbin{\to \hskip -7pt \to} \pi_2(Y, b)$.
\end{itemize}
Let $i_{X*}: \pi_1(X\sp{\circ}, \tilde{b})\to \pi_1 (X, \tilde{b})$ be the homomorphism
induced by the inclusion $i_{X}: X\sp{\circ}\hookrightarrow X$.
Then $\operatorname{\rm Ker}\nolimits (\iota_*)$ is equal to
\begin{equation}\label{eq:RRR}
\mathord{\mathcal R}:=\gen{\;\shortset{g\sp{-1} g^{\mu(\gamma)}}{g\in \pi_1(F_b, \tilde{b}),\; \gamma\in \operatorname{\rm Ker}\nolimits (i_{X*})}\;},
\end{equation}
and we have the exact sequence
$$
1\;\maprightsp{}\;
\pi_1(F_b, \tilde{b})/\hskip -2.2pt/\hskip 1pt \operatorname{\rm Ker}\nolimits (i_{X*})\;\maprightsp{\iota_*}\;
\pi_1(X, \tilde{b})\;\maprightsp{f_*}\;
\pi_1(Y, b)\;\maprightsp{}\;
1.
$$
\end{corollary}
\medskip
\begin{remark}
The condition~\cond{Z} is trivially satisfied
if $\pi_2(Y)=0$;
for example,
when $Y$ is an affine space $\mathord{\mathbb A}^N$,
an abelian variety, or a Riemann surface of genus $>0$.
\end{remark}
In our previous papers~\cite{MR1341806},
\cite{MR1988200}
and~\cite{MR1952329},
we have given three different proofs of
a special case of Theorem~\ref{thm:ZvK},
where $Y$ is an affine space $\mathord{\mathbb A}^N$.
Even this special case has yielded many applications
(\cite{MR1282219,
MR1354002,
MR1421396,
MR1428061,
MR1474860,
MR2011641,
MR1952330}).
%
Thus we can expect more applications of the generalized Zariski-van Kampen theorem of this paper.
\par
\medskip
As an easy application,
we obtain the following:
\begin{corollary}\label{cor:proj}
Let $\shortmap{f}{X}{Y}$ be a morphism
from a smooth variety $X$ to a smooth variety $Y$.
Suppose that $\pi_2(Y)=0$,
that $f$ is projective
with the general fiber $F_b$ being connected,
and that $\operatorname{\rm Sing}\nolimits(f)$ is of codimension $\ge 3$ in $X$.
Let $\shortmapinj{\iota}{F_b}{X}$ be the inclusion.
Then the sequence
$$
1\;\maprightsp{}\;\pi_1 (F_b)\;\maprightsp{\iota_*} \; \pi_1 (X) \;\maprightsp{f_*} \; \pi_1 (Y) \;\maprightsp{}\; 1
$$
is exact.
\end{corollary}
As the next application, we investigate
the fundamental group of the complement
of the \emph{Grassmannian dual variety},
and prove a hyperplane section theorem of
Zariski-Lefschetz-van Kampen type.
\par
A Zariski closed subset of a projective space $\P^N$ is said to be \emph{non-degenerate}
if it is not contained in any hyperplane of $\P^N$.
We denote by $\mathord{\rm G}^c(\P^N)$
the Grassmannian variety of $(N-c)$-dimensional linear subspaces
of $\P^N$.
For a point $t\in (\P^N)\sp{\vee}=\mathord{\rm G}^1(\P^N)$ of the dual projective space,
let $H_t\subset \P^N$ denote the corresponding hyperplane.
\par
Let $W$ be a closed subscheme of $\P^N$
such that every irreducible component is of dimension $n$.
For $c\le n$,
the \emph{Grassmannian dual variety of $W$ in $\mathord{\rm G}^c(\P^N)$}
is defined to be the locus of $L\in \mathord{\rm G}^c(\P^N)$
such that
the scheme-theoretic intersection
of $W$ and the linear subspace $L\subset\P^N $
\emph{fails} to be smooth of dimension $n-c$.
For a non-negative integer $k$,
we denote by $\mathord{U}_k(W,\P^N)$ the complement of the Grassmannian dual variety of $W$ in $\mathord{\rm G}^{n-k}(\P^N)$;
that is,
$\mathord{U}_k(W,\P^N)\subset \mathord{\rm G}^{n-k}(\P^N)$ is the Zariski open subset
of all $L\in \mathord{\rm G}^{n-k}(\P^N)$ that intersect $W$ along a smooth scheme of dimension $k$.
\par
Let $X\subset \P^N$ be a smooth non-degenerate projective variety of dimension $n\ge 2$.
The fundamental group $\pi_1 ((\P^N)\sp{\vee}\setminus X\sp{\vee})=\pi_1 (U_{n-1}(X,\P^N))$
of the complement of the dual variety
has been studied in several papers~(for example,~\cite{MR1682991, MR644816}).
However, there seem to be few studies on its generalization to Grassmannian varieties.
We will investigate the fundamental groups
$\pi_1 (U_k(X,\P^N))$ for $k=0, \dots, n-2$.
%
\par
%
We choose a \emph{general} line $\Lambda $ in $(\P^N)\sp{\vee}$,
and consider the corresponding pencil $\{H_{t}\}_{t\in \Lambda}$ of hyperplanes.
Let $A:=\bigcap H_t\cong \P^{N-2}$ denote
the axis of the pencil.
We put
$$
Y_t:=X\cap H_t \quad\rmand\quad Z_{\Lambda}:=X\cap A.
$$
Let $k$ be an integer such that $0\le k\le n-2$.
Regarding $\mathord{\rm G}^{c-1} (H_t)$ as a closed subvariety of $\mathord{\rm G}^c(\P^N)$,
and $\mathord{\rm G}^{c-2} (A)$ as a closed subvariety of $\mathord{\rm G}^{c-1}(H_t)$,
where $c:=n-k$, we have canonical inclusions
$$
\mathord{U}_k (Z_\Lambda, A)\;\;\hookrightarrow\;\; \mathord{U}_k(Y_t,H_t)\;\;\hookrightarrow\;\;
\mathord{U}_k(X,\P^N).
$$
Since $k\le n-2$, the space $\mathord{U}_k (Z_\Lambda, A)$ is non-empty.
(When $k=n-2$, the space $\mathord{U}_{n-2} (Z_\Lambda, A)$ is equal to
the one-point set $\mathord{\rm G}^0(A)=\{A\}$.)
We choose a base point
$$
L_o\;\;\in\;\; \mathord{U}_k (Z_\Lambda, A),
$$
which serves also as a base point of $\mathord{U}_k(X,\P^N)$ and of $\mathord{U}_k(Y_t,H_t)$
by the natural inclusions above.
Consider the space
$$
\mathord{\mathcal U}_k (X,\P^N,\Lambda):=\set{(L, t)\in \mathord{U}_k(X,\P^N)\times \Lambda}{L\subset H_t}
$$
with the projection
$$
\map{f_{\Lambda}}{\mathord{\mathcal U}_k (X,\P^N,\Lambda)}{\Lambda}.
$$
The fiber of $f_{\Lambda}$ over $t\in \Lambda$ is canonically identified with
$\mathord{U}_k(Y_t, H_t)$, and
the point $L_o$ furnishes us with a holomorphic section
$$
\map{s_o}{\Lambda}{\mathord{\mathcal U}_k (X,\P^N,\Lambda)}
$$
of $f_{\Lambda}$.
There exists a proper Zariski closed subset $\Sigma_\Lambda$ of $\Lambda$
such that $f_{\Lambda}$ is locally trivial over $\Lambda\setminus \Sigma_{\Lambda}$
in the $\CCC^\infty$-category.
We choose a base point $0\in \Lambda\setminus \Sigma_\Lambda$.
By the section $s_o$,
the fundamental group $\pi_1 (\Lambda\setminus \Sigma_{\Lambda}, 0)$
acts on $\pi_1 (U_k(Y_0, H_0), L_o)$
via the classical (not lifted) monodromy.
\par
\medskip
Using the fact that $\Lambda\hookrightarrow (\P^N)\sp{\vee}$ induces an isomorphism
$\pi_2(\Lambda)\cong \pi_2((\P^N)\sp{\vee})$,
we derive from Theorem~\ref{thm:ZvK} the following:
\begin{theorem}\label{thm:ULZvK}
\setcounter{rmkakkocounter}{1}
Consider the homomorphism
$$
\map{\iota_*}{\pi_1 (\mathord{U}_k (Y_0, H_0), L_o)}{\pi_1 (\mathord{U}_k (X, \P^N), L_o)}
$$
induced by the inclusion $\iota: \mathord{U}_k (Y_0, H_0)\hookrightarrow \mathord{U}_k (X, \P^N)$.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
If $k\le n-2$, then $\iota_*$ is surjective and induces an isomorphism
$$
\pi_1 (\mathord{U}_k (Y_0, H_0), L_o)/\hskip -2.2pt/\hskip 1pt \pi_1 (\Lambda\setminus \Sigma_{\Lambda}, 0)
\;\;\isom\;\; \pi_1 (\mathord{U}_k (X, \P^N), L_o).
$$
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
If $k< n-2$, then the monodromy action of $\pi_1 (\Lambda\setminus \Sigma_{\Lambda}, 0)$
on $\pi_1 (\mathord{U}_k (Y_0, H_0), L_o)$ is trivial.
In particular,
the homomorphism $\iota_*$ is an isomorphism
for $k< n-2$.
\end{theorem}
Note that this theorem resembles the classical Lefschetz hyperplane section theorem on the homotopy groups
of smooth projective varieties:
namely, the inclusion $Y_0\hookrightarrow X$
induces surjective homomorphisms
$\pi_k(Y_0)\mathbin{\to \hskip -7pt \to} \pi_k(X)$
for $k\le n-1$,
and isomorphisms
$\pi_k(Y_0)\isom \pi_k(X)$
for $k< n-1$.
\par
\medskip
The isomorphism in the assertion (2) of Theorem~\ref{thm:ULZvK}
seems to fail to hold for $k=n-2$, as can be seen from the argument in \S\ref{sec:ADKY} of this paper.
\par
\medskip
As the third application, we study
$\pi_1 (\mathord{U}_k (X, \P^N), L_o)$
for $k=0$.
By Theorem~\ref{thm:ULZvK},
it is enough to investigate the case where $\dim X=2$, and
to study the monodromy action of $\pi_1 (\Lambda\setminus \Sigma_{\Lambda}, 0)$
on $\pi_1 (\mathord{U}_0 (Y_0, H_0), L_o)$,
where $Y_0=X\cap H_0$ is a smooth compact Riemann surface.
\par
\medskip
First we define the simple braid group $\mathord{ SB}^d_g $
of $d$ strings on a compact Riemann surface $C$ of genus $g>0$.
We denote by $\mathord{\rm Div}^d (C)$
the variety of effective divisors of degree $d$ on $C$,
and by $\mathord{\rm rDiv}^d(C)\subset \mathord{\rm Div}^d(C)$
the Zariski open subset consisting of \emph{reduced} divisors.
We fix a base point
$$
D_0=p_1+\dots+p_d
$$
of $\mathord{\rm rDiv}^d (C)$.
The braid group
$\mathord{ B}^d_g =\mathord{ B}(C, D_0)$ is defined to be the fundamental group
$\pi_1 (\mathord{\rm rDiv}^d(C), D_0)$.
(See~\cite{MR0375281}.)
\begin{definition}\label{def:SB}
The \emph{simple braid group}
$\mathord{ SB}^d_g =\mathord{ SB}(C, D_0)$ is defined to be the kernel
of the homomorphism
$\mathord{ B}(C, D_0)\to \pi_1 (\mathord{\rm Div}^d(C),D_0)$
induced by the inclusion $\mathord{\rm rDiv}^d(C)\hookrightarrow \mathord{\rm Div}^d(C)$.
\end{definition}
Let $\mathord{\mathcal M}^d_g=\mathord{\mathcal M}(C, D_0)$
be the topological group
of orientation-preserving diffeomorphisms
$\gamma$ of $C$ acting from the right that satisfy ${p_i}^\gamma =p_i$
for each point $p_i$ of $D_0$.
We denote by
$$
\varGamma^d_g=\varGamma(C, D_0):=\pi_0(\mathord{\mathcal M}(C, D_0))
$$
the group of isotopy classes of diffeomorphisms in $\mathord{\mathcal M}^d_g=\mathord{\mathcal M}(C, D_0)$,
which acts on
$\mathord{ SB}^d_g =\mathord{ SB}(C, D_0)$ from the right in a natural way.
\par
\medskip
Let $C\subset \P^M$ be a smooth non-degenerate projective curve of degree $d$ and genus $g>0$,
and let $D_0\in \mathord{\rm rDiv}^d(C)$ be a general hyperplane section.
We will investigate $\pi_1 (\mathord{U}_0 (C, \P^M), D_0)$;
that is, the fundamental group of the complement of the \emph{dual hypersurface} of $C$.
\par
\medskip
In~\cite{KulikovMPIpreprint} and~\cite{MR1988200},
we studied
this group
under the conditions that
$d\ge 2g+2$ and that the invertible sheaf $\mathord{\mathcal O}_C(D_0)$
corresponds to a \emph{general} point of the Picard variety $\mathord{\rm Pic}^d(C)$
of isomorphism classes of line bundles of degree $d$.
\par
\medskip
Using the fact that $\pi_2(\mathord{\rm Pic}^d(C))=0$,
we derive from our main theorem (Theorem~\ref{thm:ZvK})
the following result, which establishes the same conclusion as in~\cite{KulikovMPIpreprint} and~\cite{MR1988200}
under weaker conditions.
\begin{definition}
We say that $C\subset \P^M$ is \emph{Pl\"ucker general}
if the dual curve $\rho(C)\sp{\vee}\subset (\P^2)\sp{\vee}$ of the image
$\rho(C)\subset \P^2$ of the general projection
$\rho: C\to \P^2$ has only ordinary nodes and ordinary cusps
as its singularities.
\end{definition}
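The Pl\"ucker generality of $C$ can be probed with the classical Pl\"ucker formulas. The following Python sketch is purely illustrative (our addition; the formulas are classical and are not stated in the text): it computes the numerical characters of the dual curve from the degree $d$, the number $\delta$ of ordinary nodes and the number $\kappa$ of ordinary cusps of $\rho(C)$, and checks biduality on the nodal cubic.

```python
def pluecker(d, delta, kappa):
    """Numerical characters of the dual of an irreducible plane curve of
    degree d with only delta ordinary nodes and kappa ordinary cusps."""
    d_dual = d * (d - 1) - 2 * delta - 3 * kappa          # degree of the dual curve
    kappa_dual = 3 * d * (d - 2) - 6 * delta - 8 * kappa  # cusps of the dual curve
    g = (d - 1) * (d - 2) // 2 - delta - kappa            # geometric genus
    # the dual curve has the same geometric genus; solve for its nodes
    delta_dual = (d_dual - 1) * (d_dual - 2) // 2 - kappa_dual - g
    return d_dual, delta_dual, kappa_dual

# nodal cubic: the dual is a quartic with three cusps and no nodes
assert pluecker(3, 1, 0) == (4, 0, 3)
# biduality: applying the formulas to the dual recovers the cubic
assert pluecker(*pluecker(3, 1, 0)) == (3, 1, 0)
print("Pluecker formulas are consistent on the nodal cubic")
```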
\begin{theorem}\label{thm:SB}
Suppose that $d\ge g+4$ and that $C$ is Pl\"ucker general in $\P^M$.
Then $\pi_1 (\mathord{U}_0 (C, \P^M), D_0)$
is isomorphic to $\mathord{ SB}(C, D_0)$.
\end{theorem}
Let $X\subset \P^N$ be a smooth non-degenerate projective surface of degree $d$,
and let $\{Y_t\}_{t\in \Lambda}$ be a pencil of hyperplane sections of $X$
parameterized by a general line $\Lambda\subset(\P^N)\sp{\vee}$
with the base locus $Z_{\Lambda}:=X\cap A$,
where $A=\bigcap H_t$ is the axis of the pencil $\{H_t\}_{t\in \Lambda}$ of hyperplanes.
Let
$$
\map{\varphi}{\mathord{\mathcal Y}:=\set{(x, t)\in X\times \Lambda}{x\in H_t}}{\Lambda}
$$
be the fibration of the pencil.
Then $\varphi$ is locally trivial over $\Lambda\setminus \Sigma\sp\prime_{\Lambda}$
in the $\CCC^\infty$-category,
where $\Sigma\sp\prime_{\Lambda}$ is the set of critical values of $\varphi$.
Let $0$ be a general point of $\Lambda$.
The corresponding member $Y_0$ is a compact Riemann surface of genus
$$
g:=(d+H_0\cdot K_X)/2 +1.
$$
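As a sanity check of this genus formula (our addition, in the special case of a smooth surface $X$ of degree $d$ in $\P^3$, where adjunction gives $K_X=(d-4)H_0$ and hence $H_0\cdot K_X=d(d-4)$), it reproduces the classical genus $(d-1)(d-2)/2$ of a smooth plane curve of degree $d$:

```python
# For a smooth surface X of degree d in P^3, a hyperplane section Y_0 is a
# smooth plane curve of degree d; by adjunction K_X = (d-4)H_0, so
# H_0 . K_X = d(d-4).  The formula g = (d + H_0 . K_X)/2 + 1 should
# therefore agree with the plane-curve genus (d-1)(d-2)/2.
for d in range(1, 30):
    g_formula = (d + d * (d - 4)) // 2 + 1
    g_plane = (d - 1) * (d - 2) // 2
    assert g_formula == g_plane
print("formula agrees for d = 1, ..., 29")
```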
Note that $\mathord{U}_0 (Z_{\Lambda}, A)=\{A\}$, and that
each point of $Z_{\Lambda}$ yields a holomorphic section of $\varphi:\mathord{\mathcal Y}\to\Lambda$.
By the classical monodromy,
we obtain a homomorphism
\begin{equation}\label{eq:monhom}
\pi_1 (\Lambda\setminus \Sigma\sp\prime_{\Lambda}, 0)\;\to\; \varGamma^d_g=\varGamma(Y_0, Z_{\Lambda}),
\end{equation}
and hence $\pi_1 (\Lambda\setminus \Sigma\sp\prime_{\Lambda}, 0)$ acts on the simple braid group
$\mathord{ SB}^d_g =\mathord{ SB}(Y_0, Z_{\Lambda})$ from the right.
We denote by
$$
\Gamma_{\Lambda}\;\subset\; \varGamma^d_g=\varGamma(Y_0, Z_{\Lambda})
$$
the image of the monodromy homomorphism~\eqref{eq:monhom}.
Combining Theorems~\ref{thm:ULZvK}~and~\ref{thm:SB},
we obtain the following:
\begin{corollary}\label{cor:SB}
Let $X$, $\{Y_t\}_{t\in \Lambda}$, $Z_{\Lambda}=X\cap A$ and $\Gamma_{\Lambda}$ be as above.
Suppose that $g>0$, $d\ge g+4$,
and that a general hyperplane section of $X$ is Pl\"ucker general.
Then $\pi_1 (\mathord{U}_0 (X, \P^N), A)$ is isomorphic to
the Zariski-van Kampen quotient $\mathord{ SB}(Y_0, Z_\Lambda)/\hskip -2.2pt/\hskip 1pt \Gamma_{\Lambda}$.
\end{corollary}
A motivation for the study of the fundamental group $\pi_1 (U_0(X, \P^N))$
for a surface $X\subset \P^N$ is the conjecture
of Auroux, Donaldson, Katzarkov and Yotov~\cite{MR2081427}
about
the fundamental group $\pi_1 (\P^2\setminus B)$ of the complement of the branch curve
$B\subset \P^2$ of the general projection $X\to \P^2$,
which had been intensively studied by Moishezon, Teicher, and Robb.
The weakening of the conditions of our previous works~(\cite{KulikovMPIpreprint},~\cite{MR1988200})
in the present result (Theorem~\ref{thm:SB}) is important
for this application.
See Remark~\ref{rem:LSXm}.
\par
\bigskip
The plan of this paper is as follows.
In~\S\ref{sec:ZvKQ},
we state some elementary facts about Zariski-van Kampen quotients.
In~\S\ref{sec:pionefib},
we prove the generalized Zariski-van Kampen theorem~(Theorem~\ref{thm:ZvK}).
We then prove its variant~(Theorem~\ref{thm:C}),
and deduce Corollaries~\ref{cor:RRReq}~and~\ref{cor:proj}.
The main ingredient of the proof is the notion of
\emph{free loop pairs of monodromy relation type}
(Definitions~\ref{def:frp} and~\ref{def:frpmrt}),
and Proposition~\ref{prop:TD}.
Using these results,
we prove Theorem~\ref{thm:ULZvK} in~\S\ref{sec:proof1},
and Theorem~\ref{thm:SB} in~\S\ref{sec:SB}.
In the last section, we explain the relation between
$\pi_1 (U_0(X, \P^N))$
and the conjecture of Auroux, Donaldson, Katzarkov, and Yotov.%
\par
\bigskip
This paper is dedicated to the memory of Professor Nguyen Huu Duc.
\par
\bigskip
{\bf Conventions and Notation}
\begin{itemize}
\setcounter{rmkakkocounter}{1}
\item[{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}] The constant map to a point $P$ is denoted by $1_P$.
\item[{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}]
We denote by $I\subset \mathord{\mathbb R}$ the interval $[0,1]$,
by $\Delta\subset\mathord{\mathbb C}$ the open unit disc,
and by $\bar\Delta\subset\mathord{\mathbb C}$ the closed unit disc.
\item[{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}]
For a continuous map
$\shortmap{\delta}{\bar\Delta}{T}$
to a topological space $T$,
we denote by
$$
\map{\bdr_{\vexp}\delta}{I}{T}
$$
the loop given by $t\mapsto \delta(\exp(2\pi\sqrt{-1}t))$.
\end{itemize}
\section{Zariski-van Kampen quotient}\label{sec:ZvKQ}
\begin{definition}
Let $G$ be a group,
and let $S$ be a subset of $G$.
We denote by $\gen{S}_G$ or simply by $\gen{S}$
the smallest subgroup of $G$ containing $S$,
and by $\ngen{S}_G$ or simply by $\ngen{S}$
the smallest \emph{normal} subgroup of $G$ containing $S$.
\end{definition}
We let a group $\Gamma$ act on a group $G$ from the right.
The following facts are elementary:
\begin{lemma}\label{lem:normal}
For any $\gamma\in \Gamma$, the subgroup
$\gen{\shortset{g\sp{-1} g\sp{\gamma}}{g\in G}}_G$ of $G$
is normal.
Hence,
for any subset $\Sigma\subset \Gamma$,
the subgroup
$\gen{\shortset{g\sp{-1} g\sp\sigma}{g\in G, \sigma\in \Sigma}}_G$ is normal.
\end{lemma}
\begin{lemma}\label{lem:S}
Let $S$ be a subset of $G$, and let $\Sigma$ be a subset of $\Gamma$.
If $G=\gen{S}_G$ and $\Gamma=\gen{\Sigma}_\Gamma$,
then we have
$$
\ngen{\shortset{s\sp{-1} s\sp\sigma}{s\in S, \sigma\in \Sigma}}_G=
\gen{\shortset{g\sp{-1} g\sp\sigma}{g\in G, \sigma\in \Sigma}}_G=
\gen{\shortset{g\sp{-1} g\sp\gamma}{g\in G, \gamma\in \Gamma}}_G.
$$
\end{lemma}
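For readers who wish to experiment, the following Python sketch (illustrative only, and not part of the mathematical argument) verifies the normality claim of Lemma~\ref{lem:normal} by brute force for $G=S_3$ with $\Gamma=\mathord{\mathbb Z}/2\mathord{\mathbb Z}$ acting from the right by conjugation with a fixed transposition:

```python
from itertools import permutations

def comp(p, q):
    # composition of permutations: (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

G = list(permutations(range(3)))          # G = S_3
t = (1, 0, 2)                             # a transposition
act = lambda g: comp(comp(inv(t), g), t)  # nontrivial gamma acts by g -> t^-1 g t

# subgroup generated by { g^-1 g^gamma : g in G }
H = {comp(inv(g), act(g)) for g in G} | {(0, 1, 2)}
while True:
    new = {comp(a, b) for a in H for b in H} - H
    if not new:
        break
    H |= new

# Lemma lem:normal predicts that H is a normal subgroup of G
assert all(comp(comp(inv(g), h), g) in H for g in G for h in H)
print(sorted(H))  # the three even permutations, i.e. H = A_3
```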
\begin{definition}\label{def:sdp}
We define $G\rtimes \Gamma$ to be the group
with the underlying set $G\times \Gamma$ and with the product
defined by
$$
(g, \gamma)(h, \delta):=(g \cdot \left(h^{(\gamma\sp{-1})}\right), \gamma\delta).
$$
We then define homomorphisms
$i : G\to G\rtimes \Gamma$, $p: G\rtimes \Gamma\to\Gamma$
and $s:\Gamma\to G\rtimes \Gamma$ by
$i(g):=(g, 1)$, $p(g, \gamma):=\gamma$ and $s (\gamma):=(1, \gamma)$.
Then we obtain an exact sequence
\begin{equation}\label{eq:sdp}
1 \;\;\maprightsp{}\;\;
G \;\;\maprightsp{i}\;\;
G\rtimes \Gamma \;\;\maprightsp{p}\;\;
\Gamma\;\;\maprightsp{} \;\;1
\end{equation}
with the cross-section $s$ of $p$,
and the action $g\mapsto g^\gamma$ of $\gamma\in \Gamma$ on $G$
coincides with the inner-automorphism $g\mapsto s(\gamma)\sp{-1} g s (\gamma)$
by $s(\gamma)\in G\rtimes \Gamma$
on the normal subgroup $G=i(G)$ of $ G\rtimes \Gamma$.
\end{definition}
The following two lemmas are elementary:
\begin{lemma}\label{lem:GGG}
Let $\mathord{\mathcal G}$ be a group.
Suppose that we are given an exact sequence
\begin{equation}\label{eq:GGG}
1\;\;\maprightsp{}\;\;
G \;\;\maprightsp{i\sp\prime} \;\;
\mathord{\mathcal G} \;\;\maprightsp{p\sp\prime}\;\;
\Gamma\;\;\maprightsp{}\;\; 1
\end{equation}
with a cross-section $s\sp\prime: \Gamma\to \mathord{\mathcal G}$ of $p\sp\prime$
that is a homomorphism of groups.
Suppose also that the action of $\gamma\in \Gamma$ on $g\in G$
is equal to
the inner-automorphism by $s\sp\prime(\gamma)$;
that is, we have
$i\sp\prime(g\sp\gamma)=s\sp\prime(\gamma)\sp{-1} i\sp\prime (g) s\sp\prime(\gamma)$
for any $g\in G$ and $\gamma\in \Gamma$.
Then there exists an isomorphism $\mathord{\mathcal G}\cong G\rtimes \Gamma$
such that
the exact sequences~\eqref{eq:sdp} and~\eqref{eq:GGG} coincide
and the cross-section $s$ corresponds to $s\sp\prime$
by this isomorphism.
\end{lemma}
\begin{lemma}\label{lem:sGamma}
The composite homomorphism
$$
G \;\maprightsp{i}\;
G\rtimes \Gamma\;\maprightsp{}\; (G\rtimes \Gamma)/ \ngen{s(\Gamma)}_{G\rtimes \Gamma}
$$
is surjective, and its kernel is equal to
$\gen{\shortset{g\sp{-1} g^\gamma }{g\in G, \gamma\in \Gamma}}$;
that is,
the Zariski-van Kampen quotient $G/\hskip -2.2pt/\hskip 1pt\Gamma$ is
isomorphic to $(G\rtimes \Gamma) / \ngen{s(\Gamma)}$.
\end{lemma}
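As an illustration of Lemma~\ref{lem:sGamma} (a brute-force sketch that is our addition, not part of the text; we again take $G=S_3$ and $\Gamma=\mathord{\mathbb Z}/2\mathord{\mathbb Z}$ acting by conjugation with a fixed transposition), one can check numerically that $(G\rtimes \Gamma) / \ngen{s(\Gamma)}$ and the Zariski-van Kampen quotient $G/\hskip -2.2pt/\hskip 1pt\Gamma$ have the same order:

```python
from itertools import permutations

def comp(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0] * 3
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

e, t = (0, 1, 2), (1, 0, 2)
G = list(permutations(range(3)))          # G = S_3
act = lambda g: comp(comp(inv(t), g), t)  # action of the nontrivial gamma (an involution)

# semidirect product: (g, c)(h, d) = (g * h^(gamma^-c), c + d)
def mul(a, b):
    (g, c), (h, d) = a, b
    return (comp(g, act(h) if c else h), (c + d) % 2)

def sd_inv(a):
    g, c = a
    return ((act(inv(g)) if c else inv(g)), c)

SD = [(g, c) for g in G for c in (0, 1)]

def closure(S, product):
    S = set(S)
    while True:
        new = {product(a, b) for a in S for b in S} - S
        if not new:
            return S
        S |= new

# normal closure N of the cross-section s(Gamma) = {(e,0), (e,1)}
N = {(e, 0), (e, 1)}
while True:
    conj = {mul(mul(sd_inv(x), n), x) for x in SD for n in N}
    new = closure(N | conj, mul) - N
    if not new:
        break
    N |= new

# Zariski-van Kampen quotient directly: G modulo <{ g^-1 g^gamma }>
K = closure({comp(inv(g), act(g)) for g in G} | {e}, comp)

# Lemma lem:sGamma: (G x| Gamma)/N is isomorphic to G // Gamma = G/K
assert len(SD) // len(N) == len(G) // len(K) == 2
print(len(N), len(K))  # 6 3
```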
\section{Fundamental groups of algebraic fiber spaces}\label{sec:pionefib}
Let $X$ and $Y$ be smooth varieties,
and let $f: X\to Y$ be a dominant morphism.
We denote by $\operatorname{\rm Sing}\nolimits (f)\subset X$
the Zariski closed subset of
the critical points of $f$.
For a point $y\in Y$, we put
$$
F_y:=f\sp{-1} (y).
$$
Let $\alpha : T\to Y$ be a continuous map
from a topological space $T$.
Then a continuous map
$\lift{\alpha}: T\to X$
is said to be a \emph{lift of $\alpha$} if $f\circ \lift{\alpha}=\alpha$.
\par
\medskip
We fix, once and for all,
a proper Zariski closed subset
$$
\Sigma\subset Y
$$
such that $f\sp{\circ}:X\sp{\circ}\to Y\sp{\circ}$ is locally trivial in the $\CCC^\infty$-category,
where
$$
Y\sp{\circ} :=Y\setminus \Sigma,
\quad
X\sp{\circ} :=f\sp{-1} (Y\sp{\circ})
\quad\rmand\quad
f\sp{\circ}:=f|_{X\sp{\circ}} : X\sp{\circ} \to Y\sp{\circ}.
$$
(In particular, $\operatorname{\rm Sing}\nolimits (f)$ is contained in $f\sp{-1} (\Sigma)$.)
It follows from Hironaka's resolution of singularities
that such a proper Zariski closed subset $\Sigma\subset Y$ exists.
We then fix base points
$$
b\in Y\sp{\circ}\quad\rmand\quad
\tilde{b} \in F_b\subset X\sp{\circ},
$$
and consider the homomorphisms
$$
\map{\iota_*}{\pi_1 (F_b, \tilde{b})}{\pi_1(X, \tilde{b})}
\quad\rmand\quad
\map{f_*}{\pi_1 (X, \tilde{b})}{\pi_1 (Y, b)}
$$
induced by the inclusion $\iota :F_b\hookrightarrow X$ and the morphism $f:X\to Y$, respectively.
The aim of the Zariski-van Kampen theorem is to describe $\operatorname{\rm Ker}\nolimits (\iota_*)$.
\par
\medskip
The following result of Nori~\cite{MR732347} will be used throughout this paper:
\begin{proposition}\label{prop:nori}
Suppose that $F_b$ is connected,
and that there exists a Zariski closed subset $\Xi\sp\prime\subset Y$ of codimension $\ge 2$
such that $F_y\setminus(F_y\cap \operatorname{\rm Sing}\nolimits(f))\ne\emptyset $ for any $y\in Y\setminus \Xi\sp\prime$.
Then $f_*: \pi_1 (X, \tilde{b})\to\pi_1(Y, b)$ is surjective,
and its kernel
is equal to the image of
$\iota_*:\pi_1 (F_b, \tilde{b})\to \pi_1 (X, \tilde{b})$.
\end{proposition}
\begin{proof}
See Nori~\cite[Lemma 1.5]{MR732347} and~\cite[Proposition 3.1]{MR1988200}.
\end{proof}
Let $\lift{\alpha} : I\to X\sp{\circ}$ be a path,
and put $\alpha:=f\sp{\circ} \circ\lift{\alpha}$.
Then $\lift{\alpha}$ induces an isomorphism
$\pi_1 (F_{\alpha(0)}, \lift{\alpha}(0))\isom \pi_1 (F_{\alpha(1)}, \lift{\alpha}(1))$,
which depends only on the homotopy class (relative to $\partial I$)
of the path $\lift{\alpha}$.
Hence we can write this isomorphism
as
$$
\mapisom{[\lift{\alpha}]_*}{\pi_1 (F_{\alpha(0)}, \lift{\alpha}(0))}%
{\pi_1 (F_{\alpha(1)}, \lift{\alpha}(1))}.
$$
The \emph{lifted monodromy}
$$
\map{\mu}{\pi_1 (X\sp{\circ}, \tilde{b})}{\operatorname{\rm Aut}\nolimits(\pi_1 (F_b, \tilde{b}))}
$$
introduced in \S\ref{sec:Introduction} (see~\eqref{eq:mu})
is obtained by applying this construction to the loops in $X\sp{\circ}$ with the base point $\tilde{b}$.
By definition, we have the following:
\begin{proposition}\label{prop:lift}
For any $[\lift{\alpha}]\in \pi_1(X\sp{\circ}, \tilde{b})$ and $g\in \pi_1 (F_b, \tilde{b})$,
we have
$$
\iota\sp{\circ}_*(g^{\mu([\lift{\alpha}])})=[\lift{\alpha}]\sp{-1} \cdot \iota\sp{\circ}_*(g) \cdot [\lift{\alpha}]
$$
in $\pi_1(X\sp{\circ}, \tilde{b})$, where
$\iota\sp{\circ}_*:\pi_1 (F_b, \tilde{b})\to \pi_1 (X\sp{\circ}, \tilde{b})$
is the homomorphism induced by the inclusion $\iota\sp{\circ}: F_b\hookrightarrow X\sp{\circ}$.
\end{proposition}
First we prove the following:
\begin{proposition}\label{prop:relisinKer}
Suppose that a loop $\lift{\alpha}:(I, \partial I)\to (X\sp{\circ}, \tilde{b})$ is null-homotopic in $(X, \tilde{b})$.
Then $g\sp{-1} g^{\mu([\lift{\alpha}])}\in \operatorname{\rm Ker}\nolimits (\iota_*)$
for any $g\in \pi_1 (F_b, \tilde{b})$.
\end{proposition}
\begin{proof}
We put $\alpha:=f\sp{\circ}\circ \lift{\alpha}$, and
$\sqcup := (I\times \{0\})\cup (\partial I\times I)$.
Let $g\in \pi_1 (F_b, \tilde{b})$ be represented by a loop
$\gamma :(I, \partial I)\to (F_b, \tilde{b})$.
We define $\phi_{\sqcup} : \sqcup \to X\sp{\circ}$ by
$$
\phi_{\sqcup}(s, 0):=\gamma(s),
\quad
\phi_{\sqcup}(0, t):=\lift{\alpha}(t),
\quad\rmand\quad
\phi_{\sqcup}(1, t):=\lift{\alpha}(t).
$$
Then we have
$f\sp{\circ} \circ \phi_\sqcup=({\alpha}\circ \operatorname{\rm pr}\nolimits_2)|_{\sqcup}$,
where $\operatorname{\rm pr}\nolimits_2: I\times I \to I$ is the second projection.
Since $\sqcup$ is a strong deformation retract of $I\times I$ and $f\sp{\circ}$ is locally trivial,
the homotopy ${\alpha}\circ \operatorname{\rm pr}\nolimits_2: I\times I\to Y\sp{\circ}$, which extends
$f\sp{\circ}\circ\phi_{\sqcup}=({\alpha}\circ \operatorname{\rm pr}\nolimits_2)|_{\sqcup}$,
lifts to a continuous map
$\shortmap{\phi}{I\times I}{X\sp{\circ}}$
that satisfies $\phi|_\sqcup=\phi_\sqcup$ and $f\sp{\circ} \circ \phi={\alpha}\circ \operatorname{\rm pr}\nolimits_2$.
(See Figure~\ref{figphi}.)
Then the loop
$$
\map{\gamma\sp\prime:=\phi|_{I\times\{1\}}}{(I, \partial I)}{(F_b, \tilde{b})}
$$
represents $g^{\mu([\lift{\alpha}])}$.
Since $\phi|_{\{0\}\times I}=\lift{\alpha}$ and $\phi|_{\{1\}\times I}=\lift{\alpha}$,
we have
$$
[\gamma]\sp{-1} [\lift{\alpha}] [\gamma\sp\prime][\lift{\alpha}]\sp{-1} =1
$$
in $\pi_1(X\sp{\circ}, \tilde{b})$.
Since $[\lift{\alpha}] =1$ in $\pi_1(X, \tilde{b})$ by assumption,
we have $[\gamma]\sp{-1} [\gamma\sp\prime] =1 $ in $\pi_1(X, \tilde{b})$.
\end{proof}
\input figphi
By Proposition~\ref{prop:relisinKer},
the normal subgroup $\mathord{\mathcal R}$
defined by~\eqref{eq:RRR} is contained in $\operatorname{\rm Ker}\nolimits (\iota_*)$.
However, $\mathord{\mathcal R}$ is not equal to $\operatorname{\rm Ker}\nolimits (\iota_*)$ in general.
We give two examples.
\begin{example}\label{example:L}
Let $L\to \P^1$ be a line bundle of degree $d>0$,
and let $L\sp{\times}\subset L$ be the complement of the zero-section.
Since the projection $\shortmap{f}{X=L\sp{\times}}{Y=\P^1}$
is locally trivial,
we can put $\Sigma=\emptyset$,
and hence $\mathord{\mathcal R}=\{1\}$.
However, the kernel of
$$
\map{\iota_*}{\pi_1 (F_b)=\pi_1(\mathord{\mathbb C}\sp{\times})\cong\mathord{\mathbb Z}}{\pi_1(L\sp{\times})\cong \mathord{\mathbb Z}/d\mathord{\mathbb Z}}
$$
is non-trivial.
Indeed, $\operatorname{\rm Ker}\nolimits(\iota_*)$ is equal to the image of the boundary homomorphism
$\pi_2(\P^1)\to \pi_1(\mathord{\mathbb C}\sp{\times})$
in the homotopy exact sequence.
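More precisely (the identification of the boundary map with multiplication by the Euler number $d$ is classical, and is recalled here for the reader's convenience), the relevant part of the homotopy exact sequence of the $\mathord{\mathbb C}\sp{\times}$-bundle $L\sp{\times}\to\P^1$ reads
$$
\pi_2(\P^1)\cong\mathord{\mathbb Z}
\;\;\maprightsp{\partial}\;\;
\pi_1(\mathord{\mathbb C}\sp{\times})\cong\mathord{\mathbb Z}
\;\;\maprightsp{\iota_*}\;\;
\pi_1(L\sp{\times})
\;\;\maprightsp{}\;\;
\pi_1(\P^1)=\{1\},
$$
with $\partial$ equal to multiplication by $d$, so that
$\operatorname{\rm Ker}\nolimits(\iota_*)=\Im(\partial)=d\,\mathord{\mathbb Z}$
and $\pi_1(L\sp{\times})\cong \mathord{\mathbb Z}/d\mathord{\mathbb Z}$.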
\end{example}
\begin{example}\label{example:nonirred}
Consider the morphism
$$
\map{f}{X=\mathord{\mathbb C}^2}{Y=\mathord{\mathbb C}}
$$
given by $f(x, y):=xy$.
We can put
$\Sigma=\{0\}$,
and hence the fundamental group of
$X\sp{\circ}=\mathord{\mathbb C}^2\setminus\{xy=0\}$
is isomorphic to $\mathord{\mathbb Z}^2$.
The general fiber $F_b$ is isomorphic to $\P^1$ minus two points, and
the lifted monodromy action of $\pi_1( X\sp{\circ})$ on $\pi_1 (F_b)\cong \mathord{\mathbb Z}$ is
trivial.
Therefore we have $\mathord{\mathcal R}=\{1\}$,
while we have $\operatorname{\rm Ker}\nolimits (\iota_*)=\pi_1 (F_b)\cong\mathord{\mathbb Z}$.
\end{example}
Our ultimate goal is to show that the three conditions in Corollary~\ref{cor:RRReq}
are sufficient for $\mathord{\mathcal R}=\operatorname{\rm Ker}\nolimits(\iota_*)$ to hold.
\par
\medskip
From now on, we suppose that $f:X\to Y$ satisfies
the first two of the three conditions in Corollary~\ref{cor:RRReq}; namely, we assume the following:
\begin{itemize}
\item[\cond{C1}]
$\operatorname{\rm Sing}\nolimits (f)$ is of codimension $\ge 2$ in $X$, and
\item[\cond{C2}]
there exists a Zariski closed subset
$\Xi_0\subset Y$ of codimension $\ge 2$ such that
$F_y$ is non-empty and irreducible for any $y\in Y\setminus \Xi_0$.
\end{itemize}
\begin{remark}\label{rem:C0C3}
By the conditions~\cond{C1} and~\cond{C2}, the following hold:
\begin{itemize}
\item[\cond{C0}]
for $y\in Y\sp{\circ}$, the fiber $F_y$ is connected, and
\item[\cond{C3}]
there exists a Zariski closed subset $\Xi_1\subset Y$ of codimension $\ge 2$
such that
$F_y\setminus (F_y\cap \operatorname{\rm Sing}\nolimits(f))$ is non-empty and connected for every $y\in Y\setminus \Xi_1$.
\end{itemize}
In particular,
we see that $ f_*$ is surjective
and $\Im (\iota_*)=\operatorname{\rm Ker}\nolimits (f_*)$ holds by
Nori's lemma~(Proposition~\ref{prop:nori}).
\end{remark}
Let $\Sigma_1$, \dots, $\Sigma_N$ be
the irreducible components of $\Sigma$ with codimension $1$ in $Y$.
There exists a proper Zariski closed subset $\Xi\subset \Sigma$
with the following properties.
We put
$$
Y\sp{\sharp}:=Y\setminus \Xi,
\quad
\Sigma_i\sp{\sharp} :=\Sigma_i \setminus (\Sigma_i\cap \Xi)=\Sigma_i\cap Y\sp{\sharp},
\quad
\Sigma\sp{\sharp} :=\Sigma\setminus \Xi =\Sigma\cap Y\sp{\sharp}.
$$
\begin{itemize}
\item[($\Xi 0$)]
The codimension of $\Xi$ in $Y$ is $\ge 2$.
\item[($\Xi 1$)]
The Zariski closed subsets
$\Xi_0\subset Y$ in the condition~\cond{C2}
and
$\Xi_1\subset Y$ in the condition~\cond{C3}
are contained in $\Xi$.
\item[($\Xi 2$)]
Each $\Sigma_i\sp{\sharp}$ is a smooth hypersurface of $Y\sp{\sharp}$,
and $\Sigma\sp{\sharp}$ is a disjoint union of
$\Sigma_1\sp{\sharp}, \dots, \Sigma_N\sp{\sharp}$;
that is,
$\Xi$ contains all the irreducible components of $\Sigma$ with codimension $\ge 2$ in $Y$
and the singular locus of $\Sigma$.
\item[($\Xi 3$)]
For each $y\in \Sigma_i\sp{\sharp}$,
there exist an open neighborhood
$U\subset Y\sp{\sharp}$ of $y$ in $Y\sp{\sharp}$ and an analytic isomorphism
$$
\phi : (U, U\cap \Sigma) \maprightsp{\sim} \Delta^{m-1}\times (\Delta , 0),
\qquad \textrm{where $m=\dim Y$,}
$$
with the following properties.
Let $\psi: U\to\Delta^{m-1}$ be the composite of
$\phi : U\cong \Delta^{m-1}\times \Delta$
and the projection $\Delta^{m-1}\times \Delta\to \Delta^{m-1}$.
Then
$$
\map{\Psi := \psi\circ f}{ f\sp{-1} (U)}{\Delta^{m-1}}
$$
is smooth, and the commutative diagram
$$
\renewcommand{\arraystretch}{1.2}
\begin{array}{ccccc}
f\sp{-1} (U) &&\maprightsp{f} && U \\
\hskip 15pt {}_\Psi\hskip -15pt &\searrow && \swarrow &\hskip -7pt {}_\psi \hskip 7pt \\
&& \Delta^{m-1}&&
\end{array}
$$
is a trivial family of $\CCC^\infty$-maps over $\Delta^{m-1}$ in the $\CCC^\infty$-category.
\end{itemize}
Because of the choice of $\Xi$,
for \emph{any} point $y\in \Sigma_i\sp{\sharp}$,
there exists an open disc $\Delta\subset Y\sp{\sharp}$
with the following properties:
\begin{itemize}
\item[\cond{$\Delta\sp{\sharp}$1}] $\Delta\cap \Sigma=\{y\}$, and
$\Delta$ intersects $\Sigma_i\sp{\sharp}$ transversely at $y$,
\item[\cond{$\Delta\sp{\sharp}$2}] $f\sp{-1} (\Delta)$ is a complex manifold,
\item[\cond{$\Delta\sp{\sharp}$3}] $\shortmap{f|_{f\sp{-1} (\Delta)}}{f\sp{-1} (\Delta)}{ \Delta}$
is a one-dimensional family of complex analytic spaces
that is locally trivial in the $\CCC^\infty$-category over $\Delta\setminus\{y\}$, and
\item[\cond{$\Delta\sp{\sharp}$4}] the central fiber $F_y:=f\sp{-1} (y)$ is an irreducible hypersurface of $f\sp{-1} (\Delta)$,
and $F_y\setminus (F_y\cap \operatorname{\rm Sing}\nolimits(f))$ is non-empty and connected.
\end{itemize}
Moreover
the diffeomorphism type of $\shortmap{f|_{f\sp{-1} (\Delta)}}{f\sp{-1} (\Delta)}{ \Delta}$
depends only on the index $i$ of $\Sigma_i$.
\par
\medskip
We put
$$
X\sp{\sharp}:=f\sp{-1} (Y\sp{\sharp}),\;\; f\sp{\sharp}:=f|_{X\sp{\sharp}}:X\sp{\sharp}\to Y\sp{\sharp},
\;\; \Theta\sp{\sharp}_i:=(f^{\sharp})\inv (\Sigma_i\sp{\sharp}) \;\;\textrm{and}\;\; \Theta\sp{\sharp}:=(f^{\sharp})\inv (\Sigma\sp{\sharp}).
$$
Then each $\Theta\sp{\sharp}_i$ is an irreducible hypersurface of $X\sp{\sharp}$,
and
$\Theta\sp{\sharp}$ is a disjoint union of $\Theta\sp{\sharp}_1, \dots, \Theta\sp{\sharp}_N$.
Note that we have $X\sp{\circ}=X\sp{\sharp}\setminus \Theta\sp{\sharp}$.
\begin{remark}\label{rem:C1}
By the condition \cond{C1},
the Zariski closed subset $f\sp{-1} (\Xi)$ of $X$ is also of codimension $\ge 2$,
and hence the inclusions induce isomorphisms
$\pi_1 (X\sp{\sharp}, \tilde{b})\cong \pi_1 (X, \tilde{b})$ and $\pi_1 (Y\sp{\sharp}, b)\cong \pi_1 (Y, b)$.
\end{remark}
We introduce the notions of \emph{transversal discs}, \emph{leashed discs} and \emph{lassos}.
\begin{definition}\label{def:defs1}
\setcounter{rmkakkocounter}{1}
Let $H\subset M$ be a reduced hypersurface of a complex manifold $M$
of dimension $m$,
and let $H_1, \dots, H_l$
be the irreducible components of $H$.
We fix a base point $b_M\in M\setminus H$.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
Let $N$ be a real $k$-dimensional $\CCC^\infty$-manifold with $2\le k\le 2m$
(possibly with boundaries and corners),
and let $\phi: N\to M$ be a continuous map.
Let $p$ be a point of $N$ that is not in the corner of $N$.
If $k=2$, we further assume that $p\notin \partial N$.
We say that
\emph{ $\phi: N \to M$ intersects $H$ at $p$ transversely}
if the following hold:
%
%
\begin{itemize}
\item[\cond{$\phi 1$}] $\phi(p)\in H\setminus \operatorname{\rm Sing}\nolimits (H)$, and
\item[\cond{$\phi 2$}]
there exist local coordinates $(u_1, \dots, u_k)$
of $N$ at $p$
and local coordinates $(v_1, \dots, v_{2m})$
of the $\CCC^\infty$-manifold underlying $M$ at $\phi(p)$ such that
\begin{itemize}
\item[$\bullet$] $p=(0, \dots, 0)$, $\phi(p)=(0, \dots, 0)$,
\item[$\bullet$] if $p\in \partial N$, then
$N$ is given by $u_k\ge 0$ locally at $p$,
\item[$\bullet$] $H$ is locally defined by $v_1=v_2=0$ in $M$, and
\item[$\bullet$] $\phi$ is given by
$(u_1, \dots, u_k)\mapsto (v_1, \dots, v_{2m})=(u_1, \dots, u_k, 0, \dots, 0)$.
\end{itemize}
\end{itemize}
We say that
\emph{$\phi: N \to M$ intersects $H$ transversely}
if $\phi\sp{-1} (H)$ is disjoint from the corner of $N$
(when $k=2$, we assume that $\phi\sp{-1} (H)\cap \partial N=\emptyset$)
and $\phi$ intersects $H$ transversely at every point of $\phi\sp{-1} (H)$.
If $\phi$ intersects $H$ transversely,
then $\phi\sp{-1} (H)$ is a real $(k-2)$-dimensional sub-manifold of $N$.
If $k>2$, then the boundary of $\phi\sp{-1} (H)$ is equal to $\phi\sp{-1} (H)\cap \partial N$,
while if $k=2$, then $\phi\sp{-1} (H)$ is a finite set of points in the interior of $N$.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
A continuous map $\delta: \bar\Delta\to M$ is called a \emph{transversal disc around $H_i$}
if $\delta\sp{-1} (H)=\{0\}$, $\delta(0)\in H_i$ and
$\delta$ intersects $H$ transversely at $0$.
In this case, the \emph{sign} of $\delta$
is the local intersection number ($+1$ or $-1$) of
$\delta$ with $H_i$ at $\delta (0)$.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
An \emph{isotopy} between transversal discs $\delta$ and $\delta\sp\prime$ around $H_i$
is a continuous map
$$
\map{h}{\bar\Delta\times I}{M}
$$
such that, for each $t\in I$,
the restriction $\delta_t:=\shortmap{h|_{\bar\Delta\times\{t\}}}{\bar\Delta}{M}$
of $h$ to $\bar\Delta\times\{t\}$ is a transversal disc around $H_i$,
and such that $\delta_0=\delta$ and $\delta_1=\delta\sp\prime$ hold.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
A \emph{leashed disc} around $H_i$ with the base point $b_M$ is a pair $\rho=(\delta, \eta)$
of a transversal disc $\delta: \bar\Delta\to M$ around $H_i$
and a path $\eta: I\to M\setminus H$ from $\delta (1)=\bdr_{\vexp}\delta(0)=\bdr_{\vexp}\delta(1)$
to $b_M$.
(Recall that $\bdr_{\vexp}\delta$ is the loop given by $t\mapsto \delta(\exp(2\pi\sqrt{-1}t))$.
See Convention (3).)
The \emph{sign} of a leashed disc $\rho=(\delta, \eta)$
is the sign of $\delta$.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
The \emph{lasso} $\lambda(\rho)$ associated with a leashed disc $\rho=(\delta, \eta)$
is the loop $\eta\sp{-1} \cdot (\bdr_{\vexp} \delta)\cdot \eta$ in $M\setminus H$ with the base point $b_M$.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
An \emph{isotopy}
of leashed discs
around $H_i$ with the base point $b_M$
is a pair of continuous maps
$$
\map{(h_{\bar\Delta}, h_I)}{ (\bar\Delta, I)\times I}{ (M, M\setminus H)}
$$
such that,
for each $t\in I$,
the restriction of $(h_{\bar\Delta}, h_I)$
to $(\bar\Delta, I)\times\{t\}$
is a leashed disc around $H_i$ with the base point $b_M$.
\end{definition}
\begin{remark}\label{rem:lambda}
The isotopy class of a leashed disc $\rho$ is denoted by $[\rho]$.
If $[\rho]=[\rho\sp\prime]$,
then $[\lambda(\rho)]=[\lambda(\rho\sp\prime)]$ holds in $\pi_1 (M\setminus H, b_M)$.
\end{remark}
The following is obvious:
\begin{proposition}\label{prop:rho}
\setcounter{rmkakkocounter}{1}
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
Any two transversal discs around $H_i$
with the same sign are isotopic.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
The homotopy classes of lassos associated with
all the leashed discs around $H_i$
with a fixed sign
form a conjugacy class
in $\pi_1 (M\setminus H, b_M)$.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
The kernel of the homomorphism $\pi_1 (M\setminus H, b_M)\to \pi_1 (M,b_M)$
induced by the inclusion
is generated by the homotopy classes of all lassos around $H_1, \dots, H_l$.
\end{proposition}
We apply these notions to the hypersurfaces
$$
\Sigma\sp{\sharp} =\Sigma\sp{\sharp}_1\cup\dots\cup \Sigma\sp{\sharp}_N
\;\;\textrm{of $Y\sp{\sharp}$},
\quad\rmand\quad
\Theta\sp{\sharp} =\Theta\sp{\sharp}_1\cup\dots\cup \Theta\sp{\sharp}_N
\;\;\textrm{of $X\sp{\sharp}$}.
$$
\begin{definition}
\setcounter{rmkakkocounter}{1}
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
A \emph{transversal lift} of a transversal disc $\shortmap{\delta}{\bar\Delta}{Y\sp{\sharp}}$
around $\Sigma_i\sp{\sharp}$ is a lift
$\shortmap{\lift{\delta}}{\bar\Delta}{X\sp{\sharp}}$
of $\delta$ with
$\lift{\delta} (0)\notin \operatorname{\rm Sing}\nolimits(f)$
such that $\lift{\delta}$ intersects
the irreducible hypersurface $\Theta\sp{\sharp}_i$ transversely at $0$.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
Let $\rho=(\delta, \eta)$ be a leashed disc around $\Sigma\sp{\sharp}_i$ with the base point $b$.
A \emph{transversal lift} of $\rho$ is a pair $\lift{\rho}=(\lift{\delta},\lift{\eta})$
such that $\shortmap{\lift{\delta}}{\bar\Delta}{X\sp{\sharp}}$
is a transversal lift of $\shortmap{\delta}{\bar\Delta}{Y\sp{\sharp}}$
and $\shortmap{\lift{\eta}}{I}{X\sp{\circ}}$ is a lift of
$\shortmap{\eta}{I}{Y\sp{\circ}}$ such that
$\lift{\eta}(0)=\lift{\delta} (1)$
and $\lift{\eta}(1)=\tilde{b}$.
\end{definition}
\begin{remark}
Any transversal lift of a transversal disc (resp.~a leashed disc)
around $\Sigma_i\sp{\sharp}$
is a transversal disc (resp.~a leashed disc) around $\Theta\sp{\sharp}_i$.
Moreover the lifting does not change the sign.
\end{remark}
\begin{definition}\label{def:homotopylift}
\setcounter{rmkakkocounter}{1}
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
Let $\delta_0$ and $\delta_1$ be two transversal discs on $Y\sp{\sharp}$
around $\Sigma_i\sp{\sharp}$, and
let $\shortmap{h}{\bar\Delta\times I }{Y\sp{\sharp}}$
be an isotopy of transversal discs from $\delta_0$ to $\delta_1$.
A \emph{lift} of the isotopy $h$ is a continuous map
$$
\map{\lift{h}}{\bar\Delta\times I }{X\sp{\sharp}}
$$
such that, for each $t\in I$, the restriction $\lift{\delta}_t:=\lift{h}|_{\bar\Delta\times \{t\}}$ is
a transversal lift of the transversal disc $\delta_t:=h|_{\bar\Delta\times \{t\}}$ on $Y\sp{\sharp}$.
In particular,
we have $f\circ\lift{h}=h$ and
$\lift{h}(\bar\Delta \times I)\cap \operatorname{\rm Sing}\nolimits (f)=\emptyset$.
Moreover $\lift{h}$ is an isotopy of transversal discs
around $\Theta_i\sp{\sharp}$ from $\lift{\delta}_0$ to $\lift{\delta}_1$.
By abuse of notation,
we sometimes say that the isotopy $\lift{\delta}_t$ is the transversal lift
of the isotopy ${\delta}_t$,
understanding that $t$ is the homotopy parameter.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
Let $\rho_0$ and $\rho_1$ be two leashed discs on $Y\sp{\sharp}$
around $\Sigma_i\sp{\sharp}$, and
let $\shortmap{(h_{\bar\Delta}, h_I)}{(\bar\Delta, I)\times I }{(Y\sp{\sharp}, Y\sp{\circ})}$
be an isotopy of leashed discs from $\rho_0$ to $\rho_1$.
A \emph{lift} of the isotopy $(h_{\bar\Delta}, h_I)$ is a pair of continuous maps
$$
\map{(\lift{h}_{\bar\Delta}, \lift{h_I})}{(\bar\Delta, I)\times I }{(X\sp{\sharp}, X\sp{\circ})}
$$
such that, for each $t\in I$, the restriction
$\lift{\rho}_t:=(\lift{h}_{\bar\Delta}, \lift{h}_I)|_{(\bar\Delta, I)\times \{t\}}$ is
a transversal lift of the leashed disc $\rho_t:=(h_{\bar\Delta}, h_I)|_{(\bar\Delta, I)\times \{t\}}$ on $Y\sp{\sharp}$.
\end{definition}
The following are obvious
from the condition \cond{$\Delta\sp{\sharp}$4}:
\begin{proposition}\label{prop:homotopyliftexistence}
Every transversal disc
around $\Sigma_i\sp{\sharp}$
has a transversal lift
on $X\sp{\sharp}$.
Moreover, every isotopy $\delta_t$ of transversal discs around $\Sigma_i\sp{\sharp}$
from $\delta_0$ to $\delta_1$
lifts to an isotopy $\lift{\delta}_t$
from a given transversal lift $\lift{\delta}_0$
of $\delta_0$ to a given transversal lift $\lift{\delta}_1$
of $\delta_1$.
\end{proposition}
\begin{remark}\label{rem:homotopyliftexistence2}
Every leashed disc
on $Y\sp{\sharp}$ around $\Sigma_i\sp{\sharp}$ has a transversal lift
on $X\sp{\sharp}$.
Moreover,
every isotopy $\rho_t$ of leashed discs on $Y\sp{\sharp}$
has a lift $\lift{\rho}_t$
on $X\sp{\sharp}$ from a given transversal lift $\lift{\rho}_0$ of $\rho_0$,
but the ending lift $\lift{\rho}_1$ cannot be arbitrarily given.
\end{remark}
\begin{definition}
Let $\rho$ be a leashed disc on $Y\sp{\sharp}$ around $\Sigma_i\sp{\sharp}$,
and let $\lift{\rho}$ be a transversal lift of $\rho$.
Then we have the lasso $\lambda(\lift{\rho})$,
which is a loop in $X\sp{\circ}$
with the base point $\tilde{b}$.
Recall that $\mu$ is the lifted monodromy.
We put
$$
N(\lift{\rho}):=
\gen{\;\shortset{g\sp{-1} g^{\mu([\lambda(\lift{\rho})])}}{g\in \pi_1 (F_b, \tilde{b})}\;}_{\pi_1 (F_b, \tilde{b})}.
$$
\end{definition}
\begin{proposition_definition}\label{prop:N}
Let $\rho\sp\prime$ be a leashed disc on $Y\sp{\sharp}$ isotopic to $\rho$,
and let $\lift{\rho}\sp\prime$ be a transversal lift of $\rho\sp\prime$.
Then we have
$$
N(\lift{\rho})\;=\;N(\lift{\rho}\sp\prime).
$$
Therefore,
for an isotopy class $[\rho]$ of leashed discs on $Y\sp{\sharp}$,
we can define a normal subgroup
$N^{[\rho]}$ of $\pi_1 (F_b, \tilde{b})$ by choosing a transversal lift
$\lift{\rho}$ of a representative $\rho$ of $[\rho]$, and putting
$$
N^{[\rho]}:=N(\lift{\rho}).
$$
\end{proposition_definition}
\begin{proof}
By Remarks~\ref{rem:lambda} and~\ref{rem:homotopyliftexistence2},
the isotopy from $\rho$ to $\rho\sp\prime$ lifts to
an isotopy from $\lift{\rho}$ to some lift $\lift{\rho}\sp\prime_1$ of $\rho\sp\prime$,
and we have $[\lambda(\lift{\rho})]=[\lambda(\lift{\rho}_1\sp\prime)]$ in $\pi_1 (X\sp{\circ}, \tilde{b})$.
(However $[\lambda(\lift{\rho}\sp\prime_1)]$ and $[\lambda(\lift{\rho}\sp\prime)]$
may be distinct in general.)
Therefore it is enough to show that
$N(\lift{\rho}^{(1)})=N(\lift{\rho}^{(2)})$ holds
for any two transversal
lifts $\lift{\rho}^{(1)}=(\lift{\delta}^{(1)}, \lift{\eta}^{(1)})$ and
$\lift{\rho}^{(2)}=(\lift{\delta}^{(2)}, \lift{\eta}^{(2)})$
of a single leashed disc $\rho=(\delta, \eta)$ on $Y\sp{\sharp}$.
We can assume that the transversal disc $\delta: \bar\Delta\to Y\sp{\sharp}$
around $\Sigma\sp{\sharp}_i$
is an embedding of a complex manifold.
We denote by $\bar\Delta_\rho$ the image of $\delta$,
and by $\Delta_\rho$ the interior of $\bar\Delta_\rho$.
We can further assume that
$\bar\Delta_\rho$ is sufficiently small,
and that
$$
E_\rho:=f\sp{-1}(\Delta_\rho)
$$
is a smooth complex manifold
by the condition~\cond{$\Delta\sp{\sharp}$2}.
We then put
$$
\ol{E}_\rho:=f\sp{-1}(\bar\Delta_\rho),
\quad
\ol{E}\sp{\times}_\rho:=f\sp{-1}(\bar\Delta_\rho\sp{\times}),
$$
where $\bar\Delta_\rho\sp{\times}:=\bar\Delta_\rho\setminus \{\delta(0)\}=\bar\Delta_\rho\cap Y\sp{\circ}$.
We also put $q:=\delta(1)=\eta(0)\in \partial \bar\Delta_\rho$ and
$$
\lift{q}^{(1)}:=\lift{\delta}^{(1)}(1)=\lift{\eta}^{(1)}(0)\in F_q,
\quad
\lift{q}^{(2)}:=\lift{\delta}^{(2)}(1)=\lift{\eta}^{(2)}(0)\in F_q.
$$
Since $f$ is locally trivial over $\eta(I)\subset Y\sp{\circ}$
and $\sqcap=(\partial I \times I)\cup (I\times\{1\})$ is a strong deformation retract of $I\times I$,
there exists a continuous map
$\shortmap{\Omega}{I\times I}{X\sp{\circ}}$
such that the following hold for any $s, t\in I$:
$$
f(\Omega (s, t))=\eta (t),
\quad
\Omega(s, 1)=\tilde{b},
\quad
\Omega(0, t)=\lift{\eta}^{(1)} (t),
\quad
\Omega(1, t)=\lift{\eta}^{(2)} (t).
$$
(See Figure~\ref{figOmega}.)
Then,
for each $t\in I$, the map $s\mapsto \Omega(s, t)$ is a path in $F_{\eta (t)}$ from
$\lift{\eta}^{(1)} (t)$ to $\lift{\eta}^{(2)} (t)$.
We denote by $\omega: I\to F_q$ the path
in $F_q$ from $\lift{q}^{(1)}$ to $\lift{q}^{(2)}$
defined by $\omega(s):=\Omega(s, 0)$.
\input figOmega
Then we have the following commutative diagram:
$$
\begin{array}{ccccc}
\pi_1 (F_b, \tilde{b}) & \mapleftspsb{\sim}{[\lift{\eta}^{(1)}]_*}
& \pi_1 (F_q, \lift{q}^{(1)}) & \maprightsp{i_{q *}} & \pi_1(\ol{E}_\rho, \lift{q}^{(1)}) \\
\parallel& & \mapdownleftright{\hskip -10pt [\omega]_*}{\wr} & & \mapdownleftright{\hskip -10pt [\omega]_*}{\wr} \\
\pi_1 (F_b, \tilde{b}) & \mapleftspsb{\sim}{[\lift{\eta}^{(2)}]_*}
& \pi_1 (F_q, \lift{q}^{(2)}) & \maprightsp{i_{q *}}
& \pi_1(\ol{E}_\rho, \lift{q}^{(2)}),\\
&&
\end{array}
$$
where $i_{q}: F_q\hookrightarrow \ol{E}_\rho$ is the inclusion.
Hence,
in order to prove $N(\lift{\rho}^{(1)})=N(\lift{\rho}^{(2)})$, it is enough to show the following equality:
$$
[\lift{\eta}^{(1)}]_*\sp{-1} (N(\lift{\rho}^{(1)}))=\operatorname{\rm Ker}\nolimits(i_{q *} : \pi_1 (F_q, \lift{q}^{(1)}) \to \pi_1(\ol{E}_\rho, \lift{q}^{(1)})).
$$
Since
$\shortmap{f|_{\ol{E}_\rho}}{\ol{E}_\rho}{\bar\Delta_\rho}$
is locally trivial over $\bar\Delta_\rho\sp{\times}$
with the general fiber being connected by \cond{C0},
and since there exists a cross-section
$$
\map{{}^s\lift{\delta}^{(1)}}{\bar\Delta_\rho}{\ol{E}_\rho}
$$
of $f|_{\ol{E}_\rho}$
given by the transversal lift $\lift{\delta}^{(1)}$ of $\delta$,
we have an exact sequence
$$
1\;\maprightsp{}\;
\pi_1 (F_q, \lift{q}^{(1)})\;\maprightsp{i_{q*}}\;
\pi_1 (\ol{E}_\rho\sp{\times}, \lift{q}^{(1)} )\;\maprightsp{(f|_{\ol{E}\sp{\times}_\rho})_*}\;
\pi_1(\bar\Delta_{\rho}\sp{\times}, q)\;\maprightsp{}\;
1
$$
with the cross-section
$$
\map{s}{\pi_1(\bar\Delta_{\rho}\sp{\times}, q)}{\pi_1 (\ol{E}_\rho\sp{\times}, \lift{q}^{(1)} )}
$$
of $(f|_{\ol{E}\sp{\times}_\rho})_*$ that maps the positive generator $[\bdr_{\vexp} \delta]$ of $\pi_1(\bar\Delta_{\rho}\sp{\times}, q) \cong \mathord{\mathbb Z}$
to $[\bdr_{\vexp} \lift{\delta}^{(1)}]\in \pi_1 (\ol{E}_\rho\sp{\times}, \lift{q}^{(1)} )$.
By the cross-section ${}^s\lift{\delta}^{(1)}$ of $f|_{\ol{E}_\rho}$
over $\bar\Delta_\rho$,
we have the classical monodromy action of $\pi_1(\bar\Delta_{\rho}\sp{\times}, q)$ on $\pi_1 (F_q, \lift{q}^{(1)})$.
By the definition,
the action of $[\bdr_{\vexp} \delta]\in \pi_1(\bar\Delta_{\rho}\sp{\times}, q)$ is equal to
$$
g\;\;\mapsto\;\; g^{\mu([\bdr_{\vexp} \lift{\delta}^{(1)}])}=
[\bdr_{\vexp} \lift{\delta}^{(1)}]\sp{-1} \cdot g \cdot[\bdr_{\vexp} \lift{\delta}^{(1)}]
\quad\textrm{for}\quad g\in \pi_1 (F_q, \lift{q}^{(1)}),
$$
where the product is taken in $\pi_1 (\ol{E}_\rho\sp{\times}, \lift{q}^{(1)} )$
and $\pi_1 (F_q, \lift{q}^{(1)})$ is regarded as a normal subgroup of
$\pi_1 (\ol{E}_\rho\sp{\times}, \lift{q}^{(1)} )$ by $i_{q*}$.
Hence, by Lemma~\ref{lem:GGG},
$\pi_1 (\ol{E}_\rho\sp{\times}, \lift{q}^{(1)} )$ is isomorphic to the semi-direct product
$\pi_1 (F_q, \lift{q}^{(1)})\rtimes \pi_1(\bar\Delta_{\rho}\sp{\times}, q)$
constructed by the monodromy action.
On the other hand,
by the condition~\cond{$\Delta\sp{\sharp}$4},
the central fiber $F_{\delta(0)}$ of $\ol{E}_\rho\to \bar\Delta_\rho$ is
an irreducible hypersurface of $\ol{E}_\rho$, and hence
the kernel of
$$
\map{j_*}{\pi_1 (\ol{E}_\rho\sp{\times}, \lift{q}^{(1)} )}{\pi_1 (\ol{E}_\rho, \lift{q}^{(1)} )}
$$
induced by the inclusion
$j: \ol{E}_\rho\sp{\times}\hookrightarrow \ol{E}_\rho$ is generated by the conjugacy class of lassos
around $F_{\delta(0)}$.
(See Proposition~\ref{prop:rho}.)
Since $\bdr_{\vexp} \lift{\delta}^{(1)}=\lambda(\lift{\delta}^{(1)})$ is a lasso around $F_{\delta(0)}$,
the kernel of $j_*$
is equal to the normal subgroup
$\ngen{\{[\bdr_{\vexp} \lift{\delta}^{(1)}]\}}=\ngen{\Im (s)}$.
By Lemmas~\ref{lem:S} and~\ref{lem:sGamma},
the kernel of
the composite
$$
\pi_1 (F_q, \lift{q}^{(1)})
\;\maprightsp{i_{q*}}\;
\pi_1 (\ol{E}_\rho\sp{\times}, \lift{q}^{(1)} )
\;\maprightsp{j_*}\;
\pi_1 (\ol{E}_\rho, \lift{q}^{(1)} )=\pi_1 (\ol{E}_\rho\sp{\times}, \lift{q}^{(1)} )/\ngen{\Im (s)}
$$
is equal to
$$
N\sp\prime:=\gen{\set{g\sp{-1} g^{\mu([\bdr_{\vexp} \lift{\delta}^{(1)}])}}{g\in \pi_1 (F_q, \lift{q}^{(1)})}}.
$$
Since
$[\lift{\eta}^{(1)}]_* (g^{\mu([\bdr_{\vexp} \lift{\delta}^{(1)}])})=([\lift{\eta}^{(1)}]_* (g))^{\mu([\lambda(\lift{\rho}^{(1)})])}$
for any $g\in \pi_1(F_q, \lift{q}^{(1)})$,
we see that
$[\lift{\eta}^{(1)}]_*$ induces an isomorphism $N\sp\prime \isom N(\lift{\rho}^{(1)})$.
\end{proof}
\begin{proposition}\label{prop:Nalpha}
Let $\shortmap{\lift{\gamma}}{(I, \partial I)}{(X\sp{\circ}, \tilde{b})}$
be a loop, and we put $\gamma:=f\circ \lift{\gamma}$.
Then,
for any leashed disc
$\rho=(\delta, \eta)$ on $Y\sp{\sharp}$ around $\Sigma_i\sp{\sharp}$, we have
$$
({N^{[\rho]}})\sp{\mu([\lift{\gamma}])} =N^{[(\delta, \eta\gamma)]}.
$$
\end{proposition}
\begin{proof}
Let $g$ be an element of $\pi_1(F_b, \tilde{b})$,
and let $h$ denote $g^{\mu([\lift{\gamma}])}$.
Then, for a transversal lift $\lift{\rho}=(\lift{\delta}, \lift{\eta})$ of $\rho$,
we have
$$
(g\sp{-1} g^{\mu([\lambda(\lift{\rho})])})^{\mu([\lift{\gamma}])}
=h\sp{-1} h ^{\mu([\lift{\gamma}]\sp{-1} [\lambda(\lift{\rho})] [\lift{\gamma}])}.
$$
Since
$\lift{\gamma}\sp{-1} \lambda(\lift{\rho}) \lift{\gamma}=
\lift{\gamma}\sp{-1} \lift{\eta}\sp{-1} \cdot \bdr_{\vexp}{\lift{\delta}} \cdot \lift{\eta} \lift{\gamma}$
is a lasso
associated with the transversal lift $(\lift{\delta},\lift{\eta}\lift{\gamma})$
of the leashed disc $(\delta, \eta\gamma)$,
the proposition follows.
\end{proof}
\begin{corollary}\label{cor:foranyrho}
If $N^{[\rho]}=1$ holds
for one leashed disc $\rho$ around $\Sigma_i\sp{\sharp}$,
then we have
$N^{[\rho]}=1$
for any leashed disc $\rho$ around $\Sigma_i\sp{\sharp}$.
\end{corollary}
We can now state the main result of this section.
\begin{theorem}\label{thm:ZvK}
Suppose that the conditions \cond{C1},~\cond{C2} and the following condition~\cond{Z} are satisfied:
\begin{itemize}
\item[\cond{Z}]
There exists a continuous cross-section $s_Z: Z\to f\sp{-1} (Z)$
of $f$ over a subspace $Z\subset Y$ satisfying
$b\in Z$, $s_Z(b)=\tilde{b}$, $s_Z(Z)\cap \operatorname{\rm Sing}\nolimits (f)=\emptyset$ and
such that
the inclusion $Z\hookrightarrow Y$ induces
a surjection $\pi_2 (Z, b)\mathbin{\to \hskip -7pt \to} \pi_2(Y, b)$.
\end{itemize}
Let $\mathord{\mathcal L}$ be the set of
isotopy classes of all leashed discs on $Y\sp{\sharp}$
around $\Sigma\sp{\sharp}_1, \dots, \Sigma\sp{\sharp}_N$.
Then $\operatorname{\rm Ker}\nolimits (\iota_*)$
is equal to
$$
\mathord{\mathcal N}:=\gen{\;\;\textstyle{\bigcup}_{[\rho]\in \mathord{\mathcal L}} N^{[\rho]}\;\;}_{\pi_1 (F_b, \tilde{b})}.
$$
\end{theorem}
\begin{remark}\label{rem:pitwo}
If $\pi_2(Y)=0$, then the condition~\cond{Z} is always satisfied,
because we can put $Z=\{b\}$ and $s_Z(b)=\tilde{b}$.
\end{remark}
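Since the theorem identifies $\operatorname{\rm Ker}\nolimits(\iota_*)$ with $\mathord{\mathcal N}$, the first isomorphism theorem yields the following reformulation (assuming, as the notation suggests, that $\iota$ denotes the inclusion $F_b\hookrightarrow X$):

```latex
\pi_1 (F_b, \tilde{b})\,/\,\mathord{\mathcal N}
\;\cong\;
\Im (\iota_*)\;\subset\;\pi_1 (X, \tilde{b}).
```

In particular, when $\iota_*$ is surjective, this computes $\pi_1 (X, \tilde{b})$ from generators of $\pi_1 (F_b, \tilde{b})$ and the monodromy relations, in the spirit of the Zariski--van Kampen theorem.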
For the proof,
we define the notion of \emph{free loop pairs of monodromy relation type}.
Let $\mathbbS^1$ denote the oriented circle.
\begin{definition}
Let $T$ be a topological space.
A \emph{free loop}
on $T$ is a continuous map
$\shortmap{\varphi}{\mathbbS^1}{T}$.
A \emph{homotopy} from a free loop $\varphi$ to a free loop $\varphi\sp\prime$
is a continuous map
$\shortmap{\Phi}{\mathbbS^1\times I}{T}$
such that $\Phi|_{{\mathbbS^1}\times\{0\}} =\varphi$
and $\Phi|_{{\mathbbS^1}\times\{1\}} =\varphi\sp\prime$.
The homotopy class of a free loop $\varphi$ is denoted by $[\varphi]_{\mathord{\rm FL}}$.
\end{definition}
Suppose that $T$ is path-connected, and let $b_T$ be a base point of $T$.
Then the natural map $[\alpha]\mapsto [\alpha]_{\mathord{\rm FL}}$ induces a bijection from the set of
conjugacy classes of $\pi_1 (T, b_T)$ to the set of homotopy classes
of free loops on $T$.
\par
\medskip
Let $D$ be a topological space homeomorphic to $\bar\Delta$,
let $b_D$ be a point of $D$,
and let $\partial D$ be the boundary of $D$
with an orientation.
\begin{definition}\label{def:frp}
A \emph{free loop pair} is a pair
$$
\map{(\psi, \Lift{\psi|_{\partial D}})}{(D, \partial D)}{(Y\sp{\circ}, X\sp{\circ})}
$$
of
a continuous map $\shortmap{\psi}{ D}{ Y\sp{\circ}}$
and a lift $\shortmap{\Lift{\psi|_{\partial D}}}{\partial D}{X\sp{\circ}}$ of the restriction $\shortmap{\psi|_{\partial D}}{\partial D }{Y\sp{\circ}}$
of $\psi$ to $\partial D$.
\end{definition}
\begin{remark}
The notation $\shortmap{(\psi, \Lift{\psi|_{\partial D}})}{(D, \partial D)}{(Y\sp{\circ}, X\sp{\circ})}$
for a pair of maps is different from
the usual notation in topology,
because $X\sp{\circ}$ is \emph{not} a subspace of $Y\sp{\circ}$.
The same warning also applies to Definition~\ref{def:hfrp}.
\end{remark}
Let $\shortmap{(\psi, \Lift{\psi|_{\partial D}})}{(D, \partial D)}{(Y\sp{\circ}, X\sp{\circ})}$
be a free loop pair.
Consider the pull-back
$$
\map{\psi^*(f\sp{\circ})}{\psi^*(X\sp{\circ}):=X\sp{\circ}\times_{Y\sp{\circ}} D}{ D }
$$
of the locally trivial map $\shortmap{f\sp{\circ}}{X\sp{\circ}}{Y\sp{\circ}}$ by $\psi$.
Since $ D $ is contractible,
we have a contraction $c: \psi^*(X\sp{\circ})\to F_{\psi(b_D)}$,
which is the homotopy inverse of the inclusion $F_{\psi(b_D)}\hookrightarrow \psi^*(X\sp{\circ})$.
Then the cross-section
$$
\map{{}\sp{s}\Lift{\psi|_{\partial D}}}{\partial D }{\psi^*(X\sp{\circ})}
$$
of $\psi^*(f\sp{\circ}) $ over $\partial D $
obtained from $\shortmap{\Lift{\psi|_{\partial D}}}{\partial D}{X\sp{\circ}}$ defines a homotopy class
$[\Lift{\psi|_{\partial D}}]_{\mathord{\rm FL}}$ of free loops on $F_{\psi(b_D)}$ via the contraction $c$,
and hence a conjugacy class ${\mathord{\rm C}} (\psi, \Lift{\psi|_{\partial D}})$ of
$\pi_1 (F_{\psi(b_D)}, \tilde{b}\sp\prime)$,
where $\tilde{b}\sp\prime\in F_{\psi(b_D)}$ is an arbitrary base point.
Note that ${\mathord{\rm C}} (\psi, \Lift{\psi|_{\partial D}})$
does not depend on the choice of the contraction $c$.
\begin{definition}\label{def:frpmrt}
We choose a path $\lift{\alpha}$ in $X\sp{\circ}$ from $\tilde{b}\in F_b$ to $\tilde{b}\sp\prime\in F_{\psi(b_D)}$.
We say that the free loop pair
$$
\map{(\psi, \Lift{\psi|_{\partial D}})}{(D, \partial D)}{(Y\sp{\circ}, X\sp{\circ})}
$$
is \emph{of monodromy relation type around $\Sigma_i\sp{\sharp}$}
if the pull-back of the conjugacy class
${\mathord{\rm C}} (\psi, \Lift{\psi|_{\partial D}})\subset \pi_1 (F_{\psi(b_D)}, \tilde{b}\sp\prime)$ by the isomorphism
$[\lift{\alpha}]_* : \pi_1(F_b, \tilde{b})\isom \pi_1 (F_{\psi(b_D)}, \tilde{b}\sp\prime)$
is contained in $N^{[\rho]}$ for some leashed disc $\rho$ on $Y\sp{\sharp}$ around $\Sigma_i\sp{\sharp}$.
\end{definition}
\begin{remark}
It is obvious that
this definition does not depend on the choice of the orientation of $\partial D$.
It also follows from
Proposition~\ref{prop:Nalpha} that
this definition does not depend on the choice of the path $\lift{\alpha}$
connecting $\tilde{b}\in F_b$ and $\tilde{b}\sp\prime\in F_{\psi(b_D)}$.
\end{remark}
\begin{definition}\label{def:hfrp}
A \emph{homotopy of free loop pairs}
is a pair of continuous maps
$$
\map{(h, \Lift{h|_{\partial D}})}{(D, \partial D)\times I}{(Y\sp{\circ}, X\sp{\circ})}
$$
such that, for each $u\in I$,
the restriction of $(h, \Lift{h|_{\partial D}})$ to $(D, \partial D)\times \{u\}$ is a free loop pair.
\end{definition}
\begin{remark}\label{rem:homotopymonrel}
Suppose that two free loop pairs are homotopic.
If one is of monodromy relation type around $\Sigma_i\sp{\sharp}$,
then so is the other.
\end{remark}
\begin{remark}\label{rem:homotopyD}
Let
$\shortmap{\psi_u}{D}{Y\sp{\circ}}$
be a homotopy of continuous maps from $\psi_0$ to $\psi_1$ parametrized by $u\in I$.
Since $f\sp{\circ}$ is locally trivial,
the homotopy
$\shortmap{\psi_u|_{\partial D}}{\partial D}{Y\sp{\circ}}$
lifts to a homotopy
$\shortmap{\Lift{\psi_u|_{\partial D}}}{\partial D}{X\sp{\circ}}$
that starts from any given lift $\Lift{\psi_0|_{\partial D}}$
of ${\psi_0|_{\partial D}}$,
and hence we obtain a homotopy $(\psi_u, \Lift{\psi_u|_{\partial D}})$ of free loop pairs
starting from a given $(\psi_0, \Lift{\psi_0|_{\partial D}})$.
(The ending lift $\Lift{\psi_1|_{\partial D}}$
cannot be arbitrarily given.)
\end{remark}
\begin{proposition}\label{prop:TD}
Let $\delta_0$ and $\delta_1$ be two transversal discs on $Y\sp{\sharp}$ around $\Sigma_i\sp{\sharp}$,
and let
$\shortmap{h}{\bar\Delta\times I}{Y\sp{\sharp}}$
be an isotopy of transversal discs from $\delta_0=h|_{\bar\Delta\times\{0\}}$ to
$\delta_1=h|_{\bar\Delta\times\{1\}}$.
Let $D$ be a closed subset of $\partial\bar\Delta\times (I\setminus\partial I)$
homeomorphic to $\bar\Delta$, and put
$$
T:=\partial(\bar\Delta \times I)\setminus (D\setminus \partial D),
$$
so that $\partial T=\partial D$.
Suppose that we are given a lift
$$
\map{\Lift{h|_T}}{T}{X\sp{\sharp}}
$$
of $\shortmap{h|_T}{T}{Y\sp{\sharp}}$
such that the restrictions
$$
\shortmap{\lift{\delta}_0:=\Lift{h|_T}|_{\bar\Delta\times\{0\}}}{\bar\Delta}{X\sp{\sharp}}
\quad\rmand\quad
\shortmap{\lift{\delta}_1:=\Lift{h|_T}|_{\bar\Delta\times\{1\}}}{\bar\Delta}{X\sp{\sharp}}
$$
are transversal lifts of $\delta_0$ and $\delta_1$, respectively.
Then the free loop pair
$$
\map{(h|_D, \Lift{h|_T}|_{\partial D})}{(D, \partial D)}{(Y\sp{\circ}, X\sp{\circ})}
$$
is of monodromy relation type around $\Sigma_i\sp{\sharp}$.
\end{proposition}
\begin{remark}
In Figure~\ref{figLiftH},
the closed subset $D$ is the region surrounded by the dashed curve on the right tube $\bar\Delta\times I$.
\end{remark}
\input figLiftH
\begin{proof}[Proof of Proposition~\ref{prop:TD}]
First note that,
since $h$ is an isotopy of transversal discs,
the image of $\partial\bar\Delta \times I$ by $h$ is contained in $Y\sp{\circ}$,
and hence we have $h|_D(D)\subset Y\sp{\circ}$.
\par
\medskip
By Remarks~\ref{rem:homotopymonrel} and~\ref{rem:homotopyD}, we can assume that
$D\cap (\{1\} \times I)=\emptyset$
by moving $D$ by a homeomorphism of $\partial\bar\Delta\times I$
homotopic to the identity.
We consider the continuous map
$$
\map{\tau }{I^2 }{ \partial\bar\Delta \times I}
$$
given by $\tau(s, t):=(\exp(2\pi\sqrt{-1}s), t)$.
Then we have $D\subset \tau (I^2\setminus \partial I^2)$ and
$\tau (\partial I^2)\subset T$.
Under a suitable homeomorphism between $D$ and $I^2$,
the inclusion $D\hookrightarrow \partial\bar\Delta \times I$ is homotopic to
$\tau$.
We put
$$
H_0:=h\circ \tau\;:\; I^2\to Y\sp{\circ}
$$
and define a lift $\Lift{H_0 |_{\partial I^2}}$ of $H_0 |_{\partial I^2}$ by
$$
\Lift{H_0 |_{\partial I^2}}:=\Lift{h|_T} \circ(\tau|_{\partial I^2}) \;: \; \partial I^2\to X\sp{\circ}.
$$
By Remarks~\ref{rem:homotopymonrel} and~\ref{rem:homotopyD} again, it is enough to prove that
the free loop pair
$$
\map{(H_0, \Lift{H_0 |_{\partial I^2}})}{(I^2, \partial I^2)}{(Y\sp{\circ}, X\sp{\circ})}
$$
is of monodromy relation type around $\Sigma_i\sp{\sharp}$.
For simplicity, we put
\begin{eqnarray*}
&&q:=\delta_0 (1)=h(1,0)=H_0 (0,0)=H_0(1,0), \quad\rmand\quad \\
&&\lift{q}:=\lift{\delta}_0(1)=\Lift{h|_T}(1,0)=\Lift{H_0|_{\partial I^2}} (0,0)=\Lift{H_0|_{\partial I^2}} (1,0)\in F_q.
\end{eqnarray*}
By Proposition~\ref{prop:homotopyliftexistence},
we have an isotopy
$$
\map{h\sp{L}}{\bar\Delta\times I}{X\sp{\sharp}}
$$
of transversal discs around $\Theta\sp{\sharp}_i$
from $\lift{\delta}_0=\Lift{h|_T}|_{\bar\Delta\times\{0\}}$
to $\lift{\delta}_1=\Lift{h|_T}|_{\bar\Delta\times\{1\}}$
that is a lift of
the isotopy $\shortmap{h}{\bar\Delta\times I}{Y\sp{\sharp}}$;
$$
f\circ h\sp{L}=h.
$$
In Figure~\ref{figLiftH},
the left tube is $h^L$,
while the barrel with a hole is $\Lift{h|_T}$.
We put
$$
\map{\delta_t:=h|_{\bar\Delta\times\{t\}}}{\bar\Delta}{Y\sp{\sharp}}
\quad\rmand\quad
\map{\lift{\delta}_t:=h\sp{L}|_{\bar\Delta\times\{t\}}}{\bar\Delta}{X\sp{\sharp}}.
$$
Then $\lift{\delta}_t$
is a transversal lift of
$\delta_t$.
Next we put
$$
\map{k_0:=h|_{\{1\}\times I}}{I}{Y\sp{\circ}},
$$
which is a path on $Y\sp{\circ}$ from $q=\delta_0 (1)$ to $\delta_1 (1)$,
and
$$
\lift{k}_0:=\Lift{h|_T}|_{\{1\}\times I}=\Lift{H_0 |_{\partial I^2}}|_{\{0\}\times I}=\Lift{H_0 |_{\partial I^2}}|_{\{1\}\times I},
$$
which is
a lift of $k_0$ from $\lift{q}=\lift{\delta}_0 (1)$ to $\lift{\delta}_1 (1)$.
Note that,
with the base point $(0,0)$ and the
orientation of $\partial I^2$ given in Figure~\ref{figorientation},
the map $\shortmap{\Lift{H_0 |_{\partial I^2}}}{\partial I^2}{X\sp{\circ}}$
is equal to
$$
\lift{k}_0\cdot \bdr_{\vexp}\lift{\delta}_1\cdot \lift{k}_0\sp{-1} \cdot \bdr_{\vexp}\lift{\delta}_0 \sp{-1}
$$
as a loop with the base point $\lift{q}=\Lift{H_0 |_{\partial I^2}}(0,0)\in F_{q}$.
\input figOrientation
We define a homotopy
$$
\map{H_u}{I^2}{Y\sp{\circ}}\qquad (u\in I)
$$
with $u$ being the homotopy parameter
by $H_u(s, t):=H_0(s, (1-u)t)$,
and will construct a homotopy
$\shortmap{\Lift{H_u|_{\partial I^2}}}{\partial I^2}{X\sp{\circ}}$
that covers the homotopy ${H_u|_{\partial I^2}}$ and starts from $\Lift{H_0|_{\partial I^2}}$ above.
We define
$$
\map{K}{I\times I}{Y\sp{\circ}}
$$
by $K(t, u):=k_0 ((1-u)t)$,
and put
$k_u:=K|_{I\times\{u\}}$
for $u\in I$.
Then $k_u$ gives a homotopy with parameter $u\in I$
from $k_0$ to the constant map $k_1=1_{q}$.
We then define
a lift
$\shortmap{\Lift{K|_\sqcup}}{\sqcup}{X\sp{\circ}}$
of $\shortmap{K|_\sqcup}{\sqcup}{Y\sp{\circ}}$,
where $\sqcup:=(\partial I\times I)\cup (I\times\{0\})$,
by the following:
$$
\Lift{K|_\sqcup}(t, u):=
\begin{cases}
\lift{q} & \textrm{if $t=0$,}\\
\lift{k}_0 (t) & \textrm{if $u=0$,}\\
\lift{\delta}_{1-u} (1)=h^L(1, 1-u) & \textrm{if $t=1$.}\\
\end{cases}
$$
Since $f\sp{\circ}$ is locally trivial,
the lift $\Lift{K|_\sqcup}$ extends to a lift
$\shortmap{\lift{K}}{I\times I}{X\sp{\circ}}$
of $K$.
(See Figure~\ref{figK}.)
\input figK
Then we obtain a lift
$$
\lift{k}_u:=\lift{K}|_{I\times\{u\}},
$$
of $k_u$, which is a path
from $\lift{q}\in F_{q}$ to the point
$\lift{\delta}_{1-u} (1)=h^L(1, 1-u)$ of $ F_{\delta_{1-u} (1)}$.
(See Figure~\ref{figLoopH}.)
\input figLoopH
We then define a lift
$$
\map{\Lift{ H_u|_{\partial I^2}}}{\partial I^2}{X\sp{\circ}}\qquad (u\in I)
$$
of $H_u|_{\partial I^2}$ as a loop by
$$
\lift{k}_u\cdot \bdr_{\vexp}\lift{\delta}_{1-u}\cdot \lift{k}_u\sp{-1} \cdot \bdr_{\vexp}\lift{\delta}_0 \sp{-1},
$$
where $\partial I^2$ is oriented and segmented as Figure~\ref{figorientation} above.
Then
$(H_u, \Lift{ H_u|_{\partial I^2}})$
is a homotopy of free loop pairs parametrized by $u\in I$.
By Remarks~\ref{rem:homotopymonrel} and~\ref{rem:homotopyD} again,
it is enough to prove that the free loop pair
$$
\map{(H_1, \Lift{ H_1|_{\partial I^2}})}{(I^2, \partial I^2)}{(Y\sp{\circ}, X\sp{\circ})}
$$
is of monodromy relation type around $\Sigma_i\sp{\sharp}$.
Note that
$$
\Lift{ H_1|_{\partial I^2}}=\lift{k}_1\cdot \bdr_{\vexp}\lift{\delta}_{0}\cdot \lift{k}_1\sp{-1} \cdot \bdr_{\vexp}\lift{\delta}_0 \sp{-1},
$$
(see Figure~\ref{figLK1}),
and that the lift $\lift{k}_1$ of the constant map $k_1=1_q$ is a loop in $F_q$ with the base point $\lift{q}$.
\input figLK1
Since $H_1(s, t)=H_0(s, 0)=\bdr_{\vexp}\delta_0(s)$ for any $t$,
the pull-back
$$
\map{H_1\sp* (f\sp{\circ})}{H_1\sp* (X\sp{\circ})}{I^2}
$$
of $\shortmap{f\sp{\circ}}{X\sp{\circ}}{Y\sp{\circ}}$ by $H_1$ is the product of the pull-back
$$
\map{(\bdr_{\vexp} \delta_0) \sp* (f\sp{\circ})}{(\bdr_{\vexp} \delta_0) \sp* (X\sp{\circ})}{I}
$$
of $f\sp{\circ}$ by $\shortmap{\bdr_{\vexp} \delta_0}{I}{Y\sp{\circ}}$
and
the identity map of the second factor $I$.
Let
$$
\map{{}\sp{s}\Lift{ H_1|_{\partial I^2}}}{\partial I^2}{H_1\sp* (X\sp{\circ})=(\bdr_{\vexp} \delta_0) \sp* (X\sp{\circ})\times I}
$$
be the cross-section of
$H_1 \sp* (f\sp{\circ})$
over $\partial I^2$
obtained from
$\Lift{ H_1|_{\partial I^2}}$.
We will describe the image of the free loop
${}\sp{s}\Lift{ H_1|_{\partial I^2}}$
by a contraction
$$
\map{c\sp\prime}{H_1\sp* (X\sp{\circ})}{F_q}.
$$
We construct the contraction $c\sp\prime$ as the composite of
the projection
$$
\map{\operatorname{\rm pr}\nolimits_1}{H_1\sp* (X\sp{\circ})}{ (\bdr_{\vexp} \delta_0) \sp* (X\sp{\circ})}
$$
onto the first factor
and a contraction
$\shortmap{c}{(\bdr_{\vexp} \delta_0) \sp* (X\sp{\circ})}{F_q}$.
Let
$$
\map{\sigma}{\partial I^2}{(\bdr_{\vexp} \delta_0) \sp* (X\sp{\circ})}
$$
be the composite of ${}\sp{s}\Lift{ H_1|_{\partial I^2}}$ with the projection $\operatorname{\rm pr}\nolimits_1$.
The fibers $F_q^{(0)}$ and $F_q^{(1)}$ of $\shortmap{(\bdr_{\vexp} \delta_0 ) \sp* (f\sp{\circ})}{(\bdr_{\vexp} \delta_0 ) \sp* (X\sp{\circ})}{I}$
over $0\in I$ and $1\in I$ are canonically identified with $F_q$.
Let $\lift{q}^{(0)}\in F_q^{(0)}$ and $\lift{q}^{(1)}\in F_q^{(1)}$
be the points corresponding to $\lift{q}\in F_q$.
Then
$\Lift{ H_1|_{\partial I^2}}|_{\{0\}\times I}$
(resp.~$\Lift{ H_1|_{\partial I^2}}|_{\{1\}\times I}$)
gives rise
to a loop $\lift{k}_1^{(0)}$ in $F_q^{(0)}$
with the base point $\lift{q}^{(0)}$
(resp.~a loop~$\lift{k}_1^{(1)}$ in $F_q^{(1)}$
with the base point $\lift{q}^{(1)}$).
Each of them
corresponds to the loop $\lift{k}_1$
by the obvious identifications
$(F_q, \lift{q})=(F_q^{(0)}, \lift{q}^{(0)})=(F_q^{(1)}, \lift{q}^{(1)})$.
On the other hand, the loop $\bdr_{\vexp}\lift{\delta}_0$ gives rise to a cross-section
$$
\map{{}^{s} \bdr_{\vexp}\lift{\delta}_0}{I}{(\bdr_{\vexp} \delta_0 ) \sp* (X\sp{\circ})}
$$
of
$(\bdr_{\vexp} \delta_0 ) \sp* (f\sp{\circ})$
that connects $\lift{q}^{(0)}$ and $\lift{q}^{(1)}$.
The loop $\sigma$ on $(\bdr_{\vexp} \delta_0 ) \sp* (X\sp{\circ})$
is then equal to the conjunction
$$
(\lift{k}_1^{(0)})\cdot ({}^{s} \bdr_{\vexp}\lift{\delta}_0) \cdot (\lift{k}_1^{(1)})\sp{-1} \cdot ({}^{s} \bdr_{\vexp}\lift{\delta}_0)\sp{-1}.
$$
(See Figure~\ref{figsigma}.)
\input figsigma
We denote by $S\subset (\bdr_{\vexp} \delta_0 ) \sp* (X\sp{\circ})$ the image of the section ${}^{s} \bdr_{\vexp}\lift{\delta}_0$,
and choose a contraction
$$
\map{c}{((\bdr_{\vexp} \delta_0 ) \sp* (X\sp{\circ}), S)}{ (F_q^{(0)}, \lift{q}^{(0)})=(F_q, \lift{q})}
$$
to the fiber over $0\in I$
that contracts the section $S$ to the point $\lift{q}$.
We put
$$
\gamma:=\mu([\bdr_{\vexp} \lift{\delta}_0]) \in \operatorname{\rm Aut}\nolimits(\pi_1 (F_q, \lift{q})).
$$
By the definition of the lifted monodromy,
the loop
$$
({}^s \bdr_{\vexp}\lift{\delta}_{0})\cdot (\lift{k}_1^{(1)}) \cdot ({}^s \bdr_{\vexp}\lift{\delta}_0 )\sp{-1}
$$
on $(\bdr_{\vexp} \delta_0) \sp* (X\sp{\circ})$ is contracted by $c$ to a loop in $F_q$ that represents
$$
[\lift{k}_1]^{(\gamma\sp{-1})}\in \pi_1 (F_q, \lift{q}),
$$
while the loop $\lift{k}_1^{(0)}$ on $F_q^{(0)}$
obviously represents
$[\lift{k}_1]\in\;\pi_1 (F_q, \lift{q})$.
Therefore, by the contraction $c$, the loop $\sigma$ on
$(\bdr_{\vexp} \delta_0) \sp* (X\sp{\circ})$ is mapped to a loop that represents
$$
[\lift{k}_1] ([\lift{k}_1]^{(\gamma\sp{-1})})\sp{-1} =(\kappa\sp{-1} \kappa\sp{\gamma})\sp{-1},
$$
where $\kappa:=([\lift{k}_1]^{(\gamma\sp{-1})})\sp{-1}$.
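To verify the last equality (an elementary computation; here $g^{\gamma}$ denotes the image of $g\in \pi_1 (F_q, \lift{q})$ under the automorphism $\gamma$), note that $\kappa\sp{-1}=[\lift{k}_1]^{(\gamma\sp{-1})}$ and $\kappa\sp{\gamma}=[\lift{k}_1]\sp{-1}$, so that

```latex
\kappa\sp{-1}\,\kappa\sp{\gamma}
=[\lift{k}_1]^{(\gamma\sp{-1})}\cdot[\lift{k}_1]\sp{-1},
\qquad
(\kappa\sp{-1}\,\kappa\sp{\gamma})\sp{-1}
=[\lift{k}_1]\cdot \bigl([\lift{k}_1]^{(\gamma\sp{-1})}\bigr)\sp{-1}.
```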
Hence the conjugacy class of $\pi_1 (F_q, \lift{q})$
corresponding to the free loop pair $(H_1, \Lift{ H_1|_{\partial I^2}})$
is contained in the normal subgroup $N(\bdr_{\vexp} \lift{\delta}_0)=N^{[\bdr_{\vexp} {\delta}_0]}$
generated by the monodromy relations along $[\bdr_{\vexp} {\delta}_0]$.
\end{proof}
\begin{corollary}\label{cor:A}
We put
\begin{eqnarray*}
\mathord{\mathbb T}\phantom{_\zeta}&:=&\set{(x,y,z)\in \mathord{\mathbb R}^3}{x^2+y^2\le 1, z\in I},\\
A_\zeta&:=&\set{(x,y,z)\in \mathord{\mathbb T}}{z=\zeta}, \quad\rmand\quad\\
\Upsilon\,&:=&\set{(x,y,z)\in \mathord{\mathbb T}}{x^2+y^2=1}\cup A_1\;\;=\;\;\partial \,\mathord{\mathbb T}\setminus A_0\sp{\circ},
\end{eqnarray*}
where $A_0\sp{\circ}$ is the interior of the closed disc $A_0$.
Let $\shortmap{\varphi}{\mathord{\mathbb T}}{Y\sp{\sharp}}$ be a continuous map
such that
$\varphi (\mathord{\mathbb T})\cap \Sigma\sp{\sharp}\subset \Sigma\sp{\sharp}_i$ and
$$
\varphi\sp{-1} (\Sigma\sp{\sharp}_i)=\set{(x, 0, z)\in \mathord{\mathbb T}}{x^2+(z-1)^2=1/2}
$$
hold, and such that
$\shortmap{\varphi|_{A_1}}{A_1}{Y\sp{\sharp}}$
intersects $\Sigma\sp{\sharp}$ transversely at $(\pm 1/\sqrt{2}, 0,1)$.
Suppose that we have a lift
$\shortmap{\Lift{\varphi|_{\Upsilon}}}{\Upsilon}{X\sp{\sharp}}$
of $\shortmap{\varphi|_{\Upsilon}}{\Upsilon}{Y\sp{\sharp}}$
that intersects $\Theta\sp{\sharp}_i$ transversely at the two points $(\pm 1/\sqrt{2}, 0,1)$.
Let $\shortmap{\Lift{\varphi|_{\Upsilon}}|_{\partial A_0}}{\partial A_0}{X\sp{\circ}}$ be
the restriction of $\Lift{\varphi|_{\Upsilon}}$ to $\partial \Upsilon=\partial A_0$.
Then the free loop pair
$$
\map{(\varphi|_{A_0}, \Lift{\varphi|_{\Upsilon}}|_{\partial A_0})}{(A_0, \partial A_0)}{(Y\sp{\circ}, X\sp{\circ})}
$$
is of monodromy relation type around $\Sigma_i\sp{\sharp}$.
\end{corollary}
\begin{corollary}\label{cor:Gamma}
Let $\shortmap{\delta}{\bar\Delta}{Y\sp{\sharp}}$ be a transversal disc around $\Sigma_i\sp{\sharp}$,
and let $\lift{\delta}$ and $\lift{\delta}\sp\prime$ be two transversal lifts of $\delta$.
We put $q:=\delta(1)$ and
$\lift{q}:=\lift{\delta}(1)\in F_q$,
$\lift{q}\sp\prime:=\lift{\delta}\sp\prime(1)\in F_q$.
Suppose that we are given a path
$\shortmap{\gamma_0}{I}{F_q}$
from $\lift{q}$ to $\lift{q}\sp\prime$.
Then we can deform $\gamma_0$ to a path $\gamma_t$ on $F_{\bdr_{\vexp}\delta(t)}$
from $\bdr_{\vexp}\lift\delta (t)$ to $\bdr_{\vexp}\lift\delta\sp\prime(t)$; that is,
we have a continuous map
$\shortmap{\Gamma}{I\times I}{X\sp{\sharp}}$
such that
$$
f(\Gamma(s, t))=\bdr_{\vexp}\delta(t),
\quad
\Gamma(s, 0)=\gamma_{0}(s),
\quad
\Gamma(0, t)=\bdr_{\vexp}\lift\delta (t),
\quad
\Gamma(1, t)=\bdr_{\vexp}\lift\delta\sp\prime (t),
$$
and $\gamma_t:=\Gamma|_{I\times\{t\}}$.
Consider the path $\gamma_1$
on ${F_q}$
from $\lift{q}$ to $\lift{q}\sp\prime$.
The conjunction $\gamma_0^{\phantom{1}}\gamma_1\sp{-1}$ is a loop on $F_q$,
which we write
$\shortmap{\gamma_0^{\phantom{1}}\gamma_1\sp{-1}}{\partial D}{F_q}$,
where $D$ is homeomorphic to $\bar\Delta$.
Then the free loop pair
$$
\map{(1_q, \gamma_0^{\phantom{1}}\gamma_1\sp{-1})}{(D, \partial D)}{(Y\sp{\circ}, X\sp{\circ})}
$$
is of monodromy relation type around $\Sigma_i\sp{\sharp}$.
\end{corollary}
Now we start the proof of Theorem~\ref{thm:ZvK}.
\begin{proof}[Proof of Theorem~\ref{thm:ZvK}]
By Proposition~\ref{prop:relisinKer},
we have $N^{[\rho]}\subset \operatorname{\rm Ker}\nolimits (\iota_*)$ for any $[\rho]\in \mathord{\mathcal L}$,
because the lasso $\lambda(\lift{\rho})$ is null-homotopic in $X$
for any transversal lift $\lift{\rho}$ of $\rho$.
Therefore $\mathord{\mathcal N}\subset \operatorname{\rm Ker}\nolimits (\iota_*)$ follows.
\par
\medskip
Let a loop $\gamma: (I, \bdr I)\to (F_b,\tilde{b})$ represent
an element $[\gamma]$ of $\operatorname{\rm Ker}\nolimits (\iota_*)$.
We will show that $[\gamma]\in \mathord{\mathcal N}$.
There exists a homotopy
$$
\map{h}{({I^2}, \sqcap)}{(X,\tilde{b})}
$$
from $\gamma$ to $1_{\tilde{b}}$ in $X$
stationary on $\partial I$;
that is,
$h|_{I\times\{0\}}=\gamma$ and $h|_{\sqcap}=1_{\tilde{b}}$,
where
$ \sqcap:=(\partial I\times I)\cup (I\times \{1\})\subset I^2$.
By the condition \cond{C1}, we can perturb $h$ so that
\begin{equation}\label{eq:hempty}
h({I^2})\cap \operatorname{\rm Sing}\nolimits (f)=\emptyset
\end{equation}
holds.
Since $(f\circ h)|_{\partial {I^2}}=1_b$,
the map $f\circ h: I^2\to Y$ represents an element of $\pi_2 (Y, b)$.
By the condition \cond{Z},
we have a continuous map
$$
\map{l}{({I^2},\partial{I^2})}{(Z, b)}
$$
such that $[f\circ h]+[i_Z\circ l]=0$ holds in $\pi_2 (Y, b)$,
where $i_Z: Z\hookrightarrow Y$ is the inclusion.
We then consider the continuous map
$\shortmap{s_Z\circ i_Z\circ l}{({I^2}, \partial{I^2})}{(X,\tilde{b})}$.
Replacing $h$ with
$\shortmap{h\sp\prime}{({I^2}, \sqcap)}{(X,\tilde{b})}$
defined by
$$
h\sp\prime (x, y):=\begin{cases}
h(x, 2y) & \textrm{if $2y\le 1$,}\\
s_Z\circ i_Z\circ l(x, 2y-1) & \textrm{if $2y\ge 1$,}
\end{cases}
$$
we have
\begin{equation}\label{eq:hpitwo}
[f\circ h]=0\quad\textrm{in}\quad \pi_2 (Y, b).
\end{equation}
(See Figure~\ref{fighsprime}.)
\input fighsprime
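Indeed, since $h\sp\prime$ is the juxtaposition of $h$ and $s_Z\circ i_Z\circ l$ in the second coordinate, and since $s_Z$ is a cross-section of $f$ (so that $f\circ s_Z\circ i_Z=i_Z$), we have
$$
[f\circ h\sp\prime]=[f\circ h]+[f\circ s_Z\circ i_Z\circ l]=[f\circ h]+[i_Z\circ l]=0
\quad\textrm{in}\quad \pi_2 (Y, b).
$$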
Moreover, since $s_Z(Z)\cap\operatorname{\rm Sing}\nolimits (f)=\emptyset$ by the condition~\cond{Z},
we still have~\eqref{eq:hempty}.
Then any small perturbation of $f\circ h$ can be lifted
to a small perturbation of $h$.
Since $\Xi$ is of codimension
$\ge 2$ in $Y$,
we can assume that $(f\circ h) ({I^2}) \cap \Sigma \subset \Sigma\sp{\sharp}$,
and that $f\circ h$ intersects $\Sigma\sp{\sharp}$ transversely
(see Definition~\ref{def:defs1}).
We put
$$
(f\circ h)\sp{-1} (\Sigma\sp{\sharp})=\{P_1, \dots, P_n\}\;\subset\; {I^2}\setminus \partial I^2.
$$
We will construct a continuous map
$$
\map{j}{V:=I^2\setminus (D_1\sp{\circ}\cup \dots \cup D_m\sp{\circ})}{X\sp{\sharp}}
$$
with the following properties:
\begin{itemize}
\item[\cond{j1}] $D_1, \dots, D_m$ are mutually disjoint closed discs in $I^2\setminus(\partial I^2 \cup \{P_1, \dots, P_n\})$,
and $D_\mu\sp{\circ}$ is the interior of $D_\mu$;
in particular,
$V$ contains $P_1$, \dots, $P_n$ in its interior,
\item[\cond{j2}] $j(\partial I^2)=\{\tilde{b}\}$,
\item[\cond{j3}] $f\circ j=f\circ h|_V$ holds, and hence we have
$j\sp{-1} (\Theta\sp{\sharp})=\{P_1, \dots, P_n\}$,
\item[\cond{j4}] $j$ intersects $\Theta\sp{\sharp}$ transversely at the points $P_\nu$ for $\nu=1, \dots, n$, and
\item[\cond{j5}] for each $D_\mu$, the free loop pair
$$
\map{((f\circ h)|_{D_\mu}, j|_{\partial D_\mu})}{(D_\mu, \partial D_\mu)}{(Y\sp{\circ}, X\sp{\circ})}
$$
is of monodromy relation type.
\end{itemize}
By~\eqref{eq:hpitwo},
there exists a homotopy
$$
\map{H}{({I^2}\times I, B)}{(Y, b)}
$$
from $f\circ h$ to $1_b$ that is stationary on $\partial{I^2}$;
that is,
$H|_{I^2\times\{0\}}=f\circ h$ and $H|_{B}=1_b$,
where
$$
B:=(\partial {I^2}\times I)\cup ({I^2}\times \{1\})\;\subset\; I^2\times I.
$$
Since $\Xi$ is of real codimension $\ge 4$ in $Y$,
we can perturb $H$ and assume
the following:
\begin{itemize}
\item[\cond{H1}] $H({I^2}\times I)\cap \Sigma$ is contained in $\Sigma\sp{\sharp}$,
\item[\cond{H2}] $H$ intersects $\Sigma\sp{\sharp}$ transversely
(in the sense of Definition~\ref{def:defs1}),
so that
$$
L:=H\sp{-1} (\Sigma\sp{\sharp})
$$
is a disjoint union of smooth real curves,
and
\item[\cond{H3}] the projection $\operatorname{\rm pr}\nolimits_L: L\to I$ to the second factor of $I^2\times I$
has only ordinary critical points in $L$;
that is, $\operatorname{\rm pr}\nolimits_L$ is a Morse function on $L$.
\end{itemize}
We have
$$
\partial L=L\cap (I^2\times\{0\})=(f\circ h)\sp{-1} (\Sigma\sp{\sharp})=\{P_1, \dots, P_n\}.
$$
Let $L_1, \dots, L_k$ be the connected components of $L$.
Then each $L_{\kappa}$ is a curve connecting two points
of $\{P_1, \dots, P_n\}$,
or a curve without boundary.
In particular, the cardinality $n$ of the finite set $(f\circ h)\sp{-1} (\Sigma\sp{\sharp})$ is even.
\par
\medskip
We denote by $p_1^+, \dots, p_l^+$ (resp.~$p_1^-, \dots, p_m^-$) the critical points in $L\setminus \partial L$
of the projection $\operatorname{\rm pr}\nolimits_L: L\to I$
at which the Morse function $\operatorname{\rm pr}\nolimits_L$ attains a local maximum (resp.~a local minimum),
and call them the \emph{positive (resp.~negative) critical points of $\operatorname{\rm pr}\nolimits_L$}.
(See Figure~\ref{figLT},
in which $L$ is drawn in thick curve.)
\par
\medskip
Let $\mathord{\mathbb T}$ and $A_\zeta$ be as in Corollary~\ref{cor:A}.
For each negative critical point $p_\mu^-$,
we can choose a continuous map
$$
\map{\tau_\mu}{\mathord{\mathbb T}}{I^2\times I}
$$
with the following properties:
\begin{itemize}
\item[\cond{$\tau$1}]
each $\tau_\mu$ is a homeomorphism onto its image $T_\mu:=\tau_\mu(\mathord{\mathbb T})$,
and $T_1, \dots, T_m$ are mutually disjoint,
\item[\cond{$\tau$2}]
there exists a strictly increasing function $t_\mu: I\to I$
with $t_\mu (0)=0$
that makes the following diagram commutative;
$$
\begin{array}{ccc}
\renewcommand{\arraystretch}{1.2}
\mathord{\mathbb T} & \maprightsp{\tau_\mu} & I^2\times I \\
\phantom{\Big\downarrow}\hskip -8pt \downarrow && \phantom{\Big\downarrow}\hskip -8pt \downarrow \\
I & \maprightsp{t_\mu} &\phantom{,}I,
\end{array}
$$
where the vertical arrows are the projections onto the last factors,
\item[\cond{$\tau$3}]
$\tau_\mu\sp{-1} (\partial (I^2\times I))=A_0$
and $\tau_\mu(A_0)\subset (I^2\setminus \partial I^2)\times\{0\}$,
\item[\cond{$\tau$4}]
$\tau_\mu\sp{-1} (L)=\shortset{(x,0,z)\in \mathord{\mathbb T}}{x^2+(z-1)^2=1/2}$
and $\tau_\mu(0, 0, 1-1/\sqrt{2})=p_\mu^-$,
so that $p_\mu^-$ is the only critical point of $\operatorname{\rm pr}\nolimits_L$ in $T_\mu\cap L$,
and
\item[\cond{$\tau$5}]
$\shortmap{H\circ(\tau_\mu|_{A_1})}{A_1}{Y\sp{\sharp}}$
intersects $\Sigma\sp{\sharp}$ transversely at $(\pm 1/\sqrt{2}, 0, 1)\in A_1$.
\end{itemize}
We put
$$
T :=T_1\cup\dots\cup T_m.
$$
(In Figure~\ref{figLT}, each $T_\mu$ is depicted by dashed curves.)
We also put
$$
\mathord{\mathbb T}\sp{\circ} :=\shortset{(x,y,z)\in \mathord{\mathbb T}}{x^2+y^2<1, z<1}
$$
(the union of the interior of $\mathord{\mathbb T}$ and the bottom open disc),
and
$$
T_\mu\sp{\circ}:=\tau_\mu(\mathord{\mathbb T}\sp{\circ}), \quad T\sp{\circ} :=T_1\sp{\circ}\cup\dots\cup T_m\sp{\circ}
\quad\rmand\quad
J:=(I^2\times I)\setminus T\sp{\circ}.
$$
Note that $J$ is the closure of $(I^2\times I)\setminus T$.
Then
$$
L\sp\prime:=L\cap J
$$
is a disjoint union of
smooth real curves $L\sp\prime_1, \dots, L\sp\prime_l$,
and each connected component $L_\lambda\sp\prime$ of $L\sp\prime$
contains exactly one positive critical point $p_\lambda\sp +$
in $L_\lambda\sp\prime \setminus \partial L_\lambda\sp\prime$.
Moreover,
each $L_\lambda\sp\prime$ has two boundary points $Q_\lambda$ and $Q_\lambda\sp\prime$,
each of which is either one point among $\{P_1, \dots, P_n\}$
or one of $\tau_\mu (\pm 1/\sqrt{2}, 0, 1)$ for some $\mu$.
\input figLT
If $Q_\lambda$ is one of $P_1, \dots, P_n$,
let $D(Q_\lambda)$ be a sufficiently small closed disc on $I^2\times\{0\}$
with the center $Q_\lambda$.
If $Q_\lambda$ is one of $\tau_\mu (\pm 1/\sqrt{2}, 0, 1)$,
let $D(Q_\lambda)$ be a sufficiently small closed disc on $\tau_\mu(A_1)$
with the center $Q_\lambda$.
We choose a closed disc $D(Q_\lambda\sp\prime)$
with the center $Q_\lambda\sp\prime$ in the same way.
Note that $\shortmap{H|_{D(Q_\lambda)}}{D(Q_\lambda)}{Y\sp{\sharp}}$ and
$\shortmap{H|_{D(Q_\lambda\sp\prime)}}{D(Q_\lambda\sp\prime)}{Y\sp{\sharp}}$ are the transversal discs
around the irreducible component
$\Sigma\sp{\sharp}_{i(\lambda)}$
of $\Sigma\sp{\sharp}$
that contains $H(p_\lambda\sp+)$.
Then,
for each $\lambda=1, \dots, l$, we have a tubular neighborhood
$$
\map{m_\lambda}{\bar\Delta\times I}{J}
$$
of $L_\lambda\sp\prime$ in $J$
with the following properties:
\begin{itemize}
\item[\cond{m1}]
each $m_\lambda$ is a homeomorphism onto its image $M_\lambda$,
and $M_1, \dots, M_l$ are mutually disjoint,
\item[\cond{m2}]
$m_\lambda\sp{-1} (L\sp\prime)=\{0\}\times I$
and
$m_\lambda(\{0\}\times I)=L\sp\prime_\lambda$,
\item[\cond{m3}]
$m_\lambda$ is differentiable and locally a submersion at each point of $\{0\}\times I$, and
\item[\cond{m4}]
$m_\lambda\sp{-1} (\partial J)=\bar\Delta\times \partial I$
and
$m_\lambda(\bar\Delta \times \{0\})=D(Q_\lambda)$,
$m_\lambda(\bar\Delta \times \{1\})=D(Q_\lambda\sp\prime)$.
\end{itemize}
Then the composite
$\shortmap{H\circ m_\lambda}{\bar\Delta\times I}{Y\sp{\sharp}}$
is an isotopy between the transversal discs
$H|_{D(Q_\lambda)}$ and $H|_{D(Q_\lambda\sp\prime)}$.
We put
$$
M:=M_1\cup \dots \cup M_l.
$$
Let $c_\lambda\in I$ be the real number such that $m_\lambda(0, c_\lambda)=p_\lambda^+$.
We choose a point $p_\lambda^{+\prime}$ on $m_\lambda(\partial\bar\Delta\times \{c_\lambda\})\subset \partial {M_\lambda}$
and a path
$$
\map{w_\lambda}{I}{J}
$$
from $p_\lambda^{+\prime}$ to a point $p_\lambda^{+\prime\prime}$ of $ I^2\times\{1\}$
with the following properties:
\begin{itemize}
\item[\cond{w1}]
each $w_\lambda$ is a homeomorphism onto its image $W_\lambda$,
and $W_1, \dots, W_l$ are mutually disjoint,
\item[\cond{w2}]
$w_\lambda\sp{-1} (M)=\{0\}$, $w_\lambda\sp{-1} (\partial J)=\{1\}$,
and
\item[\cond{w3}]
the composite
$\shortmap{\operatorname{\rm pr}\nolimits_2\circ w_\lambda}{I}{I}$ of $w_\lambda$ with the second
projection $I^2\times I\to I$ is strictly increasing.
\end{itemize}
We put
$$
W:=W_1\cup\dots\cup W_l.
$$
In Figure~\ref{figMW},
two of $M_{\lambda}\cup W_{\lambda}$ are illustrated.
The ceiling is $I^2\times \{1\}$,
from which $W_{\lambda}$ are dangling,
and the tubes are $M_\lambda$.
\input figMW
\par
\smallskip
The following fact is the crucial point in the construction of $j: V\to X\sp{\sharp}$:
\begin{equation}\label{eq:sdr}
\textrm{\emph{$B\cup M \cup W$ is a strong deformation retract of $J$}.}
\end{equation}
We choose transversal lifts
$\Lift{H|_{D(Q_\lambda)}}$ and $\Lift{H|_{D(Q_\lambda\sp\prime)}}$ of
the transversal discs $H|_{D(Q_\lambda)}$ and $H|_{D(Q_\lambda\sp\prime)}$
around $\Sigma\sp{\sharp}_{i(\lambda)}$, respectively.
Then
the isotopy $\shortmap{H\circ m_\lambda}{\bar\Delta\times I}{Y\sp{\sharp}}$
between
$H|_{D(Q_\lambda)}$ and $H|_{D(Q_\lambda\sp\prime)}$
lifts to an isotopy
between $\Lift{H|_{D(Q_\lambda)}}$ and $\Lift{H|_{D(Q_\lambda\sp\prime)}}$,
which yields
a lift $\Lift{H|_{M_\lambda}}$ of $H|_{M_\lambda}$.
Hence we obtain a lift
$$
\map{\Lift{H|_{M}}}{M}{X\sp{\sharp}}
$$
of $H|_{M}$.
We define a lift $\Lift{H|_B}$ of $H|_B$ to be the constant map $1_{\tilde{b}}$.
Then we can lift the path $H\circ w_\lambda$ to
a path from $\Lift{H|_{M}} (p_\lambda^{+\prime})$ to $\Lift{H|_B}(p_\lambda^{+\prime\prime})=\tilde{b}$,
and thus we obtain a lift
$$
\map{\Lift{H|_W}}{W}{X\sp{\sharp}}
$$
of $H|_W$.
Joining these three lifts together, we obtain a lift
$$
\map{\Lift{H|_{B\cup M \cup W}}}{B\cup M \cup W}{X\sp{\sharp}}
$$
of $H|_{B\cup M \cup W}$.
By the fact~\eqref{eq:sdr}, we can extend the lift $\Lift{H|_{B\cup M \cup W}}$
to a lift
$$
\map{\Lift{H|_J}}{J}{X\sp{\sharp}}
$$
of $H|_J$,
because the pull-back
$(H|_J)^*(f\sp{\sharp})$ of $\shortmap{f\sp{\sharp}}{ X\sp{\sharp}}{Y\sp{\sharp}}$
by $\shortmap{H|_J}{J}{Y\sp{\sharp}}$ is locally trivial over the complement of the interior of $M$ in $J$.
\par
\smallskip
Recall that the floor $I^2\times\{0\}$ of the source space $I^2\times I$ of $H$
is the source space $I^2$ of $f\circ h$.
For $\mu=1, \dots, m$,
we put
$$
D_\mu:=\tau_{\mu}(A_0).
$$
These $D_1, \dots, D_m$ satisfy the condition~\cond{j1}.
Then
$$
V:=I^2\setminus (D_1\sp{\circ}\cup \dots \cup D_m\sp{\circ})
$$
is identified with $J\cap(I^2\times \{0\})$.
We put
$$
j:=\Lift{H|_J}|_{V},
$$
which is a lift of $f\circ h|_V=H|_{V}$.
Hence $j$ satisfies~\cond{j3}.
It is obvious that $j$ satisfies~\cond{j1} and~\cond{j2}.
Since $\Lift{H|_{M}}$ is constructed as a union of isotopies
of transversal discs around $\Theta\sp{\sharp}$,
the continuous map
$$
\map{j|_{M\cap V}=\Lift{H|_{M}}|_{M\cap V}}{M\cap V}{X\sp{\sharp}}
$$
intersects $\Theta\sp{\sharp}$ transversely at each $P_{\nu}$.
Therefore $j$ satisfies~\cond{j4}.
By the properties \cond{$\tau$4} and~\cond{$\tau$5} of $\tau_\mu$ and
Corollary~\ref{cor:A},
we see that $j$ satisfies~\cond{j5}.
Thus we have constructed a continuous map $j: V\to X\sp{\sharp}$
which satisfies \cond{j1}\,-\,\cond{j5}, as desired.
\par
\medskip
For $\nu=1, \dots, n$,
we choose a sufficiently small closed disc $D_{m+\nu}$ with the center $P_\nu$
in $I^2\setminus \partial I^2$
in such a way that
the $m+n$ closed discs $D_1, \dots, D_{m+n}$ are mutually disjoint.
For each $\mu=1, \dots, m+n$, we choose a path
$$
\map{\alpha_{\mu}}{I}{{I^2}}
$$
from a point $R_\mu=(\rho_\mu, 0)\in I\times \{0\}$
to a point $S_\mu\in \partial D_\mu$
with the following properties:
\begin{itemize}
\item[\cond{$\alpha$1}] $0<\rho_{1}<\dots < \rho_{m+n}<1$,
\item[\cond{$\alpha$2}] each $\alpha_\mu$ is injective and the images
$\alpha_\mu (I)$ $(\mu=1, \dots, m+n)$ are mutually disjoint, and
\item[\cond{$\alpha$3}]
$\alpha_\mu\sp{-1} (\partial{I^2})=\{0\}$, $\alpha_\mu\sp{-1} ( D_\mu)=\{1\}$,
and $\alpha_\mu\sp{-1} (D_{\mu\sp\prime})=\emptyset$ if $\mu\ne \mu\sp\prime$.
\end{itemize}
In Figure~\ref{figAlphas},
the paths $\alpha_\mu$ are illustrated by thick curves.
\input figAlphas
Then there exists a continuous map
$$
\map{\ell}{{{\mathord{\bf I}}^2}}{{I^2}}
$$
with the following properties,
where $\mathord{\bf I}:=I=[0, 1]\subset \mathord{\mathbb R}$.
(We use the boldface $\mathord{\bf I}$ to distinguish the source plane $\mathord{\bf I}^2$ and the target plane $I^2$ of $\ell$.)
\begin{itemize}
\item[\cond{$\ell$1}]
$\ell$ induces a homeomorphism from ${\mathord{\bf I}}^2\setminus \partial {\mathord{\bf I}}^2$
to
$$
I^2\setminus \left(\partial I^2 \cup \bigcup_{\mu=1}^{m+n} ( D_\mu\cup \alpha_{\mu}(I))\right),
$$
\item[\cond{$\ell$2}]
if $(x, y)\in \sqcap:=(\partial {\mathord{\bf I}}\times {\mathord{\bf I}})\cup ({\mathord{\bf I}}\times \{1\})$, then
$\ell (x, y)=(x, y)$, and
\item[\cond{$\ell$3}]
there exist real numbers $c_{\mu}, d_{\mu}, d_{\mu}\sp\prime, c_{\mu}\sp\prime\in \mathord{\bf I}$
for $\mu=1, \dots, m+n$ with
$$
\begin{array}{cccccccccccc}
0&<& c_{1}&<& d_{1}&<& d_{1}\sp\prime&<& c_{1}\sp\prime&<&\\
&<& c_{2}&<& d_{2}&<& d_{2}\sp\prime&<& c_{2}\sp\prime&<&\\
&&&&& \dots&&&&&\\
&<&c_{m+n}&<& d_{m+n}&<& d_{m+n}\sp\prime&<& c_{m+n}\sp\prime &<&1
\end{array}
$$
such that the following hold:
\begin{itemize}
\item
$\ell (c_\mu, 0)=\ell (c_\mu\sp\prime, 0)=R_{\mu}\in I\times \{0\}$,
$\ell (d_\mu\sp\prime, 0)=\ell (d_\mu, 0)=S_{\mu}\in \partial D_\mu$,
\item
$\ell|_{[c_\mu, d_\mu]\times\{0\}}$ is equal to
$\alpha_\mu$ via a parameter change
$[c_\mu, d_\mu]\cong I$, and
$\ell|_{[d_\mu\sp\prime, c_\mu\sp\prime]\times\{0\}}$ is equal to
$\alpha_\mu\sp{-1}$ via a parameter change
$[d_\mu\sp\prime, c_\mu\sp\prime]\cong I$,
\item
$\ell|_{[d_\mu, d_\mu\sp\prime]\times\{0\}}$ is the loop
that goes from $S_{\mu}$ to $S_{\mu}$ along $\partial D_\mu$
clockwise, and
\item
$\ell|_{[c_{\mu-1}\sp\prime, c_\mu]\times\{0\}}$
is equal to the path
$[\rho_{\mu-1}, \rho_{\mu}]\to {I\times\{0\}}$
given by $t\mapsto (t, 0)$
via a parameter change $[c_{\mu-1}\sp\prime, c_\mu]\cong [\rho_{\mu-1}, \rho_{\mu}]$,
where we put $\rho_{0}:=0, c_{0}\sp\prime:=0$
and $\rho_{m+n+1}:=1, c_{m+n+1}:=1$.
\end{itemize}
\end{itemize}
\input figEll
(See Figure~\ref{figEll}.)
Since the image of $\ell$ is contained in $V$ and is disjoint from $\{P_1, \dots, P_n\}$,
we have continuous maps
$$
\shortmap{j\circ \ell}{{\mathord{\bf I}}^2}{X\sp{\circ}}
\quad\rmand\quad
\shortmap{h\circ \ell}{{\mathord{\bf I}}^2}{X\sp{\circ}}
$$
to $X\sp{\circ}$.
They satisfy
$$
f\sp{\circ} \circ j \circ \ell = f\sp{\circ} \circ h \circ \ell
$$
by the property~\cond{j3}.
By the properties~\cond{j2} and~\cond{$\ell$2},
they also satisfy
$$
j \circ \ell|_\sqcap =1_{\tilde{b}}
\quad\rmand\quad
h \circ \ell|_\sqcap =1_{\tilde{b}}.
$$
We then define $G: {{\mathord{\bf I}}^2}\times {\mathord{\bf I}} \to Y\sp{\circ}$ by the composition
$$
G\;:\;
{{\mathord{\bf I}}^2}\times {\mathord{\bf I}}\;\maprightsp{\operatorname{\rm pr}\nolimits_1}\;
{{\mathord{\bf I}}^2} \;\maprightsp{f\sp{\circ} \circ j \circ \ell = f\sp{\circ} \circ h \circ \ell}\;
Y\sp{\circ},
$$
where $\operatorname{\rm pr}\nolimits_1$ is the first projection.
We put
$$
C:=
({{\mathord{\bf I}}^2}\times \partial {\mathord{\bf I}})\cup (\sqcap \times {\mathord{\bf I}})
\;\subset\;
{{\mathord{\bf I}}^2}\times {\mathord{\bf I}},
$$
and
define a lift
$$
\map{(G|_C)^{\sim}}{C}{X\sp{\circ}}
$$
of $G|_C: C\to Y\sp{\circ}$ by the following:
$$
(G|_C)^{\sim} (x, y, z)
:=
\begin{cases}
h(\ell(x, y)) & \textrm{if $z=0$, }\\
j(\ell(x, y)) & \textrm{if $z=1$, }\\
\tilde{b} & \textrm{if $(x, y, z)\in \sqcap \times {\mathord{\bf I}}$. }
\end{cases}
$$
Since $\shortmap{f\sp{\circ}}{X\sp{\circ}}{Y\sp{\circ}}$ is locally trivial and
$C$ is a strong deformation retract of ${{\mathord{\bf I}}^2}\times {\mathord{\bf I}}$,
the map $(G|_C)^{\sim}$ extends to a lift
$$
\map{\lift{G}}{{{\mathord{\bf I}}^2}\times {\mathord{\bf I}}}{X\sp{\circ}}
$$
of $\shortmap{G}{ {{\mathord{\bf I}}^2}\times {\mathord{\bf I}}}{Y\sp{\circ}}$.
By construction,
for $(x, y)\in {{\mathord{\bf I}}^2}$, the restriction of $\lift{G}$
to $\{(x, y)\}\times {\mathord{\bf I}}$ is a path
in the fiber
$$
F_{f\circ h\circ \ell (x, y)}=F_{f\circ j\circ \ell (x, y)}
$$
from the point $h\circ \ell (x, y)$ to the point $j\circ \ell (x, y)$.
For $x\in {\mathord{\bf I}}$, we put
$$
F_{[x]} :=F_{f\circ h\circ \ell (x, 0)}=F_{f\circ j\circ \ell (x, 0)},
\quad\rmand\quad
\map{\xi_{[x]}:=\lift{G}|_{\{(x, 0)\}\times {\mathord{\bf I}}} }{{\mathord{\bf I}}}{F_{[x]}}.
$$
Suppose that $x\notin \textstyle{\bigcup}_{\mu=1}^{m+n}[c_{\mu}, c\sp\prime_{\mu}]$,
so that
$$
(x\sp\prime, 0):=\ell (x, 0)\in I\times \{0\}.
$$
By~\cond{j2}, we see that
$F_{[x]}$ is equal to $F_b$ and
$\xi_{[x]}$ is a path in $F_b$ from $h(x\sp\prime, 0)=\gamma(x\sp\prime)$ to $j(x\sp\prime, 0)=\tilde{b}$.
Moreover, we have
$\xi_{[0]}=\xi_{[1]}=1_{\tilde{b}}$ because $\lift{G} |_{\sqcap\times\mathord{\bf I}}=1_{\tilde{b}}$.
Therefore, for $\mu=0, 1, \dots, m+n$, the path
$$
\map{\gamma_{\mu}:=\gamma|_{[\rho_{\mu}, \rho_{\mu+1}]}=h|_{[\rho_{\mu}, \rho_{\mu+1}]\times\{0\}}}{[\rho_{\mu}, \rho_{\mu+1}]}{F_b}
$$
is homotopic to the path
$\xi_{[c_{\mu}\sp\prime]}\sp{\phantom{-1}} \xi_{[c_{\mu+1}]}\sp{-1} $ in $F_b$,
because the boundary of $\lift{G}|_{[c\sp\prime_\mu, c_{\mu+1}]\times\{0\}\times\mathord{\bf I}}$
is the loop
$\xi_{[c_{\mu}\sp\prime]}\sp{\phantom{-1}} \cdot 1_{\tilde{b}} \cdot \xi_{[c_{\mu+1}]}\sp{-1} \cdot \gamma_{\mu}\sp{-1}$
in $F_b$,
where ${[c\sp\prime_\mu, c_{\mu+1}]\times\{0\}\times\mathord{\bf I}}\cong I^2$ is oriented and segmented as in Figure~\ref{figorientation}.
Since $\gamma$ is the conjunction
$\gamma_{0}\gamma_{1}\dots\gamma_{m+n}$,
the homotopy class $[\gamma]\in \pi_1 (F_b,\tilde{b})$ is equal to
$$
[\;\xi_{[c_{0}\sp\prime]}\sp{\phantom{-1}} \xi_{[c_{1}]}\sp{-1}
\xi_{[c_{1}\sp\prime]}\sp{\phantom{-1}} \xi_{[c_{2}]}\sp{-1}
\dots \xi_{[c_{m+n}\sp\prime]}\sp{\phantom{-1}} \xi_{[c_{m+n+1}]}\sp{-1}\;]
\;\;=\;\;
[\xi_{[c_{1}]}\sp{-1} \xi_{[c_{1}\sp\prime]}\sp{\phantom{-1}} ]
\cdot
[\xi_{[c_{2}]}\sp{-1} \xi_{[c_{2}\sp\prime]}\sp{\phantom{-1}} ]
\cdot\cdots \cdot
[\xi_{[c_{m+n}]}\sp{-1} \xi_{[c_{m+n}\sp\prime]}\sp{\phantom{-1}}].
$$
Here the second equality holds because $\xi_{[c_{0}\sp\prime]}=\xi_{[0]}$ and $\xi_{[c_{m+n+1}]}=\xi_{[1]}$ are both the constant path $1_{\tilde{b}}$.
(See Figure~\ref{figGammaXi}.)
Note that $\xi_{[c_{\mu}]}\sp{-1} \xi_{[c_{\mu}\sp\prime]}\sp{\phantom{-1}}$
is a loop in $F_b$ with the base point $\tilde{b}$.
It is enough to show that
each
$[\xi_{[c_{\mu}]}\sp{-1} \xi_{[c_{\mu}\sp\prime]}\sp{\phantom{-1}}]\in \pi_1 (F_b, \tilde{b})$
is contained in $N^{[\rho]}$
for some transversal disc $\rho$ around an irreducible component
of $\Sigma\sp{\sharp}$.
\input figGammaXi
\par
\medskip
Consider the path
$$
\map{\lift{\alpha}_\mu:=j\circ \alpha_\mu}{I}{X\sp{\circ}}
$$
from $\tilde{b}$ to $\lift{q}_\mu:=j(S_\mu)\in F_{q_\mu}$,
where $q_\mu:=f(j(S_\mu))=f(h(S_\mu))$,
and the induced isomorphism
$$
\mapisom{[\lift{\alpha}_\mu]_*}{\pi_1(F_b, \tilde{b})}{\pi_1(F_{q_\mu}, \lift{q}_\mu)}.
$$
This isomorphism maps
$[\xi_{[c_{\mu}]}\sp{-1} \xi_{[c_{\mu}\sp\prime]}\sp{\phantom{-1}}]\in \pi_1 (F_b, \tilde{b})$
to
$$
[\xi_{[d_{\mu}]}\sp{-1} \xi_{[d_{\mu}\sp\prime]}\sp{\phantom{-1}}]\in \pi_1 (F_{q_\mu}, \lift{q}_\mu).
$$
(See Figure~\ref{figLiftAlphas}.)
We consider $\xi_{[d_{\mu}]}\sp{-1} \xi_{[d_{\mu}\sp\prime]}\sp{\phantom{-1}}$ as a free loop
$\partial\bar\Delta\to F_{q_\mu}$ in $F_{q_\mu}$.
It is enough to show that the free loop pair
$$
\map{(1_{q_\mu}, \xi_{[d_{\mu}]}\sp{-1} \xi_{[d_{\mu}\sp\prime]}\sp{\phantom{-1}})}{(\bar\Delta, \partial \bar\Delta)}{(Y\sp{\circ}, X\sp{\circ})}
$$
is of monodromy relation type.
\input figLiftAlphas
\par
\medskip
Suppose that $\mu>m$, so that $D_{\mu}$ is a disc with the center $P_{\mu-m}
\in (f\circ h)\sp{-1} (\Sigma\sp{\sharp})$.
Then $(1_{q_\mu}, \xi_{[d_{\mu}]}\sp{-1} \xi_{[d_{\mu}\sp\prime]}\sp{\phantom{-1}})$ is of monodromy relation type
by Corollary~\ref{cor:Gamma}.
Suppose that $\mu\le m$.
By~\cond{j5},
it is enough to show that the free loop pair
$(1_{q_\mu}, \xi_{[d_{\mu}]}\sp{-1} \xi_{[d_{\mu}\sp\prime]}\sp{\phantom{-1}})$ is homotopic to
the free loop pair
$$
\map{((f\circ h)|_{D_\mu}, j|_{\partial D_\mu})}{(D_\mu, \partial D_\mu)}{(Y\sp{\circ}, X\sp{\circ})}
$$
under a suitable homeomorphism $\bar\Delta\cong D_\mu$.
We put
$$
\map{l_\mu:=\ell|_{[d_\mu, d_\mu\sp\prime]\times\{0\}}}{[d_\mu, d_\mu\sp\prime]}{\partial D_\mu}.
$$
Consider the continuous map
$$
\map{\zeta_\mu}{[d_\mu, d_\mu\sp\prime]\times I}{X\sp{\circ}}
$$
given by $\zeta_\mu(x, t):=\xi_{[x]}(t)$.
With the base point and the orientation on the boundary of $[d_\mu, d_\mu\sp\prime]\times I$
given in Figure~\ref{figdd},
the boundary of $\zeta_\mu$ is equal to the loop
$$
\xi_{[d_\mu]}\sp{-1} \cdot (h\circ l_\mu)\cdot \xi_{[d\sp\prime _\mu]}\sp{\phantom{-1}} \cdot (j\circ l_\mu)\sp{-1}
$$
with the base point $\lift{q}_\mu$.
Since the free loop $h\circ l_\mu$
is the boundary of $h|_{D_\mu}$,
it is null-homotopic in $X\sp{\circ}$.
Hence the free loop $\xi_{[d_\mu]}\sp{-1} \cdot \xi_{[d\sp\prime _\mu]}\sp{\phantom{-1}}$
is homotopic to the free loop $j\circ l_\mu$ in $X\sp{\circ}$.
It can be easily seen that we can construct a homotopy
of free loops from
$j|_{\partial D_\mu}=j\circ l_\mu$ to $\xi_{[d_\mu]}\sp{-1} \cdot \xi_{[d\sp\prime _\mu]}\sp{\phantom{-1}}$
in $X\sp{\circ}$ as a lift of the restriction to $\partial D_\mu$
of a contraction from $f(h(D_\mu))$ to $q_\mu$,
because $f(h(D_\mu)) \subset Y\sp{\circ}$ holds for $\mu\le m$.
Hence $(1_{q_\mu}, \xi_{[d_{\mu}]}\sp{-1} \xi_{[d_{\mu}\sp\prime]}\sp{\phantom{-1}})$ is homotopic to
$((f\circ h)|_{D_\mu}, j|_{\partial D_\mu})$.
\input figdd
\end{proof}
The following is a semi-classical version of Theorem~\ref{thm:ZvK}.
\begin{theorem}\label{thm:C}
Suppose that the conditions~\cond{C1} and~\cond{C2} are satisfied.
Suppose also that there exist a reduced connected curve $C$
(possibly singular and/or reducible and not necessarily closed) on $Y$
and a continuous cross-section
$$
\map{s_C}{C}{f\sp{-1} (C)}
$$
of $f$ over $C$
with the following properties:
\begin{itemize}
\item
$C\sp{\circ}:=C\cap Y\sp{\circ}$ is non-empty and connected, and the inclusion
$C\sp{\circ}\hookrightarrow Y\sp{\circ}$ induces a surjection
$\pi_1(C\sp{\circ}, b)\mathbin{\to \hskip -7pt \to} \pi_1(Y\sp{\circ}, b)$,
where $b\in C\sp{\circ}$ is a base point,
\item
the inclusion $C\hookrightarrow Y$ induces a surjection
$\pi_2(C, b)\mathbin{\to \hskip -7pt \to} \pi_2(Y, b)$,
\item $s_C(C)\cap \operatorname{\rm Sing}\nolimits (f)=\emptyset$, and
\item
for each irreducible component $\Sigma_i$ of $\Sigma$ with codimension $1$ in $Y$,
there exists a point $p_i\in C\cap \Sigma_i$
satisfying the following:
\begin{itemize}
\item $C$ and $\Sigma$ are smooth at $p_i$,
and $C$ intersects $\Sigma_i$ transversely at $p_i$,
\item the cross-section $s_C$ is holomorphic at $p_i$.
\end{itemize}
\end{itemize}
By the cross-section $s_C$, we have the classical monodromy action
$$
\pi_1 (C\sp{\circ}, b)\;\to\;\operatorname{\rm Aut}\nolimits (\pi_1 (F_b, \tilde{b})),
\quad\textrm{where}\quad \tilde{b}:=s_C(b)\in F_b:=f\sp{-1} (b),
$$
which we denote by $g\mapsto g^u$ for $u\in \pi_1 (C\sp{\circ}, b)$.
Then $\operatorname{\rm Ker}\nolimits (\iota_*)$ is equal to
$$
N_K:=\gen{\;\shortset{g\sp{-1} g^u}{g\in \pi_1 (F_b, \tilde{b}), u\in K}\;},
$$
where $K\subset \pi_1 (C\sp{\circ}, b)$
is the kernel of $\pi_1(C\sp{\circ}, b)\to \pi_1(C, b)$
induced by the inclusion.
\end{theorem}
\begin{proof}
First of all, remark that
the condition \cond{Z} is
satisfied
with $C$ and $s_C$ being $Z$ and $s_Z$ in the condition \cond{Z},
and hence $\operatorname{\rm Ker}\nolimits (\iota_*)$ is equal to $\mathord{\mathcal N}$.
\par
\medskip
Let $\shortmap{\gamma}{(I, \partial I)}{(C\sp{\circ}, b)}$ be a loop that
represents an element $u$ of $K$.
We have a homotopy $h$ on $C$ from $\gamma$ to $1_b$ that is stationary on $\partial I$.
Then $s_C\circ h$
is a homotopy on $X$ from $s_C\circ \gamma$ to $1_{\tilde{b}}$.
By definition,
the classical monodromy action by $u$
is equal to the lifted monodromy action by
$[s_C\circ \gamma]\in \pi_1(X\sp{\circ}, \tilde{b})$.
Since $s_C\circ \gamma$ is null-homotopic in $X$, we see that
$g\sp{-1} g^u=g\sp{-1} g^{\mu([s_C\circ \gamma])}$
is contained in $\operatorname{\rm Ker}\nolimits (\iota_*)$ by Proposition~\ref{prop:relisinKer}.
Thus $N_K\subset \operatorname{\rm Ker}\nolimits(\iota_*)$ is proved.
\par
\medskip
In order to prove $\mathord{\mathcal N}=\operatorname{\rm Ker}\nolimits(\iota_*)\subset N_K$,
it is enough to show that,
for any leashed disc $\rho=(\delta, \eta)$
around an irreducible component $\Sigma\sp{\sharp}_i$ of $\Sigma\sp{\sharp}$ in
$Y\sp{\sharp}$,
the normal subgroup
$N^{[\rho]}$ is contained in $N_K$.
We have a point $p_i$ of $C\cap \Sigma_i$
at which $C$ and $\Sigma$ are smooth and intersect transversely.
Let
$$
\mapinj{\delta_{i, C}}{\bar\Delta}{C}
$$
be a sufficiently small closed disc on $C$
such that $\delta_{i, C} (0)=p_i$.
Since $s_C$ is holomorphic at $p_i$
and $s_C(p_i)\notin \operatorname{\rm Sing}\nolimits (f)$ by the assumption,
$\Theta:=f\sp{-1} (\Sigma)$ is smooth at $s_C(p_i)$,
and $s_C\circ \delta_{i, C}$ intersects $\Theta$ at $s_C(p_i)$ transversely.
If $p_i\in \Xi$,
then we perturb $\delta_{i, C}$ to
a $\CCC^\infty$-map $\shortmap{\delta_{i, C}\sp\prime}{\bar\Delta}{Y\sp{\sharp}}$
such that
$\delta_{i, C}|_{\partial\bar\Delta}=\delta_{i, C}\sp\prime|_{\partial\bar\Delta}$.
If $p_i\notin \Xi$, then we put $\delta_{i, C}\sp\prime:=\delta_{i, C}$.
Then $\delta_{i, C}\sp\prime$ is a transversal disc around $\Sigma_i\sp{\sharp}$
such that $\delta_{i, C}\sp\prime (\partial\bar\Delta)\subset C\sp{\circ}$.
Since $s_C(p_i)\notin \operatorname{\rm Sing}\nolimits (f)$,
we can lift the perturbation from $\delta_{i, C}$ to $\delta_{i, C}\sp\prime$
to a perturbation
from $s_C\circ \delta_{i, C}$ to
$$
\mapinj{\lift{\delta}_{i, C}\sp\prime}{\bar\Delta}{X\sp{\sharp}}
$$
in such a way that
$$
\lift{\delta}_{i, C}\sp\prime|_{\partial\bar\Delta}
=s_C\circ {\delta}_{i, C}\sp\prime|_{\partial\bar\Delta}
=s_C\circ {\delta}_{i, C}|_{\partial\bar\Delta},
$$
and that
$\lift{\delta}_{i, C}\sp\prime$ is a transversal lift of $\delta_{i, C}\sp\prime$
around $\Theta\sp{\sharp}_i$.
The transversal disc $\delta$ of the given leashed disc $\rho=(\delta, \eta)$
is isotopic to $\delta_{i, C}\sp\prime$ (Proposition~\ref{prop:rho}).
Hence $\rho$ is isotopic to a leashed disc
$$
\rho\sp\prime=(\delta_{i, C}\sp\prime, \eta\sp\prime)
$$
for some path $\eta\sp\prime$ on $Y\sp{\circ}$ from $\delta_{i, C}(1)=\delta_{i, C}\sp\prime(1)\in C\sp{\circ}$ to $b$.
Since $C\sp{\circ}$ is connected,
there exists a path $\zeta$ on $C\sp{\circ}$ from $b$ to $\eta\sp\prime(0)=\delta_{i, C}(1)$.
Then $\zeta\eta\sp\prime$ is a loop on $Y\sp{\circ}$
with the base point $b$.
Since the inclusion $C\sp{\circ}\hookrightarrow Y\sp{\circ}$
induces a surjection $\pi_1 (C\sp{\circ}, b)\mathbin{\to \hskip -7pt \to} \pi_1 (Y\sp{\circ}, b)$,
there exists a loop $\xi$ on $C\sp{\circ}$ with the base point $b$
that is homotopic to $\zeta\eta\sp\prime$ in $Y\sp{\circ}$.
Then $\rho=(\delta, \eta)$ is isotopic to the leashed disc
$$
\rho_C:=(\delta_{i, C}\sp\prime, \zeta\sp{-1}\xi).
$$
Note that $\zeta\sp{-1}\xi$ is a path on $C\sp{\circ}$.
Since $\lift{\delta}_{i, C}\sp\prime(1)=s_C(\delta_{i, C}\sp\prime(1))$, the pair
$$
\lift{\rho}_C:=(\lift{\delta}_{i, C}\sp\prime, s_C\circ(\zeta\sp{-1}\xi))
$$
is a leashed disc,
which is a transversal lift of $\rho_C$.
Hence $N^{[\rho]}$ is generated by the monodromy relations
$g\sp{-1} g^{\mu([\lambda(\lift{\rho}_C)])}$ along $[\lambda(\lift{\rho}_C)]$.
Note that the lasso $\lambda (\rho_C)$ is a loop on $C\sp{\circ}$
that is null-homotopic in $C$,
so that we have
$[\lambda (\rho_C)]\in K$.
Because
$s_C\circ \lambda (\rho_C)=\lambda(\lift{\rho}_C)$,
the generators $g\sp{-1} g^{\mu([\lambda(\lift{\rho}_C)])}$
of $N^{[\rho]}$ are contained in $N_K$.
\end{proof}
We give a sufficient condition under which
$N^{[\rho]}=1$ holds
for one (and hence any) leashed disc $\rho$ around $\Sigma_i\sp{\sharp}$.
(See Corollary~\ref{cor:foranyrho}.)
\par
\medskip
Suppose that
$X$ is the complement to a reduced hypersurface
$W$ in a smooth variety $\ol{X}$,
and that $f$ is the restriction to $X$
of a \emph{projective} morphism $\bar{f}: \ol{X}\to Y$.
For $y\in Y$,
we put $\ol{F}_y:=\bar{f}\sp{-1} (y)$,
and denote by $W_y$ the \emph{scheme-theoretic} intersection of $\ol{F}_y$ with $W$.
Let $\operatorname{\rm Sing}\nolimits (\bar{f})\subset \ol{X}$ be the Zariski closed subset of
critical points of $\bar{f}$.
\begin{proposition}\label{prop:proj}
We assume the conditions~\cond{C1} and~\cond{C2}.
Suppose that,
for a general point $y$ of $\Sigma_i$,
the intersection $\ol{F}_{y}\cap \operatorname{\rm Sing}\nolimits (\bar{f})$
is of codimension $\ge 2$ in $\ol{F}_{y}$
and $W_{y}\setminus (W_{y}\cap \operatorname{\rm Sing}\nolimits (\bar{f}))$
is a reduced hypersurface of $\ol{F}_{y}\setminus (\ol{F}_y\cap \operatorname{\rm Sing}\nolimits (\bar{f}))$.
Then $N^{[\rho]}=1$ holds
for a leashed disc $\rho$ around $\Sigma_i\sp{\sharp}$.
\end{proposition}
\begin{proof}
Let $y_0$ be a general point of $\Sigma_i$,
and
let $U\subset Y$ be a sufficiently small
contractible neighborhood of $y_0$.
Since $\bar f$ is projective,
there exists an embedding over $U$
of $\bar{f}\sp{-1} (U)$ into $\P^N\times U$;
$$
\begin{array}{ccccc}
\bar{f}\sp{-1} (U) && \hookrightarrow && \P^N\times U \\
&\searrow &&\swarrow & \\
&&U.&&
\end{array}
$$
By this embedding,
we consider each $\ol{F}_y$ for $y\in U$ as a closed subscheme of $\P^N$
of dimension $\dim X-\dim Y$.
We choose a general linear subspace $P\subset \P^N$
of codimension $\dim \ol{F}_y-1$.
By the assumption
$\dim (\ol{F}_y\cap \operatorname{\rm Sing}\nolimits (\bar{f}))\le \dim \ol{F}_y -2$ for any $y\in U\cap\Sigma_i$,
we have $(P\times U)\cap \operatorname{\rm Sing}\nolimits (\bar f) =\emptyset$ and we can assume that
$P\cap\ol{F}_y$
is a smooth projective curve for any $y\in U$.
By the assumption on $W_y$,
we see that $P\cap W_y$ is a reduced divisor of $P\cap\ol{F}_y$
whose degree is independent of $y\in U$.
Hence the family
$$
P\cap F_y=P\cap (\ol{F}_y\setminus W_y)\qquad (y\in U)
$$
of punctured Riemann surfaces is trivial (in the $\CCC^\infty$-category) over $U$.
Let $\shortmap{\delta}{\bar\Delta}{Y\sp{\sharp}}$
be a transversal disc around $\Sigma_i\sp{\sharp}$
such that $\delta(\bar\Delta)\subset U$.
Then we have a transversal lift
$\shortmap{\lift{\delta}}{\bar\Delta}{X\sp{\sharp}}$
of $\delta$
such that $\lift{\delta}(z)\in P\cap F_{\delta(z)}$
holds for any $z\in \bar\Delta$.
We put
$$
q:=\delta (1),
\qquad \lift{q}:=\lift{\delta}(1)\in P\cap F_{q}.
$$
The lifted monodromy of $[\bdr_{\vexp} \lift\delta]$ on $\pi_1(P\cap F_{q}, \lift{q})$
is trivial.
On the other hand,
the inclusion $P\cap F_{q}\hookrightarrow F_{q}$ induces a surjective homomorphism
$$
\pi_1 (P\cap F_{q}, \lift{q}) \mathbin{\to \hskip -7pt \to} \pi_1 ( F_{q}, \lift{q})
$$
by the Lefschetz-Zariski hyperplane section theorem.
(See, for example,~\cite{MR932724} or~\cite{MR820315}.)
Hence the lifted monodromy of $[\bdr_{\vexp} \lift\delta]$ on $\pi_1(F_{q}, \lift{q})$
is also trivial.
\end{proof}
We prove the two corollaries stated in the Introduction.
\begin{proof}[Proof of Corollary~\ref{cor:RRReq}]
Since the lasso of any transversal lift of a leashed disc on $Y\sp{\sharp}$
around $\Sigma_i\sp{\sharp}$ is null-homotopic in $X$,
we have $\mathord{\mathcal N}\subset \mathord{\mathcal R}$.
Hence
Corollary~\ref{cor:RRReq} follows
from Theorem~\ref{thm:ZvK}, Proposition~\ref{prop:relisinKer} and Nori's lemma~(Proposition~\ref{prop:nori} and Remark~\ref{rem:C0C3}).
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:proj}]
It is enough to show that $f$ satisfies the condition~\cond{C2},
and that,
for each $\Sigma_i$, $N^{[\rho]}=1$ holds
for a leashed disc $\rho$ around $\Sigma_i\sp{\sharp}$.
Since $f$ is projective and the general fiber is connected,
every fiber of $f$ is non-empty and connected.
Suppose that $F_y$ is reducible for a general point $y$
of some irreducible hypersurface $\Sigma\sp\prime$ of $Y$.
Let $\Delta\subset Y$ be a small open disc
intersecting $\Sigma\sp\prime$ transversely at $y$
such that $f\sp{-1} (\Delta)$ is smooth.
Then $F_y$ is a reducible hypersurface of $f\sp{-1} (\Delta)$.
Since $F_y$ is connected and projective,
there exist distinct irreducible components $F_y\sp\prime$ and $F_y\sp{\prime\prime}$
of $F_y$ that intersect.
Since $F_y\sp\prime\cap F_y\sp{\prime\prime}$ is of codimension $2$ in $f\sp{-1} (\Delta)$,
we obtain a contradiction to the assumption that $\operatorname{\rm Sing}\nolimits (f)$ is of codimension $\ge 3$ in $X$.
Thus the condition~\cond{C2} is satisfied.
Let $y$ be a general point of $\Sigma_i$.
By the assumption that $\operatorname{\rm Sing}\nolimits (f)$ is of codimension $\ge 3$ in $X$,
we see that $F_y\cap \operatorname{\rm Sing}\nolimits (f)$ is of codimension $\ge 2$ in $F_y$.
Applying Proposition~\ref{prop:proj}
to the case where $W=\emptyset$ and $X=\ol{X}$, we obtain $N^{[\rho]}=1$
for a leashed disc $\rho$ around $\Sigma_i\sp{\sharp}$.
\end{proof}
\section{Proof of Theorem~\ref{thm:ULZvK}}\label{sec:proof1}
\begin{proof}[Proof of Theorem~\ref{thm:ULZvK}]
We assume $k\le n-2$, where $n$ is the dimension of the
smooth non-degenerate projective variety $X\subset \P^N$.
We
put
$$
\mathord{\mathcal U}_k (X,\P^N,(\PN)\dual):=\set{(L, t)\in \mathord{U}_k(X,\P^N)\times (\PN)\dual}{L\subset H_t},
$$
and consider the projection
$$
\map{f_{(\PN)\dual}}{\mathord{\mathcal U}_k (X,\P^N,(\PN)\dual)}{(\PN)\dual}.
$$
Then the fiber of $f_{(\PN)\dual}$ over $t\in (\PN)\dual$ is canonically identified with
$\mathord{U}_k(Y_t, H_t)$, where $Y_t=X\cap H_t$.
The morphism
$$
\map{f_{\Lambda}}{\mathord{\mathcal U}_k (X,\P^N,\Lambda)}{\Lambda}
$$
defined in the Introduction is the pull-back of
$f_{(\PN)\dual}$ by the inclusion $\Lambda\hookrightarrow (\PN)\dual$.
Consider the following diagram:
$$
\renewcommand{\arraystretch}{1.4}
\begin{array}{ccccc}
\mathord{\mathcal U}_k (X,\P^N,\Lambda) &\hookrightarrow & \mathord{\mathcal U}_k (X,\P^N,(\PN)\dual) &\maprightsp{\operatorname{\rm pr}\nolimits_1} & \mathord{U}_k(X,\P^N) \\
\lower 3pt \llap{${}^{f_{\Lambda}}$} \phantom{\Big\downarrow}\hskip -8pt \downarrow & \square & \phantom{\Big\downarrow}\hskip -8pt \downarrow \lower 3pt \rlap{${}^{f_{(\PN)\dual}}$} & \\
\Lambda&\hookrightarrow & (\PN)\dual, &
\end{array}
$$
where $\operatorname{\rm pr}\nolimits_1$ is the projection onto the first factor.
The fiber of $\operatorname{\rm pr}\nolimits_1$ over $L\in \mathord{U}_k(X,\P^N)$
is isomorphic to a linear subspace
$\shortset{t\in(\PN)\dual}{L\subset H_t}$ of $(\PN)\dual$,
and hence $\operatorname{\rm pr}\nolimits_1$ is smooth and proper (and thus locally trivial) with simply-connected fibers.
Therefore $\mathord{\mathcal U}_k (X,\P^N,(\PN)\dual)$ is smooth and irreducible, and
$\operatorname{\rm pr}\nolimits_1$ induces an isomorphism
\begin{equation}\label{eq:isom1}
\pi_1(\mathord{\mathcal U}_k (X,\P^N,(\PN)\dual), s_o(0))\cong \pi_1 (\mathord{U}_k(X,\P^N), L_o).
\end{equation}
The fiber of $f_{(\PN)\dual}$ over $t\in (\PN)\dual$
is a Zariski open subset of $\mathord{\rm G}^{n-1-k} (H_t)$.
Hence $f_{(\PN)\dual}$ is smooth.
There exists a Zariski closed subset $\Xi\sp{\prime\prime}$
of $(\PN)\dual$ of codimension $\ge 2$ such that,
if $t\in (\PN)\dual\setminus \Xi\sp{\prime\prime}$,
then $Y_t$ has only isolated singular points.
(See~\cite{MR592569}, for example.)
Then $\mathord{U}_k(Y_t, H_t)$ is non-empty and irreducible for $t\in (\PN)\dual\setminus \Xi\sp{\prime\prime}$.
Therefore
$f_{(\PN)\dual}$ satisfies the conditions~\cond{C1} and~\cond{C2}.
In particular,
by Nori's lemma~(Proposition~\ref{prop:nori}),
we see that the inclusion of the general fiber induces a surjective homomorphism
\begin{equation}\label{eq:iotasurj}
\mapsurj{\iota_*}{ \pi_1(\mathord{U}_k (Y_0, H_0), L_o)}{ \pi_1 (\mathord{\mathcal U}_k (X,\P^N,(\PN)\dual), s_o(0))}.
\end{equation}
On the other hand,
by virtue of the \emph{general} line $\Lambda\subset (\PN)\dual$ and the holomorphic
section $s_o$ over $\Lambda$,
we see that $f_{(\PN)\dual}$ satisfies the conditions of Theorem~\ref{thm:C},
and hence $\iota_*$ induces an injective homomorphism
\begin{equation}\label{eq:iotainj}
\pi_1 (\mathord{U}_k (Y_0, H_0), L_o)/\hskip -2.2pt/\hskip 1pt \pi_1 (\Lambda\setminus \Sigma_{\Lambda}, 0)
\;\;\hookrightarrow\;\; \pi_1 (\mathord{\mathcal U}_k (X,\P^N,(\PN)\dual), s_o(0)).
\end{equation}
Combining~\eqref{eq:isom1},~\eqref{eq:iotasurj} and~\eqref{eq:iotainj},
we complete the proof of Theorem~\ref{thm:ULZvK}(1).
\par
In particular, the inclusion $\mathord{U}_k (Y_0, H_0)\hookrightarrow \mathord{U}_k (X, \P^N)$
induces a surjective homomorphism
on the fundamental groups.
If $k<n-2$, then we can apply this result to the inclusion
$\mathord{U}_k(Z_\Lambda, A)\hookrightarrow \mathord{U}_k (Y_0, H_0)$,
and obtain a surjection
\begin{equation*}\label{eq:surj2}
\pi_1(\mathord{U}_k(Z_\Lambda, A), L_o) \mathbin{\to \hskip -7pt \to}\pi_1( \mathord{U}_k (Y_0, H_0), L_o).
\end{equation*}
By construction,
this homomorphism is equivariant under the classical monodromy action of
$\pi_1 (\Lambda\setminus \Sigma_{\Lambda}, 0)$ given by the cross-section $s_o$.
Since $\pi_1 (\Lambda\setminus \Sigma_{\Lambda}, 0)$ acts on $\pi_1(\mathord{U}_k(Z_\Lambda, A), L_o)$
trivially,
Theorem~\ref{thm:ULZvK}(2) follows.
\end{proof}
\section{The simple braid group}\label{sec:SB}
Let $C$ be a compact Riemann surface of genus $g>0$,
and let $D_0=p_1+\dots+p_d$ be a reduced effective divisor on $C$ of degree $d$,
which we use as a base point of the space $\mathord{\rm rDiv}^d(C)$
of reduced divisors of degree $d$ on $C$.
Let $\mathord{\rm Pic}^d(C)$ be the Picard variety of isomorphism classes $[L]$ of
line bundles $L$ of degree $d$ on $C$.
We denote by
$$
\map{\bar\lambda}{\mathord{\rm Div}^d (C)}{\mathord{\rm Pic}^d(C)}
$$
the natural morphism,
and consider the induced homomorphism
$$
\map{\bar\lambda_*}{\pi_1(\mathord{\rm Div}^d (C), D_0)}{\pi_1(\mathord{\rm Pic}^d(C), \bar\lambda(D_0))=H_1(C, \mathord{\mathbb Z})}.
$$
\begin{proposition}\label{prop:barlambda}
\setcounter{rmkakkocounter}{1}
Suppose that $d\ge g$.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
We have $\operatorname{\rm Sing}\nolimits(\bar\lambda)=\bar\lambda\sp{-1}(\bar\lambda(\operatorname{\rm Sing}\nolimits(\bar\lambda)))$.
{\rm (\thermkakkocounter)} \addtocounter{rmkakkocounter}{1}
If $d\ge 2g-1$ then $\operatorname{\rm Sing}\nolimits (\bar\lambda)=\emptyset$.
If $d\le 2g-2$ then $\dim \operatorname{\rm Sing}\nolimits (\bar\lambda)\le g-1$
and $\dim \bar\lambda(\operatorname{\rm Sing}\nolimits (\bar\lambda))\le 2g-2-d$.
\end{proposition}
\begin{proof}
Note that $\bar\lambda$ is surjective because $d\ge g$.
For $D\in \mathord{\rm Div}^d (C)$, we have
$$
\bar\lambda\sp{-1} (\bar\lambda(D))=|\mathord{\mathcal O}_C(D)|\;\;\cong\;\; \P^{d-g+s(D)},
$$
where $s(D):=h^0(C, K_C(-D))$.
Hence $D\in \operatorname{\rm Sing}\nolimits(\bar\lambda)$
if and only if $s(D)>0$,
and therefore the assertion (1) follows, and moreover, we have
$$
\dim \bar\lambda(\operatorname{\rm Sing}\nolimits(\bar\lambda))\le \dim \operatorname{\rm Sing}\nolimits(\bar\lambda)-(d-g+1).
$$
On the other hand,
we have $s(D)>0$ if and only if
$D$ is a sub-divisor of a member of the $(g-1)$-dimensional
linear system $|K_C|$.
Since $\deg K_C=2g-2$,
the proposition follows.
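In more detail:
each member of $|K_C|$ contains only finitely many effective subdivisors of degree $d$,
so that
$$
\dim \operatorname{\rm Sing}\nolimits(\bar\lambda)\le \dim |K_C|=g-1,
\qquad
\dim \bar\lambda(\operatorname{\rm Sing}\nolimits(\bar\lambda))\le (g-1)-(d-g+1)=2g-2-d.
$$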
\end{proof}
\begin{remark}
Suppose $d\ge g$.
Then $\operatorname{\rm Sing}\nolimits(\bar\lambda)$ is the locus of \emph{special divisors}
of degree $d$ on $C$,
and $\bar\lambda(\operatorname{\rm Sing}\nolimits(\bar\lambda))$ is the locus of
\emph{special line bundles} of degree $d$ on $C$.
\end{remark}
\begin{proposition}\label{prop:barlambdastar}
Suppose that $d\ge g$.
Then $\bar\lambda_*$ is an isomorphism.
\end{proposition}
\begin{proof}
The general fiber of $\bar\lambda$ is isomorphic to $\P^{d-g}$.
By Proposition~\ref{prop:barlambda},
the assumption $d\ge g$ implies that
$\bar\lambda(\operatorname{\rm Sing}\nolimits(\bar\lambda))\subset \mathord{\rm Pic}^d(C)$
is of codimension $\ge 2$.
Hence Proposition~\ref{prop:barlambdastar}
follows from Nori's lemma~(Proposition~\ref{prop:nori}).
\end{proof}
\begin{proposition}\label{prop:veryample}
{\rm (1)}
Suppose that $d\ge g+2$.
Then there exists a Zariski closed subset $\Xi_1\subset\mathord{\rm Pic}^d(C)$ of codimension
$\ge 2$ such that the complete linear system $|L|$ is base-point free
for any $[L]\in \mathord{\rm Pic}^d(C)\setminus\Xi_1$.
{\rm (2)}
Suppose that $d\ge g+4$.
Then there exists a Zariski closed subset $\Xi_2\subset \mathord{\rm Pic}^d(C)$
of codimension $\ge 2$ such that
$|L|$ is very ample for any $[L]\in \mathord{\rm Pic}^d(C)\setminus \Xi_2$.
\end{proposition}
\begin{proof}
Suppose that $d\ge g+2$, and let $L$ be a line bundle of degree $d$.
If $|L|$ has a base point $p$,
then $L(-p)$ is a special line bundle, and hence
$[L]\in \mathord{\rm Pic}^{d}(C)$ is contained in the image of the morphism
\begin{equation}\label{eq:timesC1}
\bar\lambda\sp\prime(\operatorname{\rm Sing}\nolimits(\bar\lambda\sp\prime))\times C\;\;\to\;\; \mathord{\rm Pic}^d (C)
\end{equation}
given by $([M], p)\mapsto [M(p)]$,
where $\bar\lambda\sp\prime: \mathord{\rm Div}^{d-1}(C)\to\mathord{\rm Pic}^{d-1}(C)$ is the natural morphism.
Since $\dim \bar\lambda\sp\prime(\operatorname{\rm Sing}\nolimits(\bar\lambda\sp\prime))\le 2g-d-1$ by Proposition~\ref{prop:barlambda},
the image of~\eqref{eq:timesC1} is of codimension $\ge 2$.
\par
\smallskip
Suppose that $d\ge g+4$.
If a base-point free line bundle $L$ of degree $d$ is not very ample,
then there exist points $p$, $q$ of $C$
such that $h^0(L(-p-q))=h^0(L(-p))$ holds,
and hence $L(-p-q)$ is a special line bundle of degree $d-2$.
We complete the proof by the same argument as above.
\end{proof}
We denote by
$$
\map{\lambda}{\mathord{\rm rDiv}^d (C)}{\mathord{\rm Pic}^d(C)}
$$
the restriction of $\bar\lambda$ to $\mathord{\rm rDiv}^d(C)$,
and consider the homomorphism
$$
\map{\lambda_*}{B(C, d):=\pi_1(\mathord{\rm rDiv}^d (C), D_0)}{H_1(C, \mathord{\mathbb Z})=\pi_1(\mathord{\rm Pic}^d(C))}
$$
induced by $\lambda$.
From Proposition~\ref{prop:barlambdastar},
we obtain the following:
\begin{corollary}\label{cor:SB2}
Suppose that $d\ge g$.
Then the simple braid group $\mathord{ SB}(C, D_0)$
defined in Definition~\ref{def:SB} is equal to the kernel of the homomorphism
$\lambda_*$.
\end{corollary}
Let $\sigma: (I, \bdr I)\to (\mathord{\rm rDiv}^d(C), D_0)$
be a loop.
Then there exist paths $\sigma_i: I\to C$ for $i=1, \dots, d$
such that $\sigma_i(0)=p_i$ and
such that $\sigma(t)=\sigma_1(t)+\dots+\sigma_d(t)$
for all $t\in I$.
The homology class $\lambda_*([\sigma])\in H_1(C, \mathord{\mathbb Z})$
is represented by the $1$-cycle
obtained as the sum of the paths $\sigma_1, \dots, \sigma_d$, regarded as singular $1$-chains.
\par
\medskip
Let $\Gamma^d (C) \subset \mathord{\rm Div}^d (C)$ be the
big diagonal in $\mathord{\rm Div}^d(C)=C^d/\mathord{\hbox{\mathgot S}}_d$,
where $\mathord{\hbox{\mathgot S}}_d$ is the symmetric group
acting on the Cartesian product $C^d$ of $d$ copies of $C$
by permutation of the components.
We have
$$
\mathord{\rm rDiv}^d (C)=\mathord{\rm Div}^d (C) \setminus \Gamma^d (C).
$$
For $[L]\in \mathord{\rm Pic}^d (C)$,
we put
$$
\Gamma (L):=\Gamma^d(C)\cap \bar\lambda\sp{-1} ([L])
\quad\rmand\quad
|L|\sp{\red}:=\lambda\sp{-1} ([L])=|L|\setminus \Gamma (L),
$$
where
$\bar\lambda\sp{-1} ([L])$ is identified with $|L|$.
\begin{remark}\label{rem:dualhyp}
Suppose that $L$ is very ample,
and let $C_L\subset \P^{d-g+s(L)}$
denote the image of the embedding of $C$ by $|L|$.
Then,
under the identification $|L|\cong (\P^{d-g+s(L)})\sp{\vee}$, $\Gamma (L)$ is equal to
the dual hypersurface $C_L\sp{\vee}$ of $C_L$,
and hence it is of degree
$$
d\sp{\vee}:=2(d+g-1).
$$
\end{remark}
\begin{proposition}\label{prop:Lgeneral}
Suppose that $d\ge g+4$.
If $[L]\in \mathord{\rm Pic}^d(C)$ is general,
then the inclusion $|L|\sp{\red}\hookrightarrow \mathord{\rm rDiv}^d(C)$ induces an isomorphism
$$
\pi_1 (|L|\sp{\red}, D_0)\;\;\cong\;\; \mathord{ SB}(C, D_0),
$$
where $D_0$ is a point of $|L|\sp{\red}$.
\end{proposition}
\begin{proof}
We put
$\Xi:=\bar\lambda(\operatorname{\rm Sing}\nolimits(\bar\lambda))\cup\Xi_2$,
where $\Xi_2$ is the Zariski closed subset in
Proposition~\ref{prop:veryample}.
Then $\Xi$ is a Zariski closed subset of codimension $\ge 2$ in $\mathord{\rm Pic}^d(C)$
and $\bar\lambda\sp{-1}(\Xi)$ is of codimension $\ge 2$ in $\mathord{\rm Div}^d(C)$
by Proposition~\ref{prop:barlambda}.
Moreover $\bar\lambda\sp{-1}(\Xi)$ contains $\operatorname{\rm Sing}\nolimits(\bar\lambda)$,
and $L\sp\prime$ is very ample if $[L\sp\prime]\notin \Xi$.
We consider the restriction
$$
\map{f}{X:=\mathord{\rm rDiv}^d(C)\setminus \lambda\sp{-1} (\Xi)}{Y:=\mathord{\rm Pic}^d(C)\setminus \Xi}
$$
of $\lambda$ to $X=\mathord{\rm rDiv}^d(C)\setminus \lambda\sp{-1} (\Xi)$.
We have
\begin{eqnarray*}
&& \pi_1(Y, [L])\;\;=\;\;\pi_1(\mathord{\rm Pic}^d(C), [L])\;\;=\;\;H_1 (C, \mathord{\mathbb Z}), \\
&& \pi_1(X, D_0)\;\;=\;\;
\pi_1(\mathord{\rm rDiv}^d(C), D_0)\;\;=\;\;\mathord{ B}(C, D_0), \\
&& \pi_2(Y)\;\;=\;\;\pi_2(\mathord{\rm Pic}^d(C))\;\;=\;\;0.
\end{eqnarray*}
By the last equality, the morphism $f$ satisfies~\cond{Z}.
Since $f$ is smooth and every fiber is a non-empty Zariski open subset of
$\P^{d-g}$,
the conditions~\cond{C1} and~\cond{C2}
are also satisfied.
Therefore we can apply Theorem~\ref{thm:ZvK}.
By Proposition~\ref{prop:proj} and Remark~\ref{rem:dualhyp},
we see that the lifted monodromy action of $\pi_1 (X\sp{\circ}, D_0)$ on $\pi_1 (|L|^{\mathord{\rm red}}, D_0)$ is trivial.
Combining this result with Corollary~\ref{cor:RRReq},
we see that $\pi_1 (|L|^{\mathord{\rm red}}, D_0)$ is equal to
the kernel of the homomorphism $\mathord{ B}(C, D_0)\to H_1(C, \mathord{\mathbb Z})$
induced by $f$,
which is $\mathord{ SB}(C, D_0)$ by Corollary~\ref{cor:SB2}.
\end{proof}
Now we prove our third main result.
\begin{proof}[Proof of Theorem~\ref{thm:SB}]
We denote by $L$ the line bundle on $C\subset \P^M$
corresponding to the hyperplane section,
and let $C_L\subset \P^N$ be the image of the embedding
of $C$ by $|L|$.
Then $C\subset \P^M$ is the image of a projection
$C_L\to \P^M$
whose center is disjoint from $C_L\subset \P^N$.
Let $\rho: C\to\P^2$ be a general projection.
By this sequence of linear projections
$\P^N\cdot\hskip -2.2pt \cdot \hskip -2.8pt \to \P^M\cdot\hskip -2.2pt \cdot \hskip -2.8pt \to\P^2$,
we have the canonical embeddings of linear subspaces
$$
(\P^2)\sp{\vee}\hookrightarrow (\P^M)\sp{\vee}\hookrightarrow (\P^N)\sp{\vee}.
$$
Let $\rho(C)\sp{\vee}\subset (\P^2)\sp{\vee}$,
$C\sp{\vee}\subset (\P^M)\sp{\vee}$ and $(C_L)\sp{\vee} \subset (\P^N)\sp{\vee}$
be the dual hypersurfaces
of $\rho(C)\subset \P^2$, $C\subset \P^M$ and $C_L\subset \P^N$,
respectively.
Then we have
$$
\rho(C)\sp{\vee} = (\P^2)\sp{\vee} \cap C\sp{\vee} =(\P^2)\sp{\vee} \cap (C_L)\sp{\vee},
\quad
C\sp{\vee} =(\P^M)\sp{\vee} \cap (C_L)\sp{\vee}.
$$
We will consider the homomorphisms
$$
\pi_1 ((\P^2)\sp{\vee} \setminus \rho(C)\sp{\vee})
\;\to \;
\pi_1((\P^M)\sp{\vee}\setminus C\sp{\vee})
\;\to \;
\pi_1((\P^N)\sp{\vee}\setminus (C_L)\sp{\vee})
$$
induced by the inclusions.
Since $C\subset \P^M$ is Pl\"ucker general by the assumption,
the degree $d\sp{\vee}$ of $\rho(C)\sp{\vee}$, the number
$\delta\sp{\vee}$ of ordinary nodes on $\rho(C)\sp{\vee}$ and
the number $\kappa\sp{\vee}$ of ordinary cusps on $\rho(C)\sp{\vee}$
are given by the Pl\"ucker formula;
$$
d\sp{\vee}=2 d+2 g-2,
\quad
\delta\sp{\vee}=2 d^2+4 d g+2 g^2-10 d-14 g+12,
\quad
\kappa\sp{\vee}=3 d+6 g-6.
$$
(See~\cite[Chap.~7]{MR2107253}, for example.)
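In fact, writing $\delta=(d-1)(d-2)/2-g$ for the number of ordinary nodes of $\rho(C)$,
the classical Pl\"ucker formulas for a nodal plane curve give
$$
d\sp{\vee}=d(d-1)-2\delta=2d+2g-2,
\qquad
\kappa\sp{\vee}=3d(d-2)-6\delta=3d+6g-6,
$$
and the value of $\delta\sp{\vee}$ then follows from the genus formula applied to the dual curve:
$$
\delta\sp{\vee}=\frac{(d\sp{\vee}-1)(d\sp{\vee}-2)}{2}-\kappa\sp{\vee}-g
=2d^2+4dg+2g^2-10d-14g+12.
$$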
In particular,
the section $\rho(C)\sp{\vee}$ of $(C_L)\sp{\vee}$ by $(\P^2)\sp{\vee}\subset (\P^N)\sp{\vee}$
is equisingular to the \emph{general} plane section of $(C_L)\sp{\vee}$.
By the classical Zariski hyperplane section
theorem~(\cite{MR932724},~\cite{MR820315},~\cite{MR1503330}),
we see that the inclusion induces an isomorphism
$$
\pi_1 ((\P^2)\sp{\vee} \setminus \rho(C)\sp{\vee})\;\cong\; \pi_1((\P^N)\sp{\vee}\setminus (C_L)\sp{\vee}).
$$
On the other hand,
the scheme-theoretic intersection of
$(C_L)\sp{\vee}$ and $(\P^2)\sp{\vee}$ in $(\P^N)\sp{\vee}$
is reduced, and hence
the scheme-theoretic intersection of
$C\sp{\vee}$ and $(\P^2)\sp{\vee}$ in $(\P^M)\sp{\vee}$
is also reduced,
and thus the inclusion induces a surjective homomorphism
$$
\pi_1 ((\P^2)\sp{\vee} \setminus \rho(C)\sp{\vee})\;\mathbin{\to \hskip -7pt \to}\;\pi_1((\P^M)\sp{\vee}\setminus C\sp{\vee}).
$$
Therefore we conclude that the inclusions
induce isomorphisms
$$
\pi_1 ((\P^2)\sp{\vee} \setminus \rho(C)\sp{\vee})
\;\cong \;
\pi_1((\P^M)\sp{\vee}\setminus C\sp{\vee})
\;\cong \;
\pi_1((\P^N)\sp{\vee}\setminus (C_L)\sp{\vee}).
$$
Note that $(\P^M)\sp{\vee}\setminus C\sp{\vee}$ is equal to $U_0(C, \P^M)$,
and $(\P^N)\sp{\vee}\setminus (C_L)\sp{\vee}$ is equal to $|L|^{\mathord{\rm red}}$.
Therefore it is enough to show that
$\pi_1 (|L|^{\mathord{\rm red}})$ or $\pi_1 ((\P^2)\sp{\vee} \setminus \rho(C)\sp{\vee})$ is isomorphic to
the simple braid group $\mathord{ SB}_d^g$.
Note that, since $[L]$ is not necessarily a general point of $\mathord{\rm Pic}^d(C)$,
we cannot apply Proposition~\ref{prop:Lgeneral}.
We overcome this difficulty using Harris' theorem~\cite{MR837522}.
\par
\medskip
Note that $\rho (C)$ is a plane curve of degree $d$ with $\delta:=(d-1)(d-2)/2-g$ ordinary nodes
and no other singularities.
Let $\P_*(H^0(\P^2, \mathord{\mathcal O}(d)))$ be the space of all plane curves of degree $d$, and
let $\mathord{\mathcal S}_{d, \delta}\subset\P_*(H^0(\P^2, \mathord{\mathcal O}(d)))$ be the locus
of reduced plane curves $\Gamma\subset\P^2$ of degree $d$ such that
$\operatorname{\rm Sing}\nolimits \Gamma$ consists of only $\delta$ ordinary nodes.
In~\cite{MR837522},
Harris gave an affirmative answer to the Severi problem,
by virtue of which
we know that $\mathord{\mathcal S}_{d, \delta}$ is irreducible.
We then denote by $\mathord{\mathcal S}_{d, \delta}\sp{\circ} \subset \mathord{\mathcal S}_{d, \delta}$
the locus of $\Gamma\in \mathord{\mathcal S}_{d, \delta}$
such that the dual curve $\Gamma\sp{\vee}$ has only ordinary nodes and ordinary cusps as its singularities.
Then $\mathord{\mathcal S}_{d, \delta}\sp{\circ}$ is a Zariski open subset of $\mathord{\mathcal S}_{d, \delta}$
containing $\rho(C)$.
\par
\medskip
Let $C\sp\prime$ be an arbitrary compact Riemann surface of genus $g$,
and let $[L\sp\prime]$ be a \emph{general} point of $\mathord{\rm Pic}^d(C\sp\prime)$.
Since $d\ge g+4$,
we see from Proposition~\ref{prop:veryample} that
$|L\sp\prime|$ is very ample and of dimension $d-g$.
We denote by $C\sp\prime_{L\sp\prime}\subset \P\sp{d-g}$ the image of the embedding
$C\sp\prime\hookrightarrow \P\sp{d-g}$ by $|L\sp\prime|$,
and consider the general projection $\rho\sp\prime: C\sp\prime_{L\sp\prime}\to \P^2$.
Then $\rho\sp\prime(C\sp\prime_{L\sp\prime})$ is a point of $\mathord{\mathcal S}_{d, \delta}$.
Since $\mathord{\mathcal S}_{d, \delta}$ is irreducible,
we can connect the two points
$\rho(C)\in \mathord{\mathcal S}_{d, \delta}$ and $\rho\sp\prime(C\sp\prime_{L\sp\prime})\in\mathord{\mathcal S}_{d, \delta}$
by an irreducible closed curve $T\subset \mathord{\mathcal S}_{d, \delta}$.
We put
$T^0:=T\cap \mathord{\mathcal S}_{d, \delta}\sp{\circ}$,
which is a Zariski open subset of $T$ containing $\rho(C)$.
When $\Gamma$ moves in $\mathord{\mathcal S}_{d, \delta}\sp{\circ}$,
the dual curves $\Gamma\sp{\vee}$ form an equisingular family of plane curves.
Therefore we have
\begin{equation}\label{eq:T0}
\pi_1 ((\P^2)\sp{\vee}\setminus \rho(C)\sp{\vee})\;\;\cong\;\;
\pi_1 ((\P^2)\sp{\vee}\setminus \Gamma\sp{\vee})\quad\textrm{for any $\Gamma\in T\sp 0$}.
\end{equation}
On the other hand, by Propositions~\ref{prop:veryample}~and~\ref{prop:Lgeneral},
there exists a Zariski open dense subset $T\sp 1\subset T$
containing $\rho\sp\prime(C\sp\prime_{L\sp\prime})$
such that the complete linear system
$|\mathord{\mathcal O}_\Gamma (1)|$ of a hyperplane section of $\Gamma\subset\P^2$
is very ample on the normalization $\Gamma\sp\sim$ of $\Gamma$ for any $\Gamma\in T\sp 1$,
that $\dim |\mathord{\mathcal O}_\Gamma (1)|=d-g$ for any $\Gamma\in T\sp 1$, and that
\begin{equation}\label{eq:T1}
\pi_1 ((\P^2)\sp{\vee}\setminus \Gamma\sp{\vee})\;\;\cong\;\;
\pi_1(|\mathord{\mathcal O}_\Gamma (1)|\sp{\red})
\;\;\cong\;\;
\mathord{ SB}_d^g
\quad\textrm{for any $\Gamma\in T\sp 1$}.
\end{equation}
Here we have used the classical Zariski hyperplane section theorem again.
Since $T^0\cap T^1\ne\emptyset$,
we complete the proof of Theorem~\ref{thm:SB}
by combining the isomorphisms~\eqref{eq:T0} and~\eqref{eq:T1}.
\end{proof}
\section{The conjecture of Auroux, Donaldson, Katzarkov and Yotov}
\label{sec:ADKY}
Let $X\subset \P^N$ be a smooth non-degenerate projective surface of degree $d$, and
let $B\subset \P^2$ be the branch curve of a general projection
$X\to \P^2$.
The fundamental group $\pi_1 (\P^2\setminus B)$
has been studied intensively by
Moishezon, Teicher and Robb
(\cite{MR644819}, \cite{MR903386}, \cite{MR1203688}, \cite{MR1360512},
\cite{MR1689261}, \cite{MR1492521}, \cite{MR1468277}, \dots\dots).
In many examples,
it has turned out that $\pi_1 (\P^2\setminus B)$ is rather ``small''.
In~\cite[Conjectures 1.3 and 1.6]{MR2081427},
Auroux, Donaldson, Katzarkov and Yotov
formulated the following conjecture
(not only for algebraic surfaces but also for symplectic $4$-manifolds),
and confirmed it for some new examples.
\par
\medskip
Note that there exist natural homomorphisms
$$
\pi_1 (\P^2\setminus B)\;\to\; \mathord{\hbox{\mathgot S}}_d\quad\rmand\quad\pi_1 (\P^2\setminus B)\;\to\; H_1 (\P^2\setminus B)\cong \mathord{\mathbb Z}/\deg(B)\mathord{\mathbb Z}.
$$
For a smooth projective surface $X$ and a line bundle $L$ on $X$, we denote by
$$
\map{\lambda_{(X,L)}}{H^2(X, \mathord{\mathbb Z})}{\mathord{\mathbb Z}^2}
$$
the homomorphism given by $\lambda_{(X, L)}(\alpha):=(\alpha \cup c_1(L), \alpha \cup c_1(K_X+3L))$,
where $\cup $ denotes the cup-product.
\begin{conjecture}\label{conj:ADKY}%
Let $L$ be an ample line bundle of a smooth projective surface $S$,
and
let $X_m\subset \P^{N(m)}$ be the
image of the embedding of $S$ by the complete linear system $|L\sp{\otimes m}|$.
We denote by
$B_m\subset \P^2$ the branch curve of a general projection
$X_m\to \P^2$.
Let $G^0_m$ be the kernel of the natural homomorphism
$$
\pi_1 (\P^2\setminus B_m)\;\to\;\mathord{\hbox{\mathgot S}}_d\times\mathord{\mathbb Z}/\deg(B_m)\mathord{\mathbb Z}.
$$
Suppose that $S$ is simply-connected and that $m$ is large enough.
Then the abelianization of $G^0_m$ is isomorphic to
$(\mathord{\mathbb Z}^2/\Im(\lambda_{(X, mL)}))^{d-1}$, and the commutator subgroup $[G^0_m, G^0_m]$
is a quotient of $(\mathord{\mathbb Z}/2\mathord{\mathbb Z})^2$.
\end{conjecture}
For a smooth non-degenerate projective surface $X\subset\P^N$,
the fundamental groups $\pi_1 (U_0(X, \P^N))$ and
$\pi_1 (\P^2\setminus B)$ are related as follows.
Note that the target space $\P^2$ of the general projection $X\to \P^2$ is
identified with the closed subvariety
$$
\set{L\in \mathord{\rm G}^{2}(\P^N)}{\textrm{$L$ contains the center of the projection}}
$$
of $\mathord{\rm G}^{2}(\P^N)$,
and $\P^2\setminus B$ is identified with the pull-back of $U_0(X, \P^N)$
by this embedding $\P^2\hookrightarrow \mathord{\rm G}^{2}(\P^N)$.
\begin{proposition}\label{prop:surj}
The inclusion
$\P^2\setminus B\hookrightarrow U_0(X, \P^N)$ induces
a surjective homomorphism
$\pi_1 (\P^2\setminus B)\mathbin{\to \hskip -7pt \to} \pi_1 (U_0(X, \P^N))$.
\end{proposition}
\begin{proof}
Consider the incidence variety
$$
\renewcommand{\arraystretch}{1.4}
\begin{array}{ccc}
\set{(L, M)\in \mathord{\rm G}^2(\P^N)\times \mathord{\rm G}^3(\P^N)}{L\supset M}&\maprightsp{\operatorname{\rm pr}\nolimits_1} & \mathord{\rm G}^2(\P^N)\\
\lower 3pt \llap{${}^{\operatorname{\rm pr}\nolimits_2}$}\phantom{\Big\downarrow}\hskip -8pt \downarrow &&\\
\mathord{\rm G}^3(\P^N),
\end{array}
$$
where $\operatorname{\rm pr}\nolimits_1$ and $\operatorname{\rm pr}\nolimits_2$ are the natural projections,
and put
$$
\mathord{\mathcal U}:=\operatorname{\rm pr}\nolimits_1\sp{-1} (U_0(X, \P^N)).
$$
Since $\operatorname{\rm pr}\nolimits_1$ is smooth with every fiber being isomorphic to $\P^{N-2}$,
we see that $\mathord{\mathcal U}$ is smooth, irreducible, and that $\operatorname{\rm pr}\nolimits_1|_{\mathord{\mathcal U}}$ induces an isomorphism
$\pi_1 (\mathord{\mathcal U})\cong \pi_1 (U_0(X, \P^N))$.
For $M\in \mathord{\rm G}^3(\P^N)$,
the target space $\Pi_M$ of the projection
$$
\map{\rho_M}{X}{\Pi_M}
$$
with the center $M$ is the fiber of $\operatorname{\rm pr}\nolimits_2$ over $M$,
and we have
$$
\Pi_M\setminus B_M\cong (\operatorname{\rm pr}\nolimits_2|_{\mathord{\mathcal U}})\sp{-1} (M)=\operatorname{\rm pr}\nolimits_2\sp{-1} (M)\cap \mathord{\mathcal U},
$$
where $B_M\subset \Pi_M$ is the branch curve of $\rho_M$.
Hence it is enough to show that the inclusion of the general fiber
of $\operatorname{\rm pr}\nolimits_2|_{\mathord{\mathcal U}}$ over $M$ induces a surjective homomorphism
\begin{equation}\label{eq:surjUUU}
\pi_1 ((\operatorname{\rm pr}\nolimits_2|_{\mathord{\mathcal U}})\sp{-1} (M))\;\mathbin{\to \hskip -7pt \to}\;\pi_1(\mathord{\mathcal U}).
\end{equation}
Since $\operatorname{\rm pr}\nolimits_2$ is smooth, so is $\operatorname{\rm pr}\nolimits_2|_{\mathord{\mathcal U}}$.
Moreover the locus of all $M\in \mathord{\rm G}^3(\P^N)$ such that $(\operatorname{\rm pr}\nolimits_2|_{\mathord{\mathcal U}})\sp{-1} (M)=\emptyset$
is contained in a Zariski closed subset of codimension $\ge 2$ in $\mathord{\rm G}^3(\P^N)$.
Hence Nori's lemma~(Proposition~\ref{prop:nori})
implies the surjectivity~\eqref{eq:surjUUU}.
\end{proof}
Thus we see that
the group $\pi_1 (U_0(X, \P^N))$ is ``smaller'' than $\pi_1 (\P^2\setminus B)$.
In view of Corollary~\ref{cor:SB} and Conjecture~\ref{conj:ADKY},
we expect that the image $\varGamma_{\Lambda}$ of the monodromy~\eqref{eq:monhom}
should be ``large".
\par\medskip
The group $\varGamma_{\Lambda}$ is generated by the Dehn twists
associated with
the ordinary nodes of the singular members of the pencil
$\{Y_t\}_{t\in \Lambda}$.
Hence the group $\varGamma_{\Lambda}$ and its action on $\mathord{ SB}(Y_0, Z_\Lambda)$
can be visualized by drawing on $Y_0$ the reduced divisor $Z_{\Lambda}$
and the vanishing cycles for the singular members of the pencil.
\par\medskip
As for the largeness of $\varGamma_{\Lambda}$,
we have the following result of Smith~\cite[Theorem 1.3 and Corollary 4.3]{MR1838364}.
\begin{theorem}[Smith]
The vanishing cycles of the Lefschetz fibration
$\mathord{\mathcal Y}\to\Lambda$ fill up the fiber $Y_0$;
that is, their complement is a disjoint union of discs.
Moreover distinct points of $Z_\Lambda$ are on distinct discs.
\end{theorem}
The second assertion follows from the argument
in the proof of~\cite[Theorem~5.1]{MR1838364},
and the fact that the homology classes of the sections of $\mathord{\mathcal Y}\to\Lambda$
corresponding to the points of $Z_\Lambda$
are distinct.
\begin{remark}\label{rem:LSXm}
In the calculation of $\pi_1 (U_0(X_m, \P^{N(m)}))$ by means of Corollary~\ref{cor:SB},
the assumption $d\ge g+4$
is satisfied
when $m$ is large enough.
Indeed, the degree $d$ of $X_m$ is given by $d=m^2 L^2$, while
the genus $g$ of the general hyperplane section $Y_0$ of $X_m$ is given by
$g=(m^2 L^2+mL\cdot K_X)/2 +1$.
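In fact, these formulas give
$$
d-g=\frac{m^2 L^2-mL\cdot K_X}{2}-1,
$$
which tends to infinity as $m\to\infty$, since $L^2>0$ for the ample line bundle $L$.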
\end{remark}
\bibliographystyle{plain}
\def\cprime{$'$}
\section{\Large Introduction} \label{sec:intr}
This paper is a supplement to~\cite{DK} where we developed a graph theoretic
construction (borrowing an idea of~\cite{cast1}) that was used as the main tool
to obtain a complete combinatorial characterization for the variety of
homogeneous quadratic identities on minors of quantum matrices.
(Recall that when speaking of the \emph{algebra of $m\times n$ quantum
matrices}, one means the quantized coordinate ring
$\Oscr_q(\Mscr_{m,n}(\Kset))$ of $m\times n$ matrices over a field $\Kset$,
where $q$ is a nonzero element of $\Kset$. In other words, one considers the
$\Kset$-algebra generated by indeterminates $x_{ij}$\; ($i\in[m],\,j\in[n]$)
satisfying Manin's relations~\cite{man}: for $i<\ell\le m$ and $j<k\le n$,
\begin{gather}
x_{ij}x_{ik}=qx_{ik}x_{ij},\qquad x_{ij}x_{\ell j}=qx_{\ell j}x_{ij},
\label{eq:xijkl}\\
x_{ik}x_{\ell j}=x_{ \ell j}x_{ik}\quad \mbox{and}\quad
x_{ij}x_{\ell k}-x_{\ell k}x_{ij}=(q-q^{-1})x_{ik}x_{\ell j}. \nonumber
\end{gather}
Hereinafter for a positive integer $n'$, \;$[n']$ denotes $\{1,2,\ldots,n'\}$.
Another useful algebraic construction is the $m\times n$ \emph{quantum affine
space}, which is the $\Kset$-algebra generated by indeterminates $t_{ij}$\;
($i\in[m],\,j\in[n]$) subject to ``simpler'' commutation relations:
\begin{eqnarray}
t_{ij}t_{i'j'}=&qt_{i'j'}t_{ij}&\quad \mbox{if either $i=i'$ and $j<j'$,
or $i<i'$ and $j=j'$}, \label{eq:trelat} \\
=&t_{i'j'}t_{ij}& \quad\mbox{otherwise}.) \nonumber
\end{eqnarray}
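As an aside, the normal-ordering implicit in these relations is easy to mechanize. The following Python sketch (purely illustrative, with our own encoding of a monomial as a list of index pairs $(i,j)$) bubble-sorts a word in the $t_{ij}$ into lexicographic order while tracking the accumulated power of $q$:

```python
def normalize_word(word):
    """Bring a word in the generators t_{ij} of the quantum affine space
    to lexicographic normal order.  A word is a list of index pairs (i, j);
    generators sharing a row or a column q-commute (t_ij t_i'j' = q t_i'j' t_ij
    when (i,j) < (i',j') in the same row or column), all others commute.
    Returns (k, sorted_word), meaning: input word = q**k * sorted_word."""
    w, k = list(word), 0
    changed = True
    while changed:
        changed = False
        for p in range(len(w) - 1):
            a, b = w[p], w[p + 1]
            if a > b:                      # out of order: rewrite a b -> b a
                if a[0] == b[0] or a[1] == b[1]:
                    k -= 1                 # dependent pair: a b = q^{-1} b a
                w[p], w[p + 1] = b, a
                changed = True
    return k, w
```

For instance, `normalize_word([(1,2),(1,1)])` returns `(-1, [(1,1),(1,2)])`, encoding $t_{12}t_{11}=q^{-1}t_{11}t_{12}$.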
In this paper we prove two auxiliary theorems that were essentially used, but
left unproved, in~\cite{DK} (namely, Theorems~3.1 and~4.4 there). They concern
the class of edge-weighted planar graphs introduced in~\cite{DK} (under the
name of ``grid-shaped graphs''); in this paper we call them \emph{SE-graphs}. A
special case of these graphs is formed by the \emph{Cauchon graphs} introduced
in~\cite{cast1} in connection with the \emph{Cauchon diagrams} of~\cite{cach}.
The first theorem, viewed as a quantum analog of Lindstr\"om Lemma, is a direct
extension to the SE-graphs $G$ of the corresponding result established for
Cauchon graphs in~\cite{cast1}. It considers a matrix in which each entry is
represented as the sum of weights of paths connecting a certain pair of
vertices of $G$, called the \emph{path matrix} of $G$ and denoted by $\Path_G$.
The theorem asserts that any (quantized) minor of $\Path_G$ can be expressed
via systems of disjoint paths of $G$ connecting corresponding sets of vertices.
We refer to a system of this sort as a \emph{flow} in $G$.
The proof of the main result in~\cite{DK} (which can be regarded as a quantum
analog of a characterization of quadratic identities for the commutative case
in~\cite{DKK}) is based on a method of handling certain pairs of flows, called
\emph{double flows}, in an SE-graph $G$. An important ingredient of that proof
is a transformation of a double flow $(\phi,\phi')$ into another double flow
$(\psi,\psi')$ by use of an \emph{ordinary exchange operation}. The second
theorem that we are going to prove in this paper says that under such a
transformation the weight of a current double flow is multiplied by $q$ or
$q^{-1}$.
The paper is organized as follows. Section~\SEC{prelim} contains basic
definitions and formulates the first theorem. Section~\SEC{double} describes
exchange operations on double flows and formulates the second theorem.
Section~\SEC{two_paths} elaborates technical tools needed to prove the
theorems. It considers certain paths $P,Q$ in $G$ and describes possible
relations between the weights of the ordered pairs $(P,Q)$ and $(Q,P)$; this is
close to the machinery in~\cite{cast1,cast2}. The announced first and second
theorems are proved in Sections~\SEC{q_determ} and~\SEC{exchange},
respectively.
\section{\Large Preliminaries} \label{sec:prelim}
We start with basic definitions and some elementary properties.
\medskip
\noindent\textbf{Paths in graphs.} Throughout, by a \emph{graph} we mean a
directed graph. A \emph{path} in a graph $G=(V,E)$ (with vertex set $V$ and
edge set $E$) is a sequence $P=(v_0,e_1,v_1,\ldots,e_k,v_k)$ such that each
$e_i$ is an edge connecting vertices $v_{i-1},v_i$. An edge $e_i$ is called
\emph{forward} if it is directed from $v_{i-1}$ to $v_i$, denoted as
$e_i=(v_{i-1},v_i)$, and \emph{backward} otherwise (when $e_i=(v_i,v_{i-1})$).
The path $P$ is called {\em directed} if it has no backward edge, and {\em
simple} if all vertices $v_i$ are different. When $k>0$ and $v_0=v_k$, ~$P$ is
called a \emph{cycle}, and called a \emph{simple cycle} if, in addition,
$v_1,\ldots,v_k$ are different. When it is not confusing, we may use for $P$
the abbreviated notation via vertices: $P=v_0v_1\ldots v_k$, or via edges:
$P=e_1e_2\ldots e_k$.
Also, using standard terminology in graph theory, for a directed edge
$e=(u,v)$, we say that $e$ \emph{leaves} $u$ and \emph{enters} $v$, and that
$u$ is the \emph{tail} and $v$ is the \emph{head} of $e$.
\medskip
\noindent\textbf{SE-graphs.} A graph $G=(V,E)$ of this sort (also denoted as
$(V,E;R,C)$) is defined by the following conditions:
(i) $G$ is planar (with a fixed layout in the plane);
(ii) $G$ has edges of two types: \emph{horizontal} edges, or \emph{H-edges},
which are directed from left to right, and \emph{vertical} edges, or
\emph{V-edges}, which are directed downwards (so each edge points either
\emph{south} or \emph{east}, justifying the term ``SE-graph'');
(iii) $G$ has two distinguished subsets of vertices: set $R=\{r_1,\ldots,r_m\}$
of \emph{sources} and set $C=\{c_1,\ldots,c_n\}$ of \emph{sinks}; moreover,
$r_1,\ldots,r_m$ are disposed on a vertical line, in this order upwards, and
$c_1,\ldots,c_n$ are disposed on a horizontal line, in this order from left to
right;
(iv) each vertex (and each edge) of $G$ belongs to a directed path from $R$ to
$C$.
The set $V-(R\cup C)$ of \emph{inner} vertices of an SE-graph $G=(V,E)$ is
denoted by $W=W_G$. An example of an SE-graph with $m=3$ and $n=4$ is drawn in
the picture:
\vspace{0cm}
\begin{center}
\includegraphics{qr1}
\end{center}
\vspace{0cm}
Each inner vertex $v\in W$ is regarded as an indeterminate (generator), and we
assign a weight $w(e)$ to each edge $e$ in a way similar to the assignment for
Cauchon graphs in~\cite{cast1}. More precisely, for $e=(u,v)\in E$,
\begin{numitem1} \label{eq:edge_weight}
\begin{itemize}
\item[(i)] $w(e):=v$ if $e$ is an H-edge with $u\in R$;
\item[(ii)] $w(e):=u^{-1}v$ if $e$ is an H-edge with $u\in W$;
\item[(iii)] $w(e):=1$ if $e$ is a V-edge.
\end{itemize}
\end{numitem1}
This gives rise to defining the weight $w(P)$ of a directed path
$P=e_1e_2\ldots e_k$ in $G$, to be the ordered (from left to right) product
\begin{equation} \label{eq:wP}
w(P)=w(e_1)w(e_2)\cdots w(e_k).
\end{equation}
Then $w(P)$ is a Laurent monomial in elements of $W$. Note that when $P$ begins
in $R$ and ends in $C$, its weight can also be expressed in the following
useful form; cf.~\cite[Prop.~3.1.8]{cast2}. Let $u_1,v_1,u_2,v_2,\ldots,
u_{d-1},v_{d-1},u_d$ be the sequence of vertices where $P$ makes turns; namely,
$P$ changes the horizontal direction to the vertical one at each $u_i$, and
conversely at each $v_i$. Then (due to the ``telescopic effect'' caused
by~\refeq{edge_weight}(ii)),
\begin{equation} \label{eq:telescop}
w(P)=u_1v_1^{-1}u_2v_2^{-1}\cdots u_{d-1}v_{d-1}^{-1} u_d.
\end{equation}
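The telescoping effect behind this formula can be checked mechanically: multiplying the edge weights and cancelling adjacent inverse pairs leaves exactly the turn vertices. A small Python sketch (illustrative only; the encoding of a path as (tail, head, type) triples and the vertex names are our own):

```python
def edge_weight(u, v, etype, sources):
    """Weight of an edge (u, v), following the edge-weight rules above,
    as a list of (vertex, exponent) factors."""
    if etype == 'V':
        return []                     # V-edges have weight 1
    if u in sources:
        return [(v, +1)]              # H-edge leaving a source
    return [(u, -1), (v, +1)]         # H-edge with inner tail

def path_weight(path, sources):
    """Ordered product of edge weights, cancelling adjacent x x^{-1} pairs."""
    w = []
    for u, v, t in path:
        for f in edge_weight(u, v, t, sources):
            if w and w[-1][0] == f[0] and w[-1][1] == -f[1]:
                w.pop()               # telescoping cancellation
            else:
                w.append(f)
    return w

# r -H- a -H- b -V- c -H- d -V- s: turns at b (H->V), c (V->H), d (H->V)
P = [('r','a','H'), ('a','b','H'), ('b','c','V'), ('c','d','H'), ('d','s','V')]
```

Here `path_weight(P, {'r'})` yields the factors $b\,c^{-1}d$, matching the turn-vertex formula.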
We assume that the generators $W$ obey (quasi)commutation laws somewhat similar
to those for the quantum affine space (cf.~\refeq{trelat}); namely,
\begin{numitem1} \label{eq:commut_in_G}
for distinct $u,v\in W$,
\begin{itemize}
\item[(i)] if there is a directed \emph{horizontal} path from $u$ to $v$ in $G$, then
$uv=qvu$;
\item[(ii)] if there is a directed \emph{vertical} path from $u$ to $v$ in $G$, then
$vu=quv$;
\item[(iii)] otherwise $uv=vu$.
\end{itemize}
\end{numitem1}
\noindent\textbf{Quantum minors.} \label{ssec:quant_minor} It will be
convenient for us to visualize matrices in the Cartesian form: for an $m\times
n$ matrix $A=(a_{ij})$, the row indices $i=1,\ldots,m$ are assumed to increase
upwards, and the column indices $j=1,\ldots,n$ from left to right.
We denote by $A(I|J)$ the submatrix of $A$ whose rows are indexed by
$I\subseteq[m]$, and columns indexed by $J\subseteq[n]$. Let $|I|=|J|=:k$, and
let $I$ consist of $i_1<\ldots<i_k$ and $J$ consist of $j_1<\ldots<j_k$. Then
the $q$-\emph{determinant} of $A(I|J)$, or the $q$-\emph{minor} of $A$ for
$(I|J)$, is defined as
\begin{equation} \label{eq:qminor}
[I|J]_{A,q}:=\sum_{\sigma\in S_k} (-q)^{\ell(\sigma)}
\prod_{d=1}^{k} a_{i_dj_{\sigma(d)}},
\end{equation}
where, in the noncommutative case, the product under $\prod$ is ordered by
increasing $d$, and $\ell(\sigma)$ denotes the \emph{length} (number of
inversions) of a permutation $\sigma$. In the minor notation $[I|J]_{A,q}$, the
terms $A$ and/or $q$ may be omitted when they are clear from the context.
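For commuting numeric entries the permutation expansion can be evaluated directly, which is a convenient sanity check of the definition (in the quantum algebra itself the factors do not commute, so the ordering of the product is essential). A Python sketch:

```python
from itertools import permutations

def inversions(perm):
    """Number of inversions (the length) of a permutation given as a tuple."""
    n = len(perm)
    return sum(1 for a in range(n) for b in range(a + 1, n) if perm[a] > perm[b])

def q_minor(A, I, J, q):
    """q-determinant of the submatrix A(I|J) for commuting numeric entries:
    sum over permutations of (-q)^length times the ordered product."""
    k = len(I)
    total = 0
    for sigma in permutations(range(k)):
        prod = 1
        for d in range(k):
            prod *= A[I[d]][J[sigma[d]]]
        total += (-q) ** inversions(sigma) * prod
    return total
```

For a $2\times 2$ matrix this gives $a_{11}a_{22}-q\,a_{12}a_{21}$; e.g. `q_minor([[1,2],[3,4]], [0,1], [0,1], 1)` evaluates to $-2$.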
\medskip
\noindent\textbf{Path matrices.} An important construction in~\cite{cast1}
associates to a Cauchon graph $G$ a certain matrix, called the path matrix of
$G$, which has a nice property of Lindstr\"om's type: the $q$-minors of this
matrix correspond to appropriate systems of disjoint paths in $G$.
This is extended to an arbitrary SE-graph $G=(V,E;R,C)$. More precisely, let
$m:=|R|$ and $n:=|C|$. As before, $w=w_G$ denotes the edge weights in $G$
defined by~\refeq{edge_weight}. For $i\in[m]$ and $j\in[n]$, we denote the set
of directed paths from $r_i$ to $c_j$ in $G$ by $\Phi_G(i|j)$.
\medskip
\noindent\textbf{Definition.} The \emph{path matrix} $\Path_G$ associated to
$G$ is the $m\times n$ matrix whose entries are defined by
\begin{equation} \label{eq:Mat}
\Path_G(i|j):=\sum\nolimits_{P\in\Phi_G(i|j)} w(P), \qquad (i,j)\in [m]\times [n].
\end{equation}
In particular, $\Path_G(i|j)=0$ if $\Phi_G(i|j)=\emptyset$.
\smallskip
Thus, the entries of $\Path_G$ belong to the $\Kset$-algebra $\Lscr_G$ of
Laurent polynomials generated by the set $W$ of inner vertices of $G$ subject
to relations~\refeq{commut_in_G}. (Note also that $\Path_G$ is a
$q$-\emph{matrix}, i.e., its entries obey Manin's relations;
see~\cite[Th.~3.2]{DK}).
\smallskip
\noindent\textbf{Definition.} Let $\Escr^{m,n}$ denote the set of pairs $(I|J)$
such that $I\subseteq [m]$, $J\subseteq [n]$ and $|I|=|J|$. Borrowing
terminology from~\cite{DKK}, we say that for $(I|J)\in\Escr^{m,n}$, a set
$\phi$ of pairwise \emph{disjoint} directed paths from the source set
$R_I:=\{r_i\;\colon i\in I\}$ to the sink set $C_J:=\{c_j\;\colon j\in J\}$ in
$G$ is an $(I|J)$-\emph{flow}. The set of $(I|J)$-flows is denoted by
$\Phi(I|J)=\Phi_G(I|J)$.
\medskip
Throughout, we assume that the paths forming $\phi$ are ordered by increasing
source indices. Namely, if $I$ consists of $i(1)<i(2)<\ldots< i(k)$ and $J$
consists of $j(1)<j(2)<\ldots<j(k)$, then the $\ell$-th path $P_\ell$ in $\phi$
begins at $r_{i(\ell)}$, and therefore, $P_\ell$ ends at $c_{j(\ell)}$ (which
easily follows from the planarity of $G$, the ordering of sources and sinks in
the boundary of $G$ and the fact that the paths in $\phi$ are disjoint). We
write $\phi=(P_1,P_2,\ldots,P_k)$ and (similar to path systems in~\cite{cast1})
define the weight of $\phi$ to be the ordered product
\begin{equation} \label{eq:w_phi}
w(\phi):=w(P_1)w(P_2)\cdots w(P_k).
\end{equation}
Our first theorem is a direct extension of a $q$-analog of Lindstr\"om's Lemma
shown for Cauchon graphs in~\cite[Th.~4.4]{cast1}; it gives a relationship
between flows and minors of path matrices.
\begin{theorem} \label{tm:Lind}
Let $G$ be an SE-graph with $m$ sources and $n$ sinks. Then for the path matrix
$\Path=\Path_G$ and for any $(I|J)\in \Escr^{m,n}$, there holds
\begin{equation} \label{eq:Lind}
[I|J]_{\Path,q}=\sum\nolimits_{\phi\in\Phi(I|J)} w(\phi).
\end{equation}
\end{theorem}
This theorem (stated in~\cite[Th.~3.1]{DK}) is proved in
Section~\SEC{q_determ}.
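At $q=1$ (commuting weights) the statement specializes to the classical Lindstr\"om (Gessel--Viennot) lemma and can be tested numerically. A Python sketch on a toy directed graph (vertex names and weights are our own, and the graph is not drawn as an SE-graph; the point is only the equality of the minor and the disjoint-path sum):

```python
# toy acyclic graph, commuting numeric edge weights (the q = 1 case)
edges = {('r1','x'): 2, ('r2','x'): 3, ('x','c1'): 5, ('x','c2'): 7,
         ('r1','c1'): 11}
adj = {}
for (u, v), wt in edges.items():
    adj.setdefault(u, []).append((v, wt))

def paths(u, sink):
    """All directed paths from u to sink, as (vertex tuple, weight) pairs."""
    if u == sink:
        return [((u,), 1)]
    out = []
    for v, wt in adj.get(u, []):
        for p, pw in paths(v, sink):
            out.append(((u,) + p, wt * pw))
    return out

# path matrix and its 2x2 minor
M = [[sum(pw for _, pw in paths(r, c)) for c in ('c1', 'c2')]
     for r in ('r1', 'r2')]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]

# right-hand side of the theorem at q = 1: vertex-disjoint pairs of paths
disjoint = sum(w1 * w2
               for p1, w1 in paths('r1', 'c1')
               for p2, w2 in paths('r2', 'c2')
               if not set(p1) & set(p2))
```

Here both `det` and `disjoint` come out equal, as the $q=1$ case of the theorem predicts.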
\section{\Large Double flows, matchings, and exchange operations} \label{sec:double}
A study of quadratic identities for minors of quantum matrices in~\cite{DK} is
reduced to handling ordered products of minors of the path matrices of
SE-graphs $G$, and further, in view of Theorem~\ref{tm:Lind}, to handling
ordered pairs of flows in $G$. Along the way, a crucial role is played by
exchange operations on pairs of flows. To describe them, we first need some
definitions and conventions.
Let $G=(V,E;R,C)$ be an SE-graph with $|R|=m$ and $|C|=n$. For
$(I|J),(I'|J')\in \Escr^{m,n}$, consider an $(I|J)$-flow $\phi$ and an
$(I'|J')$-flow $\phi'$ in $G$. We call the ordered pair $(\phi,\phi')$ a
\emph{double flow} in $G$. Define
\begin{gather}
\Iw:=I-I',\quad \Jw:=J-J',\quad \Ib:=I'-I,\quad \Jb:=J'-J,
\label{eq:white-black} \\
\Yr:=\Iw\cup\Ib\quad \mbox{and}\quad \Yc:=\Jw\cup\Jb. \nonumber
\end{gather}
Note that $|I|=|J|$ and $|I'|=|J'|$ imply that $|\Yr|+|\Yc|$ is even and that
\begin{equation} \label{eq:balancIJ}
|\Iw|-|\Ib|=|\Jw|-|\Jb|.
\end{equation}
It is convenient for us to interpret $\Iw$ and $\Ib$ as the sets of
\emph{white} and \emph{black} elements of $\Yr$, respectively, and similarly
for $\Jw,\Jb,\Yc$, and to visualize these objects by use of a \emph{circular
diagram} $D$ in which the elements of $\Yr$ (resp. $\Yc$) are disposed in the
increasing order from left to right in the upper (resp. lower) half of a
circumference $O$. For example if, say, $\Iw=\{3\}$, $\Ib=\{1,4\}$,
$\Jw=\{2',5'\}$ and $\Jb=\{3',6',8'\}$, then the diagram is viewed as in the
left fragment of the picture below. (To avoid possible confusion between
elements of $\Yr$ and $\Yc$, we sometimes denote elements of $\Yc$ with
primes.)
\vspace{0cm}
\begin{center}
\includegraphics{qr3}
\end{center}
\vspace{-0.3cm}
We refer to the quadruple $(I|J,I'|J')$ as a \emph{cortege}, and to
$(\Iw,\Ib,\Jw,\Jb)$ as the \emph{refinement} of $(I|J,I'|J')$, or as a
\emph{refined cortege}.
\smallskip
Let $M$ be a partition of $\Yr\sqcup \Yc$ into 2-element sets (recall that
$A\sqcup B$ denotes the disjoint union of sets $A,B$). We refer to $M$ as a
\emph{perfect matching} on $\Yr\sqcup \Yc$, and to its elements as
\emph{couples}.
Also we say that $\pi\in M$ is: an $R$-\emph{couple} if $\pi\subseteq \Yr$, a
$C$-\emph{couple} if $\pi\subseteq \Yc$, and an $RC$-\emph{couple} if $|\pi\cap
\Yr|=|\pi\cap \Yc|=1$ (as though $\pi$ ``links'' two sources, two sinks, and
one source and one sink, respectively). \smallskip
\noindent\textbf{Definition.} A (perfect) matching $M$ as above is called a
\emph{feasible} matching for $(\Iw,\Ib,\Jw,\Jb)$ if:
\begin{numitem1} \label{eq:feasM}
\begin{itemize}
\item[(i)] for each $\pi=\{i,j\}\in M$, the elements $i,j$ have different
colors if $\pi$ is an $R$-couple or a $C$-couple, and have the same color if
$\pi$ is an $RC$-couple;
\item[(ii)] $M$ is \emph{planar}, in the sense that the chords connecting the
couples in the circumference $O$ are pairwise non-intersecting.
\end{itemize}
\end{numitem1}
The right fragment of the above picture illustrates an instance of feasible
matchings.
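Condition (ii) is the usual non-crossing test for chords of a circle: two chords cross if and only if exactly one endpoint of the second lies between the endpoints of the first in the circular order. A Python sketch (elements of $\Yr\sqcup \Yc$ are encoded by their positions $0,1,2,\ldots$ around the circumference, a convention of ours):

```python
def crossing(p, q):
    """Do the chords p = {a, b} and q = {c, d} cross?  Arguments are
    2-element sets of positions in the circular order."""
    a, b = sorted(p)
    c, d = sorted(q)
    return (a < c < b) != (a < d < b)

def is_planar(matching):
    """Check condition (ii): no two chords of the matching cross."""
    ms = [set(pair) for pair in matching]
    return all(not crossing(ms[i], ms[j])
               for i in range(len(ms)) for j in range(i + 1, len(ms)))
```

Nested chords such as $\{0,3\},\{1,2\}$ pass the test, while interleaved ones such as $\{0,2\},\{1,3\}$ do not.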
Return to a double flow $(\phi,\phi')$ as above. We associate to it a feasible
matching for $(\Iw,\Ib,\Jw,\Jb)$ as follows. Let $V_\phi$ and $E_\phi$,
respectively, denote the sets of vertices and edges of $G$ occurring in $\phi$,
and similarly for $\phi'$. Denote by $\langle U\rangle$ the subgraph of $G$
induced by the set of edges
$$
U:=E_\phi\triangle E_{\phi'},
$$
writing $A\triangle B$ for the symmetric difference $(A-B)\cup(B-A)$ of sets
$A,B$. Then
\begin{numitem1} \label{eq:degrees}
a vertex $v$ of $\langle U\rangle$ has degree 1 if $v\in R_{\Iw}\cup
R_{\Ib}\cup C_{\Jw}\cup C_{\Jb}$, and degree 2 or 4 otherwise.
\end{numitem1}
We slightly modify $\langle U\rangle$ by splitting each vertex $v$ of degree 4
in $\langle U\rangle$ (if any) into two vertices $v',v''$ disposed in a small
neighborhood of $v$ so that the edges entering (resp. leaving) $v$ become
entering $v'$ (resp. leaving $v''$); see the picture.
\vspace{0cm}
\begin{center}
\includegraphics{qr4}
\end{center}
\vspace{0cm}
The resulting graph, denoted as $\langle U\rangle '$, is planar and has
vertices of degree only 1 and 2. Therefore, $\langle U\rangle'$ consists of
pairwise disjoint (non-directed) simple paths $P'_1,\ldots,P'_k$ (considered up
to reversing) and, possibly, simple cycles $Q'_1,\ldots,Q'_d$. The
corresponding images of $P'_1,\ldots,P'_k$ (resp. $Q'_1,\ldots,Q'_d$) give
paths $P_1,\ldots,P_k$ (resp. cycles $Q_1,\ldots,Q_d$) in $\langle U\rangle$.
When $\langle U\rangle$ has vertices of degree 4, some of the latter paths and
cycles may be self-intersecting and may ``touch'', but not ``cross'', each
other. The following simple facts are shown in~\cite{DK}.
\begin{lemma} \label{lm:P1Pk}
{\rm (i)} $k=(|\Iw|+|\Ib|+|\Jw|+|\Jb|)/2$;
{\rm(ii)} the set of endvertices of $P_1,\ldots,P_k$ is $R_{\Iw\cup\Ib}\cup
C_{\Jw\cup\Jb}$; moreover, each $P_i$ connects either $R_{\Iw}$ and $R_{\Ib}$,
or $C_{\Jw}$ and $C_{\Jb}$, or $R_{\Iw}$ and $C_{\Jw}$, or $R_{\Ib}$ and
$C_{\Jb}$;
{\rm(iii)} in each path $P_i$, the edges of $\phi$ and the edges of $\phi'$
have different directions (say, the former edges are all forward, and the
latter ones are all backward).
\end{lemma}
Thus, each $P_i$ is represented as a concatenation $P_i^{(1)}\circ
P_i^{(2)}\circ\ldots\circ P_i^{(\ell)}$ of forwardly and backwardly directed
paths which are alternately contained in $\phi$ and $\phi'$. We call $P_i$ an
\emph{exchange path} (for a reason that will become clear later). The endvertices of
$P_i$ determine, in a natural way, a pair of elements of $\Yr\sqcup \Yc$,
denoted by $\pi_i$. Then $M:=\{\pi_1,\ldots,\pi_k\}$ is a perfect matching on
$\Yr\sqcup \Yc$.
Moreover, $M$ is a feasible matching for $(\Iw,\Ib,\Jw,\Jb)$, since
property~\refeq{feasM}(i) follows from Lemma~\ref{lm:P1Pk}(ii), and
property~\refeq{feasM}(ii) is provided by the fact that $P'_1,\ldots,P'_k$ are
pairwise disjoint simple paths in $\langle U\rangle'$. We denote $M$ as
$M(\phi,\phi')$, and for $\pi\in M$, denote by $P(\pi)$ the exchange path $P_i$
corresponding to $\pi$ (i.e., $\pi=\pi_i$).
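The endpoints of the exchange paths can be read off directly from the symmetric difference: they are precisely the odd-degree vertices of $\langle U\rangle$. A Python sketch (flows encoded as sets of directed edges; the names are hypothetical):

```python
from collections import Counter

def exchange_endpoints(edges1, edges2):
    """Symmetric difference U of two flows' edge sets, together with the
    odd-degree vertices of <U> (the endpoints of the exchange paths)."""
    U = set(edges1) ^ set(edges2)
    deg = Counter()
    for (u, v) in U:
        deg[u] += 1
        deg[v] += 1
    return U, sorted(v for v, d in deg.items() if d % 2 == 1)
```

For example, for $\phi$ with edges $(r_1,a),(a,c_1)$ and $\phi'$ with edges $(r_2,a),(a,c_1)$ the shared edge cancels, and the endpoints are $r_1,r_2$.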
Figure~\ref{fig:phi} illustrates an instance of $(\phi,\phi')$ for
$I=\{1,2,3\}$, $J=\{1',3',4'\}$, $I'=\{2,4\}$, $J'=\{2',3'\}$; here $\phi$ and
$\phi'$ are drawn by solid and dotted lines, respectively (in the left
fragment), the subgraph $\langle E_\phi\triangle E_{\phi'}\rangle$ consists of
three paths and one cycle (in the middle), and the circular diagram illustrates
$M(\phi,\phi')$ (in the right fragment).
\begin{figure}[htb]
\vspace{0.3cm}
\begin{center}
\includegraphics{qr5}
\end{center}
\vspace{-0.3cm}
\caption{Flows $\phi$ and $\phi'$ (left); $\langle E_\phi\triangle
E_{\phi'}\rangle$ (middle); $M(\phi,\phi')$ (right)}
\label{fig:phi}
\end{figure}
\noindent \textbf{Ordinary flow exchange operation.} Let us be given a double
flow $(\phi,\phi')$ for a cortege $(I|J,\,I'|J')$. Fix a couple $\pi=\{i,j\}\in
M(\phi,\phi')$. The operation w.r.t. $\pi$ rearranges $(\phi,\phi')$ into
another double flow $(\psi,\psi')$ for some $(\tilde I|\tilde J,\,\tilde
I'|\tilde J')$, as follows.
Consider the exchange path $P=P(\pi)$ corresponding to $\pi$, and let $\Escr$
be the set of edges of $P$. Define
$$
\tilde I:=I\triangle (\pi\cap \Yr), \quad \tilde I':=I'\triangle (\pi\cap \Yr), \quad
\tilde J:=J\triangle (\pi\cap \Yc), \quad \tilde J':=J'\triangle (\pi\cap \Yc).
$$
The following simple lemma is shown in~\cite{DK}.
\begin{lemma} \label{lm:phi-psi}
The subgraph $\psi$ induced by $E_\phi\triangle\Escr$ gives a $(\tilde I|\tilde
J)$-flow, and the subgraph $\psi'$ induced by $E_{\phi'}\triangle \Escr$ gives
a $(\tilde I'|\tilde J')$-flow in $G$. Furthermore, $E_\psi\cup E_{\psi'}=
E_\phi\cup E_{\phi'}$,\; $E_\psi\triangle E_{\psi'}= E_\phi\triangle E_{\phi'}$
($=U$), and $M(\psi,\psi')=M(\phi,\phi')$.
\end{lemma}
We call the transformation $(\phi,\phi')\stackrel{\pi}\longmapsto (\psi,\psi')$
in this lemma the \emph{ordinary flow exchange operation} for $(\phi,\phi')$
\emph{using} $\pi\in M(\phi,\phi')$ (or using $P(\pi)$). Clearly a similar
operation applied to $(\psi,\psi')$ using the same $\pi$ returns
$(\phi,\phi')$. The picture below illustrates flows $\psi,\psi'$ obtained from
$\phi,\phi'$ in Fig.~\ref{fig:phi} by the ordinary exchange operations using
the path $P_2$ (left) and the path $P_3$ (right).
\vspace{0.cm}
\begin{center}
\includegraphics{qr6}
\end{center}
\vspace{-0.3cm}
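In set-theoretic terms the operation is a pair of symmetric differences with the edge set of the exchange path, which makes the invariants stated in the lemma above immediate. A Python sketch (edges encoded as abstract labels; a toy illustration, not tied to a concrete SE-graph):

```python
def exchange(phi, phi2, path_edges):
    """Ordinary flow exchange: replace each flow's edge set by its
    symmetric difference with the edges of the exchange path."""
    return set(phi) ^ set(path_edges), set(phi2) ^ set(path_edges)
```

Applying the operation twice with the same path returns the original pair, and both the union and the symmetric difference of the two edge sets are preserved.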
Now we formulate the second theorem of this paper; it will be proved in
Section~\SEC{exchange}.
\begin{theorem} \label{tm:single_exch}
Let $\phi$ be an $(I|J)$-flow, and $\phi'$ an $(I'|J')$-flow in $G$. Let
$(\psi,\psi')$ be the double flow obtained from $(\phi,\phi')$ by the ordinary
flow exchange operation using a couple $\pi=\{f,g\}\in M(\phi,\phi')$. Then:
{\rm(i)} when $\pi$ is an $R$- or $C$-couple and $f<g$, we have
\begin{eqnarray*}
w(\phi)w(\phi')=qw(\psi)w(\psi') \quad &\mbox{in case}&\;\; f\in I\cup J; \\
w(\phi)w(\phi')=q^{-1}w(\psi)w(\psi') \quad &\mbox{in case}&\;\;
f\in I'\cup J';
\end{eqnarray*}
{\rm(ii)} when $\pi$ is an $RC$-couple, we have
$w(\phi)w(\phi')=w(\psi)w(\psi')$.
\end{theorem}
\section{\Large Commutation properties of paths} \label{sec:two_paths}
This section contains auxiliary lemmas that will be used in the proofs of
Theorems~\ref{tm:Lind} and~\ref{tm:single_exch}. They deal with special pairs
$P,Q$ of paths in an SE-graph $G=(V,E;R,C)$ and compare the weights $w(P)w(Q)$
and $w(Q)w(P)$. Similar or close statements for Cauchon graphs are given
in~\cite{cast1,cast2}, and our method of proof is somewhat similar and rather
straightforward as well.
We need some terminology, notation and conventions.
When it is not confusing, vertices, edges, paths and other objects in $G$ are
identified with their corresponding images in the plane. We assume that the
sets $R$ and $C$ lie on the coordinate rays $(0,\Rset_{\ge 0})$ and
$(\Rset_{\ge 0},0)$, respectively (then $G$ is disposed within $\Rset^2_{\ge
0}$). The coordinates of a point $v$ in $\Rset^2$ (e.g., a vertex $v$ of $G$)
are denoted as $(\alpha(v),\beta(v))$. W.l.o.g. we may assume that two vertices
$u,v\in V$ have the same first (second) coordinate if and only if they belong
to a vertical (resp. horizontal) path in $G$, in which case $u,v$ are called
\emph{V-dependent} (resp. \emph{H-dependent}). When $u,v$ are V-dependent,
i.e., $\alpha(u)=\alpha(v)$, we say that $u$ is \emph{lower} than $v$ (and $v$
is \emph{higher} than $u$) if $\beta(u)< \beta(v)$. (In this case the
commutation relation $uv=qvu$ takes place.)
Let $P$ be a path in $G$. We denote: the first and last vertices of $P$ by
$s_P$ and $t_P$, respectively; the \emph{interior} of $P$ (the set of points of
$P-\{s_P,t_P\}$ in $\Rset^2$) by $\Inter(P)$; the set of horizontal edges of
$P$ by $E^H_P$; and the projection $\{\alpha(x)\;\colon x\in P\}$ by
$\alpha(P)$. Clearly if $P$ is directed, then $\alpha(P)$ is the interval
between $\alpha(s_P)$ and $\alpha(t_P)$.
For a directed path $P$, the following are equivalent: $P$ is non-vertical;
$E^H_P\ne \emptyset$; $\alpha(s_P)\ne\alpha(t_P)$; we will refer to such a $P$
as a \emph{standard} path.
For a standard path $P$, we will take advantage of a compact expression for
the weight $w(P)$. We call a vertex $v$ of $P$ \emph{essential} if either $P$
makes a turn at $v$ (changing the direction from horizontal to vertical or
back), or $v=s_P\not\in R$ and the first edge of $P$ is horizontal, or $v=t_P$
and the last edge of $P$ is horizontal. If $u_0,u_1,\ldots,u_k$ is the sequence
of essential vertices of $P$ in the natural order, then the weight of $P$ can
be expressed as
\begin{equation} \label{eq:wP2}
w(P)=u_0^{\sigma_0}u_1^{\sigma_1}\ldots u_k^{\sigma_k},
\end{equation}
where $\sigma_i=1$ if $P$ makes a \horvert-turn at $u_i$ or if $i=k$, while
$\sigma_i=-1$ if $P$ makes a \verthor-turn at $u_i$ or if $i=0$. (Compare
with~\refeq{telescop} where a path from $R$ to $C$ is considered.) It is easy
to see that if $P$ does not begin in $R$, then its essential vertices are
partitioned into H-dependent pairs.
Throughout the rest of the paper, for brevity, we denote $q^{-1}$ by $\bar q$,
and for an inner vertex $v\in W$ regarded as a generator, we may denote
$v^{-1}$ by $\bar v$.
\smallskip
Now we start stating the desired lemmas on two directed paths $P,Q$. They deal
with the case when $P$ and $Q$ are \emph{weakly intersecting}, which means that
\begin{equation} \label{eq:pathsPQ}
P\cap Q=\{s_P,t_P\}\cap \{s_Q,t_Q\};
\end{equation}
in particular, $\Inter(P)\cap\Inter(Q)=\emptyset$. For such $P,Q$, we say that
$P$ is \emph{lower} than $Q$ if there are points $x\in P$ and $y\in Q$ such
that $\alpha(x)=\alpha(y)$ and $\beta(x)<\beta(y)$ (this property does not
depend on the choice of $x,y$). We define the value $\varphi=\varphi(P,Q)$ by
the relation
$$
w(P)w(Q)=\varphi w(Q)w(P).
$$
Obviously, $\varphi(P,Q)=1$ when $P$ or $Q$ is a V-path. In the lemmas below we
assume by default that both $P,Q$ are standard.
\begin{lemma} \label{lm:varphi=1}
Let $\{\alpha(s_P),\alpha(t_P)\}\cap\{\alpha(s_Q),\alpha(t_Q)\}\cap
\Rset_{>0}=\emptyset$. Then $\varphi(P,Q)=1$.
\end{lemma}
\begin{proof}
~Consider an essential vertex $u$ of $P$ and an essential vertex $v$ of $Q$.
Then for any $\sigma,\sigma'\in\{1,-1\}$, we have $u^\sigma
v^{\sigma'}=v^{\sigma'} u^\sigma$ unless $u,v$ are dependent.
Suppose that $u,v$ are V-dependent. From the hypotheses of the lemma it follows
that at least one of the following is true:
$\alpha(s_P)<\alpha(u)<\alpha(t_P)$, or $\alpha(s_Q)<\alpha(v)<\alpha(t_Q)$.
For definiteness assume the former. Then there is another essential vertex $z$
of $P$ such that $\alpha(z)=\alpha(u)=\alpha(v)$. Moreover, $P$ makes a
\horvert-turn at one of $u,z$, and a \verthor-turn at the other. Since $P\cap
Q=\emptyset$ (in view of~\refeq{pathsPQ}), the vertices $u,z$ are either both
higher or both lower than $v$. Let for definiteness $u,z$ occur in this order
in $P$; then $w(P)$ contains the terms $u,\bar z$. Let $w(Q)$ contain the term
$v^\sigma$ and let $uv^\sigma=\rho v^\sigma u$, where $\sigma\in\{1,-1\}$ and
$\rho\in\{q,\bar q\}$. Then $\bar z v^\sigma= \bar \rho v^\sigma \bar z$,
implying $u\bar z v^\sigma =v^\sigma u\bar z$. Hence the contributions to
$w(P)w(Q)$ and $w(Q)w(P)$ from the pairs using terms $u,z,v$ (namely
$\{u,v^\sigma\}$ and $\{\bar z,v^\sigma\}$) are equal.
Next suppose that $u,v$ are H-dependent. One may assume that
$\alpha(u)<\alpha(v)$. Then $Q$ contains one more essential vertex $y\ne v$
with $\beta(y)=\beta(v)=\beta(u)$. Also $\alpha(u)<\alpha(v)$ and $P\cap
Q=\emptyset$ imply $\alpha(u)<\alpha(y)$. Let for definiteness
$\alpha(y)<\alpha(v)$. Then $w(Q)$ contains the terms $\bar y,v$, and we can
conclude that the contributions to $w(P)w(Q)$ and $w(Q)w(P)$ from the pairs
using terms $u,y,v$ are equal (using the fact that
$\alpha(u)<\alpha(y),\alpha(v)$).
These reasonings imply $\varphi(P,Q)=1$.
\end{proof}
\begin{lemma} \label{lm:asP=asQ}
Let $\alpha(s_P)=\alpha(s_Q)>0$ and $\alpha(t_P)\ne\alpha(t_Q)$. Let $P$ be
lower than $Q$. Then $\varphi(P,Q)=q$.
\end{lemma}
\begin{proof}
~Let $u$ and $v$ be the first essential vertices in $P$ and $Q$, respectively.
Then $\alpha(u)=\alpha(s_P)=\alpha(s_Q)=\alpha(v)$ (in view of
$\alpha(s_P)=\alpha(s_Q)>0$). Since $P$ is lower than $Q$, we have
$\beta(u)\le\beta(v)$. Moreover, this inequality is strict (since
$\beta(u)=\beta(v)$ is impossible in view of~\refeq{pathsPQ} and the obvious
fact that $u,v$ are the tails of first H-edges in $P,Q$, respectively).
Now arguing as in the above proof, we can conclude that the discrepancy between
$w(P)w(Q)$ and $w(Q)w(P)$ can arise only due to swapping the vertices $u,v$.
Since $u$ gives the term $\bar u$ in $w(P)$, and $v$ the term $\bar v$ in
$w(Q)$, the contribution from these vertices to $w(P)w(Q)$ and $w(Q)w(P)$ are
expressed as $\bar u\bar v$ and $\bar v \bar u$, respectively. Since
$\beta(u)<\beta(v)$, we have $\bar u \bar v=q\bar v \bar u$, and the result
follows.
\end{proof}
\begin{lemma} \label{lm:atP=atQ}
Let $\alpha(t_P)=\alpha(t_Q)$ and let either $\alpha(s_P)\ne\alpha(s_Q)$ or
$\alpha(s_P)=\alpha(s_Q)=0$. Let $P$ be lower than $Q$. Then $\varphi(P,Q)=q$.
\end{lemma}
\begin{proof}
~We argue in the spirit of the proof of Lemma~\ref{lm:asP=asQ}. Let $u$ and $v$ be
the last essential vertices in $P$ and $Q$, respectively. Then
$\alpha(u)=\alpha(t_P)=\alpha(t_Q)=\alpha(v)$. Also $\beta(u)<\beta(v)$ (since
$P$ is lower than $Q$, and in view of~\refeq{pathsPQ} and the fact that $u,v$
are the heads of H-edges in $P,Q$, respectively). The conditions on
$\alpha(s_P)$ and $\alpha(s_Q)$ imply that the discrepancy between $w(P)w(Q)$
and $w(Q)w(P)$ can arise only due to swapping the vertices $u,v$ (using
reasonings as in the proof of Lemma~\ref{lm:varphi=1}). Observe that $w(P)$
contains the term $u$, and $w(Q)$ the term $v$. So the generators $u,v$
contribute $uv$ to $w(P)w(Q)$, and $vu$ to $w(Q)w(P)$. Now $\beta(u)<\beta(v)$
implies $uv=qvu$, and the result follows.
\end{proof}
\begin{lemma} \label{lm:1atP=asQ}
Let $\alpha(t_P)=\alpha(s_Q)$ and $\beta(t_P)\ge\beta(s_Q)$. Then
$\varphi(P,Q)=q$.
\end{lemma}
\begin{proof}
~Let $u$ be the last essential vertex in $P$ and let $v,z$ be the first and
second essential vertices of $Q$, respectively (note that $z$ exists because of
$0<\alpha(s_Q)<\alpha(t_Q)$). Then
$\alpha(u)=\alpha(t_P)=\alpha(s_Q)=\alpha(v)<\alpha(z)$. Also
$\beta(u)\ge\beta(t_P) \ge \beta(s_Q)\ge \beta(v)=\beta(z)$. Let $Q'$ and $Q''$
be the parts of $Q$ from $s_Q$ to $z$ and from $z$ to $t_Q$, respectively. Then
$\alpha(P)\cap\alpha(Q'')=\emptyset$, implying $\varphi_{P,Q''}=1$ (using
Lemma~\ref{lm:varphi=1} when $Q''$ is standard). Hence
$\varphi_{P,Q}=\varphi_{P,Q'}$.
To compute $\varphi_{P,Q'}$, consider three possible cases.
(a) Let $\beta(u)>\beta(v)$. Then $u,v$ form the unique pair of dependent
essential vertices for $P,Q'$. Note that $w(P)$ contains the term $u$, and
$w(Q')$ contains the term $\bar v$. Since $\beta(u)>\beta(v)$, we have $u\bar
v=q\bar vu$, implying $\varphi_{P,Q'}=q$.
(b) Let $u=v$ and let $u$ be the unique essential vertex of $P$ (in other
words, $P$ is an H-path with $s_P\in R$). Note that $u=v$ and
$\beta(t_P)\ge\beta(s_Q)$ imply $t_P=u=v=s_Q$. Also $\alpha(u)<\alpha(z)$ and
$\beta(u)=\beta(z)$; so $u,z$ are dependent essential vertices for $P,Q'$ and
$uz=qzu$. We have $w(P)=u$ and $w(Q')=\bar u z$ (in view of $u=v$). Then $u\bar
u z=\bar u uz=q\bar u z u$ gives $\varphi_{P,Q'}=q$.
(c) Now let $u=v$ and let $y$ be the essential vertex of $P$ preceding $u$.
Then $t_P=u=v=s_Q$, ~$\beta(y)=\beta(u)=\beta(z)$, and
$\alpha(y)<\alpha(u)<\alpha(z)$. Hence $y,u,z$ are dependent, $w(P)$ contains
$\bar yu$, and $w(Q')=\bar uz$. We have
$$
\bar y u\bar u z= \bar y\bar u uz=(q\bar u\bar y)(qzu)
=q^2 \bar u(\bar q z\bar y) u=q\bar u z\bar y u,
$$
again obtaining $\varphi_{P,Q'}=q$.
\end{proof}
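The $q$-power bookkeeping in such case analyses can be verified mechanically. The following Python sketch uses our own simplified encoding: generators sit on one horizontal line, indexed by position, with $g_p\,g_{p'}=q\,g_{p'}\,g_p$ for $p<p'$; a word of generator powers is normal-ordered while the accumulated power of $q$ is tracked. The computation in case (c) of the last proof then comes out as expected:

```python
def normal_order(word):
    """Sort a word of (position, exponent) generator powers, where
    g_p g_p' = q g_p' g_p for positions p < p' on one line.
    Returns (k, normal_word): the input equals q**k times normal_word."""
    w, k = list(word), 0
    changed = True
    while changed:
        changed = False
        i = 0
        while i < len(w) - 1:
            (p, a), (p2, b) = w[i], w[i + 1]
            if p > p2:                  # swap out-of-order pair: costs q^{-ab}
                k -= a * b
                w[i], w[i + 1] = w[i + 1], w[i]
                changed = True
            elif p == p2:               # same generator: merge exponents
                if a + b == 0:
                    del w[i:i + 2]
                else:
                    w[i:i + 2] = [(p, a + b)]
                changed = True
            else:
                i += 1
    return k, w

# case (c): alpha(y) < alpha(u) < alpha(z);
# check   ybar u ubar z = q * (ubar z ybar u)
y, u, z = 0, 1, 2
lhs = normal_order([(y, -1), (u, 1), (u, -1), (z, 1)])
rhs = normal_order([(u, -1), (z, 1), (y, -1), (u, 1)])
```

Both sides reduce to $\bar y z$, with the powers of $q$ differing by exactly one.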
\begin{lemma} \label{lm:2atP=asQ}
Let $\alpha(t_P)=\alpha(s_Q)$ and $\beta(t_P)<\beta(s_Q)$. Then
$\varphi(P,Q)=\bar q$.
\end{lemma}
\begin{proof}
~Let $u$ be the last essential vertex of $P$, and $v$ the first essential
vertex of $Q$. Then $\alpha(u)=\alpha(t_P)=\alpha(s_Q)=\alpha(v)$, and
$\beta(t_P)<\beta(s_Q)$ together with~\refeq{pathsPQ} implies
$\beta(u)<\beta(v)$. Also $w(P)$ contains $u$ and $w(Q)$ contains $\bar v$. Now
$u\bar v=\bar q \bar v u$ implies $\varphi_{P,Q}=\bar q$.
\end{proof}
\Xcomment{
\begin{lemma} \label{lm:astP=astQ}
Let $\alpha(s_P)=\alpha(s_Q)> 0$ and $\alpha(t_P)=\alpha(t_Q)$. Let $P$ be
lower than $Q$. Then $\varphi(P,Q)=q^2$.
\end{lemma}
\begin{proof}
~In fact, this is a combination of Lemmas~\ref{lm:asP=asQ}
and~\ref{lm:atP=atQ}. More precisely, let $u,v$ (resp. $u',v'$) be the first
(resp. last) essential vertices of $P$ and $Q$, respectively. Then
$\alpha(u)=\alpha(v)$, ~$\beta(u)<\beta(v)$, ~$\alpha(u')=\alpha(v')$, and
$\beta(u')<\beta(v')$. Also $w(P)$ contains $\bar u,u'$ and $w(Q)$ contains
$\bar v,v'$. Now $\bar u\bar v=q\bar v\bar u$ and $u'v'=qv'u'$ imply
$\varphi(P,Q)=q^2$.
\end{proof}
}
\section{\Large Proof of Theorem~\ref{tm:Lind}} \label{sec:q_determ}
The proof can be conducted as a direct extension of the proof of a similar
Lindstr\"om's type result given by Casteels~\cite[Sec.~4]{cast1} for Cauchon
graphs. To make our description more self-contained, we outline the main
ingredients of the proof, leaving the details where needed to the reader.
Let $(I|J)\in\Escr^{m,n}$, $I=\{i(1)<\cdots <i(k)\}$ and
$J=\{j(1)<\cdots<j(k)\}$. Recall that an $(I|J)$-flow in an SE-graph $G$ (with
$m$ sources and $n$ sinks) consists of pairwise disjoint paths $P_1,\ldots,
P_k$ from the source set $R_I=\{r_{i(1)},\ldots,r_{i(k)}\}$ to the sink set
$C_J=\{c_{j(1)},\ldots,c_{j(k)}\}$, and (due to the planarity of $G$) we may
assume that each $P_d$ begins at $r_{i(d)}$ and ends at $c_{j(d)}$. Besides, we
are forced to deal with an arbitrary \emph{path system} $\Pscr=(P_1,\ldots,
P_k)$ in which for $d=1,\ldots,k$, ~$P_d$ is a directed path in $G$ beginning
at $r_{i(d)}$ and ending at $c_{j(\sigma(d))}$, where $\sigma(1),
\ldots,\sigma(k)$ are pairwise different, i.e., $\sigma=\sigma_\Pscr$ is a permutation
on $[k]$. (In particular, $\sigma_\Pscr$ is the identity if $\Pscr$ is a flow.)
We naturally partition the set of all path systems for $G$ and $(I|J)$ into the
set $\Phi=\Phi_G(I|J)$ of $(I|J)$-flows and the rest $\Psi=\Psi_G(I|J)$
(consisting of those path systems that contain intersecting paths). The
following property easily follows from the planarity of $G$
(cf.~\cite[Lemma~4.2]{cast1}):
\begin{numitem1} \label{eq:PiPi+1}
For any $\Pscr=(P_1,\ldots,P_k)\in\Psi$, there exist two \emph{consecutive}
intersecting paths $P_d,P_{d+1}$.
\end{numitem1}
The $q$-\emph{sign} of a permutation $\sigma$ is defined by
$$
\sgn_q(\sigma):=(-q)^{\ell(\sigma)},
$$
where $\ell(\sigma)$ is the length of $\sigma$ (see Sect.~\SEC{prelim}).
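For illustration, the identity permutation has length $0$ and
$\sgn_q(\mathrm{id})=1$, while a transposition of neighbors $\sigma=(d\;\,
d{+}1)$ has $\ell(\sigma)=1$ and
$$
\sgn_q(\sigma)=(-q)^{1}=-q;
$$
at $q=1$ the $q$-sign thus specializes to the classical sign of a permutation.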
Now we start computing the $q$-minor $[I|J]$ of the matrix $\Path_G$ with the
following chain of equalities:
\begin{eqnarray*}
[I|J]&=& \sum\nolimits_{\sigma\in S_k} \sgn_q(\sigma)
\left( \prod\nolimits_{d=1}^{k} \Path_G(i(d)|j(\sigma(d)))\right) \\
&=& \sum\nolimits_{\sigma\in S_k} \sgn_q(\sigma)
\left( \prod\nolimits_{d=1}^{k} \left(
\sum(w(P)\;\colon P\in \Pscr_G(i(d)|j(\sigma(d))))\right)\right) \\
&=&\sum(\sgn_q(\sigma_\Pscr)w(\Pscr)\;\colon \Pscr\in\Phi\cup\Psi) \\
&=&\sum(w(\Pscr)\;\colon \Pscr\in\Phi)
+\sum(\sgn_q(\sigma_\Pscr)w(\Pscr)\;\colon
\Pscr\in\Psi).
\end{eqnarray*}
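For instance, in the smallest nontrivial case $k=2$ the first equality reads
$$
[I|J]=\Path_G(i(1)|j(1))\,\Path_G(i(2)|j(2))
-q\,\Path_G(i(1)|j(2))\,\Path_G(i(2)|j(1)),
$$
since $S_2$ consists of the identity permutation, of length $0$, and the
transposition, of length $1$.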
Thus, we have to show that the second sum in the last row is zero. It will
follow from the existence of an involution $\eta:\Psi\to\Psi$ without fixed
points such that for each $\Pscr\in\Psi$,
\begin{equation} \label{eq:invol}
\sgn_q(\sigma_\Pscr)w(\Pscr)=-\sgn_q(\sigma_{\eta(\Pscr)}) w(\eta(\Pscr)).
\end{equation}
To construct the desired $\eta$, consider $\Pscr=(P_1,\ldots,P_k)\in\Psi$, take
the minimal $i$ such that $P_i$ and $P_{i+1}$ meet, take the last common vertex
$v$ of these paths, represent $P_i$ as the concatenation $K\circ L$, and
$P_{i+1}$ as $K'\circ L'$, so that $t_K=t_{K'}=s_L=s_{L'}=v$, and exchange the
portions $L,L'$ of these paths, forming $Q_i:=K\circ L'$ and $Q_{i+1}:=K'\circ
L$. We then define $\eta(\Pscr)$ to be the path system obtained from $\Pscr$ by replacing
$P_i,P_{i+1}$ by $Q_i,Q_{i+1}$. It is routine to check that $\eta$ is indeed an
involution (with $\eta(\Pscr)\ne\Pscr$) and that
\begin{equation} \label{eq:ell+1}
\ell(\sigma_{\eta(\Pscr)})=\ell(\sigma_\Pscr)+1,
\end{equation}
assuming w.l.o.g. that $\sigma(i)<\sigma(i+1)$. On the other hand, applying to
the paths $K,L,K',L'$ corresponding lemmas from Sect.~\SEC{two_paths} (among
Lemmas~\ref{lm:asP=asQ}--\ref{lm:1atP=asQ}), one can obtain
\begin{multline*}
\quad w(P_i)w(P_{i+1})=w(K)w(L)w(K')w(L')=qw(K)w(L)w(L')w(K') \\
=q^2 w(K)w(L')w(L)w(K')=qw(K)w(L')w(K')w(L)=qw(Q_i)w(Q_{i+1}),\quad
\end{multline*}
whence $w(\Pscr)=qw(\eta(\Pscr))$. This together with~\refeq{ell+1} gives
\begin{equation*}
\sgn_q(\sigma_\Pscr)w(\Pscr)+\sgn_q(\sigma_{\eta(\Pscr)}) w(\eta(\Pscr))
=(-q)^{\ell(\sigma_\Pscr)}q w(\eta(\Pscr))+(-q)^{\ell(\sigma_\Pscr)+1}
w(\eta(\Pscr)) =0,
\end{equation*}
yielding~\refeq{invol}, and the result follows. \hfill\qed
\section{\Large Proof of Theorem~\ref{tm:single_exch}} \label{sec:exchange}
Using notation as in the hypotheses of this theorem, we first consider the case
when
\begin{itemize}
\item[(C):] $\pi=\{f,g\}$ is a $C$-couple in $M(\phi,\phi')$ with $f<g$ and $f\in
J$.
\end{itemize}
(Then $f\in\Jw$ and $g\in\Jb$.) We have to prove that
\begin{equation} \label{eq:caseC}
w(\phi)w(\phi')=qw(\psi)w(\psi').
\end{equation}
The proof is given throughout Sects.~\SSEC{seglink}--\SSEC{degenerate}. The
other possible cases in Theorem~\ref{tm:single_exch} will be discussed in
Sect.~\SSEC{othercases}.
\subsection{Snakes and links.} \label{ssec:seglink}
Let $Z$ be the exchange path determined by $\pi$ (i.e., $Z=P(\pi)$ in notation
of Sect.~\SEC{double}). It connects the sinks $c_f$ and $c_g$, which may be
regarded as the first and last vertices of $Z$, respectively. Then $Z$ is
representable as a concatenation $Z=\bar Z_1\circ Z_2\circ\bar Z_3\circ \ldots
\circ \bar Z_{k-1}\circ Z_k$, where $k$ is even, each $Z_i$ with $i$ odd (even)
is a directed path concerning $\phi$ (resp. $\phi'$), and $\bar Z_i$ stands for
the path reversed to $Z_i$. More precisely, let $z_0:=c_f$, ~$z_k:=c_g$, and
for $i=1,\ldots,k-1$, denote by $z_i$ the common endvertex of $Z_i$ and
$Z_{i+1}$. Then each $Z_i$ with $i$ odd is a directed path from $z_i$ to
$z_{i-1}$ in $\langle E_\phi- E_{\phi'}\rangle$, while each $Z_i$ with $i$ even
is a directed path from $z_{i-1}$ to $z_i$ in $\langle E_{\phi'}-
E_{\phi}\rangle$.
We refer to $Z_i$ with $i$ odd (even) as a \emph{white} (resp. \emph{black})
\emph{snake}.
Also we refer to the vertices $z_1,\ldots,z_{k-1}$ as the \emph{bends} of $Z$.
A bend $z_i$ is called a \emph{peak} (a \emph{pit}) if both paths $Z_i,Z_{i+1}$
leave (resp. enter) $z_i$; then $z_1,z_3,\ldots,z_{k-1}$ are the peaks, and
$z_2,z_4,\ldots,z_{k-2}$ are the pits. Note that some peak $z_i$ and pit $z_j$
may coincide; in this case we say that $z_i,z_j$ are \emph{twins}.
The remaining parts of the flows $\phi$ and $\phi'$ consist of directed paths that we call
\emph{white} and \emph{black links}, respectively. More precisely, the white
(black) links correspond to the connected components of the subgraph $\phi$
(resp. $\phi'$) from which the interiors of all snakes are removed. So a link
connects either (a) a source and a sink (being a component of $\phi$ or
$\phi'$), or (b) a source and a pit, or (c) a peak and a sink, or (d) a pit and
a peak. We say that a link is \emph{unbounded} in case (a), \emph{semi-bounded}
in cases (b),(c), and \emph{bounded} in case (d). Note that
\begin{numitem1} \label{eq:4paths}
a bend $z_i$ occurs as an endvertex in exactly four paths among snakes and
links, namely: either in two snakes and two links (of different colors), or in
four snakes $Z_i,Z_{i+1},Z_j,Z_{j+1}$ (when $z_i,z_j$ are twins).
\end{numitem1}
We denote the sets of snakes and links (for $\phi,\phi',\pi$) by $\Sscr$ and
$\Lscr$, respectively; the corresponding subsets of white and black elements of
these sets are denoted as $\Sscr^\circ,\; \Sscr^\bullet,\; \Lscr^\circ,\;
\Lscr^\bullet$.
The picture below illustrates an example. Here $k=10$, the bends
$z_1,\ldots,z_9$ are marked by squares, the white and black snakes are drawn by
thin and thick solid zigzag lines, respectively, the white links
($L_1,\ldots,L_7$) by short-dotted lines, and the black links
($M_1,\ldots,M_6$) by long-dotted lines.
\vspace{0cm}
\begin{center}
\includegraphics{ex1}
\end{center}
\vspace{0cm}
The weight $w(\phi)w(\phi')$ of the double flow $(\phi,\phi')$ can be written
as the corresponding ordered product of the weights of snakes and links; let
$\Nscr$ be the string (sequence) of snakes and links in this product. The
weight of the double flow $(\psi,\psi')$ uses a string consisting of the same
snakes and links but taken in another order; we denote this string by
$\Nscr^\ast$.
We say that two elements among snakes and links are \emph{invariant} if they
occur in the same order in $\Nscr$ and $\Nscr^\ast$, and \emph{permuting}
otherwise. In particular, two links of different colors are invariant, whereas
two snakes of different colors are always permuting.
For example, observe that the string $\Nscr$ for the above illustration is
viewed as
$$
L_1L_2Z_1L_3Z_3Z_9L_4L_5Z_5L_6Z_7L_7M_1Z_2Z_{10}M_2Z_4M_3Z_8M_4M_5Z_6M_6,
$$
whereas $\Nscr^\ast$ is viewed as
$$
L_1L_2Z_2Z_{10}L_3Z_4L_6Z_8L_4L_5Z_6L_7M_1Z_1M_2Z_3Z_9M_4M_5Z_5M_3Z_7M_6.
$$
For $A,B\in\Sscr\cup\Lscr$, we write $A\prec B$ (resp. $A\prec^\ast B$) if $A$
occurs in $\Nscr$ (resp. in $\Nscr^\ast$) earlier than $B$. We define
$\varphi_{A,B}=\varphi_{B,A}:=1$ if $A,B$ are invariant, and define
$\varphi_{A,B}=\varphi_{B,A}$ by the relation
\begin{equation} \label{eq:phiAB}
w(A)w(B)=\varphi_{A,B}\, w(B)w(A)
\end{equation}
if $A,B$ are permuting and $A\prec B$. Note that $\varphi_{A,B}$ is defined
somewhat differently than $\varphi(P,Q)$ in Sect.~\SEC{two_paths}.
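For example, if $A\prec B$ are permuting and their weights quasi-commute as
$$
w(A)\,w(B)=q\,w(B)\,w(A),
$$
then $\varphi_{A,B}=\varphi_{B,A}=q$, while commuting weights give
$\varphi_{A,B}=1$.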
For $A,B\in\Sscr\cup\Lscr$, we may use notation $(A,B)$ when $A,B$ are
permuting and $A\prec B$ (and may write $\{A,B\}$ when their orders by $\prec$
and $\prec^\ast$ are not important for us).
Our goal is to prove that in case~(C),
\begin{equation}\label{eq:Pi=q}
\prod(\varphi_{A,B}\;\colon A,B\in \Sscr\cup\Lscr)=q,
\end{equation}
whence~\refeq{caseC} will immediately follow.
We first consider the \emph{non-degenerate} case. This means the following
restriction:
\begin{numitem1} \label{eq:nondegenerate}
all coordinates $\alpha(z_1),\ldots,\alpha(z_{k-1}),
\alpha(c_1),\ldots,\alpha(c_n)$ of bends and sinks are different.
\end{numitem1}
The proof of~\refeq{Pi=q} subject to~\refeq{nondegenerate} will consist of
three stages I, II, III where we compute the total contribution from the pairs
of links, the pairs of snakes, and the pairs consisting of one snake and one
link, respectively. As a consequence, the following three results will be
obtained (implying~\refeq{Pi=q}).
\begin{prop} \label{pr:link-link}
In case~\refeq{nondegenerate}, the product $\varphi^I$ of the values
$\varphi_{A,B}$ over links $A,B\in\Lscr$ is equal to 1.
\end{prop}
\begin{prop} \label{pr:seg-seg}
In case~\refeq{nondegenerate}, the product $\varphi^{II}$ of the values
$\varphi_{A,B}$ over snakes $A,B\in\Sscr$ is equal to $q$.
\end{prop}
\begin{prop} \label{pr:seg-link}
In case~\refeq{nondegenerate}, the product $\varphi^{III}$ of the values
$\varphi_{A,B}$ where one of $A,B$ is a snake and the other is a link is equal
to 1.
\end{prop}
These propositions are proved in Sects.~\SSEC{prop1}--\SSEC{prop3}. Sometimes
it will be convenient for us to refer to a white (black) snake/link concerning
$\phi,\phi',\pi$ as a $\phi$-snake/link (resp. a $\phi'$-snake/link), and
similarly for $\psi,\psi',\pi$.
\subsection{Proof of Proposition~\ref{pr:link-link}.} \label{ssec:prop1}
Under the exchange operation using $Z$, any $\phi$-link becomes a $\psi$-link
and any $\phi'$-link becomes a $\psi'$-link. The white links occur in $\Nscr$
earlier than the black links, and similarly for $\Nscr^\ast$. Therefore, if
$A,B$ are permuting links, then they are of the same color. This implies that
$A\cap B=\emptyset$. Also each endvertex of any link either is a bend or
belongs to $R\cup C$. Then~\refeq{nondegenerate} implies that the sets
$\{\alpha(s_A),\alpha(t_A)\}\cap \Rset_{>0}$ and
$\{\alpha(s_B),\alpha(t_B)\}\cap \Rset_{>0}$ are disjoint. Now
Lemma~\ref{lm:varphi=1} gives $\varphi_{A,B}=1$, and the proposition follows.
\hfill\qed
\subsection{Proof of Proposition~\ref{pr:seg-seg}.} \label{ssec:prop2}
Consider two snakes $A=Z_i$ and $B=Z_j$, and let $A\prec B$. If $|i-j|>1$ then
$A\cap B=\emptyset$ and, moreover, $\{\alpha(s_A),\alpha(t_A)\}\cap
\{\alpha(s_B),\alpha(t_B)\}=\emptyset$ (since $Z$ is simple and in view
of~\refeq{nondegenerate}). This gives $\varphi_{A,B}=1$, by
Lemma~\ref{lm:varphi=1}.
Now let $|i-j|=1$. Then $A,B$ have different colors; hence $A$ is white and $B$
is black (in view of $A\prec B$). So $i$ is odd, and two cases are possible:
\smallskip
\noindent\underline{\emph{Case 1}:} ~$j=i+1$ and $z_i$ is a peak:
$z_i=s_A=s_B$;
\smallskip
\noindent\underline{\emph{Case 2}:} ~$j=i-1$ and $z_{i-1}$ is a pit:
$z_{i-1}=t_A=t_B$.
\smallskip
Cases 1,2 are divided into two subcases each.
\smallskip
\noindent\underline{\emph{Subcase 1a}:} ~$j=i+1$ and $A$ is lower than $B$.
\smallskip
\noindent\underline{\emph{Subcase 1b}:} ~$j=i+1$ and $B$ is lower than $A$.
\smallskip
\noindent\underline{\emph{Subcase 2a}:} ~$j=i-1$ and $A$ is lower than $B$.
\smallskip
\noindent\underline{\emph{Subcase 2b}:} ~$j=i-1$ and $B$ is lower than $A$.
\smallskip
(Recall that for directed paths $P,Q$ satisfying~\refeq{pathsPQ}, $P$ is said
to be \emph{lower} than $Q$ if there are $x\in P$ and $y\in Q$ with
$\alpha(x)=\alpha(y)$ and $\beta(x)<\beta(y)$.) Subcases~1a--2b are illustrated
in the picture:
\vspace{-0cm}
\begin{center}
\includegraphics{ex2}
\end{center}
\vspace{0cm}
Under the exchange operation using $Z$, any snake changes its color; so $A,B$
are permuting. Applying to $A,B$ Lemmas~\ref{lm:asP=asQ} and~\ref{lm:atP=atQ},
we obtain $\varphi_{A,B}=q$ in Subcases~1a,2a, and $\varphi_{A,B}=\bar q$ in
Subcases~1b,2b.
It is convenient to associate with a bend $z$ the number $\gamma(z)$ which is
equal to $+1$ if, for the corresponding pair $A\in\Sscr^\circ$ and
$B\in\Sscr^\bullet$ sharing $z$, ~$A$ is lower than $B$ (as in Subcases~1a,2a),
and equal to $-1$ otherwise (as in Subcases~1b,2b). Define
\begin{equation} \label{eq:gammaZ}
\gamma_Z:=\sum(\gamma(z)\;\colon z\;\; \mbox{a bend of}\;\; Z).
\end{equation}
Then $\varphi^{II}=q^{\gamma_Z}$. Thus, $\varphi^{II}=q$ is equivalent to
\begin{equation} \label{eq:gamma=1}
\gamma_Z=1.
\end{equation}
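As a hypothetical illustration of the bookkeeping in~\refeq{gammaZ}: if $Z$
has three bends, with the peaks $z_1,z_3$ as in Subcase~1a and the pit $z_2$
as in Subcase~2b, then
$$
\gamma_Z=\gamma(z_1)+\gamma(z_2)+\gamma(z_3)=1-1+1=1,
$$
in agreement with~\refeq{gamma=1}.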
To show~\refeq{gamma=1}, we are forced to deal with a more general setting.
More precisely, let us turn $Z$ into a simple cycle $D$ by combining the directed
path $Z_1$ (from $z_1$ to $z_0=c_f$) with the horizontal path from $c_f$ to
$c_g$ (to create the latter, we formally add to $G$ the horizontal edges
$(c_j,c_{j+1})$ for $j=f,\ldots,g-1$). The resulting directed path $\tilde Z_1$
from $z_1$ to $c_g=z_k$ is regarded as the new white snake replacing $Z_1$.
Then $\tilde Z_1$ shares the end $z_k$ with the black snake $Z_k$; so $z_k$ is a
pit of $D$, and $\tilde Z_1$ is lower than $Z_k$. Thus, compared with $Z$, the
cycle $D$ acquires an additional bend, namely, $z_k$. We have $\gamma(z_k)=1$,
implying $\gamma_D=\gamma_Z+1$. Then~\refeq{gamma=1} is equivalent to
$\gamma_D=2$.
In this way, we arrive at a new (more general) setting by considering an
arbitrary simple (non-directed) cycle $D$ rather than the special path $Z$.
Moreover, instead of an SE-graph as before, we can work with a more general
directed planar graph $G$ in which any edge $e=(u,v)$ points arbitrarily within
the south-east sector, i.e., satisfies $\alpha(u)\le \alpha(v)$ and
$\beta(u)\ge \beta(v)$. We call a graph $G$ of this sort a \emph{weak SE-graph}.
So now we are given a colored simple cycle $D$ in $G$, i.e., $D$ is
representable as a concatenation $\bar D_1\circ D_2\circ\ldots \circ \bar
D_{k-1}\circ D_k$, where each $D_i$ is a directed path in $G$; a path
(\emph{snake}) $D_i$ with $i$ odd (even) is colored white (resp. black). Let
$d_1,\ldots,d_k$ be the sequence of bends in $D$, i.e., $d_i$ is a common
endvertex of $D_{i-1}$ and $D_i$ (letting $D_0:=D_k$). We assume that $D$ is
oriented according to the direction of $D_i$ with $i$ even. When this
orientation is clockwise (counterclockwise) around a point in the open bounded
region $O_D$ of the plane surrounded by $D$, we say that $D$ is
\emph{clockwise} (resp. \emph{counterclockwise}). In particular, the cycle
arising from the above path $Z$ is clockwise.
Our goal is to prove the following
\begin{lemma} \label{lm:gammaD}
Let $D$ be a colored simple cycle in a weak SE-graph $G$. If $D$ is clockwise
then $\gamma_D=2$. If $D$ is counterclockwise then $\gamma_D=-2$.
\end{lemma}
\begin{proof}
~We use induction on the number $\eta(D)$ of bends of $D$. It suffices to
consider the case when $D$ is clockwise (since for a counterclockwise cycle
$D'=\bar D'_1\circ D'_2\circ\ldots \circ \bar D'_{k-1}\circ D'_k$, the reversed
cycle $\bar D'=\bar D'_k\circ D'_{k-1}\circ\ldots \circ \bar D'_2\circ D'_1$ is
clockwise, and it is easy to see that $\gamma_{\bar D'}=-\gamma_{D'}$).
W.l.o.g., one may assume that the coordinates $\beta(d_i)$ of all bends $d_i$
are different (as we can apply, if needed, a small perturbation to $D$,
which does not affect $\gamma$).
If $\eta(D)=2$, then $D=\bar D_1\circ D_2$, and the clockwise orientation of
$D$ implies that the path $D_1$ is lower than $D_2$. So
$\gamma(d_1)=\gamma(d_2)=1$, implying $\gamma_D=2$.
Now assume that $\eta(D)>2$. Then at least one of the following is true:
\smallskip
(a) there exists a peak $d_i$ such that the horizontal line through $d_i$ meets
$D$ on the left of $d_i$, i.e., there is a point $x$ in $D$ with
$\alpha(x)<\alpha(d_i)$ and $\beta(x)=\beta(d_i)$;
(b) there exists a pit $d_i$ such that the horizontal line through $d_i$ meets
$D$ on the right of $d_i$.
\smallskip
(This can be seen as follows. Let $d_j$ be a peak with $\beta(d_j)$ maximum. If
$\beta(d_{j-1})\le \beta(d_{j+1})$, then, by easy topological reasoning,
either the pit $d_{j+1}$ is as required in~(b) (when $d_{j+2}$ is to the right
of $D_{j+1}$), or the peak $d_{j+2}$ is as required in~(a) (when $d_{j+2}$ is
to the left of $D_{j+1}$), or both. And if $\beta(d_{j-1})> \beta(d_{j+1})$,
similar properties hold for $d_{j-1}$ and $d_{j-2}$.)
We may assume that case~(a) takes place (for case~(b) is symmetric to~(a)).
Choose the point $x$ as in~(a) with $\alpha(x)$ maximum and draw the horizontal
line-segment $L$ connecting the points $x$ and $d_i$. Then the interior of $L$
does not meet $D$. Two cases are possible:
\smallskip
(I) $\Inter(L)$ is contained in the region $O_D$; or
\smallskip
(O) $\Inter(L)$ is outside $O_D$.
\smallskip
Since $x$ cannot be a bend of $D$ (in view of $\beta(x)=\beta(d_i)$ and
$\beta(d_i)\ne\beta(d_{i'})$ for any $i'\ne i$), $x$ is an interior point of
some snake $D_j$; let $D'_j$ and $D''_j$ be the parts of $D_j$ from $s_{D_j}$
to $x$ and from $x$ to $t_{D_j}$, respectively. Using the facts that $D$ is
oriented clockwise and that this orientation agrees with the forward
(backward) direction of each black (resp. white) snake, one can conclude that
\begin{numitem1} \label{eq:casesIO}
(a) in case (I), ~$D_j$ is white and $\gamma(d_i)=-1$ (i.e., for the white
snake $D_i$ and black snake $D_{i+1}$ that share the peak $d_i$, ~$D_{i+1}$ is
lower than $D_i$); and (b) in case~(O), ~$D_j$ is black and $\gamma(d_i)=1$
(i.e., $D_i$ is lower than $D_{i+1}$)
\end{numitem1}
See the picture (where the orientation of $D$ is indicated):
\vspace{-0.3cm}
\begin{center}
\includegraphics{ex3}
\end{center}
\vspace{0cm}
The points $x$ and $d_i$ split the cycle (closed curve) $D$ into two parts
$\zeta',\zeta''$, where the former contains $D'_j$ and the latter contains $D''_j$.
We first examine case (I). The line $L$ divides the region $O_D$ into two parts
$O'$ and $O''$ lying above and below $L$, respectively. Orienting the curve
$\zeta'$ from $x$ to $d_i$ and adding to it the segment $L$ oriented from $d_i$
to $x$, we obtain a closed curve $D'$ surrounding $O'$. Note that $D'$ is
oriented clockwise around $O'$. We combine the paths $D'_j$, $L$ (from $x$ to
$d_i$) and $D_i$ into one directed path $A$ (going from $s_{D'_j}=s_{D_j}=d_j$
to $t_{D_i}=d_{i-1}$). Then $D'$ turns into a correctly colored simple cycle in
which $A$ is regarded as a white snake and the white/black snakes structure on
the rest preserves (cf.~\refeq{casesIO}(a)).
In its turn, the curve $\zeta''$ oriented from $d_{i}$ to $x$ plus the segment
$L$ (oriented from $x$ to $d_i$) form a closed curve $D''$ that surrounds $O''$
and is oriented clockwise as well. We combine $L$ and $D_{i+1}$ into one black
snake $B$ (going from $x$ to $d_{i+1}$). Then $D''$ becomes a correctly colored
cycle, and $x$ is a peak in it. (The point $x$ turns into a vertex of $G$.) We
have $\gamma(x)=1$ (since the white $D''_j$ is lower than the black $B$).
The creation of $D',D''$ from $D$ in case (I) is illustrated in the picture:
\vspace{-0.0cm}
\begin{center}
\includegraphics{ex4}
\end{center}
\vspace{-0.1cm}
We observe that, compared with $D$, the pair $D',D''$ misses the bend $d_i$
(with $\gamma(d_i)=-1$) but acquires the bend $x$ (with $\gamma(x)=1$). Then
\begin{equation} \label{eq:DD'D''}
\eta(D)=\eta(D')+\eta(D''),
\end{equation}
implying $\eta(D'),\eta(D'')<\eta(D)$. Therefore, we can apply induction. This
gives $\gamma_{D'}=\gamma_{D''}=2$. Now, by the reasoning above,
$$
\gamma_D=\gamma_{D'}+\gamma_{D''}+\gamma(d_i)-\gamma(x)=2+2-1-1=2,
$$
as required.
Next we examine case~(O). From the fact that $D$ is simple one can conclude that
the curve $\zeta'$ (containing $D'_j$) passes through the black snake
$D_{i+1}$, and the curve $\zeta''$ (containing $D''_j$) through the white snake
$D_i$. Adding to each of $\zeta',\zeta''$ a copy of $L$, we obtain closed
curves $D',D''$, respectively, each inheriting the orientation of $D$. They
become correctly colored simple cycles when we combine the paths
$D'_j,L,D_{i+1}$ into one black snake (from $d_{j-1}$ to $d_{i+1}$) in $D'$,
and combine the paths $L,D_i$ into one white snake (from the new bend $x$ to
$d_i$) in $D''$. Let $O',O''$ be the bounded regions in the plane surrounded by
$D',D''$, respectively. It is an easy topological exercise to see that
two cases are possible:
\smallskip
(O1) ~$O'$ includes $O''$ (and $O_D$);
\smallskip
(O2) ~$O''$ includes $O'$ (and $O_D$).
These cases are illustrated in the picture:
\vspace{-0.0cm}
\begin{center}
\includegraphics{ex5}
\end{center}
\vspace{0cm}
Then in case~(O1), ~$D'$ is clockwise and $D''$ is counterclockwise, whereas in
case~(O2) the behavior is the opposite. Also $\gamma(d_i)=1$ and $\gamma(x)=-1$.
Similar to case~(I), \refeq{DD'D''} is true and we can apply induction. Then in
case~(O1), we have $\gamma_{D'}=2$ and $\gamma_{D''}=-2$, whence
$$
\gamma_D=\gamma_{D'}+\gamma_{D''}+\gamma(d_i)-\gamma(x)=2-2+1-(-1)=2.
$$
And in case~(O2), we have $\gamma_{D'}=-2$ and $\gamma_{D''}=2$, whence
$$
\gamma_D=\gamma_{D'}+\gamma_{D''}+\gamma(d_i)-\gamma(x)=-2+2+1-(-1)=2.
$$
Thus, in all cases we obtain $\gamma_D=2$, yielding the lemma.
\end{proof}
This completes the proof of Proposition~\ref{pr:seg-seg}. \hfill\qed
\subsection{Proof of Proposition~\ref{pr:seg-link}.} \label{ssec:prop3}
Consider a link $L$. By Lemma~\ref{lm:varphi=1}, for any snake $P$,
~$\varphi_{L,P}\ne 1$ is possible only if $L$ and $P$ have a common endvertex
$v$. Note that $v\notin R\cup C$. In particular, it suffices to examine only
bounded and semi-bounded links.
First assume that $s_L\notin R$. Then there are exactly two snakes containing
$s_L$, namely, a white snake $A$ and a black snake $B$ such that $s_L=t_A=t_B$.
If $L$ is white, then $A$ and $L$ belong to the same path in $\phi$; therefore,
$A\prec L\prec B$. Under the exchange operation $A$ becomes black, $B$ becomes
white, and $L$ continues to be white. Then $B,L$ belong to the same path in
$\psi$; this implies $B\precast L\precast A$. So both pairs $(A,L)$ and $(L,B)$
are permuting, and Lemma~\ref{lm:1atP=asQ} gives $\varphi_{A,L}=q$ and
$\varphi_{L,B}=\bar q$, whence $\varphi_{A,L}\varphi_{L,B}=1$.
Now let $L$ be black. Then $A\prec B\prec L$ and $B\precast A\precast L$. So
both pairs $\{A,L\}$ and $\{B,L\}$ are invariant, whence
$\varphi_{A,L}=\varphi_{B,L}=1$.
The end $t_L$ is examined in a similar way. Assuming $t_L\notin C$, there are
exactly two snakes, a white snake $A'$ and a black snake $B'$, that contain
$t_L$, namely: $t_L=s_{A'}=s_{B'}$. If $L$ is white, then $L\prec A'\prec B'$
and $L\precast B'\precast A'$. Therefore, $\{L,A'\}$ and $\{L,B'\}$ are
invariant, yielding $\varphi_{L,A'}=\varphi_{L,B'}=1$. And if $L$ is black,
then $A'\prec L\prec B'$ and $B'\precast L\precast A'$. So both $(A',L)$ and
$(L,B')$ are permuting, and we obtain from Lemma~\ref{lm:1atP=asQ} that
$\varphi_{A',L}=\bar q$ and $\varphi_{L,B'}=q$, yielding
$\varphi_{A',L}\varphi_{L,B'}=1$.
This reasoning proves the proposition. \hfill\qed
\subsection{Degenerate case.} \label{ssec:degenerate}
We have proved relation~\refeq{Pi=q} in the non-degenerate case, i.e., subject
to~\refeq{nondegenerate}, and now our goal is to prove~\refeq{Pi=q} when the
set
$$
\Zscr:=\{z_1,\ldots,z_{k-1}\}\cup \{c_j\colon j\in J\cup J'\}
$$
contains distinct elements $u,v$ with $\alpha(u)=\alpha(v)$. We say that such
$u,v$ form a \emph{defect pair}. A special defect pair is formed by twins
$z_i,z_j$ (bends satisfying $i\ne j$, ~$\alpha(z_i)=\alpha(z_j)$ and
$\beta(z_i)=\beta(z_j)$). Another special defect pair is of the form
$\{s_P,t_P\}$ when $P$ is a \emph{vertical} snake or link, i.e.,
$\alpha(s_P)=\alpha(t_P)$.
We will show~\refeq{Pi=q} by induction on the number of defect pairs.
Let $a$ be the \emph{minimum} number such that the set $X:=\{u\in \Zscr\;\colon
\alpha(u)=a\}$ contains a defect pair. We denote the elements of $X$ as
$v_0,v_1,\ldots,v_r$, where for each $i$, ~$v_{i-1}$ is \emph{higher} than
$v_i$, which means that either $\beta(v_{i-1})>\beta(v_i)$, or $v_{i-1},v_i$
are twins and $v_{i-1}$ is a pit (while $v_{i}$ is a peak) in the exchange path
$Z$. The highest element $v_0$ in this order is also denoted by $u$.
In order to conduct induction, we deform the graph $G$ within a sufficiently
narrow vertical strip $S=[a-\eps,a+\eps]\times \Rset$ (where $0<\eps<
\min\{|\alpha(z)-a|\colon z\in \Zscr-X\}$) to get rid of the defect pairs
involving $u$ in such a way that the configuration of snakes/links in the
arising graph $\tilde G$ remains ``equivalent'' to the initial one. More
precisely, we shift the bend $u$ at a small distance ($<\eps$) to the left,
keeping the remaining elements of $\Zscr$; then the bend $u'$ arising in place
of $u$ satisfies $\alpha(u')<\alpha(u)$ and $\beta(u')=\beta(u)$. The
snakes/links with an endvertex at $u$ are transformed accordingly; see the
picture for an example.
\vspace{-0cm}
\begin{center}
\includegraphics{ex5b}
\end{center}
\vspace{-0.2cm}
Let $\varPi$ and $\tilde\varPi$ denote the L.H.S. value in~\refeq{Pi=q} for the
initial and new configurations, respectively. Under the deformation, the number
of defect pairs becomes smaller, so we may assume by induction that
$\tilde\varPi=q$. Thus, we have to prove that
\begin{equation} \label{eq:varPi}
\varPi=\tilde\varPi.
\end{equation}
We need some notation and conventions. For $v\in X$, the set of (initial)
snakes and links with an endvertex at $v$ is denoted by $\Pscr_v$. For
$U\subseteq X$, ~$\Pscr_U$ denotes $\cup(\Pscr_v\;\colon v\in U)$.
Corresponding objects for the deformed graph $\tilde G$ are usually denoted
with tildes as well; e.g.: for a path $P$ in $G$, its image in $\tilde G$ is
denoted by $\tilde P$; the image of $\Pscr_v$ is denoted by $\tilde \Pscr_{v}$
(or $\tilde \Pscr_{\tilde v}$), and so on. The set of standard paths in
$\Pscr_U$ (resp. $\tilde\Pscr_U$) is denoted by $\Pscr^{\rm st}_U$ (resp.
$\tilde\Pscr^{\rm st}_U$). Define
\begin{equation} \label{eq:u_X-u}
\varPi_{u,X-u}:=\prod(\varphi_{P,Q}\colon P\in\Pscr_u,\;
Q\in\Pscr_{X-u}).
\end{equation}
A similar product for $\tilde G$ (i.e., with $\tilde\Pscr_u$ instead of
$\Pscr_u$) is denoted by $\tilde\varPi_{u,X-u}$.
Note that~\refeq{varPi} is equivalent to
\begin{equation} \label{eq:varPiX}
\varPi_{u,X-u}=\tilde\varPi_{u,X-u}.
\end{equation}
This follows from the fact that for any paths $P,Q\in\Sscr\cup\Lscr$ different
from those involved in~\refeq{u_X-u}, the values $\varphi_{P,Q}$ and
$\varphi_{\tilde P,\tilde Q}$ are equal. (The only nontrivial case arises when
$P,Q\in\Pscr_u$ and $Q$ is vertical (so $\tilde Q$ becomes standard). Then
$t_Q=v_1$. Hence $Q\in \Pscr_{X-u}$, the pair $P,Q$ is involved in
$\varPi_{u,X-u}$, and the pair $\tilde P,\tilde Q$ in $\tilde\varPi_{u,X-u}$.)
To simplify our description technically, one trick will be of use. Suppose that
for each standard path $P\in\Pscr^{\rm st}_X$, we choose a point (not
necessarily a vertex) $v_P\in\Inter(P)$ in such a way that
$\alpha(s_P)<\alpha(v_P)<\alpha(t_P)$, and the coordinates $\alpha(v_P)$ for
all such paths $P$ are different. Then $v_P$ splits $P$ into two subpaths
$P',P''$, where we denote by $P'$ the subpath connecting $s_P$ and $v_P$ when
$\alpha(s_P)=a$, and connecting $v_P$ and $t_P$ when $\alpha(t_P)=a$, while
$P''$ is the rest. This provides the following property: for any
$P,Q\in\Pscr^{\rm st}_X$, ~$\varphi_{P',Q''}=\varphi_{Q',P''}=1$ (in view of
Lemma~\ref{lm:varphi=1}). Hence
$\varphi_{P,Q}=\varphi_{P',Q'}\varphi_{P'',Q''}$. Also $P''=\tilde P''$. It
follows that~\refeq{varPiX} is equivalent to the equality
$$
\prod(\varphi_{P',Q'}\colon P\in\Pscr_u,\;Q\in\Pscr_{X-u})
=\prod(\varphi_{\tilde P',\tilde Q'}\colon
P\in\Pscr_u,\;Q\in\Pscr_{X-u}).
$$
In light of this reasoning, it suffices to prove~\refeq{varPiX} in the
special case when
\begin{numitem1} \label{eq:assumption}
any $P\in\Pscr_u$ and $Q\in\Pscr_{X-u}$ satisfy
$\{\alpha(s_P),\alpha(t_P)\}\cap \{\alpha(s_Q),\alpha(t_Q)\}=\{a\}$.
\end{numitem1}
For $i=0,\ldots,r$, we denote by $A_i,B_i,K_i,L_i$, respectively, the white
snake, black snake, white link, and black link that have an endvertex at
$v_i$. Note that if $v_{i-1},v_i$ are twins, then the fact that $v_{i-1}$ is a
pit implies that $A_{i-1},B_{i-1}$ are the snakes entering $v_{i-1}$, and $A_i,B_i$
are the snakes leaving $v_i$; for convenience, we formally define $K_{i-1}=K_i$
and $L_{i-1}=L_i$ to be the trivial paths consisting of the same single
vertex $v_i$. Note that if $v_r\in C$, then some paths among $A_r,B_r,K_r,L_r$
vanish (e.g., both snakes and one link).
When vertices $v_i$ and $v_{i+1}$ are connected by a (vertical) path in
$\Sscr\cup \Lscr$, we denote such a path by $P_i$ and say that the vertex $v_i$
is \emph{open}; otherwise $v_i$ is said to be \emph{closed}. Note that $v_i,v_{i+1}$
can be connected by either one snake, or one link, or two links (namely,
$K_i,L_i$); in the latter case $P_i$ is chosen arbitrarily among them. In
particular, if $v_i,v_{i+1}$ are twins, then $v_i$ is open and the role of
$P_i$ is played by any of the trivial links $K_i,L_i$. Obviously, in a sequence
of vertical paths $P_i,P_{i+1},\ldots,P_{j}$, the snakes and links alternate.
One can see that if $P_i$ is a white snake, i.e., $P_i=A_i=A_{i+1}=:A$, then
both black snakes $B_i,B_{i+1}$ are standard, and we have $v_i=s_{B_i}$ and
$v_{i+1}=t_{B_{i+1}}$. See the left fragment of the picture:
\vspace{-0.3cm}
\begin{center}
\includegraphics{ex6}
\end{center}
\vspace{0cm}
Symmetrically, if $P_i$ is a black snake: $B_i=B_{i+1}=:B$, then the white
snakes $A_i,A_{i+1}$ are standard, $v_i=s_{A_i}$ and $v_{i+1}=t_{A_{i+1}}$; see
the right fragment of the above picture.
In its turn, if $P_i$ is a nontrivial white link, i.e., $P_i=K_i=K_{i+1}$, then
two cases are possible: either the black links $L_i,L_{i+1}$ are standard,
$v_i=s_{L_i}$ and $v_{i+1}=t_{L_{i+1}}$, or $L_i=L_{i+1}=P_i$. And if $P_i$ is
a black link, the behavior is symmetric. See the picture:
\vspace{-0.3cm}
\begin{center}
\includegraphics{ex7}
\end{center}
\vspace{0cm}
Now we are ready to start proving equality~\refeq{varPiX}. Note that the
deformation of $G$ changes none of the orders $\prec$ and $\precast$.
We say that paths $P,P'\in\Pstan_X$ are \emph{separated} (from each other) if
they are not contained in the same path of any of the flows
$\phi,\phi',\psi,\psi'$. The following observation will be of use:
\begin{numitem1} \label{eq:monochPQ}
if $P,P'\in\Pstan_X$ have the same color, are separated, and $P'$ is lower than
$P$, then $P'\prec P$; and similarly w.r.t. the order $\precast$ (concerning
$\psi,\psi'$).
\end{numitem1}
Indeed, suppose that $P,P'$ are white, and let $Q$ and $Q'$ be the paths of the
flow $\phi$ containing $P$ and $P'$, respectively. Since $P,P'$ are separated,
the paths $Q,Q'$ are different. Moreover, the fact that $P'$ is lower than $P$
implies that $Q'$ is lower than $Q$ (taking into account that $Q,Q'$ are
disjoint). Thus, $Q'$ precedes $Q$ in $\phi$, yielding $P'\prec P$, as
required. When $P,P'$ concern one of $\phi',\psi,\psi'$, the argument is
similar.
\smallskip
In what follows we will use the abbreviated notation $A,B,K,L$ for the paths
$A_0,B_0,K_0,L_0$ (respectively) having an endvertex at $u=v_0$. Also for
$R\in\Pscr_{X-u}$, we denote the product $\varphi_{A,R}\varphi_{B,R}
\varphi_{K,R}\varphi_{L,R}$ by $\varPi(R)$, and denote by $\tilde \varPi(R)$ a
similar product for the paths $\tilde A,\tilde B,\tilde K, \tilde L, \tilde R$
(concerning the deformed graph $\tilde G$). One can see that $\varPi_{u,X-u}$
(resp. $\tilde \varPi_{u,X-u}$) is equal to the product of the values
$\varPi(R)$ (resp. $\tilde\varPi(R)$) over $R\in\Pscr_{X-u}$.
To show~\refeq{varPiX}, we will examine several cases. First of all we consider
\smallskip
\noindent\underline{\emph{Case (R1)}:} ~$\{u\}$ is closed; in other words, all
paths $A,B,K,L$ are standard (taking into account that $u$ is the highest
vertex in $X$).
\begin{prop} \label{pr:caseR1}
In case~(R1), ~$\varPi(R)=\tilde\varPi(R)=1$ holds for any $R\in\Pscr_{X-u}$.
As a consequence, \refeq{varPiX} is valid.
\end{prop}
\begin{proof}
~Let $R\in \Pscr_{v_p}$ for $p\ge 1$. Observe that~\refeq{assumption} together
with the fact that the vertex $u$ is shifted under the deformation of $G$
implies that $\{\alpha(s_{\tilde P}),\alpha(t_{\tilde P})\} \cap
\{\alpha(s_{\tilde R}),\alpha(t_{\tilde R})\}=\emptyset$ holds for any $P\in
\Pscr_u$. This gives $\tilde\varPi(R)=1$, by Lemma~\ref{lm:varphi=1}.
Next we show the equality $\varPi(R)=1$. One may assume that $R$ is standard
(otherwise the equality is trivial). It is easy to see that in case~(R1), each
of $A,B,K,L$ is separated from $R$.
Note that $A,B,K,L,R$ are as follows: either (a) $t_A=t_B=s_K=s_L$ or (b)
$s_A=s_B=t_K=t_L$, and either (c) $\alpha(s_R)=a$ or (d) $\alpha(t_R)=a$. Let
us examine the possible cases when the combination of~(a) and~(d) takes place.
\smallskip
1) Let $R$ be a white link, i.e., $R=K_p$. Since $R$ is white and lower than
$A,B,K,L$, we have $R\prec A,B,K,L$ (cf.~\refeq{monochPQ}). Under the exchange
operation (which, as we know, changes the colors of snakes and preserves the
colors of links), $R$ remains white. Then $R\precast A,B,K,L$. Therefore, all
pairs $\{P,R\}$ with $P\in\Pscr_u$ are invariant, and $\varPi(R)=1$ is trivial.
\smallskip
2) Let $R=L_p$. Since $R$ is black, we have $A,K\prec R\prec B,L$. The exchange
operation changes the colors of $A,B$ and preserves the ones of $K,L,R$. Hence
$B,K\precast R\precast A,L$, giving the permuting pairs $(A,R)$ and $(R,B)$.
Lemma~\ref{lm:atP=atQ} applied to these pairs implies $\varphi_{A,R}=\bar q$
and $\varphi_{R,B}=q$. Then $\varPi(R)=\varphi_{A,R}\varphi_{R,B}=\bar q q=1$.
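The cancellations used in cases 1)--4) all reduce to the identity $\bar q q=1$ (reading $\bar q$ as the inverse $q^{-1}$, as the lemmas above suggest); a trivial numeric sanity check, with a generic nonzero rational standing in for the parameter $q$:

```python
from fractions import Fraction

# Generic nonzero stand-in for the deformation parameter q; here we read
# \bar q as the inverse q^{-1} (an assumption about the paper's notation).
q = Fraction(7, 3)
qbar = 1 / q

# Case 2): Pi(R) = phi_{A,R} * phi_{R,B} = qbar * q
# Case 3): Pi(R) = phi_{R,B} * phi_{R,K} = q * qbar
print(qbar * q, q * qbar)  # both equal 1
```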
\smallskip
3) Let $R=A_p$. Then $R\prec A,B,K,L$ and $B,K\precast R\precast A,L$ (since
the exchange operation changes the colors of $A,B,R$ but not $K,L$). This gives
the permuting pairs $(R,B)$ and $(R,K)$. Then $\varphi_{R,B}=q$, by
Lemma~\ref{lm:atP=atQ}, and $\varphi_{R,K}=\bar q$ by Lemma~\ref{lm:2atP=asQ},
and we have $\varPi(R)=\varphi_{R,B}\varphi_{R,K}=1$.
\smallskip
4) Let $R=B_p$. (In fact, this case is symmetric to the previous one, as it is
obtained by swapping $(\phi,\phi')$ and $(\psi,\psi')$. Yet we prefer to give a
proof in detail.) We have $A,K\prec R\prec B,L$ and $R\precast A,B,K,L$, giving
the permuting pairs $(A,R)$ and $(K,R)$. Then $\varphi_{A,R}=\bar q$, by
Lemma~\ref{lm:atP=atQ}, and $\varphi_{K,R}=q$, by Lemma~\ref{lm:2atP=asQ},
whence $\varPi(R)=1$.
\smallskip
The other combinations, namely,~(a) and~(c), ~(b) and~(c), ~(b) and~(d), are
examined in a similar way (where we appeal to appropriate lemmas from
Sect.~\SEC{two_paths}), and we leave this to the reader as an exercise.
\end{proof}
Next we consider
\smallskip
\noindent\underline{\emph{Case (R2)}:} ~$u$ is open; in other words, at least
one path among $A,B,K,L$ is vertical (going from $u$ to $v_1$).
\smallskip
It falls into several subcases examined in propositions below.
\begin{prop} \label{pr:caseR2_sep}
In case~(R2), let $R\in\Pstan_{X-u}$ be separated from $A,B,K,L$. Then
$\varPi(R)=\tilde\varPi(R)$.
\end{prop}
\begin{proof}
~We first assume that $u=v_0$ and $v_1$ are connected by exactly one path $P_0$
(which may be any of $A,B,K,L$) and give a reduction to the previous
proposition, as follows.
Suppose that we replace $P_0$ by a standard path $P'$ of the same color and
type (snake or link) such that $s_{P'}=u$ (and $\alpha(t_{P'})<a$). Then the
set $\Pscr'_u:=(\{A,B,K,L\}-\{P_0\})\cup\{P'\}$ becomes as in case~(R1), and by
Proposition~\ref{pr:caseR1}, the corresponding product $\Pi'(R)$ of values
$\varphi_{R,P}$ over $P\in\Pscr'_u$ is equal to 1. (This relies on the fact
that $R$ is separated from $A,B,K,L$, which implies validity of~\refeq{varPiX}
for $R$ and corresponding $P\in \Pscr'_u$.)
Now compare the effects from $P'$ and $\tilde P_0$. These paths have the same
color and type, and both are separated from, and higher than $R$. Also
$\alpha(s_{P'})=\alpha(t_{\tilde P_0})=a$ (since $s_{P'}=u$ and $t_{\tilde
P_0}=v_1$). Then using appropriate lemmas from Sect.~\SEC{two_paths}, one can
conclude that $\{\varphi_{R,P'},\varphi_{R,\tilde P_0}\}=\{q,\bar q\}$.
Therefore,
$$
\tilde\varPi(R)=\varphi_{R,\tilde P_0}=\varPi'(R)\varphi^{-1}_{R,P'} =\varPi(R).
$$
Now let $u$ and $v_1$ be connected by two paths, namely, by $K,L$. We again can
appeal to Proposition~\ref{pr:caseR1}. Consider $\Pscr''_u:=\{A,B,K'',L''\}$,
where $K'',L''$ are standard links (white and black, respectively) with
$s_{K''}=s_{L''}=u$. Then $\varPi''(R):=\prod(\varphi_{R,P}\colon
P\in\Pscr''_u)=1$ and $\{\varphi_{R,K''}, \varphi_{R,\tilde
K}\}=\{\varphi_{R,L''}, \varphi_{R,\tilde L}\}=\{q,\bar q\}$, and we obtain
$$
\tilde\varPi(R)=\varphi_{R,\tilde K}\varphi_{R,\tilde L}=
\varPi''(R)\varphi^{-1}_{R,K''}\varphi^{-1}_{R,L''} =\varphi_{R,A}\varphi_{R,B}
=\varPi(R),
$$
as required.
\end{proof}
\begin{prop} \label{pr:caseR2_nonsep}
In case~(R2), let $R$ be a standard path in $\Pscr_{v_p}$ with $p\ge 1$. Let
$R$ be not separated from at least one of $A,B,K,L$. Then
$\varPi(R)=\tilde\varPi(R)$.
\end{prop}
\begin{proof}
We first assume that $P_0$ is the unique vertical path connecting $u$ and $v_1$
(in particular, $u$ and $v_1$ are not twins). Then $R$ is not separated from
$P_0$.
Suppose that $P_0$ and $R$ are contained in the same path of the flow $\phi$;
equivalently, both $P_0,R$ are white and $P_0\prec R$. Then neither $\psi$ nor
$\psi'$ has a path containing both $P_0,R$ (this is easy to conclude from the
fact that one of $R$ and $P_{p-1}$ is a snake and the other is a link).
Consider four possible cases for $P_0,R$.
(a) Let both $P_0,R$ be links, i.e., $P_0=K$ and $R=K_p$. Then $A,K\prec
K_p\prec B,L$ and $K_p\precast B,K,A,L$ (since $K\precast K_p$ is impossible by
the above observation). This gives the permuting pairs $(A,K_p)$ and $(\tilde
K,K_p)$, yielding $\varphi_{A,K_p}=\varphi_{\tilde K,K_p}$.
(b) Let $P_0=K$ and $R=A_p$. Then $A,K\prec A_p\prec B,L$ and $B,K\precast
A_p\precast A,L$. This gives the permuting pairs $(A,A_p)$ and $(A_p,B)$,
yielding $\varphi_{A,A_p}\varphi_{\tilde A_p,B}=1=\varphi_{\tilde K,A_p}$.
(c) Let $P_0=A$ and $R=K_p$. Then $K,A\prec K_p\prec L,B$ and $K_p\precast
K,B,L,A$. This gives the permuting pairs $(K,K_p)$ and $(\tilde A,K_p)$,
yielding $\varphi_{K,K_p}=\varphi_{\tilde A,K_p}$.
(d) Let $P_0=A$ and $R=A_p$. Then $K,A\prec A_p\prec L,B$ and $K,B\precast
A_p\precast L,A$. This gives the permuting pairs $(\tilde A,A_p)$ and $(\tilde
A_p,B)$, yielding $\varphi_{\tilde A,A_p}=\varphi_{A_p,B}$.
In all cases, we obtain $\varPi(R)=\tilde\varPi(R)$.
When $P_0,R$ are contained in the same path in $\phi'$ (i.e., $P_0,R$ are black
and $P_0\prec R$), we argue in a similar way. The cases with $P_0,R$ contained
in the same path of $\psi$ or $\psi'$ are symmetric.
A similar analysis is applicable (yielding $\varPi(R)=\tilde\varPi(R)$) when
$u$ and $v_1$ are connected by two vertical paths (namely, $K,L$) and exactly
one relation among $K\prec R$, $L\prec R$, $K\precast R$ and $L\precast R$
takes place (equivalently: either $K,R$ or $L,R$ are separated, not both).
Finally, let $u$ and $v_1$ be connected by both $K,L$, and assume that $K,R$
are not separated, and similarly for $L,R$. An important special case is when
$p=1$ and $u,v_1$ are twins.
Note that from the assumption it easily follows that $R$ is a snake. If $R$ is
the white snake $A_p$, then we have $A,K\prec A_p\prec B,L$ and
$B,K,A,L\precast A_p$. This gives the permuting pairs $(A,A_p)$ and $(\tilde
K,A_p)$, yielding $\varphi_{A,A_p}=\varphi_{\tilde K,A_p}$ (since
$\alpha(t_A)=\alpha(t_{\tilde K})$). The case with $R=B_p$ is symmetric. In
both cases, $\varPi(R)=\tilde\varPi(R)$.
\end{proof}
\begin{prop} \label{pr:caseR2_P0}
Let $R=P_0$ be the unique vertical path connecting $u$ and $v_1$. Then
$\varPi(R)=\tilde\varPi(R)=1$.
\end{prop}
\begin{proof}
~The equality $\varPi(R)=1$ is trivial. To see $\tilde\varPi(R)=1$, consider
possible cases for $R$. If $R=K$, then $\tilde A\prec \tilde K\prec \tilde
B,\tilde L$ and $\tilde B\precast \tilde K\precast \tilde A,\tilde L$, giving
the permuting pairs $(\tilde A,\tilde K)$ and $(\tilde K,\tilde B)$ (note that
$t_{\tilde A}=t_{\tilde B}=s_{\tilde K}=\tilde u$). If $R=L$, then $\tilde
A,\tilde K,\tilde B\prec\tilde L$ and $\tilde B,\tilde K,\tilde A\precast
\tilde L$; so all pairs involving $\tilde L$ are invariant. If $R=A$, then
$\tilde K\prec\tilde A\prec \tilde L,\tilde B$ and $\tilde K,\tilde B,\tilde
L\precast \tilde A$, giving the permuting pairs $(\tilde A,\tilde L)$ and
$(\tilde A,\tilde B)$ (note that $s_{\tilde A}=s_{\tilde B}=t_{\tilde L}=\tilde
u$). And the case $R=B$ is symmetric to the previous one.
In all cases, using appropriate lemmas from Sect.~\SEC{two_paths} (and relying
on the fact that all paths $\tilde A,\tilde B,\tilde K,\tilde L$ are standard),
one can conclude that $\tilde\varPi(R)=1$.
\end{proof}
\begin{prop} \label{pr:caseR2_KL}
Let both $K,L$ be vertical. Then $\varPi(K)\varPi(L)=
\tilde\varPi(K)\tilde\varPi(L)=1$.
\end{prop}
\begin{proof}
The equality $\varPi(K)\varPi(L)=1$ is trivial. To see
$\tilde\varPi(K)\tilde\varPi(L)=1$, observe that $\tilde A\prec\tilde K\prec
\tilde B\prec \tilde L$ and $\tilde B\precast \tilde K\precast \tilde A\precast
\tilde L$. This gives the permuting pairs $(\tilde A,\tilde K)$ and $(\tilde
K,\tilde B)$. Using Lemma~\ref{lm:1atP=asQ}, we obtain $\varphi_{\tilde
A,\tilde K}=q$ and $\varphi_{\tilde K,\tilde B}=\bar q$, and the result follows.
\end{proof}
Taken together, Propositions~\ref{pr:caseR2_sep}--\ref{pr:caseR2_KL} embrace
all possibilities in case~(R2). Adding to them Proposition~\ref{pr:caseR1}
concerning case~(R1), we easily obtain the desired relation~\refeq{varPiX} in a
degenerate case.
This completes the proof of Theorem~\ref{tm:single_exch} in case~(C), namely,
relation~\refeq{caseC}. \hfill\qed
\subsection{Other cases.} \label{ssec:othercases}
Let $(I|J),(I'|J'),\phi,\phi',\psi,\psi'$ and $\pi=\{f,g\}$ be as in the
hypotheses of Theorem~\ref{tm:single_exch}. We have proved this theorem in
case~(C), i.e., when $\pi$ is a $C$-couple with $f<g$ and $f\in J$ (see the
beginning of Sect.~\SEC{exchange}). In other words, the exchange path
$Z=P(\pi)$, used to transform the initial double flow $(\phi,\phi')$ into the
new double flow $(\psi,\psi')$, connects the sinks $c_f$ and $c_g$ that are
covered by the ``white flow'' $\phi$ and the ``black flow'' $\phi'$,
respectively.
The other possible cases in the theorem are as follows:
\smallskip
(C1) ~$\pi$ is a $C$-couple with $f<g$ and $f\in J'$;
\smallskip
(C2) ~$\pi$ is an $R$-couple with $f<g$ and $f\in I$;
\smallskip
(C3) ~$\pi$ is an $R$-couple with $f<g$ and $f\in I'$;
\smallskip
(C4) ~$\pi$ is an $RC$-couple with $f\in I$ and $g\in J$;
\smallskip
(C5) ~$\pi$ is an $RC$-couple with $f\in I'$ and $g\in J'$.
\smallskip
Case~(C1) is symmetric to~(C). This means that if double flows $(\phi,\phi')$
and $(\psi,\psi')$ are obtained from each other by applying the exchange
operation using $\pi$ (which, in particular, changes the ``colors'' of both $f$
and $g$), and if one double flow is subject to~(C) (i.e., $f$ concerns the
first, ``white'', flow), then the other is subject to~(C1) (i.e., $f$ concerns
the second, ``black'', flow). Rewriting $w(\phi)w(\phi')=qw(\psi)w(\psi')$
(cf.~\refeq{caseC}) as $w(\psi)w(\psi')=q^{-1}w(\phi)w(\phi')$, we just obtain
the required equality in case~(C1) (where $(\psi,\psi')$ and $(\phi,\phi')$
play the roles of the initial and updated double flows, respectively).
For similar reasons, case~(C3) is symmetric to~(C2), and~(C5) is symmetric
to~(C4). So it suffices to establish the desired equalities merely in
cases~(C2) and~(C4).
To do this, we appeal to reasonings similar to those in
Sects.~\SSEC{prop1}--\SSEC{degenerate}. More precisely, it is not difficult to
see that descriptions in Sects.~\SSEC{prop1} and \SSEC{prop3} (concerning
link-link and snake-link pairs in $\Nscr$) remain applicable and
Propositions~\ref{pr:link-link} and~\ref{pr:seg-link} are directly extended to
cases~(C2) and~(C4). The method of getting rid of degeneracies developed in
Sect.~\SSEC{degenerate} does work, without any troubles, for~(C2) and~(C4) as
well.
As to the method in Sect.~\SSEC{prop2} (concerning snake-snake pairs in
case~(C)), it should be modified as follows. We use terminology and notation
from Sects.~\SSEC{seglink} and~\SSEC{prop2} and appeal to
Lemma~\ref{lm:gammaD}.
When dealing with case~(C2), we represent the exchange path $Z=P(\pi)$ as a
concatenation $Z_1\circ \bar Z_2\circ Z_3\circ\cdots \circ\bar Z_k$, where each
$Z_i$ with $i$ odd (even) is a snake contained in the black flow $\phi'$ (resp.
the white flow $\phi$). Then $Z_1$ begins at the source $r_g$ and $Z_k$ begins
at the source $r_f$. An example with $k=6$ is illustrated in the left fragment
of the picture:
\vspace{0cm}
\begin{center}
\includegraphics{ex8}
\end{center}
\vspace{0cm}
The common vertex (bend) of $Z_i$ and $Z_{i+1}$ is denoted by $z_i$. As before,
we associate with a bend $z$ the number $\gamma(z)$ (equal to 1 if, in the pair
of snakes sharing $z$, the white snake is lower than the black one, and $-1$
otherwise), and define $\gamma_Z$ as in~\refeq{gammaZ}. We turn $Z$ into a
simple cycle $D$ by combining the directed path $Z_k$ (from $r_f$ to $z_{k-1}$) with
the vertical path from $r_g$ to $r_f$, which is formally added to $G$. (In the
above picture, this path is drawn by a dotted line.) Then, compared with $Z$,
the cycle $D$ has an additional bend, namely, $r_g$. Since the extended white
path $\tilde Z_k$ is lower than the black path $Z_1$, we have $\gamma(r_g)=1$,
and therefore $\gamma_D=\gamma_Z+1$.
One can see that the cycle $D$ is oriented clockwise (where, as before, the
orientation is defined according to that of black snakes). So $\gamma_D=2$, by
Lemma~\ref{lm:gammaD}, implying $\gamma_Z=1$. This is equivalent to the
``snake-snake relation'' $\varphi^{II}=q$, and as a consequence, we obtain the
desired equality
$$
w(\phi)w(\phi')=qw(\psi)w(\psi').
$$
Finally, in case~(C4), we represent the exchange path $Z$ as the corresponding
concatenation $\bar Z_1\circ Z_2\circ \bar Z_3\circ\cdots \circ Z_{k-1}\circ
\bar Z_k$ (with $k$ odd), where the first white snake $Z_1$ ends at the sink
$c_f$ and the last white snake $Z_k$ begins at the source $r_g$. See the right
fragment of the above picture, where $k=5$. We turn $Z$ into a simple cycle $D$
by adding a new ``black snake'' $Z_{k+1}$ beginning at $r_g$ and ending at
$c_f$ (it is formed by the vertical path from $r_g$ to $(0,0)$, followed by the
horizontal path from $(0,0)$ to $c_f$; see the above picture). Compared with
$Z$, the cycle $D$ has two additional bends, namely, $r_g$ and $c_f$. Since the
black snake $Z_{k+1}$ is lower than both $Z_1$ and $Z_k$, we have
$\gamma(r_g)=\gamma(c_f)=-1$, whence $\gamma_D=\gamma_Z-2$. Note that the cycle
$D$ is oriented counterclockwise. Therefore, $\gamma_D=-2$, by
Lemma~\ref{lm:gammaD}, implying $\gamma_Z=0$. As a result, we obtain the
desired equality $w(\phi)w(\phi')=w(\psi)w(\psi')$.
This completes the proof of Theorem~\ref{tm:single_exch}.
\section{Introduction}
The discovery of the $X(3872)$ shortly after the turn of the century~\cite{Choi:2003ue}, followed by its confirmation by various collaborations \cite{Acosta:2003zx,Abazov:2004kp,Aubert:2004ns,Aaij:2011sn}, was a milestone in quarkonium physics.
For the first time a charmonium state was seen that is at odds with the interpretation as a conventional $c \bar c$ state.
To this day its true nature has not been determined unambiguously.
While a conventional charmonium state is ruled out, various alternative interpretations have been given.
On the one hand, the proximity of the $X(3872)$ mass to the $D D^{*}$ threshold inspires models which treat the $X(3872)$ as a molecular
$D D^{*}$ bound state with a small binding energy \cite{mol1,mol2}.
Other models treat the $X(3872)$ as a bound diquark-antidiquark (tetraquark) state \cite{tetra1,tetra2}. These are the best known and most frequently discussed models of the $X(3872)$, although there are also more exotic ideas; for example, the approach of Ref. \cite{combi_2_4} treats the $X(3872)$ as an admixture of two- and four-quark states.
For an overview of the situation and a detailed discussion of the various models see the reviews~\cite{Guo:2017jvc,Brambilla:2010cs,Esposito:2014rxa} and references therein.
In most cases the various interpretations study whether the generation of the $X(3872)$ within their model fits the charmonium spectrum, or whether the branching fractions for two- or three-body decays match the experimental observations. So far it has not been possible to determine the structure of the $X(3872)$, since the existing data can be well explained by quite different models.
The situation, however, may change if one applies the above discussed molecular or tetraquark models to describe the production of
exotic charmonium in hadronic reactions, for example $pp$ \cite{Bignamini:2009sk,Carvalho:2015nqf}, or in relativistic heavy ion collisions \cite{Cho:2010db,Cho:2011ew,Cho:2017dcy,Cho:2013rpa,Abreu:2016qci}.
High energy heavy ion collisions offer an interesting scenario to study the production
of multiquark states in general, and the $X(3872)$ resonance in particular \cite{Cho:2010db,Cho:2011ew,Cho:2017dcy,Cho:2013rpa,Abreu:2016qci}. Assuming that the production mechanism proceeds from the coalescence of its constituents, the ExHIC collaboration showed that the yield of the $X(3872)$ strongly depends on its internal structure \cite{Cho:2010db,Cho:2011ew,Cho:2017dcy}. In particular, at RHIC and LHC energies the $X(3872)$ production yield is about 20 times smaller for a tetraquark configuration than for a molecular structure.
This difference can become even stronger if one takes into account the further evolution of the $X(3872)$ tetraquark in the hadronic phase \cite{Cho:2013rpa,Abreu:2016qci}. The point is that if the $X(3872)$ is a tetraquark it will be produced (by coalescence of quarks and antiquarks) at the end of the hadronization of the quark-gluon plasma generated in the collision, while if the $X(3872)$ is a molecular state it will be formed by hadron coalescence much later, at the end of the hadronic phase evolution, at the so-called kinetic freeze-out. Thus, after being produced at the end of the quark-gluon plasma phase, the $X(3872)$ tetraquark interacts with other hadrons during the expansion of the hadronic matter. The $X(3872)$ can be destroyed in collisions with the comoving light mesons, mostly pions, for example in the reaction $X(3872)+\pi \rightarrow D+ \bar D$; at the same time, some $X(3872)$ particles can be generated through the inverse reactions, i.e. $D+ \bar D \rightarrow X(3872)+\pi$. The detailed study of all possible hadronic reactions performed in \cite{Torres:2014fxa,Abreu:2016qci} has shown that at the highest RHIC energies the final production yield of the $X(3872)$, considered as a tetraquark state, becomes about 80 times smaller than that for a $D D^{*}$ molecule \cite{Abreu:2016qci}, a result which relied on the production yield of the molecular $X(3872)$ state calculated in the hadron coalescence model of the ExHIC collaboration \cite{Cho:2010db,Cho:2011ew}.
Based on these studies, one may therefore tend to interpret the fact that the $X(3872)$ particle has so far not been seen in heavy ion experiments in favor of its tetraquark composition. However, the hadron coalescence model used to calculate the production yield for the molecular $X(3872)$ state \cite{Cho:2010db,Cho:2011ew,Cho:2017dcy} does not consider its possible modification in the hot hadronic (actually pionic) medium. This is the subject of this letter.
We study the behaviour of the $X(3872)$ in a finite-temperature pion bath, which we consider to be a first-level approximation of the matter generated in an ultrarelativistic heavy ion collision, under the assumption that it is a molecular state formed by charmed meson interactions. From our previous study of how charmed mesons behave under such conditions, see Ref. \cite{Cleven:2017fun}, we know that the charmed $D$ and $D^*$ mesons acquire a substantial width, reaching for example values of the order of $30-40$ MeV at a temperature $T=150$ MeV.
Since we assume that the $X(3872)$ is generated by the interactions of these charmed mesons, the modification of their spectral functions will necessarily affect the properties of this composite state and, consequently, its production yields.
\section{Framework}\label{sec:framework}
In this section we describe the framework employed to obtain the properties of the $X(3872)$ resonance, generated from the interactions of the charmed mesons in a pionic medium. These interactions are described by a combination of $SU(4)$ effective Lagrangians introduced by Gamermann et al. in Refs.~\cite{Gamermann:2006nm,Gamermann:2007fi} and the Imaginary Time Formalism (ITF) \cite{galekapustabook,lebellac} in a self-consistent approach. Details of the formalism can be found in our earlier work \cite{Cleven:2017fun}, where the $D$ and $D^*$ properties in a hot pionic gas were obtained from the $D\pi$ and $D^*\pi$ interactions.
Consistently, in the present work we also employ the $SU(4)$ effective Lagrangians to obtain the $D D^*/D_s D_s^*$ coupled-channel interaction leading to the generation of the $X(3872)$ with $J^{PC}=1^{++}$ and $I=0$ quantum numbers. The scattering potential between a vector and a pseudoscalar meson is given by
\begin{equation}
V_{ij}(s,t,u) = - \frac{\xi _{ij}}{4f^2}(s-u) \, \epsilon\cdot \epsilon^\prime \ ,
\label{eq:pot}
\end{equation}
where $s$ and $u$ are the usual Mandelstam variables and $\epsilon$, $ \epsilon^\prime$ are the polarization vectors of the vector mesons involved in the vertex. In the case studied here, where the relevant channels are $ D\bar D^*+c.c.$ and $D_s \bar D_s^*+c.c.$, the parameter $f$, which in the SU(3) sector stands for the pion decay constant, is replaced by that of the heavy $D$-meson, $f_D=165$~MeV. The matrix of coefficients $\xi _{ij}$ is given by:
\begin{equation}
\xi = \left( \begin{array}{cc} -\psi-2 & -\sqrt2 \\ -\sqrt2 & -\psi-1 \end{array}\right) \ ,
\end{equation}
where $\psi$ is a SU(4)-breaking parameter, defined as $\psi=-1/3+4/3(m_L/m_{H^\prime})$, which accounts for the different mass of the mesons that can be exchanged in a t-channel diagram that would be approximated by the point-like potential of Eq.~(\ref{eq:pot}). In the present $ D D^*/D_s D_s^* $ coupled channel case one may exchange light type mesons ($\rho,\omega,\phi,K^*$), for which we assume a common mass $m_L\sim 800$ MeV, or the heavy $J/\Psi$ one, with mass $ m_{H^\prime}\sim3000$ MeV.
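For orientation, the quantities entering the matrix $\xi$ are easy to evaluate numerically; the following sketch (plain Python, using the masses quoted above) is purely illustrative and shows that the SU(4)-breaking parameter amounts to a correction at the few-percent level:

```python
import numpy as np

# Masses quoted in the text (MeV): common light-vector mass and the J/psi scale
m_L, m_Hp = 800.0, 3000.0

# SU(4)-breaking parameter: psi = -1/3 + (4/3) (m_L / m_H')
psi = -1.0 / 3.0 + (4.0 / 3.0) * (m_L / m_Hp)

# Coefficient matrix xi for the {D Dbar*, D_s Dbar_s*} coupled channels
xi = np.array([[-psi - 2.0, -np.sqrt(2.0)],
               [-np.sqrt(2.0), -psi - 1.0]])

print(round(psi, 4))  # 0.0222: SU(4) breaking enters at the percent level
print(xi)
```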
The corresponding $s$-wave projection of this potential is then used as the kernel of the Bethe-Salpeter equation, which implicitly sums the multiple meson-meson scattering processes to all orders.
Within the on-shell formalism this simplifies to a simple algebraic equation that can be easily solved as
\begin{equation}\label{Eq:T}
T= (1 - VG)^{-1}V\,\vec \epsilon\cdot \vec \epsilon\,^\prime \ ,
\end{equation}
for the scattering of vector mesons off pseudoscalar ones.
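The matrix structure of this solution is worth making explicit: for any kernel matrix $V$ and diagonal loop matrix $G$, the amplitude $T=(1-VG)^{-1}V$ is precisely the resummation of the multiple-scattering series, i.e. it satisfies $T=V+VGT$, and a dynamically generated state corresponds to a zero of $\det(1-VG)$. A minimal numerical sketch with placeholder values (not the physical $V$ and $G$):

```python
import numpy as np

# Placeholder 2x2 kernel and diagonal loop matrix (illustrative values only,
# not the physical V and G discussed in the text)
V = np.array([[-2.0, -1.4],
              [-1.4, -1.0]])
G = np.diag([-0.010 + 0.002j, -0.008 + 0.0j])

# On-shell Bethe-Salpeter solution: T = (1 - V G)^{-1} V
T = np.linalg.solve(np.eye(2) - V @ G, V)

# T resums the multiple-scattering series: T = V + V G T
print(np.allclose(T, V + V @ G @ T))  # True
```

A pole of the amplitude, such as the one identified below with the $X(3872)$, appears at the (complex) energy where $\det(1-VG)$ vanishes.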
The diagonal matrix $G$ contains the two-meson loops and reads
\begin{eqnarray}\label{Eq:G_Vacuum}
G_{ii}(s) &=& {\rm i} \int\!\frac{\mathrm d^4q}{(2\pi)^4} \frac{1}{[q^2-m_1^2+{\rm i}\varepsilon][(P-q)^2-m_2^2+{\rm i}\varepsilon]}\\
&= &{\frac{1}{16\pi ^2}}\biggr[ \alpha+\log{\frac {m_1^2 }{ \mu ^2}}+{\frac{m_2^2-m_1^2+s}{ 2s}}\log{\frac{m_2^2}{ m_1^2}}+\nonumber\\
&&{\frac{{\rm p}}{\sqrt{s}}}\Big( \log{\frac{s-m_2^2+m_1^2+2{\rm p}\sqrt{s} }{ -s+m_2^2-m_1^2+2{\rm p}\sqrt{s}}}+\log{\frac{s+m_2^2-m_1^2+2{\rm p}\sqrt{s} }{ -s-m_2^2+m_1^2+ 2{\rm p}\sqrt{s}}}\Big)\biggr] \ ,
\end{eqnarray}
where the index $i$ refers to the pair of mesons with masses $m_1$ and $m_2$, ${\rm p}$ is the on-shell three-momentum of the mesons in the c.m. frame
and $P^2=s$. The scale $\mu$ is set to 1.5 GeV and the subtraction constant used here is $\alpha=-1.26$. With this model and parameters the $X(3872)$ emerges as a pole of the scattering amplitude a couple of MeV below the averaged $D D^*$ threshold.
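As a cross-check, the analytic loop above is straightforward to evaluate numerically. The sketch below uses generic equal masses in arbitrary units (not the physical charmed-meson masses) and verifies the expected behaviour: $G_{ii}$ is real below threshold, while above it the unitarity cut opens with $\mathrm{Im}\,G_{ii}=-{\rm p}/(8\pi\sqrt{s})$:

```python
import numpy as np

def G_loop(s, m1, m2, alpha=-1.26, mu=1.5):
    """Dimensionally regularized two-meson loop; s is pushed slightly above
    the real axis to select the physical branch of the logarithms."""
    s = s + 1e-12j
    rs = np.sqrt(s)
    p = np.sqrt((s - (m1 + m2)**2) * (s - (m1 - m2)**2)) / (2.0 * rs)
    bracket = (alpha + np.log(m1**2 / mu**2)
               + (m2**2 - m1**2 + s) / (2.0 * s) * np.log(m2**2 / m1**2)
               + (p / rs) * (np.log((s - m2**2 + m1**2 + 2.0 * p * rs)
                                    / (-s + m2**2 - m1**2 + 2.0 * p * rs))
                             + np.log((s + m2**2 - m1**2 + 2.0 * p * rs)
                                      / (-s - m2**2 + m1**2 + 2.0 * p * rs))))
    return bracket / (16.0 * np.pi**2)

m = 1.0                      # toy equal masses: threshold at s = 4
print(G_loop(3.0, m, m))     # below threshold: essentially real
print(G_loop(5.0, m, m))     # above threshold: Im G = -p/(8 pi sqrt(s))
```

The small positive imaginary shift of $s$ selects the physical Riemann sheet of the square root and the logarithms.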
The properties of the $X(3872)$ resonance in a hot pionic gas will be derived from a temperature-dependent amplitude obtained by solving Eq.~(\ref{Eq:T}) with a two-meson loop function $G$ that incorporates the medium effects. Within the ITF, the meson-meson loop at finite temperature reads
\begin{eqnarray}\label{Eq:G_TVac}
G_{MM^\prime}(P^0,\vec P;T) = \int\!\frac{\mathrm d^3q}{(2\pi)^3} \int\!\mathrm d\omega\int\!\mathrm d\omega^\prime \frac{S_M(\omega,\vec q;T)S_{M^\prime}(\omega^\prime,\vec P - \vec q;T)}{P^0-\omega-\omega^\prime+{\rm i}\varepsilon} [1+f(\omega,T)+f(\omega^\prime,T)] \ ,
\end{eqnarray}
where $f(\omega,T)=[{\rm exp}(\omega/T)-1]^{-1}$ is the meson Bose distribution function at temperature $T$, while $S_M(\omega,\vec q;T)$ denotes the spectral function of meson $M$,
\begin{equation}
S_M(\omega,\vec q;T) = -(1/\pi)\mathrm{Im}(D_M(\omega,\vec q;T))\ ,
\end{equation}
which is related to the imaginary part of the meson propagator given by
\begin{equation}
D_M(\omega,\vec q;T) = [\omega^2-\vec q\,^2-m_M^2-\Pi_M(\omega,\vec q;T)]^{-1} \ .
\label{eq:prop}
\end{equation}
The quantity $\Pi_M(\omega,\vec q;T)$ is the meson self-energy, which is obtained from closing the pion line in the $M\pi \to M\pi$ amplitude diagram, leading to
\begin{eqnarray}\label{Eq:Pi}
\Pi_M(p^0,\vec p;T) = \int\!\frac{\mathrm d^3q}{(2\pi)^3} \int\!\mathrm d\Omega
\frac{ f(\Omega,T)-f(\omega_\pi,T)}{(p^0)^2 - (\omega_\pi-\Omega)^2 + {\rm i}\varepsilon}
\left(-\frac1\pi\right) \mathrm{Im} T_{M \pi}(\Omega,\vec p+\vec q;T) \ .
\end{eqnarray}
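The thermal ingredients above can be sketched numerically. In a constant-width approximation $\Pi_M=-{\rm i}\,m_M\Gamma_M$ (purely illustrative: the actual self-energy of Eq.~(\ref{Eq:Pi}) is energy- and momentum-dependent), the propagator of Eq.~(\ref{eq:prop}) yields a relativistic Breit-Wigner spectral function normalized as $\int_0^\infty 2\omega\,S_M\,{\rm d}\omega\simeq 1$, while the Bose factors set the thermal weight of the pion bath:

```python
import math
import numpy as np

def bose(w, T):
    """Bose-Einstein occupation f(w, T) = 1 / (exp(w/T) - 1)."""
    return 1.0 / (math.exp(w / T) - 1.0)

def spectral(w, m, gamma):
    """S = -(1/pi) Im D at q = 0, for a constant-width Pi = -i m gamma."""
    D = 1.0 / (w**2 - m**2 + 1j * m * gamma)
    return -D.imag / np.pi

# Illustrative numbers (GeV): D-meson mass and a ~40 MeV thermal width
m, gamma = 1.87, 0.040
w = np.linspace(1e-3, 10.0, 400001)
S = spectral(w, m, gamma)

# Relativistic normalization: integral of 2 w S(w) dw over w > 0 is ~1
norm = np.sum(2.0 * w * S) * (w[1] - w[0])
print(round(norm, 3))  # ~0.99, the small deficit being the distant tails

# Pion occupation at w = m_pi grows quickly with temperature
print([round(bose(0.140, T), 3) for T in (0.050, 0.100, 0.150)])
```

At the temperatures shown in the figures the pion occupation at $\omega=m_\pi$ grows from about 0.07 to about 0.65, which is why medium effects become sizable towards $T=150$ MeV.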
\section{Results}
The self-consistent determination of the meson self-energies has been done in Ref.~\cite{Cleven:2017fun} and we show here the results for the most relevant mesons, $D$ and $D^*$. Their zero momentum spectral functions are shown in Fig.~\ref{Fig:SpecDDstar} as functions of energy $p^0$ for various temperatures. As already commented in Ref.~\cite{Cleven:2017fun}, the shift of the spectral function peak, related to the real part of the self-energy, is negligible, while the width, connected to the imaginary-part, becomes substantially larger as temperature increases.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\linewidth]{SpecFuncDDstar.eps}
\caption{Meson spectral function as a function of the energy $p^0$ at temperatures $T = 50$, 100, 150 MeV and $\vec{p} = 0$. (a): $D$; (b): $D^*$.}
\label{Fig:SpecDDstar}
\end{figure*}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.85\linewidth]{DDstar_All.eps}
\caption{Real part of the $DD^*$ loop (left panel), imaginary part of the $DD^*$ loop (right panel).
All quantities are shown at temperatures 0, 50, 100, 150~MeV. The dashed gray line represents the $DD^*$ threshold.}
\label{Fig:DDstar}
\end{figure*}
In the following we discuss the effect that the finite-temperature spectral functions of the charmed mesons have on the loop function and, consequently, on the amplitude $T_{D D^*}$ which signals the generation of the $X(3872)$.
The left and right panels of Fig.~\ref{Fig:DDstar} show the real and imaginary parts of the $DD^*$ loop, respectively, as functions of the total energy $P^0$ for a total momentum $\vec{P}=0$ and various temperatures.
Below threshold we find that the finite temperature real part retains its shape but is slightly shifted compared to the vacuum case, increasingly so with increasing temperature.
Above threshold the effect of the hot pion bath becomes more pronounced.
Instead of the sharp rise after the kink at threshold observed in vacuum, we see a smoother behaviour and a slower rise at finite temperature, resulting in differences of about a factor of two far above threshold. However, this is beyond the energy region of interest with regard to the $X(3872)$.
For the imaginary part of the loop we see a similar behaviour. The sharp opening of the unitarity cut at zero temperature is transformed into a smoother curve at finite temperature. It is interesting to note that there is a substantial strength in a range of a few tens of MeV below threshold, right where the vacuum imaginary part vanishes.
This is quite significant since it means that the energy region where the pole of the $X(3872)$ is located is affected strongly by the surrounding pions.
Thus the loosely bound $X(3872)$ is an excellent candidate to study the effect of the hot medium compared to more tightly bound states.
The noticeable effect of temperature on the $D D^*$ loop around threshold is due to an important modification of the $D\pi$ and $D^*\pi$ interactions in a hot medium composed essentially of pions. It is precisely the self-energy of the $D$ and $D^*$ mesons that makes their corresponding spectral functions acquire a substantial width, affecting the value of the $D D^*$ loop especially around threshold.
We note that the vanishing of the diagonal $D_s\pi$ and $D^*_s\pi$ potentials \cite{Gamermann:2006nm,Gamermann:2007fi} makes these unitarized interactions substantially weaker than their non-strange $D\pi$ and $D^*\pi$ counterparts, which would be reflected in narrower $D_s$ and $D^*_s$ spectral functions. This, together with the fact that the $D_s D_s^*$ channel lies about 300 MeV above the relevant region of interest for the $X(3872)$ meson, justifies evaluating the $G_{D_s D^*_s}$ loop with delta-type distributions for the $D_s$ and $D^*_s$ spectral functions.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.5\linewidth]{X_Alone.eps}
\caption{Absolute value of the unitarized amplitude for $DD^*$ scattering at temperatures 0, 50, 100, 150~MeV. The dashed gray line represents the $DD^*$ threshold; the vertical line represents the stable $X(3872)$ at $T=0$~MeV.}
\label{Fig:XTemp}
\end{figure*}
The solution of Eq.~(\ref{Eq:T}) with hot medium modified loops gives rise to unitarized amplitudes from which the properties of the $X(3872)$ can be extracted.
In Fig.~\ref{Fig:XTemp} we show the absolute value of the $D D^*$ scattering amplitude for various temperatures.
These results illustrate the impact that the modifications of the loop function discussed in the previous paragraphs have on the dynamical generation of the $X(3872)$.
The most important observation is that the peak associated with the $X(3872)$ becomes significantly wider with increasing temperature. This is due to the combination of two effects. Firstly, the peak broadens because of the appearance of a finite imaginary part of the amplitude at the pole position when temperature effects are included. Secondly, the repulsive shift in the real part of the loop below and around threshold, as clearly seen in the inset of Fig.~\ref{Fig:DDstar}(a), implies that the resonance is generated at higher energies, moving the peak position from slightly below threshold in vacuum to some MeV above it at finite temperature. This shift increases the width of the $X(3872)$ tremendously, since it can decay into most of the $D$ and $D^*$ meson spectral strength.
From the line shape of the amplitude we derive the $X(3872)$ width, going from being
stable at zero temperature to values $\sim$10, $\sim$30 and $\sim$60 MeV at temperatures 50, 100 and 150 MeV, respectively.
Thus, at typical kinetic freeze-out temperatures for RHIC and the LHC, the $X(3872)$ cannot be considered a loosely bound (deuteron-like) $D D^*$ state.
For example, in the calculations of the ExHIC collaboration \cite{Cho:2017dcy} the freeze-out temperature was established between 115 and 119 MeV, and our results indicate that the $X(3872)$ would be converted into a resonance with a width of about 40 MeV. In fact, during the whole hadronic phase of the reaction the width of the $X(3872)$ would slowly change from about 65 MeV at the hadronization temperature (162~MeV at the highest RHIC energy and 156~MeV at the LHC \cite{Cho:2017dcy}) to 40 MeV at freeze-out.
We expect that such a displacement of the $X(3872)$ peak above threshold, together with the related broadening of its shape, should translate, according to the hadron coalescence model, into a lower estimate of the production yield of a molecular-like $X(3872)$ state in energetic relativistic heavy-ion collisions.
\section{Summary and Conclusions}
In this work we have explored the properties of the $X(3872)$ in a hot pionic medium, assuming this resonance to be a molecular state generated by the interaction of $D \bar D^* + c.c.$ pairs and associated coupled channels. The model employed considers the broadening of the $D$ and $D^*$ mesons in the pionic medium as a consequence of their self-energies, developed from the corresponding temperature-dependent $D\pi$ and $D^*\pi$ amplitudes. Once the properties of the $D$ and $D^*$ mesons in a hot pionic medium are incorporated in the loop functions of the Bethe-Salpeter equation, the properties of the generated $X(3872)$ can be inferred from the resulting $D D^*$ amplitude.
We find a substantial broadening of the $X(3872)$ of a few tens of MeV when temperature effects are considered. The behaviour of the $X(3872)$ presented here is a unique feature of the molecular interpretation and is due to the relatively strong interactions of the charmed mesons with the pion bath. A tetraquark-type state would barely change its behaviour under the same circumstances.
It has been argued that if the $X(3872)$ is a molecular state it will be produced by hadron coalescence at the end of the hadronic phase with yields at least one order of magnitude higher than the surviving abundance, in the hadronic phase, of tetraquark-type states produced at the mixed phase by quark coalescence \cite{Cho:2010db,Cho:2011ew,Cho:2017dcy,Abreu:2016qci}. These estimations would need to be revisited in view of the findings of the present letter, as the temperature dependent widening of the $X(3872)$ established here will influence the predictions of its production yield in the molecular scenario.
We hope that taking our results into account will help in arriving at more realistic predictions for the heavy-ion collision observables that have been argued to be additional indicators for establishing the nature of the $X(3872)$.
\section*{Acknowledgments}
This work is partly supported by the Spanish Ministerio de Economia y Competitividad (MINECO) under the project MDM-2014-0369 of ICCUB (Unidad de Excelencia 'Mar\'\i a de Maeztu'),
and, with additional European FEDER funds, under the contracts FIS2014-54762-P and
FIS2017-87534-P.
\section{\label{intro} Introduction}
About 30 years ago, Yang \cite{yang}, generalizing the Dirac monopole, found a
(singular) spherically symmetric solution of the five-dimensional Euclidean
Yang-Mills (YM) theory with the $SU(2)$ gauge group. In the same paper, he also
showed that his $SU(2)$ monopole does not exist in more than five dimensions.
Yang's monopole (on which we shall dwell a bit more in a moment) went pretty
much unnoticed up until it emerged in a rather unlikely place, in the study
of the four-dimensional analog of the quantum Hall effect \cite{zhang}. [We
have nothing more to say about the Yang monopole in its relevance to the
quantum Hall effect, except to remark that no solution of the YM theory seems
to be wasted!]
The present work was inspired by and follows closely the recent article by
Gibbons and Townsend \cite{gib}, which does a couple of things at once. It
introduces gravity into the picture to get gravitating Yang monopoles, and
gives a reconstruction (and reinterpretation) of the higher dimensional
versions (with gauge groups other than $SU(2)$) of both the curved and the
flat space Yang monopoles. [See \cite{hp} for an earlier discussion of the
higher dimensional Yang monopoles.] Before we explain how we ``improve'' on
the work of Gibbons-Townsend, let us recapitulate some properties of the Yang
monopole.
The way Yang constructed his solution is quite interesting: He considered
self-dual, spherically symmetric single instanton (and anti-instanton) solutions
on $S^4$ and showed that they solve the full YM equations in five Euclidean
dimensions. As five-dimensional solutions, these instantons have a singularity at
the origin just like their three-dimensional cousin, ``the Dirac monopole''. The
action of the single self-dual instanton, \( \int {\cal F} \wedge {\cal F} \)
integrated over $S^{4}$, becomes a conserved monopole charge of the
five-dimensional Yang monopole. [Note that even though there are instantons whose
charge can take an arbitrary integral value in four dimensions, none save the $\pm 1$
charge solves the five-dimensional YM equations. Put in another way, there are no
Yang multi-monopoles! This is a curious result, but can be shown to be valid by
topological arguments \cite{hp}, as we will also argue.] As summarized in \cite{gib},
the rare appearance of the Yang monopole in high energy physics literature might be
due to the fact that unlike the ultra-violet (UV) divergence
[\( \int d^{3}x B^{2} \to g^{2} \int_{0}^{\infty} dr/r^{2} \to \infty\)] of the
Dirac monopole, the Yang monopole has an infra-red (IR) divergence, i.e. its mass
is IR divergent. We know that when compact Maxwell theory with Dirac monopoles is
considered as a low energy limit of, say, a broken $SO(3)$ Georgi-Glashow type theory,
finite mass 't Hooft-Polyakov monopoles emerge, which look exactly like Dirac monopoles
from a distance. Therefore, UV divergence of the Dirac monopole is not a great concern
if some unified theory picture is adopted. In the case of the Yang monopole, one
needs to construct a microscopic theory which takes over in the IR limit, which,
of course, is quite a difficult task. [See \cite{rt, khn} where some higher derivative
YM actions with Higgs fields are used to construct regular monopole solutions in
higher dimensions.]
Note that all of the discussion about the mass-divergence of the Yang monopole above
is in flat space. If we turn on gravity, as we shall do in this paper, the picture
changes drastically. Gravity could be blamed for introducing UV divergences, curvature
singularities and black holes, but since it clumps matter and fields, it should be a
good cure for IR divergences. Gibbons-Townsend \cite{gib} introduced the self-gravitating
Yang monopole and argued that, in contrast to this expectation, the mass is still IR
divergent (beyond four dimensions in their classification). Here, we show that once
the proper mass-energy definitions in asymptotically flat and AdS spaces are employed,
the Yang monopole does indeed have a finite mass in all dimensions. The main issue here
has to do with the choice of a proper background to work out the relevant mass-energy
formula.
Our second aim in this paper is to find Yang-monopole type solutions in more generic
gravity theories coupled with YM systems. To this end, we consider the cosmological
Einstein-Gauss-Bonnet (GB) theory, which appears as a low energy limit of some string
theories, and construct new solutions. Compared to General Relativity, GB theory behaves
better in the UV region, which is not our main concern here, but exact solutions in this
rather complicated theory are always good to have, since there are very few known anyway.
The organization of this paper is as follows. In the next section, we briefly review the
Dirac and Yang monopoles in flat space. In section \ref{efmo}, we show how the IR
divergence mentioned above is overcome. We describe the cosmological Einstein-GB-YM theory
in section \ref{set}, and present our ansatz for the Yang monopole, our assumptions
and the field equations we obtained from these in section \ref{eqns}. Section \ref{solns}
is devoted to the solutions found and their properties. Finally we conclude with section
\ref{conc}.
\section{\label{mono} Dirac and Yang monopoles in flat space}
As there can be occasional confusions with regard to gauge symmetries, spacetime
symmetries and the charge definitions of higher dimensional singular monopoles, we
start by giving a brief recollection of these concepts in flat space, with the help
of \cite{eh} and \cite{naka}.
Let us start with Yang's generalization of the three-dimensional Dirac monopole.
The latter lives on $\mathbb{R}^{3}$ with the origin removed. The Maxwell field
strength ${\cal F}$ is a 2-form whose flux $\int_{S^{2}} {\cal F}$ gives the magnetic
charge which can take \emph{any} integral value up to a normalization. Even though
the vector potential ${\cal A}$ does not reflect it, the physical field ${\cal F}$
is spherically symmetric, i.e. it is invariant under the action of $SO(3)$. [This in
fact means that spatial rotations can be undone with gauge transformations.] As is
well known, the singular Dirac monopole can be described by pure geometry:
$\mathbb{R}^{3}-\{0\}$ is homotopically equivalent to $S^{2}$, therefore one may study
the corresponding principal bundle $P(S^{2},U(1))$. [For the charge-1 monopole, this
is the Hopf fibration of $S^{3}$.] Then the
transition functions defined on the equator $S^{1}$ of $S^{2}$ classify the magnetic
charge; namely, they map $S^{1} \to U(1)$, having $\pi_{1}(U(1)) = \mathbb{Z}$. A
complementary picture is provided by the first Chern character of this ``monopole''
bundle, i.e. the magnetic flux equals $\int_{S^{2}} ch_{1}({\cal F})$.
Let us now look at the ``original'' Yang monopole \cite{yang} in $\mathbb{R}^{5}-\{0\}$,
which is homotopically equivalent to $S^{4}$. Yang considered the field strength
${\cal F}$ to be an $\mathfrak{su}(2)$-valued 2-form that ``generalizes'' the Dirac
monopole in the sense that the physically measurable quantities are $SO(5)$ invariant.
Now the relevant geometrical object is the principal bundle $P(S^{4},SU(2))$. Even though
the corresponding homotopy group $\pi_{3}(SU(2))$ equals $\mathbb{Z}$ which arises from
the maps $S^{3} \to SU(2)$, the Euclidean YM equation in five dimensions admits only
\emph{two} of these solutions. These are just the four-dimensional self-dual and anti
self-dual solutions (BPST instanton having the charge $\pm 1$). The charge is now given
(up to a normalization) by the integral of the second Chern character
\[ \int_{S^{4}} ch_{2}({\cal F}) = \int_{S^{4}} \, \mbox{Tr} \, ({\cal F} \wedge {\cal F}) = \pm 1 \, . \]
\section{\label{efmo} The effect of gravity on monopoles}
Here we will show how gravity cures the IR divergence of the mass-energy of the Yang monopole
in any even dimensions (time included), just as it cures the UV divergence of the 3+1-dimensional
Dirac monopole. [In this context, the latter is nothing but the celebrated Reissner-Nordstrom
black hole.] Note that our result about the mass of the Yang monopole is not in agreement with
\cite{gib}, who incorrectly claimed that the divergence persisted in the presence of gravity
except for four dimensions. The gist of the problem lies in the correct definition of gravitational
mass-energy. For this purpose, we resort to the procedure developed in \cite{ad, dt1, dt2}.
Stated briefly, the idea is to define gauge invariant conserved charges in a diffeomorphism
invariant theory by employing the generalized ``Gauss law'' provided there exist asymptotic
Killing symmetries of the relevant spacetimes. Put in another way, one chooses a vacuum that
satisfies the field equations as the background with respect to which background gauge
invariant quantities (such as energy) are calculated. These charges are expressible as
surface integrals and, by construction, their value for the background itself is always
zero. The latter is quite important.
Given a background Killing vector $\bar{\xi}^{\mu}$, the corresponding conserved charges
can be written as \footnote{Throughout, we set Newton's constant $G_{n} = 1$.} \cite{dt1, dt2}
\begin{equation}
Q^{\mu}(\bar{\xi}) = \frac{1}{4 \, \Omega_{n-2}} \,
\int d^{n-2} x \, \bar{\xi}_{\nu} \, {\cal G}^{\mu\nu}_{L} \,, \label{einyuk}
\end{equation}
where ${\cal G}^{\mu\nu}_{L}$ denotes the linearized Einstein tensor about
the background and $\Omega_{n-2}$ is the solid angle on the unit $(n-2)$-sphere.
As it would be too much of a digression to rederive this formula and its form
as a surface integral, we refer the reader to \cite{dt1, dt2} for the details
and simply employ it here.
For the gravitating Yang-monopole type solutions found in \cite{gib}, the
spacetime metric in Schwarzschild-like coordinates is simply given by
\begin{equation}
ds^{2} = - f^{2}(r) \, dt^{2} + \frac{dr^{2}}{f^{2}(r)} + r^{2} \,
d \Omega^{2}_{n-2} \,, \label{gibsol}
\end{equation}
where $d \Omega^{2}_{n-2}$ is the metric on the $(n-2)$-sphere and the function
$f(r)$ reads (in the form presented by \cite{gib} but adapted to our conventions
for $n \geq 4$)
\begin{equation}
f^{2}(r) = 1 - \frac{2 \, m}{r^{n-3}} - \frac{\mu^{2}}{r^{2}} -
\frac{2 \, \Lambda \, r^{2}}{(n-2)(n-1)} \,. \label{fgib}
\end{equation}
Here the constant $\mu$ is given by
\[ \mu^{2} = \frac{8 \pi (n-3)}{(n-5) \sigma^{2}} \,, \]
which follows from the normalization choice for the generators $\Sigma_{ij}$
of the gauge group $SO(n-2)$ (see \cite{gib} for details), and cannot be chosen
as zero. This is a rather important point. Together with the cosmological term, the
$\mu^{2}$ piece in (\ref{fgib}) constitutes the background with respect to which
any spacetime with nonvanishing $m$ can have a finite and meaningful mass.
Otherwise, apart from the special $n=4$ case, one always finds a divergent mass for
(\ref{gibsol}). Thus taking the background to be the spacetime (\ref{gibsol})
with $m=0$ in (\ref{fgib}), which has the timelike Killing vector
\( \bar{\xi}^{\mu} = (-1, 0, \dots, 0) \) in the notation and conventions
of \cite{dt1, dt2}, one finds the total energy of these solutions as
\[ E = \frac{1}{4 \Omega_{n-2}} \, \Omega_{n-2} \, (2(n-2) \, m)
= \frac{m(n-2)}{2} \,. \]
This is the result of the surface integration at $r \to \infty$ in the notation of
\cite{dt1, dt2}. To see how gravity modifies the IR divergence of the Yang monopole,
let us also compute the (gauge non-invariant) energy contained in a ball of radius
$R$ about the origin of spacetime. One then finds
\begin{equation}
E(R) = \frac{(n-2) \, m \, R^{n-5} \, \left[ 2 \, \Lambda \, R^{4} + (n-1)(n-2) (\mu^{2} - R^{2}) \right]}
{2 \, \big( \left[ 2 \, \Lambda \, R^{4} + (n-1)(n-2) (\mu^{2} - R^{2}) \right] R^{n-5} + 2 (n-1)(n-2) \, m \big)} \,,
\end{equation}
which is finite in contrast to the flat space result, that goes like $R^{n-5}$ and
diverges as $R \to \infty$ for $n \geq 6$ \cite{gib}.
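A quick numerical sketch confirms that $E(R)$ saturates at the finite total energy $m(n-2)/2$ as $R \to \infty$. The parameter values below ($n=6$, $m=1$, $\Lambda=0.1$, $\sigma=1$) are illustrative assumptions; $\mu^{2}=8\pi(n-3)/[(n-5)\sigma^{2}]$ is taken from the text:

```python
import math

def E_ball(R, n=6, m=1.0, Lam=0.1, sigma=1.0):
    """Energy contained in a ball of radius R for the gravitating Yang monopole."""
    mu2 = 8.0 * math.pi * (n - 3) / ((n - 5) * sigma**2)
    X = 2.0 * Lam * R**4 + (n - 1) * (n - 2) * (mu2 - R**2)
    num = (n - 2) * m * R**(n - 5) * X
    den = 2.0 * (X * R**(n - 5) + 2.0 * (n - 1) * (n - 2) * m)
    return num / den

E_total = (6 - 2) * 1.0 / 2.0          # (n-2) m / 2 = 2 for n = 6, m = 1
for R in (10.0, 1e3, 1e6):
    print(R, E_ball(R), E_total)
```

Already at modest radii $E(R)$ is within a fraction of a percent of $m(n-2)/2$, in sharp contrast with the flat-space growth $\sim R^{n-5}$.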
\section{\label{set} The cosmological Einstein-GB-YM theory}
Let us now describe the cosmological Einstein-GB-YM theory, the assumptions we make
and the solutions they lead to in various dimensions. We start with the action
\begin{equation}
I[e, {\cal A}] = \int {\cal L} \,, \label{act}
\end{equation}
where the Lagrangian density $n$-form
\begin{eqnarray}
{\cal L} & = & \frac{1}{2}\, R^{ab} \wedge \ast (e_{a} \wedge e_{b})
- \frac{1}{2 \sigma^{2}}\, \mbox{Tr} ({\cal F} \wedge \ast {\cal F} ) + \Lambda \ast 1
+ \frac{\gamma}{4}\, R^{ab} \wedge R^{cd} \wedge \ast ( e_{a}
\wedge e_{b} \wedge e_{c} \wedge e_{d} ) \label{lag}
\end{eqnarray}
contains the Einstein-Hilbert term, the YM Lagrangian for the 2-form
field ${\cal F}$ with coupling constant $\sigma$, a cosmological constant $\Lambda$ and
a second order Euler-Poincar\'{e} term (the so called GB term in this case) with coupling
constant $\gamma$.
The basic gravitational field variables are the coframe 1-forms $e^{a}$ in terms
of which the spacetime metric is decomposed as
\( {\mbox{\bf g}} = \eta_{ab} \, e^{a} \otimes e^{b} \), where
\( \eta_{ab} = \mbox{diag} \, ( -, +, +, \dots ) \) is the Minkowski metric.
The Hodge duality map is specified by the oriented volume element
\( \ast 1 = e^{0} \wedge e^{1} \wedge \dots \wedge e^{n-2} \wedge e^{n} \). The
torsion-free, Levi-Civita connection 1-forms $\omega^{a}\,_{b}$ satisfy the first
Cartan structure equations
\[ d e^{a} + \omega^{a}\,_{b} \wedge e^{b} = 0, \]
where metric compatibility implies \( \omega_{ab} = - \omega_{ba} \). The corresponding
curvature 2-forms follow from the second Cartan structure equations
\[ R^{a}\,_{b} = d \omega^{a}\,_{b} + \omega^{a}\,_{c} \wedge \omega^{c}\,_{b} \,. \]
The GB term in the Lagrangian density (\ref{lag}) can also be written in the alternative form
\[ R^{ab} \wedge R^{cd} \wedge \ast ( e_{a} \wedge e_{b} \wedge e_{c} \wedge e_{d} ) =
2 \, R_{ab} \wedge \ast R^{ab} - 4 \, P_{a} \wedge \ast P^{a} + {\cal R}_{(n)}^{2} \ast 1 \,, \]
where the Ricci 1-form \( P^{a} = \iota_{b} R^{ba} \) and the curvature scalar
\( {\cal R}_{(n)} = \iota_{a} \, \iota_{b} \, R^{ba} \) have been utilized via the interior
product operator \( \iota_{a} = \iota_{X^{a}} \) for which \( \iota_{X^{b}} (e^a) = \delta_b\,^{a} \).
Before moving on to the field equations, let us present our setting on the YM sector
as well. We take the YM potential ${\cal A}$ to be a Lie algebra $\mathfrak{g}$-valued 1-form.
The YM 2-form field follows from
\begin{equation}
{\cal F} = d {\cal A} + \frac{1}{2} \, [{\cal A}, {\cal A}] \label{ym2f}
\end{equation}
in the usual way and satisfies the Bianchi identity
\begin{equation}
D {\cal F} = d {\cal F} + [{\cal A}, {\cal F}] = 0 \,. \label{bian}
\end{equation}
The field equations read
\begin{eqnarray}
\frac{1}{2} \, R^{ab} \wedge \ast ( e_{a} \wedge e_{b} \wedge e_{c} )
& = & - \frac{1}{4 \sigma^{2}} \tau_{c} [{\cal F}] - \Lambda \ast e_{c}
- \frac{\gamma}{4} \, R^{ab} \wedge R^{dg} \wedge \ast ( e_{a} \wedge
e_{b} \wedge e_{d} \wedge e_{g} \wedge e_{c} ) \,, \label{eingb} \\
D \ast {\cal F} & = & d \ast {\cal F} + [{\cal A}, \ast {\cal F} ] = 0 \,. \label{ymeq}
\end{eqnarray}
Here
\begin{equation}
\tau_{c} [{\cal F}] = 2 \, \mbox{Tr} \, \left( \iota_{c} {\cal F} \wedge \ast {\cal F}
- {\cal F} \wedge \iota_{c} \ast {\cal F} \right) \label{emten}
\end{equation}
is the corresponding stress-energy $(n-1)$-form for the gauge field ${\cal F}$.
\section{\label{eqns} The Ans{\"a}tze and equations for the fields}
Following \cite{yang} and \cite{gib}, we will consider solutions that have field
strengths only on an $(n-2)$-sphere. [Namely, there will be no radial components.
In fact, as explained in \cite{gib}, when radial components are introduced, one
usually gets a different class of (numerical) solutions such as the ones obtained by
Bartnik-McKinnon in four dimensions \cite{bm}.] This naturally leads to the choice of
the gauge group $G$ to be $SO(n-2)$ (for $n \geq 4$) and the Ans{\"a}tze for the
metric and gauge potential follow accordingly. Let us decompose the local coordinates
for the spacetime as
\[ x^{M} = \left\{ x^{0} \equiv t, \; x^{n} \equiv r, \; x^{i} \;\; \mbox{where}
\;\; i = 1, 2, \dots, (n-2) \right\} \,. \]
We think of $x^{i}$ as a parameterization of the local coordinates on an
$(n-2)$-sphere whose radius equals $\rho$, i.e. we take \( \rho^{2} = x_{i} x^{i} \),
and consider the spacetime metric to be in the form \footnote{Note that the change of
variable $\chi = \rho/(1 + \rho^{2}/4)$ transforms the metric (\ref{met}) to the
following equivalent form:
\[ ds^{2} = - f^{2}(r) \, dt^{2} + u^{2}(r) \, dr^{2} + g^{2}(r) \,
\left( \frac{d \chi^{2}}{1 - \chi^{2}} + \chi^{2} \, d \Omega^{2}_{n-3} \right) \,, \]
where $d\Omega^{2}_{n-3}$ denotes the metric on the unit $(n-3)$-sphere.}
\begin{equation}
ds^{2} = - f^{2}(r) \, dt^{2} + u^{2}(r) \, dr^{2} + g^{2}(r) \,
\sum_{i=1}^{n-2} \frac{dx_{i} \, dx^{i}}{(1 + \rho^{2}/4)^{2}} \;. \label{met}
\end{equation}
We choose the coframe 1-forms for the metric (\ref{met}) as
\begin{equation}
e^{0} = f(r) \, dt, \quad e^{n} = u(r) \, dr, \quad
e^{i} = g(r) \, \frac{dx^{i}}{(1 + \rho^{2}/4)} \,,
\;\; i = 1, 2, \dots, (n-2) \,. \label{cof}
\end{equation}
Levi-Civita connection 1-forms follow easily from the first Cartan structure equations as
\begin{equation}
\omega^{0}\,_{i} = 0, \quad \omega^{i}\,_{j} = \frac{1}{2 g} (x^{i} e^{j} - x^{j} e^{i}), \quad
\omega^{0}\,_{n} = \frac{f^{\prime}}{f u} \, e^{0}, \quad
\omega^{i}\,_{n} = \frac{g^{\prime}}{u g} \, e^{i}, \label{con1f}
\end{equation}
where prime denotes derivative with respect to $r$. The curvature 2-forms that follow from
these read
\begin{equation}
R^{0n} = B \, e^{0} \wedge e^{n}, \quad R^{ij} = A \, e^{i} \wedge e^{j}, \quad
R^{0i} = C \, e^{0} \wedge e^{i}, \quad R^{in} = G \, e^{n} \wedge e^{i}, \label{cur2f}
\end{equation}
where we have used
\begin{equation}
A = \frac{1}{g^{2}} \Big( 1 - \big( \frac{g^{\prime}}{u} \big)^{2} \Big), \quad
B = -\frac{1}{f u} \Big( \frac{f^{\prime}}{u} \Big)^{\prime}, \quad
C = -\frac{f^{\prime} g^{\prime}}{u^{2} f g}, \quad
G = \frac{1}{ug} \Big( \frac{g^{\prime}}{u} \Big)^{\prime}. \label{abcg}
\end{equation}
As for the YM potential 1-form, we employ the ansatz
\begin{equation}
{\cal A} = \frac{1}{2} \, \Sigma_{ij} \, \frac{x^{i} dx^{j} - x^{j} dx^{i}}{(1 + \rho^{2}/4)}
\,, \label{gpot}
\end{equation}
where the matrices $\Sigma_{ij}$ denote the generators of the gauge group $SO(n-2)$
in the fundamental representation. Specifically, we choose them as
\begin{equation}
\Sigma_{ij}^{\alpha\beta} = \delta_{i}^{\alpha} \, \delta_{j}^{\beta}
- \delta_{j}^{\alpha} \, \delta_{i}^{\beta} \,, \label{sig}
\end{equation}
with \( 1 \leq \alpha < \beta \leq n-2 \). This choice leads to the $\mathfrak{so}(n-2)$
commutation relations
\begin{equation}
[ \Sigma_{ij}, \Sigma_{k \ell} ] = 2 \, ( \delta_{\ell [ i} \, \Sigma_{j]k}
- \delta_{k [ i} \, \Sigma_{j] \ell} ) \,, \label{sigal}
\end{equation}
so that one obtains via (\ref{ym2f}) the YM 2-form field strength to be
\begin{equation}
{\cal F} = \frac{1}{2} \, \Sigma_{ij} \, \frac{dx^{i} \wedge dx^{j}}{(1 + \rho^{2}/4)^{2}}
= \frac{1}{2 g^{2}} \, \Sigma_{ij} \, e^{i} \wedge e^{j} \,. \label{fcal}
\end{equation}
It is not hard to show that ${\cal F}$ satisfies (\ref{bian}) and (\ref{ymeq}) thanks to
(\ref{sigal}). Our choice (\ref{sig}) also leads to
\begin{equation}
\mbox{Tr} \, (\Sigma_{ik} \, \Sigma_{kj}) = 2(n-3) \, \delta_{ij} \quad \mbox{and}
\quad \sum_{i<j} \, \mbox{Tr} \, (\Sigma_{ij} \, \Sigma_{ij}) = -(n-2)(n-3)
\,. \label{sigcon}
\end{equation}
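The representation (\ref{sig}) is simple enough to verify numerically. The following sketch (a consistency check, not part of the derivation; the choice $n=8$ is an illustrative assumption) confirms the commutation relations (\ref{sigal}) and the trace identities (\ref{sigcon}):

```python
import numpy as np

n = 8                       # sample even spacetime dimension; gauge group SO(n-2)
N = n - 2

def Sigma(i, j):
    """(Sigma_ij)^{ab} = delta_i^a delta_j^b - delta_j^a delta_i^b (vanishes for i = j)."""
    S = np.zeros((N, N))
    S[i, j] += 1.0
    S[j, i] -= 1.0
    return S

# Commutators: [S_ij, S_kl] = d_li S_jk - d_lj S_ik - d_ki S_jl + d_kj S_il
d = np.eye(N)
for i in range(N):
    for j in range(N):
        for k in range(N):
            for l in range(N):
                lhs = Sigma(i, j) @ Sigma(k, l) - Sigma(k, l) @ Sigma(i, j)
                rhs = (d[l, i] * Sigma(j, k) - d[l, j] * Sigma(i, k)
                       - d[k, i] * Sigma(j, l) + d[k, j] * Sigma(i, l))
                assert np.allclose(lhs, rhs)

# Trace identities (sigcon), checked for i = j = 0 and for the full pair sum
tr1 = sum(np.trace(Sigma(0, k) @ Sigma(k, 0)) for k in range(N))
tr2 = sum(np.trace(Sigma(i, j) @ Sigma(i, j)) for i in range(N) for j in range(i + 1, N))
print(tr1, 2 * (n - 3))
print(tr2, -(n - 2) * (n - 3))
```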
Note that (\ref{gpot}), and thus (\ref{fcal}), satisfy the flat space YM equations as
well. Therefore, before moving onto the gravitational field equations, we want to make
a digression and consider Yang's problem reviewed in section \ref{mono} for $n=6$ with
gravitation still turned off. This time we want to replace the $SU(2)$ gauge group by
$SO(4) \simeq (SU(2) \times SU(2))/\mathbb{Z}_{2}$. Following the discussion above, the
corresponding bundle is $P(S^{4},SO(4))$ and the relevant homotopy group
$\pi_{3}(SO(4))$ equals $\mathbb{Z} \oplus \mathbb{Z}$, therefore one may be inclined to think
that \emph{if} there are solutions, their charges should be labelled by two independent
integers. However, this is not the whole story since the gauge fields do have to satisfy
the Euclidean YM equations as well. Using the 2-form field strength (\ref{fcal}) (with $g=1$)
for $n=6$, if one naively calculates the charge as before using the analogous expression, one
immediately finds
\[ \int_{S^{4}} \, \mbox{Tr} \, ({\cal F} \wedge {\cal F}) = \int_{S^{4}} ch_{2}({\cal F}) = 0 \,. \]
Nevertheless, one is saved by the topological quantity that takes the place of the charge
which turns out to be the Euler characteristic given by \cite{hp, naka}
\[ \chi(S^{4}) = \frac{1}{32 \pi^{2}} \, \int_{S^{4}} \, \epsilon_{\alpha\beta\gamma\delta} \,
{\cal F}^{\alpha\beta} \wedge {\cal F}^{\gamma\delta} = \frac{1}{128 \pi^{2}} \,
\int_{S^{4}} \, \epsilon_{\alpha\beta\gamma\delta} \, \Sigma_{ij}^{\alpha\beta} \,
\Sigma_{kl}^{\gamma\delta} \, \epsilon^{ijkl} \, \hat{\ast} 1_{(4)} = 2 \,, \]
where $\hat{\ast} 1_{(4)}$ denotes the volume element of the $4$-sphere. We remark that
for $n \geq 6$ and $n$ even with the gauge group $SO(n-2)$, a similar argument goes through
analogously. Namely
\[ \int_{S^{n-2}} \, \mbox{Tr} \, {\cal F}^{(n-2)/2} = \int_{S^{n-2}} ch_{(n-2)/2}({\cal F}) = 0 \,, \]
and the Euler characteristic reads \( \chi(S^{n-2}) = 2 \). In fact, for generic $n$
\[ \chi(S^{n-2}) = \left\{
\begin{array}{ll}
0 \,, & n \; \mbox{is odd} \\
2 \,, & n \; \mbox{is even}
\end{array} \right. \,, \]
and since the Euler characteristic vanishes for any odd-dimensional manifold, one is urged to set
$n$ even. Thus from now on, we always take $n \geq 6$ and even. The solutions thus obtained
are what we mean by flat-space Yang monopoles in higher (even) dimensions.
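The value $\chi(S^{4})=2$ can also be checked by brute force: with the generators (\ref{sig}) the $\epsilon$-contraction in the integrand is a constant (equal to $96$), so the integral is just that constant times $\mbox{Vol}(S^{4})=8\pi^{2}/3$, divided by $128\pi^{2}$. A short numerical verification (illustrative only):

```python
import itertools
import math
import numpy as np

# 4-index Levi-Civita symbol
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    inv = sum(1 for a in range(4) for b in range(a + 1, 4) if perm[a] > perm[b])
    eps[perm] = (-1) ** inv            # parity from the number of inversions

def Sigma(i, j):
    """SO(4) generator in the fundamental representation (zero for i = j)."""
    S = np.zeros((4, 4))
    S[i, j] += 1.0
    S[j, i] -= 1.0
    return S

# Constant integrand: eps_{abcd} Sigma_ij^{ab} Sigma_kl^{cd} eps^{ijkl}
I = sum(eps[a, b, c, d] * Sigma(i, j)[a, b] * Sigma(k, l)[c, d] * eps[i, j, k, l]
        for a, b, c, d, i, j, k, l in itertools.product(range(4), repeat=8))

vol_S4 = 8.0 * math.pi**2 / 3.0
chi = I * vol_S4 / (128.0 * math.pi**2)
print(I, chi)                          # contraction = 96, so chi = 2
```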
Finally, turning on gravity, the use of (\ref{sigcon}) in (\ref{eingb}) leads to the following
system of coupled ordinary differential equations:
\begin{eqnarray}
B + (n-3) \Big( \frac{n-4}{2} A + C - G \Big) & = & \frac{(n-2)(n-3)}{4 \sigma^{2} g^{4}} - \Lambda \nonumber \\
& & - \tilde{\gamma} \Big( A B - 2 C G + A (n-5) \big( C - G + \frac{n-6}{4} A \big) \Big) \,, \label{gb1} \\
(n-2) \Big( \frac{n-3}{2} A + C \Big) & = & - \frac{(n-2)(n-3)}{4 \sigma^{2} g^{4}} - \Lambda
- (n-2) \tilde{\gamma} A \Big( \frac{n-5}{4} A + C \Big) \,, \label{gb2} \\
(n-2) \Big( \frac{n-3}{2} A - G \Big) & = & - \frac{(n-2)(n-3)}{4 \sigma^{2} g^{4}} - \Lambda
- (n-2) \tilde{\gamma} A \Big( \frac{n-5}{4} A - G \Big) \,, \label{gb3}
\end{eqnarray}
where we have defined and used $\tilde{\gamma} = (n-3)(n-4) \gamma$.
\section{\label{solns} The solutions and their properties}
Setting \( u(r) =1/f(r) \) in (\ref{abcg}), one finds that (\ref{gb2}) and (\ref{gb3}) yield
\( g^{\prime\prime} = 0 \), and this leads to two independent cases: Either \textbf{i)}
\( g(r) = g_{0} = \) constant or \textbf{ii)} \( g(r) = r \). It follows
that (\ref{gb1}), (\ref{gb2}) and (\ref{gb3}) admit two classes of solutions
corresponding to each case:
\textbf{i)} The first case leads to a cylindrical metric
\begin{equation}
ds^{2} = - f^{2}(r) \, dt^{2} + \frac{dr^{2}}{f^{2}(r)} + g_{0}^{2} \,
\sum_{i=1}^{n-2} \frac{dx_{i} \, dx^{i}}{(1 + \rho^{2}/4)^{2}} \,, \label{2ndsol}
\end{equation}
where
\( f^{2}(r) = C_{0} \, r^{2} + C_{1} \, r + C_{2} \,. \)
Here $C_1$ and $C_2$ are integration constants, and $C_{0}$ is given by
\[ C_{0} = - \frac{1}{g_{0}^{2} + \tilde{\gamma}} \, \left( \frac{(n-2)(n-3)}{2 \sigma^{2} g_{0}^{2}}
+ \frac{n(n-3)}{4} + \frac{(n-5)\tilde{\gamma}}{g_{0}^{2}} \right) \,. \]
Note that the metric (\ref{2ndsol}) is conformally flat when $C_{0} g_{0}^{2} = 1$, which
was also observed in \cite{halil}. We will not be interested in this solution.
\textbf{ii)} The second case is definitely more interesting and leads to the cosmological
Einstein-GB Yang-monopole type solutions
\begin{equation}
ds^{2} = - f^{2}(r) \, dt^{2} + \frac{dr^{2}}{f^{2}(r)} + r^{2} \,
\sum_{i=1}^{n-2} \frac{dx_{i} \, dx^{i}}{(1 + \rho^{2}/4)^{2}} \,, \label{1stsol}
\end{equation}
where
\begin{equation}
f^{2}(r) = 1 + \frac{r^{2}}{\tilde{\gamma}} \left( 1 \pm
\sqrt{ 1 - 4 \tilde{\gamma} \left( \frac{\Lambda}{(n-2)(n-1)} - M r^{1-n}
+ \frac{(n-3)}{4 \sigma^{2} (n-5)} r^{-4} \right) } \, \right) \label{fdef}
\end{equation}
Here, as we will see, the constant $M$ is related to the gravitational mass
of the solution.
Before we move on to studying the physical properties of this solution, we should note
that there is yet another, perhaps simpler, way of
obtaining the solutions (\ref{2ndsol}) and (\ref{1stsol}). It is based on inserting in
the action (\ref{act}) (and (\ref{lag})) the gauge fixed, static, spherically symmetric
metric (\ref{met}) with the corresponding YM field content calculated using (\ref{fcal})
and (\ref{sigcon}). This method was originally introduced by Weyl \cite{weyl} for obtaining
the exterior Schwarzschild solution of General Relativity, but was put on solid ground
much later in \cite{palais}. [See also \cite{dt} and \cite{dst} for some applications of
this technique to various theories of gravitation.] The method considerably simplifies
the labor involved in obtaining the relevant field equations. Moreover, one can also use
it to show that Birkhoff's theorem holds for the solution (\ref{2ndsol}): If the functions $f$ and
$u$ in the metric (\ref{met}) are also allowed to depend on the time coordinate $t$, the
Lagrangian density (\ref{lag}) turns out to be $t$-independent \cite{dt, df}. Thus
all spherically symmetric solutions are static in this model.
Let us look at various limits of this solution. For $\gamma \to 0$, we recover the
solutions presented in \cite{gib} by choosing the $-$ branch. When one takes
$\Lambda = 0$ and $\sigma \to \infty$ in (\ref{fdef}), one recovers the external
solutions of the Einstein-GB theory given in \cite{bode}. The branching of the solutions
with either a Schwarzschild \( f^{2}(r) = 1 - 2 M r^{3-n} \) or a Schwarzschild-AdS
\( f^{2}(r) = 1 + 2 M r^{3-n} + 2 r^{2}/\tilde{\gamma} \) type of asymptotics is
recovered. For both sign choices in $f^{2}(r)$, the gravitational energy is found to
be (up to some normalizations) proportional to $M$ by employing the energy
definition of \cite{dt1, dt2}.
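These two limits are easy to verify numerically. The sketch below (with illustrative values $n=6$, $M=1$, $\gamma=1/2$, which are assumptions) evaluates both branches of (\ref{fdef}) at large $r$ with $\Lambda=0$ and the YM term dropped ($\sigma \to \infty$), and compares them with the Schwarzschild and Schwarzschild-AdS forms:

```python
import math

def f2(r, branch, n=6, M=1.0, gam=0.5, Lam=0.0, sigma=None):
    """f^2(r) of (fdef); sigma=None drops the Yang-Mills term (sigma -> infinity)."""
    ym = 0.0 if sigma is None else (n - 3) / (4.0 * sigma**2 * (n - 5)) * r**-4
    gtl = (n - 3) * (n - 4) * gam      # tilde-gamma
    s = math.sqrt(1.0 - 4.0 * gtl * (Lam / ((n - 2) * (n - 1)) - M * r**(1 - n) + ym))
    return 1.0 + r**2 / gtl * (1.0 + branch * s)

n, M, gam = 6, 1.0, 0.5
gt = (n - 3) * (n - 4) * gam
for r in (10.0, 100.0):
    schw = 1.0 - 2.0 * M * r**(3 - n)              # Schwarzschild asymptotics
    sads = 1.0 + 2.0 * M * r**(3 - n) + 2.0 * r**2 / gt   # Schwarzschild-AdS asymptotics
    print(r, f2(r, -1), schw, f2(r, +1), sads)
```

The $-$ branch tracks $1 - 2Mr^{3-n}$ and the $+$ branch tracks $1 + 2Mr^{3-n} + 2r^{2}/\tilde{\gamma}$ to high accuracy already at these radii.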
Now we consider the singularity structure of our solution. It is clear that there
is a curvature singularity at $r=0$, which follows from
\( R_{ab} \wedge \ast R^{ab} = {\cal O}(r^{1-n}) \ast 1 \). There is an event horizon
at $r_{H}>0$ ($f^{2}(r_{H})=0$), depending on the choice of parameters. In the most
general case, this is a complicated analysis, but can be carried out along the lines
of \cite{tm}. For simplicity, we concentrate on $n=6$ (the case of the Yang monopole)
with $\Lambda=0$ and $\gamma \neq 0$ \footnote{The $\gamma = 0$ case was considered in
\cite{gib}.}. For this choice the location of the event horizon is given by the roots
of the equation
\[ r^{3} + 3 \Big( \frac{1}{2 \sigma^{2}} + \gamma \Big) r - 2 M = 0 \,, \]
which has a unique real root $r_{H}$ if
\[ \Big( \frac{1}{2 \sigma^{2}} + \gamma \Big)^{3} + M^{2} \geq 0 \,, \]
and moreover, that root is positive if $M>0$ and $\gamma>0$.
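For the asymptotically flat branch with $n=6$ and $\Lambda=0$, the condition $f^{2}(r_{H})=0$ reduces, after squaring, to the cubic $r^{3} + 3(1/(2\sigma^{2})+\gamma)\,r = 2M$, with the middle term linear in $r$. A numerical cross-check with the illustrative values $\gamma=1/2$, $\sigma=1$, $M=2$ (assumptions, not from the text), for which the root is $r_{H}=1$:

```python
import math
import numpy as np

gam, sigma, M = 0.5, 1.0, 2.0
gt = 6.0 * gam                       # tilde-gamma = (n-3)(n-4) gamma for n = 6

def f2(r):
    """Minus branch of f^2(r) from (fdef) for n = 6, Lambda = 0."""
    s = math.sqrt(1.0 + 4.0 * gt * (M / r**5 - 3.0 / (4.0 * sigma**2 * r**4)))
    return 1.0 + r**2 / gt * (1.0 - s)

# Horizon condition: r^3 + 3 (1/(2 sigma^2) + gamma) r - 2 M = 0
c = 1.0 / (2.0 * sigma**2) + gam
roots = np.roots([1.0, 0.0, 3.0 * c, -2.0 * M])
r_H = max(r.real for r in roots if abs(r.imag) < 1e-10)
print(r_H, f2(r_H))                  # the real root, at which f^2 vanishes
```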
Let us now compute the mass of this solution. Given a background Killing vector
$\bar{\xi}^{\mu}$, the corresponding conserved charges of the model (\ref{lag})
can be written as \cite{dt1, dt2}
\begin{equation}
Q^{\mu}(\bar{\xi}) = \frac{1}{4 \, \Omega_{n-2}} \,
\sqrt{1-\frac{4 \, \Lambda \, \tilde{\gamma}}{(n-1)(n-2)}} \,
\int d^{n-2} x \, \bar{\xi}_{\nu} \, {\cal G}^{\mu\nu}_{L} \,. \label{yuk}
\end{equation}
Note that all the information coming from the GB part is encoded in the coefficient.
The correct background to work with is the spacetime (\ref{1stsol}) with $M=0$ in
(\ref{fdef}), and of course, with the timelike Killing vector
\( \bar{\xi}^{\mu} = (-1, 0, \dots, 0) \) again. For convenience we also choose
the $-$ branch \footnote{One can also proceed with the $+$ branch, in which case
the spacetime is asymptotically AdS.}, which is asymptotically flat. One then finds
the total energy according to (\ref{yuk}) as
\[ E = \frac{1}{4 \Omega_{n-2}} \, \frac{2(n-2)M}{\sqrt{1 - \frac{4 \Lambda \tilde{\gamma}}{(n-1)(n-2)}}}
\, \sqrt{1 - \frac{4 \Lambda \tilde{\gamma}}{(n-1)(n-2)}} \, \Omega_{n-2} = \frac{(n-2)M}{2} \,, \]
which is finite.
\section{\label{conc} Conclusions}
We have shown that, contrary to the claim in \cite{gib}, the Yang monopole defined in even
dimensions has a finite mass once gravity is introduced. This has been achieved by employing
the method developed in \cite{ad, dt1, dt2} for which a proper choice of background is
essential. Specifically, we have shown that out of the three generic parameters $m, \mu$
and $\Lambda$ of the gravitating Yang monopole, the first one can be interpreted as a
\emph{mass} once the remaining two are allowed to constitute the background.
We have also extended the family of Yang-monopole type solutions by studying the cosmological
Einstein-GB-YM theory in higher even dimensions, and have shown that these solutions
have black hole singularities and event horizons for a proper choice of parameters.
Throughout this work, our discussion has been relying on $SO(n-2)$ gauge theory and
on static spherically symmetric $n$-dimensional metrics. If one abandons spherical
symmetry, one ends up with quite a nontrivial task of solving highly complicated
differential equations. For example there is no solution describing a \emph{rotating}
Yang monopole. As for the case of the (cosmological) Einstein-GB theory, the problem
is even harder: let alone a rotating Yang monopole, there are no known exact
rotating black hole solutions.
\begin{acknowledgments}
We would like to thank Y{\i}ld{\i}ray Ozan and Turgut {\"O}nder for useful
discussions. This work is partially supported by the Scientific and Technological
Research Council of Turkey (T{\"U}B\.{I}TAK). B.T. is also partially supported by
the Turkish Academy of Sciences (T{\"U}BA) and by the T{\"U}B\.{I}TAK Kariyer
Grant 104T177.
\end{acknowledgments}
\section{Conclusions}
The proposed three-point functions appear very useful.
They are seen to be strongly dominated by the lowest terms in the $1/z$
expansion. As a consequence, the three-point functions may well be applied
to fix the remaining two unknowns, $\zvhqet$ and $\zakhqet$, in the
static approximation non-perturbatively. We would recommend $\theta=0.5$,
although the one-loop study does not suggest that this choice is much superior to
$\theta=0$ or $\theta=1$.
At order $1/m$ the full system determining the 19 parameters has to be
considered. Three of these parameters come from the HQET action
\cite{stat:eichhill2}, two from the temporal components of the vector
and axial vector currents, respectively, while the spatial components
of the currents require the inclusion of a further six parameters each
\cite{Falk:1990de,Falk:1992fm,Neubert:1992tg}. A study of this system
in perturbation theory is presently being carried out by the ALPHA
collaboration.
We can also confirm that the new package {\texttt{pastor}}\ is very useful in
studying such problems in perturbation theory. This goes beyond
issues related to the regularization such as renormalization factors
or improvement coefficients. In fact, all results
presented here refer to the $z$-dependence in continuum perturbation theory,
since we were able to reliably take the continuum limit
$a/L\to0$. We have presented the results in the
lattice minimal subtraction scheme for the quark mass. They can
trivially be connected to the $\msbar$ scheme by using
\cite{pert:gabrielli}
$\mbar(L)=(1+0.122282\,\gbar^2) \times \mbar_\msbar(1/L) +
\rmO(\gbar^4)$.\\[1em]
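As an illustration of this conversion, the following minimal Python sketch implements the quoted one-loop relation (the mass and coupling values below are placeholders, not numbers used in this work):

```python
# Hedged sketch of the quoted one-loop relation between the lattice
# minimal subtraction mass mbar(L) and the MSbar mass at scale mu = 1/L.
# The inputs below are illustrative placeholders.
def mbar_lat(m_msbar, gbar_sq):
    """mbar(L) = (1 + 0.122282*gbar^2) * mbar_MSbar(1/L) + O(gbar^4)."""
    return (1.0 + 0.122282 * gbar_sq) * m_msbar

print(mbar_lat(1.0, 0.0))   # free theory: the two schemes coincide
print(mbar_lat(1.0, 2.0))
```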
{\noindent \bf Acknowledgements.} We want to thank Piotr Korcyl,
Michele della Morte and Hubert Simma for helpful discussions, Jochen
Heitger for a critical reading of our draft and the computing center
at DESY Zeuthen for support and CPU time on the PC farm. This work has
been partly funded by the Research Executive Agency (REA) of the
European Union under Grant Agreement number PITN-GA-2009-238353 (ITN
STRONGnet) and by the SFB/TR 9 of the Deutsche Forschungsgemeinschaft.
\section{The large mass limit of QCD: Heavy Quark Effective Theory
\label{s:hqet}}
\newcommand{\ensuremath{\mathcal{M}}}{\ensuremath{\mathcal{M}}}
\newcommand{\ensuremath{\mathcal{M}^\mathrm{QCD}}}{\ensuremath{\mathcal{M}^\mathrm{QCD}}}
We consider QCD with at least three flavors,
one of them
massive, $m_\beauty=m$, and the others massless,
in particular $m_\up=m_\down=0$. A pseudo-scalar state with the flavor
content $\beauty\bar\down$ is written $|P_{\beauty\bar\down}\,,\;L\rangle$,
with $L$ denoting a single external (kinematical) length scale. Analogously
a light pseudo-scalar state is $|P_{\up\bar\down}\,,\;L\rangle$ and
vector states are labelled with $V$ instead of $P$. We are interested
in matrix elements
\bes
\ensuremath{\mathcal{M}^\mathrm{QCD}}(L,\mbar) =
\langle X_{\up\bar\down},L | \hat J_\nu^{\up\beauty}(\vecx) |
X_{\beauty\bar\down},L\rangle\,,\quad
\label{e:matrixel}
\ees
of the QCD heavy-light current operators which correspond
to the classical field
\bes
J_\nu^{\up\beauty}(x) = Z_J\, \psibar_\up(x) \Gamma_\nu \psi_\beauty(x)\,.
\ees
In particular we consider the axial vector current, $J_\nu=A_\nu$,
with $\Gamma_\nu=\gamma_5\gamma_\nu$ and the vector current, $J_\nu=V_\nu$
with $\Gamma_\nu=\gamma_\nu$. In physical processes, $L$ is an
inverse momentum scale, but we will later use states
in a finite periodic $L\times L \times L$ volume. For the moment
the relevant point is that $L$ is the only scale apart from
$m$. Then there is a perturbative expansion
\bes
\ensuremath{\mathcal{M}^\mathrm{QCD}}(L,\mbar) = ({\ensuremath{\mathcal{M}^\mathrm{QCD}}})^{(0)}(z) + \gbar^2(L) ({\ensuremath{\mathcal{M}^\mathrm{QCD}}})^{(1)}(z)
+ \rmO(\gbar^4(L)) \,,\quad z=L\mbar \,.
\ees
We will specify the renormalization scheme for $\gbar,\mbar$
when it becomes relevant.
The renormalization factors $Z_J$ of the flavor currents are to be chosen
such that the currents satisfy the chiral
Ward identities\cite{curralgebra:MaMA,impr:pap4}. In the
large mass limit, $\mbar\to\infty$, $L$ fixed, the matrix elements $\ensuremath{\mathcal{M}^\mathrm{QCD}}$
are logarithmically divergent \cite{Shifman:1987sm,Politzer:1988wp},
\bes
({\ensuremath{\mathcal{M}^\mathrm{QCD}}})^{(1)}(z) \simas{z\to\infty}\; H^{(1)} - \gamma_0 \log(z) H^{(0)} \,,
\quad
\gamma_0=-1/(4\pi^2)\,,\quad z=L\mbar\,. \label{e:asym}
\ees
This limit of QCD is described by an effective field theory,
HQET.
Up to corrections of order $1/z$, it is
the static effective theory~\cite{stat:eichhill1} where
the b-field is replaced by a two-component static field,
\bes
\psi_\beauty(x) \to \heavy(x)=\frac12(1+\gamma_0)\heavy(x)\,,
\ees
with Lagrangian\footnote{We are
in the frame where $|X_{\beauty\bar\down}\rangle$ has spatial momentum zero
and HQET at zero velocity applies.},
\bes
\Lstat(x) = \heavyb(x) (\dmstat + D_0) \heavy(x) \,.
\ees
The mass counter term $\dmstat$ does not play a role in the following.
The static flavor currents are form-identical with the QCD ones, for example
$\Vstat(x) = \psibar_\up(x) \gamma_0 \heavy(x)$,
$\Akstat(x) = \psibar_\up(x) \gamma_5\gamma_k \heavy(x)$.
Chiral Ward identities fix the relative normalization of the static
vector and axial vector currents but not the overall normalization.
Furthermore space and time-components
are to be treated separately and the currents have an anomalous
dimension in the effective theory. Choosing the lattice regularization
we can in a first step define finite currents by renormalizing them
in the lattice minimal subtraction scheme. The renormalized currents
are then
\bes
(J_\mathrm{lat}^\mathrm{stat})_\nu(x;\mu) =
Z_\mathrm{lat}(\mu a, g_0)\,J^\mathrm{stat}_\nu(x) =
Z_\mathrm{lat}(\mu a, g_0)\,\psibar_\up(x) \Gamma_\nu \heavy(x)\,,
\ees
with a renormalization constant
\bes
Z_\mathrm{lat}(\mu a, g_0) = 1 - \gamma_0 \log(a\mu) g_0^2 + \rmO(g_0^4)\,,
\ees
which is common to all currents (see \cite{LH:rainer} for a pedagogical
introduction). Their matrix elements
\bes
\ensuremath{\mathcal{M}}^\mathrm{stat}_{J_\nu}(L,\mu) = Z_\mathrm{lat}(\mu a, g_0)
\langle X_{\up\bar\down} | \hat J^\mathrm{stat}_\nu(\vecx) |
X_{\beauty\bar\down}\rangle_\mathrm{stat}\,,
\ees
are then finite. When we set $\mu=\mbar$, they are equal to the
corresponding QCD matrix elements
up to higher order terms in $1/m$,
\bes
\ensuremath{\mathcal{M}^\mathrm{QCD}}_{J_\nu}(L, \mbar) = C_{J_\nu}^\mathrm{match}(\gbar^2(\mbar))\,
\ensuremath{\mathcal{M}}^\mathrm{stat}_{J_\nu}(L,\mbar) + \rmO(1/\mbar) \,,
\label{e:match}
\ees
and up to the finite renormalization factor
\bes
C_{J_\nu}^\mathrm{match}(g^2) = 1+ B_{J_\nu}g^2 + \rmO(g^4) \,.
\ees
The one-loop coefficients are
\bes
B_{A_0} &=& -0.137(1)\,, \label{e:Bastat}
\\
B_{V_0} - B_{A_0} &=& 0.0521(1) = B_{V_k} - B_{A_k}\,,\label{e:Zva}
\\
B_{A_k} - B_{V_0} &=& -0.016900\,. \label{e:diffB}
\ees
Here \eq{e:Bastat}, due to \cite{BorrPitt,zastat:pap2}, and
\eq{e:Zva}, due to \cite{zvstat:filippo}, depend on the
lattice regularisation. They are given
for the Eichten-Hill lattice action for the
static quark, the $\rmO(a)$-improved Wilson action for the light quarks
and the plaquette gauge action. We note that \eq{e:Zva} follows
from requiring a chiral Ward identity. On the other hand the bare currents
$V_0$ and $A_k$ are related by the spin symmetry of the static effective
theory which is exact in lattice regularization. The difference,
\eq{e:diffB}, is therefore known very precisely from continuum perturbation
theory~\cite{BroadhGrozin2}. Of course the renormalization of the
fields and therefore in particular $B_{J_\nu}$ are independent of the states
in \eq{e:matrixel}.
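For orientation, the three relations \eq{e:Bastat}--\eq{e:diffB} determine all four one-loop coefficients; the following small arithmetic sketch combines them (central values only, with the quoted uncertainties ignored):

```python
# Hedged arithmetic sketch: solving the three relations quoted above
# for the four one-loop matching coefficients B_{J_nu}.
B_A0 = -0.137                 # B_{A_0}
d_VA = 0.0521                 # B_{V_0} - B_{A_0} = B_{V_k} - B_{A_k}
d_AV = -0.016900              # B_{A_k} - B_{V_0}

B_V0 = B_A0 + d_VA
B_Ak = B_V0 + d_AV
B_Vk = B_Ak + d_VA
print(B_V0, B_Ak, B_Vk)
```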
\section{Introduction}
B meson decays are an excellent source of information
for constraining physics beyond the Standard Model.
Precision, based on solid theory and advanced experiments,
is becoming increasingly important, since we know that
effects due to fields which are not present in the Standard Model
are small. Next to leptonic decays, exclusive semileptonic
decays are the easiest to treat in theory. Take for example the decay
$B \to \pi l\nu$ which is relevant for a determination
of $V_{\rm ub}$. Theory only needs to predict two form factors
(in practice a single one dominates) from non-perturbative
QCD. This is a strong motivation to extend the HQET programme of
the ALPHA-collaboration\cite{hqet:pap1,hqet:first1,hqet:first2,hqet:first3,
zastat:nf2,hqet:nf2:1} to include matrix elements of all components
of the weak heavy-light currents. And it is a significant step
beyond what has been achieved so far, where only the
HQET action and the time-component of the axial current were
determined non-perturbatively\cite{hqet:first1,hqet:nf2:1}.
Instead of the previous five we now need 19
parameters in order to have the effective theory defined non-perturbatively
including all $1/m$ terms, namely all terms of mass dimension
five in the action and dimension four in the currents.
Therefore, 19 matching conditions
are needed. It is important to choose them well. Each matching condition
simply consists of a matching observable $\Phi_i$
which is evaluated in QCD and in
HQET --- in the latter theory including the terms of order $1/m$
and no more.
Setting $\Phiqcd_i=\Phihqet_i$ determines (in fact defines)
the parameters in HQET. What does it mean to choose the
matching observables well? Ideally we would like each one of
them to be sensitive
to a single parameter in HQET; in practice we would like them to receive only small
contributions from terms of order $1/m^2$ in the effective theory.
If such contributions from $\rmO(1/m^2)$ terms are unnaturally large, they
affect the determined parameters and in turn induce unnaturally large $1/m^2$
terms in the observables that one wants to determine from HQET after
the matching has been carried out.
It is thus better to choose matching observables which in QCD are strongly
dominated by the terms of order $m^0$ and $m^{-1}$. Since the ALPHA strategy
consists of matching in a finite volume with \SF\ boundary conditions,
the size of different terms in the expansion is given in terms of
$z^{-n}=(Lm)^{-n}$ with $L$ the linear extent of the finite volume.
Of course, in the whole process, the most important terms are
those which appear at order $m^0$, the static terms. They are simply
dominating numerically.
It is thus of importance
to make sure that those matching observables which determine the
normalization of
the static currents are chosen well. Due to the breaking of relativistic
invariance we need to normalize the space and time components
of the currents separately. Thus we consider the axial vector current $A_0,A_k$
and the vector one $V_0,V_k$. Previously, the normalization factor
$\zahqet$ of $A_0$ has been studied in
detail~\cite{zastat:pap1,zastat:pap2,zastat:pap3,zastat:nf2,hqet:first1,hqet:nf2:1}. It
is defined through a \SF\ two-point function \cite{hqet:first1}. Since
in static approximation $A_0$ and $V_k$ are related through the spin symmetry
(see \sect{s:hqet} for a more precise statement), the natural condition
for $\zvkhqet$ follows from a simple spin rotation. However, $\zakhqet$
and $\zvhqet$ do not appear in the \SF\ two-point functions which have been
considered so far. We are thus led to either consider two-point functions
with more complicated kinematics or three-point functions.
In fact three-point functions appear naturally, since they are
also used to determine
the desired form factor for $B\to\pi l \nu$\cite{
Bailey:2008wp,Dalgic:2006dt,Liu:2011raa,Bahr:2012qs,Zhou:2012sna,Bouchard:2012tb,Kawanai:2012id
}.
The finite-volume matching thus uses a process
which is related to one of the desired infinite-volume matrix elements,
and there is even the
potential that higher-order $1/m$ terms cancel between the matching and the
physical matrix element. On the other hand, these functions have not been
considered before. We therefore evaluate them first in perturbation theory,
including the one-loop parts. We can then verify that they are indeed
dominated by the first two terms in the $1/m$-expansion.
The perturbative study is rather straightforward, since one of us has
developed ``{\texttt{pastor}}'', a tool to carry out one-loop computations of \SF\ correlation
functions in a largely automatic manner. Still, the scope of this paper
is not to consider the full
system of 19 unknowns, but to study the two numerically dominating
matching conditions for
$\zvhqet$ and $\zakhqet$.
The {\texttt{pastor}}\ software
package was first introduced in \cite{lat11:dirk} and the
publication of a more thorough description along with the source code
is planned for the near future.
\section{Matching conditions}
\subsection{Definitions of correlation functions}
As discussed in the introduction, in the ALPHA strategy
we use finite volume matrix elements
to define the matching of HQET and QCD. These matrix elements
are constructed in the \SF, where they are exactly related
to ratios of correlation functions, see \cite{hqet:test1} for more details.
Here we define those correlation functions and ratios which
are suitable for the matching of $V_0$ and $A_k$.
We choose the \SF\ with vanishing background field, denote the time-extent
by $T$ and the space-extent by $L$.
As a shorthand we introduce (non-local) boundary fields
\bes
\Obound{ij}{\Gamma} &=& {a^6\over L^3}
\sum_{\vecx,\vecy}
\zetabar_i(\vecx) \Gamma \zeta_j(\vecy)\,,
\quad
\Oboundp{ij}{\Gamma} = {a^6\over L^3}
\sum_{\vecx,\vecy}
\zetabarprime_i(\vecx) \Gamma \zeta'_j(\vecy)\,,
\ees
where the first one creates a meson with flavor content $i\bar j$
at time zero and the second annihilates a meson with flavor content
$j\bar i$ at final time $T$. The boundary quark fields
$\zeta_i,\zetabar_i$ are defined in \cite{impr:pap1}.
For simplicity and because more sophisticated choices seem
unnecessary, we take each flavor to have the same
periodicity phase $\theta$ in the boundary conditions
$\psi(x=L\hat k) = \rme^{i\theta} \psi(x)\,,\;
\psibar(x=L\hat k) = \rme^{-i\theta} \psibar(x)$~.
With these preliminaries we define boundary-to-boundary correlation functions
(remember $z=\mbar L$)
\bes
\fone^{\beauty\down}(\theta,z) &=& -\frac12
\langle \Oboundp{\down\beauty}{\gamma_5}
\Obound{\beauty\down}{\gamma_5} \rangle \,,
\\
\fone^{\up\down}(\theta) &=& -\frac12
\langle \Oboundp{\down\up}{\gamma_5}
\Obound{\up\down}{\gamma_5} \rangle \,,
\\
\kone^{\up\down}(\theta) &=& -\frac12
\langle \Oboundp{\down\up}{\gamma_k}
\Obound{\up\down}{\gamma_k} \rangle \,,
\ees
and three-point correlation functions with the desired
currents
\bes
\fv(x_0;\theta,z) &=& -\frac{L^3}2
\langle \Oboundp{\down\up}{\gamma_5}V_0^{\rm ub}(x)
\Obound{\beauty\down}{\gamma_5} \rangle\,,
\\
J^1_{\rm A_1}(x_0;\theta,z) &=& -\frac{L^3}2
\langle \Oboundp{\down\up}{\gamma_1} A_1^{\rm ub}(x)
\Obound{\beauty\down}{\gamma_5} \rangle\,.
\ees
\subsection{Possible matching observables for $V_0,A_k$}
The defined correlation functions are easily combined to form the desired
finite volume matrix elements,
\bes
L^3 \ensuremath{\mathcal{M}^\mathrm{QCD}}_{\rm V_0}(L,\mbar) &=&
- \zv\,{{\fv(T/2;\theta,z) \over
[\fone^{\up\down}(\theta) \fone^{\beauty\down}(\theta,z)]^{1/2}}}\,,
\label{e:phiv0}
\\
L^3 \ensuremath{\mathcal{M}^\mathrm{QCD}}_{\rm A_{k}}(L,\mbar) &=&
- \za\,{{J^1_{\rm A_1}(T/2;\theta,z) \over
[\kone^{\up\down}(\theta) \fone^{\beauty\down}(\theta,z)]^{1/2}}}\,,
\label{e:phiak}
\ees
where we set $ T=L$. As explained in \cite{hqet:test1} these ratios
are equal to the matrix elements \eq{e:matrixel} with the
finite volume states such as
$|P_{\beauty\bar\down}\,,\;L\rangle$, all normalized to unity.
We here neglect $\Oa$-improvement, but this is used in the perturbative
computations in \sect{s:pt}.
We now have good candidates for matching conditions which we write in
the form
\be
\Phiqcd_{J_\nu}(L, \mbar) = \Phi^\mathrm{stat}_{J_\nu}(L, \mbar)
+ \log \left\{
C_{J_\nu}^\mathrm{match}\left(\gbar^2(\mbar)\right)\right\} + \rmO(1/\mbar)\,,
\label{e:matchphi}
\ee
with $\Phiqcd_{J_\nu} \equiv \log\left(L^3\ensuremath{\mathcal{M}^\mathrm{QCD}}_{J_\nu}\right)$. In this
way the $\log(C_{J_\nu}^\mathrm{match})$-term appears additively, which is advantageous
once the $1/m$-terms are included \cite{hqet:first1}.
\subsection{Checking their quality}
Expanding \eq{e:matchphi} in the coupling we have
\bes
(\Phiqcd_{J_\nu})^{(0)}(z) &=&
(\Phi^\mathrm{stat}_{J_\nu})^{(0)} + \rmO(1/z) \,,
\\
(\Phiqcd_{J_\nu})^{(1)}(z) &=&
(\Phi^\mathrm{stat}_{J_\nu})^{(1)} + B_{J_\nu} - \gamma_0 \log(a \mbar)
+ \rmO(1/z)\,.
\ees
The one-loop part can be rewritten as in \eq{e:asym}, namely
\bes
G^{(1)}_{J_\nu}(z) &\equiv& (\Phiqcd_{J_\nu})^{(1)}(z)
+ \gamma_0 \log(z) = H^{(1)}_{J_\nu} + \rmO(1/z)
\label{e:oneloopsubtr}
\ees
with
\bes
H^{(0)}_{J_\nu}&=&(\Phi^\mathrm{stat}_{J_\nu})^{(0)} \,,
\\
H^{(1)}_{J_\nu}&=& (\Phi^\mathrm{stat}_{J_\nu})^{(1)} + B_{J_\nu} - \gamma_0 \log(a/L)
\,,
\ees
where we subtract the logarithmic singularity
in $z$ from $(\Phiqcd_{J_\nu})^{(1)}(z)$ such that $H^{(1)}_{J_\nu}$ represents the one-loop
coefficient of the matched static matrix element at renormalization scale $1/L$.
In this form the size of $1/m$ terms is directly
visible as deviations of the left hand side of \eq{e:oneloopsubtr} from
$H^{(1)}_{J_\nu}$. We want to investigate these deviations in the following
in order to ensure that \eq{e:phiv0} and \eq{e:phiak} are good observables
for the matching.
\section{One-loop computation \label{s:pt}}
All the required quantities ($\fv$, $J^1_{\rm A_1}$, \ensuremath{F_{\mathrm 1}^{\mathrm{ud}}}, \ensuremath{K_{\mathrm 1}^{\mathrm{ud}}}, and their static
counterparts) were calculated at the one-loop level using the {\texttt{pastor}}\
software package for automated lattice perturbation theory
calculations~\cite{lat11:dirk}. As input, {\texttt{pastor}}\ accepts a rather general class of
lattice actions and observables defined in the Schr\"odinger
functional. It will then automatically generate computer programs for
the evaluation of all contributions of the observables under
investigation up to one-loop order including improvement- and
counter-terms. We did implement full $\Oa$-improvement, including
the terms proportional to $a\mq$ not written in \eq{e:phiv0} and
\eq{e:phiak}.
For the quantities in QCD, we choose lattice resolutions of $L/a$ up
to 40, while for the HQET counterparts lower resolutions up to $L/a =
30$ are sufficient to obtain reliable continuum extrapolations,
cf.~\fig{fig:cont_etr}. To determine the continuum limits, we
employ the method described in \cite{pert:2loop_fin} using the
implementation provided by {\texttt{pastor}}. We choose $\theta \in \{0, 0.5, 1.0\}$
and $z \in\{ 4, 6, 8, 10\}$.
\begin{figure}[hb]
\centering
\includegraphics{plots/continuum_extrapolation.pdf}
\caption{Continuum extrapolation of \ensuremath{H_{\mathrm{V_0}}}, \ensuremath{G_{\mathrm{V_0}}}
at one-loop level, $\theta = 0.5$. The round-off errors on the
data points at finite $L/a$ and the uncertainty of the continuum
extrapolation for the static point are much smaller than the
symbol size.
}
\label{fig:cont_etr}
\end{figure}
We employ the
mass-independent lattice minimal subtraction scheme \cite{impr:pap1}
in which the $\Oa$ improved renormalized mass at scale
$\mu=1/L$ is given by
\begin{equation}
\label{eq:3}
\mbar(L) = \zmlat(\ensuremath{{g}_0^2}, a/L) \, \mq \left[ 1 + a \,\bm(\ensuremath{{g}_0^2}) \,
\mq\right], \quad \mq = m_0 - \mc\,
\end{equation}
in terms of the bare mass of the lattice theory. At one-loop order
we have \cite{impr:pap5,pert:gabrielli}
\begin{align}
\bm(\ensuremath{{g}_0^2}) \;&= - 0.5 - 0.07217(2)\,C_F\, \ensuremath{{g}_0^2} + O(\ensuremath{{g}_0}^4),\\
\zmlat(\ensuremath{{g}_0^2}, a/L) \;&= 1 - \frac 1 {2\,\pi^2} \log (a/L)
\ensuremath{{g}_0^2} + O(\ensuremath{{g}_0}^4).
\end{align}
All calculations in {\texttt{pastor}}\ are performed with $z = \mbar(L) L$ as
input. It inverts \eq{eq:3} to obtain
$m_0 = m_0 \ord 0 + \ensuremath{{g}_0^2} m_0 \ord 1 + O(\ensuremath{{g}_0}^4)$ and calculates the series
\begin{multline}
\label{eq:4}
{\cal O} \left(m_0 \ord 0 + \ensuremath{{g}_0^2} m_0 \ord 1\right) = {\cal
O}\ord 0\left(m_0 \ord 0\right) \\+ \ensuremath{{g}_0^2} \left[ {\cal O}\ord
1\left(m_0 \ord 0\right) + m_0 \ord 1 \partial_{m_0} {\cal O}\ord
0\left(m_0 \ord 0\right) \right] + O\left(\ensuremath{{g}_0}^4\right)
\end{multline}
for a given observable ${\cal O}(m_0)$.
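The order-by-order inversion of \eq{eq:3} can be sketched numerically as follows (a hedged Python illustration; the input values are placeholders and the function names are ours, not {\texttt{pastor}}'s):

```python
import math

# Hedged numerical sketch of the inversion of Eq. (3): given the target
# dimensionless mass a*mbar, solve
#   a*mbar = Z_m^lat(g0^2) * a*mq * (1 + b_m(g0^2) * a*mq)
# order by order for a*mq = a*mq0 + g0^2 * a*mq1 + O(g0^4),
# with m0 = mq + m_c.  All numerical inputs are illustrative.
CF = 4.0 / 3.0

def invert_mass(a_mbar, log_aL):
    b0, b1 = -0.5, -0.07217 * CF          # b_m = b0 + b1*g0^2 + ...
    z1 = -log_aL / (2.0 * math.pi ** 2)   # Z_m^lat = 1 + z1*g0^2 + ...
    # O(g0^0): a_mbar = a_mq0*(1 + b0*a_mq0), a quadratic in a_mq0.
    a_mq0 = (-1.0 + math.sqrt(1.0 + 4.0 * b0 * a_mbar)) / (2.0 * b0)
    # O(g0^2): the g0^2 part of mbar(mq) must vanish.
    a_mq1 = -(z1 * a_mq0 * (1.0 + b0 * a_mq0) + b1 * a_mq0 ** 2) \
            / (1.0 + 2.0 * b0 * a_mq0)
    return a_mq0, a_mq1

a_mq0, a_mq1 = invert_mass(a_mbar=0.0125, log_aL=math.log(0.05))
```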
For the evaluation of the diagrams of a Schr\"odinger functional
observable, it is beneficial to work in a time-momentum
representation. Due to the periodic spatial boundary conditions one
does not have to perform a momentum-integration but a sum over a
finite set of allowed lattice momenta of size $(L/a)^3$. The round-off
errors introduced by the numerical evaluation of this sum are
estimated from the difference of \texttt{long double} precision and
\texttt{double} precision results for representative
parameters. Apart from this test we use \texttt{double} precision
since it is roughly a factor of three faster.
The execution time to evaluate the numerically most
challenging loop diagram at $L/a = 40$ was about
50 hours on a single core CPU (Nehalem).
\section{Results}
\begin{figure}[htb!]
\centering
\includegraphics{plots/gv0_tree.pdf}
\caption{$G^{(0)}_{V_0} \equiv \left(\Phiqcd_{V_0}\right)^{(0)}(z)$
in the continuum limit. Errors are much smaller
than the symbol size.}
\label{f:v0}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\includegraphics{plots/ga1_tree.pdf}
\end{center}
\caption[]{\label{f:a1}
$G^{(0)}_{A_1} \equiv \left(\Phiqcd_{A_1}\right)^{(0)}(z)$
in the continuum limit. Errors are much smaller
than the symbol size.
}
\end{figure}
\subsection{Tree-level}
We start the discussion of our results with the tree-level functions
$G^{(0)}_{\rm V_0}(z) \equiv (\Phiqcd_{\rm V_0})^{(0)}(z)$ and
$G^{(0)}_{\rm A_1}(z) \equiv (\Phiqcd_{\rm A_k})^{(0)}(z)$.
Together with the static values $H^{(0)}_{J_\nu}$ they are displayed in
\fig{f:v0} and \fig{f:a1} for three different values
of $\theta$. Curves are fits of the form
$H^{(0)}(1 + h_1/z +h_2/z^2)$, fitted to the data with weights
$w(z)=1/z^3$. The fits are thus dominated by the results at large
$z$. The coefficients $h_i$, listed for the
different cases in \tab{t:treelevelzexp}, are small. For all considered values
of $\theta$ the $1/m$-expansion is
well behaved and we can also be confident that the fitted coefficients
are close to the true Taylor coefficients. Obviously, from the point of view
of tree-level, one would prefer $\theta=0$ where
$G^{(0)}_{J_\nu}(z) = H^{(0)}_{J_\nu}$ holds exactly.
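The weighted fit described above can be sketched as follows (a minimal Python/numpy illustration on synthetic data; the static value and coefficients are placeholders, not the paper's results):

```python
import numpy as np

# Hedged sketch of the tree-level fit H0*(1 + h1/z + h2/z^2) with
# weights w(z) = 1/z^3, as described in the text.  With H0 fixed to
# the static value, the model is linear in (h1, h2), so a weighted
# linear least-squares solve suffices.  All data below are synthetic.
def fit_h1_h2(z, G, H0):
    w = 1.0 / z ** 3
    A = np.column_stack([1.0 / z, 1.0 / z ** 2])   # design matrix
    y = G / H0 - 1.0
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coeffs                                   # (h1, h2)

z = np.array([4.0, 6.0, 8.0, 10.0])
H0 = 1.3                                  # placeholder static value
G = H0 * (1.0 + 0.8 / z + 1.1 / z ** 2)   # synthetic data on the model
h1, h2 = fit_h1_h2(z, G, H0)
```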
\begin{table}[ht!]
\centering
\begin{tabular}{c | c c | c c | c c}
$\theta$ & \multicolumn{2}{c|}{0.0} & \multicolumn{2}{c|}{0.5} &
\multicolumn{2}{c}{1.0}\\\hline\hline
& $h_1$ & $h_2$ & $h_1$ & $h_2$ & $h_1$ & $h_2$\\\hline
$G_{\mathrm{A_k}}^{(0)}$& 0.00000 & 0.00000& 0.77621 & 1.11933& 1.05083 & 1.53061\\[1ex]
$G_{\mathrm{V_0}}^{(0)}$& 0.00000 & 0.00000& -2.30791 & 1.57043& -3.06017 & 3.11951\\[1ex]\hline\hline
& $h_1$ & $f_1$ & $h_1$ & $f_1$ & $h_1$ & $f_1$\\\hline
\multirow{2}{*}{$G_{\mathrm{A_k}}^{(1)}$}& 0.05245 & -0.00132& 0.12513 & 0.00139& 0.21893 & 0.01547\\
& 0.03099 & 0.01054& 0.10547 & 0.01225& 0.19391 & 0.02929\\\hline
\multirow{2}{*}{$G_{\mathrm{V_0}}^{(1)}$}& 0.15093 & -0.00923& 0.08803 & -0.00811& 0.04548 & -0.01340\\
& 0.12100 & 0.00692& 0.07042 & 0.00139& 0.05268 & -0.01728\\\hline
\end{tabular}
\caption{Fit coefficients for \ga and
\ensuremath{G_{\mathrm{V_0}}}. The upper row of fit coefficients for the one-loop results comes
from the fits omitting the data at $z = 4$.}
\label{t:treelevelzexp}
\end{table}
\subsection{One-loop}
We get more information at one-loop order. In order to have all
finite pieces defined, we need to specify the renormalization scheme
for the quark mass. As stated in \sect{s:pt},
we take $\mbar$ to be the renormalized mass
in the lattice minimal subtraction scheme at scale $\mu=1/L$.
The continuum limit is taken as described in the previous section.
The combination $G_{J_\nu} \ord 1 (z)$, \eq{e:oneloopsubtr}, is shown in \fig{f:v01lp}
and \fig{f:a11lp}. We perform a fit to the one loop data employing a
function of the form
\begin{equation}
\label{e:oneloopfit}
G_{J_\nu} \ord 1 (z) = H_{J_\nu}\ord 1 + h_1 /z + f_1 \log(z)/z,
\end{equation}
choosing in this case constant weights, as only a few data points are
available anyway. This fit is compared to one of the same form omitting
the data at $z = 4$. The fit parameters for the one-loop quantities in
\tab{t:treelevelzexp} are not expected to be accurate estimates for
the corresponding asymptotic expansion. The accuracy of the fits and
the smallness of the coefficient $f_1$, however, may be taken as an
indication that higher order terms in the $1/z$-expansion are not
very important for the considered range in $z$.
\begin{figure}[htb!]
\begin{center}
\includegraphics{plots/gv0_loop.pdf}
\end{center}
\caption[]{\label{f:v01lp}
$G^{(1)}_{\rm V_0}(z)$ in the continuum limit,
compared to the static result.
}
\end{figure}
\begin{figure}[htb!]
\begin{center}
\includegraphics{plots/ga1_loop.pdf}
\end{center}
\caption[]{\label{f:a11lp}
$G^{(1)}_{\rm A_1}(z)$ in the continuum limit,
compared to the static result.
}
\end{figure}
The size of $H^{(1)}$ is relevant for us only as a consistency check:
for all cases it is a little smaller than the expected magnitude $1/(4\pi)$
for a perturbatively accessible quantity. The interesting question is the
magnitude of $1/z$-terms as well
as curvature when $G_{J_\nu}$ are considered a function of $1/z$.
We observe that also at one-loop order
the $1/z$-terms in $\Phi_{J_\nu}$ remain small, but $\theta=0$
is no longer preferred. The choice $\theta=0.5$ appears to be a good compromise
between tree-level and one-loop. Take for illustration $\gbar^2=4$ and
$z \geq 10$
as it is typical in the non-perturbative application \cite{hqet:nf2:1}.
Then we roughly have a few per-mille $1/z$ correction at tree-level
and an $\approx 3\%$ correction at one-loop. This is very acceptable.
We thus have every reason to expect that the $1/z^2$ corrections, which are omitted
when HQET is treated non-perturbatively \cite{hqet:first3,lat11:patrick},
are negligible and indeed the curvatures seen in \fig{f:v01lp} and \fig{f:a11lp}
are small.
\section{Introduction and preliminaries}
In this paper groups definable in $o$-minimal and closely related structures are studied, partly for their own sake and partly as a ``testing ground" for general conjectures.
Given a $\emptyset$-definable group $G$ in a saturated structure ${\bar M}$, $G^{00}_{\emptyset}$ is the smallest subgroup of $G$ of bounded index which is type-definable over $\emptyset$, and $G^{000}_{\emptyset}$ is the smallest subgroup of $G$ of bounded index which is $Aut({\bar M})$-invariant. In $o$-minimal structures and more generally theories with $NIP$, these ``connected components'' remain unchanged after naming parameters and so are just referred to as $G^{00}$ and $G^{000}$. In any case $G^{00}_{\emptyset}$ and $G^{000}_{\emptyset}$ are ``definable group'' analogues of the groups of $KP$-strong automorphisms and Lascar strong automorphisms, respectively, of a saturated structure. The relationship between these definable group and automorphism group notions is explored in \cite{Gismatullin-Newelski}. Although examples were given in \cite{CLPZ} where the strong automorphism groups differ, until now no example was known where $G^{000}_{\emptyset} \neq G^{00}_{\emptyset}$. In this paper (Section 3) we give a ``natural'' example: $G$ is simply a saturated elementary extension of $\widetilde{SL(2,\R)}$ (the universal cover of $SL(2,\R)$) in the language of groups. $G$ is {\em not} actually definable in an $o$-minimal structure, but we give another closely related example which is. In any case the two-sorted structure consisting of $G$ and a principal homogeneous space for $G$ is now a (natural) example of a ``non $G$-compact'' structure (or theory), i.e. where the group of Lascar strong automorphisms is properly contained in the group of $KP$-strong automorphisms.
Another fruitful theme in recent years has been the generalization of stable group theory outside the stable context. The $o$-minimal case has been important and there is now a good understanding of ``definably compact'' groups from this point of view; for example they are definably amenable, ``generically stable for measure'', and $G$ is dominated by $G/G^{00}$. In the current paper we try to go beyond the definably compact setting, motivated partly by questions of Newelski and Petrykowski. In \cite{NIPI}, definable groups $G$ with ``finitely satisfiable generics'' (which include definably compact groups in $o$-minimal structures) were shown to be definably amenable by lifting the Haar measure on $G/G^{00}$ to a left invariant Keisler measure on $G$, making use of a global {\em generic type} $p$, whose stabilizer is $G^{00}$. We guess this encouraged Petrykowski to suggest that if a definable group $G$ (in any structure) has a global type whose stabilizer has ``bounded index'' then $G$ is definably amenable. In Section 4 we confirm this conjecture when $G$ is definable in an $o$-minimal structure, as well as raise questions about the nature of types with bounded orbit in the $o$-minimal and more generally $NIP$ environment.
In Section 2 of the paper we give a rather basic decomposition theorem (implicit in the literature) for groups in
$o$-minimal structures, which is useful for understanding the issues around definable amenability and bounded orbits, as well as $G^{00}$ and $G^{000}$ (although Section 3 can be more or less read independently of Section 2). We introduce and discuss the notion of $G$ having a ``good decomposition'' (Definition 2.7), and in fact the $o$-minimal examples where $G^{00} \neq G^{000}$ will also be examples where good decomposition fails, although good decomposition does hold for algebraic groups.
In a sequel to the current paper, \cite{Conversano-PillayII}, we will give a systematic account of $G^{00}$, $G^{000}$ as well as the quotient $G^{00}/G^{000}$, for groups $G$ in $o$-minimal structures. The decomposition theorem (2.6) as well as refinements of it will play a major role.
In general $T$ will denote a complete theory, $M$ an arbitrary model of $T$, and $G$ a group definable in $M$.
We sometimes work in a sufficiently saturated and homogeneous model ${\bar M}$ of $T$, in which case ``small'' or ``bounded'' essentially means of cardinality strictly less than the degree of saturation of ${\bar M}$, but we will make the meaning more precise later in the paper.
{\em Definability} usually means with parameters, and we say $A$-definable to mean definable with parameters from $A$ for $A$ a subset of $M$.
When we talk about $o$-minimal theories we will mean $o$-minimal expansions of the theory of real closed fields (and we leave it for later or to others to consider more general $o$-minimal contexts). In the $o$-minimal context, the important notion of definable compactness
was introduced by Peterzil and Steinhorn in \cite{PS}. For $X$ a definable subset of
$M^{n}$, definable compactness of $X$ amounts to $X$ being closed and bounded in $M^{n}$. In the more general case of $X$ being a {\em
definable manifold}, it means that for any definable function $f$ from $[0,1)$ to $X$, $\lim_{x \to 1}f(x)$ exists in $X$.
When $G$ is a definable group, $G$ can be equipped with a definable manifold structure such that multiplication and inversion are continuous \cite{Pillay-groups}. Definable compactness of a definable group $G$ is then meant with respect to this definable manifold structure. But, as we are working in an $o$-minimal expansion of a real closed field, any definable group manifold $G$ can be assumed to be a definable subset of some $M^{n}$, and so definable compactness of $G$ reduces to $G$ being closed and bounded.
{\em Definable connectedness} of $G$ is meant with respect to its definable manifold structure mentioned above. But it turns out that $G$ is definably connected in this sense if and only if $G$ has no proper definable subgroup of finite index (i.e. $G = G^{0}$). Any definable group $G$ is definably connected by finite, and so (in this $o$-minimal context) we will often assume that our definable groups are definably connected.
We will often use the well-known fact that any definably compact, definably connected, solvable normal definable subgroup $N$ of a definably connected group is central. This follows from Corollaries 5.3 and 5.4 of \cite{Peterzil-Starchenko}. We will also use the fact that if $N$ is normal and definable in $G$, then $G$ is definably compact if and only if $N$ and $G/N$ are definably compact (following from \cite{NIPI} and \cite{NIPII}).
In Section 4 of this paper we will make some references to ``stability-type" notions, $NIP$ theories, forking, etc. We generally refer the reader to \cite{NIPII} for the definitions, but make a few comments here. For ${\bar M}$ a saturated model of arbitrary theory $T$ and $G$ a group definable in ${\bar M}$, recall that $S_{G}({\bar M})$ denotes the space of complete types $p(x)$ over ${\bar M}$ such that $``x\in G" \in p$. $G$ (namely $G({\bar M})$) acts on $S_{G}({\bar M})$ on the left by $gp = tp(ga/{\bar M})$ where $a$ realizes $p$ in a bigger model. Slightly modifying Definition 5.1 from \cite{NIPII}, we will say that $p(x)\in S_{G}({\bar M})$ is {\em left $f$-generic} if there is a small model $M_{0}$ such that for any $g\in G({\bar M})$, $gp$ does not fork over $M_{0}$.
The second author was partly motivated by some e-mail discussions with Hrushovski and Newelski in the late summer of 2010. Thanks to both of them for the inspiration, and in particular to Hrushovski for allowing us to include (in Section 4) some observations that he made on definable amenability.
Many of the themes and results of this paper and the sequel appear in one form or another in the first author's doctoral thesis \cite{Conversano-thesis}, which is devoted to structural properties of groups definable in $o$-minimal structures (but does not explicitly discuss $G^{000}$).
In particular the $o$-minimal example where $G^{00} \neq G^{000}$ (Example 2.10/Theorem 3.3) appears in her thesis as an example of a definable group {\em without} a definable ``Levi decomposition".
In any case the first author would like to thank her advisor Alessandro Berarducci, as well as Ya'acov Peterzil for useful conversations.
\section{Decomposition theorems}
In this section $T$ is a complete $o$-minimal expansion of $RCF$, and we work in a model $M$ of $T$. $G$ will typically denote a definable, definably connected group, although we usually explicitly state definable connectedness. $K$ will denote the underlying real closed field of $M$.
We first aim towards a useful ``basic decomposition theorem", Proposition 2.6 below (which is easily extracted from results in the literature).
We begin by pointing out the existence, in every definable group, of a (unique) maximal normal definable torsion-free subgroup. As usual, for a positive integer $n$, an $n$-torsion element of $G$ is an element $x \in G$ such that $x^n = 1$, $1$ being the identity of the group (note that we are not assuming $G$ is commutative). We make use of results from \cite{Strzebonski}
connecting the existence of $n$-torsion elements with the $o$-minimal Euler characteristic of $G$.
Recall that if $\mathcal{P}$ is a cell decomposition of a definable set $X$, then the
{\em $o$-minimal Euler characteristic} $E(X)$ is the number of even-dimensional cells in $\mathcal{P}$ minus the number of odd-dimensional cells in $\mathcal{P}$. This does not depend on $\mathcal{P}$, and when $X$ is finite then $E(X) = |X|$. A definable torsion-free group will be definably connected (Corollary 2.4 of \cite{PeSta} but also follows from the proof of (ii) below).
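Purely for illustration (these computations are not needed in the sequel), two basic cases show how $E$ detects torsion. The circle group $SO_{2}(K)$ admits a cell decomposition into two points and two open arcs, while the additive group $(K,+)$ is a single $1$-dimensional cell:

```latex
\begin{align*}
E(SO_{2}(K)) &= \underbrace{2}_{\text{points}} \;-\; \underbrace{2}_{\text{open arcs}} \;=\; 0
  && \text{(consistent with $SO_{2}$ having torsion: $E \neq \pm 1$),}\\
E(K,+) &= \;-\underbrace{1}_{\text{one $1$-dimensional cell}} \;=\; -1
  && \text{(consistent with $(K,+)$ being torsion-free).}
\end{align*}
```

These agree with the criterion from \cite{Strzebonski} recalled in the proof of Proposition 2.1 below, that a definable group is torsion-free if and only if its Euler characteristic is $\pm 1$.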
\begin{Proposition} \label{unifree}
(i) $G$ is torsion-free if and only if $G$ is ``solvable with no definably compact parts" in the sense of \cite{Edmundo}, namely there are
definable subgroups $\{1\} = G_{0} < ... < G_{n} = G$ of $G$ such that for each $i<n$, $G_{i}$ is normal in $G_{i+1}$ and $G_{i+1}/G_{i}$ is $1$-dimensional and torsion-free. (In particular a torsion-free definable group is solvable.)
\newline
(ii) In every definable group $G$ there is a normal definable torsion-free subgroup which contains every
normal definable torsion-free subgroup of $G$. It is the unique normal definable torsion-free subgroup of $G$
of maximal dimension. We will refer to it as {\em the maximal normal definable torsion-free subgroup} of $G$, and note that it is invariant under all automorphisms of $(G,\cdot)$ which are definable in the ambient structure.
\end{Proposition}
\begin{proof}
(i) Right to left is obvious. Left to right follows (using induction) from Corollary 2.12 of \cite{PeSta} which states that if $G$ is torsion-free (and nontrivial) then there is a normal definable subgroup $H$ of $G$ such that $G/H$ is $1$-dimensional and torsion-free.
\newline
(ii)
We recall that for definable groups $K < G$,
\[
E(K) E(G/K) = E(G),
\]
and $G$ is torsion-free if and only if $E(G) = \pm 1$ (\cite{Strzebonski}). It follows that a quotient of torsion-free definable groups is still torsion-free (and hence torsion-free definable groups are definably connected).
Let $N$ be a normal definable torsion-free subgroup of $G$ of maximal dimension, and $H$ any normal definable torsion-free subgroup of $G$. We want to show that $H \subseteq N$.
We claim that $HN$ is a normal definable torsion-free subgroup of $G$: the definable group $H/(H \cap N)$ is torsion-free and it is definably isomorphic to $HN/N$. Thus
$E(HN) = E(N)E(HN/N) = \pm1$ and $HN$ is torsion-free.
But $N$ is of maximal dimension among the normal definable torsion-free subgroups of $G$, so
$\dim(HN) = \dim(N)$.
Since definable torsion-free groups are definably connected, $HN$ and $N$ are definably connected groups of the same dimension with $N \subseteq HN$, so $HN = N$ and hence $H \subseteq N$.
\end{proof}
Bearing in mind Proposition 2.1, the following proposition is easily deduced from Theorem 5.8 of \cite{Edmundo}, together with the fact that definably compact, definably connected, solvable definable groups are commutative:
\begin{Proposition} Let $G$ be a definable, solvable, definably connected group, and let $W$ be its maximal normal definable torsion-free subgroup. Then $G/W$ is definably compact and commutative.
\end{Proposition}
\vspace{2mm}
\noindent
Recall that a definable group $G$ is said to be {\em semisimple} if $G$ has no definable, normal, definably connected, solvable (or commutative), nontrivial subgroups. Then, clearly, for an arbitrary definable group $G$, we have the exact sequence
$$ 1 \to R \to G \to G/R \to 1 $$
\noindent
where $R$, the {\em solvable radical} of $G$ is the maximal definable, normal, solvable, definably connected subgroup of $G$, and $G/R$ is semisimple.
If $R$ is definably compact then it is central in $G$.
\begin{Definition} We call a definable group $G$, {\em definably almost simple}, if $G$ is noncommutative and has no infinite (equivalently nontrivial, definably connected) proper definable normal subgroup.
\end{Definition}
Note that if $G$ is definably almost simple, then $Z(G)$ is finite and $G/Z(G)$ is definably simple, and moreover $G$ is definably compact if and only if $G/Z(G)$ is definably compact.
\begin{Lemma} Let the definable group $G$ be semisimple and definably connected. Then there are definable, definably almost simple subgroups $H_{1},..,H_{t}$ of $G$ such that $G$ is the almost direct product of the $H_{i}$, namely there is a definable surjective homomorphism from $H_{1}\times ... \times H_{t}$ to $G$ with finite kernel.
\end{Lemma}
\begin{proof} Well known. By \cite{PPSI}, $G/Z(G)$ is the direct product of definably simple groups $B_{1},..,B_{t}$. Let $H_{i}$ be the definably connected component of the preimage of $B_{i}$ under the quotient map $G\to G/Z(G)$.
\end{proof}
\begin{Definition} Let $G$ be semisimple and definably connected. We say that $G$ has no definably compact part if in Lemma 2.4, no $H_{i}$ is definably compact.
\end{Definition}
\noindent
We can now observe:
\begin{Proposition} Let $G$ be a definable (definably connected) group. Then there is a definable, definably connected, normal subgroup $W$ of $G$, and a definable, definably connected normal subgroup $C$ of $G/W$, such that
\newline
(i) $W$ is torsion-free,
\newline
(ii) $C$ is definably compact, and
\newline
(iii) $(G/W)/C$ is semisimple with no definably compact part.
\newline
$W$ is the maximal normal definable torsion-free subgroup of $G$, and $C$ is the maximal normal definable, definably compact, definably connected subgroup of $G/W$.
\end{Proposition}
\begin{proof} Let $R$ be the solvable radical of $G$, and let $W$ be the maximal normal definable torsion-free subgroup of $R$ (given by Proposition 2.1). So $R/W$ is definably compact and commutative by 2.2. But let us note for now that since any definable torsion-free group is definably connected and solvable (\cite[2.11]{PeSta}), then $W$ coincides with the maximal normal definable torsion-free subgroup of $G$.
Now $R/W$ is the solvable radical of $G/W$ (and is also connected, definably compact, so in fact central in $G/W$), and $G/R$ is semisimple. Let us denote $G/R$ by $H$ for now, and $\pi$ the surjective homomorphism from $G/W$ to $H$. Let $H_{1},..,H_{t}$ be given for $H$ by Lemma 2.4, namely the $H_{i}$ are definable, definably almost simple and $H$ is their (almost direct) product. Let $C_{1}$ be the product of those $H_{i}$ which are definably compact, and $D_{1}$ the product of the rest. So $G/R = H$ is the almost direct product of the semisimple definable groups $C_{1}$ and $D_{1}$. Let $C = \pi^{-1}(C_{1})$. So $C$ is an extension of the definably compact connected group $C_{1}$ by the definably compact definably connected group $R/W$, hence is also definably compact and definably connected. Note that $C$ is normal in $G/W$, and the quotient $(G/W)/C$ is an image of $D_{1}$ (with finite kernel) so is semisimple with no definably compact parts.
\end{proof}
Let us fix notation for the data obtained in the proof above, so as to be able to refer to them in the future. $R$ denotes the solvable radical of $G$ and $W$ the maximal normal definable torsion-free subgroup of $G$ (equivalently of $R$).
\newline
$G/R$ is the semisimple part of $G$ which can be written uniquely as $C_{1}\cdot D_{1}$ (almost direct product) where $C_{1}$ is semisimple and definably compact and $D_{1}$ is semisimple with no definably compact parts (and everybody is definably connected).
\newline
We have the exact sequence
$$1 \to R/W \to G/W \to_{\pi} G/R = C_{1}\cdot D_{1} \to 1$$
and $C$ denotes $\pi^{-1}(C_{1})$ which is the maximal normal definable, definably connected, definably compact
subgroup of $G/W$, and we call it the {\em normal definably compact part} of $G$.
Finally $(G/W)/C$ is denoted $D$ and called the {\em semisimple with no definably compact parts} part of $G$.
Note that $R/W$ is the connected component of the centre of $C$ and
$$1 \to R/W \to C \to C_{1} \to 1$$ definably almost splits by results from \cite{HPP}.
\vspace{5mm}
\noindent
One natural question is whether there is a better decomposition theorem.
\begin{Definition} We will say that $G$ has a {\em good decomposition}, if, with above notation, the exact sequence
$1 \to C \to G/W \to D \to 1$ definably almost splits, namely $G/W$ can be written as $C\cdot D_{2}$ for some definable, definably connected, subgroup $D_{2}$ of $G/W$ which is semisimple with no definably compact parts (i.e. the map $D_{2} \to D$ is surjective with finite kernel).
\end{Definition}
\begin{Lemma} The following are equivalent:
\newline
(i) $G$ has a good decomposition.
\newline
(ii) $\pi^{-1}(D_{1})$ is an almost direct product of $R/W$ (the connected component of its centre) and a definable semisimple group (again necessarily without definably compact parts).
\end{Lemma}
\begin{proof}
This is clear, because $G/W$ will be the almost direct product of $C$ and some $D_{2}$ if and only if $\pi^{-1}(D_{1})$ is the almost direct product of $R/W$ and $D_{2}$.
\end{proof}
Hence the existence of good decompositions depends on the definable almost splitting of central extensions of semisimple groups without definably compact parts by definably compact groups.
\begin{Remark} $G$ has a good decomposition in either of the cases:
\newline
(i) $G$ is linear, namely a subgroup of some $GL(n,K)$ which is definable in $M$, or
\newline
(ii) $G$ is algebraic, namely of the form $H(K)^{0}$ for some algebraic group $H$ defined over $K$.
\end{Remark}
\begin{proof} In fact in both cases (i) and (ii), we point out that $G$ has a {\em definable Levi decomposition}, namely $G$ is an almost semidirect product of its solvable radical $R$ and a definable semisimple group, and this clearly implies that $G$ has a good decomposition. When $G$ is linear this is Theorem 4.5 of \cite{PPSIII}.
\newline
Suppose now that $H$ is a connected algebraic group defined over $K$, and $G = H(K)^{0}$. We have Chevalley's theorem for $H$ yielding the following exact sequence of connected algebraic groups defined over $K$:
\newline
$$ 1 \to L \to H \to_{f} A \to 1$$ where $L$ is linear and $A$ is an abelian variety. Then
$f(G)$ is a connected semialgebraic subgroup of $A(K)$ so is definably compact and commutative, and the semialgebraic connected component of the group of $K$-points of $L$ is a definably connected definable subgroup of $GL(n,K)$ for some $n$. Namely at the level now of definable, definably connected, groups in $M$, we have an exact sequence
$$ 1 \to R \to G \to_{f} B \to 1$$ where $R$ is linear, and $B$ is commutative (and definably compact). Again by \cite{PPSII}, $R$ is an almost semidirect product of a definably connected solvable group $R_{1}$ and a definable semisimple group $S$. Let $R'$ be the solvable radical of $G$ (as a definable group); note that $R_{1} \leq R'$. As $G/R'$ is semisimple, $R'$ must map onto $B$ under $f$, whereby $G$ is the almost semidirect product of $R'$ and $S$.
\end{proof}
\vspace{5mm}
\noindent
Finally in this section we give:
\begin{Example} There is a (Nash) group $G$ without a good decomposition. $T$ will be $RCF$, $M$ the standard model $(\R,+,\cdot)$, and $G$ a certain amalgamated central product of $SO_{2}(\R)$ with the universal cover of $SL_{2}(\R)$.
\end{Example}
\noindent
The model-theoretic setting is the structure $M = (\R,+,\cdot)$. Let $H$ be the definable group $SL_{2}(\R)$ consisting of $2$-by-$2$ matrices over $\R$ of determinant $1$.
Let $\tilde H = \widetilde{SL_{2}(\R)}$ be the universal cover of $H$. $\tilde H$ is a connected, simply connected Lie group and we have the exact sequence (of Lie groups) $$1 \to \Z \to \tilde H \to_{\pi} H\to 1$$ where $\Z$ is the discrete group $(\Z,+)$. $\tilde H$ is not definable in $M$, but we will make use of a certain description from section 8.1 of \cite{HPP} (see Theorem 8.5 there) of $\tilde H$ as a group definable in the $2$-sorted structure $((\Z,+),M)$, and this will be used again in the next section:
\begin{Fact} There is a $2$-cocycle $h:H\times H \to \Z$ with finite image which is moreover definable in $M$ (in the sense that for each $n\in Im(h)$, $\{(x,y)\in H\times H, h(x,y) = n\}$ is definable in $M$), and such that the set $\Z\times H$ with group structure $(t_{1},x_{1})*(t_{2},x_{2}) = (t_{1} + t_{2} + h(x_{1},x_{2}), x_{1}x_{2})$ and projection to the second coordinate, is isomorphic to the group $\tilde H$ with its projection $\pi$ to $H$.
\end{Fact}
Although not needed, let us say a few words of where the cocycle $h$ comes from, referring to \cite{HPP} for more details. The group $\tilde H$ is naturally ind-definable in $M$, namely as an increasing union $\bigcup_{i}X_{i}$ of definable sets with group operation and projection $\pi$ to $H$ piecewise definable. For some $i$, the restriction of $\pi$ to $X_{i}$ is surjective and as $M$ has Skolem functions there is a definable section $s:H \to X_{i}$ of $\pi|X_{i}$. Define $h$
on $H\times H$ by $h(x,y) = s(x)s(y)s(xy)^{-1}$. Then $h$ is as required.
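As a routine sanity check (a standard computation, not spelled out in \cite{HPP}), the map $h$ so defined is indeed a $2$-cocycle: since the values of $h$ lie in the central copy of $\Z$, the relation $s(x)s(y) = h(x,y)\,s(xy)$ together with associativity in $\tilde H$ gives

```latex
\bigl(s(x)s(y)\bigr)s(z) = h(x,y)\,h(xy,z)\,s(xyz),
\qquad
s(x)\bigl(s(y)s(z)\bigr) = h(y,z)\,h(x,yz)\,s(xyz),
```

hence $h(x,y) + h(xy,z) = h(y,z) + h(x,yz)$ (writing the central $\Z$ additively), which is precisely the identity needed for the operation $*$ on $\Z\times H$ in Fact 2.11 to be associative.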
\vspace{2mm}
\noindent
Let us now consider the circle group $SO_{2}(\R)$, for which we use additive notation. Let $g\in SO_{2}(\R)$ be an element of infinite order. Define a group operation $*$ on
$SO_{2}(\R) \times H$ by $(t_{1},x_{1})*(t_{2},x_{2}) = (t_{1} + t_{2} + h(x_{1},x_{2})g, x_{1}x_{2})$. Let $G$ be the resulting group, and note that $G$ is now definable (without parameters, taking $g$ algebraic) in $M$. Note that $\{(ng,x): n\in \Z, x\in H\}$ is a subgroup of $(G,*)$ isomorphic to $\tilde H$ (with again projection on second coordinate corresponding to $\pi:\tilde H \to H$). So identifying $\langle g \rangle$ with $\Z$, we have that
\newline
(i) $SO_{2}(\R)$ is central in $(G,*)$,
\newline
(ii) $G = SO_{2}(\R)\cdot{\tilde H}$,
\newline
(iii) $SO_{2}(\R) \cap {\tilde H} = \Z$,
\newline
and we have the exact sequence of definable, definably connected, groups in $M$,
$$1 \to SO_{2}(\R) \to G \to H \to 1$$ (where remember $H = SL_{2}(\R)$).
\newline
$H$ is of course definably almost simple and not (definably) compact, whereas $SO_{2}(\R)$ is (definably) compact and central in $G$. To show
that $G$ does not have a good decomposition it suffices to show that the exact sequence above does not definably almost split in $M$. In fact there is no (even abstract) subgroup $H_{1}$ of $G$ such that $SO_{2}(\R)\cap H_{1}$ is finite and $SO_{2}(\R)\cdot H_{1} = G$, for otherwise (as $SO_{2}(\R)$ is central in $G$), the commutator subgroup $[G,G]$ is contained in $H_{1}$ so has finite intersection with $SO_{2}(\R)$. But, using (ii) above and the fact that $\widetilde{SL_{2}(\R)}$ is perfect, $[G,G] = \tilde H$ and so has infinite intersection with $SO_{2}(\R)$, a contradiction.
We have completed the exposition of Example 2.10.
\vspace{5mm}
\noindent
In the next section an elaboration of the above analysis will show that, after passing to a saturated elementary extension, $G^{00} \neq G^{000}$.
\begin{Remark} A definably connected group $G$ with a good decomposition does not necessarily have a definable Levi decomposition.
\end{Remark}
\begin{proof}
If one replaces $SO_2(\R)$ with $(\R, +)$ in Example 2.10, then one obtains a group with a good decomposition ($G/W = SL_2(\R)$), but without a definable Levi decomposition (for the same reason as in Example 2.10).
\end{proof}
\section{$G^{00}$, $G^{000}$ and the examples}
We will first repeat the definitions and geneses of the various notions of ``connected components" of a definable group.
To begin with let $T$ be an arbitrary complete theory. We can identify a definable set with the formula $\phi(x)$ which defines it, or rather the functor taking $M$ to $\phi(M)$ from the category $Mod(T)$ (of models of $T$ with elementary embeddings) to $Set$ given by that formula. If the formula has parameters from a set $A$ in a given model of $T$, then the functor is from $Mod(Th(M,a)_{a\in A})$ to $Set$. Likewise for type-definable sets, and also hyperdefinable sets (a type-definable set quotiented by a type-definable equivalence relation). If $X$ is a type-definable set over $A\subseteq M$, then we sometimes identify $X$ with its interpretation in an $|A|^{+}$-saturated model ${\bar M}$ containing $M$. If $X$ is a type-definable (over $A$) set, defined by partial type $\Phi(x)$ and $E$ a type-definable (over $A$) equivalence relation on $X$ given by partial type $\Psi(x,y)$ then we say that $X/E$ is ``bounded" if $|\Phi(N)/\Psi(N)|$ is bounded as the model $N$ (containing $A$) varies. If $X/E$ is bounded it is not hard to see that $|\Phi(N)/\Psi(N)| \leq 2^{|T|+|A|}$ for all $N$, and if $N_{1}< N_{2}$ are $|A|^{+}$-saturated models containing $A$ then the natural embedding of $\Phi(N_{1})/\Psi(N_{1})$ in $\Phi(N_{2})/\Psi(N_{2})$ is a bijection. In fact, assuming $X/E$ bounded, for a fixed model $M$ containing $A$, and $N$ a saturated model containing $M$, the $E$-class of some $b\in X$ depends only on $tp(b/M)$, hence the map $X\to X/E$ factors through the space $S_{\Phi}(M)$ of complete types over $M$ extending $\Phi(x)$.
Equipped with the quotient topology (which we call the logic topology), $X/E$ is a compact Hausdorff space.
Now suppose that the equivalence relation $E$ on $X$ is given instead by a possibly infinite disjunction $\bigvee_{i} \Psi_{i}(x,y)$ of partial types over $A$ (i.e. working in a saturated model ${\bar M}$, $E$ is $Aut({\bar M}/A)$-invariant, or as we often just say $A$-invariant). The whole discussion above regarding boundedness of $E$ goes through in this more general case, including the fact that the map $X \to X/E$ factors through the type space $S_{\Phi}(M)$ (for $M$ any model containing $A$). However the quotient topology on $X/E$ is no longer Hausdorff, and it is probably better to view $X/E$ as an object of descriptive set theory or maybe even noncommutative geometry.
Let us first consider the case where $X$ is a sort of $T$. Then given any (small) set $A$ of parameters, there is a finest bounded type-definable over $A$ equivalence relation on $X$ which we call $E_{X,A,KP}$. Likewise there is a finest bounded $A$-invariant equivalence relation on $X$ which we call $E_{X,A,L}$. For $a\in X$, the $KP$-strong type of $a$ over $A$ is precisely the $E_{X,A,KP}$-class of $a$, and the Lascar strong type of $a$ over $A$ is precisely the $E_{X,A,L}$-class of $a$. There is also of course the usual strong type of $a$ over $A$, which is the $E_{X,A,Sh}$-class of $a$ where $E_{X,A,Sh}$ is the intersection of all $A$-definable equivalence relations on $X$ with finitely many classes. In stable theories all these strong types coincide. In \cite{CLPZ} an example was given where $KP$-strong types differ from Lascar strong types. More (natural) examples will be given later.
We now consider the case where $X = G$ is a definable group, and $E$ comes from an appropriate subgroup of $G$. So we assume $G$ to be a group definable in a saturated model ${\bar M}$, and we fix a small set $A$ of parameters over which $G$ is defined. $G_{A}^{0}$ denotes the intersection of all $A$-definable subgroups of $G$ of finite index. It is clearly a type-definable (normal) subgroup of $G$ of bounded index, and equipped with the logic topology the quotient $G/G_{A}^{0}$ is a profinite group. We let $G_{A}^{00}$ denote the smallest type-definable over $A$ subgroup of $G$ of bounded index. It is also normal, the quotient $G/G_{A}^{00}$, equipped with the logic topology is a compact (Hausdorff) topological group, and $G/G_{A}^{0}$ is its maximal profinite quotient. Finally $G_{A}^{000}$ is the smallest $A$-invariant subgroup of $G$, of bounded index, which is again normal.
We have that
$G_{A}^{000} \leq G_{A}^{00} \leq G_{A}^{0}$.
A well-known construction (see \cite{Gismatullin-Newelski}) links these different ``connected components" of definable groups with the various strong types. We give a simplified version: Let $T$ be a complete theory such that $dcl(\emptyset)$ is a model. Let $G$ be a $\emptyset$-definable group.
Adjoin a new sort $S$ together with a regular action of $G$ on $S$. Call the new theory $T'$. Clearly no ``new structure" is imposed on $T$. Work in a saturated model of $T'$. Then
\begin{Fact} (i) $E_{S,\emptyset,Sh}$ is the orbit equivalence relation on $S$ induced by $G_{\emptyset}^{0}$.
\newline
(ii) $E_{S,\emptyset,KP}$ is the orbit equivalence relation on $S$ induced by $G_{\emptyset}^{00}$, and
\newline
(iii) $E_{S,\emptyset,L}$ is the orbit equivalence relation on $S$ induced by $G_{\emptyset}^{000}$.
\end{Fact}
Hence, if for example $G^{00} \neq G^{000}$, then we obtain in this way examples where $KP$-strong type differs from Lascar strong type.
There are plenty of examples where $G_{\emptyset}^{0} \neq G_{\emptyset}^{00}$ (such as definably compact groups definable in $o$-minimal structures). However, until now no examples had been worked out where $G_{\emptyset}^{00} \neq G_{\emptyset}^{000}$.
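To recall the standard picture in one such example (well known, and included only for orientation): let $K$ be a saturated real closed field and $G = SO_{2}(K)$. Then $G$ is definably connected, so $G^{0} = G$, while

```latex
G^{00} \;=\; \{\, g \in SO_{2}(K) : g \text{ is infinitesimally close to the identity} \,\},
\qquad
G/G^{00} \;\cong\; SO_{2}(\R),
```

the isomorphism being induced by the standard part map; in particular $G^{0} \neq G^{00}$, and $G/G^{00}$ equipped with the logic topology is the compact Lie group $SO_{2}(\R)$.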
\noindent
We say, for example, that ``$G^{0}$ exists" if for some set $A$ of parameters, for all $B\supseteq A$, $G_{A}^{0}= G_{B}^{0}$. If $G^{0}$ exists, then,
assuming $G$ is $\emptyset$-definable, we can take $A$ to be $\emptyset$ and we define $G^{0}$ to be
$G_{\emptyset}^{0}$. Likewise for $G^{00}$ and $G^{000}$. If $G^{000}$ exists then so do $G^{00}$ and $G^{0}$.
Gismatullin \cite{Gismatullin} proves, following work of Shelah, that if $T$ has $NIP$ then for any definable group $G$,
$G^{000}$ exists. When $T$ is stable, $G^{0} = G^{00} = G^{000}$. For $T$ simple, $G^{0}$ may not exist, but it is known that for any $A$, $G_{A}^{00} = G_{A}^{000}$. It is conjectured (for $T$ simple) that $G_{A}^{0} = G_{A}^{00}$ and this is known in the supersimple case (\cite{Wagner}).
When we are working with either $o$-minimal theories, or closely related $NIP$ theories, we just say $G^{0},G^{00}$, $G^{000}$.
\vspace{2mm}
\noindent
We now give examples of $G$ (including $o$-minimal examples) where $G^{00} \neq G^{000}$. In the sequel to this paper we will make a systematic analysis of $G^{00}$ and $G^{000}$ in the $o$-minimal case, showing that the behaviour in Theorem 3.3 for example is typical.
\begin{Theorem} Let $T = Th(\widetilde{SL_{2}(\R)},\cdot)$. Then $T$ has $NIP$, and if $(G,\cdot)$ denotes a saturated model, then
$G^{00}\neq G^{000}$. In fact $G = G^{00}$ and $G/G^{000}$ is isomorphic to ${\widehat \Z}/\Z$ where $\widehat \Z$ is the profinite completion of $(\Z,+)$.
\end{Theorem}
\begin{proof} From Fact 2.11 and the discussion following it (taken from \cite{HPP}) the group $(\widetilde{SL_{2}(\R)},\cdot)$ is interpretable (with parameters) in the $2$-sorted structure
\newline
$$((\Z,+), (\R,+,\times))$$
(where there are no additional basic relations between the sorts). As $Th(\Z,+)$ is stable (in fact superstable of $U$-rank $1$) and $RCF$ has $NIP$ clearly the $2$-sorted structure has $NIP$ too, and hence the interpretable group
$(\widetilde{SL_{2}(\R)},\cdot)$ has $NIP$.
\noindent
In fact we will work with $T = Th((\Z,+),(\R,+,\times))$ (rather than $Th(\widetilde{SL_{2}(\R)},\cdot)$), and will point out how the results are also valid for the ``pure group structure".
\noindent
Let $M$ denote $(\R,+,\cdot)$, and $N$ denote the $2$-sorted structure $((\Z,+),(\R,+,\times))$. Then a saturated model ${\bar N}$ of $T$ will be of the form $((\Gamma,+), \bar M)$ where ${\bar M}$ is a saturated real closed field $(K,+,\cdot)$ say, and $(\Gamma,+)$ is a saturated elementary extension of $(\Z,+)$.
Let now $G$ denote the interpretation in the big model ${\bar N}$ of the formula(s) defining the group $\widetilde{SL_{2}(\R)}$ in $N$. So clearly, using Fact 2.11,
$G$ has universe the definable set $\Gamma \times SL(2,K)$ and group operation given by $(t_{1},x_{1})*(t_{2},x_{2}) = (t_{1} + t_{2} + h(x_{1},x_{2}),x_{1}x_{2})$.
Here $h(x_{1},x_{2})\in \Z < \Gamma$ so everything makes sense. We write the group $G$ as $(G,\cdot)$ hopefully without ambiguity.
We identify the group $\Gamma$ with the subgroup $(\{(t,1):t\in\Gamma\},*)$ of $G$ via the (definable) isomorphism $\iota$ which takes $t\in \Gamma$ to $(t-h(1,1),1)\in G$. As such $\Gamma$ is central in $G$ and we have the exact sequence
\begin{equation}1 \to\Gamma \to G \to SL(2,K)\to 1 \end{equation}
We again identify $\Z < \Gamma$ with the subgroup $(\{(t,1):t\in \Z\},*)$ of $G$ via $\iota$.
Note that $(\{(t,x):t\in \Z, x\in SL(2,K)\},*)$ is a (non definable) subgroup of $G$, which we will take the liberty to call
$\widetilde{SL_{2}(K)}$. (In fact it will identify with the so-called $o$-minimal universal cover of $SL(2,K)$, an ind-definable group in ${\bar M}$, but this fact will not be needed.) From (1) we obtain:
\begin{equation} 1 \to \Z \to \widetilde{SL_{2}(K)} \to SL_{2}(K) \to 1 \end{equation}
(where only $SL_{2}(K)$ is definable).
So with the above identifications we write
\begin{equation} G = \Gamma \cdot \widetilde{SL_{2}(K)} \end{equation}
where the subgroup $\Gamma$ of $G$ is definable and central, the subgroup $\widetilde{SL_{2}(K)}$ of $G$ is not definable and $\Z = \Gamma \cap \widetilde{SL_{2}(K)}$.
\vspace{2mm}
\noindent
We now aim to understand $G^{000}$ in terms of this decomposition (even though $\widetilde{SL_{2}(K)}$ is not definable).
\vspace{2mm}
\noindent
{\em Claim 1.} $\Gamma^{000} = \Gamma^{00} = \Gamma^{0} = \bigcap_{n}n\Gamma$, and is contained in $G^{000}$.
\newline
{\em Proof of Claim 1.} $\Gamma$ (as a group definable in $N$) is simply a model of $Th(\Z,+)$ which is stable, so we have equality of the various connected components and $\Gamma^{0}$ is the intersection of all definable subgroups of finite index which is as described. Also $G^{000}\cap \Gamma$ clearly contains $\Gamma^{000}$.
\newline
{\em End of proof.}
\vspace{2mm}
\noindent
{\em Claim 2.} $\widetilde{SL_{2}(K)}$ is perfect, namely equals its own commutator subgroup.
\newline
{\em Proof of Claim 2.} Because of the exact sequence (2) above and the well-known fact that $SL_{2}(K)$ is perfect, it is enough to show that the subgroup $\Z$ of $\widetilde{SL_{2}(K)}$ is contained in
$[\widetilde{SL_{2}(K)},\widetilde{SL_{2}(K)}]$. But this follows immediately because $\Z$ is contained in the (naturally embedded) subgroup $\widetilde{SL_{2}(\R)}$ of $\widetilde{SL_{2}(K)}$, and again $\widetilde{SL_{2}(\R)}$ is known to be perfect.
\newline
{\em End of proof.}
\vspace{2mm}
\noindent
{\em Claim 3.} $\widetilde{SL_{2}(K)} \subseteq G^{000}$.
\newline
{\em Proof of Claim 3.} Let $H = \widetilde{SL_{2}(K)} \cap G^{000}$. $H$ is then a normal subgroup of $\widetilde{SL_{2}(K)}$ of index at most the continuum.
Hence $\pi(H)$ the image of $H$ under $\pi:\widetilde{SL_{2}(K)} \to SL_{2}(K)$ is an infinite normal subgroup of $SL_{2}(K)$. As $SL_{2}(K)$ is
simple as an abstract group modulo its finite centre, and is also perfect, it follows that $\pi(H) = SL_{2}(K)$. Hence
$\widetilde{SL_{2}(K)} = \Z\cdot H$, and as $\Z$ is central, the commutator subgroup of $\widetilde{SL_{2}(K)}$ is contained in $H$. By Claim 2, $H = \widetilde{SL_{2}(K)}$, as required.
\newline
{\em End of proof.}
\vspace{2mm}
\noindent
(Note that we have shown that $\widetilde{SL_{2}(K)}$ has {\em no} proper normal subgroup not contained in its centre.)
\vspace{2mm}
\noindent
{\em Claim 4.} $[G,G] = \widetilde{SL_{2}(K)}$
\newline
{\em Proof of Claim 4.}
By the description of $G$ in (3), $[G,G]$ is a subgroup of $\widetilde{SL_{2}(K)}$. By Claim 2, we get equality.
\newline
{\em End of proof.}
\vspace{2mm}
\noindent
{\em Claim 5.} $G^{000} = \Gamma^{0}\cdot \widetilde{SL_{2}(K)}$
\newline
{\em Proof of Claim 5.} By Claims 1 and 3, $G^{000}$ contains $\Gamma^{0}\cdot \widetilde{SL_{2}(K)}$.
On the other hand $\Gamma^{0}\cdot \widetilde{SL_{2}(K)}$ is clearly of bounded index in $G$, and using Claim 4 is also clearly invariant under automorphisms of $N$ which fix the parameters defining $G$.
So we get equality. In fact note at this point that $\Gamma^{0}\cdot \widetilde{SL_{2}(K)}$ is also invariant under automorphisms of the structure $(G,\cdot)$, so coincides with $G^{000}$ in this reduct.
\newline
{\em End of proof.}
\vspace{2mm}
\noindent
{\em Claim 6.} $G = G^{00}$.
\newline
{\em Proof of Claim 6.} By Claim 5 and (3), $G^{000} \cap \Gamma = \Gamma^{0}\cdot \Z$. So as $G^{000}\subseteq G^{00}$, $G^{00}\cap\Gamma$ contains $\Gamma^{0}\cdot\Z$ {\em and must be} type-definable. This can be directly seen to be a contradiction unless $G^{00}\cap \Gamma = \Gamma$. For example, $\Gamma/\Gamma^{0} = \widehat\Z$, the profinite completion of
$\Z$, and the subgroup $\Z$ of $\Gamma$ maps isomorphically to the dense subgroup $\Z$ of $\widehat\Z$ under the quotient map. But then under this quotient map $G^{00}\cap \Gamma$ must map to a closed subgroup of $\widehat\Z$ which contains the dense subgroup $\Z$, hence must map onto $\widehat\Z$, and so $G^{00}\cap \Gamma = \Gamma$.
\newline
{\em End of proof.}
\vspace{2mm}
\noindent
We have already seen in the proof of Claim 6 that $G/G^{000}$ is naturally isomorphic to $\widehat\Z/\Z$. So together with Claims 5 and 6 this completes the proof of Theorem 3.2.
\end{proof}
\vspace{5mm}
\noindent
We now give a similar $o$-minimal example. We will refer at some point in the proof to the fact that for a definably compact group $H$ (such as $SO_{2}$) in a saturated $o$-minimal structure, $H^{00} = H^{000}$ which follows from results in \cite{NIPII}.
\begin{Theorem} Let $T$ be $RCF$, and $G$ the group from Example 2.10. Let $G_{1}$ be $G({\bar M})$ for ${\bar M} = (K,+,\cdot)$ a saturated model. Then $G_{1} = G_{1}^{00}$, but $G_{1} \neq G_{1}^{000}$ and in fact
$G_{1}/G_{1}^{000}$ is naturally isomorphic to the quotient of the circle group $SO_{2}(\R)$ by a dense cyclic subgroup.
\end{Theorem}
\begin{proof} The proof is more or less identical to that of Theorem 3.2, so we just give a sketch. In analogy with
(3) from the proof of 3.2 and with the same notation we have:
\newline
(*) $G_{1}$ is a central product of its subgroups $SO_{2}(K)$ (which is definable) and $\widetilde{SL_{2}(K)}$ which is not definable, and with intersection ``$\Z$ " (an infinite cyclic subgroup $\langle g \rangle$ of $SO_{2}(\R)< SO_{2}(K)$).
\vspace{2mm}
\noindent
As in Claims 3 and 4 in the proof of 3.2, $G_{1}^{000}$ contains $\widetilde{SL_{2}(K)}$, and (using (*)) $[G_{1},G_{1}] = \widetilde{SL_{2}(K)}$. Also $G_{1}^{000} \cap SO_{2}(K)$ contains $SO_{2}(K)^{000}$, which we know to be equal to $SO_{2}(K)^{00}$. Hence we conclude that
\newline
(**) $G_{1}^{000} = SO_{2}(K)^{00}\cdot\widetilde{SL_{2}(K)}$.
\vspace{2mm}
\noindent
Now the quotient map $SO_{2}(K) \to SO_{2}(K)/SO_{2}(K)^{00}$ identifies with the standard part map $SO_{2}(K) \to SO_{2}(\R)$ which is the identity on $SO_{2}(\R)$ and in particular on $\langle g \rangle$ (so $\langle g \rangle \cap SO_{2}(K)^{00}$ is trivial).
\newline
By (**) $G_{1}^{00} \cap SO_{2}(K)$ is type-definable and contains $SO_{2}(K)^{00}\cdot\langle g \rangle$, so its image under the standard part map $SO_{2}(K) \to SO_{2}(\R)$ is a closed subgroup which contains the dense subgroup $\langle g\rangle$, hence has to be $SO_{2}(\R)$. So $G_{1}^{00}$ contains $SO_{2}(K)$ hence by (*) $G_{1}^{00} = G_{1}$.
\end{proof}
\vspace{5mm}
\noindent
As remarked earlier the above theorems provide new examples of non $G$-compact theories, i.e. where Lascar strong types differ from $KP$-strong types.
A possibly interesting question, especially bearing in mind the above examples, is how one can or should view, naturally, $G/G^{000}$ (or even $G^{00}/G^{000}$) as a mathematical object. For example it is an abstract group, a quasi-compact (compact but not necessarily Hausdorff) topological group, as well as a quotient of a type-space by an $F_{\sigma}$ equivalence relation. In the above examples it is, in a natural fashion, the quotient of a compact (Hausdorff) commutative group by a countable dense subgroup. We will show in the sequel that this is always the case when $G$ is definable in an $o$-minimal structure. A natural problem at this point is to find $G$ such that $G_{\emptyset}^{00}/G_{\emptyset}^{000}$ is noncommutative. Also we see, via the examples above, some relationships between universal covers and fundamental groups on the one hand, and Lascar groups on the other, and maybe the connection is more than just accidental.
\section{Definable amenability and bounded orbits}
We begin with an arbitrary theory $T$. We recall that if $M$ is a model, and $X$ a definable set in $M$, then a Keisler measure $\mu$ on $X$ (over $M$) is a finitely additive probability measure on the family of subsets of $X$ which are definable (with parameters) in $M$. A Keisler measure $\mu$ on $X$ over $M$ induces and is induced by a (unique) regular Borel probability measure on the space $S_{X}(M)$ of complete types over $M$ containing the formula defining $X$, which we sometimes identify with $\mu$. (See the introduction to Section 4 of \cite{NIPII}.) In fact a Keisler measure on $X$ over $M$ should be seen as a generalization of a complete type over $M$ (which contains the formula $``x\in X"$).
When $X = G$ is a definable group, namely is equipped with a definable group structure, then $G(M)$ acts (on both the left and right) on the set (in fact space) of Keisler measures $\mu$ on $G$ over $M$: if $Y$ is an $M$-definable subset of $G$ then,
$(g\cdot\mu)(Y) = \mu(g^{-1}\cdot Y)$. In particular it makes sense for a Keisler measure $\mu$ on $G$ over $M$ to be left (or right)
$G(M)$-invariant. If $G$ has such a left $G$-invariant Keisler measure then we say that $G$ is {\em definably amenable}.
In fact (assuming $G$ is definable without parameters), this is a property of $Th(M)$, in the sense that if $N$ is another model of $T$ and $G(N)$ is the interpretation in $N$ of the formulas defining $G$, then $G(M)$ is definably amenable iff $G(N)$ is. This follows from Proposition 5.4 of \cite{NIPI}.
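As a quick check (not made explicit above), this rule indeed defines a left action on the space of Keisler measures: for $g,h\in G(M)$ and $M$-definable $Y\subseteq G$,
\[(g\cdot(h\cdot\mu))(Y) = (h\cdot\mu)(g^{-1}\cdot Y) = \mu(h^{-1}\cdot g^{-1}\cdot Y) = \mu((gh)^{-1}\cdot Y) = ((gh)\cdot\mu)(Y).\]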
In the above context we also have the (left and right) actions of $G(M)$ on the space $S_{G}(M)$ (of complete types over $M$ concentrating on $G$).
When $M$ is a ``big" model, and $p(x)\in S_{G}(M)$, we have the notion ``$p$ has bounded orbit" from \cite{Newelski} for example. We will take our working definition as the following rather crude one, which on the face of it depends on set theory.
\begin{Definition} Suppose $\bar\kappa$ is an inaccessible cardinal, and ${\bar M}$ a saturated model of cardinality $\bar\kappa$.
\newline
(i) We will say that {\em $p(x)\in S_{G}({\bar M})$ has bounded orbit} if the orbit of $p$ under the (left) action of $G({\bar M})$ is of cardinality
$<\bar\kappa$, equivalently if $Stab(p) = \{g\in G({\bar M}): gp = p\}$ is a subgroup of $G({\bar M})$ of index $< \bar\kappa$.
\newline
(ii) We say that {\em $G$ has a bounded orbit} if some $p(x)\in S_{G}({\bar M})$ has bounded orbit.
\end{Definition}
In \cite{Newelski} some more careful definitions (see Definition 1.1 there) are given of ``bounded orbit" avoiding the dependence on set theory (and some problems are mentioned concerning the possible sizes of bounded orbits), and our results in this section hold with these more refined definitions. The same paper \cite{Newelski} states a conjecture attributed to Petrykowski:
\begin{Conjecture} If $G$ has a bounded orbit then $G$ is definably amenable.
\end{Conjecture}
As discussed in the introduction the motivation for this conjecture seems to be also closely connected to $G^{00}$ and $G^{000}$, in the sense that one may hope, given a global type $p$ with bounded orbit, to be able to show that $G^{00} = G^{000} = Stab(p)$ and then to use $p$ to lift the Haar measure on $G/G^{00}$ to a translation invariant Keisler measure on $G$.
The aim of this section is to prove Conjecture 4.2 in the $o$-minimal context (although we have not yet ``identified" those types with bounded orbit).
We do this by characterizing each of the properties ``definable amenability" and ``having a bounded orbit" in terms of the decomposition given in Proposition 2.6 and concluding that they coincide. So in a sense it is a proof by inspection.
We first describe when a definable group in an $o$-minimal structure is definably amenable. The proof is basically due to Hrushovski.
We begin with some preparatory lemmas, the first two of which are in a general context.
\begin{Lemma} Suppose $T$ has definable Skolem functions. Let $G$ be definable and definably amenable. Then any definable subgroup $H$ of $G$ is also definably amenable.
\end{Lemma}
\begin{proof} Let $\mu$ be a left $G$-invariant Keisler measure on $G$. By the existence of definable Skolem functions there is a definable subset $S$ of $G$ which meets each coset of $H$ in $G$ in exactly one point. Define $\lambda$ on definable subsets of $H$ by: for $Y$ a definable subset of $H$,
$\lambda(Y) = \mu(Y\cdot S)$ where $Y\cdot S = \{a\cdot b: a\in Y, b\in S\}$.
\newline
It is easy to see that $\lambda$ is a Keisler measure on $H$. Left $H$-invariance holds because, for $Y\subseteq H$ definable and $h\in H$, $\lambda(h\cdot Y) = \mu((h\cdot Y)\cdot S) = \mu(h\cdot(Y\cdot S)) =$ (by left invariance of $\mu$) $\mu(Y\cdot S) = \lambda(Y)$.
\end{proof}
\begin{Lemma} Suppose $G$ is definable and $H$ is a definable normal subgroup.
\newline
(i) If $G$ is definably amenable, so is $G/H$.
\newline
(ii) (Assume $T$ has $NIP$.) If both $H$ and $G/H$ are definably amenable, so is $G$.
\end{Lemma}
\begin{proof}
(i) Let $\pi:G\to G/H$ be the canonical surjective homomorphism. If $\mu$ is a left $G$-invariant Keisler measure on $G$, then the ``pushforward measure" on $G/H$ defined by $\lambda(Y) = \mu(\pi^{-1}(Y))$ is a left invariant Keisler measure on $G/H$.
\newline
(ii) We work in a saturated model ${\bar M}$. Let $\mu,\lambda$ be translation-invariant Keisler measures on $H$ and $G/H$ respectively over ${\bar M}$ (i.e. ``global" Keisler measures). By Lemma 5.8 of \cite{NIPII} we may assume that $\mu$ is {\em definable}. We define a global Keisler measure $\chi$ on $G$ by integration:
Namely, let $X$ be a definable subset of $G$, and we may assume that both $X$ and $\mu$ are definable over a small model $M$. For $g/H \in G/H$, let $f(g/H) = \mu((g^{-1}X) \cap H)$, noting by translation invariance of $\mu$ that this is well-defined. By definability of $\mu$ over $M$, $f(g/H)$ depends only on $tp((g/H)/M)$ and the corresponding map from the relevant space of complete types $S_{G/H}(M)$ to $[0,1]$ is continuous. So considering $\lambda$ as inducing a Borel measure on $S_{G/H}(M)$ we can form $\int f d\lambda$, which we define to be $\chi(X)$. It is easily checked that $\chi$ is a global translation invariant Keisler measure on $G$.
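In summary, the measure constructed above is
\[\chi(X) \;=\; \int_{S_{G/H}(M)} f\, d\lambda, \qquad \text{where}\quad f(g/H) = \mu\big((g^{-1}X)\cap H\big),\]
with $\lambda$ viewed as a Borel measure on $S_{G/H}(M)$.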
\end{proof}
\begin{Lemma} Suppose $G$ is a definably almost simple, non definably compact group, definable in an $o$-minimal expansion $M$ of a real closed field $K$ say. Then $G$ is not definably amenable.
\end{Lemma}
\begin{proof} The main point is to observe that, working up to definable isogeny, $G$ contains a definable subgroup definably isomorphic to $PSL(2,K)$.
Granting this observation, the lemma follows from Lemma 4.4 together with Remark 5.2(iv) of \cite{NIPI} (which states that $PSL(2,K)$ is not definably amenable). The observation itself follows from results in \cite{PPSI} and \cite{PPSIII}, together with the classification of the real simple Lie algebras corresponding to simple noncompact Lie groups.
\end{proof}
We can now conclude, where notation comes from the paragraph following the proof of Proposition 2.6.
\begin{Proposition} Let $G$ be a definable, definably connected, group in an $o$-minimal expansion $M$ of a real closed field. Then $G$ is definably amenable if and only if $D$ (the semisimple with no definably compact parts, part of $G$) is trivial.
\end{Proposition}
\begin{proof} First suppose that $D$ is trivial, so we have a short exact sequence $$1\to W \to G \to C \to 1$$ where $W$ is solvable and $C$ is definably compact. Now $W$ is amenable as an abstract group, so in particular definably amenable, and by \cite{NIPII}, $C$ is definably amenable. As $Th(M)$ has $NIP$, by Lemma 4.4(ii) $G$ is definably amenable.
Conversely, if $G$ is definably amenable, then by Lemma 4.4(i), $D$ is too, as it is a quotient of $G$. If $D$ is nontrivial then it contains a definably almost simple (non definably compact) definable subgroup, which by Lemma 4.3 is definably amenable. This contradicts Lemma 4.5.
\end{proof}
We give a little more information around definable amenability by noting:
\begin{Proposition} ($T$ an $o$-minimal expansion of $RCF$.) Suppose $G$ is definable, definably connected, and torsion-free. Then $G$ has a (left) invariant, definable, global complete type.
\end{Proposition}
\begin{proof} We again argue by induction on $dim(G)$. By Proposition 1.1 (i), $G$ contains a normal definable subgroup $H$ such that $G/H$ is $1$-dimensional. From results in \cite{Razenj} we may assume that $G/H$ is an open interval in $1$-space with continuous group operation. The global type at ``$+\infty$", $p$ say, is both definable and translation invariant. On the other hand the induction hypothesis gives a definable translation invariant global complete type $q$ of $H$. The argument (by integration) in the proof of Lemma 4.4(ii) produces a global complete type of $G$ which is both translation invariant and definable.
\end{proof}
\vspace{2mm}
\noindent
We now focus on Conjecture 4.2.
From now on ${\bar M}$ denotes a saturated model of (arbitrary complete countable) $T$, of cardinality $\bar\kappa$ where $\bar\kappa$ is inaccessible, and $G$ an $\emptyset$-definable group.
Let us first remark that the converse to Conjecture 4.2 holds for $NIP$ theories.
\begin{Remark} (Assume $T$ has $NIP$.) Suppose $G$ is definably amenable. Then $G$ has a bounded orbit.
\end{Remark}
\begin{proof} By Proposition 5.12 of \cite{NIPII}, $G$ has a global $f$-generic type $p$. Fix a small model $M_{0}$ which witnesses this. There will then be a bounded number of global complete types which do not fork over $M_{0}$, as there are a bounded number of complete types over $M_{0}$, and by $NIP$ any complete type over $M_{0}$ has a bounded number of global nonforking extensions (Proposition 2.1 of \cite{NIPII}). As every $G({\bar M})$-translate of $p$ does not fork over $M_{0}$ there are a bounded number of such translates so $p$ has bounded orbit.
\end{proof}
\begin{Lemma} Suppose $G = G({\bar M})$ is almost simple as an abstract group, in the sense that $G$ has no infinite proper normal subgroups. Then $G$
has no proper subgroup of index $<\bar\kappa$. In particular any bounded orbit of $G$ is a singleton (namely a translation invariant type).
\end{Lemma}
\begin{proof} Suppose $H$ were a proper subgroup of $G$ of bounded index. Then $G$ acts transitively on the homogeneous space $X = G/H$. Let
$N = \{g\in G:gx = x$ for all $x\in X\}$. Then $N$ is a proper normal subgroup of $G$. As $G/N$ acts faithfully on $X$ and $|X|< \bar\kappa$, also
$|G/N| < \bar\kappa$, in particular $N$ is an infinite proper normal subgroup of $G$, contradiction.
\newline
For the ``in particular" clause: if $p\in S_{G}({\bar M})$ has bounded orbit, then $Stab(p)$ is a subgroup of $G$ of bounded index. By what has just been shown $Stab(p) = G$ so $p$ is left $G$-invariant.
\end{proof}
\begin{Lemma} Let $f:G\to H$ be a definable surjective homomorphism. If $G$ has a bounded orbit, so does $H$.
\end{Lemma}
\begin{proof} Let $p\in S_{G}({\bar M})$ have bounded orbit. Then $q = f(p)\in S_{H}({\bar M})$, and if $g\in Stab_{G}(p)$ then
$q = f(p) = f(gp) = f(g)q$ hence $f(Stab_{G}(p)) \subseteq Stab_{H}(q)$. As $Stab_{G}(p)$ has bounded index in $G$, also $Stab_{H}(q)$ has bounded index in $H$.
\end{proof}
\begin{Proposition} Assume $T$ is an $o$-minimal expansion of $RCF$ and $G$ is definably connected. Suppose $G$ has a bounded orbit. Then $D$ (the semisimple with no definably compact parts, part of $G$) from Proposition 2.6 is trivial.
\end{Proposition}
\begin{proof} Suppose for a contradiction that $D$ is nontrivial. Then $D$ is an almost direct product of definable, definably almost simple non definably compact groups $D_{i}$. But then for $i=0$ say there is a definable surjective homomorphism $f$ from $G$ to $D_{0}$. By Lemma 4.10, $D_{0}$ has a bounded orbit. As remarked earlier (Corollary 6.3 of \cite{PPSIII}) $D_{0}$ is almost simple as an abstract group, so by Lemma 4.9, $D_{0}$ has an invariant (global) type. This contradicts non definable amenability of $D_{0}$ (Lemma 4.5).
\end{proof}
\begin{Corollary} ($T$ an $o$-minimal expansion of $RCF$). $G$ has a bounded orbit if and only if $G$ is definably amenable.
\end{Corollary}
\begin{proof} If $G$ has a bounded orbit, then by Proposition 4.11 and Proposition 4.6, $G$ is definably amenable. The converse is Remark 4.8.
\end{proof}
\vspace{5mm}
\noindent
Finally we discuss a strengthening of Conjecture 4.2 in which we try to describe bounded orbits themselves. As we are not completely sure which way it will go we state the new conjecture as a question (with notation as above).
\begin{Problem} (Assume $T$ has $NIP$.) Is it the case that $p\in S_{G}({\bar M})$ has bounded orbit (equivalently stabilizer of bounded index) if and only if $p$ is $f$-generic?
\end{Problem}
Again the right to left direction holds with proof contained in the proof of Remark 4.8.
In the $o$-minimal case we hope to give an explicit description of global types with bounded orbit from which a positive answer to Problem 4.13 can be just read off. By Corollary 4.12 and Proposition 4.6 we may restrict ourselves to definable groups $G$ for which $D$ (from the discussion after Proposition 2.6) is trivial, hence $G$ is built up from a definably compact group, and $1$-dimensional torsion-free groups. Here we just point out that Problem 4.13 has a positive answer for these constituents, and leave the general ($o$-minimal case) to later work. For the next lemma we recall that a
definable subset $X$ of $G$ (or the formula defining it) is said to be left generic if finitely many left translates of $X$ cover $G$. Likewise for right generic. Definably compact groups $G$ in $o$-minimal expansions of real closed fields have the so-called ``finitely satisfiable generics" property (see \cite{NIPI}) which says that there is a global type of $G$ every left translate of which is finitely satisfied in some given small model. The $fsg$ property implies among other things that left genericity coincides with right genericity for definable subsets of $G$, so we just say {\em generic}. A generic type $p\in S_{G}(M)$ is one all of whose formulas are generic, and again such global types exist when $G$ is definably compact in $o$-minimal $T$.
\begin{Lemma} ($T$ $o$-minimal.) Suppose $G$ is definably compact, and $p(x)\in S_{G}({\bar M})$. Then the following are equivalent:
\newline
(i) $p$ has bounded $G$-orbit,
\newline
(ii) $p$ is generic,
\newline
(iii) $p$ is $f$-generic.
\end{Lemma}
\begin{proof}
In fact the implications $(ii) \to (iii) \to (i)$ hold for $fsg$ groups in arbitrary $NIP$ theories and the proof will be at this level of generality.
\newline
(iii) implies (i) is given by the proof of Remark 4.8.
\newline
(ii) implies (iii): By \cite{NIPI} (see also Fact 5.2 of \cite{NIPII}), any generic formula $\phi(x)$ over ${\bar M}$ is satisfied in any small model $M_{0}$ (over which $G$ is defined). So if $p\in S_{G}({\bar M})$ is generic, then every left translate of $p$ is finitely satisfied in $M_{0}$ (where $M_{0}$ is any small model over which $G$ is defined), so in particular every left translate of $p$ does not fork over $M_{0}$, hence $p$ is $f$-generic.
\vspace{2mm}
\noindent
(i) implies (ii): Here we give the proof assuming $o$-minimality of $T$ and definable compactness of $G$. Suppose $p$ is not generic. Let $X$ be a definable set (or formula) in $p$ which is not generic. Note that we may assume $G$ to be a closed bounded definable subset of some ${\bar M}^{n}$. The closure of $X$ in $G$ equals $X\cup Y$ where $dim(Y) < dim(G)$. So $Y$ is not generic in $G$. Hence as the set of non generic definable sets is an ideal, the closure of $X$ is also non generic (and of course in $p$). The upshot is that we may assume $X$ to be closed. Let $M_{0}$ be a small model over which $G$ and $X$ are defined. If for every $g\in G$, the left translate $g\cdot X$ meets $G(M_{0})$, then by compactness $X$ is right generic, so generic, a contradiction. Hence for some $g\in G$, $(g\cdot X) \cap G(M_{0}) = \emptyset$. Now $g\cdot X$ is also closed in $G$. So by results in \cite{Dolich} and \cite{Peterzil-Pillay} (see also \cite{Starchenko}),
$g\cdot X$ forks over $M_{0}$. By the main result of \cite{Chernikov-Kaplan} (which is maybe implicit in other papers
in the $o$-minimal case), $g\cdot X$ divides over $M_{0}$. As $X$ is defined over $M_{0}$ this means that for some $M_{0}$-indiscernible sequence $(g_{i}:i<\omega)$ and some $k< \omega$, $\{g_{i}\cdot X: i< \omega\}$ is $k$-inconsistent, in the sense that for every (some) $i_{1} < \cdots < i_{k}$, $(g_{i_{1}}\cdot X) \cap \cdots \cap (g_{i_{k}}\cdot X) = \emptyset$. We can stretch the $M_{0}$-indiscernible sequence $(g_{i}:i<\omega)$ to $(g_{i}:i< {\bar\kappa})$. So $\{g_{i}\cdot X:i< {\bar\kappa}\}$ is also $k$-inconsistent. It follows easily that among the set $\{g_{i}p:i<{\bar\kappa}\}$ of complete global types there are $\bar\kappa$ many distinct types. So $p$ does not have bounded orbit.
\end{proof}
\vspace{2mm}
\noindent
Let us note that various ingredients of the proof of (i) implies (ii) above also appear in earlier papers such as \cite{NIPII}. In fact there {\em is} a proof of (i) implies (ii) (so of the whole lemma) in the more general context of $fsg$ groups in $NIP$ theories, but depending on some additional machinery. It will appear in a subsequent paper.
\begin{Lemma} Suppose $G$ is $1$-dimensional and torsion-free (divisible), and $p\in S_{G}({\bar M})$. Then the following are equivalent:
\newline
(i) $p$ has bounded $G$-orbit,
\newline
(ii) $p$ is $G$-invariant,
\newline
(iii) $p$ is the type at $+\infty$ or the type at $-\infty$ (so definable and $G$-invariant, hence $f$-generic).
\end{Lemma}
\begin{proof} As remarked earlier we can and will identify $G$ with an open interval on which the group operation is
continuous, and write $G$ additively (it is commutative). We know (or it is clear) that the types at $+\infty$ and
$-\infty$ are $G$-invariant, hence have bounded orbit. So it suffices to prove that any other type
$q(x)\in S_{G}({\bar M})$ has unbounded $G$-orbit. This is really obvious but we go through details. So $q$ defines a cut in $G$ with nonempty left hand side $L$ and right hand side $R$. Let $a\in L$, $b\in R$ and $c = b-a > 0$. By compactness and saturation we can clearly find an increasing sequence $(d_{i}:i<\bar\kappa)$ in $G$, such that $i < j$ implies $(d_{j}-d_{i}) \geq c$. Hence $\{d_{i}+ q: i< \bar\kappa\}$ witnesses that $q$ has unbounded orbit.
\end{proof}
\vspace{2mm}
\noindent
\section{Introduction}
\label{sec:intro}
Recent commonsense reasoning benchmarks~\cite{sap2019socialiqa,bisk2019piqa} and neural advancements~\cite{liu2019roberta,lin2019kagnet} shed a new light on the longstanding task of capturing, representing, and reasoning over commonsense knowledge.
While state-of-the-art language models \cite{devlin2018bert,liu2019roberta} capture linguistic patterns that allow them to perform well on commonsense reasoning tasks after fine-tuning, their robustness and explainability could benefit from integration with structured knowledge, as shown by KagNet~\cite{lin2019kagnet} and HyKAS \cite{ma2019towards}.
Let us consider an example task question from the SWAG dataset~\cite{zellers2018swag},\footnote{The multiple-choice task of choosing an intuitive follow-up scene is customarily called question answering~\cite{ma2020knowledge,zellers2018swag}, despite the absence of a formal question.} which describes a woman who takes a seat at the piano:
\footnotesize{
\begin{verbatim}
Q: On stage, a woman takes a seat at the piano. She:
1. sits on a bench as her sister plays with the doll.
2. smiles with someone as the music plays.
3. is in the crowd, watching the dancers.
-> 4. nervously sets her fingers on the keys.
\end{verbatim}
}
\normalsize
Answering this question requires knowledge that humans possess and apply, but machines cannot distill directly in communication.
Luckily, graphs of (commonsense) knowledge contain such knowledge. ConceptNet's~\cite{speer2017conceptnet} triples state that pianos have keys and are used to perform music, which supports the correct option and discourages answer 2. WordNet~\cite{miller1995wordnet} states specifically, though in natural language, that pianos are played by pressing keys. According to an image description in Visual Genome, a person could play piano while sitting and having their hands on the keyboard. In natural language, ATOMIC~\cite{sap2019atomic} indicates that before a person plays piano, they need to sit at it, be on stage, and reach for the keys. ATOMIC also lists strong feelings associated with playing piano. FrameNet's~\cite{baker1998berkeley} frame of a performance contains two separate roles for the performer and the audience, meaning that these two are distinct entities, which can be seen as evidence against answer 3.
While these sources clearly provide complementary knowledge that can help commonsense reasoning, their different foci, representation formats, and sparse overlap make integration difficult. Taxonomies, like WordNet, organize conceptual knowledge into a hierarchy of classes. An independent ontology, coupled with rich instance-level knowledge, is provided by Wikidata \cite{vrandevcic2014wikidata}, a structured counterpart to Wikipedia. FrameNet, on the other hand, defines an orthogonal structure of frames and roles; each of which can be filled with a WordNet/Wikidata class or instance. Sources like ConceptNet or WebChild \cite{tandon2017webchild} provide more `episodic' commonsense knowledge, whereas ATOMIC captures pre- and post-situations for an event. Image description datasets, like Visual Genome \cite{krishna2017visual}, contain visual commonsense knowledge.
While links between these sources exist (mostly through WordNet synsets), the majority of their nodes and edges are disjoint.
In this paper, we propose an approach for integrating these (and more sources) into a single Common Sense Knowledge Graph (CSKG). We survey existing sources of commonsense knowledge to understand their particularities and we summarize the key challenges on the road to their integration (section \ref{sec:integration}).
Next, we devise five principles and a representation model for a consolidated CSKG (section \ref{sec:approach}). We apply our approach to build the first version of CSKG, by combining seven complementary, yet disjoint, sources. We compute several graph and text embeddings to facilitate reasoning over the graph. In section \ref{sec:analysis}, we analyze the content of the graph and the generated embeddings. We provide insights into the utility of CSKG for downstream reasoning on commonsense Question Answering (QA) tasks in section \ref{sec:downstream}. In section \ref{sec:discussion} we reflect on the learned lessons and list the next steps for CSKG. We conclude in section \ref{sec:conclusions}.
\section{Problem statement}
\label{sec:integration}
\subsection{Sources of Common Sense Knowledge}
Table \ref{tab:survey} summarizes the content, creation method, size, external mappings, and example resources for representative public commonsense sources: ConceptNet~\cite{speer2017conceptnet}, WebChild~\cite{tandon2017webchild}, ATOMIC~\cite{sap2019atomic}, Wikidata~\cite{vrandevcic2014wikidata}, WordNet~\cite{miller1995wordnet}, Roget~\cite{kipfer2005roget}, VerbNet~\cite{schuler2005verbnet}, FrameNet~\cite{baker1998berkeley},
Visual Genome~\cite{krishna2017visual}, and ImageNet~\cite{deng2009imagenet}.
Primarily, we observe that the commonsense knowledge is spread over a number of sources with different focus: commonsense knowledge graphs (e.g., ConceptNet), general-domain knowledge graphs (e.g., Wikidata), lexical resources (e.g., WordNet, FrameNet), taxonomies (e.g., Wikidata, WordNet), and visual datasets (e.g., Visual Genome)~\cite{ilievskiinprep}. Therefore, these sources together cover a rich spectrum of knowledge, ranging from everyday knowledge, through event-centric knowledge and taxonomies, to visual knowledge. While the taxonomies have been created manually by experts, most of the commonsense and visual sources have been created by crowdsourcing or curated automatic extraction.
Commonsense and common knowledge graphs (KGs) tend to be relatively large, with millions of nodes and edges; whereas the taxonomies and the lexical sources are notably smaller. Despite the diverse nature of these sources, we note that many contain mappings to WordNet, as well as a number of other sources. These mappings might be incomplete, e.g., only a small portion of ATOMIC can be mapped to ConceptNet. Nevertheless, these high-quality mappings provide an opening for consolidation of commonsense knowledge, a goal we pursue in this paper.
\noindent \subsection{Challenges} Combining these sources in a single KG faces three key challenges:
\noindent 1. The sources follow \textbf{different knowledge modeling approaches}. One such difference concerns the relation set: there are very few relations in ConceptNet and WordNet, but (tens of) thousands of them in Wikidata and Visual Genome. Consolidation requires a global decision on how to model the relations. The granularity of knowledge is another factor of variance. While regular RDF triples fit some sources (e.g., ConceptNet), representing entire frames (e.g., in FrameNet), event conditions (e.g., in ATOMIC), or compositional image data (e.g., Visual Genome) might benefit from a more open format. An ideal representation would support the entire granularity spectrum.
\noindent 2. As a number of these sources have been created to support natural language applications, they often contain \textbf{imprecise descriptions}. Natural language phrases are often the main node types in the provided knowledge sources, which provides the benefit of easier access for natural language algorithms, but it introduces ambiguity which might be undesired from a formal semantics perspective. An ideal representation would harmonize various phrasings of a concept, while retaining easy and efficient linguistic access to these concepts via their labels.
\noindent 3. Although these sources contain links to existing ones, we observe \textbf{sparse overlap}. As these external links are typically to WordNet, and vary in terms of their version (3.0 or 3.1) or target (lemma or synset),
the sources are still disjoint and establishing (identity) connections is difficult. Bridging these gaps, through optimally leveraging existing links, or extending them with additional ones automatically, is a modeling and integration challenge.
\subsection{Prior consolidation efforts}
Prior efforts that combine pairs or small sets of (mostly lexical) commonsense sources exist. A unidirectional manual mapping from VerbNet classes to WordNet and FrameNet is provided by the Unified Verb Index~\cite{trumbo2006increasing}. The Predicate Matrix~\cite{de2016predicate} has a full automatic mapping between lexical resources, including FrameNet, WordNet, and VerbNet. PreMOn~\cite{corcoglioniti2016premon} formalizes these in RDF. In \cite{mccrae2018mapping}, the authors produce partial mappings between WordNet and Wikipedia/DBpedia. Zareian et al.~\cite{zareian2020bridging} combine edges from Visual Genome, WordNet, and ConceptNet to improve scene graph generation from an image.
None of these efforts aspires to build a consolidated KG of commonsense knowledge.
Most similar to our effort, BabelNet~\cite{navigli2012babelnet} integrates many sources, covers a wide range of 284 languages, and primarily focuses on lexical and general-purpose resources, like WordNet, VerbNet, and Wiktionary. While we share the goal of integrating valuable sources for downstream reasoning, and some of these sources (e.g., WordNet) overlap with BabelNet, our ambition is to support commonsense reasoning applications. For this reason, we focus on commonsense knowledge graphs, like ConceptNet and ATOMIC, or even visual sources, like Visual Genome, none of which are found in BabelNet.
\section{The Common Sense Knowledge Graph}
\label{sec:cskg}
\label{sec:approach}
\subsection{Principles}
Question answering and natural language inference tasks require knowledge from heterogeneous sources (section \ref{sec:integration}). To enable their joint usage, the sources need to be harmonized in a way that will allow straightforward access by linguistic tools~\cite{ma2019towards,lin2019kagnet}, easy splitting into arbitrary subsets, and computation of common operations, like (graph and word) embeddings or KG paths.
For this purpose, we devise five principles for consolidation of sources into a single commonsense KG (CSKG), driven by pragmatic goals of simplicity, modularity, and utility:
\noindent \textbf{P1. Embrace heterogeneity of nodes}
One should preserve the natural node diversity inherent to the variety of sources considered, which entails
blurring the distinction between objects (such as those in Visual Genome or Wikidata), classes (such as those in WordNet or ConceptNet), words (in Roget), actions (in ATOMIC or ConceptNet), frames (in FrameNet), and states (as in ATOMIC). It also allows formal nodes, describing unique objects, to co-exist with fuzzy nodes describing ambiguous lexical expressions.
\noindent \textbf{P2. Reuse edge types across resources} To support reasoning algorithms like KagNet~\cite{lin2019kagnet}, the set of edge types should be kept to minimum and reused across resources wherever possible.
For instance, the ConceptNet edge type \texttt{/r/LocatedNear} could be reused to express spatial proximity in Visual Genome.
\noindent \textbf{P3. Leverage external links} The individual graphs are mostly disjoint according to their formal knowledge. However, high-quality links may exist or may be easily inferred, in order to connect these KGs and enable path finding. For instance, while ConceptNet and Visual Genome do not have direct connections, they can be partially aligned, as both have links to WordNet synsets.
\noindent \textbf{P4. Generate high-quality probabilistic links} Inclusion of additional probabilistic links, either with off-the-shelf link prediction algorithms or with specialized algorithms (e.g., see section \ref{ssec:consolidation}), would improve the connectedness of CSKG and help path finding algorithms reason over it.
Given the heterogeneity of nodes (cf. P1), a `one-method-fits-all' node resolution might not be suitable.
\noindent \textbf{P5. Enable access to labels} The CSKG format should support easy and efficient natural language access. Labels and aliases associated with KG nodes provide application-friendly and human-readable access to the CSKG, and can help us unify descriptions of the same/similar concept across sources.
\subsection{Representation}
We model CSKG as a \textbf{hyper-relational graph}, describing edges in a tabular KGTK~\cite{ilievski2020kgtk} format.
We opted for this representation rather than the traditional RDF/OWL2 because it allows us to fulfill our goals (of simplicity and utility) and follow our principles more directly, without compromising on the format. For instance, natural language access (principle P5) to RDF/OWL2 nodes requires graph traversal over its \texttt{rdfs:label} relations. Including both reliable and probabilistic nodes (P3 and P4) would require a mechanism to easily indicate edge weights, which in RDF/OWL2 entails inclusion of blank nodes, and a number of additional edges. Moreover, the simplicity of our tabular format allows us to use standard off-the-shelf functionalities and mature tooling, like the \texttt{pandas}\footnote{\url{https://pandas.pydata.org/}} and \texttt{graph-tool}\footnote{\url{https://graph-tool.skewed.de/}}
libraries in Python, or graph embedding tools like \cite{lerer2019pytorch}, which have been conveniently wrapped by the KGTK~\cite{ilievski2020kgtk} toolkit.\footnote{CSKG can be transformed to RDF with \texttt{kgtk generate-wikidata-triples}.}
The edges in CSKG are described by ten columns. Following KGTK, the primary information about an edge consists of its \texttt{id}, \texttt{node1}, \texttt{relation}, and \texttt{node2}. Next, we include four ``lifted'' edge columns, using KGTK's abbreviated way of representing triples about the primary elements, such as \texttt{node1;label} or \texttt{relation;label} (label of \texttt{node1} and of \texttt{relation}). Each edge is completed by two qualifiers: \texttt{source}, which specifies the source(s) of the edge (e.g., ``CN'' for ConceptNet), and \texttt{sentence}, containing the linguistic lexicalization of a triple, if given by the original source. Auxiliary KGTK files can be added to describe additional knowledge about some edges, such as their weight, through the corresponding edge \texttt{id}s. We provide further documentation at: \url{https://cskg.readthedocs.io/}.
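To make the tabular layout concrete, the following sketch serializes one hypothetical ConceptNet-style edge into the ten-column KGTK format described above. The edge values are illustrative, and the names of the two lifted columns not spelled out in the text (\texttt{node2;label} and \texttt{relation;dimension}) are assumptions.

```python
import csv
import io

# Ten edge columns: four primary, four "lifted", two qualifiers.
# "node2;label" and "relation;dimension" are assumed names for the two
# lifted columns not named explicitly in the text.
COLUMNS = ["id", "node1", "relation", "node2",
           "node1;label", "node2;label", "relation;label",
           "relation;dimension", "source", "sentence"]

# A hypothetical ConceptNet-style edge in this layout.
EDGE = ["/c/en/piano-/r/AtLocation-/c/en/concert_hall",
        "/c/en/piano", "/r/AtLocation", "/c/en/concert_hall",
        "piano", "concert hall", "at location", "spatial",
        "CN", "[[a piano]] is typically found in [[a concert hall]]"]

def to_kgtk_tsv(columns, edges):
    """Serialize edges to the tab-separated KGTK layout."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
    writer.writerow(columns)
    writer.writerows(edges)
    return buf.getvalue()

tsv = to_kgtk_tsv(COLUMNS, [EDGE])
header, row = [line.split("\t") for line in tsv.strip().split("\n")]
print(len(header), row[2])  # -> 10 /r/AtLocation
```

Keeping edges in a flat TSV like this is what allows off-the-shelf tools such as \texttt{pandas} to load the entire graph without a triple-store.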
\subsection{Consolidation}
\label{ssec:consolidation}
Currently, CSKG integrates seven sources, selected based on their popularity in existing QA work: a commonsense knowledge graph ConceptNet, a visual commonsense source Visual Genome, a procedural source ATOMIC, a general-domain source Wikidata, and three lexical sources, WordNet, Roget, and FrameNet.
Here, we briefly present our design decisions per source, the mappings that facilitate their integration, and further refinements on CSKG.
\subsubsection{Individual sources}
We keep the original edges of \textbf{ConceptNet} 5.7 expressed with 47 relations in total. We also include the entire \textbf{ATOMIC} KG, preserving the original nodes and its nine relations. To enhance lexical matching between ATOMIC and other sources, we add normalized labels of its nodes, e.g., adding a second label ``accepts invitation'' to the original one ``personX accepts personY's invitation''.
We import four node types from \textbf{FrameNet}: frames, frame elements (FEs), lexical units (LUs), and semantic types (STs), and we reuse 5 categories of FrameNet edges: frame-frame (13 edge types), frame-FE (1 edge type), frame-LU (1 edge type), FE-ST (1 edge type), and ST-ST (3 edge types). Following principle P2 on edge type reuse, we map these 19 edge types to 9 relations in ConceptNet, e.g., \texttt{is\_causative\_of} is converted to \texttt{/r/Causes}. \textbf{Roget} We include all synonyms and antonyms between words in Roget, by reusing the ConceptNet relations \texttt{/r/Synonym} and \texttt{/r/Antonym} (P2).
We represent \textbf{Visual Genome} as a KG, by representing its image objects as WordNet synsets (e.g., \texttt{wn:shoe.n.01}). We express relationships between objects via ConceptNet's \texttt{/r/LocatedNear} edge type. Object attributes are represented by different edge types, conditioned on their part-of-speech: we reuse ConceptNet's \texttt{/r/CapableOf} for verbs, while we introduce a new relation \texttt{mw:MayHaveProperty} for adjective attributes.
We include the \textit{Wikidata-CS} subset of \textbf{Wikidata}, extracted in~\cite{ilievski2020commonsense}. Its 101k statements have been manually mapped to 15 ConceptNet relations.
We include four relations from \textbf{WordNet} v3.0 by mapping them to three ConceptNet relations: hypernymy (using \texttt{/r/IsA}), part and member holonymy (through \texttt{/r/PartOf}), and substance meronymy (with \texttt{/r/MadeOf}).
\subsubsection{Mappings}
We perform node resolution by applying existing identity mappings (P3) and generating probabilistic mappings automatically (P4). We introduce a dedicated relation, \texttt{mw:SameAs}, to indicate identity between two nodes.
\noindent \textbf{WordNet-WordNet} The WordNet v3.1 identifiers in ConceptNet and the WordNet v3.0 synsets from Visual Genome are aligned by leveraging ILI: the WordNet InterLingual Index,\footnote{\url{https://github.com/globalwordnet/ili}} which generates 117,097 \texttt{mw:SameAs} mappings.
\noindent \textbf{WordNet-Wikidata} We generate links between WordNet synsets and Wikidata nodes as follows.
For each synset, we retrieve 50 candidate nodes from a customized index of Wikidata.
Then, we compute sentence embeddings of the descriptions of the synset and each of the Wikidata candidates by using a pre-trained XLNet model \cite{yang2019xlnet}.
We create a \texttt{mw:SameAs} edge between the synset and the Wikidata candidate with highest cosine similarity of their embeddings.
Each mapping is validated by one student. In total, 17 students took part in this validation. Out of the 112k edges produced by the algorithm, the manual validation marked 57,145 as correct. We keep these in CSKG and discard the rest.
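The candidate-selection step above can be sketched as a cosine-similarity argmax over description embeddings. The vectors below are toy 3-d stand-ins for the XLNet sentence embeddings, and the Wikidata Qnode identifiers are only illustrative.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_match(synset_vec, candidates):
    """candidates: list of (qnode, embedding). Returns the Qnode whose
    description embedding is most similar to the synset's, mirroring
    the mw:SameAs selection step described in the text."""
    return max(candidates, key=lambda c: cosine(synset_vec, c[1]))[0]

# Toy embeddings standing in for XLNet encodings of the descriptions.
synset = [0.9, 0.1, 0.0]               # e.g., wn:piano.n.01
cands = [("Q5994", [0.8, 0.2, 0.1]),   # hypothetical candidate: piano
         ("Q584205", [0.0, 0.1, 0.9])] # hypothetical distractor
print(best_match(synset, cands))  # -> Q5994
```

In the real pipeline this argmax runs over the 50 candidates retrieved from the Wikidata index, and the winning pair is only kept after the manual validation step.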
\noindent \textbf{FrameNet-ConceptNet} We link FrameNet nodes to ConceptNet in two ways. FrameNet LUs are mapped to ConceptNet nodes through the Predicate Matrix~\cite{de2016predicate} with $3,016$ \texttt{mw:SameAs} edges.
Then, we use 200k hand-labeled sentences from the FrameNet corpus, each annotated with a target frame, a set of FEs, and their associated words. We treat these words as LUs of the corresponding FE, and ground them to ConceptNet with the rule-based method of \cite{lin2019kagnet}.
\noindent \textbf{Lexical matching} We establish 74,259 \texttt{mw:SameAs} links between nodes in ATOMIC, ConceptNet, and Roget by exact lexical match of their labels. We restrict this matching to lexical nodes (e.g., \texttt{/c/en/cat} and not \texttt{/c/en/cat/n/wn/animal}).
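A minimal sketch of this exact-match step follows. The \texttt{is\_lexical} test is a heuristic assumed here (plain \texttt{/c/en/...} nodes have no sense suffix); the node identifiers are illustrative.

```python
def is_lexical(node):
    """Heuristic: plain lexical ConceptNet nodes like /c/en/cat have
    exactly three slashes; sense-tagged nodes like
    /c/en/cat/n/wn/animal have more and are excluded."""
    return not (node.startswith("/c/") and node.count("/") > 3)

def lexical_same_as(labels_a, labels_b):
    """Emit mw:SameAs edges between nodes of two sources whose labels
    match exactly. labels_*: dict mapping node id -> label."""
    by_label = {}
    for node, label in labels_a.items():
        if is_lexical(node):
            by_label.setdefault(label, []).append(node)
    edges = []
    for node, label in labels_b.items():
        for other in by_label.get(label, []):
            edges.append((other, "mw:SameAs", node))
    return edges

edges = lexical_same_as(
    {"/c/en/cat": "cat", "/c/en/cat/n/wn/animal": "cat"},
    {"rg:cat": "cat"})
print(edges)  # -> [('/c/en/cat', 'mw:SameAs', 'rg:cat')]
```

Note how the sense-tagged node is skipped: matching it by surface form alone would conflate distinct senses.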
\subsubsection{Refinement}
We consolidate the seven sources and their interlinks as follows. After transforming them to the representation described in the past two sections, we concatenate them in a single graph. We deduplicate this graph and append all mappings, resulting in \texttt{CSKG*}. Finally, we apply the mappings to merge identical nodes (connected with \texttt{mw:SameAs}) and perform a final deduplication of the edges, resulting in our consolidated CSKG graph. The entire procedure of importing the individual sources and consolidating them into CSKG is implemented with KGTK operations~\cite{ilievski2020kgtk}, and can be found on our GitHub.\footnote{\url{https://github.com/usc-isi-i2/cskg/blob/master/consolidation/create\_cskg.sh}}
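The merge-and-deduplicate step can be sketched with a union-find over the \texttt{mw:SameAs} links, rewriting every edge to its cluster representative. This is only an illustration of the idea; the actual pipeline is implemented with KGTK operations.

```python
def merge_same_as(edges):
    """Cluster nodes connected by mw:SameAs with union-find, rewrite
    all other edges to cluster representatives, and deduplicate."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for n1, rel, n2 in edges:
        if rel == "mw:SameAs":
            union(n1, n2)
    merged = {(find(n1), rel, find(n2))
              for n1, rel, n2 in edges if rel != "mw:SameAs"}
    return sorted(merged)

edges = [("/c/en/piano", "mw:SameAs", "wn:piano.n.01"),
         ("wn:piano.n.01", "/r/IsA", "wn:instrument.n.01"),
         ("/c/en/piano", "/r/IsA", "wn:instrument.n.01")]
print(merge_same_as(edges))
# -> [('wn:piano.n.01', '/r/IsA', 'wn:instrument.n.01')]
```

The two \texttt{/r/IsA} edges collapse into one after the merge, which is exactly the deduplication effect that distinguishes CSKG from \texttt{CSKG*}.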
\subsection{Embeddings}
\label{ssec:embeddings}
Embeddings provide a convenient entry point to KGs and enable reasoning on both intrinsic and downstream tasks. For instance, many reasoning applications (cf.~\cite{ma2019towards,lin2019kagnet}) of ConceptNet leverage their NumberBatch embeddings~\cite{speer2017conceptnet}.
Motivated by these observations, we aspire to produce high-quality embeddings of the CSKG graph. We experiment with two families of embedding algorithms. On the one hand, we produce variants of popular graph embeddings: TransE~\cite{bordes2013translating}, DistMult~\cite{yang2014embedding}, ComplEx~\cite{trouillon2016complex}, and RESCAL~\cite{nickel2011three}. On the other hand, we produce various text (Transformer-based) embeddings based on BERT-large~\cite{devlin2018bert}. For BERT, we first create a sentence for each node, based on a template that encompasses its neighborhood, which is then encoded with BERT's sentence transformer model. All embeddings are computed with the KGTK operations \texttt{graph-embeddings} and \texttt{text-embeddings}. We analyze them in section~\ref{ssec:embedding_eval}.
The CSKG embeddings are publicly available at \url{http://shorturl.at/pAGX8}.
\section{Analysis}
\label{sec:analysis}
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{images/snippet.png}
\caption{Snippet of CSKG for the example task of section~\ref{sec:intro}. CSKG combines: 1) lexical nodes (piano, keys, music; in blue), 2) synsets like piano (artifact), seat (dramaturgy) (in green), and 3) frames (\texttt{fn:noise\_makers}) and frame elements (\texttt{fn:fe:use}) (in purple). The link between \texttt{piano} and \texttt{piano (artifact)} is missing, but trivial to infer.}
\label{fig:snippet}
\end{figure}
Figure \ref{fig:snippet} shows a snippet of CSKG that corresponds to the task in section~\ref{sec:intro}. Following P1, CSKG combines: 1) lexical nodes (piano, keys, music), 2) synsets like piano (artifact), seat (dramaturgy), and 3) frames (\texttt{fn:noise\_makers}) and frame elements (\texttt{fn:fe:use}). According to P2, we reuse edge types where applicable: for instance, we use ConceptNet's \texttt{LocatedNear} relation to formalize Visual Genome's proximity information between a woman and a piano. We leverage external links to WordNet to consolidate synsets across sources (P3). We generate further links (P4) to connect FrameNet frames and frame elements to ConceptNet nodes, and to consolidate the representation of \texttt{piano (artifact)} between Wikidata and WordNet. In the remainder of this section, we perform qualitative analysis of CSKG and its embeddings.
\subsection{Statistics}
\textbf{Basic statistics} of CSKG are shown in Table \ref{tab:statistics}.
In total, our mappings produce 251,517 \texttt{mw:SameAs} links and 45,659 \texttt{fn:HasLexicalUnit} links.
After refinement, i.e., removal of the duplicates and merging of the identical nodes, CSKG consists of 2.2 million nodes and 6 million edges.
In terms of edges, its largest subgraph is ConceptNet (3.4 million), whereas ATOMIC comes second with 733 thousand edges. These two graphs also contribute the largest number of nodes to CSKG. The three most common relations in CSKG are: \texttt{/r/RelatedTo} (1.7 million), \texttt{/r/Synonym} (1.2 million), and \texttt{/r/Antonym} (401 thousand edges).
\begin{table}[!t]
\centering
{\footnotesize
\caption{CSKG statistics. Abbreviations: CN=ConceptNet, VG=Visual Genome, WN=WordNet, RG=Roget, WD=Wikidata, FN=FrameNet, AT=ATOMIC. Relation numbers in brackets are before consolidating to ConceptNet.}
\label{tab:statistics}
\begin{tabular} {l c c c c c c c c c}
\toprule
& \bf AT & \bf CN & \bf FN & \bf RG & \bf VG & \bf WD & \bf WN & \bf CSKG* & \bf CSKG \\
\midrule
\#nodes & 304,909 & 1,787,373 & 15,652 & 71,804 & 11,264 & 91,294 & 71,243 & 2,414,813 & \bf 2,160,968 \\
\#edges & 732,723 & 3,423,004 & 29,873 & 1,403,955 & 2,587,623 & 111,276 & 101,771 & 6,349,731 & \bf 6,001,531 \\
\#relations & 9 & 47 & 9 (23) & 2 & 3 (42k) & 3 & 15 (45) & 59 & \bf 58 \\
avg degree & 4.81 & 3.83 & 3.82 & 39.1 & 459.45 & 2.44 & 2.86 & 5.26 & \bf 5.55\\
std degree & 0.07 & 0.02 & 0.13 & 0.34 & 35.81 & 0.02 & 0.05 & 0.02 & \bf 0.03\\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[!t]
\centering
{\footnotesize
\caption{Nodes with highest centrality score according to PageRank and HITS. Node labels indicated in bold.}
\label{tab:centrality}
\begin{tabular} {l c r}
\toprule
\bf PageRank & \bf HITS hubs & \bf HITS authorities \\
\midrule
/c/en/\textbf{chromatic}/a/wn & /c/en/\textbf{red} & /c/en/\textbf{blue} \\
/c/en/\textbf{organic\_compound} & /c/en/\textbf{yellow} & /c/en/\textbf{red} \\
/c/en/\textbf{chemical\_compound}/n & /c/en/\textbf{green} & /c/en/\textbf{silver} \\
/c/en/\textbf{change}/n/wn/artifact & /c/en/\textbf{silver} & /c/en/\textbf{green} \\
/c/en/\textbf{natural\_science}/n/wn/cognition & /c/en/\textbf{blue} & /c/en/\textbf{gold} \\
\bottomrule
\end{tabular}
}
\end{table}
\textbf{Connectivity and centrality} The mean degree of CSKG grows by 5.5\% (from 5.26 to 5.55) after merging identical nodes. Compared to ConceptNet, its degree is 45\% higher, due to its increased number of edges while keeping the number of nodes nearly constant. The best connected subgraphs are Visual Genome and Roget. CSKG's high connectivity is owed largely to these two sources and our mappings, as the other five sources have degrees below that of CSKG. The abnormally large node degrees and variance of Visual Genome are due to its annotation guidelines that dictate all concept-to-concept information to be annotated, and our modeling choice to represent its nodes through their synsets. We report that the in-degree and out-degree distributions of CSKG have Zipfian shapes, a notable difference being that the maximal in-degree is nearly double the maximal out-degree (11k vs 6.4k). To understand better the central nodes in CSKG, we compute PageRank and HITS metrics. The top-5 results are shown in Table \ref{tab:centrality}. We observe that the node with highest PageRank has label ``chromatic'', while all dominant HITS hubs and authorities are colors, revealing that knowledge on colors of real-world objects is common in CSKG. PageRank also reveals that knowledge on natural and chemical processes is well-represented in CSKG. Finally, we note that the top-centrality nodes are generally described by multiple subgraphs, e.g., \texttt{c/en/natural\_science/n/wn/cognition} is found in ConceptNet and WordNet, whereas the color nodes (e.g., \texttt{/c/en/red}) are shared between Roget and ConceptNet.
\subsection{Analysis of the CSKG embeddings}
\label{ssec:embedding_eval}
\begin{table}[!t]
\centering
{
\footnotesize
\caption{Top-5 most similar nodes for \texttt{/c/en/turtle/n/wn/animal} (E1) and \texttt{/c/en/happy} (E2) according to TransE and BERT.}
\label{tab:embedding}
\begin{tabular} {l l r}
\toprule
& \bf TransE & \bf BERT \\
\midrule
E1 & /c/en/chelonian/n/wn/animal & /c/en/glyptemys/n \\
& /c/en/mud\_turtle/n/wn/animal & /c/en/pelocomastes/n \\
& /c/en/cooter/n/wn/animal & /c/en/staurotypus/n \\
& /c/en/common\_snapping\_turtle/n/wn/animal & /c/en/parahydraspis/n \\
& /c/en/sea\_turtle/n/wn/animal & /c/en/trachemys/n \\ \midrule
E2 & /c/en/excited & /c/en/bring\_happiness \\
& /c/en/satisfied & /c/en/new\_happiness \\
& /c/en/smile\_mood & at:like\_a\_party\_is\_a\_good\_way\_to\_... \\
& /c/en/pleased & /c/en/encouraging\_person's\_talent \\
& /c/en/joyful & at:happy\_that\_they\_went\_to\_the\_party \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}[!t]
\hfill
\subfigure{\includegraphics[width=0.49\textwidth]{images/graph_emb2D.png}}
\hfill
\subfigure{\includegraphics[width=0.49\textwidth]{images/text_emb2D.png}}
\hfill
\caption{UMAP visualization of 5,000 randomly sampled nodes from CSKG, represented by TransE (left) and BERT (right) embeddings. Colors signify node sources.}
\label{fig:embeddings}
\end{figure}
We randomly sample 5,000 nodes from CSKG and visualize their embeddings computed with an algorithm from each family: TransE and BERT. The results are shown in Figure~\ref{fig:embeddings}. We observe that graph embeddings group nodes from the same source together. This is because graph embeddings tend to focus on the graph structure, and because most links in CSKG are still within sources. We observe that the sources are more intertwined in the case of the BERT embeddings, because of the emphasis on lexical over structural similarity.
Moreover, in both plots Roget is dispersed around the ConceptNet nodes, which is likely due to its broad coverage of concepts, that maps both structurally and lexically to ConceptNet. At the same time, while ATOMIC overlaps with a subset of ConceptNet~\cite{sap2019atomic}, the two sources mostly cover different areas of the space.
Table~\ref{tab:embedding} shows the top-5 most similar neighbors for \texttt{/c/en/turtle/n/wn/animal} and \texttt{/c/en/happy} according to TransE and BERT. We note that while graph embeddings favor nodes that are structurally similar (e.g., \texttt{/c/en/turtle/n/wn/animal} and \texttt{/c/en/chelonian/n/wn/animal} are both animals in WordNet), text embeddings give much higher importance to lexical similarity of nodes or their neighbors, even when the nodes are disconnected in CSKG (e.g., \texttt{/c/en/happy} and \texttt{at:happy\_that\_they\_went\_to\_the\_party}). These results are expected considering the approach behind each algorithm.
\textbf{Word association with embeddings} To quantify the utility of different embeddings, we evaluate them on the \textit{USF-FAN}~\cite{nelson2004university} benchmark, which contains crowdsourced common sense associations for 5,019 ``stimulus'' concepts in English. For instance, the associations provided for \texttt{day} are: \texttt{night}, \texttt{light}, \texttt{sun}, \texttt{time}, \texttt{week}, and \texttt{break}. The associations are ordered descendingly based on their frequency. With each algorithm, we produce a top-K most similar neighbors list based on the embedding of the stimulus concept. Here, $K$ is the number of associations for a concept, which varies across stimuli. If CSKG has multiple nodes for the stimulus label, we average their embeddings. For the graph embeddings, we use a logistic loss function, a dot comparator, a learning rate of 0.1, and dimension 100. The BERT text embeddings have dimension 1024, which is the native dimension of this language model. As the text embedding models often favor surface form similarity (e.g., associations like \texttt{daily} for \texttt{day}), we devise variants of this method that exclude associations with Levenshtein similarity higher than a threshold $t$.
We evaluate by comparing the embedding-based list to the benchmark one, through customary ranking metrics, like Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG). Our investigations show that TransE is the best-performing algorithm overall, with MAP of 0.207 and NDCG of 0.530. The optimal BERT variant uses threshold of $t=0.9$, scoring with MAP of 0.209 and NDCG of 0.268. The obtained MAP scores indicate that the embeddings capture relevant signals, yet, a principled solution to USF-FAN requires a more sophisticated embedding search method that can capture various forms of both relatedness and similarity. In the future, we aim to investigate embedding techniques that integrate structural and content information like RDF2Vec~\cite{ristoski2016rdf2vec}, and evaluate on popular word similarity datasets like WordSim-353~\cite{finkelstein2001placing}.
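The Levenshtein-filtered top-$K$ retrieval can be sketched as follows. The similarity scores are made-up stand-ins for embedding similarities, and the demo uses a lower threshold than the paper's $t=0.9$ so the filtering effect is visible on short words.

```python
def levenshtein_sim(a, b):
    """Normalized Levenshtein similarity in [0, 1]."""
    m, n = len(a), len(b)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,          # deletion
                         cur[j - 1] + 1,       # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # subst.
        prev = cur
    return 1 - prev[n] / max(m, n)

def top_k(stimulus, scored, k, t=0.9):
    """Top-k candidates by embedding score, dropping near-duplicate
    surface forms (Levenshtein similarity > t), as in the BERT
    variants described above. scored: list of (word, score)."""
    kept = [(w, s) for w, s in scored if levenshtein_sim(stimulus, w) <= t]
    kept.sort(key=lambda ws: -ws[1])
    return [w for w, _ in kept[:k]]

# Hypothetical embedding similarities for the stimulus "day".
scored = [("daily", 0.95), ("night", 0.90), ("sun", 0.85), ("light", 0.80)]
print(top_k("day", scored, k=3, t=0.5))  # -> ['night', 'sun', 'light']
```

With the filter, the surface-form near-duplicate \texttt{daily} is dropped even though it has the highest raw similarity, matching the motivation given above.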
\section{Applications}
\label{sec:downstream}
\begin{table}[!t]
\centering
{
\caption{Number of triples retrieved with ConceptNet and CSKG on different datasets.}
\begin{tabular} {l | c c c | c c c}
\toprule
& \multicolumn{3}{c}{\emph{train}} & \multicolumn{3}{c}{\emph{dev}} \\
& \bf \#Questions & \bf ConceptNet & \bf CSKG & \bf \#Questions & \bf ConceptNet & \bf CSKG \\
\midrule
CSQA & 9,741 & 78,729 & 125,552 & 1,221 & 9,758 & 15,662 \\
SIQA & 33,410 & 126,596 & 266,937 & 1,954 & 7,850 & 16,149 \\
PIQA & 16,113 & 18,549 & 59,684 & 1,838 & 2,170 & 6,840 \\
aNLI & 169,654 & 257,163 & 638,841 & 1,532 & 5,603 & 13,582 \\
\bottomrule
\end{tabular}
}
\label{tab:downstream}
\end{table}
As the creation of CSKG is largely driven by downstream reasoning needs, we now investigate its relevance for commonsense question answering: 1) we measure its ability to contribute novel evidence to support reasoning, and 2) we measure its role in pre-training language models for zero-shot downstream reasoning.
\subsection{Retrieving evidence from CSKG}
We measure the relevance of CSKG for commonsense question answering tasks, by comparing the number of retrieved triples that connect keywords in the question and in the answers. For this purpose, we adapt the lexical grounding in HyKAS~\cite{ma2019towards} to retrieve triples from CSKG instead of its default knowledge source, ConceptNet. We expect that CSKG can provide much more evidence than ConceptNet, both in terms of number of triples and their diversity. We experiment with four commonsense datasets: CommonSense QA (CSQA)~\cite{talmor2018commonsenseqa}, Social IQA (SIQA)~\cite{sap2019socialiqa}, Physical IQA (PIQA)~\cite{bisk2019piqa}, and abductive NLI (aNLI)~\cite{bhagavatula2019abductive}. As shown in Table \ref{tab:downstream}, CSKG significantly increases the number of evidence triples that connect terms in questions with terms in answers, in comparison to ConceptNet. We note that the increase is on average 2-3 times, the expected exception being CSQA, which was inferred from ConceptNet.
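The triple-retrieval idea can be sketched as a lexical match between node labels and the question/answer tokens. This is a rough illustration with hypothetical data; the actual HyKAS grounding uses a more careful concept extraction.

```python
def retrieve_evidence(question, answer, triples, labels):
    """Return triples whose node1 label tokens all appear in the
    question and whose node2 label tokens all appear in the answer."""
    q = set(question.lower().split())
    a = set(answer.lower().split())
    hits = []
    for n1, rel, n2 in triples:
        if set(labels[n1].split()) <= q and set(labels[n2].split()) <= a:
            hits.append((n1, rel, n2))
    return hits

# Toy CSKG fragment for the lizard example discussed below.
triples = [("/c/en/lizard", "/r/AtLocation", "/c/en/tropical_rainforest"),
           ("/c/en/water", "mw:MayHaveProperty", "/c/en/tropical")]
labels = {"/c/en/lizard": "lizard", "/c/en/water": "water",
          "/c/en/tropical_rainforest": "tropical rainforest",
          "/c/en/tropical": "tropical"}
q = "bob the lizard lives in a warm place with lots of water"
ans = "tropical rainforest"
print(retrieve_evidence(q, ans, triples, labels))  # both triples match
```

Both triples connect question terms to answer terms, so the consolidated graph surfaces more supporting evidence than any single source would.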
We inspect a sample of questions to gain insight into whether the additional triples are relevant and could benefit reasoning. For instance, let us consider the CSQA question ``Bob the lizard lives in a warm place with lots of water. Where does he probably live?'', whose correct answer is ``tropical rainforest''. In addition to the ConceptNet triple \texttt{/c/en/lizard /c/en/AtLocation /c/en/tropical\_rainforest}, CSKG provides two additional triples, stating that tropical is an instance of place and that water may have property tropical.
The first additional edge stems from our mappings from FrameNet to ConceptNet, whereas the second comes from Visual Genome.
We note that, while CSKG increases the coverage with respect to available commonsense knowledge, it is also incomplete: in the above example, useful information such as warm temperatures being typical for tropical rainforests is still absent.
\subsection{Pre-training language models with CSKG}
\begin{table}[t]
\begin{center}
\caption{
Zero-shot evaluation results with different combinations of models and knowledge sources, across five commonsense tasks, as reported in \cite{ma2020knowledge}. \texttt{CWWV} combines ConceptNet, Wikidata, WordNet, and Visual Genome. \texttt{CSKG} is a union of \texttt{ATOMIC} and \texttt{CWWV}. We report mean accuracy over three runs, with 95\% confidence interval.}
\label{tab:aaai}
\begin{tabular}{@{}l@{\hspace{5pt}}c@{\hspace{7pt}}c@{\hspace{7pt}}c@{\hspace{7pt}}c@{\hspace{7pt}}c@{\hspace{7pt}}c@{}}
\toprule
\bf Model & \bf KG & \bf aNLI & \bf CSQA & \bf PIQA & \bf SIQA & \bf WG \\ \midrule
GPT2-L & \texttt{ATOMIC} & $59.2(\pm 0.3)$ & $48.0(\pm 0.9)$ & $67.5(\pm 0.7)$ & $53.5(\pm 0.4)$ & $54.7(\pm 0.6)$\\
GPT2-L & \texttt{CWWV} & $58.3(\pm 0.4)$ & $46.2(\pm 1.0)$ & $68.6(\pm 0.7)$ & $48.0(\pm 0.7)$ & $52.8(\pm 0.9)$\\
GPT2-L & \texttt{CSKG} & $59.0(\pm 0.5)$ & $48.6(\pm 1.0)$ & $68.6(\pm 0.9)$ & $53.3(\pm 0.5)$ & $54.1(\pm 0.5)$\\
RoBERTa-L & \texttt{ATOMIC} & $\bf 70.8(\pm 1.2)$ & $64.2(\pm 0.7)$ & $72.1(\pm 0.5)$ & $63.1(\pm 1.5)$ & $59.6(\pm 0.3)$\\
RoBERTa-L & \texttt{CWWV} & $70.0(\pm 0.3)$ & $\bf 67.9(\pm 0.8)$ & $72.0(\pm 0.7)$ & $54.8(\pm 1.2)$ & $59.4(\pm 0.5)$ \\
RoBERTa-L & \texttt{CSKG} & $70.5(\pm 0.2)$ & $67.4(\pm 0.8)$ & $\bf 72.4(\pm 0.4)$ & $\bf 63.2(\pm 0.7)$ & $\bf 60.9(\pm 0.8)$ \\
\midrule
\it Human & - & 91.4 & 88.9 & 94.9 & 86.9 & 94.1 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
We have studied the role of various subsets of CSKG for downstream QA reasoning extensively in \cite{ma2020knowledge}. Here, CSKG or its subsets were transformed into artificial commonsense question answering tasks. These tasks were then used instead of training data to pre-train language models, like RoBERTa and GPT-2. Such a CSKG-based pre-trained language model was then `frozen' and evaluated in a zero-shot manner across a wide variety of commonsense tasks, ranging from question answering through pronoun resolution and natural language inference.
We select key results from these experiments in Table \ref{tab:aaai}. The results demonstrate that no single knowledge source suffices for all benchmarks and that using CSKG is overall beneficial compared to using its subgraphs, thus directly showing the benefit of commonsense knowledge consolidation. In a follow-up study~\cite{ilievskiinprep}, we further exploit the consolidation in CSKG to pre-train the language models with one dimension (knowledge type) at a time, noting that certain dimensions of knowledge (e.g., temporal knowledge) are much more useful for reasoning than others, like lexical knowledge. In both cases, the kind of knowledge that benefits each task is ultimately conditioned on the alignment between this knowledge and the targeted task, indicating that subsequent work should further investigate how to dynamically align knowledge with the task at hand.
\section{Discussion}
\label{sec:discussion}
Our analysis in section~\ref{sec:analysis} revealed that the connectivity in CSKG is higher than that of a mere concatenation of the individual sources, due to our mappings across sources and the merging of identical nodes. Its KGTK format allowed us to seamlessly compute and evaluate a series of embeddings, observing that TransE and BERT with additional filtering are the two best-performing and complementary algorithms. The novel evidence brought by CSKG on downstream QA tasks (section \ref{sec:downstream}) is a signal that can be exploited by reasoning systems to enhance their performance and robustness, as shown in~\cite{ma2020knowledge}.
Yet, the quest for a rich, high-coverage CSKG is far from complete. We briefly discuss two key challenges, while a broader discussion can be found in~\cite{ilievskiinprep}.
\textbf{Node resolution} As a large part of CSKG consists of lexical nodes, it suffers from the standard challenges of linguistic ambiguity and variance. For instance, there are 18 nodes in CSKG that have the label `scene', which include WordNet or OpenCyc synsets, Wikidata Qnodes, frame elements, and a lexical node. Variance is another challenge, as \texttt{/c/en/caffeine}, \texttt{/c/en/caffine}, and \texttt{/c/en/the\_active\_ingredient\_caffeine} are all separate nodes in ConceptNet (and in CSKG). We are currently investigating techniques for node resolution applicable to the heterogeneity of commonsense knowledge in CSKG.
\textbf{Semantic enrichment} We have normalized the edge types across sources to a single, ConceptNet-centric, set of 58 relations. In~\cite{ilievskiinprep}, we classify all CSKG's relations into 13 dimensions, enabling us to consolidate the edge types further. At the same time, some of these relations hide fine-grained distinctions; for example, WebChild~\cite{tandon2017webchild} defines 19 specific property relations, including temperature, shape, and color, all of which correspond to ConceptNet's \texttt{/r/HasProperty}. A novel future direction is to produce a hierarchy for each of the relations, and refine existing triples by using a more specific relation (e.g., use the predicate `temperature' instead of `property' when the object of the triple is `cold').
\section{Conclusions and Future Work}
\label{sec:conclusions}
While current commonsense knowledge sources contain complementary knowledge that would be beneficial as a whole for downstream tasks, such usage is prevented by different modeling approaches, foci, and sparsity of available mappings.
Optimizing for simplicity, modularity, and utility, we proposed a hyper-relational graph representation that describes many nodes with a few edge types, maximizes the high-quality links across subgraphs, and enables natural language access. We applied this representation approach to consolidate a commonsense knowledge graph (CSKG) from seven very diverse and disjoint sources: a text-based commonsense knowledge graph ConceptNet, a general-purpose taxonomy Wikidata, an image description dataset Visual Genome, a procedural knowledge source ATOMIC, and three lexical sources: WordNet, Roget, and FrameNet. CSKG describes 2.2 million nodes with 6 million statements. Our analysis showed that CSKG is a well-connected graph and more than `a simple sum of its parts'. Together with CSKG, we also publicly release a series of graph and text embeddings of the CSKG nodes, to facilitate future usage of the graph. Our analysis showed that graph and text embeddings of CSKG have complementary notions of similarity, as the former focus on structural patterns, while the latter on lexical features of the node's label and of its neighborhood. Applying CSKG on downstream commonsense reasoning tasks, like QA, showed an increased recall as well as an advantage when pre-training a language model to reason across datasets in a zero-shot fashion. Key standing challenges for CSKG include semantic consolidation of its nodes and refinement of its property hierarchy.
Notebooks for analyzing these resources can be found on our public GitHub page: \url{https://github.com/usc-isi-i2/cskg/tree/master/ESWC2021}.
\section*{Acknowledgements}
This work is sponsored by the DARPA MCS program under Contract No. N660011924033 with the United States Office Of Naval Research, and by the Air Force Research Laboratory under agreement number FA8750-20-2-10002.
\bibliographystyle{splncs04}
\section{Introduction}
\label{sec:introduction}
In recent years, multiple research fields and industries have become interested in Deep Reinforcement Learning (DRL) frameworks as a way to enhance decision-making processes in the real world and design better autonomous systems. The range of domains is large \cite{Li2017a,Naeem2020}, including---but not limited to---robotics \cite{Polydoros2017}, communications \cite{Luong2019}, drug discovery \cite{Elton2019}, fluid mechanics \cite{GARNIER2021104973}, autonomous vehicles \cite{Talpaert2019}, and recommender systems \cite{Afsar2021}.
Different motivations trigger the interest of domain-specific research in DRL:
\begin{itemize}
\item In some industries, systems are scaling up and gaining new degrees of freedom, which makes them harder to operate, especially in time-sensitive settings. In those cases, DRL becomes a new approach to fast decision-making. An example is the work within the satellite communications community, at a moment when constellations are getting larger and more flexible \cite{Deng2020,Ferreira2019a}.
\item Some systems require control policies that leverage raw signals such as image, sound, or brain activity. DRL and its representation capabilities are thus studied to achieve better performance. Some robotics \cite{Polydoros2017,Ibarz2021} and healthcare applications \cite{Esteva2019,Yu2019} fall in this category.
\item When supervision is costly or not possible, DRL offers a way to learn policies by encoding system goals into the reward function and leveraging exploration during training. This is the case of NP-hard combinatorial optimization problems \cite{Drori2020} or drug design studies \cite{Olivecrona2017,Popova2018}, where DRL is able to provide approximate solutions and good candidate molecules, respectively.
\item DRL is a framework that can account for long-term dependencies in decision-making, which is especially interesting for recommender platforms and other interaction-based systems \cite{Afsar2021,Zhang2019}.
\end{itemize}
Despite these motivations and the extensive research efforts to prove the usefulness of DRL in real-world contexts, the successful deployment in real environments is a test many of the proposed models in the literature still need to pass. This is mostly due to the added complexities of the real world compared to current experimental DRL settings.
From a domain-specific perspective, concrete tasks, problems, and environments in the real world are hard to fully characterize and training in real environments is not always possible or preferred. In addition, the nature of these tasks or problems---regardless of the domain---entails dealing with certain challenges that make learning more difficult: non-stationarity, high-dimensionality, sparse reward, etc.
Reinforcement Learning (RL) researchers have identified a large number of those challenges \cite{dulac2019challenges,Dulac-Arnold2021,Ibarz2021,Zhu2020} and many solutions have been proposed to mitigate each of them. While the results are positive, testing is mostly limited to simulated environments. A key question remains often unanswered: how do the proposed models work in a real-world setting? Without that feedback it is not clear how much we have progressed in the path towards DRL-based autonomy.
To try to shed some light on this issue, in this paper we attempt to summarize and evaluate the progress of real-world-oriented DRL research from the perspective of both domain-agnostic and domain-specific research. We start by reviewing the domain-agnostic challenges of real-world DRL and compiling the solutions proposed in the literature. Based on this review, we identify five gaps that we deem necessary to address moving forward: a bias towards robotics use cases, not enough research on combined challenges, lack of real-world follow-through, little understanding of design tradeoffs, and operation being ignored.
In addition to the focus of the RL community on the problem, we argue domain-specific operators play an important role in adopting the technology. Therefore, we next analyze the same problem---the lack of real-world deployments---from the domain-specific viewpoint and try to align both efforts. We first highlight some examples of success stories, then discuss why other deployments might fail, and finally address ways to move forward.
\section{Challenges of real-world DRL}
\label{sec:challenges}
\begin{figure}[t]
\centering
\includegraphics[width=.99\linewidth]{venn.pdf}
\caption{List of challenges of real-world DRL considered in different studies. Some of the challenges might interact or overlap in specific settings.}
\label{fig:challenges}
\end{figure}
Different studies have tried to summarize the challenges of real-world DRL; these are pictured in Figure \ref{fig:challenges}. The work in \cite{dulac2019challenges,Dulac-Arnold2021} offers a comprehensive review of nine different domain-agnostic challenges and their impact on trained agents. Within the context of robotics, \cite{Ibarz2021} identifies twelve issues based on real case studies and discusses possible mitigation strategies for each. Also focusing on robotics, \cite{Zhu2020} highlights three real-world challenges and proposes a model taking those into account on a set of dexterous robotic manipulation tasks.
While the set of challenges is diverse, the mitigation strategies to address them are not unique to one challenge but sometimes the same method or mechanism is proposed in different contexts. To summarize the data, in this work we assume a domain-agnostic perspective and extend the challenges identified in \cite{Dulac-Arnold2021}. In the following section, we address each challenge independently and list the mitigation approaches found in the literature.
The challenges we consider are: offline DRL, learning from limited samples, large-scale learning, high-dimensionality, safe DRL, partial observability, non-stationarity, unspecified reward, multiobjective reward, system delays, representation, transferability, long trajectories and credit assignment, stochasticity, multiagent DRL, counterfactual reasoning, stability, and the combination of many of these challenges.
Note these challenges are not necessarily independent; in some contexts specific challenges might be consequences of other challenges being present (e.g., apparent non-stationarity during learning might be a consequence of a partially observable environment). There are two other challenges some authors have highlighted that are not included in our analysis as isolated challenges: continuous spaces and real-time inference. In our view, these two are rarely addressed in isolation, and we therefore exclude them from our list.
Finally, designing, implementing, and operating a DRL model in the real world might also entail other considerations that go beyond difficult learning setups and are directly connected with the societal implications of the technology. Here we refer to interpretability (identified as one of the nine challenges in \cite{Dulac-Arnold2021}), reproducibility (discussed in detail in \cite{Henderson2018}), reliability, fairness, and privacy, as other challenges that are out of the scope of this work but remain important on a societal level.
\section{Challenge mitigation strategies}
\label{sec:strategies}
In this section we aim to summarize the approaches different researchers have proposed to address the DRL challenges introduced in the previous section. Table \ref{tab:summary} offers a summary of the high-level approaches and the contexts in which they are applied. This summary is based on our view of the challenges and the reviewed literature; it is open to different interpretations.
\renewcommand{\arraystretch}{1.4}
\begin{table*}[t]
\caption{Summary of the different approaches that have been considered by the RL community to address the specific challenges of real-world DRL. Each approach has been considered for different challenges independently.}
\label{tab:summary}
\vskip 0.1in
\begin{center}
\begin{small}
\begin{tabular}{p{0.15\linewidth-2\tabcolsep-1.3333\arrayrulewidth}p{0.4\linewidth-2\tabcolsep-1.3333\arrayrulewidth}p{0.45\linewidth-2\tabcolsep-1.3333\arrayrulewidth}}
\hline
\multicolumn{1}{c}{\textbf{Approach}} & \multicolumn{1}{c}{\textbf{Description}} & \multicolumn{1}{c}{\textbf{Examples}} \\ \hline
\rowcolor[HTML]{EFEFEF}
Meta learning & An outer loop learner changes meta parameters to better adapt to the challenge & Meta learning for offline DRL, meta learning for multiobjective reward, Meta RL for transferability, replacing action maximization by neural network search in high-dimensional spaces \\
Mathematical guarantee & Derive equations and theorems that support the challenge fulfillment & Lyapunov functions and primal-dual methods for safe DRL, assume uncertainty matrices to address non-stationarity \\
\rowcolor[HTML]{EFEFEF}
Neural network architectures & Rely on Deep Learning advances to increase robustness against the challenge & RNNs for partial observability, attention mechanisms for credit assignment, network ensembles for offline DRL \\
RL theory & Adapt theoretical RL frameworks to DRL settings and the use of neural networks & Off-policy algorithms, POMDPs, Delay-Aware MDPs, Constrained MDPs, Maximum-entropy RL \\
\rowcolor[HTML]{EFEFEF}
Embeddings and latent spaces & Address challenge problems by relying on robust intermediate embeddings & High- to low-dimensional embeddings, latent variables for non-stationarity, multimodal and contrastive learning-based representations, unsupervised learning \\
Reward estimation or modification & Try to overcome the challenge by directly modifying the reward structure and/or function & Reward shaping in long trajectories, reward shaping for safe DRL, reward redistribution, distributional RL against stochasticity \\
\rowcolor[HTML]{EFEFEF}
Deriving a model & Instead of learning a policy, learn models of the environment and use them to plan & Model-based RL, imitation learning, inverse RL \\
Pruning and masking & The learning process involves deciding, among different learning signals, how important each of them is and eliminating the unnecessary ones & Batch-constrained methods for offline DRL, distributed training in large-scale settings, action elimination in high-dimensional spaces, divide-and-conquer methods in stochastic environments \\
\rowcolor[HTML]{EFEFEF}
Use auxiliary tasks & Provide the agent with auxiliary tasks that, combined, increase robustness against the challenge & Multitask learning and online learning for data efficiency, self-supervised learning for unspecified reward, hierarchical RL in long trajectories \\
Data augmentation & Rely on different data augmentation and data wrangling techniques & Randomization to transfer better, data augmentation to address non-stationarity and stochasticity \\
\rowcolor[HTML]{EFEFEF}
Heuristics & Use human-crafted rules or processes to address the challenge & Scalarization of multiple objectives, hyperparameter tuning \\
Population-based methods & Have multiple agents with slightly different parameters/objectives and search for the best ones & Multiagent populations in partially observable environments, multiobjective populations \\
\rowcolor[HTML]{EFEFEF}
Multiagent specific & Solutions specific to the multiagent challenge that cannot be mapped to other challenges & Independent Q-learning, decentralized actors and centralized critic, hybrid mechanisms \\
\hline
\end{tabular}
\end{small}
\end{center}
\vskip -0.1in
\end{table*}
\textbf{Offline DRL}\,\,\, Some systems might require learning from offline logs instead of directly interacting with the environment, as interaction might be costly or not possible. An extensive review on the subject is presented in \cite{levine2020offline}. To address this issue, different \highlight{off-policy algorithms} such as DDPG \cite{lillicrap2015continuous} or D4PG \cite{Barth-Maron2018} can be used in some cases. Other authors propose \highlight{batch-constrained RL} approaches \cite{fujimoto2019off,Kumar2019,Siegel2020,Wu2019}, where the learned policy is constrained based on the state-action distribution of the dataset and the extrapolation error is accounted for. Then, the work in \cite{agarwal2020optimistic} considers an \highlight{ensemble} or convex combination of Q-functions to leverage data in a replay buffer. Finally, \highlight{model-based RL} \cite{sutton2018reinforcement} constitutes another research area in the context of offline DRL \cite{Yu2020a,Kidambi2020}.
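As a minimal illustration of the ensemble idea, the sketch below evaluates a random convex combination of Q-heads in the spirit of \cite{agarwal2020optimistic}; the toy Q-heads and all names are illustrative assumptions, not the cited implementation.

```python
import random

def rem_q_value(q_heads, state, action, rng):
    # Draw random weights and normalize them into a convex combination,
    # then mix the K Q-head estimates for this (state, action) pair.
    weights = [rng.random() for _ in q_heads]
    total = sum(weights)
    weights = [w / total for w in weights]
    return sum(w * q(state, action) for w, q in zip(weights, q_heads))

# Toy Q-heads: each head is a callable (state, action) -> value.
heads = [lambda s, a, b=b: b + 0.1 * s * a for b in (1.0, 2.0, 3.0)]
rng = random.Random(0)
mixed = rem_q_value(heads, state=1.0, action=1.0, rng=rng)
```

Because the mixture is convex, the estimate always lies between the smallest and largest head value, which is what makes the random re-weighting a cheap form of ensembling during offline training.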
\textbf{Learning from limited samples}\,\,\, Sometimes an agent must learn from a small number of samples, either because acquiring experience is slow or costly, or because rapid adaptation to a new context is needed. While the chosen or learned representations can impact the learning speed \cite{Srinivas2020}, multiple methods have been proposed to directly address data efficiency. One alternative is to \highlight{learn a model} of the world and use that model to plan \cite{chua2018deep,buckman2018sample}. In the context of learning specific tasks, if \highlight{expert demonstrations} \cite{duan2017one} or behavioral priors \cite{Singh2020} are available, the agent can bootstrap from those to increase data efficiency. If the goal of the agent is \highlight{multitask learning}, the tasks can be learned concurrently taking into account multiple gradient inputs \cite{Yu2020}; if the tasks are to be learned sequentially, \highlight{meta learning} algorithms, especially few-shot methods, offer a way to learn new tasks faster \cite{Finn2017,Li2017,sung2018learning,lee2019meta}. Finally, in \highlight{online learning} contexts, where new tasks need to be learned fast and on-the-fly, different approaches have been proposed to promote forward and backward transfer \cite{Schwarz2018,mallya2018packnet,Chaudhry2018,Nagabandi2018} and avoid catastrophic forgetting \cite{kirkpatrick2017overcoming}.
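The inner/outer-loop structure of gradient-based meta learning can be sketched on a toy problem. The snippet below runs first-order MAML steps on 1-D quadratic "tasks" $\ell_c(\theta)=(\theta-c)^2$; the task family, learning rates, and function names are illustrative assumptions, not the setup of any cited paper.

```python
def loss_grad(theta, c):
    # Gradient of the toy task loss (theta - c)^2.
    return 2.0 * (theta - c)

def fomaml_step(theta, task_centers, inner_lr=0.1, meta_lr=0.5):
    # One first-order MAML step: adapt to each task with an inner
    # gradient step, then average the gradients at the adapted points
    # and apply them to the shared initialization.
    meta_grad = 0.0
    for c in task_centers:
        adapted = theta - inner_lr * loss_grad(theta, c)
        meta_grad += loss_grad(adapted, c)
    meta_grad /= len(task_centers)
    return theta - meta_lr * meta_grad

theta = 5.0
for _ in range(50):
    theta = fomaml_step(theta, task_centers=[-1.0, 1.0])
# theta drifts toward the initialization from which both tasks adapt fastest
```

For this symmetric task pair, the meta-initialization converges to the midpoint of the task optima, which is exactly the point from which one inner gradient step makes the most progress on either task.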
\textbf{Large-scale learning}\,\,\, In specific settings an agent should be able to capitalize on massive amounts of data fast, either because experience comes at a high frequency or multiple independent agents can collect experience simultaneously. For the latter case, when environments can be parallelized (e.g., self-driving cars, recommender systems), \highlight{distributed training with importance and priority mechanisms} has been proposed in different works \cite{Adamski2018,Horgan2018,Espeholt2018}.
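A core ingredient of the cited distributed setups is sampling replay data in proportion to a priority (e.g., TD error). The sketch below is a minimal proportional sampler in that spirit; the class name, $\alpha$ value, and toy priorities are illustrative assumptions.

```python
import random

class PrioritizedReplay:
    # Minimal proportional prioritized replay: items are sampled with
    # probability proportional to priority ** alpha.
    def __init__(self, alpha=0.6, seed=0):
        self.alpha = alpha
        self.items, self.priorities = [], []
        self.rng = random.Random(seed)

    def add(self, transition, priority):
        self.items.append(transition)
        self.priorities.append(priority ** self.alpha)

    def sample(self):
        # Inverse-CDF sampling over the (unnormalized) priorities.
        total = sum(self.priorities)
        r = self.rng.random() * total
        acc = 0.0
        for item, p in zip(self.items, self.priorities):
            acc += p
            if r <= acc:
                return item
        return self.items[-1]

buf = PrioritizedReplay()
buf.add("low-error", priority=0.01)
buf.add("high-error", priority=10.0)
counts = {"low-error": 0, "high-error": 0}
for _ in range(1000):
    counts[buf.sample()] += 1
```

High-priority transitions dominate the sampled minibatches, which is the mechanism the distributed actors/learner architectures exploit to digest massive experience streams efficiently (a production implementation would use a sum-tree rather than the linear scan shown here).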
\textbf{High-dimensionality}\,\,\, Some agents might need to operate in high-dimensional or combinatorial state and action spaces (e.g., natural language, molecular space). Here one approach is to operate with lower-dimensional \highlight{embeddings} of the spaces \cite{Dulac-Arnold2015,Robine2020}. Other authors propose \highlight{action elimination} mechanisms to determine which actions not to take first \cite{zahavy2018learn}. In the context of Q-learning \cite{sutton2018reinforcement} over large action spaces, \cite{van2020q} proposes \highlight{replacing the maximization} operation with a neural network. Finally, the use of \highlight{canonical spaces} can help reduce the state space by encapsulating redundant spaces together \cite{wu2017emergent}.
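The embedding approach of \cite{Dulac-Arnold2015} can be sketched in miniature: a continuous "proto-action" is mapped to its $k$ nearest discrete actions in embedding space, and a Q-function refines the choice among candidates. The 1-D embeddings, action names, and Q-function below are made-up illustrations.

```python
def nearest_actions(proto, action_embeddings, k=3):
    # k-NN lookup over (toy, scalar) action embeddings.
    by_dist = sorted(action_embeddings.items(),
                     key=lambda kv: abs(kv[1] - proto))
    return [name for name, _ in by_dist[:k]]

def select_action(proto, action_embeddings, q_fn, k=3):
    # Refine the k nearest candidates with the Q-function instead of
    # maximizing Q over the full (possibly huge) action set.
    candidates = nearest_actions(proto, action_embeddings, k)
    return max(candidates, key=q_fn)

embeds = {"a0": 0.0, "a1": 0.3, "a2": 0.6, "a3": 0.9}
# Toy Q-function that happens to prefer larger embeddings.
choice = select_action(0.35, embeds, q_fn=lambda a: embeds[a], k=2)
```

The point of the two-stage lookup is that the expensive Q evaluation only touches $k$ candidates, so the scheme scales to action sets far too large for exhaustive maximization.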
\textbf{Safe DRL}\,\,\, While exploration plays an important part in the success of RL, agents acting in real-world environments should account for safety constraints and be able to evaluate risks. One common approach to that end is to encode constraints as part of the reward function \cite{garcia2015comprehensive}, but that might not always be desirable \cite{achiam2017constrained}. Some studies propose adding a learnable safety layer on top of the policy in order to \highlight{prune or correct unsafe actions} \cite{dalal2018safe}. The work in \cite{Tessler2018} explores \highlight{reward shaping} and proposes a method that subtracts constraint-violation penalties from the reward. Then, \highlight{Lyapunov functions} have been proposed to certify the stability and safety of different RL-based controllers \cite{Chow2018,berkenkamp2017safe}. Constraint satisfaction can also be guaranteed by means of \highlight{primal-dual methods}, as shown in \cite{qin2021density}. Finally, an agent can also learn to \highlight{trade rewards and costs} by specifying constraints as costs with state-dependent and learnable Lagrangian coefficients \cite{Bohez2019}.
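The generic reward/cost trade-off behind Lagrangian approaches can be shown in a few lines: dual ascent raises the multiplier while the cost exceeds its budget and lowers it otherwise. The concrete learning rate and cost numbers are illustrative, not taken from any cited method.

```python
def lagrangian_update(lmbda, avg_cost, cost_budget, lr=0.1):
    # Dual ascent on the Lagrange multiplier: increase lambda when the
    # constraint is violated, decrease it otherwise; lambda stays >= 0.
    return max(0.0, lmbda + lr * (avg_cost - cost_budget))

def penalized_reward(reward, cost, lmbda):
    # The scalar objective the policy actually maximizes.
    return reward - lmbda * cost

lmbda = 0.0
for _ in range(20):                 # constraint violated: cost 2 > budget 1
    lmbda = lagrangian_update(lmbda, avg_cost=2.0, cost_budget=1.0)
```

As the multiplier grows, the penalized reward of constraint-violating behavior becomes increasingly negative, pushing the policy back toward the feasible region without hand-tuning a fixed penalty weight.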
\textbf{Partial observability}\,\,\, Many environments in the real world are partially observable. In the context of DRL, some authors initially proposed incorporating \highlight{past observations} into the state \cite{mnih2015human} or using \highlight{recurrent neural network} architectures \cite{Hausknecht2015}. Inspired by the theory on POMDPs \cite{cassandra1994acting}, works like \cite{igl2018deep} propose training a variational autoencoder to learn latent representations encoding \highlight{belief states}. In case the agent competes against other non-fully-observable agents, \cite{Jaderberg2019} shows that \highlight{training populations of agents} eventually leads to the best agents finding suitable policies for the environment. If the agent must cooperate with the other agents, the use of \highlight{shared experience replay} helps mitigate the effect of partial observability \cite{omidshafiei2017deep}.
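The simplest mitigation mentioned above (appending past observations to the state, as in \cite{mnih2015human}) amounts to a small wrapper; this sketch uses scalar observations and a made-up class name for illustration.

```python
from collections import deque

class FrameStack:
    # Form the agent's state as the tuple of the last k observations.
    def __init__(self, k):
        self.k = k
        self.frames = deque(maxlen=k)

    def reset(self, obs):
        self.frames.clear()
        for _ in range(self.k):      # pad with the first observation
            self.frames.append(obs)
        return tuple(self.frames)

    def step(self, obs):
        self.frames.append(obs)      # deque drops the oldest frame
        return tuple(self.frames)

stack = FrameStack(k=3)
s0 = stack.reset(0)                  # (0, 0, 0)
s1 = stack.step(1)                   # (0, 0, 1)
s2 = stack.step(2)                   # (0, 1, 2)
```

The stacked state lets a feedforward policy infer quantities (e.g., velocities) that no single observation contains, at the cost of a $k$-fold larger input.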
\textbf{Non-stationarity}\,\,\, A robust policy should be effective in non-stationary environments, where the underlying transition dynamics change over time due to various factors such as noise. In these contexts, one alternative is the use of \highlight{latent variables} that encode environment representations robust to noisy cues \cite{Xie2020}. A well-established approach is to \highlight{assume uncertainty in the transition matrices and derive robust algorithms} that consider worst-case scenarios \cite{Mankowitz2019} or pursue soft-robustness \cite{Derman2018}. Bayesian optimization-based methods can also be derived from this latter idea \cite{derman2020bayesian}. Finally, \highlight{data augmentation and randomization} during training can also lead to policies that adapt to real-world environments and generalize better \cite{Peng2018}.
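The randomization idea is operationally simple: resample environment parameters at every episode so the policy never overfits one dynamics setting. The parameter names and ranges below are placeholders, not values from any cited benchmark.

```python
import random

def randomized_env_params(rng):
    # Per-episode sample of dynamics parameters (illustrative ranges).
    return {
        "mass": rng.uniform(0.8, 1.2),
        "friction": rng.uniform(0.5, 1.5),
        "sensor_noise_std": rng.uniform(0.0, 0.05),
    }

rng = random.Random(42)
# Each training episode would be run under its own sampled dynamics.
episodes = [randomized_env_params(rng) for _ in range(100)]
```

A policy trained across this distribution is, in effect, optimized for the family of dynamics rather than a single point, which is why randomized training tends to survive the shift to the real system.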
\textbf{Unspecified reward}\,\,\, Sometimes agents must learn skills without reward signals, due to unavailable human feedback, complex exploration dynamics, or long-horizon tasks. If there is no reward function but expert demonstrations are available, \highlight{Inverse RL} is an approach to learn reward signals \cite{Fu2017}. The work in \cite{Hansen2020} proposes a method to train policies by means of \highlight{self-supervised learning} when deploying in environments without reward information. Another alternative is to learn a goal-conditioned policy via \highlight{unsupervised learning}, maximizing the similarity between visited states and a goal state \cite{Warde-Farley2018}. In the context of multitask learning, in \cite{Eysenbach2018} an agent is shown to learn a diverse set of distinguishable skills by \highlight{maximizing entropy}. These skills can then be used to better adapt to new tasks.
\textbf{Multiobjective reward}\,\,\, Several tasks in the real world require accounting for multiple objectives, and an agent must learn to reason about them. To that end, many works rely on \highlight{scalarization} approaches that combine the different objectives into a weighted reward function. This approach can be hard to tune if the individual rewards' scales or priorities change over time. To have better control over the objectives, \cite{abdolmaleki2020distributional} proposes training \highlight{individual policies for each objective} and then, instead of combining rewards in the reward space, combining policies in the distribution space. Another alternative is to train a different \highlight{policy per preference over objectives} \cite{xu2020prediction,Yang2019}, which leads to dense Pareto-optimal sets of policies that trade the different objectives following the operator's preferences. Finally, \highlight{meta learning} methods have also been proposed to learn new preferences in a few-shot fashion \cite{Chen2019}.
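Linear scalarization, the baseline approach above, reduces to a weighted sum of the reward vector; the two-objective example below (made-up action names and reward values) shows how the preference weights steer which behavior looks optimal.

```python
def scalarize(reward_vector, preference):
    # Linear scalarization: weights encode the operator's preference.
    assert len(reward_vector) == len(preference)
    return sum(r * w for r, w in zip(reward_vector, preference))

# Two candidate actions with (throughput, energy-saving) rewards.
rewards = {"fast": (1.0, 0.1), "efficient": (0.4, 0.9)}
prefer_speed  = max(rewards, key=lambda a: scalarize(rewards[a], (0.9, 0.1)))
prefer_energy = max(rewards, key=lambda a: scalarize(rewards[a], (0.1, 0.9)))
```

The fragility the text mentions is visible here: rescaling one objective silently shifts which weights recover a given trade-off, which is what motivates the preference-conditioned and distribution-space alternatives.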
\textbf{System delays}\,\,\, DRL experimental setups generally assume negligible delay when acting, observing the new state, or receiving the reward. That might not be the case in the real world. To address this issue, the framework of \highlight{Delay-Aware MDPs} was introduced in \cite{Chen2020} to account for delayed dynamics. A similar idea is proposed in \cite{Derman2021}, where the delayed-Q algorithm leverages a forward dynamics model to predict delayed states. In the context of recommendation systems, the method in \cite{Mann2018} exploits \highlight{intermediate observations/symbols} to mitigate the effect of delays.
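The state-augmentation trick behind Delay-Aware MDPs can be sketched with a toy environment that only executes each action after a fixed delay; augmenting the observation with the buffer of pending actions restores the Markov property. The wrapper name and counter dynamics are illustrative assumptions.

```python
from collections import deque

class DelayedActionEnvWrapper:
    # Toy environment: the true state is a counter that accumulates
    # actions, but each action only takes effect `delay` steps later.
    def __init__(self, delay):
        self.pending = deque([0] * delay, maxlen=delay)
        self.counter = 0

    def step(self, action):
        executed = self.pending[0]        # oldest action takes effect now
        self.pending.append(action)       # deque drops the executed one
        self.counter += executed
        # Augmented state = (true state, pending actions): the agent can
        # see the actions in flight, so the delay is no longer hidden.
        return (self.counter, tuple(self.pending))

env = DelayedActionEnvWrapper(delay=2)
s1 = env.step(1)   # nothing executed yet
s2 = env.step(1)
s3 = env.step(1)   # the first action finally takes effect
```

Without the pending-action buffer in the state, identical observations would demand different optimal actions depending on unobservable in-flight commands, which is exactly the failure mode delay-aware formulations remove.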
\textbf{Representation}\,\,\, In certain environments the challenge lies in encoding all information relevant to the problem or task efficiently, leveraging a sufficient amount of domain knowledge or inductive biases \cite{Hessel2019}. Tradeoffs are present, e.g., learning policies from physical state-based features might be more sample-efficient---although not always possible---than learning from pixels \cite{Tassa2018}. The question ``what makes a good representation for RL?'' is studied in \cite{Singh2020}. A simple approach is to design different representations for the same environment and turn the specific representation chosen into a \highlight{hyperparameter} that can be tuned based on the scenario \cite{Kim2020}. Different environment encodings can also be combined into \highlight{multimodal representations} (e.g., image and sound in video-based environments) \cite{Tsai2018,Tian2019}. Helpful representations can also be learned, for instance by means of \highlight{contrastive learning} frameworks \cite{Wu2018,Srinivas2020}. Then, \cite{Zhang2020a} proposes learning \highlight{invariant representations} by means of lossy autoencoders that capture only task-relevant elements. Finally, representation problems can also be regarded from the perspective of the reward; better reward functions might be devised following \highlight{reward shaping} methods \cite{Faust2019,Chiang2019}.
\textbf{Transferability}\,\,\, Policies should transfer to different instances of the system and/or environment regardless of their low-level features, without this posing a considerable challenge. To that end, \highlight{randomization} strategies can be used to increase robustness to transfer \cite{Lee2019,Tobin2017}. \highlight{Meta RL} methods serve as another way to achieve transferability, by parametrizing specific elements of the DRL framework and using an outer-loop learner trained on multiple environments \cite{Oh2020,houthooft2018evolved,Kirsch2019,Alet2020}. Learning common \highlight{invariant latent spaces} could be another approach to consider in some contexts \cite{gupta2017learning}.
\textbf{Long trajectories and credit assignment}\,\,\, When trajectories are long and/or rewards are sparse, learning efficient behaviors can be hard; the agent must discover a long sequence of correct actions. \highlight{Hierarchical RL} poses a possible solution, by considering a hierarchy of auxiliary tasks with known reward structure in order for the agent to reason at different levels of temporal resolution \cite{Riedmiller2018,nachum2018data,vezhnevets2017feudal}. Other works propose \highlight{attention mechanisms} to ease credit assignment over long timescales \cite{Wayne2018,Hung2019}. Then, the method presented in \cite{arjona2019rudder} tackles the problem by \highlight{redistributing reward}, i.e., creating a return-equivalent MDP that redistributes reward more uniformly. Finally, \highlight{reward shaping} methods are also studied for this type of contexts \cite{Su2015,Chiang2019}.
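The reward-shaping route above has a classic safe form: potential-based shaping $F(s,s')=\gamma\Phi(s')-\Phi(s)$, which densifies sparse rewards while provably preserving the optimal policy (Ng et al.'s result). The chain environment and distance-based potential below are illustrative choices.

```python
def shaped_reward(r, s, s_next, potential, gamma=0.99):
    # Potential-based shaping: add F = gamma * Phi(s') - Phi(s) to the
    # environment reward; the optimal policy is unchanged.
    return r + gamma * potential(s_next) - potential(s)

# Sparse-reward chain: only the goal state pays off; a distance-based
# potential turns every transition into an informative signal.
goal = 5
phi = lambda s: -abs(goal - s)
step_toward = shaped_reward(0.0, s=2, s_next=3, potential=phi)
step_away = shaped_reward(0.0, s=2, s_next=1, potential=phi)
```

Moving toward the goal now yields positive shaped reward and moving away yields negative reward, even though the environment reward is zero in both cases; this is what eases credit assignment over long trajectories.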
\textbf{Stochasticity}\,\,\, On some occasions, real-world environments can be highly stochastic, which might lead to high-variance gradient estimates that hamper learning. To make sure the agent is trained over a wide distribution of states, some authors propose \highlight{data augmentation} strategies and \highlight{randomization} \cite{Lee2019,Tobin2017}. The method presented in \cite{Ghosh2017} suggests partitioning the initial state distribution and training different policies later to be merged in a \highlight{divide-and-conquer} fashion. In highly stochastic environments, the final policy might be better if the agent does not learn based on the average return but on a \highlight{distribution over returns} \cite{dabney2018implicit,bellemare2017distributional}.
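Why a distribution over returns helps can be seen on a toy pair of actions with equal mean but different spread; a risk-sensitive agent compares low quantiles instead of means. The return samples and nearest-rank quantile below are illustrative, not the projection step of the cited algorithms.

```python
import statistics

def quantile(sorted_samples, tau):
    # Empirical tau-quantile of a return distribution (nearest-rank).
    idx = min(int(tau * len(sorted_samples)), len(sorted_samples) - 1)
    return sorted_samples[idx]

# Two actions with equal mean return but very different spread.
safe = sorted([1.0] * 10)
risky = sorted([-4.0] * 5 + [6.0] * 5)
mean_safe, mean_risky = statistics.mean(safe), statistics.mean(risky)
# Means are indistinguishable; low quantiles expose the downside risk.
low_safe, low_risky = quantile(safe, 0.1), quantile(risky, 0.1)
```

An agent that only learns expected returns is indifferent between the two actions, while the return distribution makes the risky action's bad outcomes visible and selectable against.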
\textbf{Multiagent DRL}\,\,\, In many real-world environments (e.g., robot swarms, autonomous cars), a team of agents must align their behavior while acting in a decentralized way \cite{Rashid2018}; leveraging experience from multiple agents is not always straightforward, and other challenges such as partial observability and non-stationarity might also come into play. An extended review on the subject can be found in \cite{Nguyen2020}. To address this challenge, one approach is to have \highlight{each agent learn independently} \cite{Tampuu2017}, which decentralizes training but might cause stability problems \cite{Foerster2017}. On the opposite side, \cite{foerster2018counterfactual} explores the framework of \highlight{multiple decentralized actors and a single centralized critic}. Inspired by Value Decomposition Networks \cite{sunehag2018value}, the work in \cite{Rashid2018,Son2019} proposes \highlight{hybrid mechanisms} to combine per-agent Q-functions into a single centralized Q-function. \highlight{Multiagent Policy Gradient} algorithms introduce a similar concept focused on continuous spaces \cite{lowe2017multi,li2019robust}, which can also be combined with attention \cite{iqbal2019actor}.
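The value-decomposition idea from \cite{sunehag2018value} can be sketched with two agents and toy Q-tables: the joint Q-value is the sum of per-agent terms, so each agent can act greedily on its own table while training uses the joint signal. The tables and action names are made up for illustration.

```python
def vdn_joint_q(per_agent_q, joint_action):
    # VDN decomposition: Q_total(s, a1..an) = sum_i Q_i(s, a_i).
    return sum(q[a] for q, a in zip(per_agent_q, joint_action))

q1 = {"left": 1.0, "right": 3.0}
q2 = {"left": 2.0, "right": 0.5}
# Greedy decentralized execution: each agent maximizes its own table;
# because Q_total is a sum, this also maximizes the joint value.
greedy = tuple(max(q, key=q.get) for q in (q1, q2))
total = vdn_joint_q((q1, q2), greedy)
```

The additive structure is exactly what makes centralized training compatible with decentralized execution; the hybrid mechanisms cited above (e.g., QMIX-style mixing) relax the sum to any monotonic combination.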
\textbf{Counterfactual reasoning}\,\,\, The ability to reason about actions not taken and ``what-ifs'' is necessary in some real-world systems, especially when constraints or risks are hard to capture. This is a relevant problem in healthcare applications \cite{Prasad2017}. This challenge partly overlaps with offline RL, since extrapolation techniques can be useful in some contexts, especially when there is correlation between state-action pairs inside and outside databases \cite{fujimoto2019off}. While some studies might touch on this concrete challenge, we did not find any work specifically focusing on counterfactuals and real-world DRL. Facebook's platform Horizon \cite{gauci2019horizon}, one of the success stories of real-world DRL, leverages work on Counterfactual Policy Evaluation \cite{wang2017optimal} to evaluate policies without deploying them online.
\textbf{Stability}\,\,\, Once deployed, agents should maintain the desired behavior for indefinite time, even when new experience is collected. This challenge has a link with other considerations: online learning and the problem of catastrophic forgetting, autonomous resets (\cite{Ibarz2021} identifies autonomous resets as one of the specific challenges in the context of robotics), and the general issue of reliability. While this is a challenge directly related to the post-deployment or operation phase, we did not find specific works directly tackling this issue for real-world DRL.
\textbf{Combined challenges}\,\,\, Finally, as pointed out in \cite{Dulac-Arnold2021}, real-world DRL challenges usually do not appear in isolation but combined. The literature specifically addressing multiple challenges simultaneously is scarce. For instance, the work in \cite{Jaderberg2019} focuses on both multiagent settings and partial observability, although they are commonly related. We have not been able to find works tackling numerous challenges at the same time.
\section{Domain-agnostic research's gaps}
\label{sec:gaps}
As seen in Table \ref{tab:summary}, different strategies have been adopted to address specific challenges of real-world DRL; these can be grouped into thirteen types. Based on this literature review, we deem there is a good understanding of each individual challenge, and the provided references demonstrate that novel methods are able to reach new levels of robustness in the test environments. The review has also allowed us to identify the following five gaps that we believe are important to consider when evaluating the progress of real-world DRL.
\textbf{1. Bias towards robotics use cases}\,\,\, Most of the frameworks, use cases, and test environments are focused on control applications, specifically robotics. There are multiple problems in the real world that consist of optimization, design, or recommendation tasks. In some cases their underlying systems are simpler than highly-actuated robots and thus considering them as additional benchmarks could be beneficial moving forward. Extending the focus to these systems might be an opportunity to achieve new successful deployments.
\textbf{2. Not enough research on combined challenges}\,\,\, The majority of the presented studies focus on one challenge at a time and ignore analyses of combined challenges. Real-world environments display multiple challenges simultaneously, often with high degrees of interaction. Combined effects are studied in \cite{Dulac-Arnold2021}; their paper shows that a simple interaction of a few challenges can substantially hamper policy performance. They also provide a benchmark task to study this issue in more depth.
\textbf{3. Lack of real-world follow-through}\,\,\, Most of the studies cited in this work make use of the same test environments: MuJoCo \cite{Todorov2012}, DeepMind Control Suite \cite{Tassa2018}, the Arcade Learning Environment \cite{Bellemare2013}, DeepMind Lab \cite{Beattie2016}, Behaviour Suite for RL \cite{Osband2019}, Alchemy \cite{Wang2021}, Meta-World \cite{yu2020meta}, or PyBullet \cite{coumans2016pybullet}. These environments are still simulations; in most cases real-world testing is left as future work. While it is highly difficult to create real-world benchmarks all researchers can use, presenting results on real systems would add value to the community.
\textbf{4. Little understanding of design tradeoffs}\,\,\, Comparisons of different approaches are generally only loosely tied to the design perspective and its tradeoffs. We believe domain-specific operators looking to introduce DRL into their systems might not find it obvious which approaches to try and use when designing their models. Currently, there is little effort to align all research directions with that goal.
\textbf{5. Operation is generally ignored}\,\,\, The vast majority of works ignore operation after deployment, which is an essential piece of information for potential domain-specific operators. In that sense, an open question is: what test scenarios and/or Key Performance Indicators (KPIs) should we use to guarantee operability over time and identify possible performance degradations?
\section{Domain-specific research perspective}
\begin{figure}[t]
\centering
\includegraphics[width=.99\linewidth]{subproblem.pdf}
\caption{Representation of a real-world problem in terms of a problem space, known problem space, and operation space.}
\label{fig:subproblem}
\end{figure}
In the previous sections we outlined the contributions and gaps of RL research when it comes to real-world DRL. In general, though, many of the papers proposing DRL to address concrete problems or applications come from domain-specific communities, mainly following the motivations presented in Section \ref{sec:introduction}. In that sense, the adoption is positive. However, in the majority of cases the path from proofs of concept and prototypes to real deployments and system integrations is still to be traversed. In this section we focus on this issue by, first, reviewing some examples of successful deployments; then, arguing what is missing in other proofs of concept; and finally, outlining possible ways of achieving new deployments that benefit from both RL and domain-specific research.
\subsection{Successful deployments in the real world}
One of the main roadblocks in real-world DRL is training in simulation before deploying. The \textit{sim-to-real gap} is a well-known problem that many applications face. Still, the work in \cite{OpenAI2019} proved a robot could learn manipulation skills and solve a Rubik's Cube from high-quality simulation training alone. The agent relies on a simple algorithm, PPO \cite{schulman2017proximal}, to learn the policy. Training in simulation also offered the advantage of easily randomizing different environment properties, which helped the agent learn more robustly. \cite{Osinski2020} and \cite{Tai2017} are examples that follow the same framework for autonomous driving and robot mapless navigation, respectively. However, the deployment context in both applications is more complex than solving a Rubik's Cube, so the authors limit themselves to very controlled test conditions in real settings. These differences suggest that the specific real-world problem addressed plays an important role in determining the deployability of a certain DRL model. This motivates our discussion in the following parts of this section.
In some cases, agents might be able to train in the real world and not need to rely on simulations. The work in \cite{Haarnoja2018a} focuses on the ability to learn in a real setting, specifically proposing a method for a quadrupedal robot to learn locomotion skills. By maximizing both return and entropy, the robot acquires a stable gait from scratch in about two hours. Entropy maximization might become essential in real-world training as a means to explore more broadly and effectively.
In many industries, deployment involves integration within a company's operations. A good example of an industrial deployment is Google Loon's DRL-based station-keeping mechanism \cite{Bellemare2020}. The company replaced its previous controller with a DRL agent that is better at keeping balloons close to base stations. A key feature of this example is that the agent's actions do not alter the environment (i.e., wind currents), and therefore the learning process has fewer interactions to capture.
Another relevant industrial deployment is Facebook's Horizon \cite{gauci2019horizon}, which the company uses to decide when to send notifications to users. In this latter case, the possibility to gather massive amounts of data from millions of users has made DRL a successful decision-maker. This mirrors the usefulness of self-play in videogame applications such as AlphaZero \cite{silver2018general} or AlphaStar \cite{vinyals2019grandmaster}.
\subsection{Missing links between proofs of concept and deployments}
We have presented a collection of real-world examples in which DRL is making a difference. Still, the quantity of proofs of concept in the literature substantially outnumbers the quantity of reported deployments. We argue this is due to a combination of different factors inherent to domain-specific research. Throughout this section we use Figure \ref{fig:subproblem} to better understand them.
The premise of the majority of domain-specific papers entails picking a concrete decision-making problem or task of the domain and adapting the DRL framework to address it. In its broadest sense, this problem (e.g., autonomous driving, warehouse management, designing a new drug) can be defined by a \textit{problem space} that captures all possible scenarios that can be encountered in the real world. For example, in the case of autonomous driving, this would correspond to all types of roads, traffic, localizations, etc. Many of these possible scenarios are known beforehand, and therefore constitute the \textit{known problem space}, represented in Figure \ref{fig:subproblem}. We also assume there is a space, whose size varies depending on the application, that encapsulates the scenarios that are unknown to model designers and operators until the model is deployed and/or operated in the real world.
Then, almost all real-world tasks are embedded in business frameworks in which quality of service considerations come into play. This gives rise to an \textit{operation space}, which essentially defines what a successful deployment is, especially in industrial contexts. This space is not necessarily completely contained within the known problem space. We argue that in the literature, generally, all proofs of concept of a DRL model addressing a real-world problem in a specific domain are based on scenarios within both the operation and known problem spaces. However, the generalizability of such models is not enough to capture the complete operation space, although it might capture scenarios outside of the operation and the known problem spaces.
The difference between the operation space and the proofs of concept's generalizability is what is slowing the deployment of DRL models in the real world. Operators might be reluctant to integrate DRL models into their systems until this gap is reduced or completely covered for an individual problem. For instance, in the case of the Rubik's Cube addressed in \cite{OpenAI2019}, the operation space is almost certainly covered by the proposed model, resulting in a successful deployment in real life.
\subsection{Possible considerations to move forward}
The framework presented in Figure \ref{fig:subproblem} is intrinsic to domain-specific research, where methods are outlined within a real problem's context and there is a good understanding of this problem. In contrast, as discussed in Section \ref{sec:gaps}, many of the domain-agnostic studies presented have strong generalizability but lack real-world follow-through, i.e., an operation space as background. When acknowledging both an operation space and the generalizability of proofs of concept, three ideas follow to achieve new deployments: pushing the generalizability boundaries of current proofs of concept, designing additional models that aim to cover remaining areas of the operation space, and considering whether certain areas cannot be covered by DRL-based methods at all. We weigh in on each of them in the following paragraphs.
A straightforward direction to cover the operation space is to design models that generalize better; this has been a central issue in the RL community. Broadly speaking, the challenges presented in Section \ref{sec:challenges} hamper the generalizability of DRL models in the real world, and the mitigation strategies discussed in Section \ref{sec:strategies} aim to increase it. However, in domain-specific communities success is in many cases measured by the performance on specific scenarios, ignoring generalizability. Hence, model designers tend to follow different routes and tailor their solutions to very specific scenarios in order to surpass other state-of-the-art methods, usually hand-crafting many elements of the DRL framework in the process. While these dynamics increase engagement in DRL, the resulting models generalize poorly and hardly fulfill expectations from the operation perspective. Operators likely care more about generalizability and might be willing to trade performance for it.
Even though pursuing solutions that generalize better is interesting, it is also fair to assume that a single DRL model might not be sufficient to cover the entire operation space, especially if there are certain quality of service requirements in place. Therefore, another idea to consider is implementing additional proofs of concept so that their overall union covers the operation space. The operator would then need to decide which model to use in which scenario. We believe in some cases this approach is easier to implement than attempting to design a model capable of covering the whole operation space. This approach is especially natural from a domain-specific perspective, where an operation space is usually acknowledged. However, current design practices focused on tailoring DRL models to very specific scenarios make this approach hard to scale. The cost of covering the operation space of complex problems by implementing multiple proofs of concept could be unaffordable in some cases.
Finally, we should consider those problems in which no DRL model or combination of models is able to cover the entire operation space. There might be certain operational scenarios that are too complex for a DRL agent (see Figure \ref{fig:subproblem}); we might want to rely on other types of decision-making approaches in those cases. Acknowledging the possible existence of this space could avoid many futile attempts to achieve deployments solely based on DRL, especially if model designers have already explored pushing generalizability boundaries and adding new models. We believe in some cases DRL will not work in the real world only by itself, but in combination with other decision-making technologies.
\section{Conclusion}
In this work we have focused on real-world-oriented Deep Reinforcement Learning (DRL) research from both domain-agnostic and domain-specific perspectives. We have offered our view on why there is a lack of real-world deployments of DRL models despite the numerous efforts from different research communities, and identified directions to move forward. On one hand, we have provided a comprehensive review of the domain-agnostic challenges of real-world DRL and summarized the different approaches being taken to address them. Through this review, we have identified five gaps in domain-agnostic research: a bias towards robotics use cases, insufficient research on combined challenges, a lack of real-world follow-through, little understanding of design tradeoffs, and an omission of operation considerations. On the other hand, we have explained the motivations and success stories of domain-specific research when it comes to DRL. Still, the number of deployments is low. We attribute this to a misalignment between the generalizability of proofs of concept in the literature and operation requirements. Finally, we have discussed possible ways to increase the operability and robustness of domain-specific DRL models, and how those efforts can benefit from research on mitigating real-world DRL challenges.
\section{Introduction}
\label{sec:Introduction}
In the study of two-dimensional quantum magnets, the anisotropic triangular
model has been a continuing object of attention. This is partially due to its
applicability to real experimental materials such as the organic salts
$\kappa-$(BEDT-TTF)$_2$Cu$_2$(CN)$_3$,\cite{NMR_Organic_Salt,Sachdev_Organic_Salts,
Organic_Salts_Hamiltonian_Work}
$\kappa-$(BEDT-TTF)$_2$Cu$_2$[N(CN)$_2$],\cite{Organic_Salts_Hamiltonian_Work}
and inorganic
Cs$_2$CuCl$_4$,\cite{Coldea_Tsvelik_Neutron_Scatter_CsCuCl,Balents_Initial_CsCuCl,Coldea_Initial,Coldea_Hamiltonian_Work,Coldea_Zheng}
and Cs$_2$CuBr$_4$,\cite{Coldea_Zheng,Tanaka_CsCuBr} and partially due to
early theoretical and numerical speculation that it could exhibit a coveted
2D spin liquid
phase.\cite{Weng_Weng_Bursill_ED,Becca_Sorella_ED,Hauke_ED_MSWT,Chung_Marston_LSW_HATM}
This was followed by suggestions that experimental results on
Cs$_2$CuCl$_4$\cite{Coldea_Tsvelik_Neutron_Scatter_CsCuCl} could be explained
by less exotic, quasi-1D spin liquid
behaviour.\cite{Kohno_Starykh_Balents_Nature,Balents_Nature} This led to
more recent theoretical work, utilizing renormalization group techniques,
suggesting a subtle collinear antiferromagnetic (CAF) ordering in this same
region,\cite{Balents_RG,Kallin_Ghamari_RG} with this ordering being in
competition with the more classical incommensurate spiral order, which also
may exist.\cite{Pardini_and_Singh_Series_Expansion} Most recently a DMRG
study using periodic boundary conditions considered substantially larger
systems than before and found a gapped state with strong antiferromagnetic
correlations accented by weak, short-range, incommensurate spiral ones.\cite{Weichselbaum_and_White_DMRG}
Thus, the question in the $J' \ll J$ region is whether the system exhibits a
one- or two-dimensional spin liquid
phase,\cite{Becca_Sorella_ED,Weng_Weng_Bursill_ED,Hauke_ED_MSWT,Chung_Marston_LSW_HATM,Reuther_Thomale_Functional_RG}
or a collinear antiferromagnetic order driven by next-nearest chain
antiferromagnetic correlations and order by disorder,\cite{Balents_RG} or
something entirely different. Suffice it to say that the true physics of this
system remains controversial.
Though Dzyaloshinskii-Moriya and interplane interaction are believed to play a
role in the physics of the previously mentioned real materials, the more
simplified system of a Heisenberg model on a triangular lattice with exchange
interactions $J$ along one direction and differing interactions ($J'$) along
the other two primitive vectors (see Fig.~\ref{fig:Triangular_Lattice}), is
believed to capture much of the relevant physics. For $J'<J$ this can be
visualized as an array of weakly interacting chains. In the limit of only two
chains this system reduces to the well studied $J_1-J_2$ chain, which is known
to be a gapless Luttinger liquid for $J \ll J'$ before undergoing a phase
transition at $J\simeq 0.24 J'$ to a gapped phase characterized by dimer-like and
incommensurate spiral
correlations.\cite{Thesberg_Sorensen_J1_J2_Chain,Chen_J1_J2,J1_J2_ED_1,Eggert,DMRG_2,DMRG_3,J1_J2_Field_Theory_1,Essler_TBC}
It is known, however, that the behaviour of the true two-dimensional system
differs greatly.
In this paper we explore the $J' < J$ region of the anisotropic
triangular lattice Heisenberg model (ATLHM) through the use of twisted boundary conditions (TBC) and
exact diagonalization (ED). This allows for a minimally biased
exploration of the incommensurate behaviour of the system.
A typical cluster used in the calculations along with the imposed twists is shown in Fig.~\ref{fig:Triangular_Lattice}.
By minimizing the total energy of the ground-state with respect to the applied twist we can determine the {\it optimal twist} $\theta^{gs}$ that most closely fits the natural ordering
present in the system. It is then possible
to infer a preferred $q$-vector from the value of $\theta^{gs}$; this inferred $q$-vector can tentatively be interpreted as the preferred $q$-vector for the system
in the thermodynamic limit. It is not limited to the usual discrete values $2\pi n/L$ but can take {\it any} value between 0 and $2\pi$. When such an analysis is performed
for the {\it ground-state} we can directly determine $q^{gs}$ for the ground-state, a substantial advantage of
the present approach.
We identify non-trivial values of $\theta^{gs}$ with the presence
of long-range spiral order.
Our results seem to indicate a phase transition between two gapless
phases: long-range spiral order with a non-trivial ground-state $q^{gs}\neq 0 $
and a more subtle phase with $q^{gs}=0$ and antiferromagnetic
intrachain ordering. At the critical point, the minimum in twist-space abruptly jumps between
two distinct minima resulting in a similar jump in the inferred $q^{gs}$.
We very roughly estimate this transition to occur at a $J'_c \lesssim 0.5$ in the thermodynamic limit.
However, we note that the severe limitation in system sizes accessible to exact diagonalization makes
it difficult to draw a definitive conclusion concerning this transition in the thermodynamic limit.
The
interchain correlations of the latter phase are further explored with specific
attention paid to the competition between next-nearest chain antiferromagnetic
and ferromagnetic correlations as well as nearest chain incommensurate spiral
interactions. Our results, though not conclusive, seem to favor a CAF-like
ordering in this region. A schematic phase-diagram is shown in
Fig.~\ref{fig:Triangular_Lattice}.
It is important to realize that the behavior of actual {\it correlation functions} is {\it not only}
determined by $q^{gs}$. In fact,
following Ref.~\onlinecite{Essler_TBC}, we argue that the dominant part of the incommensurate transverse
correlations can be estimated by studying the {\it first excited state}. In general,
$q$-vectors, describing the {\it transverse correlations}, are best determined by locating the twist
minimizing the energy of the first excited-state. If this minimum is located we can infer a $q^1$-vector
from which $q$, describing the incommensurate correlations, can be determined through the relation $q^1=q+q^{gs}$.
It is quite possible to have $q\neq 0$, and thus clear
incommensurate (short-range) correlations, in the absence of long-range spiral order. Such short-range incommensurate correlations would then
typically be modified by an exponentially decaying envelope.
Hence, by studying the minima of mainly the first excited-state,
we are able to extract the incommensurate $q$-vectors describing correlations along both the inter- and
intrachain directions.
Our results for the intrachain $q$-vector describing the incommensurate
correlations are in {\it very close}
agreement with recent DMRG results~\cite{Weichselbaum_and_White_DMRG} on substantially larger systems,
a strong validation of our approach. Further, the extracted $q$-vector for the correlations {\it varies smoothly} with $J'$ through the
tentative phase transition described above where $q^{gs}$ abruptly jumps showing that incommensurate correlations are
present on either side of the transition.
The organization of this paper is as follows: In section~\ref{sec:Introduction}
we introduce the model and its classical phase diagram; this is then followed
by an introduction to the twisted boundary conditions used here in
section~\ref{sec:Twisted_Boundary_Conditions}, along with a detailed explanation of how $q^{gs}$ and $q$ are determined.
We then show our results in
section~\ref{sec:Results_and_Discussion} along with analysis of the two phases.
We conclude in section~\ref{sec:Conclusion_and_Summary}.
\subsection{The Anisotropic Triangular Lattice Heisenberg Model (ATLHM)}
The system under consideration, the anisotropic triangular lattice Heisenberg
model (ATLHM), is described by the following Hamiltonian:
\begin{align}
\label{Hamiltonian}
H=J \sum_{\mathbf{x},\mathbf{y}} \hat{S}_{\mathbf{x},\mathbf{y}}\cdot\hat{S}_{\mathbf{x}-1,\mathbf{y}} + J' \sum_{\mathbf{x},\mathbf{y}} \hat{S}_{\mathbf{x},\mathbf{y}} \cdot \left( \hat{S}_{\mathbf{x},\mathbf{y+1}} + \hat{S}_{\mathbf{x}-1,\mathbf{y+1}} \right)
\end{align}
where for simplicity of exposition all lattice spacings $a$ are taken to be 1
and where $J>0$ corresponds to antiferromagnetic interactions. A diagram can
be found in Fig.~\ref{fig:Triangular_Lattice}. In this paper, we are solely
concerned with the $J' < J$ region, particularly the region where $J' \ll J$.
For reference, the anisotropy found in Cs$_2$CuCl$_4$ is estimated to be $J'/J
\sim 0.3$.\cite{Coldea_Tsvelik_Neutron_Scatter_CsCuCl} Throughout this paper we
use the convention that a system of size $N$ is composed of $W$ chains (i.e.
width $W$) of length $L$ and is notated $N=W \times L$.
\subsection{The Classical System}
The classical limit of the ATLHM (i.e.\ $S \rightarrow \infty$) can be
straightforwardly solved.\cite{Yoshimori_1959,Villain_1959} The lowest energy
configuration can be determined by positing a spiral solution of the form
$\mathbf{S} = S \mathbf{u} e^{-i \mathbf{q} \cdot \mathbf{r}}$. This is
analogous to the local rotation of the quantization direction at each site
performed in spin-wave theory. The resulting energy expression is then
\begin{align}
E_{cl}(\mathbf{q})= J \cos{(\mathbf{q}_{J})} + J' \cos{(\mathbf{q}_{J'})} + J' \cos{(\mathbf{q}_{J'} - \mathbf{q}_J)}
\label{eq:Ecl}
\end{align}
where the $ \hat{S}_i^z \hat{S}_j^z $ term is neglected since it carries no
$\mathbf{q}$ dependence. For $J'<J$ we can find the minimum of Eq.~(\ref{eq:Ecl}) by first treating $\mathbf{q}_J$ as a fixed
parameter. In that case it immediately follows that the minimum with respect to $\mathbf{q}_{J'}$ is at
$2\mathbf{q}_{J'}=\mathbf{q}_J. $
Thus, we get
\begin{align}
E_{cl}(\mathbf{q})= J \cos{(\mathbf{q}_J)} +2 J' \cos{\left( \frac{ \mathbf{q}_J}{2} \right)}.
\end{align}
The global minimum for $J'<J$ can now be found by minimizing this function with respect to $\mathbf{q}_J$. Solving with the
use of trigonometric identities yields the classical ground-state solutions:
\begin{align}
\mathbf{q}_J = 2 \arccos \left( -\frac{J'}{2J}\right), \; \mathbf{q}_{J'} =\arccos \left( -\frac{J'}{2J}\right).
\end{align}
However, for $J'/J \in ( 0 , 1 ]$ the $\mathbf{q}_J$ solution goes
from $\pi$ to $4 \pi /3$; we therefore choose a different solution
$\tilde{\mathbf{q}}_J = 2\pi - \mathbf{q}_J$, corresponding to a different
choice of branch, which ranges from the more physical $\pi$ to $2 \pi /3$. The
$\mathbf{q}_{J'}$ solution needs no such adjustment. Thus the final classical
solutions are
\begin{align}
q_J = 2\pi -2 \arccos \left( -\frac{J'}{2J}\right), \; q_{J'} =\arccos \left( -\frac{J'}{2J}\right).
\end{align}
where we no longer emphasize $q_J$ and $q_{J'}$ as vectors. In the limit of
$J'/J \rightarrow 0$ we find $q_J=\pi$, $q_{J'}=\pi/2$, consistent with
antiferromagnetic chains with only perturbative coupling.
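As a sanity check, the classical minimum of Eq.~(\ref{eq:Ecl}) can be verified numerically. The short sketch below (the helper names \texttt{e\_cl} and \texttt{analytic\_q} are ours, purely illustrative) compares a brute-force grid minimization of the classical energy against the analytic solution satisfying $2\mathbf{q}_{J'}=\mathbf{q}_J$, prior to the branch adjustment, at $J'/J=0.3$:

```python
import numpy as np

def e_cl(qJ, qJp, J=1.0, Jp=0.3):
    # Classical spiral energy of Eq. (E_cl), constant prefactors dropped.
    return J * np.cos(qJ) + Jp * np.cos(qJp) + Jp * np.cos(qJp - qJ)

def analytic_q(J=1.0, Jp=0.3):
    # Analytic minimum before the branch adjustment: 2 q_J' = q_J.
    qJp = np.arccos(-Jp / (2.0 * J))
    return 2.0 * qJp, qJp

# Brute-force minimization on a dense grid over [0, 2*pi]^2 as a cross-check.
qs = np.linspace(0.0, 2.0 * np.pi, 801)
QJ, QJP = np.meshgrid(qs, qs, indexing="ij")
E = e_cl(QJ, QJP)
qJ_star, qJp_star = analytic_q()
```

The grid minimum agrees with $E_{cl}(q_J^*, q_{J'}^*)$ to the grid resolution; note that the minimum is doubly degenerate, since $E_{cl}(\mathbf{q})=E_{cl}(-\mathbf{q})$ (the spiral and its mirror), so a grid search may land on either partner.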
\section{Twisted Boundary Conditions}
\label{sec:Twisted_Boundary_Conditions}
The $J'\leq J$ region of the ATLHM is dominated by both incommensurate spiral
ordering and short-range incommensurate spiral correlations. These
long-wavelength, incommensurate, correlations present formidable challenges to
numerical analysis, since attempts to capture physics with wavelengths of
$O(10,000) - O(\infty)$ using a system of length $\sim O(10)$ will undoubtedly
be dominated by extreme finite-size effects. Even the most recent 2D DMRG
results, allowing for the largest systems, can only probe systems of $L \sim 100$
when at $J'=0.2$ the wavelength of the spiral correlations is expected to be on
the order of $10,000$.\cite{Weichselbaum_and_White_DMRG} Thus, it is little
wonder that early numerical work produced such disputed
results.\cite{Weng_Weng_Bursill_ED,Becca_Sorella_ED}
Many of these finite-size effects can be successfully mitigated through a
careful consideration of the boundary conditions. Previous numerical
studies\cite{Weng_Weng_Bursill_ED,Becca_Sorella_ED,Weichselbaum_and_White_DMRG}
on the ATLHM were produced using either open, periodic or mixed boundary
conditions. Such boundary conditions will strongly distort the physics of an
incommensurate system in favour of an ordering which is commensurate with the
system size, only admitting the ordering $q$-vectors
\begin{align}
q_n = \frac{2 \pi}{L} n \notag
\end{align}
where $L$ is the length of the system in a given direction. It is this
tendency to ``lock'' a long-wavelength structure into a much smaller
box that produces spurious, unphysical results such as sudden parity
transitions (a point discussed in greater detail below).\cite{Weng_Weng_Bursill_ED,Becca_Sorella_ED}
Thus to greatly reduce
this sort of error our calculations were performed using twisted boundary
conditions (TBC).
When using twisted boundary conditions,
spin interactions which cross the periodic boundary of the otherwise translationally invariant system
become rotated in
the $x-y$ plane by an angle $\theta$. This corresponds to the boundary
conditions
\begin{eqnarray}
S^-_{L+1}=e^{-i \theta} S^-_{1}, \; S^+_{L+1}=e^{i \theta} S^+_{1}
\end{eqnarray}
or, equivalently,
\begin{eqnarray}
S^+_{L} S^-_1 \rightarrow S^+_{L} S^-_1 e^{-i\theta},\;S^-_{L} S^+_1 \rightarrow S^-_{L} S^+_1 e^{i\theta}.
\end{eqnarray}
where for simplicity we discuss only one dimension of the
system. Generalization to higher dimensions is straightforward, although care has to be taken in order to
define positive and negative $\theta$ consistently when a twist is introduced along several bonds (see Fig.~\ref{fig:Triangular_Lattice}).
Physically, the twist corresponds to a spin current where an $\uparrow$-spin
($\downarrow$-spin) acquires an extra phase when traversing the periodic
boundary from the left (right). Alternatively, the Heisenberg system can be
mapped to one of $N_{\uparrow}$ fermions with the initial Jordan-Wigner
transformation $S^+_i = c^{\dagger}_i e^{i \pi \sum_{i<j} c^{\dagger}_j
c_j}$, followed by the U(1) gauge transformation $c^{\dagger}_j =
f^{\dagger}_j e^{i\theta j/L}$. The interpretation is then of a
periodic system, on a ring, of $N_{\uparrow}$ up spins threaded by a flux
$\theta$.
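To make these boundary conditions concrete, here is a minimal exact-diagonalization sketch, assuming a nearest-neighbour spin-$1/2$ Heisenberg ring in which only the bond crossing the boundary carries the twist. This is an illustration of the boundary conditions above, not the code used for the results in this paper; the helper names are ours:

```python
import numpy as np

# Single-site spin-1/2 operators.
sp = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)  # S^+
sm = sp.conj().T                                        # S^-
sz = np.diag([0.5, -0.5]).astype(complex)               # S^z

def site_op(o, i, L):
    """Embed the single-site operator o at site i of an L-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(L):
        out = np.kron(out, o if j == i else np.eye(2))
    return out

def h_twisted(L, J=1.0, theta=0.0):
    """Heisenberg ring; only the bond crossing the boundary picks up
    e^{+/- i theta} on its transverse (S^+ S^-) part."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L):
        j = (i + 1) % L
        phase = np.exp(1j * theta) if j < i else 1.0  # boundary bond only
        H += J * (0.5 * (phase * site_op(sp, i, L) @ site_op(sm, j, L)
                         + np.conj(phase) * site_op(sm, i, L) @ site_op(sp, j, L))
                  + site_op(sz, i, L) @ site_op(sz, j, L))
    return H

def e0(L, theta=0.0, J=1.0):
    """Ground-state energy as a function of the boundary twist."""
    return np.linalg.eigvalsh(h_twisted(L, J, theta)).min()
```

For $L=4$ and $\theta=0$ this reproduces the exact ground-state energy $-2J$ of the four-site Heisenberg ring, and the spectrum is symmetric under $\theta\rightarrow-\theta$, as expected for a pure boundary twist.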
\subsection{$J-J_2$ Spin Chain}
For an initially translationally invariant system of linear size $L$, a twist of $\theta$ imposed at the boundary
can in general be distributed throughout the system by introducing a twist of $\theta/L$ at each bond via a site-dependent gauge transformation.
We thereby obtain a model with periodic boundary conditions (PBC). Let us take
the well known $J-J_2$ spin chain model as an example:
the well known $J-J_2$ spin chain model as an example:
\begin{align}
\label{J1J2}
H=J \sum_{i} \hat{S}_{i}\cdot\hat{S}_{i+1} + J_2 \sum_{i} \hat{S}_{i} \cdot \hat{S}_{i+2}.
\end{align}
This model is closely related to the ATLHM and was studied using twisted boundary conditions in
Ref.~\onlinecite{Essler_TBC} where a twist of $\theta$ was introduced at the boundary in the terms coupling sites $[L,1]$
as well as $[L-1,1]$ and $[L,2]$. We can in this case define a translationally invariant model with the {\it exact} same
energy spectrum if we instead introduce a twist of $\theta/L$ at each $[i,i+1]$ bond along with a twist of $2\theta/L$ at each
$[i,i+2]$ bond. This latter model is now manifestly translationally invariant with periodic boundary conditions and any many-body state can then be characterized
by a many-body momentum:
\begin{equation}
\tilde q =\frac{2\pi n}{L},\ \ n=0,1,\ldots,L-1.
\end{equation}
To be explicit, if $T_a$ denotes the operator translating one lattice spacing $a$ in real space, then $T_a\Psi_{\mathrm{PBC}}=\exp(i\tilde q a)\Psi_{\mathrm{PBC}}$ with
$\Psi_{\mathrm{PBC}}$ the wave-function of the translationally invariant model with periodic boundary conditions.
We can then determine the energy as a function of $\theta$ as well as the many-body momentum of the corresponding state. As an illustration,
results are shown in Fig.~\ref{fig:EM12} for the lowest lying $S=1$ excitation of the $J-J_2$ chain with $L=12$ at $J_2=J$,
displaying the characteristic parabolic shape of the energy.
\begin{figure}[t]
\includegraphics[width=\linewidth]{EM12}
\caption{\label{fig:EM12} Energy and momentum of the lowest lying $S=1$ state for the
$J-J_2$ chain at $J_2/J=1.$ The energy minima occur at $\theta=0.6299\pi,1.3701\pi$.
At $\theta=\pi$ the lowest lying state changes from having $\tilde q = 6\pi/12$ to $\tilde q =8\pi/12$.
}
\end{figure}
In this case the first energy minimum occurs at $\theta_{min}=0.6299\pi$ where $\tilde q=\pi/2$.
We then make the quasi-classical (phenomenological) assumption that the main effect of the twist is to modify the state's natural ordering
vector $q$ to fit with the many-body momentum $\tilde q$ in the following manner:
\begin{equation}
\tilde q=q\pm\frac{\theta}{L}=\frac{2\pi n}{L}.
\label{eq:q}
\end{equation}
In the present case we immediately find
\begin{equation}
q=\pi/2+0.6299\pi/12.
\end{equation}
The second minimum at $\theta_{min}=2\pi-0.6299\pi$
and $\tilde q=2\pi/3$ yields the same
\begin{equation}
q=2\pi/3-(2\pi-0.6299\pi)/12=\pi/2+0.6299\pi/12.
\end{equation}
In the thermodynamic limit the natural ordering vector $q$ is then simply given by $\tilde q$, and any effect of the twist $\theta$ upon the determination
of $q$ should be negligible, as expressed by Eq.~(\ref{eq:q}).
This analysis differs in some details from Ref.~\onlinecite{Essler_TBC} but yields essentially identical results for the $J-J_2$ chain.
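The equivalence of the two minima amounts to a two-line arithmetic check of Eq.~(\ref{eq:q}); the variable names below are illustrative:

```python
import numpy as np

L = 12
theta_min = 0.6299 * np.pi   # minimizing twist quoted in the text

# First minimum: tilde q = 2*pi*3/L = pi/2; the twist enters with a plus sign.
q_first = 2 * np.pi * 3 / L + theta_min / L

# Second minimum at 2*pi - theta_min with tilde q = 2*pi*4/L = 2*pi/3;
# here the twist enters Eq. (q) with the opposite sign.
q_second = 2 * np.pi * 4 / L - (2 * np.pi - theta_min) / L
```

Both minima yield the same inferred ordering vector $q=\pi/2+0.6299\pi/12$.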
One may also consider the momentum of the model {\it without} translational invariance and twisted boundary conditions. In this case we find for the wave-function $\Psi_{\mathrm{TBC}}$
the relation $T_a\Psi_{\mathrm{TBC}}=\exp(i\alpha a)\Psi_{\mathrm{TBC}}$ with $\alpha=\tilde q+\theta N_\uparrow/L$ where $\tilde q$ is the many-body momentum
of the translationally invariant system. Here $N_\uparrow$ denotes the number of $\uparrow$ spins in the state under consideration.
If one, at the classical level, argues that $\theta$ is the angle needed for $qL$ to equal an integer number of complete
turns one arrives at the same relation between $q$ and $\theta$:
\begin{align}
q L \pm\theta = 2 \pi n,
\end{align}
In this equation, as well as in Eq.~(\ref{eq:q}), the $\pm$ signifies whether $q$ turns in the same direction
as $\theta$ as we move along the chain.
Hence the presence of the twist $\theta$
permits a continuum of ordering $q$-vectors
to ``fit'' into the system of linear size $L$, where:
\begin{align}
q = \frac{1}{L} \left( 2 \pi n\pm \theta \right).
\label{eq:thetatoq}
\end{align}
A simple illustration of this is shown in Fig.~\ref{fig:Arrow_Diagram} for $q=2\pi/3$.
\begin{figure}[t]
\includegraphics[scale=0.5]{Arrow_Diagram}
\caption{\label{fig:Arrow_Diagram}
This diagram shows how a $q=2 \pi/3$
ordering can be made to ``fit'' into a system of length 4 by twisting by $4
\pi/3$ at the boundary. These twisted boundary conditions then allow any
incommensurate ordering to fit in any sized system. }
\end{figure}
In this case the ordering can be made to ``fit'' a system of length $L=4$ if a twist
$\theta=4\pi/3$ is introduced as indicated in Fig.~\ref{fig:Arrow_Diagram}. From $\theta$
we can then infer $q=(2\pi\times 2-4\pi/3)/4 = 2\pi/3.$ In this example, the wavelength
of the twist ($\lambda=3$) is shorter than the linear length of the system $L=4$ and we
have to use $n=2$ in Eq.~(\ref{eq:thetatoq}) in order to obtain the correct $q$. In analogy with
the example of the $J-J_2$ chain we would therefore expect the energy minimum for $\theta=4\pi/3$ to
occur for the state with many-body momentum $\tilde q=2\pi\times 2/4=\pi$. Correspondingly we would expect
another minimum at $\theta=2\pi/3$ for a state with many-body momentum $\tilde q=2\pi/4=\pi/2$.
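Eq.~(\ref{eq:thetatoq}) applied to this example can be checked directly; the helper \texttt{q\_from\_twist} below is illustrative:

```python
import numpy as np

L, theta = 4, 4 * np.pi / 3   # the example of the arrow diagram

def q_from_twist(n, theta, L, sign=+1):
    # Eq. (thetatoq): q = (2*pi*n + sign*theta) / L
    return (2 * np.pi * n + sign * theta) / L

q_a = q_from_twist(2, theta, L, sign=-1)              # minimum at theta = 4*pi/3
q_b = q_from_twist(1, 2 * np.pi - theta, L, sign=+1)  # companion minimum at 2*pi/3
```

Both minima recover the same ordering vector $q=2\pi/3$.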
In practical studies it is not always feasible to use a translationally invariant system and explicitly determine
the many-body $\tilde q$ of the state corresponding to the minimizing twist, and thereby $n$ in Eqs.~(\ref{eq:q}) and (\ref{eq:thetatoq});
for most of the results presented here we have not done so. However, it is almost always possible
to infer the correct $n$ to be used in Eqs.~(\ref{eq:q}) and (\ref{eq:thetatoq}) by simple continuity from known results and
other expected behavior such as $qL\ll 1$.
\subsection{The ATLHM}
We now turn to a discussion of the approach we have taken to apply twisted boundary conditions to the
ATLHM.
With the analysis of the classical system in mind, we include \emph{two} twists in our
analysis of the ATLHM. The first, $\theta_J$, is associated with a
twisted boundary in the $J$ direction. The second, $\theta_{J'}$, is then
associated with the boundary in the $J'$ direction (see Fig.~\ref{fig:Triangular_Lattice}).
With both twists implemented the Hamiltonian becomes:
\begin{align}
\label{Full_Hamiltonian}
&H_{\theta}=J\sum_{\mathbf{x}>1,\mathbf{y}} \hat{S}_{\mathbf{x},\mathbf{y}}\cdot\hat{S}_{\mathbf{x}-1,\mathbf{y}} + J' \sum_{\mathbf{x},\mathbf{y}<W} \hat{S}_{\mathbf{x},\mathbf{y}} \cdot \left( \hat{S}_{\mathbf{x},\mathbf{y+1}} + \hat{S}_{\mathbf{x}-1,\mathbf{y+1}} \right) \notag \\
&+ \sum_{\mathbf{y}<W}\hat{S}^+_{1,\mathbf{y}} \left( J \hat{S}^-_{L,\mathbf{y}} + J' \hat{S}^-_{L,\mathbf{y+1}}\right) e^{i \theta_J} + J' \sum_{\mathbf{x}>1} \hat{S}^+_{\mathbf{x},W} \hat{S}^-_{\mathbf{x},1} e^{i \theta_{J'}} \notag \\
&+ J' \hat{S}^+_{1,W} \hat{S}^-_{L,1} e^{i(\theta_J + \theta_{J'})} + H.c. \notag \\
&+ \sum_{\mathbf{y}<W}\hat{S}^z_{1,\mathbf{y}} \left( J \hat{S}^z_{L,\mathbf{y}} + J' \hat{S}^z_{L,\mathbf{y+1}}\right) + J' \sum_{\mathbf{x}>1} \hat{S}^z_{\mathbf{x},W} \hat{S}^z_{\mathbf{x},1}
\end{align}
Although this Hamiltonian looks quite cumbersome when written out explicitly,
conceptually it is very simple. If a left-moving $\downarrow$-spin traverses,
either horizontally or diagonally, the left periodic boundary it is rotated in
the $x-y$ plane by $\theta_J$. If an upward-moving $\downarrow$-spin
traverses, either vertically or diagonally, the upper periodic boundary it is
rotated in the $x-y$ plane by $\theta_{J'}$. If a $\downarrow$-spin traverses
the upper left periodic boundary diagonally, thus crossing both twisted
boundaries, it is rotated in the $x-y$ plane by $(\theta_J + \theta_{J'})$.
Spins in the bulk, as well as the $z$-components of all spins, are unaffected by
the boundary twists.
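To make this concrete, the following minimal exact-diagonalization sketch (our own illustration, not code from the present study; it assumes \texttt{numpy} is available) implements the one-dimensional analogue of the construction above: a spin-1/2 Heisenberg ring whose single wrap-around bond carries the twist $e^{i\theta}$ on its spin-flip terms, with the $S^zS^z$ part left untouched.

```python
import numpy as np

def twisted_ring(L, J, theta):
    """Spin-1/2 Heisenberg ring; the bond (L-1, 0) carries the twist e^{i*theta}.

    Basis states are integers; bit i set means an up spin on site i.
    """
    dim = 2 ** L
    H = np.zeros((dim, dim), dtype=complex)
    for state in range(dim):
        for i in range(L):
            j = (i + 1) % L
            # only the wrap-around bond carries the twist phase
            phase = np.exp(1j * theta) if j == 0 else 1.0
            si = (state >> i) & 1
            sj = (state >> j) & 1
            # S^z_i S^z_j part, untouched by the twist
            H[state, state] += J * (si - 0.5) * (sj - 0.5)
            if si != sj:
                flipped = state ^ (1 << i) ^ (1 << j)
                # (J/2)(e^{i theta_b} S^+_j S^-_i + e^{-i theta_b} S^+_i S^-_j)
                amp = 0.5 * J * (phase if si == 1 else np.conj(phase))
                H[flipped, state] += amp
    return H

def ground_energy(L, J, theta):
    return np.linalg.eigvalsh(twisted_ring(L, J, theta))[0]
```

The construction can be checked against the exactly known ground-state energy $E_0=-2J$ of the four-site ring, and against the symmetry $E(\theta)=E(2\pi-\theta)$ that follows from complex conjugation of the Hamiltonian.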
These twists, which explicitly break the global SU(2) spin symmetry, are
identical to a twist of $\theta_J/L$ on \emph{each} horizontal and north-west to south-east bond
along with a twist of $\theta_{J'}/W$ on \emph{each} south-west to north-east and north-west to south-east bond.
The north-west to south-east bonds therefore receive a twist of $\theta_{J}/L+\theta_{J'}/W$ for a system of dimensions
$W\times L$. (See Fig.~\ref{fig:Triangular_Lattice}.) If this is done one can work with an equivalent translationally invariant model.
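This equivalence can be made explicit. With the orientation conventions used for the boundary terms in Eq.~(\ref{Full_Hamiltonian}), one may apply the standard unitary (a sketch; the relative sign in the exponent simply tracks which end of each twisted boundary bond carries $\hat{S}^+$)
\begin{equation}
U=\prod_{\mathbf{x},\mathbf{y}} \exp\left[i\left(\frac{\mathbf{x}\,\theta_J}{L}-\frac{\mathbf{y}\,\theta_{J'}}{W}\right)\hat{S}^z_{\mathbf{x},\mathbf{y}}\right],
\end{equation}
under which $\hat{S}^+_{\mathbf{x},\mathbf{y}}\to e^{i(\mathbf{x}\theta_J/L-\mathbf{y}\theta_{J'}/W)}\hat{S}^+_{\mathbf{x},\mathbf{y}}$. The boundary factors $e^{i\theta_J}$ and $e^{i\theta_{J'}}$ are then absorbed into uniform bond twists of precisely the sizes quoted above.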
However, a
twist-per-site approach was found to be less fruitful for such small systems,
and the explicit SU(2) symmetry breaking will play an important role in forcing
an $S^z$ quantization direction, as discussed below.
It is worth noting that the second twist, $\theta_{J'}$, is rarely (if ever)
implemented in studies with twisted boundary conditions. Indeed, most existing
numerical studies of the ATLHM fail to consider the possibility of
incommensurate \emph{inter}chain correlations at all, often enforcing periodic
or open boundary conditions in the interchain direction even when other, more
elaborate, boundary conditions are used along chains. The parameter
$\theta_{J'}$, then, serves as a tool to explore such new physics.
Our complete Hamiltonian, boundary twists included, then has three free
parameters: the coupling ratio $\frac{J'}{J}$, the intrachain boundary twist
$\theta_J$ and the interchain boundary twist $\theta_{J'}$. The numerical task
then becomes to explore the two-dimensional landscape $(\theta_J,
\theta_{J'})$, at a given $J'/J$, to find the twists which minimize the
ground-state energy. From these twists the $q$-vectors $q_J$ and $q_{J'}$ can
then be extracted using the following generalization of
Eqs.~(\ref{eq:q}) and (\ref{eq:thetatoq}):
\begin{eqnarray}
\label{eq:2Dthetatoq}
q_J=\vec q_1\cdot \vec a_1 &=&\frac{2\pi n_1}{L}\pm \frac{\theta_J}{L}\nonumber\\
q_{J'}=\vec q_2\cdot \vec a_2 &=&\frac{2\pi n_2}{W}\pm \frac{\theta_{J'}}{W}.
\end{eqnarray}
These equations follow since the twists are applied in {\it direct} space and reflect the behavior of the system
upon $L$ and $W$ translations along the directions $\vec a_1$ and $\vec a_2$ in direct space.
Our notation here for a system of $W$ chains of length $L$ is the following: As indicated in Fig.~\ref{fig:Triangular_Lattice} we
use basis vectors $\vec a_1=a(1,0)$ and $\vec a_2=a(1/2,\sqrt{3}/2)$ for the direct lattice.
As usual the reciprocal lattice vectors are then given by $\vec b_1=4\pi(\sqrt{3}/2,-1/2)/(a\sqrt{3})$ and
$\vec b_2=4\pi(0,1)/(a\sqrt{3})$. If we now consider the translationally invariant model with twists
of $\theta_J/L$ and $\theta_{J'}/W$ along the bonds as described above, the many-body momentum of the
translationally invariant system with the imposed twist is:
\begin{equation}
\vec{\tilde q} = \frac{n_1}{L} \vec b_1+\frac{ n_2}{W}\vec b_2.
\end{equation}
Likewise, in our notation, we have:
\begin{equation}
\vec q = \vec q_1+\vec q_2 = \frac{q_J}{2\pi}\vec b_1+\frac{q_{J'}}{2\pi}\vec b_2.
\end{equation}
Hence, the application of
the twists allows us to determine the components of $\vec q$ along $\vec b_1$ and $\vec b_2$.
\begin{figure}[t]
\includegraphics[scale=0.5]{Jp1Sz2sweep}
\caption{\label{fig:Jp1sweep} The energy, $E$, as a function of the
two twists $\theta_J$ and $\theta_{J'}$. Results are shown for the lowest-lying $S=1$ state
of a $4\times4$ system with $J'/J=1$. The two identical minima occur for $(\theta_J,\theta_{J'})=(2\pi/3,4\pi/3)$ and $(4\pi/3,2\pi/3)$.
}
\end{figure}
As an illustration we show in Fig.~\ref{fig:Jp1sweep} results for the $S=1$ ground-state energy of a $4\times 4$ system with $J'/J=1$.
Two identical minima are clearly present at $(\theta_J,\theta_{J'})=(2\pi/3,4\pi/3)$ and $(4\pi/3,2\pi/3)$. In this case we have done
simulations using a translationally invariant model as outlined above and explicitly determined the many-body momentum, $\vec{\tilde q}$, of the state
corresponding to the minima. Here we find $(2\pi n_1/L,2\pi n_2/W)=(\pi/2,\pi)$ and $(\pi,\pi/2)$ respectively. Thus, following the analysis
at the end of the previous section, we find $q_J=2\pi/3$. The minima in the $S=0$ ground-state occur at the {\it exact} same $(\theta_J,\theta_{J'})$, but in this
case with $n_1=n_2=0$.
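The arithmetic in this example can be checked directly from Eq.~(\ref{eq:2Dthetatoq}) (a trivial sketch; the $\pm$ branch is chosen per component):

```python
import math

def q_from_twist(n, size, theta, sign=+1):
    """q = (2*pi*n + sign*theta)/size for a system of linear size `size`."""
    return (2 * math.pi * n + sign * theta) / size

# 4x4 system at J'/J = 1: minimizing twists (theta_J, theta_J') = (2*pi/3, 4*pi/3),
# many-body momentum (2*pi*n1/L, 2*pi*n2/W) = (pi/2, pi), i.e. n1 = 1, n2 = 2.
q_J = q_from_twist(n=1, size=4, theta=2 * math.pi / 3, sign=+1)
q_Jp = q_from_twist(n=2, size=4, theta=4 * math.pi / 3, sign=-1)
```

Both components come out as $2\pi/3$, consistent with the 120-degree order expected at the isotropic point.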
However, there is an additional complication in such an analysis brought on by
such small systems. As has very clearly been shown in
Ref.~\onlinecite{Weichselbaum_and_White_DMRG}, much of the $J'/J < 1$ region is
dominated by \emph{antiferromagnetic} correlations superimposed on much subtler
incommensurate spiral correlations. Thus, if the system can be made to adopt a
specific quantization direction $z$, through, say, a perturbative magnetic
field on a single site as was done in
Ref.~\onlinecite{Weichselbaum_and_White_DMRG}, we expect $\langle GS \vert
\hat{S}_{\mathbf{x}}^z \hat{S}_{\mathbf{x+x'}}^z \vert GS \rangle$ correlations
to be completely dominated by antiferromagnetism with a small canted
incommensurate ordering showing in the transverse correlations:
\begin{align}
&\langle GS \vert \hat{S}^x_{\mathbf{x}} \hat{S}^x_{\mathbf{x+x'}} + \hat{S}^y_{\mathbf{x}} \hat{S}^y_{\mathbf{x+x'}} \vert GS \rangle \notag \\
&= \left\langle \tfrac{1}{2} \left( \hat{S}^+_{\mathbf{x}} \hat{S}^-_{\mathbf{x+x'}}+ \hat{S}^-_{\mathbf{x}} \hat{S}^+_{\mathbf{x+x'}}\right)\right\rangle \propto \langle\hat{S}^+_{\mathbf{x}} \hat{S}^-_{\mathbf{x+x'}} \rangle.
\end{align}
In the absence of an explicit symmetry breaking term it is then extremely
difficult to separate the spiral correlations from the ``sea'' of
antiferromagnetic ones. This difficulty is addressed by twisted boundary
conditions as can be seen through consideration of the following argument,
originally detailed and validated in Ref.~\onlinecite{Essler_TBC}. First, with
the addition of a twist in the $x-y$ plane the global spin SU(2) symmetry is
broken and a unique $z$-quantization is picked out in a direction normal to the
system, since a generic twist would frustrate antiferromagnetic ordering
in-plane. It is then convenient to rewrite the transverse correlations,
$\langle\hat{S}^+_{\mathbf{x}} \hat{S}^-_{\mathbf{x+x'}} \rangle$, in the
more intuitive Fourier transformed form
\begin{align}
&=\left\langle\left( \frac{1}{\sqrt{L}} \sum_{q'} e^{i q' x} \hat{S}^+_{q'}\right) \left( \frac{1}{\sqrt{L}} \sum_{q} e^{-i q (x+x')} \hat{S}^-_q\right) \right\rangle \notag \\
& = \frac{1}{L} \sum_{q,q'} e^{-i q x'} e^{i (q'-q) x} \left\langle \hat{S}^+_{q'} \left( \sum_m \vert m \rangle \langle m \vert \right) \hat{S}^-_{q} \right\rangle \notag \\
&= \frac{1}{L} \sum_q \sum_{m} e^{-i q x'} \vert \langle m \vert S^-_q \vert GS \rangle \vert^2,
\end{align}
where translational invariance restricts the double sum to $q'=q$ in the final line, and where $S^-_q$ can now be physically interpreted as a spin-wave destruction
operator. If the ground-state lies in the total $S^z=0$ sector, which it does
for an antiferromagnetic system of even size, then $\langle GS \vert
S^-_q\vert GS \rangle=\langle GS \vert \left( \frac{1}{\sqrt{L}} \sum_{\mathbf{x}} e^{-i q
x} S_{\mathbf{x}}^-\right) \vert GS \rangle=0$ and the transverse
correlations can be rewritten as
\begin{align}
\langle\hat{S}^+_{\mathbf{x}} \hat{S}^-_{\mathbf{x+x'}} \rangle = \frac{1}{L} \sum_q \sum_{m \neq GS} e^{-i q x'} \vert \langle m \vert S^-_q \vert GS \rangle \vert^2.
\end{align}
As usual, the $S^-_q$ or $S^-_x$ operators take the total $S^z=0$ ground-state
into the total $S^z=-1$ sector. Additionally, if the ground-state has an
overall ordering vector $q^{gs}$, then the only states $\vert m \rangle$ with
non-zero $\langle m \vert S^-_q \vert GS \rangle$, and thus the only states
contributing to the transverse correlations, are those with momentum
$q^1=q^{gs}+q$. If one then assumes that only the first excited state in the
total $S^z=1$ sector dominates this sum, one has a method to extract the
incommensurate $q$-vector, $q$, as well as the ground-state momentum $q^{gs}$. First one
finds the $(\theta_J, \theta_{J'})$ which minimizes the ground-state
energy of the total $S^z=0$ sector, yielding
\begin{equation}
(q^{gs}_J,q^{gs}_{J'}) \ \ (S^z=0).
\end{equation}
Notice that our two twists, $\theta_J$ and
$\theta_{J'}$, yield two $q$-vectors, which we denote $q_J$ and $q_{J'}$.
After
finding the minimum in the total $S^z=0$ twist-space the procedure is then
repeated in the total $S^z=1$ twist-space
yielding
\begin{equation}
q^1_J=q^{gs}_J+q_J\ \mathrm{and}\ \ q^1_{J'}=q^{gs}_{J'}+q_{J'}\ \ (S^z=1).
\end{equation}
A demonstration of
this can be found in Ref.~\onlinecite{Essler_TBC}.
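The spectral identity above can also be checked numerically on a small system. The self-contained sketch below (our own illustration, assuming \texttt{numpy}; a four-site untwisted Heisenberg ring) verifies both that $\langle GS\vert S^-_q\vert GS\rangle=0$ and that the $q'=q$ spectral sum reproduces the directly computed transverse correlator.

```python
import numpy as np

L = 4  # untwisted spin-1/2 Heisenberg ring, small enough for full diagonalization

def heisenberg_ring(L):
    dim = 2 ** L
    H = np.zeros((dim, dim))
    for s in range(dim):
        for i in range(L):
            j = (i + 1) % L
            si, sj = (s >> i) & 1, (s >> j) & 1
            H[s, s] += (si - 0.5) * (sj - 0.5)
            if si != sj:
                H[s ^ (1 << i) ^ (1 << j), s] += 0.5
    return H

def s_minus_site(L, x):
    """S^-_x: lowers an up spin (set bit) on site x."""
    dim = 2 ** L
    op = np.zeros((dim, dim))
    for s in range(dim):
        if (s >> x) & 1:
            op[s ^ (1 << x), s] = 1.0
    return op

def s_minus_q(L, q):
    """S^-_q = L^{-1/2} sum_x exp(-i q x) S^-_x."""
    return sum(np.exp(-1j * q * x) * s_minus_site(L, x)
               for x in range(L)) / np.sqrt(L)

energies, vecs = np.linalg.eigh(heisenberg_ring(L))
gs = vecs[:, 0]  # unique singlet ground state

# <GS|S^-_q|GS> = 0: S^-_q maps total S^z = 0 into the orthogonal S^z = -1 sector
for n in range(L):
    assert abs(gs @ s_minus_q(L, 2 * np.pi * n / L) @ gs) < 1e-10

# Direct transverse correlator <S^+_0 S^-_{x'}> versus the q' = q spectral sum
xp = 1
direct = gs @ s_minus_site(L, 0).T @ s_minus_site(L, xp) @ gs
spectral = 0.0
for n in range(L):
    q = 2 * np.pi * n / L
    amps = vecs.conj().T @ (s_minus_q(L, q) @ gs)  # <m|S^-_q|GS> for every |m>
    spectral += np.exp(-1j * q * xp) * np.sum(np.abs(amps) ** 2) / L
assert abs(direct - spectral) < 1e-10
```

The agreement here relies on the translational invariance of the untwisted ring, which is exactly what forces $q'=q$ in the derivation above.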
As an illustration of the procedure we show
results for $E(\theta_J,\theta_{J'})$ for the
first excited state of a $4\times 6$ system with $J'/J=0.6$ in Fig.~\ref{fig:Sample_Energy_vs_Theta}. Note that for our subsequent results the minima
are determined on a much finer grid. We also note that in both Fig.~\ref{fig:Jp1sweep} and Fig.~\ref{fig:Sample_Energy_vs_Theta} distinct minima occur for
values of $\theta>\pi$. This is due to the non-zero $\theta_{J'}$, which lifts the symmetry with respect to $\theta=\pi$ visible in Fig.~\ref{fig:EM12}.
This method of minimizing the ground-state energy in both the total $S^z=0$ and
$S^z=1$ sectors, also allows one to compute the \emph{spin gap},
$\Delta$, between these states. Thus, with knowledge of the spin gap, the
ground-state long-range ordering $q$-vectors as well as the incommensurate
short range $q$-vectors, one can imagine
two situations of interest that could arise in the ATLHM.
\begin{figure}[t]
\includegraphics[scale=0.5]{Sample_Energy_vs_Theta.eps}
\caption{\label{fig:Sample_Energy_vs_Theta}
(Color online.) Energy vs. $\theta_J$: The incommensurate and ground-state
wavevectors $q_J^{in}$/$q_{J'}^{in}$ and $q_J^{gs}$/$q_{J'}^{gs}$ are
obtained by minimizing the ground-state energy in the total-$S^z=1$ and
total-$S^z=0$ subspaces respectively in terms of the boundary twists
$\theta_J$ and $\theta_{J'}$. This figure shows sample values of energy
vs. $\theta_J$ for different values of $\theta_{J'}$ for $N= 4 \times 6$ and
$J'/J=0.6$ in the total-$S^z=1$ subspace. The true minimizing $(\theta_{J}, \theta_{J'})$ were determined
on a much finer grid to an accuracy of 0.001 in the twist, and for this case ($J'=0.6$) were found to be $(4.775, 0.550)$ (marked by an arrow), corresponding to the $q$-vectors $(q_J, q_{J'})=(2.890,1.708)$. This figure
merely serves as an illustration. }
\end{figure}
\subsection{Case 1: $q^{gs} \neq 0$ or $\pi$, $q = 0$ (incommensurate spiral order)}
\label{subsec:Case_1}
In the case where the true ground-state (i.e. that in the total $S^z=0$ sector)
is minimized by incommensurate $q$-vectors $q^{gs}_J$ and $q^{gs}_{J'}$ we then
have incommensurate long-range order related to a classical incommensurate
spiral.
In such a region we also expect the spin gap ($\Delta$) to vanish owing to the
gapless magnon excitations about the spiral order which accompany U(1) symmetry
breaking. Note that the symmetry broken \emph{is} U(1), since the initial
SU(2) symmetry has already been reduced to U(1) when the twist terms were
added. This would coincide, in the limit of infinite system size, with
long-range correlations of the form
\begin{align}
\left\langle \hat{S}_{\mathbf{x},\mathbf{y}}\hat{S}_{\mathbf{x} + \mathbf{x'},\mathbf{y}}\right\rangle \underset{x' \rightarrow \infty}{\approx} e^{i q_{J}^{gs} x' } \left\langle \hat{S}_{\mathbf{x}}\right\rangle^2,
\end{align}
with a similar form in the $J'$ direction corresponding to $q_{J'}^{gs}$.
However, such long-range behaviour of the correlation functions is far beyond
the accessible range of any numerical approach. Thus, it will suffice to take a
non-zero $q^{gs}$ accompanied by a vanishing spin gap $\Delta$ to demonstrate
long-range spiral order.
\subsection{Case 2: $q^{gs}=0$ or $\pi$, $q \neq 0$ (non-spiral order)}
\label{subsec:Case_2}
The case where $q^{gs}=0$ or $\pi$ is more
complicated. Since $q$ is non-zero the system is displaying
incommensurate spiral correlations, however, these correlations are of
insufficient strength to stabilize true long-range spiral ordering. Yet if the
spin gap $\Delta$ is found to be zero, as we indeed find, then we expect \emph{some}
ordering to exist, unless the system is a gapless spin liquid.
\begin{figure}[t]
\includegraphics[scale=0.5]{q_J_incommensurate_vs_Jp}
\caption{\label{fig:q_J_incommensurate_vs_Jp} (Color online.) $q_J$ vs. $J'$:
The intrachain ordering $q$-vector $q_J$ as a function of the interchain
interaction $J'$ for systems of width 4 along with the classical value
(dashed magenta line).
Results are obtained from the
$\theta_{J}$ which minimizes the energy in the total-$S^z=0,1$ sectors.
For $J'>J'_c$, $q_J^{gs}$ was found to be non-zero, while $q_J^{gs}=0$ or $\pi$ for $J'<J'_c$.
The critical
value, $J'_c$, was determined to be 0.9175, 0.7835 and 0.7135 for $N= 4 \times 4$, $4
\times 6$ and $4 \times 8$ respectively.
Exponential fits (black for $N=4
\times 4$, light grey for $N= 4 \times 6$ and dark grey for $N = 4 \times
8$) are of the form $a (J')^2 \exp(-b/J')$, consistent with
Ref.~\onlinecite{Weichselbaum_and_White_DMRG}, and are found to be extremely
good for most of the $J'$ region. However at $J' \sim 0.3$ the data markedly
deviates from this fit and develops a linear character. The physicality of
this linear behaviour for $J' < 0.3$ is further explored in the text.
}
\end{figure}
\begin{figure}[t]
\includegraphics[scale=0.5]{q_Jp_incommensurate_vs_Jp}
\caption{\label{fig:q_Jp_incommensurate_vs_Jp} (Color online.) $q_{J'}$ vs. $J'$:
The interchain ordering $q$-vector $q_{J'}$ as a function of the interchain
interaction $J'$ for systems of width 4.
Behaviour and fits are identical to those in
Fig.~\ref{fig:q_J_incommensurate_vs_Jp}. As $J' \rightarrow 0$, $q_{J'} \rightarrow \pi/2$.}
\end{figure}
Should the value of $q^{gs}$ be consistent with a $\pi$ ordering vector for all
system sizes then, taken along with the gaplessness of the system, one could
conclude that the system has ordered antiferromagnetically. However, in our
case a more thorough
analysis of the correlations of the system becomes necessary. This is expanded
upon in Sec.~\ref{subsec:Non_Spiral_Ordered_Phase}.
\section{Results and Discussion}
\label{sec:Results_and_Discussion}
The intrachain ($\theta_J$) and interchain ($\theta_{J'}$) boundary twists were
varied to minimize the ``ground-state'' energy in the total-$S^z=0,1$ sectors for
systems of increasing length and fixed width (4 chains). A fixed width was
chosen both since intrachain correlations are the dominant correlations for $J'
\ll J$ and to more easily compare with existing DMRG work on larger
systems.\cite{Weichselbaum_and_White_DMRG} From these minimizing twist values
the $q$-vectors $q_J$ and $q_{J'}$ were extracted as a function of $J'/J$, which we will simply call $J'$ (i.e., we set $J=1$).
The resulting data, as well as
the classical values, can be found in Fig. \ref{fig:q_J_incommensurate_vs_Jp}
for $q_J$ and Fig. \ref{fig:q_Jp_incommensurate_vs_Jp} for $q_{J'}$. With the
exception of the $J' \lesssim 0.3$ region which is discussed later, both $q_J$
and $q_{J'}$ are found to be fitted best by functions of the form $a (J')^2
\exp(-b/J')+c$ rather than power-law fits. This data is in close agreement
with the DMRG results found in Ref. \onlinecite{Weichselbaum_and_White_DMRG}
where the incommensurate $q$-vector $q_{J}$ ($q_{J'}$ was not considered) was
extracted by fitting $\langle S^z_{\mathbf{x}}\rangle$, as induced by a boundary field, to an exponentially decaying correlation function of the
form
$\left\langle S_{\mathbf{0}}^z\right\rangle \exp(-x/\xi)
\cos(q x)$. The close agreement of our results with the DMRG results on
substantially larger systems is surprising and indicative of the power of
twisted boundary conditions to circumvent finite-size effects in
incommensurate systems.
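The fitting form $a (J')^2 \exp(-b/J')+c$ can be handled without a nonlinear optimizer by grid-searching the one nonlinear parameter $b$ and solving the remaining linear least-squares problem for $(a,c)$. The sketch below (our own illustration, on synthetic data with hypothetical parameters, since the measured values live in the figures) demonstrates the idea.

```python
import numpy as np

def model(jp, a, b, c):
    """Fit form used in the text: a * (J')^2 * exp(-b / J') + c."""
    return a * jp ** 2 * np.exp(-b / jp) + c

# Synthetic data standing in for the measured q-vectors (hypothetical parameters).
jp = np.linspace(0.3, 0.9, 13)
data = model(jp, 2.0, 0.5, 3.0)

# For each trial b the model is linear in (a, c): grid-search b and solve the
# remaining linear least-squares problem exactly at each grid point.
best = (np.inf, None)
for b in np.linspace(0.1, 1.0, 181):
    basis = np.column_stack([jp ** 2 * np.exp(-b / jp), np.ones_like(jp)])
    coeffs = np.linalg.lstsq(basis, data, rcond=None)[0]
    err = np.sum((basis @ coeffs - data) ** 2)
    if err < best[0]:
        best = (err, (coeffs[0], b, coeffs[1]))

a_fit, b_fit, c_fit = best[1]
```

On noiseless synthetic data the grid search recovers the generating parameters; with real data one would refine the $b$ grid around the minimum.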
The lack of distinct features in Fig. \ref{fig:q_J_incommensurate_vs_Jp} and
Fig. \ref{fig:q_Jp_incommensurate_vs_Jp} suggests that the system has identical
behaviour for all $J'$. This is not the case, as can be seen by examining the
true ground-state $q$-vectors in the total-$S^z=0$ sector. As $J'$ decreases
the ground-state twists, $\theta^{gs}_J$ and $\theta^{gs}_{J'}$, are found to jump
discontinuously at some critical value of $J'$, $J'_c$ (the nature of this jump
will be discussed momentarily). For $J' > J'_c$ the total-$S^z=0$ and
total-$S^z=1$ twists coincide.
Below $J'_c$ the
total-$S^z=0$ data is found to be either $0$ or $\pi$ for all $J'$ in the
region.
A sample illustration of this jump can be found in
Fig.~\ref{fig:Sample_L16_Sz_0_Jump} where it is shown for a $4 \times 4$
system. This critical value decreases with system size and is found to be at
$J'_c=$0.9175, 0.7835, 0.7135 for $N= 4 \times 4$, $4 \times 6$ and $4 \times
8$ respectively. A finite-size extrapolation of these values to the
thermodynamic limit can be found in Fig. \ref{fig:Jp_Critical_vs_N}.
At $J'_c=0.9175$ the minimizing $\theta^{gs}_J$ jumps abruptly to $0$ due
to the appearance of a new distinct minimum in twist space.
We can extrapolate $J'_c$ to the thermodynamic limit with a fit linear in $N^{-1}$, $\frac{a}{N}+b$, estimating
$J'_c\to 0.475$ as $N\to\infty$.
As the system width is
increased $J'_c$ is found to increase as well, taking values of 0.912 and 0.917
for systems of size $N= 6 \times 4$ and $N=8 \times 4$. The infinite-system-size
extrapolation for fixed length, which is clearly non-linear and is thus fitted with a quadratic $\frac{a}{N^2} +
\frac{b}{N}+c$ form, can be
found in the inset of Fig.~\ref{fig:Jp_Critical_vs_N} and yields 0.948.
Obviously, for these very limited system sizes, a reliable estimate of the critical coupling
in the thermodynamic limit is not within reach. However, it seems plausible that the fixed
width estimate of $J'_c=0.475$ is the more realistic of our estimates.
A comparison of both
fixed width and fixed length thermodynamic limit extrapolations suggests that
the spiral-ordered region extends well into the $J'<J$ region, even in much
larger systems.
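For reference, the fixed-width extrapolation is a straight-line fit in $N^{-1}$ to the three quoted $J'_c$ values. The sketch below (our own illustration) sets up that fit; since the quoted $0.475$ depends on the details of the fitting procedure, only qualitative features are checked here.

```python
import numpy as np

# Quoted finite-size critical couplings for N = 4x4, 4x6, 4x8 (N = total sites)
N = np.array([16.0, 24.0, 32.0])
jc = np.array([0.9175, 0.7835, 0.7135])

# Linear fit J'_c(N) = a/N + b; the intercept b is the N -> infinity estimate
a, b = np.polyfit(1.0 / N, jc, 1)
```

The extrapolated intercept lies below all finite-size values, consistent with $J'_c$ decreasing toward the thermodynamic limit for fixed width.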
\begin{figure}[t]
\includegraphics[scale=0.45]{Sample_L16_Sz_0_Jump}
\caption{\label{fig:Sample_L16_Sz_0_Jump} (Color online.)
$\theta_J$, $\theta_{J'}$ vs. $J'$ in
the total $S^z=0$ subspace ($N=4 \times 4$): As can be clearly seen, the
total $S^z=0$ minimizing boundary twists (shown as solid lines with circular
markers) undergo an abrupt jump (occurring here
at $J'/J \sim 0.91$) before ``locking'' to fixed values for all
$J'<J'_c$.
}
\end{figure}
It is important to note that this discontinuous jump in the ground-state
is \emph{not} due to the level crossing observed in previous numerical
work.\cite{Weng_Weng_Bursill_ED,Becca_Sorella_ED} That transition, which was
found to be a parity transition, occurs at $J' \sim 0.84$ for a $4 \times 4$
system and at $J' \sim 0.75$ for a $4 \times 8$ system, values that differ
from $J'_c$ for all system sizes.
Thus, this level crossing is completely avoided once one allows the boundaries
to twist freely.
The transition is essentially a first-order phase
transition in ``twist-space,'' as described by Landau theory. For $J' > J'_c$ the
ground-state minimum lies at some incommensurate twist value. As $J'$
approaches $J'_c$ a second, commensurate, minimum forms elsewhere, at say
$(\theta_J,\theta_{J'})=(0,\pi)$; this second minimum then lowers in energy while
the incommensurate minimum, still the global one, rises. At
$J'=J'_c$ the commensurate minimum overtakes the incommensurate one to
become the new global minimum and the ground-state jumps
discontinuously.
\begin{figure}[t]
\includegraphics[width=0.55\textwidth]{Theta_vs_Energy_Transition_Graphs}
\caption{\label{fig:Theta_vs_Energy_Transitions_Graphs} (Color online.)
Energy vs. $\theta_J$ for varying values of $\theta_{J'}$ shown for different
values of $J'$ near $J'_c=0.915$ in the total $S^z=0$ subspace ($N=4 \times
4$). At $J' > J'_c$ the ground-state minimum is found to lie at an
incommensurate twist value; at $J' \sim J'_c$ a second commensurate minimum forms at
$(\theta_J,\theta_{J'})=(0,\pi)$; this second minimum then moves lower in energy
and becomes the global minimum at $J'=J'_c$. The global minimum is indicated in each graph with an arrow.}
\end{figure}
We now look at each region separately:
\begin{figure}[t]
\includegraphics[scale=0.5]{Jp_Critical_vs_N}
\caption{\label{fig:Jp_Critical_vs_N} (Color online.) Critical $J'$ vs $N^{-1}$:
A thermodynamic limit extrapolation of the spiral-ordered to non-spiral
ordered transition value ($J'_c$) for systems of width 4 and length 4. For
a width of 4 a linear fit ($J'_c = 0.475$ as $N\rightarrow \infty$) was considered.
For a length of 4 the extrapolation was clearly not linear and so a
quadratic fit ($J'_c=0.948$ as $N\rightarrow \infty$) was used. For
consistency the quadratic extrapolation values for both analyses are
considered the best fit. The critical coupling was indicated by a
discontinuous jump in $\theta_J$ and $\theta_{J'}$ away from their $J' \ll 1$
values.} \end{figure}
\subsection{The Incommensurate Spiral Ordered Phase, $1 \geq J'/J > J'_c/J$}
For the isotropic case, where $J' =1$, the ordering $q$-vectors
$(q_J,q_{J'})$ were found to be $\left( 2 \pi/3 , 2 \pi/3 \right)$ in agreement
with previous work. As $J'$ decreases the $q$-vectors then vary continuously
through incommensurate values. In this region the energy minima of the
total-$S^z=0$ and total-$S^z=1$ sectors coincide in twist space. This, taken with the lack of
an energy gap (this is shown in section \ref{sec:The_Energy_Gap}), indicates
that this region is in a long-range spiral order phase. The transition out of
this phase seems to occur at an intrachain twist of $\sim \pi$ for all system
sizes as illustrated in Fig.~\ref{fig:Sample_L16_Sz_0_Jump} for a $4\times 4$ system.
The fact that the transition should be related to some critical value
of the boundary twist \emph{and not} some critical $q_J$ is interesting and may
represent some subtle numerical cause. However, we would comment that spin
wave theory is known to encounter a similar region, notable for its
non-convergence, for $J'$ smaller than some critical
value\cite{Hauke_ED_MSWT,Chung_Marston_LSW_HATM}. On the other hand, we also cannot exclude
the possibility that for much larger systems this transition would be absent.
\subsection{The Non-Spiral Ordered Phase, $J' < J'_c$}
\label{subsec:Non_Spiral_Ordered_Phase}
For $J'$ values greater than $J'_c$ the ground-state is found to have
incommensurate long-range spiral order as was discussed previously. However,
for $J'<J'_c$ the twists which minimize the total $S^z=0$ sector jump to
$(\theta_{J},\theta_{J'})=(0,\pi)$ for $N=4\times4$, $4 \times 6$ and $4 \times
8$ (i.e. systems of width 4), and to $(0,0)$ for systems of size $N=6 \times 4$
and $N=8 \times 4$. These values of $\theta_J$ are found to be entirely
consistent with antiferromagnetic intrachain ordering of the ground-state.
However, for increasing system width, the values of $\theta_{J'}$, being $\pi$
for width 4 but $0$ for widths of both 6 \emph{and} 8, are inconsistent with
any single $q$-vector, suggesting that a more careful consideration of interchain
physics must be taken. This discussion is postponed until section
\ref{subsec:Interchain_Correlations}.
The fact that no evidence of this transition can be found in the total $S^z=1$
data suggests that incommensurate correlations are always present and vary
smoothly for all $1>J'/J>0$, but that the power of those correlations to
stabilize long-range spiral order becomes insufficient at $J'_c$. Below $J'_c$
the dominant correlations are then antiferromagnetic along chains with much
smaller incommensurate behaviour resting atop. These antiferromagnetic
correlations nested in an incommensurate envelope were demonstrated very
clearly for a gapped system in Ref.~\onlinecite{Weichselbaum_and_White_DMRG}.
We believe, though, that this behaviour is only found below $J'_c$; the
fact that such behaviour was obtained for $1 \geq J'/J > J'_c$ in that paper
might be an artefact of the use of periodic boundary conditions in the interchain
direction there. This point is further discussed in the next subsection.
Numerical access to three system sizes of width 4 makes it possible
to extrapolate $q_J$ and $q_{J'}$ to the $4 \times \infty$ limit. Extrapolated
values were found to lie, with great precision, on a scaling function of the
form $\frac{a}{N^2} + \frac{b}{N} + c$ and can be found in the inset of
Fig.~\ref{fig:infinite_q_incommensurate_vs_Jp}. The thermodynamic limit
results for both $q^{\infty}_J$ and $q^{\infty}_{J'}$ are plotted in the main
figure. Values above $J'_c$ were not considered, with the exception of the
commensurate $J'/J=1$ case. As before, these $q^{\infty}_J$ and
$q^{\infty}_{J'}$ data can be well fitted by a function of the form $a (J')^2
\exp(-b/J')+c$. However, unlike the finite-size case, this function is found to
be valid for all $J'$ considered. This suggests that the linear behaviour in
the neighbourhood of $J' \sim 0$ (See Fig.~\ref{fig:q_J_incommensurate_vs_Jp}) may not be physical. Furthermore, as will be
discussed in section \ref{subsec:Interchain_Correlations}, it is found for $J'
\lesssim 0.3$ that the system's energy dependence on $\theta_{J'}$ becomes zero
to numerical precision. Thus it is possible that interchain correlations that
are physically non-zero, but of a magnitude below what can be resolved
numerically, are present in this region.
Regardless, the physicality of the linear behaviour is not certain.
\begin{figure}[t]
\includegraphics[scale=0.5]{infinite_q_incommensurate_vs_Jp}
\caption{\label{fig:infinite_q_incommensurate_vs_Jp} (Color online.) $q^{\infty}_J$ and $q^{\infty}_{J'}$ vs. $J'$:
The thermodynamic limit extrapolated values of $q_{J}$ and $q_{J'}$ vs. $J'$.
Extrapolations were done to quadratic functions of the form $\frac{a}{N^2} +
\frac{b}{N} + c$. Sample extrapolations can be seen in the inset for
$J'/J=0.5$ (triangles), $0.35$ (diamonds), $0.2$ (squares) and $0.05$
(circles). Values greater than the estimated infinite system size transition
point and less than $J'/J=1$ are not shown (see text). In similarity to the
finite-size Fig.~\ref{fig:q_J_incommensurate_vs_Jp} and
Fig.~\ref{fig:q_Jp_incommensurate_vs_Jp}, $q_J\rightarrow \pi$ and $q_{J'}
\rightarrow \pi/2$ as $J' \rightarrow 0$. However, contrary to that figure,
the degradation of an exponential fit to a linear one in the $J' \sim 0$
region is less pronounced, if present at all (see text).}
\end{figure}
\subsubsection{The Energy Gap: $\Delta$}
\label{sec:The_Energy_Gap}
The numerical determination of a spin gap is in general a difficult task.
Often computational reality does not permit calculations on enough system sizes
for a reliable thermodynamic-limit extrapolation to be established. Furthermore,
even when the thermodynamic limit can be taken, considerations such as the method
and boundary conditions used can have a profound effect on the extrapolated
results.
With this in mind the bulk of existing numerical work on the ATLHM has
suggested the existence of a spin gap either for all of $J'/J < 1$ or
for $J'$ less than some critical value in the range of $J'/J \sim 0.6 -
0.8$.\cite{Becca_Sorella_ED,Weng_Weng_Bursill_ED,Weichselbaum_and_White_DMRG}
Indeed, an initial analysis of our own data, as can be seen in the
inset of Fig.~\ref{fig:Energy_Gap_vs_oN}, is consistent with this
picture. However, a more careful consideration of these results shows that this could be misleading.
As before, the availability of three width-4 system sizes permits
a finite-size extrapolation of the energy gap data. This extrapolation
was done for values of $J' \leq 0.5$ and can be found in
Fig.~\ref{fig:Energy_Gap_vs_oN}. Values of $1 > J' \geq 0.5$ were not
considered due to the possibility of different system sizes being on opposite
sides of the $J'_c$ transition. For $J' \leq 0.5$ the $\Delta$ values were
found to fit very well to a scaling function of form
$\frac{a}{N}+\Delta_{\infty}$ and $\Delta_{\infty}$ was found to be on the
order of $10^{-2}$. An estimate of the error in this extrapolation can be
generated by contrasting the linear fit y-intercept with that of a quadratic
fit which produces a $\Delta_{\infty}$ on the order of $10^{-1}$. Such small
values are extremely suggestive of a gapless system. Taken alone, this is
consistent with both spiral and collinear antiferromagnetic orderings as well
as potentially a gapless spin liquid phase.
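The logic of contrasting linear and quadratic fits can be illustrated on synthetic data for a gapless system, with a hypothetical gap $\Delta(N)=c_1/N+c_2/N^2$ (our own illustration, not the actual measured gaps):

```python
import numpy as np

Ns = np.array([16.0, 24.0, 32.0])
delta = 4.8 / Ns + 0.9 / Ns ** 2  # synthetic gap data for a gapless system

# Linear fit a/N + Delta_inf: the subleading 1/N^2 term slightly biases Delta_inf
a_lin, d_lin = np.polyfit(1.0 / Ns, delta, 1)

# Quadratic fit a/N^2 + b/N + Delta_inf: exact for this data, so Delta_inf -> 0
a_q, b_q, d_q = np.polyfit(1.0 / Ns, delta, 2)
```

The spread between the two intercepts serves as a rough error estimate on $\Delta_\infty$, which is the strategy adopted in the text.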
Data was also collected for $J'/J=1$ where the situation appeared to be
different, with linear scaling fits suggesting a small non-zero spin gap.
However, it is well
known\cite{Trumper_Sorella_Capriotti_VMC_Spin_Gap_Extrapolation,Trumper_Sorella_Capriotti_VMC_Spin_Gap_Extrapolation_2}
that the spin gap converges very slowly with system size in the spiral-ordered
region and the system is known to be gapless in this phase. This, combined
with the apparent gaplessness of the $J'<J'_c$ region, suggests that the ATLHM
is gapless for all $J'/J \leq 1$.
\begin{figure}[t]
\includegraphics[scale=0.5]{Energy_Gap_vs_oN}
\caption{\label{fig:Energy_Gap_vs_oN} (Color online.) Spin Gap vs. $N^{-1}$:
The energy difference between the ground-states of the total $S^z=0$ and total
$S^z=1$ sectors extrapolated to the thermodynamic limit. Dotted lines
represent fits of the form $\frac{a}{N}+\Delta_{\infty}$, with the
$\Delta_{\infty}$ values in the non-spiral region found to be on the order of
$10^{-2}$. The error, estimated by contrasting fits quadratic vs. linear in
$N^{-1}$, could be as large as $10^{-1}$, or approximately one percent.
Inset: $\Delta$ vs. $J'$: We show the energy gap $\Delta$ versus $J'$ for
$N=4\times 4$ (circles) and $N=4 \times 6$ (squares). Without a
thermodynamic limit analysis it is easy to see how previous work found the
$J'<J'_c$ region to be gapped.} \end{figure}
One of the central results of the numerics of
Ref.~\onlinecite{Weichselbaum_and_White_DMRG} was the unusual behaviour of the
energy gap $\Delta$ for different system widths. That paper studied the
incommensurate behaviour of long systems of a small number of chains, i.e. $4
\times 64$, $6 \times 64$, $8 \times 32$, etc. In particular, for systems
(periodic in $\mathbf{y}$) of width 2, 4 and 8 a spin gap was found for $J'<1$
which shrank with decreasing $J'$ down to $J'\sim 0.5$, the lowest $J'$
studied in that work. This spin gap was
accompanied by an exponential decay of intrachain correlations. Conversely,
systems of width 6 displayed a small or, likely, no such spin gap and
presumably an algebraic decay of correlations. The reason for this discrepancy
could not be identified. A possible explanation for the discrepancy between these
results and the ones presented here could be the presence of a non-zero $\theta_{J'}$ in
our calculations allowing the system to relax more completely as we now comment on in more detail.
For $J'/J$ in the neighbourhood of $1$ the classical and quantum ordering
vector in the $\mathbf{y}$ direction is $q_{J'}=2 \pi /3$, while $q_{J'} \rightarrow \pi/2$
as $J' \rightarrow 0$. Thus, a cylinder, as used in Ref. \onlinecite{Weichselbaum_and_White_DMRG}, with a
width of 6 chains and periodic (no twist) boundary conditions around the cylinder would be commensurate with
the $2 \pi/3$ order but not the $\pi/2$ order, and thus we expect the correct
spin gap for $J'/J \sim 1$ and an artificial, finite-size-induced gap as $J'$
decreases (though this is not observed in Ref. \onlinecite{Weichselbaum_and_White_DMRG}
since the spin gap is only calculated as low as $J' \sim 0.9$ for the $6 \times 64$ system).
Conversely, system sizes of 4 and 8 are incommensurate with $2 \pi/3$ order,
and therefore are found to have an unphysical spin gap when $J'\sim J$, but
\emph{are} commensurate with a $q_{J'}=\pi/2$ ordering and thus we expect the
correct spin gap to emerge as $J' \rightarrow 0$. It is then the case that
the 4 and 8 width system would be expected to give the most accurate
indication of the spin gap for small $J'$ and the width 6 system for $J'/J
\sim 1$. The key point is that gapless spiral correlations in the $J'$
direction might appear gapped if analyzed with periodic boundary conditions around the cylinder
with widths incommensurate with the spiral in that direction. With this in mind, an alternative interpretation of
the data of Ref. \onlinecite{Weichselbaum_and_White_DMRG} could be consistent with a
system with no spin gap in the thermodynamic limit.
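This commensurability counting can be checked directly: a width-$W$ cylinder with untwisted periodic boundaries accommodates an ordering vector $q$ only when $Wq$ is an integer multiple of $2\pi$. A minimal sketch (the widths and wavevectors are just those discussed above):

```python
from fractions import Fraction

def commensurate(width, q_over_pi):
    """An untwisted periodic cylinder of the given width fits an ordering
    vector q iff width*q is a multiple of 2*pi, i.e. width*(q/pi)/2 is an
    integer (exact rational arithmetic avoids floating-point ambiguity)."""
    return (Fraction(width) * Fraction(q_over_pi) / 2).denominator == 1

# q = 2*pi/3 (spiral near J'/J ~ 1) versus q = pi/2 (the J' -> 0 limit).
results = {W: (commensurate(W, Fraction(2, 3)),
               commensurate(W, Fraction(1, 2))) for W in (4, 6, 8)}
```

Width 6 fits only $2\pi/3$, while widths 4 and 8 fit only $\pi/2$, reproducing the asymmetry argued for in the text.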
\subsubsection{Interchain Correlations}
\label{subsec:Interchain_Correlations}
Our analysis of intrachain correlations for systems of size $4 \times 4$, $4
\times 6$, and $4 \times 8$ produced a clear and consistent picture of
antiferromagnetically ordered chains ($\theta_{J}=0$, $q_J=\pi$ for all chain
lengths) accented by incommensurate interchain correlations. The situation
for interchain correlations is not so simple.
The open question in the $J' \ll J$ region is whether the system exhibits a
one or two-dimensional spin liquid
phase\cite{Becca_Sorella_ED,Weng_Weng_Bursill_ED,Hauke_ED_MSWT,Chung_Marston_LSW_HATM}
or a collinear antiferromagnetic order driven by next-nearest chain
antiferromagnetic correlations and order by disorder\cite{Balents_RG}. This
debate can be better informed by a consideration of the interchain ordering
vector, $q_{J'}$ and the importance of next-nearest chain antiferromagnetic
interactions to the ground-state.
The twist $\theta_{J'}$ which minimizes the ground-state as a function of
system \emph{width} is found to be $\pi$ for $4 \times 4$, and $0$ for $6
\times 4$ and $8 \times 4$ for $J'<J'_c$. This is clearly inconsistent with
any classical ordering vector $q_{J'}$. This supports the belief that, for the
system sizes under consideration, any long-range classical incommensurate spiral order is
suppressed. Previous studies which have shown the lack of long-range spiral order had a
potentially critical flaw in that they used periodic boundary conditions which
undoubtedly destabilize such orderings. It is then interesting that a lack of
spiral order is still found when the system has complete freedom to adopt an
incommensurate ground-state.
It is important to remember that, although the long-range incommensurate ordering is
suppressed, short-range incommensurate correlations are still present. This is manifest in the
complete absence of any feature in the minimizing twist, calculated in the total $S^z=1$ sector, around
the critical $J'_c$. An implication of this is that the short-range behavior of correlation functions would
show the same incommensurate behavior above and below $J_c'$. It is then natural to consider how strong these
incommensurate interchain interactions are, and how they compare to the
predicted next-nearest chain antiferromagnetic interactions that would drive a
CAF ordering.
The strength of interchain correlations can typically be determined by examining
$\left\langle \hat{S}_{\mathbf{x},\mathbf{y}}
\hat{S}_{\mathbf{x},\mathbf{y}+y'} \right\rangle$. However, the value of such
an analysis here is hindered by the small system sizes numerically available.
This deficiency turns out not to be so significant since the qualitative
information relating to the correlation between chains can be inferred from the
curvature of the ground-state energy with respect to $\theta_{J'}$. For
completely decoupled chains the ground-state
energy has no dependence on the interchain twist $\theta_{J'}$; similarly,
if the minimum in $\theta_{J'}$ is found
to be extremely shallow, then the interchain correlations
must be extremely weak. Thus, by taking the second numerical derivative, in the
total-$S^z=0$ sector, we can construct a \emph{$J'$-twist susceptibility}:
\begin{equation}
\frac{\partial^2 E_{gs}}{(\partial \theta_{J'})^2}=\chi_{\theta_{J'}}.
\end{equation}
This susceptibility
probes the strength of interchain correlations, with a large value of $\chi_{\theta_{J'}}$
representing strong correlations and a small value representing
weak ones.
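A minimal sketch of this second numerical derivative, using a central difference with step $\delta\theta_{J'}$ (the quadratic mock energy surface and its curvature are placeholders, not computed data):

```python
import math

def twist_susceptibility(E, theta_min, delta=0.1):
    """chi as the central second difference of the ground-state energy
    about its minimizing twist theta_min."""
    return (E(theta_min + delta) - 2.0 * E(theta_min)
            + E(theta_min - delta)) / delta**2

# Mock energy surface with a minimum at theta = pi and known curvature k
# (placeholder values; the real E(theta) comes from the numerics).
k = 0.35
E = lambda th: -1.7 + 0.5 * k * (th - math.pi) ** 2
chi = twist_susceptibility(E, math.pi)
```

For a quadratic energy surface the central difference recovers the curvature exactly, so even the relatively large $\delta\theta_{J'}=0.1$ used below is harmless near a smooth minimum.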
The $\chi_{\theta_{J'}}$ dependence on $J'$ and system width can be seen in
Fig.~\ref{fig:chi_Tx_vs_Jp} for a $\delta \theta_{J'}$ of 0.1. It can clearly
be seen that interchain correlations become tiny as the number of chains
increases. In fact the interchain correlations are found to be zero within the
$10^{-13}$ precision of the numerics for systems of width 6 and 8 for small
$J'$ even for such a large value of $\delta \theta_{J'}$. This is consistent
with the previous claim that these correlations are too weak to force spiral
ordering. However, RG analyses of the ATLHM\cite{Balents_RG,Kallin_Ghamari_RG}
posit that as the interchain correlations become weak with $J' \rightarrow 0$,
the next-nearest chains correlate antiferromagnetically with a strength,
$J_{nnc}$, which grows. We will now consider the effect of such
correlations.
\begin{figure}[t]
\includegraphics[scale=0.5]{chi_Tx_vs_Jp}
\caption{\label{fig:chi_Tx_vs_Jp} (Color online.) $\chi_{\theta_{J'}}$ vs. $J'$:
The curvature of the ground-state energy (i.e. total-$S^z=0$) about its minimum ($\theta_{J'}=\pi$ for $N=4 \times 4$, $\theta_{J'}=0$ for $N=6 \times 4$ and $N= 8\times 4$) with respect to $\theta_{J'}$
($\chi_{\theta_{J'}}$) versus $J'$ for systems of increasing width. The size
of $\chi_{\theta_{J'}}$ gives an indication of the strength and importance of
interchain, (i.e. nearest chain) interactions to the ground-state energy. It
is clear that these correlations become smaller with width and become
exceedingly weak as $J' \rightarrow 0$. This is consistent with reasoning
from RG and the lack of long-range spiral order in this region.} \end{figure}
Recent series expansion work by Pardini and Singh in Ref.
\onlinecite{Pardini_and_Singh_Series_Expansion} has suggested that an
incommensurately ordered ground-state has a lower energy than a CAF one for
small $J'$. However, their work also showed that this energy difference was
extremely small and dependent on how short-ranged spiral correlations are
treated. We found previously that the ground-state is not incommensurately
ordered for our system sizes for small $J'$ and instead exhibits intrachain
antiferromagnetism. A relevant question is then whether the next-nearest chain
interactions are antiferromagnetic (CAF) or ferromagnetic (non-CAF or NCAF) and
whether these correlations grow as $J' \rightarrow 0$. We previously determined
that $6 \times 4$ and $8 \times 4$ sized systems are minimized, in the total-$S^z=0$ subspace, by
$\theta_{J'}=0$. This observation makes it difficult to discriminate between CAF and NCAF phases
since both would have such a twist. However, the $4 \times 4$ system is
minimized by a $\theta_{J'}=\pi$, which is inconsistent with CAF ordering.
This presents an opportunity to clearly demonstrate the effect of next-nearest
neighbour correlations.
We proceed by artificially inserting an exchange coupling between next-nearest
chains, $J_{nnn}\hat{S}_{\mathbf{x},\mathbf{y}}
\hat{S}_{\mathbf{x}-1,\mathbf{y}+2}$. The question then is at what strength
of $J_{nnn}$ the $4 \times 4$ system adopts a $\theta_{J'}=0$ ordering
(which we take to be CAF). This critical $J^c_{nnn}$ is shown, as a function
of $J'$, in Fig.~\ref{fig:J_CAF_vs_Jp}. As $J' \rightarrow 0$ the necessary
``nudge'' the system needs to adopt a CAF ordering becomes very small. In fact,
for $J'=0.05$, this critical interaction strength is as tiny as 0.0003.
Contrarily, if a \emph{ferromagnetic} interaction is used (i.e. $J_{nnn} <0$)
then $\theta_{J'}$ does not change, regardless of the magnitude of $J_{nnn}$.
The fact that such a minuscule increase in antiferromagnetic next-nearest
neighbour correlations can force the ground-state minimizing boundary twist
to jump to one consistent with CAF ordering and inconsistent with NCAF
ordering lends promise to the notion of CAF ordering in the thermodynamic limit.
\begin{figure}[t]
\includegraphics[scale=0.5]{J_CAF_vs_Jp}
\caption{\label{fig:J_CAF_vs_Jp} (Color online.) $J_{CAF}^c$ vs. $J'$ for $N= 4 \times 4$:
As is discussed in the text, the $N= 4 \times 4$ system, whose $\theta_{J'}$ of
$\pi$ is found to be incompatible with the predicted
collinear-antiferromagnetic (CAF) ordering for $J' \ll 1$, can be forced to a
$\theta_{J'}$ of 0, consistent with this ordering, by applying only a small
next-nearest neighbour antiferromagnetic interaction $J_{CAF}$ (see text).
Thus, as a demonstration of the subtle importance of these next-nearest chain
interactions the critical $J_{CAF}^c$ for which $\theta_{J'}$ jumps from
$\pi$ to $0$ is plotted as a function of $J'$. For $J'=0.05$ this value
becomes as low as $J_{CAF}^c=0.0003$ representing an extreme susceptibility
to such interactions. Conversely a next-nearest chain \emph{ferromagnetic}
interaction is found to have no effect on $\theta_{J'}$. This suggests a
strong preference for CAF order. The solid line is a guide to the
eye.} \end{figure}
The ability to easily force a $4 \times 4$ system into a CAF consistent
ordering is appealing, but hardly conclusive, evidence that CAF ordering will
occur for larger systems, especially since this system is so small. We
therefore consider another means of analysing these interactions that can be
applied to larger systems.
We begin by perturbing our system with two different arrangements of staggered
magnetic field. The first arrangement is chosen to be consistent with CAF
ordering and involves antiferromagnetic staggering along chains and between
next-nearest chains (see Fig.~\ref{fig:CAF_Field_Lattice}, left). We only
apply fields to \emph{every other} chain in order to allow the system the
freedom to adopt the classical $q_{J'}=\pi/2$ ordering. The second arrangement
is designed to be consistent with ferromagnetic ordering between next-nearest
chains (see Fig.~\ref{fig:CAF_Field_Lattice}, right) but still
antiferromagnetic along chains. Two susceptibilities are constructed from this
perturbation.
\begin{figure}[t]
\includegraphics[scale=0.5]{CAF_Field_Lattice_non_Tilted.eps}
\caption{\label{fig:CAF_Field_Lattice} A diagram of the staggered fields
applied in the generation of $\chi_{NNN}$, $\chi_{CAF}$ and $\chi_{NCAF}$.
The field terms, $h \hat{S}^z_i$, are represented by arrows. Fields are
placed on every \emph{other} chain to allow the system freedom to adopt a
classical $q_{J'}= \pi/2$ ordering. The collinear antiferromagnetic (CAF)
ordering is found on the left and corresponds to antiferromagnetic
correlations between next-nearest chains. For clarity, a sample
next-nearest chain partner for the non-skewed triangular system is
illustrated with a dotted line. The non-collinear antiferromagnetic (NCAF)
ordering, corresponding to \emph{ferromagnetic} next-nearest chain
correlations, is shown on the right.} \end{figure}
The first susceptibility, which we call $\chi_{NNN}$, is constructed from the
next-nearest neighbour chain correlation functions:
\begin{align}
\chi_{NNN} = \frac{\delta^2 \langle \hat{S}_{\mathbf{x},\mathbf{y}} \hat{S}_{\mathbf{x}-1,\mathbf{y}+2} \rangle }{\delta h^2}
\end{align}
where $h$ is arranged in one of the two (CAF or NCAF) ways. The first
derivative term was found to be zero, which could have been predicted on the
basis of spin inversion symmetry, and thus calculating this quantity is a
simple matter of numerical differentiation. The results, as a function of
$J'$, are shown in Fig.~\ref{fig:NNN_Correlator_Susceptibility_vs_Jp} where
calculations were done only between chains that received a magnetic field (i.e.
between chains 0 and 2 or 2 and 4 but not 1 and 3). The specific chain
considered and the specific spin within that chain was found to be irrelevant.
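The construction of $\chi_{NNN}$ by numerical differentiation can be sketched as follows (the correlator here is a symmetric mock function standing in for the numerical result; its curvature is a placeholder value):

```python
def field_susceptibility(C, h=0.001):
    """First and second derivatives of an observable C(h) with respect to a
    staggered field h, by central differences. Spin-inversion symmetry gives
    C(-h) = C(h), so the first derivative should vanish."""
    d1 = (C(h) - C(-h)) / (2.0 * h)
    d2 = (C(h) - 2.0 * C(0.0) + C(-h)) / h**2
    return d1, d2

# Mock symmetric correlator C(h) = C0 + 0.5*chi*h**2 (placeholder values;
# a negative chi mimics the response to a CAF-arranged field).
chi_true = -4.2
C = lambda h: 0.013 + 0.5 * chi_true * h**2
d1, d2 = field_susceptibility(C)
```

The vanishing first derivative is the symmetry check mentioned in the text; only the second difference carries information.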
\begin{figure}[t]
\includegraphics[scale=0.5]{NNN_Correlator_Susceptibility_vs_Jp}
\caption{\label{fig:NNN_Correlator_Susceptibility_vs_Jp} (Color online.) $\chi_{NNN}$ vs. $J'$ and System Size:
$\chi_{NNN}$, being a susceptibility of the next-nearest chain correlation
function to collinear (CAF) and non-collinear (NCAF) perturbative magnetic
fields further discussed in the text, plotted against $J'$ for various system
widths. The perturbing field was of strength $h=0.001$ and applied to half
the sites for both CAF and NCAF (see text). Calculations were done in total-$S^z=0$ about the $\theta_{J'}=0$ minimum. This quantity is found to be
largely system size independent and negative (positive) for CAF (NCAF)
correlations. This is consistent with the notion that a perturbative CAF
field will cause the next-nearest chain correlations to grow more negative
and a NCAF field to grow more positive. The extremely similar magnitude of
the two correlations suggest the system exhibits a delicate competition
between collinear and non-collinear next-nearest chain correlations in the
non-spiral ordered phase and that this susceptibility grows as $J'
\rightarrow 0$.} \end{figure}
This $\chi_{NNN}$ is found to be largely system size independent and negative
(positive) for CAF (NCAF) correlations. This is consistent with the notion
that a perturbative CAF field will cause the next-nearest chain correlations to
grow more negative and a NCAF field to grow more positive. The extremely
similar magnitude of the two correlations suggests the system exhibits a
delicate competition between collinear and non-collinear next-nearest chain
correlations in the non-spiral ordered phase and that this susceptibility grows
as $J' \rightarrow 0$. The growth of this susceptibility, coupled with the
diminution of nearest-chain correlations as demonstrated in
Fig.~\ref{fig:chi_Tx_vs_Jp} seems consistent with the picture painted by
renormalization theory. However, the system seems potentially equally
susceptible to ferromagnetic next-nearest chain interactions. One then wonders
which of these correlations ultimately prevails. In order to consider this we
consider yet another susceptibility.
To quantify the system's preference for ferromagnetic versus
antiferromagnetic next-nearest chain ordering, we considered the effect that
perturbing magnetic fields of Fig.~\ref{fig:CAF_Field_Lattice} have on the ground-state energy. We
thus define:
\begin{align}
\chi_{CAF} = \frac{\delta^2 E_{gs}}{\delta h_{CAF}^2}, \; \; \: \chi_{NCAF} = \frac{\delta^2 E_{gs}}{\delta h_{NCAF}^2}.
\end{align}
\begin{figure}[t]
\includegraphics[scale=0.5]{Chi_E_CAF_vs_Jp_Plus_Inset}
\caption{\label{fig:Chi_E_CAF_vs_Jp_Plus_Inset} (Color online.) $\chi_{CAF}$ vs. $J'$:
$\chi_{CAF}$, being the second derivative of the ground-state energy (i.e. total-$S^z=0$, $\theta_{J'}=0$) with
respect to a collinear antiferromagnetic (CAF) perturbing magnetic field,
versus $J'$ for systems of varying width. For all system sizes the
quantity is found to be negative and increasing in magnitude with
decreasing $J'$. This implies that the system's energy is lowered
by promoting CAF-like ordering. In order to establish the strength
of this affinity for CAF order vs. non-collinear antiferromagnetic
(NCAF) order, a similar ground-state susceptibility is defined
relative to an NCAF perturbing magnetic field ($\chi_{NCAF}$). The
inset shows the difference in magnitude between $\chi_{CAF}$ and
$\chi_{NCAF}$. $\chi_{CAF}$ is found to be greater for all system
sizes and $J'$ though only by $\sim 10^{-7}$.} \end{figure}
As before, the first derivative term was found to be zero. This is due to spin
inversion symmetry. $\chi_{CAF}$ can be found plotted in
Fig.~\ref{fig:Chi_E_CAF_vs_Jp_Plus_Inset}. $\chi_{NCAF}$, which is not plotted,
behaves identically except for being positive. The fact that $\chi_{CAF}$ is
negative implies that CAF ordering lowers the system's energy, whereas NCAF,
having a positive $\chi_{NCAF}$, increases it. Furthermore, when we compare
the \emph{magnitudes} of the two susceptibilities, which can be found in the
inset of Fig.~\ref{fig:Chi_E_CAF_vs_Jp_Plus_Inset}, we see that $\chi_{CAF}$
is in fact larger than $\chi_{NCAF}$ for all system sizes. It is, however,
only larger by a margin of $\sim 10^{-7}$, a margin which decreases with
system width, further evidencing the tenuousness of these competing
correlations.
\section{Conclusion and Summary}
\label{sec:Conclusion_and_Summary}
In this paper we have demonstrated the power of twisted boundary conditions to mitigate potentially disastrous finite-size effects in incommensurate systems. Using these twisted boundary conditions we were able to extract the intrachain incommensurate $q$-vector $q_{J}$ and found it to be in good agreement with results on substantially larger systems. Furthermore, we were also able to extract the \emph{inter}chain incommensurate $q$-vector $q_{J'}$. To our knowledge ours is the first work to allow fully incommensurate behaviour in both the intra- and interchain directions.
Analysis of the incommensurability in both the ground-state and total $S^z=1$
excited state revealed a potential phase transition between a long-range spiral
ordered phase and one with short-range spiral correlations. A scaling analysis
of this critical $J'_c$ suggests that this point is at $J'_c \sim 0.475$ for
systems of width 4 and $\sim 0.948$ for systems of infinite width (length 4).
We then attempted to characterize the $J'<J'_c$ phase. We believe it to be
gapless in the thermodynamic limit, as well as dominated by both next-nearest
ferromagnetic and antiferromagnetic correlations. Additionally, the nearest
chain correlations are found to become minuscule. Further analysis reveals
that the antiferromagnetic interactions are marginally stronger in the systems
considered. This is consistent with the renormalization group claim that this
region should be CAF ordered.
\begin{acknowledgments}
We would like to thank
Sedigh Ghamari, Sung-Sik Lee and Catherine Kallin for many fruitful discussions.
We also acknowledge computing time at the Shared Hierarchical Academic
Research Computing Network (SHARCNET:www.sharcnet.ca) and research
support from NSERC.
\end{acknowledgments}
\section{Introduction}
ACO is a recently developed, population-based approach presented by M.
Dorigo, A. Colorni et al.; it was inspired by the ants' foraging
behavior and first proposed in 1991 \cite{Dorigo,Colorni,Dorigo1}. Ant System (AS)
was first introduced in three different versions, called ant-density,
ant-quantity, and ant-cycle \cite{Dorigo,Colorni,Dorigo1}. Ant Colony System
(ACS) has been introduced in \cite{Dorigo2,Cambardella} to improve the
performance of AS. Later, AS and ACS developed into a unifying framework to
solve combinatorial optimization problems \cite{Dorigo3,Dorigo4}, and the
framework is often called Ant Colony Optimization (ACO). ACO has been
applied to solve optimization problems\cite{Ball1,Ball2}, such as Traveling
Salesman Problem (TSP)\cite{Dorigo2,Cambardella1}, Quadratic Assignment
Problem (QAP)\cite{Cambardella2}, Job-shop Scheduling Problem (JSP)\cite%
{Cambardella1}, Vehicle Routing Problem (VRP)\cite{Bullnheimer,Forsyth} and
Data Mining(DM)\cite{Rafael}. The high performance of ACO and its wide
application make it as famous as other optimization algorithms, such as
Simulated Annealing (SA)\cite{Kirkpatrick}, Tabu Search (TS)\cite{Glover},
Genetic Algorithms (GA)\cite{Golderg}, and so on.
The study of ACO theory is necessary but rare. W. J. Gutjahr studies the
convergence of ACO under some conditions by Graph Theory\cite{Gutjahr},
which is called Graph-Based Ant System (GBAS). GBAS maps a feasible solution
of an optimization problem to a route in a directed graph. T. St\"{u}tzle
and M. Dorigo proved convergence of ACO under two
conditions: one is that only the pheromone of the shortest route
generated at each iteration step is updated; the other is that the pheromone on all
routes has a lower bound \cite{Stuezle}. J. H. Yoo analyzes the convergence of
a kind of distributed ants routing algorithm by the method of artificial
intelligence \cite{Yoo,Yoo1}. Sun analyzes the convergence of a simple ant
algorithm via a Markov process\cite{Sun}. Ding presents a hybrid of
ACO and a genetic algorithm, and analyzes its convergence by Markov theory
\cite{Ding}. Hou presents a special ACO algorithm and proves its convergence
by fixed-point theorem \cite{Hou}.
Few tools exist for studying ACO convergence, chiefly Markov theory and graph
theory. Moreover, only results under constraint conditions have been obtained
so far; a result free of constraint conditions is still unknown. The
motivation of this paper is to explore a way to study ACO convergence without
constraint conditions.
\section{Framework of ACO}
In the 1990s, ACO was introduced as a novel nature-inspired method for the
solution of hard combinatorial optimization problems (Dorigo, 1992; Dorigo
et al., 1996, 1999; Dorigo and St\"{u}tzle, 2004). The inspiring
source of ACO is the foraging behavior of real ants. When searching for
food, ants initially explore the area surrounding their nest in a random
manner. As soon as an ant finds a food source, it remembers the route passed
by and carries some food back to the nest. During the return trip, the ant
deposits pheromone on the ground. The deposited pheromone guides other ants
to the food source. It has been shown (Goss et al., 1989) that indirect
communication among ants via pheromone trails enables them to find the
shortest routes between their nest and food sources.
The framework of ACO is shown in \textbf{Algorithm 1}, and it is applied to
solve Travel Salesman Problem (TSP). Where TSP can be explained as follows:
for a given set of cities, the task of TSP is to find the cheapest route of
visiting all of the cities and returning to starting point, provided each
city is only visited once.
\bigskip \textbf{Algorithm 1}
\textbf{Step1.} Initialization: Initialize pheromone of all edges among
cities. And put $m$ ants at different cities randomly. Pre-assign an
iteration number $N_{C_{\max }}$ and let $t=0$, where $t$ denotes the $t-th$
iteration step.
\bigskip \textbf{Step2. }while($t<N_{C_{\max }}$)
\{
\textbf{Step2.1}. All ants select its next city according to the transition
probability defined in formula (1), which is the probability that the $k-th$
ant selecting the edge from $i-th$ city to $j-th$ city.
\bigskip
\begin{equation}
p_{ij}^{(k)}(t)=\left\{
\begin{tabular}{ccc}
$\frac{\tau _{ij}^{\alpha }(t).\eta _{ij}^{\beta }}{\underset{s\in
allowed_{k}}{\sum }\tau _{is}^{\alpha }(t).\eta _{is}^{\beta }}$ & $if$ & $%
j\in allowed_{k}$ \\
$0$ & \multicolumn{2}{c}{$otherwise$}%
\end{tabular}%
\right. \label{eq1}
\end{equation}%
, where $allowed_{k}$ denotes the set of cities that can be accessed by the $%
k-th$ ant; $\tau _{ij}(t)$ is the pheromone value of the edge ($i,j$); $\eta
_{ij}$\ is a local heuristic function defined as
\bigskip
\begin{equation}
\eta _{ij}=\frac{1}{d_{ij}} \label{eq2}
\end{equation}%
, where $d_{ij}$ is the distance between the $i-th$\ city and the $j-th$
city; the parameters $\alpha $ and $\beta $\ determine the relative
influence of the trail strength and the heuristic information respectively.
\ \textbf{Step2.2.} After all ants finish their travels, all pheromone
values $\tau _{ij}(t)$ are updated according to formula (\ref{eq3}).
\bigskip
\begin{equation}
\tau _{ij}(t+1)=(1-\rho )\cdot \tau _{ij}(t)+\Delta \tau _{ij}(t)
\label{eq3}
\end{equation}%
\begin{equation}
\Delta \tau _{ij}(t)=\overset{m}{\underset{k=1}{\sum }}\Delta \tau
_{ij}^{(k)}(t) \label{eq4}
\end{equation}
\bigskip
\begin{equation}
\Delta \tau _{ij}^{(k)}(t)=\left\{
\begin{tabular}{ccc}
$\frac{Q}{L^{(k)}(t)}$ & $if$ & $the$ $k-th$ $ant$ $pass$ $edge$ $(i,j)$ \\
$0$ & \multicolumn{2}{c}{$otherwise$}%
\end{tabular}%
\right. \label{eq5}
\end{equation}%
, where $L^{(k)}(t)$ is the length of the route passed by the $k-th$ ant; $%
\rho $ is the evaporation ratio of the trail (thus, ($1-\rho $) corresponds
to the persistence ratio in formula (\ref{eq3})); $Q$ denotes a constant quantity of pheromone.
\textbf{Step2.3.} Increase iteration number, i.e., $t\leftarrow t+1$.
\}
\textbf{Step3.} End procedure and select the route which has shortest length
as the output.
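Algorithm 1 can be condensed into the following sketch (a minimal illustrative implementation; the five-city instance, parameter values and seeding are placeholders rather than any benchmark setup):

```python
import random

def tour_length(tour, d):
    """Length of a closed tour under distance matrix d."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ant_system(d, m=8, n_iter=30, alpha=1.0, beta=2.0, rho=0.5, Q=1.0, seed=0):
    """Minimal Ant System for TSP following Algorithm 1: transition rule
    (1)-(2) in Step 2.1 and pheromone update (3)-(5) in Step 2.2."""
    rng = random.Random(seed)
    n = len(d)
    tau = [[1.0] * n for _ in range(n)]          # Step 1: initial pheromone
    eta = [[0.0 if i == j else 1.0 / d[i][j] for j in range(n)] for i in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iter):
        tours = []
        for _ in range(m):                       # Step 2.1: build one tour per ant
            tour = [rng.randrange(n)]
            allowed = set(range(n)) - {tour[0]}
            while allowed:
                i = tour[-1]
                cand = sorted(allowed)
                w = [tau[i][j] ** alpha * eta[i][j] ** beta for j in cand]
                nxt = rng.choices(cand, weights=w)[0]   # formula (1)
                tour.append(nxt)
                allowed.discard(nxt)
            tours.append(tour)
        delta = [[0.0] * n for _ in range(n)]    # Step 2.2: pheromone deposit
        for tour in tours:
            L = tour_length(tour, d)
            if L < best_len:
                best_tour, best_len = tour, L
            for i in range(n):                   # formulas (4)-(5)
                a, b = tour[i], tour[(i + 1) % n]
                delta[a][b] += Q / L
                delta[b][a] += Q / L
        for i in range(n):                       # formula (3): evaporation + deposit
            for j in range(n):
                tau[i][j] = (1.0 - rho) * tau[i][j] + delta[i][j]
    return best_tour, best_len

# Five cities on a line; any closed tour has length at least 8.
xs = [0, 1, 2, 3, 4]
d = [[abs(a - b) for b in xs] for a in xs]
tour, length = ant_system(d)
```

The returned `best_tour` corresponds to Step 3, the shortest route seen over all iterations.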
\section{The Statistical Feature of The Solutions of ACO}
\subsection{Definition of Symbols}
ACO solving TSP is the model considered in this paper. Suppose the $m$
ants are $a_{1}$, $a_{2}$, ......, $a_{m}$. At the $t-th$\ iteration step,
ant $a_{i}$ selects route $r_{i}^{(t)}$, which has length $L_{i}^{(t)}$.
After all ants finish their $t-th$ travels,$\ $there is a total amount of
pheromone $f_{i}^{(t)}=\underset{(k,j)\in r_{i}^{(t)}}{\sum \tau _{kj}(t)}$
deposited on route $r_{i}^{(t)}$, where $\tau _{kj}(t)$ denotes the pheromone
deposited at the edge $(k,j)$ by all ants.
\textbf{Definition 1} \textbf{(Pheromone Probability)}:
\begin{equation}
p_{i}^{(t)}=\frac{f_{i}^{(t)}}{\overset{m}{\underset{j=1}{\sum }}f_{j}^{(t)}}
\label{eq6}
\end{equation}
In formula (\ref{eq6}), $\overset{m}{\underset{j=1}{\sum }}f_{j}^{(t)}$\
represents the sum of pheromone over all routes, and $p_{i}^{(t)}$\ represents the
ratio of pheromone assigned to the $i-th$ route $r_{i}^{(t)}$. The larger the
ratio $p_{i}^{(t)}$ is, the more likely the edges of route $%
r_{i}^{(t)}$ are to be selected by ants at the next iteration step. That is, $%
p_{i}^{(t)}$ is a probability which affects the route selection of the ants
at the next iteration step. $p_{i}^{(t)}$\ is called the \emph{pheromone
probability}, and Fig.\ref{figPheromoneProbability_1} diagrammatizes it.
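Definition 1 can be computed directly from the routes and the pheromone table, as the following sketch shows (the edge labels and pheromone values are hypothetical):

```python
def pheromone_probability(routes, tau):
    """Definition 1: f_i is the total pheromone on the edges of route r_i,
    and p_i = f_i / sum_j f_j."""
    f = [sum(tau[edge] for edge in route) for route in routes]
    total = sum(f)
    return [fi / total for fi in f]

# Two toy routes over labelled directed edges (hypothetical pheromone values).
tau = {(0, 1): 2.0, (1, 2): 1.0, (2, 0): 1.0,
       (0, 2): 3.0, (2, 1): 2.0, (1, 0): 1.0}
routes = [[(0, 1), (1, 2), (2, 0)], [(0, 2), (2, 1), (1, 0)]]
p = pheromone_probability(routes, tau)
```

By construction the $p_{i}^{(t)}$ sum to one, as required of a probability.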
\begin{figure}[tbh]
\epsfig{file=Fig1.eps,width=10cm,}
\caption{\textbf{The Schematic of pheromone probability }$p_{i}^{(t)}$%
\textbf{. }(a) There is a complete graph with four vertices, two ants $%
a_{1}^{(t)}$ and $a_{2}^{(t)}$ act on it at the $t-th$ iteration
step. (b) Two ants $a_{1}^{(t)}$ and $a_{2}^{(t)}$ select two routes
$r_{1}^{(t)}$ and $r_{2}^{(t)}$ respectively. Every edge of route
$r_{1}^{(t)}$ carries pheromone; $f_{1}^{(t)}$ represents the sum
of pheromone on all edges of route $r_{1}^{(t)}$. Likewise,
$f_{2}^{(t)}$ represents the sum of pheromone on all edges of route
$r_{2}^{(t)}$. (c) There are two routes $r_{1}^{(t)}$ and
$r_{2}^{(t)}$ in total; $p_{i}^{(t)}$ represents the ratio of
pheromone assigned to route $r_{i}^{(t)}$. The larger the ratio
$p_{i}^{(t)}$ is, the more likely the edges of route $r_{i}^{(t)}$
are to be selected by ants at the next iteration step. That is,
$p_{i}^{(t)}$ is a probability which affects the route selection
of the ants at the next iteration.} \label{figPheromoneProbability_1}
\end{figure}
\bigskip \textbf{Definition 2 (Route Length Set):} At the $t-th$ iteration
step, $m$ ants select $m$ routes. The set of route lengths is denoted as
\begin{equation*}
L\_Set^{(t)}=\{L_{1}^{(t)},L_{2}^{(t)},...,L_{i}^{(t)},...,L_{m}^{(t)}\}
\end{equation*}
\textbf{Definition 3 (Pheromone Probability Set):} The set of pheromone
probabilities is defined as
\begin{equation*}
P\_Set^{(t)}=\{p_{1}^{(t)},p_{2}^{(t)},...,p_{i}^{(t)},...,p_{m}^{(t)}\}
\end{equation*}
\subsection{Statistical Features of Route Length Set}
At the $t-th$ iteration step, the $i-th$ ant $a_{i}$ selects route $%
r_{i}^{(t)}$, where $i=1,2,...,m$, and route $r_{i}^{(t)}$ has length $%
L_{i}^{(t)}$. It is possible for two different ants to select the same route
and hence have the same route length; it is even possible for two different
ants to select two different routes whose lengths are nevertheless equal.
Thus, for a given value of route length $x$, there is a set
$A_{t}(x)=\{j|L_{j}^{(t)}=x\}$, where $j$ is the subscript of route
$r_{j}^{(t)}$; that is, $A_{t}(x)$ is the set of subscripts of routes whose
lengths are equal to the given value $x$. Let $|A_{t}(x)|$ denote the number
of elements of set $A_{t}(x)$. The number $|A_{t}(x)|$ represents the
frequency with which routes of length $x$ are selected by ants, and the real
number $\frac{|A_{t}(x)|}{m}$ approximates the probability that a route of
length $x$ is selected by an ant.
Let
\begin{eqnarray*}
h_{t}(x) &=&\frac{|A_{t}(x)|}{m} \\
x &\in &[L_{inf},L_{sup}]
\end{eqnarray*}%
, where $L_{inf}=\min \{L_{i}^{(t)}\}$ and $L_{sup}=\max \{L_{i}^{(t)}\}$.
Then $h_{t}(x)$ is, in theory, a probability function, and its domain is
extended to the set of positive real numbers in general.
To observe the statistical features of route length set $L\_Set^{(t)}$, its
histogram is plotted as an approximation of the probability function $%
h_{t}(x)$. The plotting method is as follows:
Firstly, a two-dimensional coordinate frame is constructed, the $x-axis$
denotes the value of route length $x$, and the $y-axis$ denotes probability $%
h_{t}(x)$. The $x-axis$ is divided into equal intervals, and the size of
each interval is denoted by $\delta $.
Secondly, calculate the approximation of probability $h_{t}(x)$ for every
interval $I$: Suppose interval $I$ has a counter $c$ whose initial value is
set to zero (i.e., $c=0$). Whenever a length $L_{i}^{(t)}$ falls into this
interval (i.e., $L_{i}^{(t)}\in I$), the counter is increased by one (i.e., $%
c\leftarrow c+1$). Then, for arbitrary $x\in I$, the function value $\frac{c}{m}$
is the approximation of probability $h_{t}(x)$. In the limit that the size of
interval $I$ becomes very small, we have $h_{t}(x)=\frac{c}{m}$.
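The interval-counting procedure above can be sketched as follows (the route lengths are toy values, not the pr136 data):

```python
def length_histogram(lengths, delta):
    """Approximate h_t(x): the fraction of the m route lengths falling in
    each interval of width delta, with intervals anchored at the minimum
    length L_inf."""
    m = len(lengths)
    L_inf = min(lengths)
    counts = {}
    for L in lengths:
        b = int((L - L_inf) // delta)   # index of the interval containing L
        counts[b] = counts.get(b, 0) + 1
    return {b: c / m for b, c in counts.items()}

# Toy route lengths (illustrative; not the pr136 data).
lengths = [100.0, 103.0, 104.0, 104.5, 109.0]
h = length_histogram(lengths, delta=2.0)
```

Each value $c/m$ is the bar height plotted in the histogram figure.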
The histogram of test data pr136 is shown at Fig.\ref{figHistogram}. Fig.\ref%
{figHistogram} demonstrates that route length set $L\_Set^{(t)}$ has some
statistical features, and they are summarized as below.
(1) The route length $x$ is a random variable with probability $%
h_{t}(x)$. The expectation and deviation of set $L\_Set^{(t)}$ exist.
(2) The probability function $h_{t}(x)$ is large in the middle and small at
both sides, a typical distribution shape.
(3) With the increase of iteration step (i.e., $t\rightarrow t+1$ ), the
distribution of set $L\_Set^{(t)}$ will become stable. That is, the sequence
of probability functions $\{h_{1}(x),h_{2}(x),...,h_{t}(x),...\}$ is
convergent.
\begin{figure}[tbh]
\epsfig{file=Fig2.eps,width=10cm,}
\caption{\textbf{The Histogram of the Route Length Set }$L\_Set^{(t)}$%
\textbf{: }$X-axis$ represents the value of route length $x$, $Y-axis$
represents the probability that the routes with length $x$ being selected by
ants . In this figure, the probability is replaced by frequency for
direct-viewing. This figure shows that set $L\_Set^{(t)}$ has statistical
feature. The distribution of set $L\_Set^{(t)}$ is convergent. Notice: The
test data is pr136, \ number of cities and ants is 136 and 544 respectively.
The $x-axis$ is divided into equal interval with size 2183 (i.e.$\protect%
\delta =2183$ ). The histogram of at $t-th$ iteration is shown here, where $%
t=1,10,50,100,500,1000$. The same features are also found in other test
data, such as pr107, d198, pr226, d493 and so on. All test data in this
paper is downloaded from
http://www.iwr.uniheidelberg.de/iwr/comopt/soft/TSPLIB95/TSPLIB.html }
\label{figHistogram}
\end{figure}
\subsection{The Expectation and Deviation of Route Length Set}
Expectation and deviation are the two most essential characteristics of
distribution, these two characteristics of set $L\_Set^{(t)}$ will be
calculated in this section. The expectation and standard deviation of set $%
L\_Set^{(t)}$ are denoted by $\overset{-}{L}^{(t)}$ and $\sigma ^{(t)}$
respectively in this paper. $\overset{-}{L}^{(t)}$ and $\sigma ^{(t)}$ are
defined as below:
\textbf{Definition 4} (the expectation of set $L\_Set^{(t)}$):
\begin{equation}
\overset{-}{L}^{(t)}=\frac{1}{m}\overset{m}{\underset{i=1}{\sum L_{i}^{(t)}}}
\label{eq7}
\end{equation}
where $m$ is the number of ants.
\textbf{Definition 5} (the standard deviation of set $L\_Set^{(t)}$):
\begin{equation}
\sigma ^{(t)}=\sqrt{\frac{1}{m}\overset{m}{\underset{i=1}{\sum }}%
|L_{i}^{(t)}-\overset{-}{L}^{(t)}|^{2}} \label{eq8}
\end{equation}
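As a concrete illustration, Definitions 4 and 5 can be computed with a few
lines of Python; this is a minimal sketch, and the function name and sample
inputs are hypothetical:

```python
import math

def expectation_and_deviation(route_lengths):
    """Expectation (Eq. 7) and standard deviation (Eq. 8) of a route length set."""
    m = len(route_lengths)                          # number of ants
    l_bar = sum(route_lengths) / m                  # expectation, Eq. (7)
    sigma = math.sqrt(                              # standard deviation, Eq. (8)
        sum((l - l_bar) ** 2 for l in route_lengths) / m)
    return l_bar, sigma
```

At every iteration step $t$, the current set $L\_Set^{(t)}$ of $m$ route
lengths would be passed to this function to obtain one point of each of the
two sequences plotted in Fig.\ref{figAveDev}.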
The two sequences $\{\overset{-}{L}^{(1)},\overset{-}{L}^{(2)},...,\overset{-}{L}%
^{(t)},...\}$ and $\{\sigma ^{(1)},\sigma ^{(2)},...,\sigma ^{(t)},...\}$
are shown in Fig.\ref{figAveDev}. Subfigure (a) of Fig.\ref{figAveDev}
shows that the expectation $\overset{-}{L}^{(t)}$ descends continuously and
converges to a constant value. Subfigure (b) shows that all standard
deviations fluctuate narrowly in an interval and most deviations are
close to a constant value.
\begin{figure}[tbh]
\epsfig{file=Fig3.eps,width=10cm,}
\caption{\textbf{The Feature of Expectation and Standard Deviation of the
Route Length Set:} Figures (a) and (b) plot the two curves of the sequences $\{%
\protect\overset{-}{L}^{(1)},\protect\overset{-}{L}^{(2)},...,%
\protect\overset{-}{L}^{(t)},...\}$ and $\{\protect\sigma ^{(1)},\protect%
\sigma ^{(2)},...,\protect\sigma ^{(t)},...\}$\ respectively, where $%
\protect\overset{-}{L}^{(t)}$ and $\protect\sigma ^{(t)}$\ are the
expectation and standard deviation of the route length set $L\_Set^{(t)}$\
respectively. Figure (a) shows that the expectation descends continuously and
converges to a constant value. Figure (b) shows that all standard deviations
fluctuate narrowly in a small interval compared with the average route
length, and most deviations are close to a constant value. Similar features
are observed in other instances, such as pr107, d198, pr226, d493 and so
on. Notice: The test data in this figure is pr136, the number of cities is $%
n=136$, the number of ants is $m=544$, and the maximum iteration number
is $N_{C_{\max }}=1000$.}
\label{figAveDev}
\end{figure}
\subsection{Understanding ACO Convergence from a Statistical Viewpoint}
There are three types of understanding for the convergence of ACO:
\textbf{Type 1:} With the increase of iteration steps, all ants will select
the optimal route, which has the shortest length.
\textbf{Type 2:} With the increase of iteration steps, all ants will select
a unique fixed (or stable) route, but it is possibly not optimal.
\textbf{Type 3:} With the increase of iteration steps, more than one fixed
route is selected by different ants. That is, ACO converges to a stable
set which consists of some fixed routes, not a unique route.
\bigskip
ACO converging to the optimal route is difficult in general, so the first
type is not common in practice. The second type is also not common in
practice, and it has never been observed in the authors' experiments.
Instead, the third type is common in practice. For example, Fig.\ref{figHistogram} shows
that there are always different routes selected by ants at every iteration
step, and the convergent route is not unique. \textbf{Since the third type is
common and of more practical value, the convergence of ACO refers to
this type in this paper.}
In addition, the aim of ACO is to find the shortest route length, and the
differences between the routes are not of concern. Therefore, an equivalent
statement of the third type is that ACO converges to a stable set which
consists of stable route lengths.
According to the above discussion, if ACO converges, a stable set appears,
which consists of stable route lengths. Then the histogram of this stable
set is convergent (see Fig. \ref{figHistogram}). That is, ACO converging
results in the probability sequence $\{h_{1}(x),h_{2}(x),...,h_{t}(x),...\}$
being convergent. At the same time, the sequence $%
\{h_{1}(x),h_{2}(x),...,h_{t}(x),...\}$ being convergent also results in ACO
converging, which is proved as below:
Let set $A_{t}(x)=\{j|L_{j}^{(t)}=x\}$. Then $h_{t}(x)=\frac{|A_{t}(x)|}{m}$%
. Thus, if $\{h_{1}(x),h_{2}(x),...,h_{t}(x),...\}$ is convergent, $%
|A_{t}(x)|$ becomes fixed (or stable). Since route $r_{j}^{(t)}$ represents
the route selected by ant $a_{j}$, the number $|A_{t}(x)|$ represents the
number of ants whose routes have length $x$. There are only two factors that
can cause $|A_{t}(x)|$ to become fixed. One factor is ACO being convergent.
The other factor is that some ants come into set $A_{t}(x)$ while some come
out, with the quantities of input and output being equal. The second factor
is so special that it can be ruled out in practice. Therefore, if $%
\{h_{1}(x),h_{2}(x),...,h_{t}(x),...\}$ is convergent, ACO will be
convergent.
According to the above discussion, the following conclusion is obtained:
\textbf{Conclusion 1:} ACO being convergent is equivalent to the sequence of
probability functions $\{h_{1}(x),h_{2}(x),...,h_{t}(x),...\}$ being
convergent.
This conclusion shows that the histogram of the route length set becoming
convergent is the marker of ACO being convergent (see Fig.\ref{figHistogram}%
).
\section{Using Pheromone Probability to Observe the Statistical Features of
the Route Length Set}
\subsection{The Pseudo-Probability and Pseudo-Histogram of Route Length Set}
\textbf{The definition of pseudo-probability }$h_{t}^{^{\prime }}(x)$\textbf{%
:}
At the $t$-th iteration step, every ant selects a route. The $i$-th ant $%
a_{i}$ selects route $r_{i}^{(t)}$, and $r_{i}^{(t)}$ has length $%
L_{i}^{(t)}$, where $i=1,2,...,m$. Each route $r_{i}^{(t)}$ contains an
amount of pheromone $f_{i}^{(t)}$, which is the sum of the pheromone
deposited on every edge of route $r_{i}^{(t)}$. The pheromone probability is
the ratio of pheromone; it is defined as $p_{i}^{(t)}=\frac{f_{i}^{(t)}}{\overset{m}{%
\underset{j=1}{\sum }}f_{j}^{(t)}}$.
Set $A_{t}(x)=\{j|L_{j}^{(t)}=x\}$ is the set of subscripts of the routes
whose length is a given value $x$. Based on set $A_{t}(x)$, the \textbf{%
pseudo-probability} is defined as
\begin{eqnarray*}
h_{t}^{^{\prime }}(x) &=&\underset{j\in A_{t}(x)}{\sum p_{j}^{(t)}} \\
x &\in &[L_{inf},L_{sup}]
\end{eqnarray*}%
where $L_{inf}=\min \{L_{i}^{(t)}\}$ and $L_{sup}=\max \{L_{i}^{(t)}\}$.\bigskip
Pseudo-probability $h_{t}^{^{\prime }}(x)$ is the sum of the pheromone
probabilities whose associated routes have length $x$.
\bigskip
\textbf{The pseudo-histogram of route length set }$L\_Set^{(t)}$\textbf{:}
The pseudo-histogram is also a histogram, in which the pseudo-probability $%
h_{t}^{^{\prime }}(x)$ replaces the probability $h_{t}(x)$ to estimate the
distribution of the route length set $L\_Set^{(t)}$. It is generated by the
following method:
Firstly, a two-dimensional coordinate frame is constructed; the $x$-axis
denotes the value of the route length $x$, and the $y$-axis denotes the
pseudo-probability $h_{t}^{^{\prime }}(x)$. The $x$-axis is divided into
equal intervals, and the size of each interval is denoted by $\delta $.
Secondly, the approximation of the pseudo-probability is calculated for every
interval $I$: Suppose $x$ represents the argument and $x\in I$. A counter $d$ is
attached to interval $I$, and its initial value is set to zero (i.e., $d=0$%
).\ If $L_{i}^{(t)}$\ falls into interval $I$ (i.e., $L_{i}^{(t)}\in I$),
its associated pheromone probability $p_{i}^{(t)}$ is added to $d$ (i.e., $%
d=d+p_{i}^{(t)}$). The value $d$ is the function value of the argument $x$. When
the size of interval $I$ tends to zero ideally, the value $d$ tends to the
pseudo-probability $h_{t}^{^{\prime }}(x)$.
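The binning procedure above can be sketched in Python as follows; this is a
minimal sketch, and the function name and input lists are hypothetical:

```python
def pseudo_histogram(route_lengths, pheromones, delta):
    """Approximate the pseudo-probability h'_t(x) on intervals of size delta.

    route_lengths[i] is L_i^(t); pheromones[i] is the pheromone amount
    f_i^(t) deposited on route r_i^(t).
    """
    total = sum(pheromones)
    probs = [f / total for f in pheromones]      # pheromone probabilities p_i^(t)
    lo = min(route_lengths)
    bins = {}                                    # interval index -> counter d
    for length, p_i in zip(route_lengths, probs):
        k = int((length - lo) // delta)          # interval I containing L_i^(t)
        bins[k] = bins.get(k, 0.0) + p_i         # d = d + p_i^(t)
    return bins
```

Each returned bin value plays the role of the counter $d$ attached to one
interval $I$ of the $x$-axis.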
\bigskip
The pseudo-histogram is shown in Fig.\ref{figRelation}. It is very similar
to the histogram of the probability function shown in Fig.\ref{figHistogram}.
Probability $h_{t}(x)$ represents the degree of possibility that a route with
length $x$ is selected by the ants; pseudo-probability $h_{t}^{^{\prime }}(x)$
represents the sum of the pheromone probabilities $p_{i}^{(t)}$ whose
associated routes $r_{i}^{(t)}$ have length $x$. The similarity of these two
figures suggests the conjecture that the pseudo-probability is an
approximation of the probability (i.e., $h_{t}(x)\thickapprox h_{t}^{^{\prime }}(x)$).
\begin{figure}[tbh]
\caption{\textbf{The Pseudo-Histogram of the Route Length Set }$L\_Set^{(t)}$%
\textbf{\ Calculated by the Pheromone Probability }$p_{i}^{(t)}$\textbf{.} The $%
x$-axis denotes the value of the route length $x$ and is divided into small
intervals. The value on the $y$-axis denotes the pseudo-probability $%
h_{t}^{^{\prime }}(x)$, which is the sum of all pheromone probabilities $%
p_{i}^{(t)}$ under the condition $L_{i}^{(t)}=x$. The input data of this
instance is pr136, where the number of cities is $n=136$, the number of ants
is $m=544$ and the length of the interval is $\protect\delta =2183$. This
figure is similar to Figure \protect\ref{figHistogram}, and this similarity
suggests the conjecture that the pseudo-probability $h_{t}^{^{\prime }}(x)$
is an approximation of the probability $h_{t}(x)$, which represents the
degree of possibility that a route with length $x$ is selected by the ants.}
\label{figRelation}\epsfig{file=Fig4.eps,width=12cm,}
\end{figure}
\subsection{Pseudo-Expectation $\protect\overset{-}{L}^{^{\prime }(t)}$ and
Pseudo-Deviation $\protect\sigma ^{^{\prime }(t)}$}
\textbf{Definition 6} ($\overset{-}{L}^{^{\prime }(t)}$, Pseudo-Expectation
of Route Length Set $L\_Set^{(t)}$ Calculated by Pheromone Probability):
\begin{equation}
\overset{-}{L}^{^{\prime }(t)}=\underset{x\in V}{\sum }xh_{t}^{^{\prime
}}(x)=\overset{m}{\underset{i=1}{\sum }}L_{i}^{(t)}\times p_{i}^{(t)}
\label{eq9}
\end{equation}%
where $x$ denotes the value of the route length and $V$ denotes the set of
these values.
\textbf{Definition 7} ($\sigma ^{^{\prime }(t)}$, Pseudo-Deviation
Calculated by Pheromone Probability):
\begin{equation}
\sigma ^{^{\prime }(t)}=\sqrt{\overset{m}{\underset{i=1}{\sum }p_{i}^{(t)}}%
|L_{i}^{(t)}-\overset{-}{L}^{^{\prime }(t)}|^{2}} \label{eq10}
\end{equation}
The two sequences \{$\overset{-}{L}^{^{\prime }(1)},\overset{-}{L}^{^{\prime
}(2)},...,\overset{-}{L}^{^{\prime }(t)},...$\} and \{$\sigma
^{^{\prime }(1)},\sigma ^{^{\prime }(2)},..,\sigma ^{^{\prime }(t)},...$\}
are shown in Fig.\ref{figExVar}, which is very similar to Fig.\ref{figAveDev}.
The two sequences \{$|\overset{-}{L}^{(t)}-%
\overset{-}{L}^{^{\prime }(t)}|$\} and \{$|\sigma ^{(t)}-\sigma ^{^{\prime
}(t)}|$\} are shown in Fig.\ref{figSubstraction-1}; this figure demonstrates
that $\overset{-}{L}^{(t)}\thickapprox \overset{-}{L}^{^{\prime }(t)}$ and $%
\sigma ^{(t)}\thickapprox \sigma ^{^{\prime }(t)}$. Expectation and
deviation are the two most important characteristics of a set of random data,
and Fig.\ref{figExVar} and Fig.\ref{figSubstraction-1} are two pieces of
evidence supporting the conclusion
\begin{equation*}
h_{t}(x)\thickapprox h_{t}^{^{\prime }}(x)
\end{equation*}%
where $h_{t}(x)$ and $h_{t}^{^{\prime }}(x)$ denote the probability and
pseudo-probability respectively.
\begin{figure}[tbh]
\epsfig{file=Fig5.eps,width=10cm,}
\caption{\textbf{Pseudo-Expectation }$\protect\overset{-}{L}^{^{\prime }(t)}$%
\textbf{\ and Pseudo-Deviation }$\protect\sigma ^{^{\prime }(t)}$\textbf{:}
This figure is similar to Fig.\protect\ref{figAveDev}.
\textit{This similarity provides evidence to support Conclusion 2: }$%
h_{t}(x)\thickapprox h_{t}^{^{\prime }}(x)$.\textit{\ }Notice: The input
data of this instance is pr136, the number of ants is $m=544$, the number
of cities is $n=136$ and the maximum iteration number is $N_{C_{\max }}=1000$.
The same conclusion is also found in other test data, such as pr107, d198,
pr226, d493 and so on.}
\label{figExVar}
\end{figure}
\begin{figure}[tbh]
\epsfig{file=Fig6.eps,width=10cm}
\caption{\textbf{The Sequences of Pseudo-Expectation }$\{\protect\overset{-}{L}%
^{^{\prime }(t)}\}$\textbf{\ and Pseudo-Deviation }$\{\protect\sigma
^{^{\prime }(t)}\}$\textbf{\ Are Close to }$\{\protect\overset{-%
}{L}^{(t)}\}$\textbf{\ and }$\{\protect\sigma ^{(t)}\} $\textbf{\
Respectively.} The two sequences \{$|\protect\overset{-}{L}%
^{(t)}-\protect\overset{-}{L}^{^{\prime }(t)}|$\} and \{$|\protect\sigma %
^{(t)}-\protect\sigma ^{^{\prime }(t)}|$\} are shown in this figure. It
shows that $\protect\overset{-}{L}^{(t)}\thickapprox \protect\overset{%
-}{L}^{^{\prime }(t)}$ and $\protect\sigma ^{(t)}\thickapprox \protect\sigma %
^{^{\prime }(t)}$. This evidence further supports Conclusion 2.
Notice: The experiment parameters are the same as in Fig.\protect\ref{figExVar}.}
\label{figSubstraction-1}
\end{figure}
Since the histogram and the pseudo-histogram are very similar, and $\overset{-}{L}%
^{(t)}\thickapprox \overset{-}{L}^{^{\prime }(t)}$ and $\sigma
^{(t)}\thickapprox \sigma ^{^{\prime }(t)}$, we have the following conclusion:
\textbf{Conclusion 2:} With the increase of iteration steps, the
pseudo-probability is an approximation of the probability (i.e., $%
h_{t}(x)\thickapprox h_{t}^{^{\prime }}(x)$ when $t\rightarrow \infty $).
\bigskip
Conclusion 1 shows that the probability function $h_{t}(x)$ being convergent is
equivalent to ACO being convergent. Since $h_{t}(x)\thickapprox
h_{t}^{^{\prime }}(x)$, we have
\bigskip
\textbf{Conclusion 3:} Pseudo-probability $h_{t}^{^{\prime }}(x)$ being
convergent is equivalent to ACO being convergent.
\bigskip
Pseudo-probability $h_{t}^{^{\prime }}(x)$ being convergent results in ACO
being convergent. When ACO is convergent, every route selected by the ants
is fixed. This results in the amount of pheromone deposited on each
convergent route being fixed, and its ratio (i.e., the pheromone probability $%
p_{i}^{(t)}$) being fixed too. Therefore, the function $h_{t}^{^{\prime }}(x)$
being convergent results in the pheromone probabilities $p_{i}^{(t)}$ being
convergent, where $i=1,2,...,m$. On the other hand, the pheromone
probabilities $p_{i}^{(t)}$ being convergent result in the function $h_{t}^{^{\prime }}(x)$
being convergent and ACO being convergent. Then, we have
\bigskip
\textbf{Conclusion 4: }Pseudo-probability $h_{t}^{^{\prime }}(x)$\ being
convergent is equivalent to the pheromone probability set $P\_Set^{(t)}$\ being
convergent, where the convergence of $P\_Set^{(t)}$\ means that every
pheromone probability in this set is convergent.
\textbf{Conclusion 5: }The pheromone probability set $P\_Set^{(t)}$\ being
convergent is equivalent to ACO being convergent.
\section{Entropy Convergence}
\subsection{Entropy of Pheromone and Its Convergence}
In 1948, Shannon introduced entropy \cite{Shannon} into information
theory for the first time. In information theory, entropy is a measure of
the uncertainty associated with a random system. The lower the entropy, the
lower the uncertainty of the system. Entropy is defined as
\begin{equation}
H=-\overset{n}{\underset{i=1}{\sum }}p_{i}\cdot log_{_{2}}p_{i} \label{eq11}
\end{equation}%
where $p_{i}$ denotes the probability.
At the $t$-th iteration of ACO, ant $a_{i}$ selects route $r_{i}^{(t)}$, where $%
i=1,2,...,m$. Route $r_{i}^{(t)}$ is associated with the pheromone probability $%
p_{i}^{(t)}$, which is the ratio of the pheromone assigned to route $r_{i}^{(t)}$%
. All pheromone probabilities $p_{i}^{(t)}$ comprise the set $P\_Set^{(t)}=%
\{p_{1}^{(t)},p_{2}^{(t)},...,p_{i}^{(t)},...,p_{m}^{(t)}\}$.
According to Eq.\ref{eq11}, the \textbf{entropy of pheromone} is defined as
\begin{equation}
H(P\_Set^{(t)})=-\overset{m}{\underset{i=1}{\sum }}p_{i}^{(t)}\cdot
log_{_{2}}p_{i}^{(t)} \label{eq12}
\end{equation}
It is abbreviated as
\begin{equation}
H_{t}=-\overset{m}{\underset{i=1}{\sum }}p_{i}^{(t)}\cdot
log_{_{2}}p_{i}^{(t)} \label{eq13}
\end{equation}
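A minimal Python sketch of Eq.\ref{eq13}; the function name and inputs are
hypothetical, and zero pheromone amounts are skipped by the usual convention
$0\log 0=0$:

```python
import math

def pheromone_entropy(pheromones):
    """Entropy H_t of the pheromone probability set (Eq. 13)."""
    total = sum(pheromones)
    # p_i^(t) = f_i^(t) / sum_j f_j^(t); H_t = -sum p_i^(t) log2 p_i^(t)
    return -sum((f / total) * math.log2(f / total)
                for f in pheromones if f > 0)
```

With an equal pheromone amount on all $m$ routes this yields the maximum
value $\log_{2}m$; concentrating the pheromone on a single route yields zero.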
Pheromone probability $p_{i}^{(t)}$ represents the ratio of the pheromone
assigned to route $r_{i}^{(t)}$. If every route is assigned an equal amount of
pheromone, the ants do not know which route is best and select routes
randomly. At this time, all pheromone probabilities are equal (i.e., $%
p_{i}^{(t)}=\frac{1}{m}$), the entropy of pheromone is maximal, and the
degree of uncertainty of the ants' route selection is maximal. This situation
often happens at the early iteration steps of ACO. If the pheromone is
concentrated on a few routes, most ants will select these routes with high
probability. At this time, there is low uncertainty in the ants' route
selection, and the entropy is small. This situation often happens at the
iteration steps at which ACO is close to convergence.
With the increase of iteration steps, every route selected by the ants
becomes fixed, the pheromone deposited on it becomes fixed (stable), and its
pheromone probability becomes fixed too. This results in the
sequence $\{H_{1},H_{2},...,H_{t},...\}$ converging. The test result in Fig.%
\ref{figEntropy} shows that the entropy sequence is convergent.
\begin{figure}[tbh]
\epsfig{file=Fig7.eps,width=10cm,}
\caption{\textbf{The Entropy of Pheromone Is Convergent.} The two curves show
that the two entropy sequences are approximately convergent, and the
amplitude of the fluctuation is very narrow: it is less than $0.0002/4.6$ or
$0.0004/4.9$. Notice: The test data shown are pr107 and pr136, the number of
ants is equal to the number of cities (i.e., $m=107$ and $m=136$), the
maximum iteration number is $N_{C_{\max }}=500$, and the feature of entropy
convergence is also observed in other test data, such as d198, pr226, d493
and so on. The entropy in this figure
is calculated by $H(t)=-\protect\overset{m}{\protect\underset{i=1}{\sum }}%
p_{i}^{(t)}\cdot \ln p_{i}^{(t)}$.}
\label{figEntropy}
\end{figure}
\subsection{Entropy Convergence Is A Marker of ACO Convergence}
Entropy is the most essential characteristic of a random system. Thus, the
convergence of the entropy sequence $\{H_{1},H_{2},...,H_{t},...\}$\ is the
marker of the convergence of set $P\_Set^{(t)}$. Set $P\_Set^{(t)}$ being
convergent is equivalent to ACO being convergent according to Conclusion 5.
Therefore, the convergence of the entropy sequence $\{H_{1},H_{2},...,H_{t},...\}
$ is the marker of the convergence of ACO. When ACO is convergent, set $%
P\_Set^{(t)}$ is convergent, and the entropy sequence is convergent too. If ACO
is not convergent, set $P\_Set^{(t)}$ is not convergent, and the entropy
sequence is not convergent either. On the other hand, when the entropy sequence
is convergent, set $P\_Set^{(t)}$ is very likely convergent, because entropy is
its essential characteristic, and then ACO is convergent too. If the entropy
sequence is not convergent, set $P\_Set^{(t)}$ is very likely not convergent,
and ACO is not convergent either.
\emph{Therefore, the convergence of the entropy sequence is a marker of the
minimum number of iteration steps at which ACO is convergent.}
In addition, the convergence of the entropy sequence $%
\{H_{1},H_{2},...,H_{t},...\}$ has the usual criterion $\frac{|H_{t}-H_{t-1}|}{%
H_{t-1}}<\varepsilon $ \cite{Pang}. The criterion $\frac{|H_{t}-H_{t-1}|}{%
H_{t-1}}<\varepsilon $ is a very simple way to estimate the minimum
iteration number at which ACO is possibly convergent.
\section{Application of Entropy Convergence}
\subsection{Apply Entropy Convergence as Termination Criterion of ACO}
The improved ACO algorithm with the criterion $\frac{|H_{t}-H_{t-1}|}{H_{t-1}}%
<\varepsilon $ is presented below; it is named \textbf{ACO-Entropy}
in this paper.
\textbf{Algorithm ACO-Entropy}
\textbf{Step1.} Initialize pheromone trails for all edges and put $m$ ants
at different cities. Let $t=0$, $\Delta \tau _{ij}(0)=0$ and $H_{0}=\log
_{2}m$.
\textbf{Step2. do}
\{
\ \ \ \textbf{Step2.1} $t\leftarrow t+1$.
\ \ \ \textbf{Step2.2} The ants choose next cities according to transition
probability.
\ \ \ \textbf{Step2.3} After all ants finish their travels, the pheromone is
updated.
\ \ \ \textbf{Step2.4} The pheromone probability $p_{i}^{(t)}$ and the
entropy $H_{t}$ are calculated by
\ \ \ \ \ \ \ \ \ \ \ formulae (\ref{eq6}) and (\ref{eq13}) respectively.
\}while($\frac{|H_{t}-H_{t-1}|}{H_{t-1}}\geq \varepsilon $)
\textbf{Step3.} End procedure and output result.
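The stopping rule of ACO-Entropy can be sketched as the following Python
loop; \texttt{run\_iteration} is a hypothetical callback standing in for
Steps 2.1-2.3 and returning the pheromone amounts $f_{i}^{(t)}$ of the $m$
routes:

```python
import math

def aco_entropy_loop(run_iteration, m, epsilon=0.001, max_iter=1000):
    """Iterate until |H_t - H_{t-1}| / H_{t-1} < epsilon (Step 2 of ACO-Entropy)."""
    h_prev = math.log2(m)                         # H_0 = log2(m), Step 1
    for t in range(1, max_iter + 1):
        pheromones = run_iteration(t)             # Steps 2.1-2.3 (hypothetical)
        total = sum(pheromones)
        h_t = -sum((f / total) * math.log2(f / total) for f in pheromones)
        if abs(h_t - h_prev) / h_prev < epsilon:  # entropy convergence criterion
            return t                              # Step 3: stop and output result
        h_prev = h_t
    return max_iter
```

The loop replaces the fixed iteration budget $N_{C_{\max }}$ with the entropy
criterion, which is what makes ACO-Entropy terminate earlier than plain ACO.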
\subsection{The Experiment and Comparison}
All data tested in this paper are downloaded from
http://www.iwr.uniheidelberg.de/iwr/comopt/soft/TSPLIB95/TSPLIB.html. All
algorithms in this paper run on a personal computer (CPU (2): 1.60GHz,
Memory: 480M, Software: Matlab 7.1). All parameters are set as below:
$\alpha =1$, $\beta =8$, $\rho =0.4$, $Q=100$, $\tau _{ij}(0)=1$, $m=n$, $%
\varepsilon =0.001$, $N_{C_{\max }}=1000$.
To test the performance of ACO-Entropy, the two algorithms ACO and
ACO-Entropy are tested in this paper, where ACO refers to Ant-Cycle described
in Section II, a commonly used standard algorithm.
Table 1 and Fig.\ref{figComparison} show that ACO-Entropy is faster than
ACO by factors of 2-6 under the same conditions, while nearly the same
solution quality is obtained.
\begin{figure}[tbh]
\epsfig{file=Fig8.eps,width=10cm,}
\caption{\textbf{Comparison of the Running Speed of ACO and ACO-Entropy}.
ACO-Entropy is faster than ACO by factors of 2-6 under the same conditions,
and nearly the same solution quality is obtained (see Table 1), where ACO
refers to Ant-Cycle described in Section II.}
\label{figComparison}
\end{figure}
\begin{center}
{\small \ }%
\begin{tabular}{|c|c|c|c|c|}
\hline
{\small Input} & {\small Number} & \multicolumn{3}{|c|}{\small ACO-Entropy}
\\ \hline
{\small Data} & {\small of Test} & {\small Average Solution} & {\small %
Average Time(s)} & {\small Iteration Number} \\ \hline
{\small pr107} & {\small 10} & {\small 46294} & {\small 163.0804} & {\small %
189} \\ \hline
{\small pr136} & {\small 10} & {\small 108467} & {\small 173.3620} & {\small %
131} \\ \hline
{\small d198} & {\small 10} & {\small 17135} & {\small 447.4071} & {\small %
100} \\ \hline
{\small pr226} & {\small 10} & {\small 84718} & {\small 2466.1} & {\small 293%
} \\ \hline
{\small d493} & {\small 2} & {\small 39851} & {\small 16405} & {\small 155}
\\ \hline
\multicolumn{2}{|c}{} & \multicolumn{3}{|c|}{\small ACO} \\ \hline
{\small pr107} & {\small 10} & {\small 45973} & {\small 431.496} & {\small %
500} \\ \hline
{\small pr136} & {\small 10} & {\small 102608} & {\small 660.918} & {\small %
500} \\ \hline
{\small d198} & {\small 10} & {\small 16891} & {\small 2832.5} & {\small 500}
\\ \hline
{\small pr226} & {\small 10} & {\small 84514} & {\small 4211.8} & {\small 500%
} \\ \hline
{\small d493} & {\small 2} & {\small 38926} & {\small 53007} & {\small 200}
\\ \hline
\multicolumn{5}{|c|}{%
\begin{tabular}{l}
{\small Table 1. Performance Comparison of ACO and ACO-Entropy:} \\
{\small This table shows that ACO-Entropy is faster than ACO by factors of
2-6.} \\
{\small The solution qualities of ACO-Entropy and ACO are nearly the same.}%
\end{tabular}%
} \\ \hline
\end{tabular}
\end{center}
\section{Conclusion}
The convergence of ACO is the foundation of ACO, but it has not been studied
much so far. The convergence under some special conditions has been studied,
from viewpoints such as graph theory and Markov processes. It is
interesting to find a new viewpoint to study ACO convergence under general
conditions. The aim of this paper is to explore a new viewpoint for studying
ACO convergence under general conditions and to find a new marker of ACO
convergence.
Since ACO is a kind of probabilistic algorithm, the features of its
convergence may be hidden in some statistical properties. Thus, the analysis
of statistical properties is the starting point of this paper. Along this
line, five equivalent statements of ACO convergence are found in this
paper (see Conclusions 1-5). These equivalent statements lead to the
following conclusion:
ACO may not converge to the optimal solution in practice, but its entropy
is convergent under general conditions.
\begin{acknowledgments}
The first author thanks his teacher Prof. G.-C. Guo, because his main study
methods were learned from Guo's laboratory of quantum information. The first
author thanks Prof. Z. F. Han and Prof. Z.-W. Zhou, working at Guo's
laboratory, for their help up till now. The first author thanks Prof. J.
Zhang, Prof. Q. Li and Prof. J. Zhou for their help. The authors thank Dr.
Marek Gutowski at the Institute of Physics, Poland, for pointing out a
careless error in one reference. The authors thank Prof. Walter Gutjahr; his
encouragement gave them a great sense of uplift, since he was the first to
study ACO convergence.
\end{acknowledgments}
\section{Introduction}
One convenient way to model uncertain dynamical systems is to describe them as Markov chains. These have been studied in great detail, and their properties are well known. However, in many practical situations, it remains a challenge to accurately identify the transition probabilities in the Markov chain: the available information about physical systems is often imprecise and uncertain. Describing a real-life dynamical system as a Markov chain will therefore often involve unwarranted precision, and may lead to conclusions not supported by the available information.
\par
For this reason, it seems quite useful to perform probabilistic robustness studies, or sensitivity analyses, for Markov chains. This is especially relevant in decision-making applications. Many researchers in Markov Chain Decision Making \citep{white1994,harmanec2002,nilim2005,itoh2007}---inspired by \citeauthor{satia1973}'s~\citeyearpar{satia1973} original work---have paid attention to this issue of `imprecision' in Markov chains.
\par
Work on the more mathematical aspects of modelling such imprecision in Markov chains was initiated in the early 1980s by \citeauthor{hartfiel1994} (see \cite{hartfiel1991,hartfiel1994,hartfiel1998}), under the name `Markov set-chains'. \Citeauthor{hartfiel1991}'s work seems to have been unknown to \citet{kozine2002}, who approached the subject from a different angle.
Armed with linear programming techniques, these authors performed an experimental study of the limit behaviour of Markov chains with uncertain transition probabilities.
More recently, \citet{skulj2006,skulj2007} has also contributed to a formal study of the time evolution and limit behaviour of such systems. Markov set-chains can also be seen as special cases of so-called \emph{credal networks} under strong independence \cite{cozman2000,cozman2005}.
\par
All these approaches use \newconcept{sets of probabilities} to deal with the imprecision in the transition probabilities. When these probabilities are not well known, they are assumed to belong to certain sets, and robustness analyses are performed by allowing the transition probabilities to vary over such sets. This should be contrasted with more common ways of performing a sensitivity analysis: looking at small deviations from a reference model and evaluating derivatives of important variables in this reference point.
\par
As we shall see, the sets of probabilities approach leads to a number of computational difficulties. But we will show that they can be overcome by tackling the problem from another angle, using lower and upper expectations, rather than sets of probabilities. Our new method also makes it fairly easy to formulate and prove convergence (or Perron--Frobenius-like) results for Markov chains with uncertain transition probabilities that hold under weaker conditions than the ones found by \citet{hartfiel1991,hartfiel1998} and \citet{skulj2007}. We shall see that our condition for this convergence, which requires that the imprecise Markov chain should be \emph{regularly absorbing}, is implied by, and even strictly weaker than, both \citeauthor{hartfiel1998}'s \emph{product scrambling} and \citeauthor{skulj2007}'s \emph{regularity} conditions.
\par
In the rest of this Introduction, we give an overview of the theory of classical Markov chains and formulate the classical Perron--Frobenius theorem. Then, in Sections~\ref{sec:towards} and~\ref{sec:sensitivity-analysis}, we introduce imprecise Markov chains and generalise many aspects of the classical theory. In Section~\ref{sec:accessibility}, we briefly discuss accessibility relations, which allows us to give a nice interpretation to a number of conditions that will turn out to be sufficient for a Perron--Frobenius-like convergence result. In Section~\ref{sec:convergence}, we generalise the classical Perron--Frobenius theorem, and explore the relation of our generalisation with previous work in the literature. We discuss a number of theoretical and numerical examples in Section~\ref{sec:examples}, and we give perspectives for further research in the Conclusions. Proofs of theorems and propositions have been relegated to an appendix.
\subsection{A short analysis of classical Markov chains}
Consider a finite Markov chain in discrete time, where at consecutive times $n=1,2,3,\dots,N$, $N\in\naturals$, the \newconcept{state}~$X(n)$ of a system can assume any value in a finite set~$\states$. Here $\naturals$~denotes the set of non-zero natural numbers, and~$N$ is the time horizon. The time evolution of such a system can be modelled as if it traversed a path in a so-called \newconcept{event tree}; see \citet{shafer1996a}. An example of such a tree for $\states=\{a,b\}$ and $N=3$ is given in Figure~\ref{fig:markov-event}.
\par
The \newconcept{situations}, or nodes, of the tree have the form $\vtuple{x}{k}\eqdef(x_1,\ldots,x_k)\in\states^k$, $k=0,1,\dots,N$. For $k=0$ there is some abuse of notation as we let $\states^0\eqdef\{\init\}$, where~$\init$ is the so-called \newconcept{initial situation}, or root of the tree. In the cuts\footnote{A \newconcept{cut} $V$ of a situation $s$ is a collection of descendants $v$ of $s$ such that every path (from root to leaves) through $s$ goes through exactly one $v$ in $V$.} $\states^n$ of~$\init$, the value of the state $X(n)$ at time $n$ is revealed.
\par
\begin{figure}[ht]
\centering\footnotesize
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=20em]
\tikzstyle{level 2}=[sibling distance=10em]
\tikzstyle{level 3}=[sibling distance=5em]
\node[root] (root) {} [grow=down,level distance=8ex]
child {node[nonterminal] (a) {$a\vphantom{)}$}
child {node[nonterminal] (aa) {$(a,a)$}
child {node[nonterminal] (aaa) {$(a,a,a)$}}
child {node[nonterminal] (aab) {$(a,a,b)$}}
}
child {node[nonterminal] (ab) {$(a,b)$}
child {node[nonterminal] (aba) {$(a,b,a)$}}
child {node[nonterminal] (abb) {$(a,b,b)$}}
}
}
child {node[nonterminal] (b) {$b\vphantom{)}$}
child {node[nonterminal] (ba) {$(b,a)$}
child {node[nonterminal] (baa) {$(b,a,a)$}}
child {node[nonterminal] (bab) {$(b,a,b)$}}
}
child {node[nonterminal] (bb) {$(b,b)$}
child {node[nonterminal] (bba) {$(b,b,a)$}}
child {node[nonterminal] (bbb) {$(b,b,b)$}}
}
};
\draw[cut] (b) -- +(1,0);
\draw[cut] (b) -- (a) -- +(-2,0) node[left,local] {$\states^1$};
\draw[cut] (bb) -- +(1,0);
\draw[cut] (bb) -- (ba) -- (ab) -- (aa) -- +(-1,0) node[left,local] {$\states^2$};
\end{tikzpicture}
\caption{
The event tree for the time evolution of a system that can be in two states, $a$~and~$b$, and can change state at time instants $n=1,2$.
Also depicted are the respective cuts~$\states^1$ and~$\states^2$ of\/~$\init$ where the states at times~$1$ and~$2$ are revealed.}
\label{fig:markov-event}
\end{figure}
\par
In a classical analysis, it is generally assumed that we have: (i) a probability distribution over the initial state $X(1)$, in the form of a probability mass function $m_1$ on $\states$; and (ii) for each situation $\vtuple{x}{n}$ that the system can be in at time $n$, a~probability distribution over the next state $X(n+1)$, in the form of a probability mass function $q(\cdot\vert\vtuple{x}{n})$ on $\states$. This means that in each non-terminal situation\footnote{A \newconcept{non-terminal} situation is a node of the tree that is not a leaf.} $\vtuple{x}{n}$ of the event tree, we have a \emph{local} probability model telling us about the probabilities of each of its child nodes. This turns the event tree into a so-called \newconcept{probability tree}; see \citetopt[Chapter~3]{shafer1996a} and \citetopt[Section~1.9]{kemeny1976}.
\par
The probability tree for a Markov chain is special, because the \newconcept{Markov Condition} states that when the system jumps from state $X(n)=x_n$ to a new state $X(n+1)$, where the system goes to will only depend on the state $X(n)=x_n$ the system was in at time $n$, and not on its states $X(k)=x_k$ at previous times $k=1,2,\dots,n-1$. In other words:
\begin{equation}\label{eq:markov-condition-precise}
q(\cdot\vert\ntuple{x}{n})
=q_n(\cdot\vert x_n),
\quad\vtuple{x}{n}\in\states^n,\,n=1,\dots,N-1,
\end{equation}
where $q_n(\cdot\vert x_n)$ is some probability mass function on~$\states$. The Markov chain may be non-stationary, as the transition probabilities on the right-hand side in Eq.~\eqref{eq:markov-condition-precise} are allowed to depend explicitly on the time $n$. Figure~\ref{fig:markov-probability} gives an example of a probability tree for a Markov chain with $\states=\{a,b\}$ and $N=3$.
\begin{figure}[ht]
\centering\footnotesize
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=20em]
\tikzstyle{level 2}=[sibling distance=10em]
\tikzstyle{level 3}=[sibling distance=5em]
\node[root] (root) {} [grow=down,level distance=8ex]
child {node[nonterminal] (a) {$a\vphantom{)}$}
child {node[nonterminal] (aa) {$(a,a)$}
child {node[nonterminal] (aaa) {$(a,a,a)$}}
child {node[nonterminal] (aab) {$(a,a,b)$}}
}
child {node[nonterminal] (ab) {$(a,b)$}
child {node[nonterminal] (aba) {$(a,b,a)$}}
child {node[nonterminal] (abb) {$(a,b,b)$}}
}
}
child {node[nonterminal] (b) {$b\vphantom{)}$}
child {node[nonterminal] (ba) {$(b,a)$}
child {node[nonterminal] (baa) {$(b,a,a)$}}
child {node[nonterminal] (bab) {$(b,a,b)$}}
}
child {node[nonterminal] (bb) {$(b,b)$}
child {node[nonterminal] (bba) {$(b,b,a)$}}
child {node[nonterminal] (bbb) {$(b,b,b)$}}
}
};
\draw[local,thick] (root) +(190:1.5em) arc (190:350:1.5em);
\draw[local,thick] (b) +(210:2em) arc (210:330:2em);
\draw[local,thick] (a) +(210:2em) arc (210:330:2em);
\draw[local,thick] (bb) +(230:2.25em) arc (230:310:2.25em);
\draw[local,thick] (ba) +(230:2.25em) arc (230:310:2.25em);
\draw[local,thick] (ab) +(230:2.25em) arc (230:310:2.25em);
\draw[local,thick] (aa) +(230:2.25em) arc (230:310:2.25em);
\path (root) +(275:2.35em) node[local] {$m_1$};
\path (a) +(270:2.95em) node[local] {$q_1(\cdot\vert a)$};
\path (b) +(270:2.95em) node[local] {$q_1(\cdot\vert b)$};
\path (aa) +(300:2.8em) node[local,above right] {$q_2(\cdot\vert a)$};
\path (bb) +(300:2.8em) node[local,above right] {$q_2(\cdot\vert b)$};
\path (ba) +(300:2.8em) node[local,above right] {$q_2(\cdot\vert a)$};
\path (ab) +(300:2.8em) node[local,above right] {$q_2(\cdot\vert b)$};
\end{tikzpicture}
\caption{The probability tree for the time evolution of a~Markov chain that can be in two states, $a$ and $b$, and can change state at each time instant $n=1,2$.}
\label{fig:markov-probability}
\end{figure}
\par
With the local probability mass functions $m_1$ and $q_n(\cdot\vert x_n)$ we associate the linear real-valued \newconcept{expectation functionals} $\ex_1$ and $\ex_n(\cdot\vert x_n)$, given, for all real-valued maps $h$ on $\states$, by
\begin{equation}
\ex_1(h)
\eqdef\smashoperator{\sum_{x_1\in\states}}h(x_1)m_1(x_1)
\quad\text{ and }\quad
\ex_n(h\vert x_n)
\eqdef\smashoperator{\sum_{x_{n+1}\in\states}}
h(x_{n+1})q_n(x_{n+1}\vert x_n).
\end{equation}
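Numerically, both functionals are plain weighted sums. A minimal Python sketch (the two-state chain, the mass function $m_1$ and the map $h$ below are invented for illustration):

```python
# States a, b are encoded as indices 0, 1; mass functions and
# real-valued maps on the state set are plain lists of floats.
def expectation(h, m):
    """E(h) = sum_x h(x) m(x), for a mass function m on the state set."""
    return sum(hx * mx for hx, mx in zip(h, m))

m1 = [0.5, 0.5]   # an initial mass function m_1 (invented)
h = [1.0, 3.0]    # a real-valued map h on the states (invented)
print(expectation(h, m1))  # 2.0
```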
Throughout, we will formulate our results using expectations, rather than probabilities.\footnote{Arguments for the `expectation approach' to probability theory were given by \citet{whittle2000}. This approach is also central in the work of \citet{finetti19745}. For classical, precise probabilities, whether we use the language of probability measures, or that of expectation operators, seems to be a matter of personal preference, as the two approaches are formally equivalent. But for the imprecise-probability models we introduce in Section~\ref{sec:towards}, it was argued by \citet{walley1991} that the (lower and upper) expectation language is mathematically superior and more expressive.} Our reasons for doing so are not merely aesthetic, or a matter of personal preference; they will become clear as we go along.
\par
In any probability tree, probabilities and expectations can be calculated very efficiently using backwards recursion.\footnote{See Chapter~3 of \citeauthor{shafer1996a}'s book~\citep{shafer1996a} on causal reasoning in probability trees, which contains a number of propositions about calculating probabilities and expectations in probability trees. That such backwards recursion is possible, was arguably discovered by Christiaan Huygens in the middle of the 17-th century. \citetopt[Appendix~A]{shafer1996a} discusses \citeauthor{huygens16567}'s treatment~\citep[Appendix~VI]{huygens16567} of a special case of the so-called \newconcept{Problem of Points}, where Huygens draws what is probably the first recorded probability tree, and solves the problem by backwards calculation of expectations in the tree.} Suppose that in situation~$\vtuple{x}{n}$, we want to calculate the conditional expectation $\ex(f\vert\vtuple{x}{n})$ of some real-valued map~$f$ on~$\states^N$ that may depend on the values of the states $X(1)$, \dots, $X(N)$. Let us indicate briefly how this is done, also taking into account the simplifications due to the Markov Condition~\eqref{eq:markov-condition-precise}.
\par
For these simplifications, a prominent part will be played by the so-called \newconcept{transition operators}\footnote{The operators $\trans_n$ are also called the \newconcept{generators} of the Markov process; see \citet{whittle2000}.} $\trans_n$ and $\ttrans_n$. Consider the linear space $\allgambles(\states)$ of all real-valued maps on $\states$. Then the linear operator (transformation) $\trans_n\colon\allgambles(\states)\to\allgambles(\states)$ is defined by
\begin{equation}\label{eq:trans-linear}
\trans_nh(x_n)
\eqdef\ex_n(h\vert x_n)
=\smashoperator{\sum_{x_{n+1}\in\states}}h(x_{n+1})q_n(x_{n+1}\vert x_n)
\end{equation}
for all real-valued maps $h$ on $\states$. In other words, $\trans_nh$ is the real-valued map on $\states$ whose value $\trans_nh(x_n)$ in ${x_n\in\states}$ is the conditional expectation of the random variable $h(X(n+1))$, given that the system is in state~$x_n$ at time~$n$. More generally, we also consider the linear maps~$\ttrans_n$ from $\allgambles(\states^{n+1})$ to $\allgambles(\states^n)$, defined by
\begin{equation}\label{eq:trans-linear-general}
\begin{aligned}
\ttrans_nf\ftuple{x}{n}
\eqdef{}&{}\trans_n(f(\ntuple{x}{n},\cdot))(x_n)\\
={}&{}\ex_n(f(\ntuple{x}{n},\cdot)\vert x_n)
=\smashoperator{\sum_{x_{n+1}\in\states}}f(\ntuple{x}{n},x_{n+1})
q_{n}(x_{n+1}\vert x_n)
\end{aligned}
\end{equation}
for all $\vtuple{x}{n}\in\states^n$ and all real-valued maps $f$ on $\states^{n+1}$.\footnote{The $\ttrans_n$ can be seen as projection operators, since (with some abuse of notation) $\ttrans_n\circ\ttrans_n=\ttrans_n$.}
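Concretely, $\trans_n$ acts on a map $h$ like a row-stochastic matrix acting on a column vector. A minimal Python sketch (the matrix $Q$, with entries $q_n(y\vert x)$ indexed as $Q[x][y]$, and the map $h$ are invented for illustration):

```python
def apply_T(Q, h):
    """(T_n h)(x) = sum_y h(y) q_n(y|x), with Q[x][y] = q_n(y|x)
    a row-stochastic matrix (each row sums to one)."""
    return [sum(q_xy * hy for q_xy, hy in zip(row, h)) for row in Q]

Q = [[0.5, 0.5],    # q_n(.|a)  (invented)
     [0.25, 0.75]]  # q_n(.|b)  (invented)
h = [1.0, 3.0]
print(apply_T(Q, h))  # [2.0, 2.5]
```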
\par
We begin our illustration of backwards recursion by calculating $\ex(f\vert\ntuple{x}{n})$ for the case $n=N-1$. Here
\begin{align}
\ex(f\vert\ntuple{x}{N-1})
&=\ex(f(\ntuple{x}{N-1},\cdot)\vert\ntuple{x}{N-1})\notag\\
&=\smashoperator{\sum_{x_N\in\states}}f(\ntuple{x}{N-1},x_N)q(x_N\vert\ntuple{x}{N-1})\notag\\
&=\smashoperator{\sum_{x_N\in\states}}f(\ntuple{x}{N-1},x_N)q_{N-1}(x_N\vert x_{N-1})
=\ttrans_{N-1}f\ftuple{x}{N-1},
\end{align}
where the third equality follows from the Markov Condition~\eqref{eq:markov-condition-precise}, and the fourth from Eq.~\eqref{eq:trans-linear-general}. Using similar arguments for $n=N-2$, we derive from the Law of Iterated Expectations\footnote{Also known as the Rule of Total Expectation, or the Rule of Total Probability, or the Conglomerative Property; see, e.g., \citetopt[Section~5.3]{whittle2000} or \citet{finetti19745}.} that
\begin{equation}
\ex(f\vert\ntuple{x}{N-2})
=\ex(\ex(f(\ntuple{x}{N-2},\cdot,\cdot)\vert\ntuple{x}{N-2},\cdot)
\vert\ntuple{x}{N-2})
=\ttrans_{N-2}\ttrans_{N-1}f\ftuple{x}{N-2}.
\end{equation}
Repeating this argument leads to the backwards recursion formulae
\begin{equation}\label{eq:backpropagation-precise-1}
\ex(f\vert\ntuple{x}{n})
=\ttrans_n\ttrans_{n+1}\dots\ttrans_{N-1}f\ftuple{x}{n}
\end{equation}
for $n=1,\dots,N-1$, while for $n=0$, we get
\begin{equation}\label{eq:backpropagation-precise-2}
\ex(f)\eqdef\ex(f\vert\init)=\ex_1(\ttrans_1\ttrans_2\dots\ttrans_{N-1}f).
\end{equation}
In these formulae, $f$ is any real-valued map on $\states^N$.
In Figure~\ref{fig:communicating-vessels}, we give a graphical representation of calculations using the backwards recursion formulae~\eqref{eq:backpropagation-precise-1} and~\eqref{eq:backpropagation-precise-2}, for a two-state stationary Markov chain.
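The backwards recursion can be sketched in a few lines of Python (a toy implementation that stores each intermediate map on $\states^n$ explicitly; the two-state chain with uniform transition probabilities is invented for illustration):

```python
from itertools import product

def joint_expectation(f, m1, Qs):
    """E(f) via backwards recursion, E(f) = E_1(T~_1 T~_2 ... T~_{N-1} f).
    f maps N-tuples of state indices to reals; Qs[n-1][x][y] = q_n(y|x)."""
    S = range(len(m1))
    N = len(Qs) + 1
    g = {xs: f(*xs) for xs in product(S, repeat=N)}  # a map on S^N
    for n in range(N - 1, 0, -1):                    # apply T~_n, n = N-1,...,1
        Q = Qs[n - 1]
        g = {xs: sum(Q[xs[-1]][y] * g[xs + (y,)] for y in S)
             for xs in product(S, repeat=n)}         # now a map on S^n
    return sum(m1[x] * g[(x,)] for x in S)           # finally apply E_1

# Example: E(X(3)) for a two-state chain with uniform transitions (invented).
U = [[0.5, 0.5], [0.5, 0.5]]
print(joint_expectation(lambda x1, x2, x3: float(x3), [1.0, 0.0], [U, U]))
```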
\par
\begin{figure}[ht]
\centering
\begin{tikzpicture}[x={(1.2em,0ex)},y={(0em,1ex)},xscale=.99]
\newcommand{node[rectangle,inner sep=0pt,fill,minimum width=.25em,minimum height=.5pt] {}}{node[rectangle,inner sep=0pt,fill,minimum width=.25em,minimum height=.5pt] {}}
\begin{scope}[yshift=0ex]\small
\draw[yellow!50!red!33.3333!blue!25!green!50!black,fill=yellow!50!red!33.3333!blue!25!green!50] (.5,2.71875) rectangle ++(29,-2.71875) node[yshift=1.1ex,above,pos=.5,text=black] {$\ex(f)=\ex_1(\ttrans_1\ttrans_2f)$};
\draw (0,0) -- (30,0);
\foreach \x in {0,30} {
\draw[->] (\x,0) -- (\x,5);
\foreach \y in {1,2,3,4} \draw (\x,\y) node[rectangle,inner sep=0pt,fill,minimum width=.25em,minimum height=.5pt] {};
}
\end{scope}
\begin{scope}[yshift=-8ex]\small
\draw[yellow!50!red!50!black,fill=yellow!50!red!50] (.5,3) rectangle ++(14,-3) node[yshift=1.2ex,above,pos=.5,text=black] {\small$\ex(f\vert a)=\ttrans_1\ttrans_2f(a)$};
\draw[blue!50!green!50!black,fill=blue!50!green!50] (15.5,1.875) rectangle ++(14,-1.875) node[yshift=1ex,above,pos=.5,text=black] {$\ex(f\vert b)=\ttrans_1\ttrans_2f(b)$};
\draw (0,0) -- (30,0);
\foreach \x in {0,30} {
\draw[->] (\x,0) -- (\x,5);
\foreach \y in {1,2,3,4} \draw (\x,\y) node[rectangle,inner sep=0pt,fill,minimum width=.25em,minimum height=.5pt] {};
}
\end{scope}
\begin{scope}[yshift=-16ex]\small
\draw[yellow!50!black,fill=yellow!50] (.5,3.5) rectangle ++(6.5,-3.5) node[yshift=1.3ex,above,pos=.5,text=black] {$\ex(f\vert a,a)=\ttrans_2f(a,a)$};
\draw[red!50!black,fill=red!50] (8,2.5) rectangle ++(6.5,-2.5) node[yshift=1ex,above,pos=.5,text=black] {$\ex(f\vert a,b)=\ttrans_2f(a,b)$};
\draw[blue!50!black,fill=blue!50] (15.5,2) rectangle ++(6.5,-2) node[yshift=1ex,above,pos=.5,text=black] {$\ex(f\vert b,a)=\ttrans_2f(b,a)$};
\draw[green!50!black,fill=green!50] (23,1.75) rectangle ++(6.5,-1.75) node[yshift=1ex,above,pos=.5,text=black] {$\ex(f\vert b,b)=\ttrans_2f(b,b)$};
\draw (0,0) -- (30,0);
\foreach \x in {0,30} {
\draw[->] (\x,0) -- (\x,5);
\foreach \y in {1,2,3,4} \draw (\x,\y) node[rectangle,inner sep=0pt,fill,minimum width=.25em,minimum height=.5pt] {};
}
\end{scope}
\begin{scope}[yshift=-24ex]\small
\draw[yellow!75!black,fill=yellow!75] (.5,4) rectangle ++(2.75,-4) node[yshift=1.6ex,above,pos=.5,text=black] {$f(a,a,a)$};
\draw[yellow!25!black!50,fill=yellow!25] (4.25,3) rectangle ++(2.75,-3) node[yshift=1ex,above,pos=.5,text=black] {$f(a,a,b)$};
\draw[red!75!black,fill=red!75] (8,2) rectangle ++(2.75,-2) node[yshift=1ex,above,pos=.5,text=black] {$f(a,b,a)$};
\draw[red!25!black!50,fill=red!25] (11.75,3) rectangle ++(2.75,-3) node[yshift=1ex,above,pos=.5,text=black] {$f(a,b,b)$};
\draw[blue!75!black,fill=blue!75] (15.5,2.5) rectangle ++(2.75,-2.5) node[yshift=1ex,above,pos=.5,text=black] {$f(b,a,a)$};
\draw[blue!25!black!50,fill=blue!25] (19.25,1.5) rectangle ++(2.75,-1.5) node[yshift=1ex,above,pos=.5,text=black] {$f(b,a,b)$};
\draw[green!75!black,fill=green!75] (23,.5) rectangle ++(2.75,-.5) node[yshift=1ex,above,pos=.5,text=black] {$f(b,b,a)$};
\draw[green!25!black!50,fill=green!25] (26.75,3) rectangle ++(2.75,-3) node[yshift=1ex,above,pos=.5,text=black] {$f(b,b,b)$};
\draw (0,0) -- (30,0);
\foreach \x in {0,30} {
\draw[->] (\x,0) -- (\x,5);
\foreach \y in {1,2,3,4} \draw (\x,\y) node[rectangle,inner sep=0pt,fill,minimum width=.25em,minimum height=.5pt] {};
}
\end{scope}
\end{tikzpicture}
\caption{Backwards calculation of the conditional and joint expectations of a real-valued map~$f$ on~$\states^3$, for a stationary Markov chain with state set $\states=\{a,b\}$, and a uniform probability mass function attached to each non-terminal situation.}
\label{fig:communicating-vessels}
\end{figure}
\par
For instance, if we let $f$ run over the indicator functions $\ind{\{\vtuple{x}{N}\}}$ of the singletons $\{\vtuple{x}{N}\}$, Formulae \eqref{eq:backpropagation-precise-1} and~\eqref{eq:backpropagation-precise-2} allow us to calculate the joint probability mass function $p$ defined by $p\ftuple{x}{N}=\ex(\ind{\{\vtuple{x}{N}\}})$ for all the variables $X(1)$, \dots, $X(N)$.
We can also use them to find the conditional mass functions $p_n(\cdot\vert x_n)$ and $p(\cdot\vert\ntuple{x}{n})$ defined by $p_n(\ntuple[n+1]{x}{N}\vert x_n)=p(\ntuple[n+1]{x}{N}\vert\ntuple{x}{n})=\ex(\ind{\{\vtuple{x}{N}\}}\vert\ntuple{x}{n})$.
\subsection{The Perron--Frobenius Theorem for classical Markov chains}
We are especially interested in the case of a \newconcept{stationary} Markov chain, and in the (marginal) expectation $\ex_n(h)$ of a real-valued map $h$ (on $\states$) that depends only on the state $X(n)$ at time~$n$. Here, Eq.~\eqref{eq:backpropagation-precise-2} becomes
\begin{equation}\label{eq:backpropagation-precise-3}
\ex_n(h)\eqdef\ex_1(\trans^{n-1}h),
\end{equation}
where $\trans\eqdef\trans_1=\trans_2=\dots=\trans_{N-1}$, and where we denote by~$\trans^k$ the $k$-fold composition of $\trans$ with itself; in particular, $\trans^0$ is the identity operator $\id$ on $\allgambles(\states)$. If we let $h=\ind{\{x_n\}}$, this allows us to find the probability mass function $m_n(x_n)=\ex_n(\ind{\{x_n\}})$, $x_n\in\states$ for the state $X(n)$.
\par
By the way, the linear transition operator $\trans$ is very closely related to the so-called \newconcept{Markov}, or \newconcept{transition}, \newconcept{matrix} $\transmat$ of the stationary Markov chain, whose elements for all $(x,y)\in\states^2$ are defined by
\begin{equation}
T_{xy}
\eqdef
q(y\vert x)=\trans\ind{\{y\}}(x).
\end{equation}
Any such transition matrix satisfies the conditions $T_{xy}\geq0$ and $\sum_{z\in\states}T_{xz}=1$. We will henceforth call \newconcept{transition matrix} any matrix satisfying these properties.\footnote{In the literature we also find the term \newconcept{stochastic matrix}, see \citet{hartfiel1998}, for instance.}
The probability counterpart of the expectation formula~\eqref{eq:backpropagation-precise-3} can then be written in matrix form as:
\begin{equation}\label{eq:back-propagation-precise-4}
\dismat_n=\dismat_1\transmat^{n-1},
\end{equation}
where, here and further on, we also use the notation $\dismat_n$ for the row vector whose components are the probabilities $m_n(x_n)$, $x_n\in\states$.
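This matrix form can be sketched directly (the transition matrix and the initial row vector below are invented for illustration):

```python
def mat_vec(m, T):
    """Row vector times transition matrix: (m T)(y) = sum_x m(x) T[x][y]."""
    return [sum(m[x] * T[x][y] for x in range(len(m)))
            for y in range(len(T[0]))]

def marginal(m1, T, n):
    """m_n = m_1 T^{n-1}, via n-1 successive vector-matrix products."""
    m = list(m1)
    for _ in range(n - 1):
        m = mat_vec(m, T)
    return m

T = [[0.9, 0.1],   # invented transition matrix
     [0.5, 0.5]]
print(marginal([1.0, 0.0], T, 3))  # m_3 = m_1 T^2, approximately [0.86, 0.14]
```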
\par
Under some restrictions on the transition operator $\trans$, the classical Perron--Frobenius Theorem then tells us that, as $n$ (as well as the time horizon $N$) recedes to infinity, this probability mass function $m_n$ converges to some limit, independently of the initial probability mass function $m_1$; see \citetopt[Theorem~4.1.6]{kemeny1976} and \citetopt[Chapter~6]{luenberger1979}. In terms of expectation functionals and transition operators:
\begin{theorem}[Classical Perron--Frobenius Theorem, Expectation Form]\label{theo:perron-frobenius-classical}
Consider a stationary Markov chain with finite state set $\states$ and transition operator $\trans$.
Suppose that\/ $\trans$ is regular, meaning that there is some $k>0$ such that ${\min\trans^k\ind{\{x\}}>0}$ for all~$x$ in~$\states$.\footnote{This means that there is a $k>0$ such that all elements of the $k$-th power $\transmat^k$ of the transition matrix $\transmat$ are (strictly) positive. Matrices with this property are sometimes called \newconcept{regular} as well, but this same name is also used for other matrix properties. Another name for this property is `\newconcept{primitive}' \cite{hartfiel1998}.}
Then for every initial expectation operator $\ex_1$, the expectation operator $\ex_n=\ex_1\circ\trans^{n-1}$ for the state at time $n$ converges point-wise to the same limit expectation operator $\ex_\infty$:
\begin{equation}
\smashoperator{\lim_{n\to\infty}}\ex_n(h)
=\smashoperator{\lim_{n\to\infty}}\ex_1(\trans^{n-1}h) \defeq \ex_\infty(h)
\quad\text{ for all $h\in\allgambles(\states)$}.
\end{equation}
Moreover, the limit expectation~$\ex_\infty$ is the only $\trans$-invariant expectation on $\allgambles(\states)$, in the sense that $\ex_\infty=\ex_\infty\circ\trans$.
\end{theorem}
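The theorem is easy to illustrate numerically: iterating a regular transition matrix washes out the initial condition. In the sketch below (the matrix and both initial mass functions are invented), two different starting points are driven to the same invariant mass function:

```python
def mat_vec(m, T):
    """Row vector times transition matrix: (m T)(y) = sum_x m(x) T[x][y]."""
    return [sum(m[x] * T[x][y] for x in range(len(m)))
            for y in range(len(T[0]))]

T = [[0.9, 0.1],   # regular: every entry of T itself is already positive
     [0.5, 0.5]]
ma, mb = [1.0, 0.0], [0.0, 1.0]   # two very different initial conditions
for _ in range(200):
    ma, mb = mat_vec(ma, T), mat_vec(mb, T)
print(ma)  # both approach the unique T-invariant mass function [5/6, 1/6]
print(mb)
```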
\section{Towards imprecise Markov chains}\label{sec:towards}
The treatment above rests on the assumption that the initial probabilities and the transition probabilities are precisely known. If such is not the case, then it seems necessary to perform some kind of sensitivity analysis, in order to find out to what extent any conclusions we might reach using such a treatment, depend on the actual values of these probabilities.
\par
A very general way of performing a sensitivity analysis for probabilities involves calculations with closed convex sets of probability mass functions, also called \newconcept{credal sets}, rather than with single probability measures.
Let~$\simplex_\states$ denote the set of all probability mass functions on~$\states$; it is an~$(\abs{\states}-1)$-dimensional unit simplex in the $\abs{\states}$-dimensional linear space $\reals^\states$. Then, for instance, $\set{m\in\simplex_\states}{(\forall x\in\states) m(x)\leq\frac{1}{2} }$ is a credal set, but $\set{m\in\simplex_\states}{(\exists x\in\states) m(x)\geq\frac{1}{2} }$ is not, since in general it fails to be convex.
\par
There is a growing body of literature on this interesting and fairly new area of \newconcept{imprecise probabilities}, starting with the publication of \citeauthor{walley1991}'s \citep{walley1991} seminal work.
We refer to the literature \Citep{walley1991,walley1996,weichselberger2001,cooman2005c} for more details and discussion.
\par
Let us recall a number of results for credal sets, important for the developments in this paper. Proofs can be found in \citeauthor{walley1991}'s book \citep[Chapters~2 and~3]{walley1991}.
Specifying a closed convex set~$\mass$ of probability mass functions~$p$ on a finite set~$\pties$ is equivalent to specifying its \newconcept{lower} and \newconcept{upper expectation} (functionals) $\lex_\mass\colon\allgambles(\pties)\to\reals$ and $\uex_\mass\colon\allgambles(\pties)\to\reals$, defined for all $g\in\allgambles(\pties)$ by
\begin{equation}\label{eq:mass-to-luex}
\lex_\mass(g)\eqdef\min\set{\ex_p(g)}{p\in\mass}
\quad\text{ and }\quad
\uex_\mass(g)\eqdef\max\set{\ex_p(g)}{p\in\mass},
\end{equation}
where $\ex_p(g)=\sum_{y\in\pties}g(y)p(y)$ is the expectation of $g$ associated with the probability mass function $p$. In a sensitivity analysis, such functionals are quite useful, because they give tight lower and upper bounds on the expectation of any real-valued map. Since the functionals $\lex_\mass$ and $\uex_\mass$ are \newconcept{conjugate} in the sense that $\lex_\mass(g)=-\uex_\mass(-g)$ for all real-valued maps $g$ on $\pties$, one is completely determined if the other is known. Below, we concentrate on upper expectations. Any upper expectation $\uex=\uex_\mass$ associated with some credal set $\mass$ satisfies the following properties \citep[see, e.g.][Section~2.6.1]{walley1991}:
{\renewcommand\theenumi{$\uex$\ensuremath{\arabic{enumi}}}
\begin{enumerate}
\item $\min g\leq\uex(g)\leq\max g$ for all $g$ in $\allgambles(\pties)$ (boundedness);\label{eq:uex1}
\item $\uex(g_1+g_2)\leq\uex(g_1)+\uex(g_2)$ for all $g_1$ and $g_2$ in $\allgambles(\pties)$ (subadditivity);\label{eq:uex2}
\item $\uex(\lambda g)=\lambda\uex(g)$ for all real $\lambda\geq0$ and all $g$ in $\allgambles(\pties)$ (non-negative homogeneity);\label{eq:uex3}
\item $\uex(g+\mu\cg)=\uex(g)+\mu$ for all real $\mu$ and all $g$ in $\allgambles(\pties)$ (constant additivity);\label{eq:uex4}
\item if $g_1\leq g_2$ then $\uex(g_1)\leq\uex(g_2)$ for all $g_1$ and $g_2$ in $\allgambles(\pties)$ (monotonicity);\label{eq:uex5}
\item if $g_n\to g$ point-wise then $\uex(g_n)\to\uex(g)$ for all sequences $g_n$ in $\allgambles(\pties)$ (continuity);\label{eq:uex6}
\item $\uex(g)\geq-\uex(-g)=\lex(g)$ for all $g$ in $\allgambles(\pties)$ (upper--lower consistency).\label{eq:uex7}
\end{enumerate}}
\noindent
Conversely, for any real functional~$\uex$ that is defined on $\allgambles(\pties)$ and that satisfies the conditions~\eqref{eq:uex1}--\eqref{eq:uex3}, there is a unique credal set $\mass\subseteq\simplex_\pties$ such that $\uex$ coincides with the upper expectation $\uex_\mass$, namely $\mass=\set{p\in\simplex_\pties}{(\forall f\in\allgambles(\pties))\ex_p(f)\leq\uex(f)}$. Such an $\uex$ therefore automatically also satisfies conditions~\eqref{eq:uex4}--\eqref{eq:uex7}. It thus makes sense to call \emph{upper expectation} any real functional $\uex$ on $\allgambles(\pties)$ that satisfies properties~\eqref{eq:uex1}--\eqref{eq:uex3}.
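When a credal set is a polytope, the optima in Eq.~\eqref{eq:mass-to-luex} are attained at extreme points, since the expectation is linear in $p$. A minimal sketch (the credal set below, all mass functions on two states with $p(a)$ between $0.3$ and $0.7$, is invented for illustration):

```python
def upper_expectation(g, vertices):
    """max_p E_p(g) over the extreme points of a polytope credal set."""
    return max(sum(gy * py for gy, py in zip(g, p)) for p in vertices)

def lower_expectation(g, vertices):
    """Conjugacy: lower E(g) = -upper E(-g)."""
    return -upper_expectation([-gy for gy in g], vertices)

M = [[0.3, 0.7], [0.7, 0.3]]    # extreme points of the credal set (invented)
g = [1.0, 0.0]                  # indicator of state a
print(upper_expectation(g, M))  # 0.7, the upper probability of a
print(lower_expectation(g, M))  # 0.3, the lower probability of a
```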
\par
What is the upshot of all this for the Markov chain problem we are considering here? First of all, in the initial situation~$\init$, corresponding to time $n=0$, rather than a single initial probability mass function~$m_1$, we now have a local credal set $\margmass_1$ of candidate mass functions~$m_1$ for the state $X(1)$ that the system will be in at time $1$.
We denote by $\uex_1$ the upper expectation associated with $\margmass_1$:
\begin{equation}
\uex_1(h)
\eqdef\max\Bigl\{\smashoperator[r]{\sum_{x\in\states}}h(x)m_1(x)
\colon m_1\in\margmass_1\Bigr\}
\quad\text{ for all $h\in\allgambles(\states)$.}
\end{equation}
Also, in any situation $\vtuple{x}{n}\in\states^n$ corresponding to time $n=1,2,\dots,N-1$, instead of a single transition mass function $q_n(\cdot\vert x_n)$, we now have a local credal set $\condmass_n(\cdot\vert x_n)$ of candidate conditional mass functions $q_n(\cdot\vert x_n)$ for the state $X(n+1)$ that the system will be in at time $n+1$. We denote by $\uex_n(\cdot\vert x_n)$ the upper expectation associated with $\condmass_n(\cdot\vert x_n)$, i.e.:
\begin{equation}\label{eq:local-upper}
\uex_n(h\vert x_n)
\eqdef\max\Bigl\{\smashoperator[r]{\sum_{x\in\states}}
h(x)q(x)\colon q\in\condmass_n(\cdot\vert x_n) \Bigr\}
\quad\text{ for all $h\in\allgambles(\states)$}.
\end{equation}
We call the resulting model an \newconcept{imprecise Markov chain}. Figure~\ref{fig:markov-imprecise} gives an example of a probability tree for an imprecise Markov chain.
It is an imprecise-probability tree where the local conditional models satisfy the \emph{Markov Condition}:
\begin{equation}\label{eq:imprecise-markov-condition}
\condmass(\cdot\vert\ntuple{x}{n})
=\condmass(\cdot\vert x_n)
\quad\text{ for all $\ntuple{x}{n}\in\states^n$ and $n=1,2,\dots,N-1$}.
\end{equation}
A classical, or \newconcept{precise}, Markov chain is an imprecise one with credal sets that are singletons.
\par
\begin{figure}[ht]
\centering\footnotesize
\begin{tikzpicture}
\tikzstyle{level 1}=[sibling distance=20em]
\tikzstyle{level 2}=[sibling distance=10em]
\tikzstyle{level 3}=[sibling distance=5em]
\node[root] (root) {} [grow=down,level distance=8ex]
child {node[nonterminal] (a) {$a\vphantom{)}$}
child {node[nonterminal] (aa) {$(a,a)$}
child {node[nonterminal] (aaa) {$(a,a,a)$}}
child {node[nonterminal] (aab) {$(a,a,b)$}}
}
child {node[nonterminal] (ab) {$(a,b)$}
child {node[nonterminal] (aba) {$(a,b,a)$}}
child {node[nonterminal] (abb) {$(a,b,b)$}}
}
}
child {node[nonterminal] (b) {$b\vphantom{)}$}
child {node[nonterminal] (ba) {$(b,a)$}
child {node[nonterminal] (baa) {$(b,a,a)$}}
child {node[nonterminal] (bab) {$(b,a,b)$}}
}
child {node[nonterminal] (bb) {$(b,b)$}
child {node[nonterminal] (bba) {$(b,b,a)$}}
child {node[nonterminal] (bbb) {$(b,b,b)$}}
}
};
\draw[local,thick] (root) +(190:1.5em) arc (190:350:1.5em);
\draw[local,thick] (b) +(210:2em) arc (210:330:2em);
\draw[local,thick] (a) +(210:2em) arc (210:330:2em);
\draw[local,thick] (bb) +(230:2.25em) arc (230:310:2.25em);
\draw[local,thick] (ba) +(230:2.25em) arc (230:310:2.25em);
\draw[local,thick] (ab) +(230:2.25em) arc (230:310:2.25em);
\draw[local,thick] (aa) +(230:2.25em) arc (230:310:2.25em);
\path (root) +(275:2.35em) node[local] {$\margmass_1$};
\path (a) +(270:2.95em) node[local] {$\condmass_1(\cdot\vert a)$};
\path (b) +(270:2.95em) node[local] {$\condmass_1(\cdot\vert b)$};
\path (aa) +(300:2.8em) node[local,above right]
{$\condmass_2(\cdot\vert a)$};
\path (bb) +(300:2.8em) node[local,above right]
{$\condmass_2(\cdot\vert b)$};
\path (ba) +(300:2.8em) node[local,above right]
{$\condmass_2(\cdot\vert a)$};
\path (ab) +(300:2.8em) node[local,above right]
{$\condmass_2(\cdot\vert b)$};
\end{tikzpicture}
\caption{The tree for the time evolution of an imprecise Markov chain that can be in two states, $a$ and $b$, and can change state at each time instant $n=1,2$.}
\label{fig:markov-imprecise}
\end{figure}
\par
How, then, can a sensitivity analysis be performed for such an imprecise Markov chain? We choose, in each non-terminal situation $\vtuple{x}{k}$ of the above-mentioned event tree, a local transition probability mass $q(\cdot\vert\ntuple{x}{k})$ in the set of possible candidates $\condmass_k(\cdot\vert x_k)$.\footnote{These local transition probability masses themselves depend on the situation $\vtuple{x}{k}$ they are attached to, but the sets $\condmass_k(\cdot\vert x_k)$ they are chosen from only depend on the last state~$x_k$, as the Markov Condition~\eqref{eq:imprecise-markov-condition} tells us.} For $k=0$, we get the initial situation~$\init$, where we choose some element~$m_1$ in the set of possible candidates~$\margmass_1$.
By making a choice of local model for each non-terminal situation in the event tree, we obtain what we call a \newconcept{compatible probability tree}, for which we may calculate all (conditional) expectations and probability mass functions:
\begin{align}
\ex(f\vert\ntuple{x}{n})
&=\smashoperator{\sum_{\vtuple[n+1]{x}{N}\in\states^{N-n}}}f(\ntuple{x}{n},\ntuple[n+1]{x}{N})
\smashoperator{\prod_{k=n}^{N-1}}q(x_{k+1}\vert\ntuple{x}{k})
\label{eq:probability-tree-conditional},\\
\ex(f)
&=\smashoperator{\sum_{\vtuple{x}{N}\in\states^{N}}}f\ftuple{x}{N}
m_1(x_1)\smashoperator{\prod_{k=1}^{N-1}}q(x_{k+1}\vert\ntuple{x}{k}),
\label{eq:probability-tree-joint}
\end{align}
for ${n=1,\dots,N-1}$, and for all real-valued maps~$f$ on~$\states^N$.
Observe that the probability trees compatible with an imprecise Markov chain are no longer necessarily (precise) Markov chains themselves: the chosen transition mass functions $q(\cdot\vert\ntuple{x}{k})$ may depend on the entire history $\vtuple{x}{k}$, and not just on the last state~$x_k$.
It is still possible to calculate the $\ex(f\vert\ntuple{x}{n})$ and $\ex(f)$ in Eqs.~\eqref{eq:probability-tree-conditional} and~\eqref{eq:probability-tree-joint} using backwards recursion \citep[Chapter~3]{shafer1996a}, but the formulae for doing so will be more complicated than the ones for precise Markov chains given by Eqs.~\eqref{eq:backpropagation-precise-1} and~\eqref{eq:backpropagation-precise-2}.
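The exhaustive approach can be sketched for a toy horizon $N=2$: enumerate one compatible probability tree per joint choice of the local extreme points, and maximise the resulting expectations. All numbers below are invented; the point is that the number of compatible trees grows exponentially with the horizon:

```python
from itertools import product

S = [0, 1]
M1 = [[0.3, 0.7], [0.7, 0.3]]       # extreme points of the initial credal set
Q = [[[0.4, 0.6], [0.6, 0.4]],      # extreme points of Q_1(.|state 0)
     [[0.2, 0.8], [0.5, 0.5]]]      # extreme points of Q_1(.|state 1)

def f(x1, x2):                      # a real-valued map on S^2 (invented)
    return float(x1 == x2)

# One compatible probability tree per joint choice of m_1 and the q(.|x_1);
# q[x1] below is the mass function chosen in situation x_1.
upper = max(
    sum(m1[x1] * q[x1][x2] * f(x1, x2) for x1 in S for x2 in S)
    for m1 in M1
    for q in product(Q[0], Q[1]))
print(upper)  # approximately 0.74
```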
\par
If we repeat this for every other choice of the $m_1$ in $\margmass_1$ and the $q(\cdot\vert\ntuple{x}{k})$ in $\condmass_k(\cdot\vert x_k)$, we end up with an infinity of compatible probability trees,\footnote{Except when all the credal sets are singletons, of course.} for which the associated (conditional) expectations and probability mass functions turn out to constitute closed convex sets.
We denote their corresponding upper expectation functionals on $\allgambles(\states^N)$ by $\uex(\cdot\vert\ntuple{x}{n})$ and $\uex$. These upper expectations, and the conjugate lower expectations, are the final aim of our sensitivity analysis.
\par
The procedure we have just described is computationally very complex. When the closed convex sets $\margmass_1$ and $\condmass_k(\cdot\vert x)$ each have a finite number of extreme points (are polytopes), we can limit ourselves to working with these sets of extreme points, rather than with the infinite sets themselves. But even then, the computational complexity of this approach will generally be exponential in the number of time steps.
\par
However, we will see in Section~\ref{sec:sensitivity-analysis} that the upper expectations $\uex$ and $\uex(\cdot\vert\ntuple{x}{n})$ associated with the closed convex sets of (conditional) probability mass functions for the compatible probability trees of an imprecise Markov chain can be calculated in the same way as the expectations $\ex$ and $\ex(\cdot\vert\ntuple{x}{n})$ in a precise one: using counterparts of the backwards recursion formulae~{\eqref{eq:backpropagation-precise-1}--\eqref{eq:backpropagation-precise-3}}. Because of this, making inferences about the mass function of the state at time $n$, i.e., finding the upper envelope $\uex_n$ of the $\ex_n$ given in Eq.~\eqref{eq:backpropagation-precise-3} \emph{now has a complexity that is linear, rather than exponential, in the number of time steps $n$.} This is our first contribution.
\par
Our second contribution in this paper is a Perron--Frobenius Theorem for a special class of so-called regularly absorbing stationary imprecise Markov chains: in Section~\ref{sec:convergence} we prove a generalisation of Theorem~\ref{theo:perron-frobenius-classical}, which tells us that under fairly weak conditions, the upper expectation operators~$\uex_n$ converge to limits that do not depend on the initial upper expectation operators~$\uex_1$. Our result also extends a number of other related convergence theorems for imprecise Markov chains in the literature \citep{hartfiel1991,hartfiel1994,hartfiel1998,skulj2007}.
\section{Sensitivity analysis of imprecise Markov chains}\label{sec:sensitivity-analysis}
We can now take our most important step: deriving the backwards recursion formulae for the conditional and joint upper expectations in an imprecise Markov chain.
We first define \newconcept{upper transition operators} $\utrans_n$ and $\uttrans_n$.
The operator $\utrans_n\colon\allgambles(\states)\to\allgambles(\states)$ is defined by
\begin{equation}\label{eq:trans-upper}
\utrans_nh(x_n)\eqdef\uex_n(h\vert x_n)
\end{equation}
for all real-valued maps $h$ on $\states$, and all $x_n$ in $\states$. In other words, $\utrans_nh$ is the real-valued map on $\states$, whose value $\utrans_nh(x_n)$ in $x_n\in\states$ is the conditional upper expectation of the random variable $h(X(n+1))$, given that the system is in state $x_n$ at time $n$. More generally, we also consider the maps $\uttrans_n$ from $\allgambles(\states^{n+1})$ to $\allgambles(\states^n)$, defined by
\begin{equation}\label{eq:trans-upper-general}
\uttrans_nf\ftuple{x}{n} \eqdef \bigl(\utrans_nf(\ntuple{x}{n},\cdot)\bigr)(x_n) = \uex_n(f(\ntuple{x}{n},\cdot)\vert x_n)
\end{equation}
for all $\vtuple{x}{n}$ in $\states^n$ and all real-valued maps $f$ on $\states^{n+1}$. Of course, we can also consider lower expectations and lower transition operators, which are related to the upper expectations and upper transition operators by conjugacy. As is the case for upper expectations, it is possible to introduce the notion of an upper transition operator directly, by basing it on a number of defining properties, rather than by referring to an underlying imprecise Markov chain. We refer to the Appendix for more details.
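For local credal sets given by their extreme points, $\utrans_n$ can be sketched as follows (a toy example; both credal sets are invented). Note that, unlike $\trans_n$, this operator is nonlinear because of the maximum:

```python
def apply_upper_T(credal_rows, h):
    """(upper-T h)(x) = max over q in Q(.|x) of sum_y h(y) q(y);
    credal_rows[x] lists the extreme points of the local credal set Q(.|x)."""
    return [max(sum(qy * hy for qy, hy in zip(q, h)) for q in row)
            for row in credal_rows]

rows = [[[0.4, 0.6], [0.6, 0.4]],   # extreme points of Q(.|a)  (invented)
        [[0.4, 0.6], [0.6, 0.4]]]   # extreme points of Q(.|b)  (invented)
h = [1.0, 0.0]                      # indicator of state a
print(apply_upper_T(rows, h))       # [0.6, 0.6]: upper transition prob. of a
```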
\par
The upper expectations $\uex(\cdot\vert\ntuple{x}{n})$ and $\uex$ on $\allgambles(\states^N)$ can be calculated very easily by backwards recursion; cf.~Eqs.~\eqref{eq:backpropagation-precise-1} and~\eqref{eq:backpropagation-precise-2}.
\begin{theorem}[Concatenation Formula]\label{theo:concatenation}
For any $\vtuple{x}{n}$ in $\states^n$, $n=1,\dots,N-1$, and for any real-valued map~$f$ on~$\states^N$:
\begin{align}
\uex(f\vert\ntuple{x}{n})
&=\uttrans_n\uttrans_{n+1}\dots\uttrans_{N-1}f\ftuple{x}{n}
\label{eq:backpropagation-upper-1}\\
\uex(f)
&=\uex_1(\uttrans_1\uttrans_2\dots\uttrans_{N-1}f).
\label{eq:backpropagation-upper-2}
\end{align}
\end{theorem}
Call, for any non-empty subset $I$ of $\{1,\dots,N\}$, a real-valued map $f$ on $\states^N$ \newconcept{\mbox{$I$-measurable}} if $f\ftuple{x}{N}=f\ftuple{z}{N}$ for all $\vtuple{x}{N}$ and $\vtuple{z}{N}$ in $\states^N$ such that $x_k=z_k$ for all $k\in I$.
In other words, an $I$-measurable~$f$ only depends on the states $X(k)$ at times $k\in I$.
As an example, an \mbox{$\{n\}$-measurable} map~$h$ only depends on the state $X(n)$ at time~$n$, and we identify it with a map on~$\states$ (but remember that it acts on states at time~$n$).
The following proposition tells us that all conditional upper expectations satisfy a Markov Condition (cfr.~\eqref{eq:markov-condition-precise}).
\begin{proposition}[Markov Condition]\label{prop:markov}
Consider an imprecise Markov chain with finite state set $\states$ and time horizon $N$.
Fix~$n\in\{1,\dots,N-1\}$.
Let~$\vtuple{x}{n-1}$ and~$\vtuple{z}{n-1}$ be arbitrary elements of~$\states^{n-1}$, and let $x_n\in\states$.
Let~$f$ be any $\{n,n+1,\dots,N\}$-mea\-sur\-able real-valued map on $\states^N$.
Then $\uex(f\vert\ntuple{x}{n-1},x_n)=\uex(f\vert\ntuple{z}{n-1},x_n)$, so we may write
$\uex(f\vert\ntuple{x}{n-1},x_n)=\uex_{\vert n}(f\vert x_n)$.
\end{proposition}
\noindent
The index `$\vert n$' is intended to make clear that we are considering an expectation conditional on the state $X(n)$ at time $n$.
\par
If we apply the joint upper expectation~$\uex$ to maps~$h$ that only depend on the state $X(n)$ at time~$n$, we get the \newconcept{marginal upper expectation} $\uex_n(h)\eqdef\uex(h)$, and $\uex_n$ is a model for the uncertainty about the state $X(n)$ at time~$n$. More generally, taking into account Proposition~\ref{prop:markov}, we use the notation $\uex_{n\vert\ell}(h\vert x_\ell)\eqdef\uex_{\vert\ell}(h\vert x_\ell)$ for the upper expectation of $h(X(n))$, conditional on $X(\ell)=x_\ell$ with $1\leq\ell<n$. In the notation established in Eq.~\eqref{eq:local-upper}, $\uex_{n+1\vert n}(h\vert x_n)=\uex_n(h\vert x_n)=\utrans_nh(x_n)$. Such expectations can be found using simpler recursion formulae than Eqs.~\eqref{eq:backpropagation-upper-1} and~\eqref{eq:backpropagation-upper-2}, as they are based on the simpler upper transition operators $\utrans_k$.
\begin{corollary}\label{cor:marginal-concatenation}
For any real-valued map~$h$ on~$\states$, and for any $1\leq\ell<n\leq N$ and all~$x_\ell$ in~$\states$:
\begin{equation}\label{eq:backpropagation-upper-3}
\uex_{n\vert\ell}(h\vert x_\ell)
=\utrans_\ell\utrans_{\ell+1}\dots\utrans_{n-1}h(x_\ell)
\quad\text{ and }\quad
\uex_n(h)
=\uex_1(\utrans_1\utrans_2\dots\utrans_{n-1}h).
\end{equation}
\end{corollary}
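The backwards recursion of Corollary~\ref{cor:marginal-concatenation} is straightforward to carry out numerically. The following Python sketch assumes a hypothetical two-state chain whose local models are finitely generated credal sets, so that each $\utrans_kh(x)$ is a maximum over finitely many candidate transition rows; the state set, rows and map $h$ are illustrative only, not taken from the text.

```python
# Backwards recursion for upper expectations: apply the upper transition
# operators from right to left to the real-valued map h.

def utrans(rows, h):
    """One-step upper transition operator for a finitely generated credal
    set: rows[x] lists the candidate transition rows for state x."""
    return [max(sum(q[y] * h[y] for y in range(len(h))) for q in rows[x])
            for x in range(len(rows))]

def upper_expectation(rows_per_time, h):
    """utrans_l utrans_{l+1} ... utrans_{n-1} h, computed backwards;
    the entry at index x is the conditional upper expectation given X(l) = x."""
    for rows in reversed(rows_per_time):
        h = utrans(rows, h)
    return h

# Hypothetical two-state chain, two candidate transition rows per state.
rows = [[(0.6, 0.4), (0.8, 0.2)],
        [(0.3, 0.7), (0.5, 0.5)]]
h = [1.0, 0.0]  # indicator of state 0
print(upper_expectation([rows, rows], h))  # approximately [0.74, 0.65]
```

For this toy chain the two-step upper probability of ending in state $0$ is about $0.74$ when starting from state $0$, and about $0.65$ when starting from state $1$.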
\noindent
This offers a reason for formulating our theory in terms of real-valued maps rather than events: suppose we want to calculate the upper probability $\uex_n(A)$ that the state $X(n)$ at time~$n$ belongs to the set~$A$.
According to Eq.~\eqref{eq:backpropagation-upper-3}, $\uex_n(A)=\uex_1(\utrans_1\dots\utrans_{n-1}\ind{A})$, and even if~$\utrans_{n-1}\ind{A}$ can still be calculated using upper probabilities only, it will generally assume values other than~$0$ and~$1$, and therefore will generally not be the indicator of some event.
Already after one step, i.e., in order to calculate $\utrans_{n-2}\utrans_{n-1}\ind{A}$, we need to leave the ambit of events, and turn to the more general real-valued maps, even if we only want to calculate upper \emph{probabilities} after~$n$ steps.
For joint upper and lower probability mass functions, however, we can remain within the ambit of events:
\begin{proposition}[Chapman--Kolmogorov Equations]\label{prop:chapman-kolmogorov}
For an imprecise Markov chain, we have for all\/ $1\leq n<m\leq N$ and all\/ $(x_n,\ntuple[n+1]{x}{m})\in\states^{m-n+1}$ that
\begin{equation}\label{eq:CKu}
\uex_{\vert n}(\{\vtuple[n+1]{x}{m}\}\vert x_n)
=\smashoperator{\prod_{k=n}^{m-1}}\utrans_k\ind{\{x_{k+1}\}}(x_k),
\end{equation}
and for all\/ $1\leq m\leq N$ and all $\vtuple{x}{m}\in\states^{m}$ that
\begin{equation}\label{eq:jCKu}
\uex(\{\vtuple{x}{m}\})
=\uex_1(\{x_1\})\smashoperator{\prod_{k=1}^{m-1}}
\utrans_k\ind{\{x_{k+1}\}}(x_k).
\end{equation}
There are analogous expressions for the lower expectations.
\end{proposition}
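The Chapman--Kolmogorov product in Proposition~\ref{prop:chapman-kolmogorov} can be evaluated directly, since each factor $\utrans_k\ind{\{x_{k+1}\}}(x_k)$ is a one-step upper transition probability. A minimal Python sketch follows; for brevity it uses the same hypothetical finitely generated credal sets at every time step.

```python
# Upper probability of a path (Chapman--Kolmogorov): the product of the
# one-step upper transition probabilities along the path.

def upper_step_prob(rows, x, y):
    """utrans 1_{{y}}(x): maximal probability of the transition x -> y over
    the candidate transition rows for state x."""
    return max(q[y] for q in rows[x])

def upper_path_prob(rows, path):
    """Product of one-step upper transition probabilities, using the same
    credal rows at every time step (a stationarity simplification)."""
    p = 1.0
    for x, y in zip(path, path[1:]):
        p *= upper_step_prob(rows, x, y)
    return p

rows = [[(0.6, 0.4), (0.8, 0.2)],   # hypothetical credal set for state 0
        [(0.3, 0.7), (0.5, 0.5)]]   # hypothetical credal set for state 1
print(upper_path_prob(rows, [0, 1, 1]))  # 0.4 * 0.7 = 0.28
```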
\section{Accessibility relations}\label{sec:accessibility}
From now on, and for the rest of the paper, we mainly consider \emph{stationary imprecise Markov chains with an infinite time horizon}.
This means that for each time $n\in\naturals$, we consider the same upper transition operator $\utrans_n=\utrans$.
The classification of the states of such a stationary (im)precise Markov chain can be fruitfully started by introducing a so-called \newconcept{accessibility relation} $\access[\cdot]{\cdot}{\cdot}$: let~$x$ and~$y$ be any two states in~$\states$ and let~$n$ be a number of steps in~$\naturals_0\eqdef\naturals\cup\{0\}$, then $\access[n]{x}{y}$ expresses that~$y$ is accessible from~$x$ in~$n$ steps.
To be an accessibility relation, a generic ternary relation $\access[\cdot]{\cdot}{\cdot}$ has to satisfy the defining properties:
\begin{align}
(\forall x,y\in\states) &\access[0]{x}{y}\asa x=y \label{eq:basic-communication-1},\\
(\forall x,y,z\in\states) (\forall m,n\in\naturals_0) &\text{$\access[n]{x}{y}$ and $\access[m]{y}{z}$}\then\access[n+m]{x}{z} \label{eq:basic-communication-2},\\
(\forall x\in\states) (\forall n\in\naturals) (\exists y\in\states) &\access[n]{x}{y}. \label{eq:basic-communication-3}
\end{align}
\par
An accessibility relation is classically derived from the transition matrix of a stationary Markov chain; in Section~\ref{sec:accessibility-imprecise} we will associate such a relation with a stationary imprecise Markov chain.
But for \emph{any} (abstract) accessibility relation satisfying the conditions~\eqref{eq:basic-communication-1}--\eqref{eq:basic-communication-3}, we can draw all the following conclusions, no matter what transition matrix or operator it was derived from, or whether it comes about in any other way; \citetopt[Section~1.4]{kemeny1976} give a detailed justification.
In what follows, we use the terminology introduced by \citeauthor{kemeny1976}, but we want to remind the reader that the terms we use may also have various other meanings in different parts of the literature.
\subsection{Abstract accessibility relations}\label{sec:accessibility-abstract}
Accessibility relations give rise to many interesting concepts, which we discuss below.
We refer to Figure~\ref{fig:communication} for a graphical representation.
\par
\begin{figure}[htb]
\centering
\begin{tikzpicture}[->,>=stealth,shorten >=2pt,shorten <=2pt,node distance=2em]
\node[statesbackground,label={17:\small$\states$}] {
\begin{tikzpicture}
\node[closedbackground,label={40:\small$C_1$}] (C1) {
\tikzstyle{commun}=[communbackground,minimum height=5ex,minimum width=2.8em]
\begin{tikzpicture}[node distance=4ex and 0em]
\node[commun,label={4:\small$D_3$}] (D3) {};
\node[below left=of D3,commun,label={4:\small$D_1$}] (D1) {};
\node[below right=of D3,commun,label={4:\small$D_2$}] (D2) {};
\node[above left=of D3,commun,label={4:\small$D_4$}] (D4) {};
\node[above right=of D3,commun,label={4:\small$D_5$}] (D5) {};
\draw (D1) -- (D3);
\draw (D2) -- (D3);
\draw (D3) -- (D4);
\draw (D3) -- (D5);
\end{tikzpicture}
};
\node[right=of C1,closedbackground,label={40:\small$C_2$}] (C2) {
\tikzstyle{commun}=[communbackground,minimum height=10ex,minimum width=2.8em]
\begin{tikzpicture}[node distance=4ex and 0em]
\node[commun,label={30:\small$D_8$}] (D8) {};
\node[below left=of D8,commun,label={30:\small$D_6$}] (D6) {};
\node[below right=of D8,commun,label={30:\small$D_7$}] (D7) {};
\draw (D6) -- (D8);
\draw (D7) -- (D8);
\end{tikzpicture}
};
\node[right=of C2,closedbackground,label={59:\small$C_3$}] (C3) {
\tikzstyle{commun}=[communbackground,minimum height=24ex,minimum width=2.8em]
\begin{tikzpicture}
\node[commun,label={70:\small$D_9$}] (D9) {};
\end{tikzpicture}
};
\end{tikzpicture}
};
\end{tikzpicture}
\caption{
Three increasingly fine partitions of the state set $\states$ for a particular stationary (im)precise Markov chain, or more generally, for an accessibility relation $\access[\cdot]{\cdot}{\cdot}$.
No transition between states of the classes $C_1$, $C_2$, and $C_3$ is possible, and these classes can be seen as separate (im)precise Markov chains.
The equivalence classes $D_k$ for the communication relation are partially ordered by the relation $\access{}{}$, whose (Hasse) diagram is represented by the upward arrows.
Maximal classes are $D_4$, $D_5$, $D_8$, and $D_9$, the other classes are transient.
If $D_4$, $D_5$, $D_8$, and $D_9$ are aperiodic, the accessibility relation restricted to $C_1$, $C_2$, and $C_3$ is, respectively, maximal class regular, top class regular, and regular.}
\label{fig:communication}
\end{figure}
\par
Consider any two states $x$ and $y$ in $\states$.
Then $y$ is \newconcept{accessible from $x$}, which we denote as $\access{x}{y}$, if there is some $n\in\naturals_0$ such that $\access[n]{x}{y}$.
If $x$ and $y$ are accessible from one another, then we say that $x$ and $y$ \newconcept{communicate}, which we denote as $\commun{x}{y}$.
\par
It follows from Eqs.~\eqref{eq:basic-communication-1} and~\eqref{eq:basic-communication-2} that the binary relation $\access{}{}$ on $\states$ is a preorder, i.e., is reflexive and transitive. The binary relation $\commun{}{}$ on~$\states$ is the associated equivalence relation. This \newconcept{communication relation} $\commun{}{}$ partitions the state set~$\states$ into equivalence classes~$D$ of states that are accessible from one another, called \newconcept{communication classes}. The preorder $\access{}{}$ induces a partial order on this partition, also denoted by $\access{}{}$.
\par
Undominated or \newconcept{maximal} states with respect to the preorder $\access{}{}$ are states~$x$ such that $\access{x}{y}\then\access{y}{x}$ for any state~$y$ in~$\states$.
This means that a maximal state has access only to other maximal states in the same communication class, and to no other states.
Collections of maximal states, such as the communication classes they belong to, are also called \newconcept{maximal}.
The other states and collections of them, such as the communication classes they belong to, are called \newconcept{transient}.
If all maximal states communicate, or in other words if there is a unique maximal communication class, this class is called the \newconcept{top} class.
It is made up of those states that are accessible from any state.
\par
Consider, for any $x$ and $y$ in $\states$, the set
\begin{equation}
\nsteps{x}{y}\eqdef\inlineset{n\in\naturals}{\access[n]{x}{y}},
\end{equation}
i.e., those numbers of steps after which $y$ is accessible from $x$. We call the \newconcept{period} $\period{x}$ of a state $x$ the greatest common divisor of the set $\nsteps{x}{x}$, i.e., $\period{x}\eqdef\gcd\inlineset{n\in\naturals}{\access[n]{x}{x}}$. Because, by Eq.~\eqref{eq:basic-communication-2}, $\nsteps{x}{x}$ is closed under addition, we can rely on a basic number-theoretic result (see, e.g., \citetopt[Theorem~1.4.1]{kemeny1976}) which tells us that $\nsteps{x}{x}$ is, up to perhaps a finite number of initial elements, equal to the set of all multiples of $\period{x}$.
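The period of a state is easy to compute from an abstract accessibility relation. The sketch below encodes the one-step relation as a hypothetical boolean adjacency matrix, obtains $n$-step accessibility by propagating a frontier, and takes the greatest common divisor of the return times up to a finite horizon.

```python
from math import gcd

def reachable_in(adj, n, x, y):
    """True iff y is accessible from x in exactly n steps, for the one-step
    boolean accessibility relation adj."""
    frontier = {x}
    for _ in range(n):
        frontier = {z for u in frontier for z in range(len(adj)) if adj[u][z]}
    return y in frontier

def period(adj, x, horizon=50):
    """gcd of the return times of x, computed up to a finite horizon."""
    d = 0
    for n in range(1, horizon + 1):
        if reachable_in(adj, n, x, x):
            d = gcd(d, n)
    return d

# A 3-cycle: every state returns to itself only after multiples of 3 steps.
adj = [[False, True, False],
       [False, False, True],
       [True, False, False]]
print(period(adj, 0))  # 3
```

Note that truncating at a finite horizon is harmless here: by the number-theoretic result just cited, the return-time set eventually contains all sufficiently large multiples of the period.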
\par
Now consider an equivalence class~$D$ of communicating states, and any two states~$x$ and~$y$ in that class. Then it is not difficult to show that they have the same period: $\commun{x}{y}\then\period{x}=\period{y}$. We denote by $\period{D}$ the common period of all elements of the equivalence class $D$.
\begin{proposition}\label{prop:class-cycle}
Consider arbitrary $x$ and $y$ in some maximal communication class $D$. Then there is some $0\leq\steps{x}{y}<\period{D}$ such that $n\in\nsteps{x}{y}\then n\equiv\steps{x}{y}\pmod{\period{D}}$, i.e., $n$ and $\steps{x}{y}$ are equal up to some multiple of $\period{D}$. Moreover,
\begin{equation}\label{eq:nxy}
(\exists n\in\naturals)
(\forall k\geq n)\,
\steps{x}{y}+k\period{D}\in\nsteps{x}{y}.
\end{equation}
\end{proposition}
\noindent
For any~$x$, $y$~and~$z$ in this equivalence class~$D$, $\steps{x}{y}+\steps{y}{z}\equiv\steps{x}{z}\pmod{\period{D}}$, and therefore $\steps{y}{z}=0$ if and only if $\steps{x}{y}=\steps{x}{z}$. This implies that `$\steps{y}{z}=0$' determines an equivalence relation on this equivalence class~$D$, which further partitions it into~$\period{D}$ subsets, called \newconcept{cyclic classes}. In such a cyclic class, all states~$y$ give the same value to~$\steps{x}{y}$, for any given~$x$ in~$D$. Within $D$, the system moves from cyclic class to cyclic class, in a definite ordered cycle of length~$\period{D}$. If~$D$ is transient, then in some cyclic classes it is possible that, rather than moving to the next cyclic class, the system moves to (a state in) another equivalence class~$D'$ for the communication relation that is a successor to~$D$ for the partial order $\access{}{}$.
\par
If $\period{D}=1$, or in other words if $\steps{x}{y}=0$ for all $x,y\in D$, then there is only one cyclic class in $D$, and we call the communication class $D$, and all its states, \newconcept{aperiodic}. If $D$ is moreover maximal, then $D$ is called \newconcept{regular}. The following general characterisation of regularity is easily derived from Proposition~\ref{prop:class-cycle}; see also \citeauthor{kemeny1976}'s arguments \citep[Chapters~1 and~4]{kemeny1976}.
\begin{proposition}\label{prop:regularity-characterisation}
A communication class $D\subseteq\states$ is regular under the accessibility relation $\access[\cdot]{\cdot}{\cdot}$ if and only if
\begin{equation}\label{eq:regular}
(\exists n\in\naturals)
(\forall k\geq n)
(\forall x,y\in D)
\access[k]{x}{y}.
\end{equation}
\end{proposition}
An interesting special case obtains when there is only one equivalence class for the communication relation (namely~$\states$), so $\states$ is maximal, and there is only one cyclic class (namely~$\states$), meaning that all states are aperiodic. In that case, the accessibility relation $\access[\cdot]{\cdot}{\cdot}$ is called \newconcept{regular} as well.
\noindent
If all maximal communication classes are regular (aperiodic), the accessibility relation is called \newconcept{maximal class regular}. If there is only one maximal communication class, and if this top class is moreover regular (aperiodic), then the accessibility relation is called \newconcept{top class regular}. Top class regularity has the following simple alternative characterisation.
\begin{proposition}\label{prop:topclassregular}
An accessibility relation $\access[\cdot]{\cdot}{\cdot}$ is top class regular if and only if the corresponding set $\maxregstates_{\access[]{}{}}$ of so-called \newconcept{maximal regular states} is non-empty:
\begin{equation}\label{eq:topclassregular}
\maxregstates_{\access[]{}{}}
=\set{x\in\states}
{(\exists n\in\naturals)(\forall k\geq n)(\forall y\in\states)\access[k]{y}{x}}
\neq\emptyset;
\end{equation}
and in that case this set $\maxregstates_{\access[]{}{}}$ is the top communication class.
\end{proposition}
\subsection{Accessibility relations for imprecise Markov chains}\label{sec:accessibility-imprecise}
Since we now only consider stationary imprecise Markov chains, we consider for each time $n\in\naturals$ the same transition models $\condmass_n(\cdot\vert x)=\condmass(\cdot\vert x)$, $x\in\states$, or equivalently, for the upper transition operators: $\utrans_n=\utrans$ and $\uttrans_n=\uttrans$.
\par
Let us denote by $\smash[b]{\utp[n]{x}{y}}$ the upper probability of going in $n$ steps from state $x$ to state $y$.
For $n=0$, $\utp[0]{x}{y}=\ind{\{y\}}(x)$, and for $n\geq1$, $\utp[n]{x}{y}=\uex_{k+n\vert k}(\{y\}\vert x)$, where---because of stationarity---the right-hand side does not depend on~$k\in\naturals$.
By Corollary~\ref{cor:marginal-concatenation}, we find that $\utp[n]{x}{y}=\utrans^n\ind{\{y\}}(x)$ for all $n\in\naturals_0$.
The following two propositions allow us to associate an accessibility relation with the upper transition operator.
They are immediate generalisations of similar results involving (precise) probabilities in (precise) Markov chains:
\begin{proposition}\label{prop:basic-inequality}
For all $x$, $y$ and $z$ in $\states$, and for all $n$ and $m$ in $\naturals_0$,
\begin{equation}\label{eq:basic-inequality}
\utp[n+m]{x}{y}\geq\utp[n]{x}{z}\utp[m]{z}{y}.
\end{equation}
\end{proposition}
\begin{proposition}\label{prop:always-arrive-in-a-state}
For all $x$ in $\states$, and for all $n$ in $\naturals_0$, there is some $y$ in $\states$ such that $\utp[n]{x}{y}>0$.
\end{proposition}
\noindent
Because of these results, which ensure that Eqs.~\eqref{eq:basic-communication-2} and~\eqref{eq:basic-communication-3} are satisfied [Eq.~\eqref{eq:basic-communication-1} is trivially satisfied because $\utp[0]{x}{y}=\ind{\{y\}}(x)$], we can define an accessibility relation $\uaccess[\cdot]{\cdot}{\cdot}$ using the $\utp[n]{x}{y}$: for any $x$ and $y$ in $\states$ and any $n\in\naturals_0$:
\begin{equation}\label{eq:uaccessibility}
\uaccess[n]{x}{y}\asa\utp[n]{x}{y}>0\asa\utrans^n\ind{\{y\}}(x)>0.
\end{equation}
Clearly, $\uaccess[n]{x}{y}$ if there is \emph{some} compatible probability tree in which it is possible (meaning that there is a non-zero probability) to go from state $x$ to $y$ in $n$ time steps. In other words, $\uaccess[n]{x}{y}$ if it is not considered impossible in the context of our imprecise-probability model to go from $x$ to $y$ in $n$ steps: we then say that $y$ is \newconcept{accessible} from $x$ in $n$ steps; and if $\uaccess[]{x}{y}$ then $y$ is \newconcept{accessible} from $x$.
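Equation~\eqref{eq:uaccessibility} also suggests how to check accessibility numerically: iterate $\utrans$ on the indicator of~$y$ and test positivity at~$x$. The sketch below again assumes a hypothetical chain with finitely generated credal sets; in the example, state $1$ is absorbing in every compatible transition matrix, so state $0$ is never accessible from it.

```python
# Accessibility in an imprecise Markov chain: y is accessible from x in
# n steps iff utrans^n 1_{{y}}(x) > 0.

def utrans(rows, h):
    """Upper transition operator: rows[x] lists the candidate transition
    rows of the finitely generated credal set for state x."""
    return [max(sum(q[y] * h[y] for y in range(len(h))) for q in rows[x])
            for x in range(len(rows))]

def accessible(rows, x, y, n):
    """True iff utrans^n applied to the indicator of y is positive at x."""
    h = [1.0 if z == y else 0.0 for z in range(len(rows))]
    for _ in range(n):
        h = utrans(rows, h)
    return h[x] > 0

rows = [[(0.5, 0.5), (0.2, 0.8)],   # from state 0, both states possible
        [(0.0, 1.0)]]               # from state 1, only state 1 is possible
print(accessible(rows, 0, 1, 1), accessible(rows, 1, 0, 3))  # True False
```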
\par
The following notion will be essential for the convergence result we present in the next section. It involves both lower and upper transition probabilities.
\begin{definition}[Regularly absorbing]\label{def:regabs}
A stationary imprecise Markov chain is called \newconcept{regularly absorbing} if it is top class regular (under~$\uaccess{}{}$), meaning that
\begin{equation}
\maxregstates_{\uaccess{}{}}
\eqdef\set{x\in\states}
{(\exists n\in\naturals)(\forall k\geq n)(\forall y\in\states)
\utrans^k\ind{\{x\}}(y)>0}
\neq\emptyset,
\end{equation}
and if moreover for all~$y$ in~$\states\setminus\maxregstates_{\uaccess[]{}{}}$ there is some $n\in\naturals$ such that\/ $\ltrans^n\ind{\maxregstates_{\uaccess{}{}}}(y)>0$.
\end{definition}
\noindent
In particular, an imprecise Markov chain that is regular (under~$\uaccess{}{}$, meaning that the accessibility relation $\uaccess{}{}$ is regular) is also regularly absorbing (under $\uaccess{}{}$) in a trivial way.
\section{Convergence for stationary imprecise Markov chains}\label{sec:convergence}
We call an upper expectation $\uex$ on $\allgambles(\states)$ \newconcept{$\utrans$-invariant} whenever $\uex\circ\utrans=\uex$, so whenever $\uex(\utrans h)=\uex(h)$ for all $h\in\allgambles(\states)$.
\begin{theorem}[Perron--Frobenius Theorem, Upper Expectation Form]\label{theo:convergence}
Consider a stationary imprecise Markov chain with finite state set $\states$ that is regularly absorbing. Then for every initial upper expectation $\uex_1$, the upper expectation $\uex_n=\uex_1\circ\utrans^{n-1}$ for the state at time $n$ converges point-wise to the same upper expectation $\uex_\infty$:
\begin{equation}
\smashoperator{\lim_{n\to\infty}}\uex_n(h)
=\smashoperator{\lim_{n\to\infty}}
\uex_1(\utrans^{n-1}h)\defeq\uex_\infty(h)
\text{ for all $h$ in $\allgambles(\states)$.}
\end{equation}
Moreover, the limit upper expectation $\uex_\infty$ is the only $\utrans$-invariant upper expectation on $\allgambles(\states)$.
\end{theorem}
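For a small regular chain, the limit upper expectation in Theorem~\ref{theo:convergence} can be approximated by iterating $\utrans$ until $\utrans^{n}h$ becomes (numerically) a constant map; its constant value is then $\uex_\infty(h)$, whatever $\uex_1$ may be. The sketch below uses a hypothetical two-state chain with finitely generated credal sets whose transition rows all have positive entries, so that the chain is regular and hence regularly absorbing.

```python
# Approximating uex_infinity(h) by iterating the upper transition operator
# until the iterates utrans^n h become a constant map.

def utrans(rows, h):
    """Upper transition operator of a finitely generated credal set."""
    return [max(sum(q[y] * h[y] for y in range(len(h))) for q in rows[x])
            for x in range(len(rows))]

def limit_upper_expectation(rows, h, tol=1e-12, max_iter=10000):
    """Iterate until utrans^n h is numerically constant; that constant is
    uex_infinity(h), independently of the initial upper expectation uex_1."""
    for _ in range(max_iter):
        h = utrans(rows, h)
        if max(h) - min(h) < tol:
            return max(h)
    raise RuntimeError("no convergence within max_iter steps")

rows = [[(0.6, 0.4), (0.8, 0.2)],   # all rows strictly positive: regular
        [(0.3, 0.7), (0.5, 0.5)]]
print(limit_upper_expectation(rows, [1.0, 0.0]))  # about 0.714286 (= 5/7)
```

For this particular credal set the maximising rows stabilise immediately, so the iteration reduces to a precise regular chain and the limit is its stationary upper probability $5/7$ of state $0$.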
\noindent
Let us compare this convergence result to what exists in the literature.
\par
The classical Perron--Frobenius Theorem~\ref{theo:perron-frobenius-classical} is of course a special case of our Theorem~\ref{theo:convergence}, because if (the transition operator of) a precise stationary Markov chain is regular in the sense of Theorem~\ref{theo:perron-frobenius-classical}, then it is also regular (under~$\uaccess{}{}$), and therefore regularly absorbing.
\par
Other authors have presented convergence results for stationary imprecise Markov chains, namely \citet{hartfiel1991,hartfiel1998,hartfiel1994}, and \citet{skulj2007}. They all use the following approach.
They consider some set $\transmats$ of (one-step) transition matrices $T$, and deduce from that a corresponding set $\transmats^n$ of $n$-step transition matrices given by
\begin{equation}
\transmats^n
\eqdef\set{\transmat_1\transmat_2\dots\transmat_n}
{\transmat_1,\transmat_2,\dots,\transmat_n\in\transmats}.
\end{equation}
\citeauthor{hartfiel1998} calls the sequence $\transmats^n$, $n\in\naturals$, a \newconcept{Markov set chain}.
If we also have a set $\margmass_1$ of (marginal) mass functions $m_1$ for $X(1)$, then they take the corresponding set $\margmass_n$ of (marginal) mass functions for $X(n)$ to be
\begin{equation}
\margmass_n
=\set{\dismat_1\transmat}
{m_1\in\margmass_1\text{ and }\transmat\in\transmats^{n-1}},
\end{equation}
where, as before, we also denote by $\dismat$ the row vector corresponding to the mass function~$m$.
If we furthermore also denote by $h$ the column vector corresponding to the values $h(x)$ of the real-valued map~$h$ in all $x\in\states$, then we find that the corresponding set $\expects_n(h)$ of expectations of $h(X(n))$ is given by
\begin{equation}
\expects_n(h)
=\set{m_1Th}
{m_1\in\margmass_1\text{ and }\transmat\in\transmats^{n-1}}.
\end{equation}
Incidentally, these are also the formulae that can be obtained by considering imprecise Markov chains to be special cases of so-called credal networks under a strong independence assumption; for more details, see \citeauthor{cozman2000}'s work \citep{cozman2000,cozman2005} for instance.
\par
\citet{skulj2007} considers the set $\transmats$ of transition matrices $\transmat$ corresponding to a so-called \newconcept{interval stochastic matrix}, meaning that $\transmats$ is the set of all transition matrices such that $\ltransmat\leq\transmat\leq\utransmat$, where $\ltransmat$ and $\utransmat$ are so-called lower and upper transition matrices; see also Section~\ref{sec:lower-upper-mass} for the related model in terms of upper transition operators. \citet{hartfiel1991} considers arbitrary sets of transition matrices, but in his book \cite{hartfiel1998} he also focuses mainly on interval stochastic matrices.
\par
What is the relationship between the Markov set-chain model and the model involving upper transition operators we have studied and motivated above?
Consider a stationary imprecise Markov chain with upper transition operator $\utrans$.
For each state $x$, as $\utrans h(x)$ has been defined as a conditional upper expectation $\uex(h\vert x)$, there is a corresponding credal set $\condmass_{\utrans}(\cdot\vert x)$ given by
\begin{equation}\label{eq:utrans-to-condmass}
\condmass_{\utrans}(\cdot\vert x)
\eqdef\set{q(\cdot\vert x)\in\simplex_\states}
{(\forall h\in\allgambles(\states))\ex_{q(\cdot\vert x)}(h)\leq\utrans h(x)},
\end{equation}
and then also
\begin{equation}\label{eq:condmass-back-to-utrans}
\utrans h(x)
=\max\set{\ex_{q(\cdot\vert x)}(h)}
{q(\cdot\vert x)\in\condmass_{\utrans}(\cdot\vert x)}.
\end{equation}
With these credal sets, we can associate a set of transition matrices $\transmats_{\utrans}$:
\begin{equation}\label{eq:utrans-to-transmats}
\transmats_{\utrans}
\eqdef\set{\transmat\in\reals^{\states\times\states}}
{(\forall x\in\states)
(\exists q(\cdot\vert x)\in\condmass_{\utrans}(\cdot\vert x))
(\forall y\in\states)
\transmat_{xy}=q(y\vert x)}.
\end{equation}
In other words, each row $\transmat_{x\cdot}$ of any such transition matrix is formed by the transition probabilities corresponding to some element of $\condmass_{\utrans}(\cdot\vert x)$.
The elements $\transmat$ of $\transmats_{\utrans}$ are the transition matrices that can be constructed using the one-step information contained in the conditional credal sets $\condmass_{\utrans}(\cdot\vert x)$ and therefore in the (one-step) upper transition operator $\utrans$.
More generally, the set $\transmats_{\utrans^n}$ contains all $n$-step transition matrices that correspond to the $n$-step upper transition operator $\utrans^n$ (see the Appendix for more details about why we can also consider $\utrans^n$ to be an upper transition operator).
\begin{proposition}\label{prop:hartfiel-and-us}
Consider a stationary imprecise Markov chain with upper transition operator~$\utrans$ and let $n\in\naturals$. Then
\begin{enumerate}[(i)]
\item $\transmats_{\utrans}^n\subseteq\transmats_{\utrans^n}$;
\item For all real-valued maps $h$ on $\states$ there is some $\transmat\in\transmats_{\utrans}^n$ such that for all $x\in\states$, $\utrans^nh(x)=(\transmat h)_x$;
\item\label{prop:hartfiel-and-us:same} For all real-valued maps $h$ on $\states$ and all $x\in\states$,
\begin{equation}
\label{eq:hartfiel-and-us}
\utrans^nh(x)
=\max\set{(\transmat h)_x}{\transmat\in\transmats_{\utrans}^n}
\quad\text{ and }\quad
\ltrans^nh(x)
=\min\set{(\transmat h)_x}{\transmat\in\transmats_{\utrans}^n}.
\end{equation}
\end{enumerate}
\end{proposition}
\noindent
We gather from the following counterexample that for $n>1$, $\transmats_{\utrans}^n$ can be strictly included in $\transmats_{\utrans^n}$.
This shows that the model based on imprecise-probability trees and upper transition operators that we have been using is more detailed than the Markov set chain model. Nevertheless, as Proposition~\ref{prop:hartfiel-and-us}\eqref{prop:hartfiel-and-us:same} indicates, both models yield very strongly related (if not identical) results as far as the calculation of marginal expectations for $X(n)$ is concerned.
\begin{example}\upshape
Consider~$\utrans\eqdef(1-\epsilon)\id+\cg\epsilon\max$, where $0\leq\epsilon\leq1$ and~$\id$ is the identity operator, which leaves its argument real-valued map~$h$ unchanged: ${\id h=h}$.
This corresponds to a special case of the contamination models~\eqref{eq:contamination-utrans} discussed in Section~\ref{sec:contamination}.
For the corresponding $2$-step transition operator, we find that~$\utrans^2=(1-\delta)\id+\cg\delta\max$, with $\delta\eqdef\epsilon(2-\epsilon)$.
\par
Let $\card{\states}=2$; then the sets of corresponding transition matrices are
\begin{equation}
\transmats_{\utrans}
=\set{\begin{bmatrix}1-\epsilon_1&\epsilon_1\\\epsilon_2&1-\epsilon_2\end{bmatrix}}
{0\leq\epsilon_1,\epsilon_2\leq\epsilon}
\text{ and }
\transmats_{\utrans^2}
=\set{\begin{bmatrix}1-\delta_1&\delta_1\\\delta_2&1-\delta_2\end{bmatrix}}
{0\leq\delta_1,\delta_2\leq\delta}.
\end{equation}
We now show that the set $\transmats^2_{\utrans}$ is \emph{strictly} contained in $\transmats_{\utrans^2}$.
Any element of $\transmats_{\utrans}^2$ is given by
\begin{equation}
\begin{bmatrix}
1-\epsilon_1&\epsilon_1\\
\epsilon_2&1-\epsilon_2
\end{bmatrix}
\begin{bmatrix}
1-\epsilon_3&\epsilon_3\\
\epsilon_4 &1-\epsilon_4
\end{bmatrix}
=
\begin{bmatrix}
1-\epsilon_1-\epsilon_3+\epsilon_1\epsilon_3+\epsilon_1\epsilon_4
&\epsilon_1+\epsilon_3-\epsilon_1\epsilon_3-\epsilon_1\epsilon_4\\
\epsilon_2+\epsilon_4-\epsilon_2\epsilon_4-\epsilon_2\epsilon_3
&1-\epsilon_2-\epsilon_4+\epsilon_2\epsilon_4+\epsilon_2\epsilon_3
\end{bmatrix}
\end{equation}
for some $0\leq\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4\leq\epsilon$, and therefore clearly belongs to $\transmats_{\utrans^2}$. But it is straightforward to check that no choice of $\epsilon_1,\epsilon_2,\epsilon_3,\epsilon_4$ in $[0,\epsilon]$ corresponds to the element of $\transmats_{\utrans^2}$ with $\delta_1=\delta_2=\delta=\epsilon(2-\epsilon)$.~$\blacklozenge$
\end{example}
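The final claim of the example can also be checked numerically: a brute-force scan of the parameter box $[0,\epsilon]^4$ shows that the two off-diagonal entries of the product can never simultaneously reach $\delta$. The sketch below takes $\epsilon=0.5$ (so $\delta=0.75$); the search never finds a value above $0.5$.

```python
# Brute-force check of the counterexample for eps = 0.5 (delta = 0.75):
# scan eps1..eps4 over a grid in [0, eps] and record the largest value that
# both off-diagonal entries of the matrix product can attain together.

eps = 0.5
delta = eps * (2 - eps)                      # = 0.75
grid = [i * eps / 20 for i in range(21)]     # 21 grid points in [0, eps]
best = 0.0
for e1 in grid:
    for e2 in grid:
        for e3 in grid:
            for e4 in grid:
                top = e1 + e3 - e1 * e3 - e1 * e4   # upper-right entry
                bot = e2 + e4 - e2 * e4 - e2 * e3   # lower-left entry
                best = max(best, min(top, bot))
print(best, delta)  # best = 0.5, well below delta = 0.75
```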
\Citet{skulj2007} calls a compact set $\transmats$ of transition matrices \newconcept{regular} if there is some $n>0$ such that $\transmat_{xy}>0$ for all $\transmat\in\transmats^n$ and all $x,y\in\states$. He then shows that for such regular~$\transmats$ and for all compact~$\margmass_1$, the corresponding sequence of compact sets~$\margmass_n$ converges in Hausdorff norm to the same compact (and invariant) set $\margmass_{\infty}$.
It follows that for all~$h$ and all compact~$\margmass_1$, the sequence of compact sets $\expects_n(h)$ will converge to the same compact set $\expects_\infty(h)$.
This is a clear generalisation of the classical Perron--Frobenius Theorem~\ref{theo:perron-frobenius-classical}.
But it follows from Proposition~\ref{prop:hartfiel-and-us} that for a given stationary imprecise Markov chain with upper transition operator~$\utrans$, the set~$\transmats_{\utrans}$ is regular in \citeauthor{skulj2007}'s sense if and only if for some $n\in\naturals$, $\ltrans^n\ind{\{y\}}(x)>0$ for all $x,y\in\states$. This is much stronger than even our strongest convergence requirement of regularity (under $\uaccess{}{}$), which only involves the condition $\utrans^n\ind{\{y\}}(x)>0$ for all $x,y\in\states$.
\citeauthor{skulj2007} also proves a convergence result for conservative (too large) approximations of the~$\uex_n$, in the special case of a regular (under~$\uaccess{}{}$) imprecise Markov chain whose upper transition operator is $2$-alternating; see Section~\ref{sec:lower-upper-mass} for further details.
\par
We now turn to \citeauthor{hartfiel1991}'s \citep{hartfiel1991,hartfiel1994,hartfiel1998} results. The strongest general convergence result seems to appear in his book \citep[Sec.~3.2]{hartfiel1998}, where he uses the \newconcept{coefficient of ergodicity} $\tau(\transmat)$ of a transition matrix $\transmat$, defined by
\begin{equation}\label{eq:ergod-coeff}
\tau(\transmat)
=\frac{1}{2}\max_{x,y\in\states}\sum_{z\in\states}\abs{\transmat_{xz}-\transmat_{yz}}
=1-\min_{x,y\in\states}\sum_{z\in\states}\min\{\transmat_{xz},\transmat_{yz}\}.
\end{equation}
A transition matrix is called \newconcept{scrambling} if $\tau(\transmat)<1$.
\citeauthor{hartfiel1991} calls a compact set $\transmats$ of transition matrices \newconcept{product scrambling} if there is some $m\in\naturals$ such that $\tau(\transmat)<1$ for all $\transmat\in\transmats^m$.
He then shows that for such product scrambling $\transmats$ and for all compact $\margmass_1$, the corresponding sequence of compact sets $\margmass_n$ converges in Hausdorff norm to the same compact (and invariant) set $\margmass_{\infty}$.
Again, this is a generalisation of the classical Perron--Frobenius Theorem, and it includes \citeauthor{skulj2007}'s above-mentioned result as a special case.
We believe, however, that this approach, based on the coefficient of ergodicity, has a number of drawbacks that our treatment does not have: the condition seems quite hard to check in practice, and it is hard to interpret directly. We now also argue that it is too strong, at least from our point of view.
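Computing the coefficient of ergodicity of a single transition matrix is easy; the difficulty referred to above lies in the quantification over all matrices in~$\transmats^m$. The sketch below, with a hypothetical matrix, evaluates both equivalent expressions in Eq.~\eqref{eq:ergod-coeff} and checks that they agree.

```python
# Coefficient of ergodicity tau(T), via both equivalent expressions.

def tau(T):
    """tau(T) = (1/2) max_{x,y} sum_z |T_xz - T_yz|."""
    states = range(len(T))
    return 0.5 * max(sum(abs(T[x][z] - T[y][z]) for z in states)
                     for x in states for y in states)

def tau_alt(T):
    """tau(T) = 1 - min_{x,y} sum_z min(T_xz, T_yz)."""
    states = range(len(T))
    return 1 - min(sum(min(T[x][z], T[y][z]) for z in states)
                   for x in states for y in states)

T = [[0.9, 0.1], [0.2, 0.8]]      # hypothetical transition matrix
print(tau(T), tau_alt(T))          # both 0.7: T is scrambling (tau < 1)
identity = [[1.0, 0.0], [0.0, 1.0]]
print(tau(identity))               # 1.0: the unit matrix is not scrambling
```

The unit matrix illustrates the vacuous chain of Example~\ref{ex:vacuous-chain}: it has $\tau=1$, which is why that chain is not product scrambling.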
\begin{proposition}\label{prop:product-scrambling}
Consider a stationary imprecise Markov chain with upper transition operator~$\utrans$. If~$\transmats_{\utrans}$ is product scrambling, then the chain is regularly absorbing.
\end{proposition}
\noindent
Moreover, as the following counterexample shows, it is easy to find examples of stationary imprecise Markov chains that are regularly absorbing but for which the corresponding set $\transmats_{\utrans}$ is not product scrambling. Another, perhaps more involved, such counterexample will be presented near the end of Section~\ref{sec:k-out-of-n}.
\begin{example}[Vacuous imprecise Markov chain]\label{ex:vacuous-chain}
Consider an arbitrary state set $\states$ with at least two elements, and the upper transition operator $\utrans$ defined by $\utrans h=\ind{\states}\max h$ for all real-valued maps $h$ on $\states$.
The set $\transmats_{\utrans}$ that corresponds to this upper transition operator is the set of \emph{all} transition matrices $\transmats_{\mathrm{all}}$, and consequently $\transmats_{\utrans^n}=\transmats_{\utrans}^n=\transmats_{\mathrm{all}}$ for all $n\in\naturals$ as well.
\par
Consider the unit transition matrix $\transmat$ defined by $\transmat_{xy}=\delta_{xy}$ [Kronecker delta], so the system remains with probability one in any state $x$ that it is in.
This $\transmat$ belongs to $\transmats_{\utrans^n}=\transmats_{\mathrm{all}}$ for all $n\in\naturals$, but $\tau(\transmat)=1$, so $\transmats_{\mathrm{all}}$ is not product scrambling.
\par
But the chain is regularly absorbing! It is even regular (under~$\uaccess{}{}$), in a trivial way: ${\utrans^n\ind{\{y\}}(x)=1}$ for all $n\in\naturals$ and all $x,y\in\states$.
Observe that $\utrans^n=\ind{\states}\max$ and therefore $\uex_\infty=\max$ for all~$\uex_1$.~$\blacklozenge$
\end{example}
\section{Examples}\label{sec:examples}
In this section, we indicate how the theory developed in the previous sections can be applied in a number of practical situations.
For each of these, the upper expectations are of some special types that are described in the literature on imprecise probabilities.
We present concrete and explicit examples, as well as a number of simulations.
\subsection{Contamination models}\label{sec:contamination}
Suppose we consider a precise stationary Markov chain, with transition operator $\trans$.
We contaminate it with a vacuous model, i.e., we take a convex mixture with the upper transition operator $\cg\max$ of Example~\ref{ex:vacuous-chain}.
This leads to the upper transition operator $\utrans$, defined by
\begin{equation}\label{eq:contamination-utrans}
\utrans h=(1-\epsilon)\trans h+\cg\epsilon\max h,
\end{equation}
for all $h\in\allgambles(\states)$, where~$\epsilon$ is some constant in the open real interval $(0,1)$.
The underlying idea is that we consider a specific convex neighbourhood of $\trans$.
Since for all~$x$ in~$\states$, $\min\utrans\ind{\{x\}}={(1-\epsilon)\min\trans\ind{\{x\}}+\epsilon}>0$, this upper transition operator (or the associated imprecise Markov chain) is always
regular (under~$\uaccess{}{}$), regardless of whether~$\trans$ is regular (in the sense of Theorem~\ref{theo:perron-frobenius-classical})!
We infer from Theorem~\ref{theo:convergence} that, whatever the initial upper expectation operator~$\uex_1$ is, the upper expectation operator~$\uex_n$ for the state $X(n)$ at time~$n\in\naturals$ will always converge to the same~$\uex_\infty$.
\par
What is this $\uex_\infty$ for given~$\trans$ and~$\epsilon$?
For any $n\geq1$,
\begin{align}
\utrans^nh
&= (1-\epsilon)^n\trans^nh
+\cg\epsilon\smashoperator{\sum_{k=0}^{n-1}}(1-\epsilon)^k\max\trans^kh,\\
\intertext{and therefore}
\uex_{n+1}(h)
&=(1-\epsilon)^n\uex_1(\trans^nh)
+\epsilon\smashoperator{\sum_{k=0}^{n-1}}(1-\epsilon)^k\max\trans^kh.
\label{eq:contamination-marginal}
\end{align}
If we now let $n\to\infty$, we see that the limit is indeed independent of the initial upper expectation~$\uex_1$:
\begin{equation}\label{eq:contamination-limit}
\uex_\infty(h)
=\epsilon\smashoperator{\sum_{k=0}^{\infty}}
(1-\epsilon)^k\max\trans^kh.
\end{equation}
\begin{example}[Contaminating a cycle]\upshape
Consider for instance $\states=\{a,b\}$, and let the precise Markov chain be the cycle with period 2, with transition operator $\trans$ given by $\trans h(a)=h(b)$ and $\trans h(b)=h(a)$.
Then $\trans^{2n}h=h$ and $\trans^{2n+1}h=\trans h$, and therefore $\max\trans^{2n}h=\max\trans^{2n+1}h=\max h$, whence $\uex_\infty(h)=\max h$.
So the limit upper expectation is vacuous: we lose all information about the value of $X(n)$ as $n\to\infty$.~$\blacklozenge$
\end{example}
\begin{example}[Contaminating a random walk]\upshape
Consider a random walk, where $\states=\{a,b\}$ and $\trans h=\cg\frac{h(a)+h(b)}{2}$.
Then we find that $\uex_\infty(h)=\epsilon\max h+(1-\epsilon)\frac{h(a)+h(b)}{2}$.~$\blacklozenge$
\end{example}
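As a numerical sanity check of Eq.~\eqref{eq:contamination-limit}, the following Python sketch truncates the series and recovers the limits of both examples above. The function name \texttt{contaminated\_limit} and the truncation length \texttt{n\_terms} are our choices, not part of the model.

```python
import numpy as np

def contaminated_limit(T, eps, h, n_terms=2000):
    """Truncated series eps * sum_k (1-eps)^k max T^k h (Eq. contamination-limit)."""
    total, g = 0.0, h.astype(float)
    for k in range(n_terms):
        total += eps * (1 - eps)**k * g.max()
        g = T @ g                       # advance to T^{k+1} h
    return total

eps = 0.1
h = np.array([1.0, -2.0])               # an arbitrary map h on {a, b}

# Cycle with period 2: T h(a) = h(b), T h(b) = h(a)  ->  limit is max h = 1.
T_cycle = np.array([[0.0, 1.0], [1.0, 0.0]])
print(contaminated_limit(T_cycle, eps, h))

# Random walk: T h = (h(a)+h(b))/2  ->  eps*max h + (1-eps)*(h(a)+h(b))/2 = -0.35.
T_walk = np.full((2, 2), 0.5)
print(contaminated_limit(T_walk, eps, h))
```

Both printed values agree with the closed forms derived in the two examples.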
\begin{example}[Another contamination model]\upshape\label{ex:contaminated-transient}
To illustrate the convergence properties of an imprecise Markov chain, let us look at a simple numerical example.
Again consider $\states=\{a,b\}$ and let the stationary imprecise Markov chain be defined by an initial credal set $\margmass_1=\set{m\in\simplex_{\{a,b\}}}{0.6\leq m(a)\leq0.9}$, and a contamination model of the type~\eqref{eq:contamination-utrans}, with ${\epsilon=0.1}$, and for which the precise transition operator~$\trans$ is defined by the transition matrix
\begin{equation*}
\transmat
\eqdef
\begin{bmatrix}
q(a\vert a) & q(b\vert a)\\
q(a\vert b) & q(b\vert b)
\end{bmatrix}
=
\begin{bmatrix}
0.15 & 0.85\\
0.85 & 0.15
\end{bmatrix}.
\end{equation*}
In~Figure~\ref{fig:contaminated-transient} we have plotted the evolution of $\uex_n(\{a\})$ and $\lex_n(\{a\})$, the upper and lower probability for finding the system in state~$a$ at time $n$, which can be calculated efficiently using Eq.~\eqref{eq:contamination-marginal}.
\begin{figure}[ht]
\centering\small
\begin{tikzpicture}[baseline,x=1.5em,y=25ex]
\fill[color=UGentblauw!20] (0,.5) -- plot file {LaV.table} -- (19,.5);
\fill[color=UGentblauw!20] (0,.5) -- plot file {UaV.table} -- (19,.5);
\draw[color=red,thick,text=black] plot[mark=x] file {Pa.table} node[right] {$\ex_n(\{a\})$};
\draw[color=UGentblauw,thick,text=black] plot[mark=x] file {La.table} node[right] {$\lex_n(\{a\})$};
\draw[color=UGentblauw,thick,text=black] plot[mark=x] file {Ua.table} node[right] {$\uex_n(\{a\})$};
\draw[->] (0,0) -- (20,0) node[right] {$n$};
\foreach \k in {0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19} \draw ([yshift=.3ex] \k,0) -- ([yshift=-.3ex] \k,0);
\foreach \kpos/\k in {0/1,4/5,9/10,14/15,19/20} \path (\kpos,0) node[below] {$\k$};
\draw[->] (0,0) -- (0,1.05);
\foreach \val in {0.0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1.0} \draw ([xshift=.3ex] 0,\val) -- ([xshift=-.3ex] 0,\val);
\foreach \val in {0.0,0.2,0.4,0.6,0.8,1.0} \path (0,\val) node[left] {$\val$};
\end{tikzpicture}
\caption{
The time evolution of (i)~the upper and lower probability of finding the imprecise Markov chain of Example~\ref{ex:contaminated-transient} in the state~$a$ (outer plot marks and connecting lines); and of (ii)~the probability of finding the classical Markov chain of Example~\ref{ex:contaminated-transient} in the state~$a$ (inner plot marks and connecting lines).
The filled area denotes the hull of the evolution of this probability, under the contamination model of Example~\ref{ex:contaminated-transient}, for all possible initial mass functions.
}
\label{fig:contaminated-transient}
\end{figure}
\par
For comparison, we have also plotted the evolution of~$\ex_n(\{a\})$, the probability for finding the system in state~$a$ at time $n$, for a (precise) Markov chain defined by probability mass functions that lie on the boundaries of the credal sets defining the above imprecise Markov chain; to wit, its initial mass function is given by the row vector~$\dismat_1\eqdef[m_1(a) \;\; m_1(b)]=[0.9 \;\; 0.1]$ and its transition matrix is $\left[\begin{smallmatrix}0.135&0.865\\0.865&0.135\end{smallmatrix}\right]$.
Here $\ex_\infty(\{a\})=\ex_\infty(\{b\})=0.5$.~$\blacklozenge$
\end{example}
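The curves in Figure~\ref{fig:contaminated-transient} can be reproduced with a few lines of code. The sketch below hard-codes $\transmat$, $\epsilon$ and the initial bounds of this example, and runs the backwards recursion $\uex_{n}(\ind{\{a\}})=\uex_1(\utrans^{n-1}\ind{\{a\}})$ together with its conjugate lower version, exploiting that $\uex_1$ of a map that is linear in $m(a)$ is attained at an endpoint of $[0.6,0.9]$:

```python
import numpy as np

Q = np.array([[0.15, 0.85],
              [0.85, 0.15]])
eps = 0.1

# Upper/lower transition operators of the contamination model.
def utrans(g): return (1 - eps) * (Q @ g) + eps * g.max()
def ltrans(g): return (1 - eps) * (Q @ g) + eps * g.min()

# Initial credal set 0.6 <= m(a) <= 0.9: check only the two extreme points.
def uex1(g): return max(p * g[0] + (1 - p) * g[1] for p in (0.6, 0.9))
def lex1(g): return min(p * g[0] + (1 - p) * g[1] for p in (0.6, 0.9))

ind_a = np.array([1.0, 0.0])            # indicator of {a}
gu, gl = ind_a, ind_a
for n in range(1, 21):
    print(n, round(lex1(gl), 4), round(uex1(gu), 4))
    gu, gl = utrans(gu), ltrans(gl)
```

The printed bounds start at $0.6$ and $0.9$ and converge, as predicted by Theorem~\ref{theo:convergence}, to limits that no longer depend on the initial credal set.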
\subsection{Belief function models}\label{sec:belief-models}
The contamination models we have just described are a special case of a more general and quite interesting class of models, based on \citeauthor{shafer1976}'s \citep{shafer1976} notion of a belief function. We can consider a number of subsets $F_j$, $j=1,\dots,n$, of $\states$, and a convex mixture of the vacuous upper expectations relative to these subsets:
\begin{equation}\label{eq:belief-functions}
\uex(h)=\sum_{j=1}^nm(F_j)\max_{x\in F_j}h(x),
\end{equation}
with $m(F_j)\geq0$ and $\sum_{j=1}^nm(F_j)=1$. In Shafer's terminology, the sets $F_j$ are called \newconcept{focal elements}, and the $m(F_j)$'s the \newconcept{basic probability assignment}.\footnote{Usually, in Shafer's approach, Eq.~\eqref{eq:belief-functions} is only considered for (indicators of) events, and it then defines a so-called \newconcept{plausibility function}, whose conjugate lower probability is a \newconcept{belief function}. Eq.~\eqref{eq:belief-functions} gives the point-wise greatest (most conservative) upper expectation that extends this plausibility function from events to real-valued maps.}
\par
We can now consider imprecise Markov chains where the local models, attached to the non-terminal situations in the tree, are of this type. The general backwards recursion formulae we have given in Section~\ref{sec:sensitivity-analysis} can then be used in combination with the simple formulae of the type~\eqref{eq:belief-functions} for an efficient calculation of all conditional and joint upper and lower expectations in the tree. We leave this implicit however, and move on to another example, which is rather more popular in the literature.
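For a finite state space, the local model~\eqref{eq:belief-functions} is a one-liner. The sketch below uses a hypothetical basic probability assignment on $\{a,b,c\}$; the focal elements and masses are ours, chosen purely for illustration.

```python
def plausibility_uex(focal, h):
    """Most conservative upper expectation extending a plausibility function:
    uex(h) = sum_j m(F_j) * max_{x in F_j} h(x)   (Eq. belief-functions)."""
    return sum(m * max(h[x] for x in F) for F, m in focal)

# Hypothetical basic probability assignment: focal elements with their masses.
focal = [(("a", "b"), 0.5), (("c",), 0.3), (("a", "b", "c"), 0.2)]

h = {"a": 1.0, "b": 4.0, "c": 2.0}
print(plausibility_uex(focal, h))        # 0.5*4 + 0.3*2 + 0.2*4 = 3.4

ind_a = {"a": 1.0, "b": 0.0, "c": 0.0}   # upper probability of the event {a}
print(plausibility_uex(focal, ind_a))    # 0.5 + 0.2 = 0.7
```

Applied to an indicator, the same expression returns the plausibility of the corresponding event, as mentioned in the footnote above.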
\subsection{Models with lower and upper mass functions}\label{sec:lower-upper-mass}
An intuitive way to introduce imprecise Markov chains \citep{kozine2002,campos2003,skulj2006,hartfiel1998} goes by way of so-called \newconcept{probability intervals}, studied in a paper by \Citet{campos1994}; see also \citetopt[Section~4.6.1]{walley1991} and \citetopt[Section~2.1]{hartfiel1998}. It consists in specifying lower and upper bounds for mass functions. Let us explain how this is done in the specific context of Markov chains.
\par
For the initial mass function $m_1$, we specify a lower bound $\lmarg_1\colon\states\to\reals$, also called a \newconcept{lower mass function}, and an upper bound $\umarg_1\colon\states\to\reals$, called an \newconcept{upper mass function}. The credal set $\margmass_1$ attached to the initial situation, which corresponds to these bounds, is then given by
\begin{equation}
\margmass_1
\eqdef
\set{m\in\simplex_\states}
{(\forall x\in\states)\,\lmarg_1(x)\leq m(x)\leq\umarg_1(x)}.
\end{equation}
\par
Similarly, in each non-terminal situation $\vtuple{x}{k}\in\states^k$, ${k=1,\dots,N-1}$, we have a credal set $\condmass_k(\cdot\vert x_k)$ that is defined in terms of conditional lower and upper mass functions $\lcond_k(\cdot\vert x_k)$ and $\ucond_k(\cdot\vert x_k)$. Here, for instance, $\lcond_k(x_{k+1}\vert x_k)$ gives a lower bound on the transition probability $q_k(x_{k+1}\vert x_k)$ to go from state $X(k)=x_k$ to state $X(k+1)=x_{k+1}$ at time $k$.
\par
Under some consistency conditions (for more details, see \citep{campos1994}), the upper expectation associated with $\margmass_1$ is then given for all subsets~$A$ of~$\states$ by
\begin{equation}
\uex_1(A)
=\min\bigg\{\smashoperator{\sum_{z\in A}}\umarg_1(z),
1-\smashoperator{\sum_{z\in\states\setminus A}}\lmarg_1(z)\bigg\}.
\end{equation}
This $\uex_1$ is \newconcept{$2$-alternating}: $\uex_1(A\cup B)+\uex_1(A\cap B)\leq\uex_1(A)+\uex_1(B)$ for all subsets~$A$ and~$B$ of~$\states$.
This implies (see \citep[Section~3.2.4]{walley1991} and \citep[Theorem~8 and Corollary~17]{cooman2005e}) that for all $h\in\allgambles(\states)$ the upper expectation $\uex_1(h)$ can be found by Choquet integration:
\begin{equation}\label{eq:choquet}
\uex_1(h)
=\min h+\smashoperator{\int_{\min h}^{\max h}}
\uex_1(\set{z\in\states}{h(z)\geq\alpha})\dif\alpha,
\end{equation}
where the integral is a Riemann integral. Similar considerations for the $2$-alternating $\uex_k(\cdot\vert x_k)$ lead to formulae for the upper transition operators $\utrans_k$: for all~$x_k$ in~$\states$,
\begin{align}
\utrans_k\ind{A}(x_k)
&=\min\bigg\{\smashoperator{\sum_{z\in A}}\ucond_k(z\vert x_k),
1-\smashoperator{\sum_{z\in\states\setminus A}}
\lcond_k(z\vert x_k)\bigg\}\label{eq:choquet2alt}\\
\utrans_kh(x_k)
&=\min h+\smashoperator{\int_{\min h}^{\max h}}
\utrans_k\ind{\set{z\in\states}{h(z)\geq\alpha}}(x_k)
\dif\alpha.\label{eq:choquet2alttoo}
\end{align}
Using $\uex_1$ and the $\utrans_k$, all (conditional) expectations in the imprecise Markov chain can now be calculated, by applying Theorem~\ref{theo:concatenation} and Corollary~\ref{cor:marginal-concatenation}.
\par
Rather than using this backwards recursion method, \citet{skulj2006,skulj2007} uses forward propagation, which, reformulated using our notations, amounts to the following. The marginal expectation $\uex_2$ is calculated by $\uex_2=\uex_1\circ\utrans_1$, $\uex_3$ by $\uex_3=\uex_2\circ\utrans_2$, and more generally, $\uex_{n+1}=\uex_n\circ\utrans_n$.
Even though it appears quite natural, this approach has an important drawback, especially in the context of the probability interval approach described above.
In order to calculate, say~$\uex_3(h)$, we first need to find the upper expectation~$\uex_2$, and calculate its value in the map~$\utrans_2h$.
But~$\uex_2$, as the composition of two $2$-alternating models~$\uex_1$ and~$\utrans_1$, is no longer necessarily $2$-alternating, and therefore its value in the map~$\utrans_2h$ cannot generally be calculated from the values it assumes on events, using Choquet integration, as in Eqs.~\eqref{eq:choquet} and~\eqref{eq:choquet2alttoo}.
Indeed, Choquet integration will generally give too large a value for~$\uex_3(h)$, and will therefore lead to conservative approximations.
These are the difficulties that \citeauthor{skulj2006} is faced with in his work \citep{skulj2006,skulj2007}.
\par
They can be circumvented by our backwards recursion approach.
Indeed, in order to find~$\uex_n(h)$, we begin by calculating $h_1\eqdef h$ and $h_{k+1}\eqdef\utrans_kh_k$, $k=1,\dots,n-1$, using Eq.~\eqref{eq:choquet2alttoo}.
Finally, $\uex_n(h)=\uex_1(h_n)$ is calculated using Eq.~\eqref{eq:choquet}.
Our calculations use Choquet integration but are tight, and not conservative approximations, because at all times, the intervening local upper expectations are $2$-alternating.
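To make this concrete, here is a small Python sketch of the Choquet integral~\eqref{eq:choquet} for a probability-interval model; the helper names \texttt{upper\_prob}, \texttt{choquet} and \texttt{greedy\_max} are ours. As a cross-check, the exact upper expectation is also computed by direct optimisation over the interval credal set (a fractional-knapsack argument); for reachable intervals the two must coincide, because the interval upper probability is $2$-alternating.

```python
import numpy as np

def upper_prob(A, low, up):
    """Interval-model upper probability: min{ sum_A up, 1 - sum_{A^c} low }."""
    Ac = [x for x in range(len(low)) if x not in A]
    return min(sum(up[x] for x in A), 1 - sum(low[x] for x in Ac))

def choquet(h, low, up):
    """Choquet integral of h (Eq. choquet) on a finite state space."""
    vals = np.unique(h)                      # sorted distinct values of h
    total = vals[0]                          # min h
    for lo_v, hi_v in zip(vals, vals[1:]):
        A = [x for x in range(len(h)) if h[x] >= hi_v]
        total += (hi_v - lo_v) * upper_prob(A, low, up)
    return total

def greedy_max(h, low, up):
    """Exact max of sum m(x)h(x) over the credal set: start from the lower
    bounds and hand out the free mass to states with the largest h."""
    m, free = np.array(low, float), 1 - sum(low)
    for x in np.argsort(h)[::-1]:            # states in decreasing order of h
        give = min(free, up[x] - low[x])
        m[x] += give; free -= give
    return float(m @ h)

low, up = [0.1, 0.2, 0.3], [0.4, 0.5, 0.5]   # reachable probability intervals
rng = np.random.default_rng(0)
for _ in range(100):
    h = rng.normal(size=3)
    assert abs(choquet(h, low, up) - greedy_max(h, low, up)) < 1e-10
print(choquet(np.array([3.0, 1.0, 2.0]), low, up))   # 2.2 by either route
```

The same \texttt{choquet} routine, applied row by row with the conditional bounds $\lcond_k(\cdot\vert x_k)$ and $\ucond_k(\cdot\vert x_k)$, yields the maps $\utrans_kh$ of Eq.~\eqref{eq:choquet2alttoo} needed for the backwards recursion.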
\begin{example}[Close to a cycle]\label{ex:evosimplex}\upshape
Consider a three-state stationary imprecise Markov model with $\states=\{a,b,c\}$ and with marginal and transition probabilities given by probability intervals.
It follows from Eqs.~\eqref{eq:choquet2alt} and~\eqref{eq:choquet2alttoo} that the upper transition operator~$\utrans$ is fully determined by the lower and upper transition matrices:
\begin{align*}
\ltransmat\eqdef
\begin{bmatrix}
\lcond(a\vert a) & \lcond(b\vert a) & \lcond(c\vert a) \\
\lcond(a\vert b) & \lcond(b\vert b) & \lcond(c\vert b) \\
\lcond(a\vert c) & \lcond(b\vert c) & \lcond(c\vert c)
\end{bmatrix}
&=
\frac{1}{200}
\begin{bmatrix}
9 & 9 & 162 \\
144 & 18 & 18 \\
9 & 162 & 9
\end{bmatrix},\\
\utransmat\eqdef
\begin{bmatrix}
\ucond(a\vert a) & \ucond(b\vert a) & \ucond(c\vert a) \\
\ucond(a\vert b) & \ucond(b\vert b) & \ucond(c\vert b)\\
\ucond(a\vert c) & \ucond(b\vert c) & \ucond(c\vert c)
\end{bmatrix}
&=\frac{1}{200}
\begin{bmatrix}
19 & 19 & 172 \\
154 & 28 & 28 \\
19 & 172 & 19
\end{bmatrix},
\end{align*}
where the numerical values are particular to this example.
We have depicted the credal sets $\condmass(\cdot\vert a)$, $\condmass(\cdot\vert b)$ and $\condmass(\cdot\vert c)$ corresponding to this upper transition operator in Fig.~\ref{fig:near-cyclic-transition}.
\begin{figure}[htb]
\centering\footnotesize
\newcommand{\abcsimplex}{
(0,1,0) node[above] {$c$}
-- (1,0,0) node[below right] {$b$}
-- (0,0,1) node[below left] (aunit) {$a$}
-- cycle
}
\begin{tikzpicture}[scale=1.1,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(aunit)]
\draw[simplexbackground] \abcsimplex;
\draw[fill=red] (19/200,162/200,19/200) -- (9/200,172/200,19/200) -- (19/200,172/200,9/200) -- cycle;
\draw (0.7, 0, 0.7) node {$\condmass(\cdot\vert a)$};
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=1.1,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(aunit)]
\draw[simplexbackground] \abcsimplex;
\draw[fill=red] (28/200,18/200,154/200) -- (18/200,28/200,154/200) -- (28/200,28/200,144/200) -- cycle;
\draw (0.7, 0, 0.7) node {$\condmass(\cdot\vert b)$};
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=1.1,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(aunit)]
\draw[simplexbackground] \abcsimplex;
\draw[fill=red] (162/200,19/200,19/200) -- (172/200,19/200,9/200) -- (172/200,9/200,19/200) -- cycle;
\draw (0.7, 0, 0.7) node {$\condmass(\cdot\vert c)$};
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=.7,z={(-.86603,-.5)},x={(.86603,-.5)}]
\draw (0,1,0) node (c) {$c$} (1,0,0) node (b) {$b$} (0,0,1) node (a) {$a$};
\draw[->] (c) -- (b);
\draw[->] (b) -- (a);
\draw[->] (a) -- (c);
\end{tikzpicture}
\caption{The credal sets $\condmass(\cdot\vert a)$, $\condmass(\cdot\vert b)$ and $\condmass(\cdot\vert c)$ in the simplex $\simplex_{\{a,b,c\}}$, corresponding to the upper transition operator $\utrans$ in Example~\ref{ex:evosimplex}.}
\label{fig:near-cyclic-transition}
\end{figure}
Similarly, the initial upper expectation $\uex_1$ is completely determined by the row vectors $\ldismat_1\eqdef[\lmarg_1(a) \;\; \lmarg_1 (b) \;\; \lmarg_1(c)]$ and $\udismat_1\eqdef[\umarg_1(a) \;\; \umarg_1(b) \;\; \umarg_1(c)]$.
In Figure~\ref{fig:simplex_evolution}, we plot conservative approximations for the credal sets $\margmass_n$ corresponding to the upper expectation operators $\uex_n$.
\begin{figure}[htb]
\centering
\begin{tikzpicture}[scale=1.4,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(a)]
\fill[simplexbackground] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\draw[yellowbackground]
(0.1000, 0.1000, 0.8000) -- (0.2500, 0.1000, 0.6500)
-- (0.2500, 0.1500, 0.6000) -- (0.2000, 0.2000, 0.6000)
-- (0.0200, 0.2000, 0.7800) -- (0.0200, 0.1800, 0.8000) -- cycle;
\draw[bluebackground]
(0.6000, 0.0000, 0.4000) -- (0.9000, 0.0000, 0.1000)
-- (0.9000, 0.0500, 0.0500) -- (0.5500, 0.4000, 0.0500)
-- (0.4000, 0.4000, 0.2000) -- (0.4000, 0.2000, 0.4000) -- cycle;
\draw[redbackground]
(0.0000, 0.9000, 0.1000) -- (0.1000, 0.9000, 0.0000)
-- (0.0000, 1.0000, 0.0000) -- cycle;
\draw (0.7, 0, 0.7) node {\small$n=1$};
\draw[simplexborder] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=1.4,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(a)]
\fill[simplexbackground] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\draw[yellowbackground]
(0.1125, 0.5945, 0.2930) -- (0.1962, 0.5108, 0.2930)
-- (0.2300, 0.5108, 0.2592) -- (0.2300, 0.7016, 0.0684)
-- (0.2165, 0.7151, 0.0684) -- (0.1125, 0.7151, 0.1724) -- cycle;
\draw[bluebackground]
(0.0450, 0.1692, 0.7858) -- (0.1287, 0.0855, 0.7858)
-- (0.3650, 0.0855, 0.5495) -- (0.3650, 0.2750, 0.3600)
-- (0.2300, 0.4100, 0.3600) -- (0.0450, 0.4100, 0.5450) -- cycle;
\draw[redbackground]
(0.6525, 0.1355, 0.2120) -- (0.7025, 0.0855, 0.2120) --
(0.7700, 0.0855, 0.1445) -- (0.7700, 0.1445, 0.0855) --
(0.7025, 0.2120, 0.0855) -- (0.6525, 0.2120, 0.1355) -- cycle;
\draw (0.7, 0, 0.7) node {\small$n=2$};
\draw[simplexborder] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=1.4,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(a)]
\fill[simplexbackground] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\draw[yellowbackground]
(0.5680, 0.1295, 0.3025) -- (0.5777, 0.1295, 0.2928)
-- (0.5777, 0.2644, 0.1579) -- (0.4977, 0.3444, 0.1579)
-- (0.3898, 0.3444, 0.2659) -- (0.3898, 0.3078, 0.3025) -- cycle;
\draw[bluebackground]
(0.1027, 0.5111, 0.3862) -- (0.2750, 0.3389, 0.3862)
-- (0.3718, 0.3389, 0.2894) -- (0.3718, 0.5411, 0.0871)
-- (0.2107, 0.7022, 0.0871) -- (0.1027, 0.7022, 0.1951) -- cycle;
\draw[redbackground]
(0.1897, 0.1199, 0.6904) -- (0.2381, 0.1199, 0.6420)
-- (0.2381, 0.2116, 0.5503) -- (0.1865, 0.2633, 0.5503)
-- (0.1027, 0.2633, 0.6340) -- (0.1027, 0.2069, 0.6904) -- cycle;
\draw (0.7, 0, 0.7) node {\small$n=3$};
\draw[simplexborder] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=1.4,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(a)]
\fill[simplexbackground] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\draw[yellowbackground]
(0.2719, 0.1813, 0.5469) -- (0.3275, 0.1813, 0.4912)
-- (0.3275, 0.3141, 0.3585) -- (0.3070, 0.3345, 0.3585)
-- (0.1324, 0.3345, 0.5331) -- (0.1324, 0.3207, 0.5469) -- cycle;
\draw[bluebackground]
(0.4578, 0.1433, 0.3989) -- (0.5690, 0.1433, 0.2878)
-- (0.5690, 0.2765, 0.1546) -- (0.4375, 0.4080, 0.1546)
-- (0.2737, 0.4080, 0.3183) -- (0.2737, 0.3274, 0.3989) -- cycle;
\draw[redbackground]
(0.2357, 0.4778, 0.2865) -- (0.2727, 0.4778, 0.2495)
-- (0.2727, 0.5921, 0.1352) -- (0.2340, 0.6308, 0.1352)
-- (0.1260, 0.6308, 0.2433) -- (0.1260, 0.5875, 0.2865) -- cycle;
\draw (0.7, 0, 0.7) node {\small$n=4$};
\draw[simplexborder] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=1.4,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(a)]
\fill[simplexbackground] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\draw[yellowbackground]
(0.3080, 0.3335, 0.3585) -- (0.3208, 0.3335, 0.3457)
-- (0.3208, 0.5183, 0.1609) -- (0.3141, 0.5250, 0.1609)
-- (0.1674, 0.5250, 0.3076) -- (0.1674, 0.4741, 0.3585) -- cycle;
\draw[bluebackground]
(0.2823, 0.1761, 0.5416) -- (0.3704, 0.1761, 0.4535)
-- (0.3704, 0.3603, 0.2693) -- (0.3187, 0.4120, 0.2693)
-- (0.1417, 0.4120, 0.4463) -- (0.1417, 0.3167, 0.5416) -- cycle;
\draw[redbackground]
(0.4956, 0.1752, 0.3292) -- (0.5208, 0.1752, 0.3040)
-- (0.5208, 0.3112, 0.1680) -- (0.4941, 0.3379, 0.1680)
-- (0.3675, 0.3379, 0.2945) -- (0.3675, 0.3033, 0.3292) -- cycle;
\draw (0.7, 0, 0.7) node {\small$n=5$};
\draw[simplexborder] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=1.4,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(a)]
\fill[simplexbackground] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\draw[yellowbackground]
(0.4457, 0.1917, 0.3626) -- (0.4494, 0.1917, 0.3589)
-- (0.4494, 0.3560, 0.1946) -- (0.4181, 0.3873, 0.1946)
-- (0.2701, 0.3873, 0.3426) -- (0.2701, 0.3673, 0.3626) -- cycle;
\draw[bluebackground]
(0.3371, 0.2696, 0.3933) -- (0.3731, 0.2696, 0.3573)
-- (0.3731, 0.4590, 0.1679) -- (0.3118, 0.5203, 0.1679)
-- (0.1639, 0.5203, 0.3158) -- (0.1639, 0.4428, 0.3933) -- cycle;
\draw[redbackground]
(0.3051, 0.1887, 0.5062) -- (0.3231, 0.1887, 0.4882) --
(0.3231, 0.3369, 0.3400) -- (0.3023, 0.3577, 0.3400)
-- (0.1633, 0.3577, 0.4790) -- (0.1633, 0.3305, 0.5062) -- cycle;
\draw (0.7, 0, 0.7) node {\small$n=6$};
\draw[simplexborder] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=1.4,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(a)]
\fill[simplexbackground] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\draw[yellowbackground]
(0.3493, 0.2677, 0.3831) -- (0.3542, 0.2677, 0.3781)
-- (0.3542, 0.4503, 0.1954) -- (0.3500, 0.4545, 0.1954)
-- (0.1876, 0.4545, 0.3579) -- (0.1876, 0.4293, 0.3831) -- cycle;
\draw[bluebackground]
(0.3395, 0.2094, 0.4511) -- (0.3725, 0.2094, 0.4181)
-- (0.3725, 0.3918, 0.2357) -- (0.3516, 0.4127, 0.2357)
-- (0.1779, 0.4127, 0.4095) -- (0.1779, 0.3711, 0.4511) -- cycle;
\draw[redbackground]
(0.4188, 0.2088, 0.3724) -- (0.4283, 0.2088, 0.3629)
-- (0.4283, 0.3734, 0.1983) -- (0.4167, 0.3850, 0.1983)
-- (0.2618, 0.3850, 0.3532) -- (0.2618, 0.3658, 0.3724) -- cycle;
\draw (0.7, 0, 0.7) node {\small$n=8$};
\draw[simplexborder] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=1.4,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(a)]
\fill[simplexbackground] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\draw[yellowbackground]
(0.3645, 0.2432, 0.3922) -- (0.3666, 0.2432, 0.3902)
-- (0.3666, 0.4251, 0.2083) -- (0.3633, 0.4284, 0.2083)
-- (0.1951, 0.4284, 0.3765) -- (0.1951, 0.4127, 0.3922) -- cycle;
\draw[bluebackground]
(0.3608, 0.2217, 0.4175) -- (0.3733, 0.2217, 0.4050)
-- (0.3733, 0.4034, 0.2233) -- (0.3638, 0.4129, 0.2233)
-- (0.1914, 0.4129, 0.3957) -- (0.1914, 0.3911, 0.4175) -- cycle;
\draw[redbackground]
(0.3903, 0.2213, 0.3884) -- (0.3940, 0.2213, 0.3847)
-- (0.3940, 0.3965, 0.2095) -- (0.3880, 0.4025, 0.2095)
-- (0.2226, 0.4025, 0.3749) -- (0.2226, 0.3890, 0.3884) -- cycle;
\draw (0.7, 0, 0.7) node {\small$n=11$};
\draw[simplexborder] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=1.4,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(a)]
\fill[simplexbackground] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\draw[yellowbackground]
(0.3732, 0.2287, 0.3981) -- (0.3736, 0.2287, 0.3977)
-- (0.3736, 0.4100, 0.2164) -- (0.3708, 0.4128, 0.2164)
-- (0.1993, 0.4128, 0.3879) -- (0.1993, 0.4026, 0.3981) -- cycle;
\draw[bluebackground]
(0.3737, 0.2286, 0.3977) -- (0.3743, 0.2286, 0.3971)
-- (0.3743, 0.4099, 0.2158) -- (0.3712, 0.4130, 0.2158)
-- (0.1996, 0.4130, 0.3874) -- (0.1996, 0.4026, 0.3977) -- cycle;
\draw[redbackground]
(0.3731, 0.2295, 0.3974) -- (0.3735, 0.2295, 0.3970)
-- (0.3735, 0.4107, 0.2158) -- (0.3707, 0.4136, 0.2158)
-- (0.1993, 0.4136, 0.3872) -- (0.1993, 0.4033, 0.3974) -- cycle;
\draw (0.7, 0, 0.7) node {\small$n=22$};
\draw[simplexborder] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\end{tikzpicture}
\hfill
\begin{tikzpicture}[scale=1.4,z={(-.86603,-.5)},x={(.86603,-.5)},baseline=(a)]
\fill[simplexbackground] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\draw[yellowbackground]
(0.3735, 0.2288, 0.3977) -- (0.3738, 0.2288, 0.3974)
-- (0.3738, 0.4102, 0.2160) -- (0.3710, 0.4130, 0.2160)
-- (0.1994, 0.4130, 0.3876) -- (0.1994, 0.4028, 0.3977) -- cycle;
\draw[bluebackground]
(0.3735, 0.2288, 0.3977) -- (0.3738, 0.2288, 0.3974)
-- (0.3738, 0.4102, 0.2160) -- (0.3710, 0.4130, 0.2160)
-- (0.1994, 0.4130, 0.3876) -- (0.1994, 0.4028, 0.3977) -- cycle;
\draw[redbackground]
(0.3735, 0.2288, 0.3977) -- (0.3738, 0.2288, 0.3974)
-- (0.3738, 0.4102, 0.2160) -- (0.3710, 0.4130, 0.2160)
-- (0.1994, 0.4130, 0.3876) -- (0.1994, 0.4028, 0.3977) -- cycle;
\draw (0.7, 0, 0.7) node {\small$n=1000$};
\draw[simplexborder] (0,1,0) -- (1,0,0) -- (0,0,1) -- cycle;
\end{tikzpicture}
\caption{Evolution in the simplex $\simplex_{\{a,b,c\}}$ of the credal sets $\margmass_n$ for the near-cyclic transition operator from Example~\ref{ex:evosimplex} for three different choices of the initial credal set $\margmass_1$.}
\label{fig:simplex_evolution}
\end{figure}
\noindent Each approximation is based on the constraints that can be found by calculating $\lex_1(\ltrans^{n-1}\ind{\{x\}})$ and $\uex_1(\utrans^{n-1}\ind{\{x\}})$ using the backwards recursion method, for~$x=a,b,c$.
The~$\margmass_n$ evolve clockwise through the simplex, which is not all that surprising as the lower and upper transition matrices are quite `close' to the precise \emph{cyclic} transition matrix
\begin{equation*}
\transmat\eqdef
\begin{bmatrix}
\cond(a\vert a) & \cond(b\vert a) & \cond(c\vert a) \\
\cond(a\vert b) & \cond(b\vert b) & \cond(c\vert b) \\
\cond(a\vert c) & \cond(b\vert c) & \cond(c\vert c)
\end{bmatrix}
=
\begin{bmatrix}
0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0
\end{bmatrix},
\end{equation*}
as is also evident from Fig.~\ref{fig:near-cyclic-transition}.
After a while, the $\margmass_n$ converge to a limit that is independent of the initial credal set $\margmass_1$, as can be predicted from the regularity of the upper transition operator.~$\blacklozenge$
\end{example}
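The convergence in Figure~\ref{fig:simplex_evolution} can be mimicked by iterating a row-wise Choquet integration, as in Eqs.~\eqref{eq:choquet2alt} and~\eqref{eq:choquet2alttoo}. The sketch below (our implementation, not optimised) iterates $\utrans$ on the indicator of $\{a\}$; since the chain is regular, $\utrans^n\ind{\{a\}}$ converges to a constant map whose value is the limit upper probability of $\{a\}$.

```python
import numpy as np

# Lower and upper transition matrices of the near-cyclic example.
L = np.array([[  9,   9, 162],
              [144,  18,  18],
              [  9, 162,   9]]) / 200
U = np.array([[ 19,  19, 172],
              [154,  28,  28],
              [ 19, 172,  19]]) / 200

def utrans(h):
    """Row-wise Choquet integration of the interval transition model."""
    vals = np.unique(h)                  # sorted distinct values of h
    out = np.empty(3)
    for x in range(3):
        t = vals[0]
        for lo, hi in zip(vals, vals[1:]):
            A = h >= hi                  # event {h >= hi} as a boolean mask
            t += (hi - lo) * min(U[x][A].sum(), 1 - L[x][~A].sum())
        out[x] = t
    return out

g = np.array([1.0, 0.0, 0.0])            # indicator of {a}
for _ in range(200):
    g = utrans(g)
print(g)   # a (nearly) constant vector: the limit upper probability of {a}
```

Repeating this for the indicators of $\{b\}$ and $\{c\}$, and analogously with the lower transition operator, yields the box constraints used for the conservative approximations in the figure.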
\par
A biological application of imprecise Markov models can be found in \citeauthor{dhaenens2007}'s Master's thesis~\citep{dhaenens2007}. He used the sensitivity analysis interpretation of imprecise Markov models to investigate the legitimacy of using \textsc{pam} matrices in amino acid and \textsc{dna} sequence alignments. Roughly speaking, \textsc{pam} (point accepted mutation) matrices describe the chance that one amino acid mutates into another amino acid over a given evolutionary time span. However, the actual values of the \textsc{pam} matrix components are based on an estimation using an evolutionary model (i.e., amino acid substitutions are actually counted on the branches of a phylogenetic tree), hence the need to perform a sensitivity analysis. \Citet{dhaenens2007} observed in simulations that the imprecision due to the estimation did not blow up even after a large number of steps; he concluded that using \textsc{pam} matrices over large evolutionary timescales is still reasonable.
\subsection{A \texorpdfstring{$k$}{\itshape k}-out-of-\texorpdfstring{$n$}{\itshape n}:F system with uncertain reliabilities}\label{sec:k-out-of-n}
Reliability theory is one field where Markov chains are used extensively.
It concerns itself with questions of the type: What is the probability of failure of a system with~$n$ components?
In the simplest case, where each component is either working or not working, answering this question would involve assessing the failure probabilities of the $2^n$ possible configurations of component states.
However, as shown by \citet{koutras1996}, a great variety of reliability structures can be evaluated quite efficiently using their so-called embedded Markov chain. Amongst these are precisely those systems that fail as soon as any $k$ out of the $n$ components fail, also known as $k$-out-of-$n$:F systems.
\par
For such systems, the embedded Markov chain is constructed as follows. Its state space $\states$ is given by $\{0,1,2,\dots,k\}$, where each number represents the number of components that fail in the system. System failure is therefore represented by the event $\{k\}$, and a fully functioning system by the event $\{0\}$. \Citet{koutras1996} shows that the failure probability (or unreliability) $F_n$ and the reliability $R_n=1-F_n$ of a Markov chain embedded system are determined by the following expectation formula:
\begin{equation}
F_n
\eqdef\ex_{n+1}(\ind{\{k\}})
=\ex_1(\trans_1\trans_2\ldots\trans_n\ind{\{k\}}),
\end{equation}
where the initial distribution $\ex_1$ represents a system in perfect working condition, so $\ex_1(h)=h(0)$ for all real-valued maps $h$ on $\states$. The transition matrix $\transmat_i$ corresponding to the transition operator $\trans_i$ is fully determined by the reliability $p_i$ of the $i$-th component:
\begin{equation}
\transmat_i=
\begin{bmatrix}
p_i&1-p_i&0&\dots&0&0\\
0&p_i&1-p_i&\dots&0&0\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&0&\dots&p_i&1-p_i\\
0&0&0&\dots&0&1
\end{bmatrix},
\end{equation}
where $(\transmat_i)_{\ell,m}=\trans_i\ind{\{m\}}(\ell)$ and $\ell,m\in\{0,1,\dots,k\}$.
\par
Precise assessments of the individual reliabilities of the components $p_i$ are often difficult to come by, as, for example, they might depend on climatological parameters, age, or maybe even on the failure of other (external) components. However, experts might still be able to give conservative bounds on the individual reliabilities $p_i$. In this case, the embedded Markov chain becomes imprecise, but the corresponding bounds on the reliability and unreliability can still be computed by applying our sensitivity analysis formulae derived above:
\begin{equation}
\overline{F}_n
=1-\underline{R}_n
=\ex_1(\utrans_1\utrans_2\ldots\utrans_n\ind{\{k\}})
\quad\text{ and }\quad
\underline{F}_n
=1-\overline{R}_n
=\ex_1(\ltrans_1\ltrans_2\ldots\ltrans_n\ind{\{k\}}).
\end{equation}
When this embedded Markov chain is stationary (meaning that the uncertainty models for the reliability of all components are assumed to be the same), the failure probability bounds are simply computed by $\overline{F}_n=\ex_1(\utrans^n\ind{\{k\}})$ and $\underline{F}_n=\ex_1(\ltrans^n\ind{\{k\}})$.
\par
To give a very simple example, let us assume that an expert provides the same range $[\lrelty,\urelty]$ for all component reliabilities $p_i$, where $0\leq\lrelty\leq\urelty\leq1$.
This leads to a special case of the models considered in Section~\ref{sec:lower-upper-mass}, and if we apply the formulae derived there, we get, after some manipulations, that
\begin{equation}
\utrans h(\ell)
=
\begin{cases}
\lrelty h(\ell)+(1-\urelty)h(\ell+1)+(\urelty-\lrelty)\max\{h(\ell),h(\ell+1)\}
&\text{if $\ell=0,1,\dots,k-1$}\\
h(k)&\text{if $\ell=k$}
\end{cases}
\end{equation}
for all real-valued maps $h$ on $\states$.
If $h$ is non-decreasing in the sense that $h(0)\leq h(1)\leq\dots\leq h(k-1)\leq h(k)$, then so is $\utrans h$, and it therefore follows that
\begin{align}
\overline{F}_n
&=
\begin{bmatrix}
1&0&\dots&0&0
\end{bmatrix}
\begin{bmatrix}
\lrelty&1-\lrelty&0&\dots&0&0\\
0&\lrelty&1-\lrelty&\dots&0&0\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
0&0&0&\dots&\lrelty&1-\lrelty\\
0&0&0&\dots&0&1
\end{bmatrix}^n
\begin{bmatrix}
0\\0\\\vdots\\0\\1
\end{bmatrix}\\
&=\sum_{\ell=k}^n\binom{n}{\ell}\lrelty^{n-\ell}(1-\lrelty)^{\ell}
=1-\sum_{\ell=0}^{k-1}\binom{n}{\ell}\lrelty^{n-\ell}(1-\lrelty)^{\ell},
\end{align}
and there is a completely similar expression for $\underline{F}_n$ where $\urelty$ is substituted for $\lrelty$.
See Fig.~\ref{fig:unrel-rel} for a graphical illustration of these expressions.
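Under the assumptions above, the upper failure probability can be computed both by iterating the upper transition operator and from the binomial closed form. The short sketch below, with illustrative values $k=3$, $n=10$ and reliability bounds of our own choosing, confirms that the two routes agree.

```python
import numpy as np
from math import comb

def utrans(h, r_low, r_up):
    """Upper transition operator of the embedded k-out-of-n:F chain (as in the text)."""
    g = np.empty_like(h)
    g[:-1] = (r_low * h[:-1] + (1 - r_up) * h[1:]
              + (r_up - r_low) * np.maximum(h[:-1], h[1:]))
    g[-1] = h[-1]                        # state k (system failure) is absorbing
    return g

k, n = 3, 10
r_low, r_up = 0.9, 0.95                  # expert bounds on each component reliability

h = np.zeros(k + 1); h[k] = 1.0          # indicator of system failure {k}
for _ in range(n):
    h = utrans(h, r_low, r_up)
F_upper = h[0]                           # ex_1 evaluates the result in state 0

# Closed form: upper failure probability = P(Binomial(n, 1 - r_low) >= k).
F_binom = sum(comb(n, l) * r_low**(n - l) * (1 - r_low)**l for l in range(k, n + 1))
print(F_upper, F_binom)
```

Since the indicator of $\{k\}$ is non-decreasing, each application of \texttt{utrans} reduces to the precise operator with reliability $\lrelty$, which is exactly why the binomial expression above holds; substituting $\urelty$ for $\lrelty$ gives $\underline{F}_n$ in the same way.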
\par
If $0<\lrelty\leq\urelty\leq1$, then this stationary imprecise Markov chain is regularly absorbing with regular top class $\{k\}$ (under~$\uaccess{}{}$), and $\lex_\infty(h)=\uex_\infty(h)=h(k)$ for all real-valued maps $h$ on $\states$. Nevertheless, as soon as $\urelty=1$, \citeauthor{hartfiel1998}'s product scrambling condition is no longer satisfied, as the identity matrix will then belong to all $\transmats_{\utrans^n}$.
\par
The chain ceases to be regularly absorbing if $\lrelty=0$ and $\urelty=1$, and in that case it is easy to see that $\utrans^{k+n}h(m)=\max_{\ell=m}^kh(\ell)$ for all $n\geq0$ and all real-valued maps $h$ on $\states$, and therefore the limit upper expectation $\uex_\infty$ will depend on the initial upper expectation $\uex_1$. For the particular initial expectation $\ex_1$ we use in this example, we see that $\uex_\infty(h)=\max h$.
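The vacuous case $\lrelty=0$ and $\urelty=1$ is easy to check numerically. The sketch below (our own illustration, with an arbitrary map $h$ on five states) iterates the upper transition operator from the display above and recovers $\utrans^{k+n}h(m)=\max_{\ell=m}^kh(\ell)$:

```python
def T_upper(h, r_low, r_up):
    """Upper transition operator from the earlier display (states 0..k)."""
    k = len(h) - 1
    out = [r_low * h[l] + (1 - r_up) * h[l + 1]
           + (r_up - r_low) * max(h[l], h[l + 1]) for l in range(k)]
    out.append(h[k])  # state k is absorbing: h(k) is left unchanged
    return out

h = [3.0, 1.0, 4.0, 1.0, 5.0]   # arbitrary map on states 0..4, so k = 4
for _ in range(len(h)):          # iterate at least k times in the vacuous case
    h = T_upper(h, r_low=0.0, r_up=1.0)
# each h[m] becomes max of the original tail h[m:], here [5.0]*5
print(h)
```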
\begin{figure}[hbt]
\centering\small
\begin{tikzpicture}
\foreach \r/\oneminr/\rshift/\rscale in {0.9/0.1/0/1,0.95/0.05/4.3cm/2,0.975/0.025/8.6cm/4} {
\begin{scope}[xshift=\rshift,yscale=4,xscale=(30*\rscale)]
\node[below=2ex] at ({.5*(1-\r)},-.1) {$r=\r$};
\draw[->] (0,-.1) -- ({1.2*(1-\r)},-.1) node[below] {$\varepsilon$};
\draw[->] ({1-\r},-.1) -- ({1-\r},1.15) node[right] {$\overline{F}_n$, $\underline{F}_n$};
\foreach \n/\ncolor/\npos in {10/red/below right,20/green/right,40/blue/right} {
\ifthenelse{\equal{\r\n}{0.97510}}{}{
\draw[thick,smooth,domain=0:(1-\r),\ncolor] plot[id=u-\r-\n]
function{
1-(\r-x)**\n-\n*(\r-x)**(\n-1)*(1-(\r-x))
-.5*\n*(\n-1)*(\r-x)**(\n-2)*(1-(\r-x))**2
};
\draw[thick,smooth,domain=0:(1-\r),\ncolor,dashed] plot[id=l-\r-\n]
function{
1-(\r+x)**\n-\n*(\r+x)**(\n-1)*(1-(\r+x))
-.5*\n*(\n-1)*(\r+x)**(\n-2)*(1-(\r+x))**2
};
\node[\npos] at
(0,{1-(\r)^\n-\n*(\r)^(\n-1)*(1-\r)-.5*\n*(\n-1)*(\r)^(\n-2)*(1-\r)^2}) {$n=\n$};
}
}
\node[fill,inner sep=0pt,minimum height=1ex,minimum width=.4pt,label=below:$0$]
at (0,-.1) {};
\node[fill,inner sep=0pt,minimum height=1ex,minimum width=.4pt,label=below:$\oneminr$]
at ({1-\r},-.1) {};
\foreach \val in {0.0,0.2,0.4,0.6,0.8,1.0}
\node[fill,inner sep=0pt,minimum height=.4pt,minimum width=1ex,label=right:$\val$]
at ({1-\r},\val) {};
\end{scope}
}
\end{tikzpicture}
\caption{
Upper failure probability ($\overline{F}_n$, full line) and lower failure probability ($\underline{F}_n$, dashed line) for a $3$-out-of-$n$:F system, for different numbers of components~$n$ as a function of the imprecision $\varepsilon\eqdef(\urelty-\lrelty)/2$ of the component reliability, for three different values of $\relty\eqdef(\urelty+\lrelty)/2$.
As can be expected, the failure bounds widen with increasing imprecision, decrease with increasing reliability (characterised by $\relty$), and increase for a greater number of components~$n$.
}
\label{fig:unrel-rel}
\end{figure}
\subsection{General models}\label{sec:general-models}
When the (conditional) upper expectation operators that define an imprecise Markov chain do not fall into any of the special cases we discussed and illustrated above, recourse must be taken to more general calculation rules.
\par
Let us consider the typical case of a credal set~$\mass$ that is specified by giving, for a finite number of real-valued maps~$f$ collected in the set $\domain\subset\allgambles(\states)$, consistent upper bounds~$U(f)$ on the expectations~$\ex(f)$.
Then the upper expectation for any map $h\in\allgambles(\states)$ can be found by solving the following linear program \citep[see, e.g.,][Section~3.1.3]{walley1991}:
\begin{equation}\label{eq:lin-prog}
\begin{aligned}
\uex_\mass(h)=\min\bigg[\mu+\smashoperator{\sum_{f\in\domain}}\lambda_fU(f)\bigg]
\quad\text{subject to }\quad
&h\leq\mu+\smashoperator{\sum_{f\in\domain}}\lambda_f f\\
\text{where }\quad
&\text{$\lambda_f\geq0$ and $\mu\in\reals$.}
\end{aligned}
\end{equation}
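As an illustration (our own sketch, using SciPy's generic LP solver and an arbitrary three-state example), the linear program~\eqref{eq:lin-prog} can be solved as follows:

```python
import numpy as np
from scipy.optimize import linprog

def upper_expectation(h, D, U):
    """Natural-extension LP: h and each f in D are arrays over the states.
    minimise  mu + sum_f lambda_f * U(f)
    s.t.      mu + sum_f lambda_f * f(x) >= h(x)  for every state x,
              lambda_f >= 0, mu free."""
    m = len(D)
    c = np.concatenate(([1.0], U))  # objective coefficients for (mu, lambdas)
    # each state x gives one >= constraint, rewritten as A_ub @ vars <= b_ub
    A_ub = np.column_stack([-np.ones(len(h))] + [-np.asarray(f) for f in D])
    b_ub = -np.asarray(h)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * m)
    return res.fun

# three states; constraints E(1_{0}) <= 0.5 and E(1_{1}) <= 0.6
D = [np.array([1., 0., 0.]), np.array([0., 1., 0.])]
U = [0.5, 0.6]
print(upper_expectation([1., 0., 0.], D, U))  # optimal value: 0.5
```

By LP duality, the optimal value coincides with the maximum of $\ex_q(h)$ over the credal set determined by the constraints, here $0.5$.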
\par
As the number of upper expectations to compute, and thus the number of linear programs to solve, increases, it will eventually become profitable to take a second (dual) approach.
Any credal set~$\mass$ specified by a finite number of constraints (bounds on expectations) is a convex polytope, i.e., has a finite set~$\ext\mass$ of extreme points.
Vertex enumeration algorithms such as the one by \citet{avis1997} can be used to obtain this set of extreme points from the given set of constraints.
We can then use a practical version of Eq.~\eqref{eq:mass-to-luex} to find the corresponding upper expectations, namely \citep[see][Section~3.1.3]{walley1991}:
\begin{equation}\label{eq:lowens-extver}
\uex_\mass(h)\eqdef\max\set{\ex_q(h)}{q\in\ext\mass}.
\end{equation}
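For instance (a sketch of ours, taking the vacuous credal set on three states as a toy example, whose extreme points are the degenerate mass functions):

```python
import numpy as np

def upper_expectation_ext(h, ext_points):
    """Maximum of E_q(h) over the extreme points of the credal set."""
    return max(float(np.dot(q, h)) for q in ext_points)

# vacuous credal set on 3 states: extreme points are the degenerate pmfs
ext = [np.eye(3)[i] for i in range(3)]
print(upper_expectation_ext([1., 4., 2.], ext))  # -> 4.0
```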
\par
We can now consider imprecise Markov chains where the local models, attached to the non-terminal situations in the tree, are of this type. The general backwards recursion formulae we have given in Section~\ref{sec:sensitivity-analysis} can then be used in combination with the formulae of the type~\eqref{eq:lin-prog} and~\eqref{eq:lowens-extver} for the calculation of all conditional and joint upper and lower expectations in the tree.
\section{Conclusions}
To conclude, we (i)~reflect on what type of convergence results could be obtained for imprecise Markov chains that are not regularly absorbing, (ii)~pay attention to the important issue of interpretation of imprecise-probability models, and (iii)~compare \citeauthor{hartfiel1998}'s approach~\citep{hartfiel1998} to our own regarding their practical applicability to deal with expectation problems.
\par
It is a reasonably weak requirement for a stationary imprecise Markov chain with upper transition operator~$\utrans$ to be regularly absorbing, but we have seen that it is strong enough to guarantee that the upper expectation for the state at time~$n$ converges to a uniquely $\utrans$-invariant upper expectation $\uex_\infty$, regardless of the initial upper expectation $\uex_1$.
\par
Even when an imprecise Markov chain is not regularly absorbing, it is not so hard to see that its upper transition operator~$\utrans$ is still \emph{non-expansive} under the supremum norm given for every ${h\in\allgambles(\states)}$ by $\supnorm{h}\eqdef\max\abs{h}$, as
\begin{equation}
\supnorm{\utrans g-\utrans h}
\leq\supnorm{\utrans(g-h)}
\leq\supnorm{g-h}.
\end{equation}
Moreover, the sequence $\supnorm{\utrans^nh}$ is bounded because $\supnorm{\utrans^nh}\leq\supnorm{h}$.
It then follows from non-linear Perron--Frobenius theory \citep{sine1990,nussbaum1998} that the sequence $\utrans^nh$ has a periodic limit cycle.
More precisely, there is a $\xi_h\in\allgambles(\states)$ such that $\utrans^{p_h}\xi_h=\xi_h$, i.e., $\xi_h$ is a \newconcept{periodic point} of~$\utrans$ with (smallest) \newconcept{period}~$p_h$, and such that $\utrans^{np_h}h\to\xi_h$ (point-wise) as $n\to\infty$.
It would be a very interesting topic for further research to study the nature of the periods and periodic points of upper transition operators.
\par
In our discussions, for instance in Section~\ref{sec:sensitivity-analysis}, we have consistently used the sensitivity analysis interpretation of imprecise-probability models such as upper expectations. Upper and lower expectations can also be given another, so-called \emph{behavioural} interpretation, in terms of some subject's dispositions towards accepting risky transactions.
This is for instance \citeauthor{walley1991}'s \citeyearpar{walley1991} preferred approach.
The results we have derived here remain valid on that alternative interpretation, and the concatenation formulae~\eqref{eq:backpropagation-upper-1} and~\eqref{eq:backpropagation-upper-2} can then be shown to be special cases of the so-called \emph{marginal extension} procedure \citep{miranda2006b}, which provides the most conservative coherent (i.e., rational) inferences from the local predictive models $\utrans_k$ to general lower and upper expectations.
In another paper~\citep{cooman2007d}, we give more details about how to approach a process theory using imprecise probabilities on a behavioural interpretation.
\par
On a related matter: the imprecise Markov chains we are considering here can be seen as special \emph{credal networks} \cite{cozman2000,cozman2005,moral2005b}: the generalisation of Bayesian networks to the case where the local models, associated with the nodes of the network, are credal sets. The corresponding `independence' notion that should then be used for the interpretation of the graphical structure of the network is \citeauthor{walley1991}'s \emph{epistemic irrelevance} \citep[Chapter~9]{walley1991}. Interestingly, \citeauthor{hartfiel1991}'s Markov set-chain approach corresponds to special credal nets where the independence concept involved is a different one: that of \emph{strong independence} \cite{cozman2000}. Nevertheless, both approaches yield the same results if we restrict ourselves to calculating the marginal upper expectations for variables $X(n)$, as we have proved in Proposition~\ref{prop:hartfiel-and-us}. But in any case, for the actual calculation of expectations, the set of transition matrices approach suffers from a combinatorial explosion of computational complexity that can be avoided using our upper transition operator approach.
\section*{Acknowledgements}
The authors wish to thank Damjan \v{S}kulj for inspirational discussion and two anonymous referees for helpful suggestions and pointers to the literature.
\par
This paper presents research results of the Belgian Network DYSCO (Dynamical Systems, Control, and Optimisation), funded by the Interuniversity Attraction Poles Programme, initiated by the Belgian State, Science Policy Office.
The scientific responsibility rests with its authors.
\section{Introduction}
Cloud computing, when combined with the Internet of Things, has enabled many new services such as Sensing as a Service (SENaaS), Sensor Data as a Service (SDaaS) and Sensor Trigger as a Service (STaaS). The applications that use such services may be of a predictive, data mining, machine learning, or forecasting nature \cite{Jain2011, Theiler1997, Estlick2001, Roshanisefat2018srclock, kamali2018lutlock}, many of which need to categorize and cluster very large and high-dimensional input datasets as a part of their larger computational flow. The ability to classify and cluster a desired set of data to see trends, similarities, correlations, and trajectories is an essential part of building knowledge from data. However, as the size and dimensionality of the input data increase, the run-time of such clustering algorithms is expected to grow superlinearly, making clustering a major challenge when dealing with BigData \cite{Sayadi2018cf, Sayadi2017iccd}.
The goal of clustering is to classify the data according to a specific metric such that objects within a cluster/group are similar, in terms of having a feature, fitting a description, or displaying a characteristic, while being different from the members of other groups.
There are different categories for clustering. A clustering algorithm may be supervised (hierarchical) or unsupervised (un-nested). It may be exclusive or fuzzy, and could be complete or partial. Depending on the type of clustering algorithms used, the resulting clusters may be well separated, prototype-based (centroid-based), graph-based, or density-based \cite{Tan2013}.
K-means is one of the simplest and yet most used \cite{Tan2013, Wu2012} centroid-based unsupervised clustering algorithms. Although categorized as a simple and low-complexity classification function, its applicability to large datasets (BigData in general) depends on the scalability of its software (SW) implementation with respect to the available hardware (HW). One of the most promising HW platforms leveraged for achieving considerable speedup in big data applications is the FPGA. Recent FPGAs are equipped with hundreds of thousands of fine-grained logic cells and coarse-grained communication resources, which provide huge parallelism with negligible communication cost. FPGA solutions enable higher parallelism than clusters of CPUs or GPUs at a much lower cost, but with greater mapping overhead. However, if the size of the data is large, such that the mapping-time overhead is small or negligible compared to the run-time of the targeted application (such as BigData clustering), FPGA-based solutions are preferred.
When it comes to comparison with ASIC accelerators, FPGA solutions are not as efficient in terms of power and performance. However, they can be re-purposed from application to application, whereas an ASIC accelerator maintains a fixed behaviour. Hence, FPGAs are a better solution for general-purpose computing environments, such as cloud data-centers, where the applications are dynamic and a priori unknown \cite{Sayadi2018aspdac, Sayadi2018dac}. In such cases, in which dynamic processing is preferred, the high cost of custom ASIC accelerators is not well justified, and the re-configurability and adaptability of FPGA-accelerated solutions are greatly desired.
In order to improve the usability of FPGA solutions in dealing with semi-parallel applications, FPGAs are equipped with mid- to high-performance multi-core processors (e.g. ARM Cortex A9, A12, A15). The existence of multiple mid- to high-performance cores on the same die as the FPGA improves the efficiency of HW/SW co-design \cite{kamali2018ducnoc, kamali2016adapnoc, kamali2018swift, zynq7000} and provides greater flexibility in using FPGAs in data centers as re-configurable, yet powerful hardware accelerators \cite{Freund2016, Wilson2014}.
The execution time of k-clustering algorithms can be improved by means of both SW and HW. On the SW side, one could use (1) a binary kd-tree structure for dividing search-space members into ``boxes'' \cite{Kanungo2002}, and (2) the triangle inequality for avoiding redundant distance calculations \cite{Elkan2003}. On the HW side, more capable or additional computing resources reduce the computation time; hardware-based architectures, like FPGA-based implementations, accelerate the algorithm considerably. For example, by directly mapping a k-means clustering algorithm to a capable FPGA, a considerable reduction in execution time, compared to a sole SW-based solution, is expected. This speed-up is the result of throwing additional hardware at the parallel k-means clustering algorithm. However, such direct and non-optimized mapping of software intended for CPUs to FPGAs does not best utilize all FPGA resources. Hence, to maximize the FPGA utilization, and to speed up non-parallel portions of the code, a more precise SW/HW co-design is required \cite{Sayadi2017igsc, Roshanisefat2018bench}.
In this paper, we demonstrate that using a HW/SW co-design architecture together with a software-based technique, i.e., the kd-tree clustering algorithm, considerably reduces the execution time of the k-means algorithm. For this purpose, a mapping and an aggregation function have been implemented on top of the kd-tree clustering algorithm. This approach allows us to divide the work across the hardware, i.e. the logic and multiple cores in an FPGA, to gain the maximum achievable speedup by utilizing all available resources. Additionally, we demonstrate that having a custom high-performance DMA for transmitting data between the host and the FPGA via the PCI Express (PCIe) interface significantly reduces the data-transmission overhead in the execution time, and provides better speedup in comparison with conventional software-based solutions.
The rest of the paper is organized as follows. The k-means theory and algorithm are described in Section 2. Section 3 briefly illustrates structure of binary kd-tree for filtering algorithm. The architecture of HW/SW co-design architecture is elaborated in Section 4. Experimental results are shown in Section 5. Section 6 covers the related work. And finally, Section 7 concludes the paper.
\section{K-Means Clustering Algorithm}
K-means is one of the simplest partitioning algorithms, with fast execution time, and is popular for unsupervised centroid-based clustering. As its name implies, k-means divides input datasets into ``k'' groups, called clusters, where all members in a cluster are similar under some metric, and dissimilar to members of other clusters. Additionally, k-means is a centroid-based algorithm, where each cluster has a prototype that is an indicator of the cluster. Each data point is classified into the cluster whose centroid is the closest. Three conventional distance metrics have been used for k-means clustering to calculate the distance between each point and the centroids: \emph{Manhattan}, \emph{Max}, and \emph{Euclidean} \cite{Estlick2001}. For instance, if we suppose that each data point is a vector $\overrightarrow{dp} = (p_1, p_2, ..., p_m)$, the Euclidean distance can be defined as follows:
\begin{equation}
\vspace{-3pt}
EuclidDist(\overrightarrow{dp},\overrightarrow{cent}) = \big(\displaystyle\sum_{i=1}^{m} (dp_i - cent_i)^2\big)^{\frac{1}{2}}
\vspace{-1pt}
\end{equation}
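For illustration, the three metrics can be sketched in a few lines of Python (our own example values):

```python
import numpy as np

def manhattan(dp, cent):
    """Sum of absolute coordinate differences."""
    return np.abs(dp - cent).sum()

def chebyshev(dp, cent):
    """The 'Max' metric: largest absolute coordinate difference."""
    return np.abs(dp - cent).max()

def euclidean(dp, cent):
    """Square root of the sum of squared coordinate differences."""
    return np.sqrt(((dp - cent) ** 2).sum())

dp, cent = np.array([1., 2., 2.]), np.array([4., 6., 2.])
# coordinate differences are (3, 4, 0): Manhattan=7, Max=4, Euclidean=5
print(manhattan(dp, cent), chebyshev(dp, cent), euclidean(dp, cent))
```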
The k-means algorithm first initializes $k$ centroids. Then it enters an iterative process where each iteration consists of two steps: (1) \emph{Assignment Step}: each point is assigned to the cluster whose centroid is the closest. (2) \emph{Update Step}: a new centroid is found by re-calculating the mean of the points newly assigned to each cluster. When the centroids stop changing, the clustering of the dataset into $k$ clusters is successfully accomplished.
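A minimal Python sketch of this iteration (our own illustration; the random initialization and stopping rule are the plain Lloyd variants, not the optimized HW/SW flow described later):

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd iteration: assignment step, then update step,
    stopping when the centroids no longer change."""
    rng = np.random.default_rng(seed)
    cents = points[rng.choice(len(points), k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # assignment: each point joins the cluster with the closest centroid
        d = np.linalg.norm(points[:, None, :] - cents[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # update: each centroid becomes the mean of its assigned points
        new = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, cents):
            break
        cents = new
    return cents, labels

pts = np.array([[0., 0.], [0., 1.], [1., 0.],
                [10., 10.], [10., 11.], [11., 10.]])
cents, labels = kmeans(pts, 2)  # recovers the two well-separated blobs
```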
\vspace{-1pt}
\section{Binary kd-Tree for Filtering Search Space}
The filtering algorithm is developed based on a binary kd-tree, which reduces the time required for search queries \cite{Kanungo2002}. In this algorithm, all data points are recursively divided into \emph{axis-aligned bounding boxes}. This recursive process generates a multi-dimensional binary search tree, whose root is the bounding box of all data points; each level of the tree consists of two meaningful subsets of data points, and consequently each leaf represents at most one data point. Each node stores some essential information, such as the corresponding bounding box (\emph{cell}), the number of data points in the box (\emph{count}), the weighted centroid (\emph{wgtCent}), which represents the sum of all data points in the box, and the candidate centroids (\emph{Z}).
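A simplified sketch of such a tree in Python (our own illustration; the field names follow the text, but the median split rule used here is one common choice, not necessarily the exact construction of \cite{Kanungo2002}):

```python
import numpy as np

class KdNode:
    """Node of the binary kd-tree: bounding box (cell), point count,
    and weighted centroid (the sum of the points in the box)."""
    def __init__(self, points, depth=0):
        self.cell = (points.min(axis=0), points.max(axis=0))
        self.count = len(points)
        self.wgt_cent = points.sum(axis=0)
        self.point = points[0] if len(points) == 1 else None  # leaf payload
        self.left = self.right = None
        if len(points) > 1:
            axis = depth % points.shape[1]       # cycle through dimensions
            order = points[:, axis].argsort()    # split at the median
            mid = len(points) // 2
            self.left = KdNode(points[order[:mid]], depth + 1)
            self.right = KdNode(points[order[mid:]], depth + 1)

pts = np.array([[2., 3.], [5., 4.], [9., 6.], [4., 7.], [8., 1.]])
root = KdNode(pts)
assert root.count == 5 and np.allclose(root.wgt_cent, pts.sum(axis=0))
```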
\begin{algorithm}
\caption{Filtering algorithm by using binary kd-Tree \cite{Kanungo2002}}\label{Filter}
\begin{algorithmic}[1]
\small
\Function{Filter}{kdNode~$u$, CandidateSet $Z$}
\State $C\gets u.cell$;
\If {$u$~is~a~leaf}
\State $z^*\gets the~closest~point~in~Z~to~u.point$;
\State $z^*.WgtCent\gets z^*.WgtCent~+~u.point$;
\State $z^*.count\gets z^*.count~+~1$;
\Else
\State $z^*\gets the~closest~point~in~Z~to~C's~midpoint$;
\ForAll{$z \in Z \setminus \{z^*\}$}
\If {$z.isFarther(z^*,C)$}
\State $Z\gets Z \setminus \{z\}$;
\EndIf
\EndFor
\If{$| Z | == 1$}
\State $z^*.WgtCent\gets z^*.WgtCent~+~u.wgtCent$;
\State $z^*.count\gets z^*.count~+~u.count$;
\Else
\State Filter($u.left$, $Z$);
\State Filter($u.right$, $Z$);
\EndIf
\EndIf
\EndFunction
\end{algorithmic}
\end{algorithm}
Alg. \ref{Filter} depicts the kd-tree filtering algorithm. In each node, \emph{Z} holds a subset of candidate centroids. If we suppose that we have $k$ clusters, the candidate centroids for the root node are all $k$ centroids, and the candidates for each internal node are a subset of the $k$ centroids. Additionally, the candidates of a node are the possible nearest centroids for the points in its cell. For each node, the candidate $z^* \in Z$ closest to the midpoint of \emph{cell} is found, and every other candidate $z \in Z\setminus \{z^*\}$ is compared against it; candidates that are farther from the entire cell than $z^*$ are pruned from $Z$.
\vspace{-1pt}
\section{HW/SW co-design Two-level K-Means clustering}
Similar to \cite{Estlick2001, Gokhale2003, Choi2014, Abdelrahman2016, Canilho2016}, we demonstrate that MUCH-SWIFT \cite{kamali2018swift}, as a HW/SW co-design architecture, accelerates the k-means algorithm by using a system-level architecture that consists of multiple processors and a single FPGA. A ZCU102 evaluation board has been used in this architecture, which is equipped with a Zynq-7000 Ultrascale+ SoC suitable for multi-core architectures. This architecture consists of two major sub-modules: (1) the Processing System (PS), which consists of a quad-core Cortex-A53 processor and a dual-core Cortex-R5 co-processor, and (2) the Programmable Logic (PL), which is responsible for implementing the parallel arithmetic cores required for the Manhattan distance calculations, comparators, and updater. Fig. \ref{TopArch} illustrates the overall architecture of MUCH-SWIFT. As illustrated, it is implemented based on the ZYNQ Ultrascale+ architecture, which has four Cortex-A53 cores at up to 1.5 GHz, two Cortex-R5 cores at up to 600 MHz, 1 GB of DDR3 off-chip memory, and a ZU9EG FPGA chip with around 600K logic cells. To achieve the highest speedup, all processors are employed in this design; each Cortex-A53 core is responsible for evaluating and analyzing one quarter of the data points independently. Additionally, in order to reduce the search time, a binary kd-tree structure is employed to filter (prune) nodes, and their children, whose candidates are not the nearest centroid. Then, in order to maximize the utilization of all four Cortex-A53 cores, N parallel clusterings (N being the number of available cores, which is 4 in the experimental results section) are built by dividing the original dataset into N smaller datasets at the top of the kd-tree. After clustering the sub-datasets, they are merged together and the filtering algorithm is invoked on top of the merged clusters.
Using this two-layer clustering approach speeds up convergence, which decreases the number of iterations required for clustering. Furthermore, MUCH-SWIFT is able to process large datasets as well as large data sizes by using a DMA-based PCIe interface and the DDR3 memory on the ZCU102 without any significant throughput degradation. One Cortex-R5 core is responsible for handling the custom DMA between PCIe and the DDR3 memory, while the other Cortex-R5 core generates the initial states of each quarter of the data points as well as the initial values of the centroids. Controlling the update procedure after pruning in the kd-tree structures, and the update stage for the centroids, are also accomplished by the second Cortex-R5.
\begin{figure}[t]
\centering
\includegraphics[width = 220pt]{Fig1.pdf}
\vspace{-10pt}
\caption{Overall MUCH-SWIFT System-Level Architecture.}
\label{TopArch}
\end{figure}
\vspace{-6pt}
\subsection{Parallelism in kd-tree Traversal}
In order to maximize the parallelism in the MUCH-SWIFT architecture, each Cortex-A53 core is made responsible for a quarter of the data points. In fact, according to the size of the data points, they are divided into 4 independent groups, and each group is considered as a separate dataset. So, there are four independent kd-tree structures, one per quarter, and the filtering algorithm can be executed on each structure independently. Therefore, all sub-modules of the k-means clustering algorithm, including the distance calculator, comparison, and updater, are parallel and dedicated to each group.
The big challenge in this architecture is combining the results of the four divided sub-datasets. In order to accomplish k-clustering by means of this technique, i.e. dividing into four groups of data, it might seem necessary to implement four $\frac{k}{4}$-clustering algorithms separately, and then gather the four sets of $\frac{k}{4}$ centroids as well as their corresponding clusters to provide k clusters. But, since dividing the dataset into four sub-datasets changes the calculated centroids, the results obtained in this scenario are not equivalent to a conventional k-clustering, and consequently the results are invalid. Therefore, a two-layer clustering mechanism has been implemented in order to perform it accurately. In the first level of k-means clustering, the data points are divided into four independent sub-datasets, but k clusters are calculated for each sub-group; after completing k-means clustering for each sub-group, all $4k$ centroids and their clusters ($4k$ clusters) are gathered. It is then necessary to combine each cluster in a sub-group with the three clusters in the other sub-groups whose centroids are the nearest. After merging the four sub-datasets, the centroids and cluster members must be updated. With this process, the second level of k-clustering has initial values (i.e. centroids and their clusters) that are considerably close to the final result. In fact, the number of iterations for the second level of k-clustering is very small.
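A hypothetical sketch of the merge step follows (our own reading of the text: each cluster of the first sub-group is fused with the nearest-centroid cluster of every other sub-group via a count-weighted average; the actual MUCH-SWIFT implementation may differ):

```python
import numpy as np

def merge_subclusterings(cents, counts):
    """cents[g][j] is the j-th centroid of sub-group g (NumPy array),
    counts[g][j] its cluster size. Each cluster of sub-group 0 is merged
    with the nearest unused cluster of every other sub-group."""
    k = len(cents[0])
    merged_c, merged_n = [], []
    used = [set() for _ in cents]  # clusters already consumed per sub-group
    for j in range(k):
        group_c, group_n = [cents[0][j]], [counts[0][j]]
        for g in range(1, len(cents)):
            cand = [i for i in range(k) if i not in used[g]]
            i = min(cand,
                    key=lambda i: np.linalg.norm(cents[g][i] - cents[0][j]))
            used[g].add(i)
            group_c.append(cents[g][i])
            group_n.append(counts[g][i])
        n = sum(group_n)
        # count-weighted average of the fused centroids
        merged_c.append(sum(c * m for c, m in zip(group_c, group_n)) / n)
        merged_n.append(n)
    return merged_c, merged_n
```

The merged centroids and counts then serve as the near-converged initial values for the second-level clustering.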
\begin{algorithm}
\caption{Two-level k-clustering Algorithm by Using 4 parallel Binary kd-Tree Structures}\label{ParallelClustering}
\begin{algorithmic}[1]
\small
\Function{ParallelClustering}{DataPoint\_Set~$DP$}
\For{$i=0$ to $3$}
\State DataPoint\_Set$~QDP[i]\gets Quarter(DP,i)$;
\State kdNode $~*kdu[i] \gets Gen\_KdTree(QDP[i])$;
\State CandidateSet$~Z\_Update[i] \gets Lloyd\big[QDP[i]\big]$;
\State CandidateSet$~Z\_Current[i] \gets Z\_Update[i]$;
\EndFor
\For{$i=0$ to $3$} \Comment{parallel in PL}
\State Filter($kdu[i]$, $Z\_Update[i]$);
\While {$Z\_Update[i] \neq Z\_Current[i]$}
\State $Z\_Current[i] \gets Z\_Update[i]$;
\State Filter($kdu[i]$, $Z\_Update[i]$);
\EndWhile
\EndFor
\State kdNode$~ kdu\_top \gets Combine(kdu[0:3])$;
\While{$Z$ is updated}
\State Filter($kdu\_top$, $Z$);
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
Alg. \ref{ParallelClustering} illustrates the pseudo-code of the MUCH-SWIFT method. During the initialization state, the dataset is divided into four separate sub-datasets via the Quarter function. Then a kd-tree is generated for each sub-dataset, and the Lloyd function is employed for choosing the initial centroids \cite{Lloyd1982, kamali2016aes, Sayadi2014dft}. The most important part of this algorithm is the parallelism in tree traversal (\emph{Lines} 8--14), where each Cortex-A53 core is made responsible for transceiving data to/from the PL in order to calculate and update its corresponding kd-tree characteristics (i.e. centroids and clusters) in parallel.
\vspace{-2pt}
\subsection{No Limit for Dataset Size via High Throughput DDR3 Memory}
The DDR3 off-chip memory in ZYNQ Ultrascale+ has been employed to maximize the feasible data size. ZYNQ Ultrascale+ provides an efficient and fast DDR3 memory, which is accessible from both PS and PL, as illustrated in Fig. \ref{TopArch}. The capacity of this memory is 1 GB, and it has a 128-bit data-bus for read/write access. Also, as can be seen in Fig. \ref{TopArch}, it is necessary to implement a BRAM-based bridge (\emph{BRAM-based FIFO}) between the DDR3 and the PL in order to transfer data from the PL to the DDR3 and vice versa. In order to minimize the required size of the BRAM-based bridge between the DDR3 and the PL, the data size for each level of tree traversal has been evaluated separately. Similar to \cite{Winterstein2013}, hierarchical access makes it possible to release and reuse the memory at each level (depth) before starting the next level (depth) of the tree. In addition, all data is permanently kept in the DDR3; hence, overwriting the data can be accomplished without any throughput degradation. As a result, the large size of the DDR3 makes it possible to maximize the dataset size as required. For instance, suppose that MUCH-SWIFT is configured to classify $N = 100000$ data points into $K=1024$ clusters. In the worst case, the structure of the kd-tree is like a \emph{degenerate} tree. In this case, we need $(N-1)\times K\times\log_2 K \simeq $~122 MB, which is much less than the DDR3 memory, i.e. 1 GB.
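The worst-case estimate above can be reproduced directly (our own sketch; interpreting the per-node bookkeeping count $(N-1)\times K\times\log_2 K$ as bits, the arithmetic matches the quoted $\simeq$122 MB):

```python
from math import log2

N, K = 100_000, 1024
bits = (N - 1) * K * log2(K)   # worst case: degenerate tree, count in bits
mib = bits / 8 / 2**20         # convert bits -> bytes -> MiB
print(f"worst case: {mib:.0f} MiB of bookkeeping vs 1 GB of DDR3")
```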
\vspace{-2pt}
\section{Experimental Results}
In order to demonstrate the MUCH-SWIFT throughput, some test cases should be considered. A large Xilinx ZYNQ-based SoC architecture (ZCU102 evaluation board) has been targeted to evaluate this architecture. ZYNQ facilitates software-side development by using the Xilinx SDK. Furthermore, Vivado 16.2 is used for synthesizing, implementing, and downloading the overall design onto the FPGA, which makes it possible to implement a block diagram for all parts of the design, even the software side. MUCH-SWIFT consists of four main sub-modules:
\begin{enumerate}
\item The PS consists of a quad-core Cortex-A53 and a dual-core Cortex-R5, and is responsible for controlling the transceiving of data to/from each core (Cortex-A53) from/to the PL in order to perform the k-clustering computations. Also, one Cortex-R5 handles the custom DMA for transmitting data to the DDR3 from the PCIe interface, and the other Cortex-R5 core controls the updating stage of the filtering algorithm.
\item All floating point arithmetic operations, i.e. Manhattan distance, compare, and update centroids have been accomplished in PL.
\item As illustrated in Fig. \ref{TopArch}, a UART interface has been engaged to determine the number of clusters as a configurable parameter. In fact, the number of clusters is used to determine the number of parallel modules in the PL. For instance, if we set the number of clusters to $K = 5$, since there are four sub-datasets, and each sub-dataset should implement a ($K = 5$)-clustering, we will have 20 ($5\times4$) parallel modules, including Manhattan distances, compares, and updates, to accomplish the computations. So, the number of clusters is used as a configurable parameter for the PL in order to generate the logic for the parallel computation modules.
\item The PCIe interface is employed for transmitting datasets from the host to the PL. Note that all interconnections between the top modules in the MUCH-SWIFT architecture are implemented based on AXI. A 128-bit AXI bus has been employed between the PL and the PS as well as between the DDR3 and the PS/PL in order to guarantee the required throughput. Also, a 64-bit AXI-based data-bus has been implemented to make the custom DMA between PCIe and the DDR3 efficient. The PS is developed in C++ using the Xilinx SDK, and the PL is implemented in Verilog HDL using Xilinx Vivado. Also, all sub-modules in the PL are implemented with an AXI-based structure.
\end{enumerate}
As mentioned earlier, an FPGA-based implementation of the filtering algorithm was successfully realized in \cite{Winterstein2013}. Compared to that architecture, MUCH-SWIFT is a multi-core architecture implementing a parallel structure for the filtering algorithm, and the two-layer filtering approach provides better throughput. Fig. \ref{kdtree}a illustrates the average clock cycles per iteration of MUCH-SWIFT against \cite{Winterstein2013}. As can be seen, the multi-core architecture provides around $8.5\times$ speedup on average. Also, as can be seen in Fig. \ref{kdtree}b, MUCH-SWIFT achieves more than $210\times$ acceleration on average against a conventional FPGA-based implementation without optimization. Although only four parallel cores have been employed to divide each dataset into four sub-datasets, MUCH-SWIFT achieves around $8.5\times$ speedup in comparison with the single-core filtering algorithm of \cite{Winterstein2013}. This result proves the impact of the two-layer filtering algorithm: since the dataset has been divided into four sub-datasets, not only are the computations divided into four parallel k-clustering algorithms, but the extent of the computations is also mitigated. So, it achieves higher efficiency than the expected result (close to $4\times$ speedup). Note that the second level of the filtering algorithm converges in a few iterations, because the output of the first level of the filtering algorithm is very close to the output after convergence; so, it has little impact on the results. Similar to \cite{Winterstein2013}, the test cases are generated with a normal distribution with varying standard deviation, and all centroids are distributed uniformly among the data points. Also, note that all data communications (interactions) between the host and the FPGA, which are accomplished via the PCIe interface, are counted in the timing evaluation.
\begin{figure}[t]%
\centering
\vspace{-7pt}
\subfloat[]{{\includegraphics[width=240pt]{Fig2a.pdf} }} \\
\subfloat[]{{\includegraphics[width=240pt]{Fig2b.pdf} }} \\
\vspace{-12pt}
\caption{(a) Average clock cycles per iteration. (b) Speedup against a conventional FPGA-based single-core implementation.}%
\label{kdtree}%
\vspace{-19pt}
\end{figure}
Fig. \ref{kdtree}a illustrates the average number of clock cycles needed per iteration in MUCH-SWIFT against an FPGA-based kd-tree implementation \cite{Winterstein2013}. As illustrated, the MUCH-SWIFT architecture provides around $8.5\times$ speedup on average. Additionally, as reported in Fig. \ref{kdtree}b, MUCH-SWIFT provides up to $330\times$ speedup, and more than $210\times$ acceleration on average, compared to an FPGA-based architecture without optimization. Note that all data communication between the host and the FPGA, which takes place over a PCIe interface, is included in the timing evaluation. MUCH-SWIFT's robust and scalable data transfer and DMA management contribute to the reported speedup; in fact, this is why MUCH-SWIFT achieves $8.5\times$ speedup over the single-core filtering algorithm \cite{Winterstein2013} while utilizing only four parallel cores: the computation is no longer memory-bound.
In order to illustrate the efficiency of the filtering algorithm with a parallel architecture compared to a k-clustering implementation without optimization, the MUCH-SWIFT results are compared with the architecture proposed in \cite{Canilho2016}, a multi-core implementation of k-clustering. Fig. \ref{multicoreres}a depicts the execution time of MUCH-SWIFT and \cite{Canilho2016} on $10^6$ data points with 15 dimensions and numbers of centroids ranging from 2 up to 100. Increasing the number of clusters widens the gap between MUCH-SWIFT and \cite{Canilho2016} due to the parallel arithmetic cores in the MUCH-SWIFT architecture: since the number of parallel arithmetic cores in MUCH-SWIFT scales with the number of clusters, and the maximum feasible FPGA resources are used, MUCH-SWIFT suffers less throughput degradation. Fig. \ref{multicoreres}b focuses on data dimensionality. Overall, Fig. \ref{multicoreres} shows around $12\times$ speedup against \cite{Canilho2016} on average.
\begin{figure}[t]%
\centering
\subfloat[]{{\includegraphics[width=240pt]{Fig3a.pdf} }} \\
\subfloat[]{{\includegraphics[width=240pt]{Fig3b.pdf} }} \\
\vspace{-12pt}
\caption{Execution time for $10^6$ data points (a) with different numbers of clusters (15 dimensions) and (b) with different numbers of dimensions (6 clusters)}%
\label{multicoreres}%
\end{figure}
Table \ref{resource} reports MUCH-SWIFT's resource utilization for different numbers of clusters. Increasing the number of clusters requires more resources for parallelism, and the resources available on the FPGA are limited, so there is a limit on the number of clusters for a fully parallel architecture. As reported in Table \ref{resource}, the maximum number of clusters for a fully parallel architecture is $20$; for applications with more clusters, the parallel modules must be shared uniformly among the clusters. Note that $20$ clusters means that $20\times4=80$ parallel modules are implemented on the ZU9EG, which is significant. Also, during the implementation phase, the highest possible proportion of BRAMs and DSPs is used in order to maximize the number of parallel arithmetic cores.
\begin{table}[t]
\centering
\caption{Resource Utilization with Different Cluster Sizes}
\vspace{-9pt}
\includegraphics[width = 230pt]{Table1.pdf}
\label{resource}
\vspace{-15pt}
\end{table}
\vspace{-5pt}
\section{Related Work}
A HW/SW co-design architecture is implemented in \cite{Gokhale2003} based on NIOS 1.1, but the HW/SW interface is the Peripheral Bus Module (PBM), whose serial infrastructure considerably limits throughput. Unlike HW/SW architectures, pure FPGA-based designs \cite{Hussain2011_1, Hussain2011_2} provide significant speedup over HW/SW co-designs by using fixed-point arithmetic, but on-chip FPGA memories (such as BRAMs) are a major restriction when storing large datasets.
In order to avoid redundant distance calculations, the triangle inequality \cite{Elkan2003} is successfully applied in \cite{Lin2012}; however, data points are truncated to 8 bits, which is restrictive. A filtering algorithm using a kd-tree structure is implemented in \cite{Winterstein2013}. Because it stores data in on-chip memories, it can hold only 64K data points at a time; moreover, the dimensionality of the data points is limited to 16, and all computations use fixed-point arithmetic.
Another pure FPGA-based k-means clustering architecture is implemented in \cite{Kutty2013}; the number of clusters is fixed in this architecture, and changing it requires re-synthesis and re-implementation. A computer cluster consisting of multiple FPGA-CPU pairs is implemented in \cite{Choi2014}, where map-reduce programming allows easy scaling and parallelization across the distributed system. Although this evaluation on multiple FPGA-CPU pairs shows considerable throughput gains over a baseline software implementation, the case of utilizing multiple FPGAs to process larger datasets remains to be evaluated.
A specific FPGA accelerator for the Intel QuickAssist FPGA platform is implemented in \cite{Abdelrahman2016}, providing integration between threads on a CPU and an Accelerated Function Unit (AFU) on the QuickAssist FPGA. Although this integration helps achieve considerable performance, it is applicable only to this specific FPGA platform, i.e., QuickAssist. Finally, \cite{Canilho2016} presents a ZYNQ-based HW/SW co-design architecture that employs ARM processors to provide parallelism in both the FPGA and the ARM processor, but the implemented algorithm has no algorithmic optimization.
\vspace{-5pt}
\section{Conclusion}
In this paper, we demonstrate that a HW/SW co-design architecture combined with a software-based optimization technique provides high efficiency for the k-means algorithm. \emph{MUCH-SWIFT}, an FPGA-based architecture for parallelizing the k-clustering algorithm, is integrated with a modified two-layer filtering optimization. MUCH-SWIFT employs all processing cores in the ZYNQ Ultrascale+ SoC to reduce computation time, together with a two-layer filtering algorithm designed for parallel processing of binary kd-tree structures. Furthermore, by employing the ZYNQ Ultrascale+ and utilizing its DDR3 memory, MUCH-SWIFT increases the feasible size of its input datasets. Additionally, MUCH-SWIFT benefits from the proposed HW/SW co-design architecture, which provides a high-throughput DMA-based PCIe channel for transferring datasets between the host and the ZYNQ SoC. Using this HW/SW co-design architecture, MUCH-SWIFT achieves around $330\times$ speedup compared to a software-only solution.
\section{Introduction}
Often in data analysis, one has a small set of quality labeled data, and a large pool of unlabeled data. It is the task of semi-supervised learning to make as much use of this unlabeled data as possible. In the low-data regime, the aim is to create models that perform well after seeing only a handful of labeled examples. This is often the case with machine translation and dictionary completion, as it can be difficult to construct a large number of labeled instances or a sufficiently large parallel corpora. However, this domain offers a huge number of monolingual corpora to make high quality language embeddings \cite{dictionary,al2013polyglot}. The methods presented in this paper are designed to take into consideration both labeled and unlabeled information when training a neural network. The supervised component uses the standard alignment-based loss functions and the unsupervised component attempts to match the distribution of the network's output to the target data's distribution by minimizing the Maximum Mean Discrepancy (MMD) ``distance'' between the two distributions. This has the effect of placing a prior on translation methods that preserve the distributional structure of the two datasets. This limits the model space and increases the quality of the mapping, allowing one to use less labeled data.
Related methods, such as auto-encoder pre-initialization \cite{erhan2010does}, first learn the structure of the input and then learn a mapping; in that setup, unsupervised knowledge enters through learning good features to describe the dataset. The MMD method of unsupervised training instead directly learns a mapping between the two spaces that aligns all of the moments of the mapped data and the target data. This method can be used to improve any semi-supervised mapping problem, such as mappings between languages \cite{lt_trans}, image labeling, fMRI analysis \cite{mitchell2008predicting}, and any other domain where transformations must be learned between data. This investigation studies these methods in the low-data regime, with the eventual goal of studying dying or lost languages, where very few supervised training examples exist.
\section{Background}
\subsection{Maximum Mean Discrepancy}
\label{MMD1}
\renewcommand{\(}{\left(}
\renewcommand{\)}{\right)}
\newcommand{\mathcal{X}}{\mathcal{X}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathcal{F}}{\mathcal{F}}
\newcommand{\mathcal{N}}{\mathcal{N}}
\newcommand{\E}[1]{\mathbb{E}_{#1}}
The Maximum Mean Discrepancy (MMD), put forth by \cite{kernel_two_sample}, is a measure of distance between two distributions $p, q$. More formally, let $x$, $y$ be random variables defined on a topological space $\mathcal{X}$ with Borel measures $p, q$, and let $\mathcal{F}$ be a class of functions from $\mathcal{X} \to \mathbb{R}$. The MMD semi-metric is defined as:
\begin{equation}
MMD_{\mathcal{F}} \( p,q \) = \sup_{f \in \mathcal{F}} \Bigl( \E{x \sim p} f\(x\) - \E{y \sim q} f\(y\) \Bigr)
\end{equation}
where $\mathbb{E}$ is the expectation (first raw moment), defined as:
\begin{equation}
\E{x \sim p} f(x) = \int_{ \mathcal{X} } f(x) dp
\end{equation}
Intuitively, the MMD is a measure of distance which uses a class of functions as a collection of ``trials'' to put the two distributions through. The distributions pass a trial if the function evaluated on both distributions has the same expectation or mean; they fail a trial if they yield different means, and the size of the difference measures how badly they fail that trial. Identical distributions yield the same images under each function in $\mathcal{F}$, so the means (first moments) of the images are also identical. Conversely, if the function class is ``large enough'', this method can distinguish between any two distributions that differ, making the MMD a semi-metric on the space of probability distributions. A unit ball in a Reproducing Kernel Hilbert Space (RKHS) is sufficient to discern any two distributions provided the kernel $k$ is universal \cite{cortes}. If $\mathcal{F}$ is a unit ball in kernel space, Gretton \textit{et al.} \cite{kernel_two_sample} showed that the following is an unbiased estimator of the MMD:
\begin{multline}
MMD_u^2(X,Y) = \frac{1}{m(m-1)} \sum_{i=1}^{m} \sum_{j \neq i}^{m} k(x_i,x_j) +\\
\frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j \neq i}^{n} k(y_i,y_j) - \frac{2}{mn} \sum_{i=1}^{m} \sum_{j = 1}^{n} k(x_i,y_j)
\end{multline}
If the kernel function is differentiable, this implies that the estimator of the MMD is differentiable, allowing one to use it as a loss function that can be optimized with gradient descent.
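As a concrete illustration, the unbiased estimator above can be written in a few lines. This is a minimal NumPy sketch with a single-scale Gaussian kernel; the experiments in this paper use the multi-scale kernel described later, so the kernel choice here is illustrative.

```python
import numpy as np

def mmd_u2(X, Y, sigma=1.0):
    """Unbiased estimator MMD_u^2(X, Y) with a Gaussian kernel of width sigma."""
    def k(A, B):
        # Pairwise squared distances, then the Gaussian kernel.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    # Exclude the diagonal (i == j) terms, as in the unbiased estimator.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2 * Kxy.mean()
```

Because every operation here is differentiable in the entries of $X$, the same expression can serve directly as a loss in an automatic-differentiation framework.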
\subsection{MMD Networks}
The differentiability of the MMD estimator allows it to be used as a loss function in a feed-forward network. Li \textit{et al.} \cite{GMMN} showed that by using the MMD distance as a loss function in a neural net $\mathcal{N}$, one can learn a transformation that maps a distribution of points $X = (x_{i})_{1}^{n}$ in $\mathbb{R}^{d}$ to another distribution $Y = (y_{i})_{1}^{m}$ in $\mathbb{R}^{e}$ while approximately minimizing the MMD distance between the image of $X$, $\mathcal{N}(X)$, and $Y$.
\begin{equation}
l_{MMD}(X,Y,\mathcal{N}) = MMD_u^2(\mathcal{N}(X),Y)
\end{equation}
This loss function allows the net to learn transformations of probability distributions in a completely unsupervised manner. Furthermore, the MMD-net can also be used to create generative models, i.e., mappings from a simple distribution to a target distribution \cite{GMMN}, where simple usually means easy to sample from, often a maximum-entropy distribution such as a multivariate uniform or Gaussian. This loss function can be optimized via mini-batch stochastic gradient descent, and the samples from $X$ and $Y$ need not be paired in any way. To avoid over-fitting, the minibatches for $X$ and $Y$ should be sampled independently, which this paper refers to as ``unpaired'' minibatching.
\section{Methods}
\subsection{$n$-Channel Networks}
This work introduces a generalization of a feed-forward net, called an $n$-channel net. This architecture allows an unsupervised loss term that requires unpaired mini-batching to be mixed with the paired mini-batching scheme of a standard feed-forward network.
An $n$-channel net is a collection of $n$ networks with tied weights that operate on $n$ separate datasets $(X_i,Y_i)_1^n$. More formally, an $n$-channel net is a mapping:
\begin{equation}
\mathcal{N}_n : \(\mathbb{R}^d\)^n \to \( \mathbb{R}^e \)^n
\end{equation}
defined as:
\begin{equation}
\mathcal{N}_n\Bigl(\(X_i\)_1^n\Bigr) \equiv \Bigl(\mathcal{N}(X_i)\Bigr)_1^n \end{equation}
where $\mathcal{N}: \mathbb{R}^d \to \mathbb{R}^e$ is a feed-forward network. Each channel of the network can have its own loss function and be fed from a separate data source. Most importantly, these separate data sources can be trained in a paired or unpaired manner.
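A minimal sketch of the weight tying may help fix ideas. Here a single shared linear layer stands in for $\mathcal{N}$ (the trained networks are, of course, full feed-forward nets; the class and method names are illustrative):

```python
import numpy as np

class NChannelNet:
    """n copies of one network N with tied weights: a single weight matrix W
    (and bias b) is shared across all channels."""
    def __init__(self, d, e, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=1.0 / np.sqrt(d), size=(d, e))
        self.b = np.zeros(e)

    def forward_one(self, X):      # N : R^d -> R^e
        return X @ self.W + self.b

    def forward(self, channels):   # N_n : (R^d)^n -> (R^e)^n
        return [self.forward_one(Xi) for Xi in channels]
```

Since the weights are tied, a gradient step computed from any channel's loss updates the single shared $(W, b)$; this is what lets a paired alignment loss on one channel and an unpaired MMD loss on another train the same mapping.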
\subsection{A Semi-Supervised MMD-Net}
In many applications where one is interested in estimating a transformation between data spaces, one has a small labeled dataset $(X,Y)$ and large, unlabeled datasets $(S,T)$. Throughout the literature, MMD networks have only been applied to the case of unpaired data \cite{GMMN}. We expand on this work by augmenting the completely unsupervised MMD loss with a supervised alignment term. More formally, if one has a collection of $k$ paired vectors $(x_{i},y_{i})_{1}^{k}$, with $x_{i} \in X$ and $y_{i} \in Y$, that should be aligned through the transformation $\mathcal{N}$, one can use the standard loss function:
\begin{equation}
l_{alignment}(X,Y,\mathcal{N}) = \sum_{i=1}^{k} \lVert \mathcal{N}(x_{i})-y_{i} \rVert
\end{equation}
where $\lVert \cdot \rVert$ is any differentiable norm in $\mathbb{R}^d$; this work uses the standard $l_2$ vector norm. This is the standard loss used in regression, where the goal of the network is to minimize the distance between the network output $\mathcal{N}(x_{i})$ and the observed responses $y_{i}$.
Using a hyperparameter, we can blend the cost functions of the supervised alignment loss and the unsupervised MMD loss. The full cost function for the MMD network then becomes:
\begin{multline}
l(X,Y,S,T,\mathcal{N}) = \alpha_{pair}l_{alignment}(X,Y,\mathcal{N}) +\\
(1-\alpha_{pair}) l_{MMD}(S,T,\mathcal{N})
\end{multline}
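The blended objective above is simple to write down explicitly. Below is a NumPy sketch for the special case of a linear map $W$ standing in for $\mathcal{N}$, with a single-scale Gaussian-kernel MMD; the function names and kernel scale are illustrative, not the paper's exact implementation (which uses Theano and the multi-scale kernel):

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd_u2(X, Y, sigma=1.0):
    """Unbiased MMD^2 estimator (diagonal terms excluded)."""
    m, n = len(X), len(Y)
    Kxx = gaussian_gram(X, X, sigma)
    Kyy = gaussian_gram(Y, Y, sigma)
    Kxy = gaussian_gram(X, Y, sigma)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())

def semi_supervised_loss(W, X, Y, S, T, alpha_pair, sigma=1.0):
    """alpha * squared alignment error on the paired set (X, Y)
       + (1 - alpha) * unbiased MMD between the mapped S and T."""
    l_align = np.mean(np.sum((X @ W - Y) ** 2, axis=1))
    l_mmd = mmd_u2(S @ W, T, sigma)
    return alpha_pair * l_align + (1 - alpha_pair) * l_mmd
```

The supervised pre-initialization described next corresponds to first minimizing this objective with $\alpha_{pair}=1$ before switching to the blended value.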
\subsection{Supervised Pre-Initialization}
The MMD term of the cost function scales as $\mathcal{O}\(M^2\)$, where $M$ is the size of the mini-batch. This significantly increases training time for large batch sizes, slowing convergence in wall-clock time. To mitigate this effect, we first train the network until convergence with only the supervised term of the cost function; once converged, we switch to the full semi-supervised cost function.
This also helps the network avoid local minima, as it starts close to the optimal solution. Because the MMD cost function is inherently unpaired, it is susceptible to getting stuck in local minima when there are multiple ways to map the mass of one probability distribution onto another. We say that a mapping between the supports, $f: \mathcal{X} \to \mathcal{Y}$, is an MMD-mode from distribution $p$ to $q$ if $f(p) \sim q$, where $f(p)$ is the distribution formed by sampling from $p$ and then applying $f$. These modes coincide with critical points of the $MMD_u^2$ cost function and are therefore hard to escape with gradient-descent methods. As the class of functions represented by the network grows, more distinct MMD-modes arise. This increases the number of critical points, though these likely tend to be saddle points rather than local minima as the dimensionality of the function space increases \cite{saddle}.
One can escape these local minima by increasing $\alpha_{pair}$ to the point where the signal from the supervised term overcomes the signal from the unsupervised cost function. However, if the network is within the pull of the correct minimum, it is often better to rely on the robust unsupervised signal than on the noisy supervised signal, which requires a small $\alpha_{pair}$. We found that supervised pre-training helped guide the network parameters to within the basin of attraction of the correct unsupervised minimum. From there the unsupervised signal was much more reliable and led to better results on synthetic and language datasets. Furthermore, on all datasets the supervised warm start greatly reduced fitting time, as convergence of the expensive MMD cost function needed fewer optimization steps. Future work could involve annealing the supervised term to a small value, though this would eliminate the aforementioned computational speedup.
To demonstrate the effect of pre-initialization, we show the unbiased MMD estimator on a simple synthetic experiment. We generate two datasets of two-dimensional points. The first, shown in Figure \ref{mmd_vs_angle} (left), is sampled from a uniform distribution on the unit square centered at $(0,0)$. To generate a simple target, shown in Figure \ref{mmd_vs_angle} (middle), we rotate the source cloud of points by an angle $\theta^*=255^{\circ}$ and add a small Gaussian noise term. Figure \ref{mmd_vs_angle} (right) shows that the MMD loss as a function of the rotation angle has several modes caused by the symmetries of the square. To simulate a very noisy MSE, we use the MSE of one randomly sampled point and its respective pair. The noisy MSE loss function has two local minima, and its global minimum $\hat{\theta}$ lies within the correct basin of attraction of the unsupervised cost function. This basin of attraction has a minimum that is indistinguishable from the correct value of $\theta$ and much more accurate than the supervised loss term.
\begin{figure*}[t!]
\centering
\begin{subfigure}
\centering
\includegraphics[height=1.6in]{figures/x}
\end{subfigure}%
\begin{subfigure}
\centering
\includegraphics[height=1.6in]{figures/x_rot}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[height=1.6in]{figures/mmd_vs_angle}
\end{subfigure}
\caption{Left: Initial dataset $X$ sampled uniformly from the unit square. Colors indicate how points are mapped through the transform. Middle: $Y = X_{255^{\circ}} + Gaussian(\mu=0,\sigma=.1) $ Where $X_{\theta}$ denotes a rotation clockwise by $\theta$. Right: Unit scaled $MMD_u^2(X_{\theta},Y)$, and unit scaled $MSE(X_{\theta,1},Y_1)$ as a function of $\theta$. Where $X_{1}$ denotes the first element of $X$.}
\label{mmd_vs_angle}
\end{figure*}
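The rotation experiment of Figure \ref{mmd_vs_angle} can be reproduced in a few lines. This NumPy sketch uses an illustrative kernel scale, sample size, and grid resolution rather than the exact settings behind the figure:

```python
import numpy as np

def mmd_u2(X, Y, sigma=0.5):
    """Unbiased MMD^2 with a Gaussian kernel (k(x, x) = 1 on the diagonal)."""
    d2 = lambda A, B: ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    k = lambda A, B: np.exp(-d2(A, B) / (2 * sigma**2))
    m, n = len(X), len(Y)
    return ((k(X, X).sum() - m) / (m * (m - 1))
            + (k(Y, Y).sum() - n) / (n * (n - 1))
            - 2 * k(X, Y).mean())

def rot(theta_deg):
    t = np.deg2rad(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

rng = np.random.default_rng(0)
X = rng.uniform(-0.5, 0.5, size=(400, 2))              # unit square source
Y = X @ rot(255).T + rng.normal(0, 0.1, size=X.shape)  # rotated target + noise

thetas = np.arange(0, 360, 5)
losses = [mmd_u2(X @ rot(t).T, Y) for t in thetas]
best = thetas[int(np.argmin(losses))]
# `best` lands at 255 degrees up to the 90-degree symmetry of the square.
```

The scan over `thetas` traces out the multi-modal loss curve of Figure \ref{mmd_vs_angle} (right): the square's four-fold symmetry produces MMD-modes every $90^{\circ}$, and only the supervised signal disambiguates among them.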
\subsection{Choice of Kernel}
The $MMD_{\mathcal{F}}$ is able to differentiate between any two distributions if the function class $\mathcal{F}$ is a unit ball in the reproducing kernel Hilbert space (RKHS) of a universal kernel \cite{cortes}. One of the simplest and most commonly used universal kernels is the Gaussian, or radial basis function, kernel, which excels at representing smooth functions:
\begin{equation}
k_{\sigma}(x,y) = \exp\(- \frac{\lVert x - y \rVert^2}{2\sigma^2} \)
\end{equation}
The parameter $\sigma$ controls the width of the Gaussian and needs to be set properly for good performance. If $\sigma$ is too low, each point's local neighborhood is effectively empty and the gradients vanish. If it is too high, every point lies in every other point's local neighborhood and the kernel lacks the resolution to see the details of the distribution; in this scenario the gradients also vanish. We found that $\sigma$ was one of the most important hyper-parameters for the success of the method: on both our synthetic data and the natural language examples, the method performed well only in a small window of kernel-scale settings.
To improve the robustness of this method, this investigation used the following multi-scale Gaussian kernel:
$$ k(x,y) = \sum_{i = 0}^{n} c_i k_{\sigma_i}(x,y) $$
where $c_i = 1$, $\sigma_i = s\,10^{w(i/n)-w/2}$, $w = 4$, and $n=10$. The scalar $s$ is the average scale of the multi-scale kernel, the width $w$ controls the width of the frequency range covered by the kernel, and $n$ controls how many scales are sampled from this range. Choosing a larger $n$ improves performance, as there are more scales in the kernel, but increases computation time. With multiple scales in the kernel, the gradients from the larger-scale kernels first move the parameters to a region where the distributions are aligned at a coarse scale; as those gradients begin to vanish, the smaller-scale gradients become relevant. Setting $w=4$ makes the kernel sensitive to functions with scales within $2$ orders of magnitude of the average scale $s$. We find that this kernel significantly broadens the region of parameter space where the method succeeds, without hurting performance.
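The multi-scale kernel is straightforward to implement. Below is a NumPy sketch using the settings stated above ($c_i=1$, $w=4$, $n=10$, with the sum running over $i=0,\dots,n$); the function name is illustrative:

```python
import numpy as np

def multiscale_gaussian_gram(X, Y, s=1.0, w=4.0, n=10):
    """Sum of Gaussian kernels with scales sigma_i = s * 10**(w*(i/n) - w/2),
    log-spaced over w orders of magnitude centred on the average scale s."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    sigmas = s * 10.0 ** (w * np.arange(n + 1) / n - w / 2)
    return sum(np.exp(-d2 / (2 * sig**2)) for sig in sigmas)
```

Substituting this Gram matrix for the single-scale one inside the unbiased MMD estimator yields the multi-scale loss actually used in the experiments.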
Many have investigated the kernel scale problem and there are several heuristics available for choosing the scale based on optimizing statistical power or median distances to nearest neighbors. \cite{optimal} For clarity, we explicitly investigated and set the kernel scale based on a grid search evaluating on a held out validation set. Figure \ref{kernel_scale} demonstrates that the method was fairly robust to settings of average kernel scale on synthetic data and language data.
\begin{figure*}[t!]
\centering
\begin{subfigure}
\centering
\includegraphics[height=1.6in]{figures/metric_0_5kat1_additional_0_alphaPair_0p99_dataName_embeddings_vs_kernelScale_.png}
\end{subfigure}%
\begin{subfigure}
\centering
\includegraphics[height=1.6in]{figures/metric_linearAllMSEAll_normalize_0_dataName_toy_data_d30_vs_kernelScale__vs_alphaPair.png}
\end{subfigure}
\begin{subfigure}
\centering
\includegraphics[height=1.6in]{figures/metric_linearAllMSEAll_dataName_toy_data_d300_trainingSize_1000_vs_kernelScale__vs_alphaPair.png}
\end{subfigure}
\caption{Left: Performance comparison on word embeddings in the $0-5k$ frequency bin as a function of the average kernel scale $s$. Middle: Performance comparison on synthetically generated data in $\mathbb{R}^{30}$ as a function of $\alpha_{pair}$. Right: Performance comparison on synthetically generated data in $\mathbb{R}^{300}$ as a function of $\alpha_{pair}$.}
\label{kernel_scale}
\end{figure*}
\subsection{Globally Corrected (GC) Retrieval}
In this analysis, the performance of translation methods is compared on their ability to infer the correct translation on a held-out test set. More specifically, we use the precision at $N$, the fraction of examples for which the correct word was among the model's top $N$ most likely translations. This is a natural choice for translation, as for $N=1$ it estimates the probability of translating a word correctly.
To generate the list of $N$ most likely translations for a given word, one can use nearest-neighbor (NN) retrieval, taking the $N$ closest neighbors of the mapped word vector in the target space as the list of best guesses. We find that it is always better to use cosine distance for nearest-neighbor calculations. Finding the first nearest neighbor of a point $\hat{y}$ can be expressed more formally as:
\begin{equation}
NN_1(\hat{y}) = argmin_{y \in T} Rank_T(\hat{y},y)
\end{equation}
where $\hat{y}$ is our mapped word vector, $T$ is our target space, and $Rank_T(\hat{y},y)$ is a function that returns the rank of $y$ in the sorted list of distances between $\hat{y}$ and the points in $T$.
If the space of word embeddings is not uniformly distributed, there will be areas where word embeddings bunch together at higher densities. The points towards the centers of these bunches act as hub points and may be the nearest neighbors of many other points. Dinu \textit{et al.} (2014) have shown that naive NN retrieval over-weights these hub points, as they are more frequently the neighbors of other points. They called this the ``hubness problem'' and introduced a corrected form of nearest-neighbor retrieval, the globally corrected (GC) neighbor retrieval method. In this method, instead of using distance to select translations as in $NN_1$, one uses:
\begin{equation}
GC_1(\hat{y})=argmin_{y \in T} \(Rank_P\(y,\hat{y}\)-cos\(\hat{y},y\)\)
\end{equation}
where $P$ is a random sample of points from $T$ and $cos(x,y)$ is the cosine similarity between $x$ and $y$. Instead of returning the nearest neighbor of $\hat{y}$, GC returns the point in $T$ that has $\hat{y}$ ranked the highest, with the cosine term breaking ties. GC retrieval has been shown to outperform nearest-neighbor retrieval in all frequency bins when the transformation is a linear mapping \cite{lt_trans}. Figure \ref{emb_comp} shows that it also improves performance on the semi-supervised translation task.
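The two retrieval rules can be sketched as follows. This NumPy sketch computes $Rank_P$ by brute force and uses cosine similarity throughout; in practice $P$ is a random subsample of $T$, and the function names are illustrative:

```python
import numpy as np

def cos_sim(A, B):
    """Pairwise cosine similarities between rows of A and rows of B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def nn_retrieve(y_hat, T, N=1):
    """NN_N: the N target points most similar to the mapped vector y_hat."""
    sims = cos_sim(y_hat[None, :], T)[0]
    return np.argsort(-sims)[:N]

def gc_retrieve(y_hat, T, P=None, N=1):
    """GC_N: prefer the target point that ranks y_hat highest among the
    pool P; cosine similarity breaks ties."""
    if P is None:
        P = np.empty((0, len(y_hat)))  # empty pool: GC reduces to NN
    pool = np.vstack([y_hat[None, :], P])
    sims_pool = cos_sim(T, pool)       # each candidate's similarity to the pool
    # Rank_P(y, y_hat): how many pool points are more similar to y than y_hat.
    ranks = (sims_pool > sims_pool[:, :1]).sum(axis=1)
    score = ranks - cos_sim(T, y_hat[None, :])[:, 0]
    return np.argsort(score)[:N]
```

A hub point that is merely close to many queries accrues a poor $Rank_P$ for most of them, so GC demotes it unless it genuinely ranks the query near the top.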
\subsection{Neural Network Implementation}
This work implemented the network in Theano \cite{theano}, an automatic-differentiation library written in Python. The net was trained with RMSProp \cite{RMSprop} on both the unpaired and paired batches, with a batch size of 200 for each set. The unregularized pre-initialization was trained for $4000$ epochs and the regularized network for $250$ epochs, which gave ample time for convergence. Hyperparameter optimization was performed through parallel grid searches on a TORQUE cluster, where each job ran for ${\sim}20$ hours. A validation set consisting of a random sample of $10\%$ of the training set was used to choose the parameters for the final reported results.
\section{Data}
\subsection{Synthetic Data}
Several synthetic datasets were used to demonstrate the method's ability to accurately learn linear transformations using a very small paired dataset. Furthermore, we used this synthetic data to investigate the effects of the network's hyper-parameters.
Two datasets were created, one with the dimension of the source and target equal to $30$ and the other $300$, the same dimensionality as the embeddings. The datasets contained $100,000$ points and various sized paired subsets were used to calculate the supervised alignment loss in the experiments.
Source data was generated as a multivariate Gaussian with zero mean and unit variance. A ground truth mapping was generated by sampling the entries of a $d \times d$ matrix of independent Gaussians with zero mean and unit variance. The target data was generated by applying the ground truth transformation to the source data and adding Gaussian noise with zero mean and a variance of $0.1$.
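The synthetic generation procedure above can be sketched directly in NumPy (the function name and the seed handling are illustrative; the noise variance of $0.1$ corresponds to a standard deviation of $\sqrt{0.1}$):

```python
import numpy as np

def make_synthetic(d=30, n=100_000, noise_var=0.1, seed=0):
    """Source: standard Gaussian in R^d. Ground-truth map: a random d x d
    matrix with i.i.d. standard-Gaussian entries. Target: mapped source plus
    zero-mean Gaussian noise of variance noise_var."""
    rng = np.random.default_rng(seed)
    S = rng.normal(size=(n, d))                  # source points
    A = rng.normal(size=(d, d))                  # ground-truth linear map
    T = S @ A.T + rng.normal(scale=np.sqrt(noise_var), size=(n, d))
    return S, T, A
```

Paired subsets of various sizes drawn from $(S, T)$ then supply the supervised alignment term, while the full sets feed the unpaired MMD term.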
\subsection{Embedding Data}
This analysis used 300 dimensional English (EN) and Italian (IT) monolingual word embeddings from \cite{lt_trans}. These embeddings were trained with word2vec's CBOW model on $2.8$ billion tokens as input (ukWaC + Wikipedia + BNC) for English and the 1.6 billion
itWaC tokens for Italian.\cite{lt_trans} The embeddings contained the top 200,000 words in each language. Supervised training and testing sets were constructed from a dictionary built from Europarl, available at \url{http://opus.lingfil.uu.se/}. \cite{dictionary}
Two training sets consisted of the $750$ and $5{,}000$ most frequent words from the source language (English) which had translations in the gold dictionary. Five disjoint test sets were created, each consisting of roughly 400 translation pairs randomly sampled from the frequency-ranked words in the intervals 0-5k, 5k-20k, 20k-50k, 50k-100k, and 100k-200k.
\section{Results}
\subsection{Synthetic Data}
Adding the MMD term to the loss function dramatically improved the ability to learn the transformation on all synthetic datasets. The synthetic data also provided a clean environment to observe the effect of varying hyper-parameters. The experiment used a ``linear network,'' which is equivalent to learning a linear transformation between the spaces. In general, if the hyper-parameters are set correctly, the MMD-assisted learner can approach the true transformation with significantly less paired data.
Our first investigation aimed to understand the effect and robustness of the kernel-scale parameter. As one can see from Figure \ref{kernel_scale}, the performance of the method is robust to settings of the average kernel scale within $\pm 2$ orders of magnitude of the optimal scale. This empirically confirms the intuition behind the width parameter of the multi-scale kernel: as the width parameter decreases, this valley of good performance narrows by the expected amount. A similar pattern arose in the $300$-dimensional dataset.
In order to simulate the environment of the embedding experiment that required a validation set of $\sim 10\%$ of the data, we also removed $\sim 10\% $ of our data. The plots in Figure \ref{kernel_scale} demonstrate that even with the data removed for a validation set, the method still significantly beats linear regression trained on the training and validation set, justifying the use of data for parameter tuning. The models in $d=30$ and $d=300$ both reach error rates comparable to the ground truth regressor learned on all $100,000$ data points.
\begin{figure}
\includegraphics[height=2.6in]{figures/metric_mse_kernelScale_10p0_dataName_toy_data_d300_trainingSize_1000_vs_alphaPair_.png}
\caption{Performance of methods on synthetically generated data in $\mathbb{R}^{300}$ as a function of $\alpha_{pair}$, $s=10$.}
\label{alpha_pair}
\end{figure}
Figure \ref{alpha_pair} investigates various settings of $\alpha_{pair}$ and shows that decreasing $\alpha_{pair}$ drives the performance down to the ground-truth level. This trend appears in both the low- and high-dimensional data and suggests that the supervised pre-initialization yields a configuration within the basin of attraction of the true parameters in the vector field $\nabla l_{MMD}$. Thus, only the unsupervised term is needed, as the supervised initialization has already eliminated the ambiguity among the MMD loss-function modes.
\subsection{Embedding Data}
Figure \ref{emb_comp} shows that the semi-supervised MMD-net significantly outperformed standard linear regression, trained on paired datasets of $750$ and $5000$ word-translation pairs, in every frequency bin. Furthermore, this dominance over linear regression follows a similar pattern in the precisions @5 and @10. The method also outperformed several other linear and nonlinear methods, as shown in Table \ref{table:comp}.
\begin{figure*}[t!]
\centering
\begin{subfigure}
\centering
\includegraphics[height=2.5in]{figures/metric_precat1_kernelScale_1p0_alphaPair_0p9_trainingSize_5000_dataName_embeddings_vs_freq_.png}
\end{subfigure}%
\begin{subfigure}
\centering
\includegraphics[height=2.5in]{figures/metric_precat1_kernelScale_1p0_alphaPair_0p9_trainingSize_750_dataName_embeddings_vs_freq_.png}
\end{subfigure}
\caption{Model performance as a function of English word frequency bins using the top 5000 (left) and 750 (right) EN-IT word pairs as training data. Precision@1 refers to the fraction of words correctly translated by the method on held out testing sets.}
\label{emb_comp}
\end{figure*}
\begin{table*}[t]
\caption{Comparison of Precision@1 across different algorithms and dimensionality reduction schemes. PCA S and PCA T refer to projecting the source and target embeddings, respectively, onto their first 270 principal vectors. KR refers to Kernel Ridge Regression and RBF refers to the radial basis function kernel with a heuristically set scale.}
\vskip 0.15in
\begin{center}
\begin{tabular}{l|r|r|r|r|r}
\hline
\abovespace\belowspace
{} & 0-5k & 5k-20k & 20k-50k & 50k-100k & 100k-200k \\
\hline
\abovespace
Linear & 0.228 & 0.052 & 0.028 & 0.015 & 0.011 \\
Linear + PCA S & 0.236 & 0.057 & 0.031 & 0.036 & 0.019 \\
Linear + PCA T & 0.207 & 0.044 & 0.031 & 0.028 & 0.011 \\
Linear + PCA S + T & 0.212 & 0.072 & 0.033 & 0.043 & 0.029 \\
Random Forest & 0.008 & 0.000 & 0.000 & 0.000 & 0.000 \\
KR 2-deg Poly & 0.057 & 0.003 & 0.008 & 0.010 & 0.008 \\
KR 3-deg Poly & 0.049 & 0.005 & 0.003 & 0.013 & 0.008 \\
KR RBF & 0.057 & 0.003 & 0.010 & 0.010 & 0.008 \\
\belowspace
Linear + MMD & \textbf{0.347} & \textbf{0.129} & \textbf{0.099} & \textbf{0.094} & \textbf{0.035} \\
\hline
\end{tabular}
\end{center}
\label{table:comp}
\end{table*}
\section{Discussion and Future Work}
The addition of the MMD cost function term significantly improves regression results in the low-data regime. Furthermore, to the best of the authors' knowledge, this method achieves state-of-the-art results on the embeddings of \cite{lt_trans}. The authors also experimented with deeper nets but did not observe significant performance improvements, consistent with the findings of \cite{google_linear}.
\subsection{Adversarial Distribution Matching}
One promising future direction involves replacing the MMD unsupervised term with a Generative Adversarial Network (GAN) \cite{GAN}. Like the MMD, the GAN loss involves a maximization of a dissimilarity measure over a function class, and it can likewise be used for unsupervised learning of probability distributions. However, the GAN is usually optimized directly by stochastic gradient descent, trading the quadratic time dependence on minibatch size for a linear one. In practice, however, the maximization over the function class (the discriminator) is usually done in $k$ gradient descent steps for every one step of training the distribution-matching net (the generator). Furthermore, the GAN cost function does not depend on a kernel scale.
Analogous to the discriminator in the GAN, we can also learn the MMD adversarially. In this setup, the function class takes the form of a parametrized network. Instead of estimating the supremum of the mean discrepancy over a ball in an RKHS, we would find the supremum through gradient ascent on the network. This would also eliminate the quadratic compute and the dependence on kernel scale. This formulation of the MMD would allow for a more direct comparison between the GAN and MMD loss functions, and warrants future investigation. The two loss functions are inequivalent: the only intersection between $f$-divergences, like the Jensen-Shannon divergence underlying the GAN, and integral probability metrics like the MMD is the total variation distance \cite{mohamed2016learning}. Thus, one might be able to leverage more diverse information by combining the two.
\subsection{Bi-Directional Networks}
In the case of translation between two spaces of equal dimension, the inverse of the translation transformation should also be a translation from the target to the source space. We can capitalize on this observation to further constrain our set of possible translations. This allows the transformation to also draw information from the structure of the source space. More specifically, one can minimize:
\begin{multline}
L = \alpha_{target}\| RT - S \|_{target}^2 + \\ (1-\alpha_{target})\|R-ST^{-1}\|_{source}^2
\end{multline}
where $T \in GL_d$, $\alpha_{target} \in [0,1]$ and $R,S \in \mathbb{R}^{d\times n_{pair}}$. This would result in twice as much supervisory signal and maintain the same number of parameters. Furthermore, this can also be applied in conjunction with the GAN loss. It is also compatible with the pre-initialization scheme. In the case of a more complex nonlinear network where an inverse transformation cannot be easily calculated, the architecture could include an encoder network which maps from the source to the target and a decoding network which maps from the target to the source. These two mappings could then be constrained to be close to mutual inverses through a reconstruction loss penalty.
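A minimal sketch of this two-way loss is given below, assuming embeddings are stored as columns of $R$ and $S$ so that $T$ conforms by left-multiplication (the matrix convention is chosen for dimensional consistency with $R,S \in \mathbb{R}^{d\times n_{pair}}$, up to transposition of the equation above; the default weighting is an arbitrary illustration):

```python
import numpy as np

def bidirectional_loss(T, R, S, alpha_target=0.5):
    """Two-way translation loss: T maps the columns of R toward the columns of S,
    and its inverse maps S back toward R (each column is one word vector)."""
    forward = np.sum((T @ R - S) ** 2)                 # mapped R vs. S
    backward = np.sum((R - np.linalg.inv(T) @ S) ** 2)  # R vs. inverse-mapped S
    return alpha_target * forward + (1.0 - alpha_target) * backward
```

When $T$ is an exact translation (so the mapped columns match), both terms vanish simultaneously, which is the sense in which the backward term doubles the supervisory signal without adding parameters.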
\begin{comment}
\section*{Acknowledgements}
The author would like to acknowledge the Yale Coifman Group for their code contributions and for introducing the author to the MMD net. More specifically, the author would like to thank Kelly Stanton and Uri Shaham for their consistent help with debugging the project and for creating a significant portion of the codebase. The author would also like to thank Yale computational linguistics professor Robert Frank for our many meetings where we discussed the project.
\end{comment}
\section{Introduction}
Reproducibility in Machine Learning, and in Deep Reinforcement Learning (RL) in particular, has become a serious issue in recent years. As pointed out in \citep{islam2017reproducibility} and \citep{henderson2017deep}, reproducing the results of an RL paper can turn out to be much more complicated than expected. Indeed, codebases are not always released and scientific papers often omit parts of the implementation tricks. Recently, Henderson et al. conducted a thorough investigation of various parameters causing this reproducibility crisis. They used trendy deep RL algorithms such as DDPG \citep{lillicrap2015continuous}, ACKTR \citep{wu2017scalable}, TRPO \citep{schulman2015trust} and PPO \citep{schulman2017proximal} with popular OpenAI Gym benchmarks \citep{brockman2016openai} such as Half-Cheetah, Hopper and Swimmer, to study the effects of the codebase, the size of the networks, the activation function, the reward scaling or the random seeds. Among other results, they showed that different implementations of the same algorithm with the same set of hyper-parameters led to drastically different results.
\paragraph{}
Perhaps the most surprising thing is this: running the same algorithm 10 times with the same hyper-parameters using 10 different random seeds, then averaging performance over two splits of 5 seeds, can lead to learning curves seemingly coming from different statistical distributions. Notably, all the deep RL papers reviewed by \citeauthor{henderson2017deep} (theirs included) used 5 seeds or less. Even worse, some papers actually report the average of the best performing runs. As demonstrated in \citep{henderson2017deep}, these methodologies can lead to the claim that two algorithms' performances are different when they are not. A solution to this problem is to use more random seeds, averaging over more trials in order to obtain a more robust measure of the algorithm's performance. But how can one determine how many random seeds should be used? Shall we use 5, 10 or 100, as in \citep{mania2018simple}?
\paragraph{}
This work assumes one wants to test a difference in performance between two algorithms. Section~\ref{sec:def} gives definitions and describes the statistical problem of {\em difference testing} while Section~\ref{sec:test} proposes two statistical tests to answer this problem. In Section~\ref{sec:theory}, we present standard guidelines to choose the sample size so as to meet requirements in the two types of {\em statistical errors}. Finally, we challenge the assumptions made in the previous section and propose guidelines to estimate error rates empirically in Section~\ref{sec:assumptions}. The code is available on Github at \url{https://github.com/flowersteam/rl-difference-testing}.
\begin{figure}[H]
\centering
{\includegraphics[width=0.92\linewidth]{example1_5seeds.png} }
\caption{ \small $Algo1$ versus $Algo2$ are two famous Deep RL algorithms, here tested on the Half-Cheetah benchmark.
The mean and confidence interval for 5 seeds are reported. We might consider that $Algo1$ outperforms $Algo2$ because there is
not much overlap between the $95\%$ confidence intervals. But is it sufficient evidence that $Algo1$ really performs better?
Below, we show that the performances of these algorithms are actually the same, and explain which methods should be used
to have more reliable evidence of the (non-)difference among two algorithms. } \label{fig:ex1_5seeds}
\end{figure}
\section{Definition of the statistical problem}
\label{sec:def}
\subsection{First definitions}
Two runs of the same algorithm often yield different measures of performance. This might be due to various factors such as the seed of the random generators (called {\em random seed} or {\em seed} thereafter), the initial state of the agent, the stochasticity of the environment, etc.
Formally, the performance of an algorithm can be modeled as a {\em random variable} $X$ and running this algorithm in an environment results in a {\em realization} $x^i$.
Repeating the procedure $N$ times, one obtains a statistical {\em sample} $x=(x^1, .., x^N)$. A random variable is usually characterized by its {\em expected value} or {\em mean} $\mu$ and its {\em standard deviation}, noted $\sigma$. While the mean characterizes the expected value of a realization, the standard deviation is the square root of the expected squared deviation from this mean, or in simpler words, it measures how far from the mean realizations are expected to fall. Of course, the values of $\mu$ and $\sigma$ are unknown. The only thing one can do is to compute their unbiased estimations $\overline{x}$ and $s$:
\begin{equation}
\overline{x} \mathrel{\hat=} \frac{1}{N}\sum\limits_{i=1}^N{x^i}, \hspace{1cm} s \mathrel{\hat=}\sqrt{\frac{\sum_{i=1}^{N}(x^i-\overline{x})^2}{N-1}},
\end{equation}
where $\overline{x}$ is called the empirical mean, and $s$ is called the empirical standard deviation. The larger the sample size $N$, the more confident one can be in the estimations.
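These two estimators translate directly into code (a minimal sketch; note the $N-1$ denominator, Bessel's correction, which makes the variance estimate unbiased):

```python
import math

def unbiased_estimates(sample):
    """Empirical mean and (Bessel-corrected) empirical standard deviation."""
    n = len(sample)
    mean = sum(sample) / n
    variance = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return mean, math.sqrt(variance)
```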
\paragraph{}
Here, two algorithms with respective performances $X_1$ and $X_2$ are compared. If $X_1$ and $X_2$ follow normal distributions, the random variable describing their difference $(X_{\textnormal{diff}} = X_1-X_2)$ also follows a normal distribution with parameters ${\sigma_{\textnormal{diff}}=(\sigma_1^2+\sigma_2^2)^{1/2}}$ and $\mu_{\textnormal{diff}}=\mu_1-\mu_2$. In this case, the estimator of the mean of $X_{\textnormal{diff}}$ is $\overline{x}_{\textnormal{diff}} = \overline{x}_1-\overline{x}_2$ and the estimator of ${\sigma_{\textnormal{diff}}}$ is ${s_{\textnormal{diff}}=\sqrt{s_1^2+s_2^2}}$. The {\em effect size} $\epsilon$ can be defined as the difference between the mean performances of both algorithms: ${\epsilon = \mu_1-\mu_2}$.
\paragraph{}
Testing for a difference between the performances of two algorithms ($\mu_1$ and $\mu_2$) is mathematically equivalent to testing for a difference between $\mu_{\textnormal{diff}}$ and $0$. The second point of view is considered from now on. We draw a sample $x_{\textnormal{diff}}$ from $X_{\textnormal{diff}}$ by subtracting two samples $x_1$ and $x_2$ obtained from $X_1$ and $X_2$.
\begin{myex}{Example 1.}
To illustrate difference testing, we use two algorithms ($Algo 1$ and $Algo 2$) and compare them on the Half-Cheetah environment from the OpenAI Gym framework \citep{brockman2016openai}. The algorithms implemented are not so important here and will be revealed later. First, we run a study with $N=5$ random seeds for each. Figure~\ref{fig:ex1_5seeds} shows the average learning curves with $95\%$ confidence intervals. Each point of a learning curve is the average cumulative reward over $10$ evaluation episodes. The {\em measure of performance} $X_i$ of $Algo\hspace{3pt}i$ is the average performance over the last $10$ points (i.e. last $100$ evaluation episodes).
From \figurename~\ref{fig:ex1_5seeds}, it seems that $Algo1$ performs better than $Algo2$. Moreover, the confidence intervals do not seem to overlap much. However, we need to run statistical tests before drawing any conclusion.
\end{myex}
\subsection{Comparing performances with a difference test}
In a {\em difference test}, statisticians define the {\em null hypothesis} $H_0$ and the {\em alternate hypothesis} $H_a$. $H_0$ assumes no difference whereas $H_a$ assumes one:
\begin{itemize}
\item $H_0$: $\mu_{\textnormal{diff}} = 0$
\item $H_a$: $\mu_{\textnormal{diff}} \neq 0$
\end{itemize}
These hypotheses refer to the {\em two-tail} case. When an a priori on which algorithm performs best is available, (say $Algo1$), one can use the {\em one-tail} version:
\begin{itemize}
\item $H_0$: $\mu_{\textnormal{diff}} \leq 0$
\item $H_a$: $\mu_{\textnormal{diff}} > 0$
\end{itemize}
At first, a statistical test always assumes the null hypothesis. Once a sample $x_{\textnormal{diff}}$ is collected from $X_{\textnormal{diff}}$, one can estimate the probability $p$ (called $p$-value) of observing data as extreme, under the null hypothesis assumption. By {\em extreme}, one means far from the null hypothesis ($\overline{x}_{\textnormal{diff}}$ far from $0$). The $p$-value answers the following question: {\em how probable is it to observe this sample or a more extreme one, given that there is no true difference in the performances of both algorithms?} Mathematically, we can write it this way for the one-tail case:
\begin{equation}
p{\normalsize \text{-value}} = P(X_{\textnormal{diff}}\geq \overline{x}_{\textnormal{diff}} \hspace{2pt} |\hspace{2pt} H_0),
\end{equation}
and this way for the two-tail case:
\begin{equation}
p{\normalsize \text{-value}}=\left\{
\begin{array}{ll}
P(X_{\textnormal{diff}}\geq \overline{x}_{\textnormal{diff}} \hspace{2pt} |\hspace{2pt} H_0)\hspace{0.5cm} \textnormal{if} \hspace{5pt} \overline{x}_{\textnormal{diff}}>0\\
P(X_{\textnormal{diff}}\leq \overline{x}_{\textnormal{diff}} \hspace{2pt} |\hspace{2pt} H_0) \hspace{0.5cm} \textnormal{if} \hspace{5pt} \overline{x}_{\textnormal{diff}}\leq0.
\end{array}
\right.
\end{equation}
When this probability becomes really low, it means that it is highly improbable that two algorithms with no performance difference produced the collected sample $x_{\textnormal{diff}}$. A difference is called {\em significant at significance level $\alpha$} when the $p$-value is lower than $\alpha$ in the one-tail case, and lower than $\alpha/2$ in the two-tail case (to account for the two-sided test\footnote{See Wikipedia's article for more details on one-tail versus two-tail tests: \url{https://en.wikipedia.org/wiki/One-_and_two-tailed_tests}}). Usually $\alpha$ is set to $0.05$ or lower. In this case, the low probability to observe the collected sample under hypothesis $H_0$ results in its rejection. Note that a significance level $\alpha=0.05$ still results in $1$ chance out of $20$ of claiming a false positive, i.e. of claiming that there is a true difference when there is not. It is important to note that, when one conducts $N_E$ experiments, the false positive rate grows linearly with the number of experiments. In this case, one should use a correction for multiple comparisons such as the Bonferroni correction $\alpha_{Bon} = \alpha / N_E$ \citep{rice1989analyzing}. This controls the familywise error rate ($FWER$), the probability of rejecting at least one true null hypothesis ($FWER<\alpha$). Its use is discussed in \citep{cabin2000bonferroni}.
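The Bonferroni correction amounts to a one-line decision rule over the $N_E$ collected $p$-values (a sketch):

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Reject H0 for each of the N_E comparisons at corrected level alpha / N_E,
    which bounds the familywise error rate by alpha."""
    corrected = alpha / len(p_values)
    return [p < corrected for p in p_values]
```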
\paragraph{}
Another way to see this, is to consider confidence intervals. Two kinds of confidence intervals can be computed:
\begin{itemize}
\item $CI_1$: The $100\cdot(1-\alpha)\hspace{3pt}\%$ confidence interval for the mean of the difference $\mu_{\textnormal{diff}}$ given a sample $x_{\textnormal{diff}}$ characterized by $\overline{x}_{\textnormal{diff}}$ and $s_{\textnormal{diff}}$.
\item $CI_2$: The $100\cdot(1-\alpha)\hspace{3pt}\%$ confidence interval for any realization of $X_{\textnormal{diff}}$ under $H_0$ (assuming $\mu_{\textnormal{diff}}=0$).
\end{itemize}
Having $CI_2$ that does not include $\overline{x}_{\textnormal{diff}}$ is mathematically equivalent to a $p$-value below $\alpha$. In both cases, it means there is less than a $100\cdot\alpha\%$ chance that $\mu_{\textnormal{diff}}=0$ under $H_0$. When $CI_1$ does not include $0$, we are also $100\cdot(1-\alpha)\hspace{3pt}\%$ confident that $\mu_{\textnormal{diff}}\neq0$, without assuming $H_0$. Proving one of these things leads to the conclusion that the difference is {\em significant at level $\alpha$}.
\subsection{Statistical errors}
In hypothesis testing, the statistical test can conclude $H_0$ or $H_a$ while each of them can be either true or false. There are four cases:
\begin{table}[H]
\centering
\caption{Hypothesis testing}
\label{my-label}
\begin{tabular}{c|c|c|}
\cline{2-3}
predicted/true & $H_0$ & $H_a$ \\ \hline
\multicolumn{1}{|c|}{$H_0$} & \begin{tabular}[c]{@{}c@{}}True negative\\ $1 - \alpha$\end{tabular} & \begin{tabular}[c]{@{}c@{}}False negative\\ $\beta$\end{tabular} \\ \hline
\multicolumn{1}{|c|}{$H_a$} & \begin{tabular}[c]{@{}c@{}}False positive\\ $\alpha$\end{tabular} & \begin{tabular}[c]{@{}c@{}}True positive\\ $1-\beta$\end{tabular} \\ \hline
\end{tabular}
\end{table}
\newpage
\noindent This leads to two types of errors:
\begin{itemize}
\item
The {\bf type-I error} {\bf rejects $H_0$ when it is true}, also called {\em false positive}. This corresponds to claiming the superiority of an algorithm over another when there is no true difference. Note that we call both the significance level and the probability of type-I error $\alpha$ because they both refer to the same concept. Choosing a significance level of $\alpha$ enforces a probability of type-I error $\alpha$, under the assumptions of the statistical test.
\item
The {\bf type-II error} {\bf fails to reject $H_0$ when it is false}, also called {\em false negative}. This corresponds to missing the opportunity to publish an article when there was actually something to be found.
\end{itemize}
\begin{mymes}{mesintro}
\begin{itemize}
\item In the two-tail case, the null hypothesis $H_0$ is $\mu_{\textnormal{diff}}=0$. The alternative hypothesis $H_a$ is $\mu_{\textnormal{diff}}\neq0$.
\item $p{\normalsize \text{-value}} = P(X_{\textnormal{diff}}\geq \overline{x}_{\textnormal{diff}} \hspace{2pt} |\hspace{2pt} H_0)$.
\item A difference is said {\em statistically significant} when a statistical test passed. One can reject the null hypothesis when 1) $p$-value$<\alpha$; 2) $CI_1$ does not contain $0$; 3) $CI_2$ does not contain $\overline{x}_{\textnormal{diff}}$.
\item {\em statistically significant} does not refer to the absolute truth. Two types of error can occur. Type-I error rejects $H_0$ when it is true. Type-II error fails to reject $H_0$ when it is false.
\item The rate of false positives is 1 out of 20 for $\alpha=0.05$. It grows linearly with the number of experiments $N_E$. Correction procedures can be applied to account for multiple comparisons.
\end{itemize}
\end{mymes}
\section{Choice of the appropriate statistical test}
\label{sec:test}
In statistics, a difference cannot be proven with $100\%$ confidence. To show evidence for a difference, we use statistical tests. All statistical tests make assumptions that allow them to evaluate either the $p$-value or one of the confidence intervals described in the Section~\ref{sec:def}. The probability of the two error types must be constrained, so that the statistical test produces reliable conclusions. In this section we present two statistical tests for difference testing. As recommended in \cite{henderson2017deep}, the two-sample t-test and the bootstrap confidence interval test can be used for this purpose\footnote{Henderson et al. also advised for the {\bf Kolmogorov-Smirnov test} which tests whether two samples comes from the same distribution. This test should not be used to compare RL algorithms because it is unable to prove any order relation.}.
\subsection{T-test and Welch's t-test}
\label{sec:ttest}
We want to test the hypothesis that two populations have equal means (null hypothesis $H_0$). A 2-sample t-test can be used when the variances of both populations (both algorithms) are assumed equal. However, this assumption rarely holds when comparing two different algorithms (e.g. DDPG vs TRPO). In this case, an adaptation of the 2-sample t-test for unequal variances called Welch's $t$-test should be used \citep{welch1947generalization}. Both tests are strictly equivalent when the standard deviations are equal. $T$-tests make a few assumptions:
\begin{itemize}
\item The scale of data measurements must be continuous and ordinal (can be ranked). This is the case in RL.
\item Data is obtained by collecting a representative sample from the population. This seems reasonable in RL.
\item Measurements are independent from one another. This seems reasonable in RL.
\item Data is normally distributed, or at least bell-shaped. The normal law being a mathematical concept involving infinity, nothing is ever perfectly normally distributed. Moreover, measurements of algorithm performance might follow multi-modal distributions. In Section~\ref{sec:assumptions}, we investigate the effects of deviations from normality.
\end{itemize}
Under these assumptions, one can compute the $t$-statistic $t$ and the degree of freedom $\nu$ for the Welch's $t$-test as estimated by the Welch–Satterthwaite equation, such as:
\begin{equation}
t = \frac{\overline{x}_{\textnormal{diff}}}{\sqrt{\frac{s^2_1}{N_1}+\frac{s^2_2}{N_2}}}, \hspace{1cm} \nu \approx \frac{\Big(\frac{s^2_1}{N_1}+\frac{s^2_2}{N_2}\Big)^2}{\frac{s^4_1}{N^2_1(N_1-1)}+\frac{s^4_2}{N^2_2(N_2-1)}},
\end{equation}
with $\overline{x}_{\textnormal{diff}} = \overline{x}_1-\overline{x}_2$; $s_1, s_2$ the empirical standard deviations of the two samples, and $N_1, N_2$ their sizes. Sample sizes are assumed equal $(N_1=N_2=N)$ thereafter. The $t$-statistic is assumed to follow a $t$-distribution, which is bell-shaped and whose width depends on the degree of freedom. The higher this degree, the thinner the distribution.
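The two formulas above can be sketched directly (a minimal Python illustration, not the implementation used for the experiments):

```python
import math

def welch_statistics(x1, x2):
    """t-statistic and Welch-Satterthwaite degrees of freedom for two samples."""
    n1, n2 = len(x1), len(x2)
    m1, m2 = sum(x1) / n1, sum(x2) / n2
    # Unbiased (Bessel-corrected) variance estimates.
    v1 = sum((x - m1) ** 2 for x in x1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in x2) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    nu = se2 ** 2 / (v1 ** 2 / (n1 ** 2 * (n1 - 1)) + v2 ** 2 / (n2 ** 2 * (n2 - 1)))
    return t, nu
```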
\paragraph{}
\figurename~\ref{fig:test_visual} helps making sense of these concepts. It represents the distribution of the $t$-statistics corresponding to $X_{\textnormal{diff}}$, under $H_0$ (left distribution) and under $H_a$ (right distribution). $H_0$ assumes $\mu_{\textnormal{diff}}=0$, the distribution is therefore centered on 0. $H_a$ assumes a (positive) difference $\mu_{\textnormal{diff}}=\epsilon$, the distribution is therefore shifted by the $t$-value corresponding to $\epsilon$, $t_\epsilon$. Note that we consider the one-tail case here, and test for a positive difference.
\paragraph{}
A $t$-distribution is defined by its {\em probability density function} $T_{distrib}^{\nu}(\tau)$ (left curve in \figurename~\ref{fig:test_visual}), which is parameterized by $\nu$. The {\em cumulative distribution function} $CDF_{H_0}(t)$ evaluates the area under $T_{distrib}^{\nu}(\tau)$ from $\tau=-\infty$ to $\tau=t$. This allows us to write:
\begin{equation}
p\textnormal{\small-value} = 1-CDF_{H_0}(t) = 1-\int_{-\infty}^{t} T_{distrib}^{\nu}(\tau) \cdot d\tau.
\end{equation}
\begin{figure}[H]
\centering
{\includegraphics[width=0.9\linewidth]{test_visual.png} }
\caption{\small Representation of $H_0$ and $H_a$ under the $t$-test assumptions. Areas under the distributions represented in red, dark blue and light blue correspond to the probability of type-I error $\alpha$, type-II error $\beta$ and the statistical power $1-\beta$ respectively. \label{fig:test_visual}}
\end{figure}
In \figurename~\ref{fig:test_visual}, $t_\alpha$ represents the critical $t$-value to satisfy the significance level $\alpha$ in the one-tail case. When $t=t_\alpha$, $p$-value$=\alpha$. When $t>t_\alpha$, the $p$-value is lower than $\alpha$ and the test rejects $H_0$. On the other hand, when $t$ is lower than $t_\alpha$, the $p$-value is superior to $\alpha$ and the test fails to reject $H_0$. As can be seen in the figure, setting the threshold at $t_\alpha$ might also cause an error of type-II. The rate of this error ($\beta$) is represented by the dark blue area: under the hypothesis of a true difference $\epsilon$ (under $H_a$, right distribution), we fail to reject $H_0$ when $t$ is inferior to $t_\alpha$. $\beta$ can therefore be computed mathematically using the $CDF$:
\begin{equation}
\beta = CDF_{H_a}(t_\alpha) = \int_{-\infty}^{t_\alpha} T_{distrib}^{\nu}(\tau-t_{\epsilon}) \cdot d\tau.
\end{equation}
Using the translation properties of integrals, we can rewrite $\beta$ as:
\begin{equation}
\beta = CDF_{H_0}(t_\alpha-t_{\epsilon}) = \int_{-\infty}^{t_\alpha-t_{\epsilon}} T_{distrib}^{\nu}(\tau) \cdot d\tau.
\end{equation}
\noindent The procedure to run a Welch's $t$-test given two samples $(x_1, x_2)$ is:
\begin{itemize}
\item Computing the degree of freedom $\nu$ and the $t$-statistic $t$ based on $s_1$, $s_2$, $N$ and $\overline{x}_{\textnormal{diff}}$.
\item Looking up the $t_\alpha$ value for the degree of freedom $\nu$ in a $t$-table\footnote{Available at \url{http://www.sjsu.edu/faculty/gerstman/StatPrimer/t-table.pdf}.} or by evaluating the inverse of the $CDF$ function in $\alpha$.
\item Compare the $t$-statistic to $t_\alpha$. The difference is said statistically significant ($H_0$ rejected) at level $\alpha$ when $t\geq t_\alpha$.
\end{itemize}
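In practice, the three steps above collapse into a single library call. Assuming SciPy is available, \texttt{scipy.stats.ttest\_ind} with \texttt{equal\_var=False} runs the Welch variant and returns the two-tail $p$-value directly, which can then be compared to $\alpha$:

```python
from scipy.stats import ttest_ind

def welch_test(x1, x2, alpha=0.05):
    """Two-tail Welch's t-test: returns (t, p-value, reject H0 at level alpha)."""
    t, p = ttest_ind(x1, x2, equal_var=False)  # Welch variant (unequal variances)
    return t, p, p < alpha
```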
\paragraph{}
Note that $t<t_\alpha$ does not mean there is no difference between the performances of both algorithms. It only means there is not enough evidence to prove its existence with $100 \cdot (1-\alpha)\%$ confidence (it might be a type-II error). Noise might hinder the ability of the test to detect the difference. In this case, increasing the sample size $N$ could help uncover the difference.
\paragraph{}
Selecting the significance level $\alpha$ of the $t$-test enforces the probability of type-I error to $\alpha$. However, \figurename~\ref{fig:test_visual} shows that decreasing this probability boils down to increasing $t_\alpha$, which in turn increases the probability of type-II error $\beta$. One can decrease $\beta$ while keeping $\alpha$ constant by increasing the sample size $N$. This way, the estimation $\overline{x}_{\textnormal{diff}}$ of $\mu_{\textnormal{diff}}$ gets more accurate, which translates into thinner distributions in the figure, resulting in a smaller $\beta$. The next section gives standard guidelines to select $N$ so as to meet requirements for both $\alpha$ and $\beta$.
\subsection{Bootstrapped confidence intervals}
Bootstrapped confidence interval is a method that does not make any assumption on the distribution of $X_{\textnormal{diff}}$. It estimates the confidence interval $CI_1$ for $\mu_{\textnormal{diff}}$, given a sample $x_{\textnormal{diff}}$ characterized by its empirical mean $\overline{x}_{\textnormal{diff}}$. It is done by re-sampling inside $x_{\textnormal{diff}}$ and by computing the mean of each newly generated sample. The test makes its decision based on whether the confidence interval of $\overline{x}_{\textnormal{diff}}$ contains $0$ or not. It does not compute a $p$-value as such.
\paragraph{}
Without any assumption on the data distribution, an analytical confidence interval cannot be computed. Here, $X_{\textnormal{diff}}$ follows an unknown distribution $F$. An estimation of the confidence interval $CI_1$ can be computed using the {\em bootstrap principle}.
\paragraph{}
Let us say we have a sample $x_{\textnormal{diff}}$ made of $N$ measures of performance difference. The empirical bootstrap sample $x^*_{\textnormal{diff}}$ of size $N$ is obtained by sampling with replacement inside $x_{\textnormal{diff}}$. The bootstrap principle then says that, for any statistic $u$ computed on the original sample and $u^*$ computed on the bootstrap sample, variations in $u$ are well approximated by variations in $u^*$\footnote{More explanations and justifications can be found in \url{https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf}.}. Therefore, variations of the empirical mean such as its range can be approximated by variations of the bootstrapped samples. The bootstrap confidence interval test assumes the sample size is large enough to represent the underlying distribution correctly, although this might be difficult to achieve in practice. Deviations from this assumption are discussed in Section~\ref{sec:assumptions}. Under this assumption, the bootstrap test procedure looks like this:
\begin{itemize}
\item Generate $B$ bootstrap samples of size $N$ from the original sample $x_1$ of $Algo1$ and $B$ samples from the original sample $x_2$ of $Algo2$.
\item Compute the empirical mean for each sample: $\mu^1_1, \mu^2_1, ..., \mu^B_1$ and $\mu^1_2, \mu^2_2, ..., \mu^B_2$
\item Compute the differences $\mu_{\textnormal{diff}}^{1:B} = \mu_1^{1:B}-\mu_2^{1:B}$
\item Compute the bootstrapped confidence interval at $100\cdot(1-\alpha)\%$. This is the range between the $100 \cdot\alpha/2$ and $100\cdot(1-\alpha/2)$ percentiles of the vector $\mu_{\textnormal{diff}}^{1:B}$ (e.g. for $\alpha=0.05$, the range between the $2.5^{th}$ and the $97.5^{th}$ percentiles).
\end{itemize}
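The four steps above can be sketched as follows ($B$ and the random seed are illustrative choices; a percentile bootstrap, not the exact implementation cited in the footnote below):

```python
import random

def bootstrap_diff_ci(x1, x2, b=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the difference of means."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(b):
        # Resample each original sample with replacement, then take the means.
        m1 = sum(rng.choices(x1, k=len(x1))) / len(x1)
        m2 = sum(rng.choices(x2, k=len(x2))) / len(x2)
        diffs.append(m1 - m2)
    diffs.sort()
    lower = diffs[int(b * alpha / 2)]
    upper = diffs[int(b * (1 - alpha / 2)) - 1]
    return lower, upper
```

If the returned interval excludes $0$, the test concludes a significant difference at level $\alpha$.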
\paragraph{}
The number of bootstrap samples $B$ should be chosen large (e.g. $>1000$). If the confidence interval does not contain $0$, it means that one can be confident at $100 \cdot (1-\alpha)\%$ that the difference is either positive (both bounds positive) or negative (both bounds negative), thus, that there is a statistically significant difference between the performances of both algorithms\footnote{An implementation of the bootstrap confidence interval test can be found at \url{https://github.com/facebookincubator/bootstrapped}.}.
\begin{myex}{Example 1 (continued).}
Here, the type-I error requirement is set to $\alpha=0.05$. Running the Welch's $t$-test and the bootstrap confidence interval test with two samples ($x_1,x_2$) of $5$ seeds each leads to a $p$-value of $0.031$ and a bootstrap confidence interval such that $P\big(\mu_{\textnormal{diff}} \in [259, 1564]\big) = 0.95$. Since the $p$-value is below the significance level $\alpha$ and the $CI_1$ confidence interval does not include $0$, both tests passed. This means both tests found a significant difference between the performances of $Algo1$ and $Algo2$ with $95\%$ confidence. There was only a $5\%$ chance of concluding a significant difference if none existed.
\paragraph{}
In fact, we did encounter a type-I error. We know this for sure because {\bf {\em Algo 1} and {\em Algo 2} were the exact same algorithm.} They are both the canonical implementation of DDPG \citep{lillicrap2015continuous} from the OpenAI baselines \citep{baselines}. The first conclusion was wrong, we committed a type-I error, rejecting $H_0$ when it was true. We knew this could happen with probability $\alpha=0.05$. Section~\ref{sec:assumptions} shows that this probability might have been under-evaluated because of the assumptions made by the statistical tests.
\end{myex}
\begin{mymes}{mes1}
\begin{itemize}
\item $T$-tests assume $t$-distributions of the $t$-values. Under some assumptions, they can compute analytically the $p$-value and the confidence interval $CI_2$ at level $\alpha$.
\item The Welch's $t$-test does not assume both algorithms have equal variances but the $t$-test does.
\item The bootstrapped confidence interval test does not make assumptions on the performance distribution and estimates empirically the confidence interval $CI_1$ at level $\alpha$.
\item Selecting a test with a significance level $\alpha$ enforces a type-I error $\alpha$ when the assumptions of the test are verified.
\end{itemize}
\end{mymes}
\section{In theory: power analysis for the choice of the sample size}
\label{sec:theory}
In Section~\ref{sec:test}, we saw that $\alpha$ was enforced by the choice of the significance level in the test implementation. The second type of error $\beta$ must now be estimated. $\beta$ is the probability to fail to reject $H_0$ when $H_a$ is true. When the effect size $\epsilon$ and the probability of type-I error $\alpha$ are kept constant, $\beta$ is a function of the sample size $N$. Choosing $N$ so as to meet requirements on $\beta$ is called {\em statistical power analysis}. It answers the question: {\em what sample size do I need to have $1-\beta$ chance to detect an effect size $\epsilon$, using a test with significance level $\alpha$?} The next paragraphs present guidelines to choose $N$ in the context of a Welch's $t$-test.
\noindent As we saw in Section \ref{sec:ttest}, $\beta$ can be analytically computed as:
\begin{equation}
\label{eq:beta}
\beta = CDF_{H_0}(t_\alpha-t_{\epsilon}) = \int_{-\infty}^{t_\alpha-t_{\epsilon}} T_{distrib}^{\nu}(\tau) \cdot d\tau,
\end{equation}
where $CDF_{H_0}$ is the cumulative distribution function of a $t$-distribution centered on $0$, $t_\alpha$ is the critical value for significance level $\alpha$ and $t_\epsilon$ is the $t$-value corresponding to an effect size $\epsilon$. In the end, $\beta$ depends on $\alpha$, $\epsilon$, the empirical standard deviations ($s_1$, $s_2$) computed on the two samples ($x_1$, $x_2$), and the sample size $N$.
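To make this computation concrete, here is a minimal, standard-library-only Python sketch of Equation~\ref{eq:beta} for a one-sided Welch's $t$-test. All function names are our own, not taken from any released code; the $t$-distribution CDF is computed by simple numerical integration to keep the sketch self-contained.

```python
import math

def t_pdf(x, nu):
    """Density of Student's t-distribution with nu degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return c * (1.0 + x * x / nu) ** (-(nu + 1) / 2)

def t_cdf(x, nu, steps=4000):
    """CDF of the t-distribution, by trapezoidal integration from the far left tail."""
    lo = -40.0  # probability mass below -40 is negligible here
    if x <= lo:
        return 0.0
    h = (x - lo) / steps
    total = 0.5 * (t_pdf(lo, nu) + t_pdf(x, nu))
    for i in range(1, steps):
        total += t_pdf(lo + i * h, nu)
    return h * total

def t_ppf(p, nu):
    """Inverse CDF by bisection (t_cdf is monotonically increasing)."""
    lo, hi = -40.0, 40.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if t_cdf(mid, nu) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def welch_beta(s1, s2, n, eps, alpha=0.05):
    """Type-II error of a one-sided Welch's t-test, following Equation (beta).

    s1, s2 : empirical standard deviations from the pilot study
    n      : sample size per algorithm
    eps    : effect size one wants to be able to detect
    """
    se = math.sqrt(s1 ** 2 / n + s2 ** 2 / n)   # std of the mean difference
    # Welch-Satterthwaite approximation of the degrees of freedom
    nu = (s1 ** 2 / n + s2 ** 2 / n) ** 2 / (
        (s1 ** 2 / n) ** 2 / (n - 1) + (s2 ** 2 / n) ** 2 / (n - 1))
    t_alpha = t_ppf(1.0 - alpha, nu)            # critical value at level alpha
    t_eps = eps / se                            # t-value of the target effect size
    return t_cdf(t_alpha - t_eps, nu)
```

With the pilot estimates of Example 2 ($s_1=1341$, $s_2=990$, $\epsilon=1382$), this gives $\beta\approx0.51$ for $N=5$ and $\beta\approx0.19$ for $N=10$, matching the values reported in the example below.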
\begin{myex}{Example 2.}
To illustrate, we compare two DDPG variants: one with action perturbations ($Algo1$) \citep{lillicrap2015continuous}, the other with parameter perturbations ($Algo2$) \citep{plappert2017parameter}. Both algorithms are evaluated in the Half-Cheetah environment from the OpenAI Gym framework \citep{brockman2016openai}.
\end{myex}
\subsection{Step 1 - Running a pilot study}
To compute $\beta$, we need estimates of the standard deviations of the two algorithms ($s_1, s_2$). In this step, the algorithms are run in the environment to gather two samples $x_1$ and $x_2$ of size $n$. From there, we can compute the empirical means $(\overline{x}_1, \overline{x}_2)$ and standard deviations $(s_1, s_2)$.
\begin{myex}{Example 2 (continued).}
Here we run both algorithms with $n=5$. We find empirical means $(\overline{x}_1, \overline{x}_2) = (3523, 4905)$ and empirical standard deviations $(s_1, s_2) = (1341, 990)$ for $Algo1$ (blue) and $Algo2$ (red) respectively. From \figurename~\ref{fig:ex2_5}, it seems there is a slight difference in the mean performances $\overline{x}_{\textnormal{diff}} =\overline{x}_2-\overline{x}_1 >0$.
\paragraph{}
Running preliminary statistical tests at level $\alpha=0.05$ leads to a $p$-value of $0.1$ for the Welch's $t$-test, and a bootstrapped confidence interval of $CI_1=[795, 2692]$ for the value of $\overline{x}_{\textnormal{diff}} = 1382$. The Welch's $t$-test does not reject $H_0$ ($p$-value$>\alpha$) but the bootstrap test does ($0\not\in CI_1$). One should compute $\beta$ to estimate the chance that the Welch's $t$-test missed an underlying performance difference (type-II error).
\end{myex}
\begin{figure}[H]
\centering
{\includegraphics[width=\linewidth]{example2_5seeds.png} }
\caption{\small DDPG with action perturbation versus DDPG with parameter perturbation tested in Half-Cheetah. Mean and $95\%$ confidence interval computed over $5$ seeds are reported. The figure shows a small difference in the empirical mean performances. \label{fig:ex2_5}}
\end{figure}
\subsection{Step 2 - Choosing the sample size}
Given a statistical test (Welch's $t$-test), a significance level $\alpha$ (e.g. $\alpha=0.05$) and empirical estimations of the standard deviations of $Algo1$ and $Algo2$ ($s_1,s_2$), one can compute $\beta$ as a function of the sample size $N$ and the effect size $\epsilon$ one wants to be able to detect.
\begin{myex}{Example 2 (continued).}
For $N$ in $[2,50]$ and $\epsilon$ in $[0.1,..,1]\times\overline{x}_1$, we compute $t_\alpha$ and $\nu$ using the formulas given in Section \ref{sec:ttest}, as well as $t_{\epsilon}$ for each $\epsilon$. Finally, we compute the corresponding probability of type-II error $\beta$ using Equation~\ref{eq:beta}. \figurename~\ref{fig:beta} shows the evolution of $\beta$ as a function of $N$ for the different $\epsilon$. Considering the semi-dashed black line for $\epsilon=\overline{x}_{\textnormal{diff}}=1382$, we find $\beta=0.51$ for $N=5$: there is a $51\%$ chance of making a type-II error when trying to detect an effect $\epsilon=1382$. To meet the requirement $\beta=0.2$, $N$ should be increased to $N=10$ ($\beta=0.19$).
\end{myex}
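For a quick back-of-the-envelope version of this sweep, one can replace the $t$-distribution by a standard normal. This is a sketch under that simplifying assumption (the names are ours); it slightly underestimates $\beta$, and hence the required $N$, when samples are small.

```python
import math

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_beta(s1, s2, n, eps, z_alpha=1.645):
    """Normal approximation of the one-sided type-II error.

    Replaces the t-distribution by a standard normal, which slightly
    underestimates beta (and therefore N) when n is small.
    z_alpha = 1.645 is the one-sided critical value at alpha = 0.05.
    """
    se = math.sqrt(s1 ** 2 / n + s2 ** 2 / n)
    return norm_cdf(z_alpha - eps / se)

def required_n(s1, s2, eps, beta_req=0.2, n_max=50):
    """Smallest sample size meeting the requirement beta <= beta_req."""
    for n in range(2, n_max + 1):
        if approx_beta(s1, s2, n, eps) <= beta_req:
            return n
    return None
```

With $(s_1, s_2) = (1341, 990)$ and $\epsilon = 1382$, this approximation prescribes $N = 9$ (just under the $\beta=0.2$ threshold), whereas the exact $t$-based computation in the example above gives $N = 10$: the normal approximation is slightly optimistic for small samples.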
\begin{figure}[H]
\centering
{\includegraphics[width=0.8\linewidth]{beta_paper.png} }
\caption{\small Evolution of the probability of type-II error as a function of the sample size $N$ for various effect sizes $\epsilon$, when $(s_1,s_2)= (1341, 990)$ and $\alpha=0.05$. The requirement $0.2$ is represented by the horizontal dashed black line. The curve for $\epsilon=\overline{x}_{\textnormal{diff}}$ is represented by the semi-dashed black line. \label{fig:beta}}
\end{figure}
In our example, we find that $N=10$ was enough to be able to detect an effect size $\epsilon=1382$ with a Welch's $t$-test, using significance level $\alpha$ and the empirical estimations $(s_1, s_2) = (1341, 990)$. However, let us keep in mind that these computations use various approximations ($\nu, s_1, s_2$) and make assumptions about the shape of the distribution of $t$-values. Section~\ref{sec:assumptions} investigates the influence of these assumptions.
\subsection{Step 3 - Running the statistical tests}
Both algorithms should be run so as to obtain a sample $x_{\textnormal{diff}}$ of size $N$. The statistical tests can be applied.
\begin{myex}{Example 2 (continued).}
Here, we take $N=10$ and run both the Welch's $t$-test and the bootstrap test. We now find empirical means $(\overline{x}_1, \overline{x}_2) = (3690, 5323)$ and empirical standard deviations $(s_1, s_2) = (1086, 1454)$ for $Algo1$ and $Algo2$ respectively. Both tests reject $H_0$, with a $p$-value of $0.0037$ for the Welch's $t$-test and a confidence interval $\mu_{\textnormal{diff}} \in [732,2612]$ for the bootstrap test. In \figurename~\ref{fig:ex2}, plots for $N=5$ and $N=10$ can be compared. With a larger number of seeds, the difference that was not found significant with $N=5$ is now clearly visible: the estimate $\overline{x}_{\textnormal{diff}}$ is more robust, and more evidence is available to support the claim that $Algo2$ outperforms $Algo1$, which translates into the tighter confidence intervals shown in the figure.
\end{myex}
\begin{figure}[!ht]
\centering
{\includegraphics[width=\linewidth]{example_2.png} }
\caption{\small Performance of DDPG with action perturbation ($Algo1$) and parameter perturbation ($Algo2$) with $N=5$ seeds (left) and $N=10$ seeds (right). The $95\%$ confidence intervals on the right are smaller, because more evidence is available ($N$ larger). The underlying difference appears when $N$ grows. \label{fig:ex2}}
\end{figure}
\begin{mymes}{last}
Given a sample size $N$, a minimum effect size $\epsilon$ to detect, and a requirement $\alpha$ on the type-I error, the probability of type-II error $\beta$ can be computed. This computation relies on the assumptions of the $t$-test.
The sample size $N$ should be chosen so as to meet the requirements on $\beta$.
\end{mymes}
\section{In practice: influence of deviations from assumptions}
\label{sec:assumptions}
Under their respective assumptions, the $t$-test and bootstrap test enforce the probability of type-I error to the selected significance level $\alpha$. These assumptions should be carefully checked if one wants to report the probability of errors accurately. First, we propose an empirical evaluation of the type-I error based on experimental data, and show that: 1) the bootstrap test is sensitive to small sample sizes; 2) the $t$-test might slightly under-evaluate the type-I error for non-normal data. Second, we show that inaccuracies in the estimation of the empirical standard deviations $s_1$ and $s_2$, due to a low sample size, can lead to large errors in the computation of $\beta$, which in turn leads to under-estimating the sample size required for the experiment.
\subsection{Empirical estimation of the type-I error}
Remember, type-I errors occur when the null hypothesis $H_0$ is rejected in favor of the alternative hypothesis $H_a$ while $H_0$ is in fact correct. Given the sample size $N$, the probability of type-I error can be estimated as follows:
\begin{itemize}
\item Run $2 \times N$ trials of a given algorithm. This ensures that $H_0$ is true, because all measurements come from the same distribution.
\item Randomly split the $2 \times N$ performance measures into two samples of size $N$, and consider them as coming from two different algorithms.
\item Test for a difference between the two fictive algorithms and record the outcome.
\item Repeat this procedure $T$ times (e.g.\ $T=1000$).
\item Compute the proportion of times $H_0$ was rejected. This is the empirical estimate of $\alpha$.
\end{itemize}
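The procedure above can be sketched in a few lines of standard-library Python, here with the bootstrap confidence-interval test as the underlying test (all names are our own; a Welch's $t$-test could be plugged in the same way):

```python
import random
import statistics

def bootstrap_ci_diff(x1, x2, alpha=0.05, B=500, rng=random):
    """Bootstrapped (1 - alpha) confidence interval for mean(x2) - mean(x1)."""
    diffs = []
    for _ in range(B):
        m1 = statistics.mean(rng.choices(x1, k=len(x1)))  # resample with replacement
        m2 = statistics.mean(rng.choices(x2, k=len(x2)))
        diffs.append(m2 - m1)
    diffs.sort()
    return diffs[int(B * alpha / 2)], diffs[int(B * (1 - alpha / 2)) - 1]

def empirical_alpha(perf, n, T=1000, rng=random):
    """Empirical false positive rate: split measurements of ONE algorithm
    into two fictive algorithms (so H0 is true) and count rejections."""
    rejections = 0
    for _ in range(T):
        sample = rng.sample(perf, 2 * n)  # two random splits of size n
        lo, hi = bootstrap_ci_diff(sample[:n], sample[n:], rng=rng)
        if not (lo <= 0.0 <= hi):         # 0 outside the CI: H0 rejected
            rejections += 1
    return rejections / T
```

Applied to a pool of performance measures from a single algorithm, this yields an empirical estimate of the false positive rate; in the experiment of Example 3 below, that estimate exceeds the nominal $\alpha=0.05$ for small $N$.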
\begin{myex}{Example 3.}
We use $Algo1$ from Example 2. From $42$ available measures of performance, the above procedure is run for $N$ in $[2,21]$. \figurename~\ref{fig:empirical_alpha} presents the results. For small values of $N$, empirical estimations of the false positive rate are much larger than the supposedly enforced value $\alpha=0.05$.
\end{myex}
\begin{figure}[H]
\centering
{\includegraphics[width=0.80\linewidth]{empirical_alpha.png} }
\caption{\small Empirical estimations of the false positive rate on experimental data (Example 3) when $N$ varies, using the Welch's $t$-test (blue) and the bootstrap confidence interval test (orange). \label{fig:empirical_alpha}}
\end{figure}
\paragraph{}
In our experiment, the bootstrap confidence interval test should not be used with small sample sizes ($<10$): even around this sample size, the true probability of type-I error ($\approx10\%$) is under-evaluated by the test ($5\%$). The Welch's $t$-test controls for this effect, because the test is much harder to pass when $N$ is small (due to the increase of $t_\alpha$). However, the true (empirical) false positive rate might still be slightly under-evaluated. In this case, we might want to set the significance level to $\alpha<0.05$ to make sure the true false positive rate stays below $0.05$. In the bootstrap test, the error is due to the inability of small samples to correctly represent the underlying distribution, which impairs the enforcement of the false positive rate at the significance level $\alpha$. Concerning the Welch's $t$-test, the error might be due to the non-normality of our data (whose histogram seems to reveal a bimodal distribution). In Example 1, we used $N=5$ and encountered a type-I error. We can see in \figurename~\ref{fig:empirical_alpha} that the probability of this happening was around $10\%$ for the bootstrap test and above $5\%$ for the Welch's $t$-test.
\subsection{Influence of the empirical standard deviations}
The Welch's $t$-test computes the $t$-statistic and the degrees of freedom $\nu$ based on the sample size $N$ and the empirical estimations of the standard deviations, $s_1$ and $s_2$. When $N$ is low, the estimations $s_1$ and $s_2$ under-estimate the true standard deviations on average. Under-estimating $(s_1,s_2)$ leads to a smaller $\nu$ and a lower $t_\alpha$, which in turn leads to lower estimations of $\beta$. Finally, finding a lower $\beta$ leads to selecting a smaller sample size $N$ to meet the $\beta$ requirements. Let us investigate how big this effect can be. In \figurename~\ref{fig:std}, one estimates the standard deviation of a normally distributed variable $\mathcal{N}(0,1)$. The empirical estimation $s$ is quite variable and underestimates $\sigma=1$ on average.
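This bias is easy to reproduce with a short Monte Carlo simulation (a minimal sketch; the function name is our own):

```python
import random
import statistics

def mean_empirical_std(n, trials=20000, rng=None):
    """Average sample standard deviation of n draws from N(0, 1).

    Even with Bessel's correction, the sample standard deviation
    underestimates sigma = 1 on average; the bias shrinks as n grows.
    """
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(trials):
        draws = [rng.gauss(0.0, 1.0) for _ in range(n)]
        total += statistics.stdev(draws)  # stdev uses the n-1 denominator
    return total / trials
```

For $n=5$ this returns about $0.94$, i.e.\ an average error close to the $-0.059$ quoted in the text; for $n=20$ the bias falls below $0.02$.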
\begin{figure}[h]
\centering
{\includegraphics[width=\linewidth]{std_bias.png} }
\caption{ \small Empirical standard deviation of $X\sim\mathcal{N}(0,1)$. The true standard deviation $\sigma=1$ is represented in red. Mean +/- std are shown.} \label{fig:std}
\end{figure}
We consider estimations of the false negative rate as a function of $N$ when comparing two normal distributions ($\sigma=1$), one centered on $3$, the other on $3+\epsilon$. When we select $n=5$ for a preliminary study and compute the estimations ($s_1,s_2$) from this sample, the average error is mean($s_{n=5}$)$-\sigma=-0.059$ (see \figurename~\ref{fig:std} above). One could also make larger errors: mean($s_{n=5}$)$-$std($s_{n=5}$)$-\sigma=-0.40$ from the same figure.
\begin{figure}[ht]
\centering
{\includegraphics[width=\linewidth]{comp_beta.png} }
\caption{ \small Evolution of the probability of type-II error as a function of the sample size $N$ and the effect size $\epsilon$, when $(s_1,s_2)= (1-error, 1-error)$ and $\alpha=0.05$. Left: $error=0$, this is the ideal case. Right: $error=0.40$, a large error that can be made when evaluating $s$ over $n=5$ samples. The compared distributions are normal, one is centered on $3$, the other on $3+\epsilon$. \label{fig:combeta} }
\end{figure}
\paragraph{}
\figurename~\ref{fig:combeta} shows the effect of an error of $0.40$ on the evaluation of $\beta$. One can see that, to detect an effect size $\epsilon=0.9$ (green curve) and meet a requirement $\beta=0.2$, one would choose $N=17$ when the standard deviations are correctly estimated (left) but $N=7$ when they are under-evaluated (right). When the number of samples $n$ available in the preliminary study to compute $(s_1,s_2)$ grows, the under-estimation shrinks, both on average and in the worst case. This, in turn, reduces the inaccuracy in the estimation of $\beta$ and therefore in the required $N$. Another solution is to systematically choose a sample size larger than what is prescribed by the computation of $\beta$.
\begin{mymes}{Caution}
\begin{itemize}
\item One should not blindly believe the results of statistical tests. These tests are based on assumptions that are not always reasonable.
\item $\alpha$ must be empirically estimated, as the statistical tests might underestimate it, because of wrong assumptions about the underlying distributions or because of the small sample size.
\item The bootstrap test evaluation of type-I error is strongly dependent on the sample size. A bootstrap test should not be used with less than $20$ samples.
\item The inaccuracies in the estimation of the standard deviations of the algorithms ($s_1,s_2$), due to small sample sizes $n$ in the preliminary study, lead to under-estimating the sample size $N$ required to meet the requirements on type-II errors.
\end{itemize}
\end{mymes}
\newpage
\section{Conclusion}
In this paper, we outlined the statistical problems that arise when comparing the performance of two RL algorithms. We defined type-I and type-II errors and proposed appropriate statistical tests to test for a performance difference. Finally, and most importantly, we detailed how to pick the right number of random seeds (the sample size) so as to meet the requirements on both error types.
\paragraph{}
The most important part is what came after. We challenged the hypotheses made by the Welch's $t$-test and the bootstrap test and found several problems. First, we showed a significant difference between empirical estimations of the false positive rate in our experiment and the theoretical values supposedly enforced by both tests. As a result, the bootstrap test should not be used with less than $N=20$ samples, and a tighter significance level should be used to enforce a reasonable false positive rate ($<0.05$). Second, we showed that the estimation of the sample size $N$ required to meet requirements on the type-II error is strongly dependent on the accuracy of ($s_1,s_2$). To compensate for the under-estimation of $N$, $N$ should be chosen systematically larger than what the power analysis prescribes.
\begin{myrecom}{Final recommendations}
\begin{itemize}
\item Use the Welch's $t$-test over the bootstrap confidence interval test.
\item Set the significance level of a test to lower values ($\alpha<0.05$) so as to make sure the probability of type-I error (the empirical $\alpha$) stays below $0.05$.
\item Correct for multiple comparisons in order to avoid the linear growth of false positives with the number of experiments.
\item Use at least $n=20$ samples in the pilot study to compute robust estimates of the standard deviations of both algorithms.
\item Use a larger sample size $N$ than the one prescribed by the power analysis. This helps compensate for potential inaccuracies in the estimations of the standard deviations of the algorithms and reduces the probability of type-II errors.
\end{itemize}
\end{myrecom}
\section*{Acknowledgement}
This research is financially supported by the French Minist\`ere des Arm\'ees - Direction G\'en\'erale de l'Armement.
The use of quantum sensors in high energy physics has seen explosive growth since the previous Snowmass Community Study. This growth extends far beyond high energy physics (HEP), impacting many areas of science from communications to cryptography to computing. Quantum sensors have been used in searches for dark matter (particle and wave), fifth forces, dark photons, permanent electric dipole moments (EDMs), variations in fundamental constants, and gravitational waves, among others. These sensors come in a wide range of technologies: atom interferometers and atomic clocks, magnetometers, quantum calorimeters and superconducting sensors, to name a few. Early work with quantum sensors in the context of particle physics often focused on the cosmic and rare-and-precision frontiers, but recent concepts seek to expand the use of quantum sensors to the energy and neutrino frontiers, solidifying them as fundamental technologies for the future of experimental HEP. Based upon input to the Snowmass process, our topical group has identified several key messages necessary to support the development and use of quantum sensors in HEP:
\begin{itemize}
\item \textbf{Continue strong support for a broad range of quantum sensors. Quantum sensors address scientific needs across several frontiers and different technologies carve out unique parameter spaces.} While these sensors share many common characteristics, each has advantages that make it the sensor of choice for specific applications along with challenges that need further development to make the greatest impact.
\item \textbf{Continue support for R\&D and operation of table-top scale experiments. Many are shovel ready and have the potential for large impact.} Much of the growth in quantum sensors over the past decade has occurred in small, laboratory-based experiments.
These fast-paced small experiments should continue to be supported as a way to rapidly develop sensor technology and help determine those areas where quantum sensors can have the greatest impact.
\item \textbf{Balance support of tabletop experiments with pathfinder R\&D to address the large-scale challenges of scaling up experiments, which will require National Lab and HEP core competencies.} As the fast-paced, small experiments mature, those with significant discovery potential begin to emerge, along with areas of commonality between the experiments (e.g.\ the need for advanced high-field magnets for axion dark matter experiments or ultra-stable lasers for atom interferometers and clocks). They have reached the point at which plans for larger-scale, longer-term experiments should be conceptualized. These concepts can evaluate the potential reach that can be achieved in a larger effort and the scale of required technological development.
\item \textbf{Develop mechanisms to support interactions outside of the HEP program to enable collaborations with fields with developed expertise in quantum sensors. Advances in quantum information science (QIS) provide exceptional theoretical and experimental resources to advance quantum sensing that could provide mutual benefits in several areas such as materials, detectors, and devices.} Many of the most promising quantum sensors for HEP science have been developing for the past decade or more in areas outside of the traditional HEP science and funding sphere. For example, atomic clocks, developed over many decades as a source of precision timing standards, are now stable enough that they can be used in the search for variations of fundamental constants and gravitational waves. The HEP community should strive to collaborate with these broader communities in a way that gives HEP access to new sensor technologies while sharing HEP expertise (e.g.\ large magnets and vacuum systems). Effort should be made to allow the free flow of ideas and effort across traditional funding boundaries to encourage scientists and engineers working with quantum sensors to tackle the most interesting and challenging problems available.
\item \textbf{Develop mechanisms to facilitate interactions that support theoretical work addressing issues of materials and measurement methods.} As with other instrumentation frontiers, and as quantum sensors become more sensitive, focused support on quantum materials at the interface of quantum sensors and HEP will be needed. This includes theoretical work on topics such as quantum materials, squeezing, and back-action.
\item \textbf{Workforce development is needed to encourage workers with the needed skills to engage with the HEP field, to maintain current momentum, and to ensure long-term success in the face of growing competition from industrial quantum computing.} While the high energy physics community is poised to benefit from quantum sensor developments outside of HEP, we face a shortage of skilled workers. The explosive growth in quantum computing in recent years, along with the arrival of several major tech companies, has created fierce competition for workers with the skills needed to develop quantum sensors and experiments. The HEP community will need to invest now in order to train and retain the next generation of quantum scientists. Increasing collaborations outside of HEP (as discussed above) can provide an additional pathway to reaching skilled workers and engaging them on HEP challenges.
\end{itemize}
\section{Overview}
\subsection{Introduction}
In this report we provide an overview of recent development in quantum sensors and their scientific impact to the high energy physics community as presented in the many Letters of Intent (LOIs) and white papers submitted during the Snowmass 2021 process. We focus on sensors in which the quantum state of the sensor can be measured and manipulated. Also included are quantum calorimeters to measure individual quanta of energy deposited in the sensor. As a group, quantum sensors are extremely sensitive devices used to explore new physics. The goal is to use hardware and manipulation techniques developed in quantum information science and technology to reach sensitivities better than the standard quantum limit (SQL) over as broad a bandwidth as possible.
Many of the most promising quantum sensors for HEP science have been developed over the past decade or more in areas outside of the traditional HEP science and funding sphere. For example, atomic clocks, developed over many decades as a source of precision timing standards, are now stable enough that they can be used in the search for variations of fundamental constants and gravitational waves. At the same time, the field of quantum computing is experiencing multiple breakthroughs. We point the reader to the report from the Quantum Computing topical group in the Computational Frontier for more details, but highlight that many of the technological developments needed to improve quantum computers are also needed to improve quantum sensors: longer coherence times, increased numbers of quantum states, isolation from environmental noise, etc. Quantum sensing and computing also both require a highly skilled workforce. With the arrival of several major tech companies in the field of quantum computing, there is fierce competition for workers. The HEP community will need to invest now in order to train the next generation of quantum scientists.
Much of the growth in quantum sensors over the past decade has occurred in small, laboratory based experiments. These small experiments have allowed the broader community to try out many different varieties of sensors aimed at a broad range of scientific targets (see Sec.~\ref{Technologies} for examples). The extreme sensitivity of quantum sensors and quantum techniques often enables these experiments to make significant gains in unexplored parameter spaces at minimal cost while also advancing the sensor technology. Continued support of these `table top' experiments serves as a way to rapidly develop sensor technology and help determine those areas where quantum sensors can have the greatest impact.
As the fast-paced small experiments mature, those with significant discovery potential begin to emerge, along with areas of commonality between the experiments (e.g.\ the need for advanced high-field magnets for axion dark matter experiments or ultra-stable lasers for atom interferometers and clocks). They have reached the point at which plans for larger-scale, longer-term experiments are being developed. These concepts can evaluate the potential reach that can be achieved in a larger effort and the scale of required technological development.
\subsection {Science}
Quantum sensors encompass a broad spectrum of technologies (as described in Section \ref{Technologies}) and have the potential to impact a wide range of core HEP science. This can be seen in the broad collection of white papers submitted to this community study that make use of these sensors: dark matter (axion, wavelike, and particle, from ultra-light to ultra-heavy) \cite{Jaeckel.2022,Antypas.2022,Carney.20223m,CDMS.2022,Ebadi.2022,Collaboration.2022,Wang.2022,Essig.2022}, new particles or forces \cite{Berlin.2022}, the electric dipole moment (EDM) \cite{Alarcon.2022}, variations in fundamental constants, gravitational-wave detector facilities \cite{Ballmer.2022}, spacetime symmetries \cite{Adelberger2022}, and neutrino masses \cite{Armatol.2022, Ullom.2022}.
\section{Technologies} \label{Technologies}
\subsection{Interferometers, Optomechanics, and Clocks}
\emph{Note: The contents of this section are in part taken and modified from: Snowmass 2021: Quantum Sensors for HEP Science -- Interferometers, Mechanics, Traps, and Clocks \cite{Carney.20223m}}
\subsubsection*{Atom Interferometers}
Atom interferometry is a growing field with a variety of fundamental physics applications including gravitational wave detection, searches for ultralight (wave-like) dark matter candidates and for dark energy, tests of gravity and searches for new fundamental interactions (“fifth forces”), precise tests of the Standard Model (e.g. fine structure constant), and tests of quantum mechanics. In light-pulse atom interferometry, laser pulses are used to coherently split, redirect, and recombine matter waves.
Conventional atom interferometry makes use of a pair of counter-propagating laser beams to drive two-photon Raman or Bragg transitions while a new variation takes advantage of long-lived excited states in alkaline-earth-like atoms that can be resonantly driven by a single laser beam. In a gradiometer configuration, two identical atom interferometers are run simultaneously on opposite ends of a baseline, using the same laser sources. A comparison of the individual atom interferometer signals yields a differential measurement that enables the cancellation of noise common to both interferometers. This in principle enables superior common-mode rejection of noise, allowing for the possibility of, for example, gravitational wave detection using a single baseline. A passing gravitational wave would modulate the baseline length, while coupling to an ultralight dark matter field can cause a modulation in the energy levels. This combines the prospects for both gravitational wave detection and dark matter searches into a single detector design, and both science signals are measured concurrently.
As one example, the MAGIS concept takes advantage of features of both clocks and atom interferometers to allow for a single-baseline gravitational wave detector. MAGIS-100 is the first detector facility in a family of proposed experiments based on the MAGIS concept. The instrument features a 100-meter vertical baseline and is now under construction at the Fermi National Accelerator Laboratory (Fermilab). State-of-the-art atom interferometers are currently operating at the 10-meter scale, while a kilometer-scale detector is likely required to detect gravitational waves from known sources. The Atom Interferometric Observatory and Network (AION) project envisages a staged Atom Interferometry program, starting with a 10 m device and progressing via a 100 m experiment to a 1 km instrument. AION will enable exploration of the properties of ultra-light dark matter (DM) and gravitational waves (GWs) from the very early Universe and astrophysical sources in the mid-frequency band ranging from several mHz to a few Hz, intermediate between the sensitive ranges of LIGO/Virgo/KAGRA and LISA. The ultimate sensitivity of the AION program will be reached by interoperating and networking with other instruments around the world, similar to the existing LIGO-Virgo network, which will provide science opportunities not accessible to single detectors.
\subsubsection*{Optomechanical Sensors}
Mechanical sensors that can be read out optically (at frequencies ranging from microwave to visible) have advanced rapidly and are now commonly operated in a regime where their sensitivity becomes dominated by quantum noise in the mechanics or readout system. A wide variety of sensors is available, ranging from single ions to kilogram-scale elements. These sensors are uniquely suited to coherent signals with a scale comparable to the size of the mechanical sensor, since the signal is coherently integrated into a small number of degrees of freedom (e.g.\ center-of-mass motion). A key example of the capabilities of optomechanical sensors is LIGO. Other examples include mechanically suspended reflective pendula; optically levitated dielectrics, cold atoms, and ions; clamped nanomechanical membranes; and magnetically levitated systems; the category can also include collectively quantized degrees of freedom such as phonons.
In addition to their use in gravitational wave detection and precision measurements in metrology, optomechanical devices are rapidly being incorporated into the portfolio of detector systems useful for a number of high energy and particle physics targets. Building on classical proposals for neutrino and dark matter detection with nanoscale targets, proposals now exist to use optomechanical sensors for detection of ultra-light, MeV-to-TeV scale, and ultra-heavy dark matter \cite{Windchime.2022}; neutrinos; high-frequency gravitational waves; fifth-force modifications to Newton's law at tabletop scales; deviations from standard quantum mechanics (including ideas about gravitational breakdown of quantum mechanics); and tests of quantum properties of the gravitational interaction.
Moving forward, a number of key opportunities exist to increase the utility of these devices in the search for new physics. There is a critical need for new theoretical ideas about potential new signals. There is a push to improve detector technologies to reach sensitivities at and beyond the so-called Standard Quantum Limit (SQL). The most common, well-demonstrated method to go beyond the SQL is the use of squeezed light, while a less-studied but enticing option is the use of back-action evasion techniques. Further theoretical development and implementation of these techniques in disparate situations and physical architectures, especially in broadband sensing problems, will be of crucial importance in the next decade. Leveraging multiple sensors (“networks”) and entanglement between them can similarly enable detection beyond the SQL; using these ideas in searches for new physics would be extremely interesting.
\subsubsection*{Clocks and Precision Spectroscopy for Particle Physics}
Optical clock precision has improved by more than three orders of magnitude in the past fifteen years, enabling tests of the constancy of the fundamental constants and of local position invariance, dark matter searches, tests of Lorentz invariance, and tests of general relativity. All current atomic clocks are based on either transitions between the hyperfine substates of the atomic ground state (microwave clocks) or transitions between different electronic levels (optical clocks). The ratio of two optical clock frequencies is only sensitive to the variation of the fine structure constant $\alpha$, and optical atomic clocks can probe the standard matter -- dark matter coupling. Promising searches for ultralight particles are feasible through isotope-shift atomic spectroscopy, which is sensitive to a hypothetical fifth force between the neutrons of the nucleus and the electrons of the shell. The analysis of precision isotope shift (IS) spectroscopy sets limits on spin-independent interactions that could be mediated by a new particle, which could be associated with dark matter. Deployment of high-precision clocks in space could open the door to new applications, including precision tests of gravity and relativity, searches for a dark-matter halo bound to the Sun, and gravitational wave detection in wavelength ranges inaccessible on Earth. Space-based optical lattice atomic clocks could potentially serve as a tunable, narrowband GW detector that could lock onto and track specific GW signals, providing a complement to other experiments (e.g.\ LISA and LIGO). Radioactive atoms and molecules offer extreme nuclear charge, mass, and deformations, and can be worked with efficiently using the advanced quantum control toolset of AMO physics. These rare systems offer an unprecedented amplification of both parity- and time-reversal-violating properties.
Several potential pathways exist for improving clock performance: developing new clocks with much larger sensitivity factors; developing larger and more integrated clock networks (e.g.\ QSNET \cite{Barontini.2022}); making clocks more portable (critical for space applications); and improving local-oscillator technology, since it limits coherent integration times. Additionally, it is possible to probe multiple clocks with the same laser to cancel out local-oscillator noise (similar to using a single laser with an atom interferometer), pushing sensitivity to near the SQL. Pushing beyond the SQL can be achieved by using entangled states, such as spin-squeezed states. Gains can also be made by moving to clocks based on highly charged ions (also a promising avenue for isotope-shift spectroscopy) or nuclear clocks, which have much higher sensitivities to the variation of $\alpha$ -- up to 4 orders of magnitude for nuclear clocks. Nuclear clocks are highly sensitive to the hadronic sector and could improve sensitivity to DM couplings by 5--6 orders of magnitude. Molecular clocks provide direct sensitivity to the proton-to-electron mass ratio and its variation.
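The leverage provided by large sensitivity factors can be made concrete with the standard relation for the fractional shift of a clock-frequency ratio under a drift of $\alpha$. The sketch below is illustrative only; the sensitivity coefficients are placeholders, not measured values for any particular transition.

```python
# Fractional change of the ratio of two optical clock frequencies for a
# fractional change of the fine-structure constant alpha:
#   d ln(nu1/nu2) = (K1 - K2) * d ln(alpha)
# where K1, K2 are dimensionless sensitivity coefficients of the two
# transitions.  The coefficient values below are illustrative placeholders.
def ratio_shift(K1, K2, dalpha_over_alpha):
    """Fractional shift of the clock frequency ratio nu1/nu2."""
    return (K1 - K2) * dalpha_over_alpha

# A transition with large |K| (e.g. in a highly charged ion or nucleus)
# compared against an ordinary optical clock (|K| ~ 1) converts the same
# alpha drift into a much larger observable:
print(ratio_shift(K1=-6.0, K2=1.0, dalpha_over_alpha=1e-18))
```

A nuclear clock, with $|K|$ up to four orders of magnitude larger, would scale the observable shift accordingly.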
\subsection{Spin Dependent Sensors}
\emph{Note: The contents of this section are in part taken and modified from: Quantum Sensors for High Precision Measurements of Spin-dependent Interactions. arXiv (2022). \cite{Budker.2022}}
Experimental techniques for the precision measurement of spin-dependent interactions have advanced substantially over recent decades, in no small part because control and measurement of spins, spin ensembles, and quantum materials are at the heart of QIS and quantum computing, and they share a common foundation with the robust program of research on spin-based quantum sensors for the measurement of magnetic fields, magnetic-resonance phenomena, and related effects. There are three main ways measurements of spins can probe for new physics: new physics can break symmetries of the Standard Model, giving rise to novel responses of spins to other fields (e.g.\ searches for EDMs); new physics can affect the spin directly, for example via an interaction between a new field and the spin (e.g.\ searches for axions and axion-like particles); and new physics can affect the environment of the spin, which the spin can sense (e.g.\ damage to crystals containing defect centers following interactions with new physics such as dark matter particles).
\subsubsection*{Electric Dipole Moments}
The general approach of electric dipole moment (EDM) experiments is to search for the combined effect of a P- and T-odd Hamiltonian and an applied electric field $E$, which results in an energy shift for a given quantum state of the atom or molecule. Typically the system is spin polarized via optical pumping or some other hyperpolarization technique, such that the system is in a superposition of quantum states with opposite EDM-induced energy shifts. A nonzero EDM will thus cause the polarized spins to precess in the presence of an electric field. Several general areas of technology development can advance the fundamental sensitivity of EDM searches: increasing the energy shift by finding systems with maximum enhancement factors; improving control techniques to increase the total number of polarized atoms/molecules; and achieving longer spin-coherence times.
\subsubsection*{Magnetometers}
Many theories predict the existence of new force-mediating bosons that couple to the spins of Standard Model particles. One of the primary experimental strategies is to employ a sensitive detector of torques on spins and then bring that spin-based torque sensor within a Compton wavelength of an object that acts as a local source of an exotic field (e.g., a large mass or a highly polarized spin sample). Since the observable in these experiments is a spin-dependent energy shift, a sensor employing $N$ independent spins with coherence time $\tau$ has a shot-noise-limited sensitivity. Common sensors include NV centers, optical atomic magnetometers, and Bose-Einstein condensates (BECs). One promising technology is the development of levitated ferromagnetic torque sensors (LeFTS). The active sensing element consists of a hard ferromagnet, well isolated from the environment by, for example, levitation over a superconductor via the Meissner effect. The mechanical response of the levitated ferromagnet to an exotic spin-dependent interaction can be precisely measured using a superconducting quantum interference device (SQUID). Similar to the LeFTS concept, ultracold two-body interactions in a BEC create a fully coherent, single-domain state of the atomic spins that enables the system to evade the sensitivity limits of traditional spin-based sensors.
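The shot-noise scaling mentioned above can be sketched with the standard estimate $\delta B \sim 1/(\gamma\sqrt{N\,T_2\,t})$ used in optical magnetometry; all numerical inputs below are illustrative, not parameters of any specific experiment.

```python
import math

# Standard shot-noise estimate for an ensemble spin sensor:
#   delta_B ~ 1 / (gamma * sqrt(N * T2 * t)),
# with gyromagnetic ratio gamma, N independent spins, spin-coherence time T2,
# and total measurement time t.  Numbers below are illustrative.
def shot_noise_dB(gamma, N, T2, t):
    """Shot-noise-limited magnetic field sensitivity, in tesla."""
    return 1.0 / (gamma * math.sqrt(N * T2 * t))

gamma_e = 2 * math.pi * 28e9  # electron gyromagnetic ratio, rad s^-1 T^-1
print(shot_noise_dB(gamma_e, N=1e14, T2=1e-2, t=1.0))  # tesla
```

The $1/\sqrt{N T_2}$ scaling is why both increasing the number of polarized spins and extending coherence times appear repeatedly as development goals in this section.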
Beyond the intrinsic sensitivity, the principal challenge in experiments searching for exotic spin-dependent interactions is understanding and eliminating systematic errors: clearly distinguishing exotic spin-dependent interactions from mundane effects due to, for example, magnetic interactions. By comparing the response of two different systems, effects from magnetic fields can be distinguished from effects due to exotic spin-dependent interactions. This is the essence of comagnetometry, where the same field is simultaneously measured using two different ensembles of atomic or nuclear spins. This effort can be extended to searches for transient interactions through the use of networks of geographically distributed spin-dependent sensors. For example, the GNOME network will search for transient and stochastic effects that could arise from ALP fields of astronomical origin passing through the Earth.
\subsubsection*{Magnetic Resonance}
One possible manifestation of ultralight bosonic dark matter is as classical fields oscillating at the Compton frequency. The bosonic dark matter field can cause spin precession via couplings to nuclear and electron spins, which can be detected using the broad and versatile tools of magnetic resonance. In dark matter haloscope experiments, the oscillating field is assumed to always be present, corresponding to the case of continuous-wave NMR. The magnetic field is scanned, and if the Larmor frequency matches the Compton frequency, a resonance occurs, generating a time-dependent magnetization that can be measured, for example, by induction through a pick-up loop or with a SQUID. This is the method used in the CASPEr experiment. A key to CASPEr's sensitivity is the coherent ``amplification'' of the effects of the axion dark matter field through a large number of polarized nuclear spins. Therefore an important technological development is the ability to carry out NMR on the largest possible number of spins.
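The resonance condition described above can be made concrete with a short numerical sketch: the scanned field $B$ satisfies $\gamma B = 2\pi\,m c^2/h$ at resonance. The particle mass and gyromagnetic ratio below are illustrative inputs, not CASPEr operating parameters.

```python
import math

# Resonance condition for a CASPEr-style NMR haloscope: the nuclear Larmor
# frequency equals the Compton frequency of the bosonic dark matter field,
#   gamma * B = 2*pi * (m c^2) / h.
H = 6.62607015e-34    # Planck constant, J s
EV = 1.602176634e-19  # joules per eV

def compton_frequency_hz(mass_ev):
    """Compton frequency of a particle of rest mass `mass_ev` (in eV/c^2)."""
    return mass_ev * EV / H

def resonant_field_tesla(mass_ev, gamma):
    """Static field at which the Larmor frequency matches the Compton frequency."""
    return 2 * math.pi * compton_frequency_hz(mass_ev) / gamma

gamma_he3 = 2 * math.pi * 32.434e6  # 3He gyromagnetic ratio, rad s^-1 T^-1
print(resonant_field_tesla(1e-9, gamma_he3))  # field (T) probing a ~1 neV boson
```

Scanning $B$ therefore scans the boson mass being probed, which is why the experiment sweeps the applied field.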
The QUAX (QUaerere AXion) experiment searches for axion dark matter in a manner similar to CASPEr, but by exploiting the interaction of axions with electron spins. Ten spherical yttrium iron garnet (YIG) samples are coupled to a cylindrical copper cavity by means of an applied static magnetic field, and the resulting photon-magnon hybrid system acts as an axion-to-electromagnetic-field transducer. The QUAX experiment is one of the most sensitive RF spin magnetometers ever realized, able to measure fields as small as $5.5\times10^{-19}$\,T with nine hours of integration time.
The ARIADNE experiment employs an unpolarized source mass and a spin-polarized $^3$He low-temperature gas to search for a QCD-axion-mediated spin-dependent interaction: the monopole-dipole coupling. In contrast to dark matter haloscopes like CASPEr and QUAX, whose signals depend on the local dark matter density at the Earth, the signal in the ARIADNE experiment does not require axions to constitute dark matter and can be modulated in a controlled way.
\subsubsection*{Quantum Defects}
Searches for dark matter via scattering in crystals will soon run into the neutrino floor -- the background of neutrinos from the Sun. One path for getting beyond the neutrino floor is to develop directional detectors. Since the direction of the Sun is known, the detectors can veto signals coming from the direction of the Sun; dark matter interactions, by contrast, will result in isotropic scattering signals. One proposal for achieving this directional detection is to monitor damage tracks in crystals that occur as the scattering dark matter displaces atoms from their lattice locations. These damage tracks can be measured using quantum-sensing techniques such as NV-center spin spectroscopy in diamond. The NV center spin state is highly sensitive to the local strain in the crystal. These detectors will require a combination of imaging methods to locate and determine the direction of the damage tracks, as described in \cite{Ebadi.2022}, but provide a pathway towards WIMP sensitivity below the neutrino limit.
\subsection{Quantum Calorimeters}
Looking for interactions between relic dark matter with mass in the 1\,meV to 100\,MeV range and the visible sector requires the development of detectors with sensitivity to single energy depositions in the far IR (meV) to near IR (eV). Technologies that have a credible R\&D roadmap to achieve these sensitivities include, but are not limited to, qubits, MKIDs, TESs, and SNSPDs. The precise R\&D required to improve sensitivity is sensor specific but broadly falls into three categories. First, the development of sensors from superconducting films with lower superconducting transition temperature, T$_c$, both increases the number of quasiparticles created per energy deposition (MKIDs, SNSPDs) and decreases electron-phonon couplings within the film, allowing for better thermal isolation (TES). Second, as the intrinsic sensitivity of a sensor to dark matter increases, so does its sensitivity to a broad range of environmental backgrounds (blackbody IR, EMI, environmental vibrations). As such, commensurate improvements in sensor isolation from the environment must occur in parallel with these sensitivity improvements, or else the noise floor will be limited by these external sources (MKIDs, TES, qubits). Third, MKIDs are currently limited by first-stage amplifier noise; implementation of lower-noise-temperature amplifiers or qubit-inspired readout techniques would address this limitation.
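The benefit of lower $T_c$ can be sketched with the rough BCS estimate that the pair-breaking gap is $\Delta \approx 1.76\,k_B T_c$ and an energy deposition $E$ yields roughly $N_{qp} \approx \eta E/\Delta$ quasiparticles. The down-conversion efficiency $\eta$ and the $T_c$ values below are illustrative, not measured device parameters.

```python
# Rough BCS estimate of quasiparticle yield in a superconducting sensor:
# gap Delta ~ 1.76 * kB * Tc, and an energy deposition E creates roughly
# N_qp ~ eta * E / Delta quasiparticles (eta ~ 0.6 is a typical
# down-conversion efficiency; all numbers here are illustrative).
KB = 8.617333262e-5  # Boltzmann constant, eV/K

def quasiparticles(E_ev, Tc_K, eta=0.6):
    delta = 1.76 * KB * Tc_K  # superconducting gap, eV
    return eta * E_ev / delta

# Lowering Tc by 12x yields ~12x more quasiparticles per deposition:
print(quasiparticles(1.0, Tc_K=1.2))  # aluminium-like Tc
print(quasiparticles(1.0, Tc_K=0.1))  # lower-Tc film
```

Since $N_{qp} \propto 1/T_c$, each reduction in transition temperature directly lowers the smallest resolvable energy deposition.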
Due to the myriad ways that dark matter could potentially interact with the visible sector (photonic, electronic, and vibrational), these sensor technologies should ideally be integrated with antenna-like structures and anti-reflection coatings to maximize absorption and collection of these small excitation signals. For near-IR photon collection, integration of anti-reflection stacks has already been shown with TESs, MKIDs, and SNSPDs. For far-IR collection, antennas have been integrated into TESs. Finally, for athermal phonons, Al superconducting traps have been integrated with TESs. It is likely that, with community effort and engagement, these excitation collection and concentration techniques can be further refined and integrated with the remaining sensor technologies.
When operated underground in carefully designed IR-tight optical cavities that have no non-instrumented insulating materials, surrounded by well-shielded cryostats built from currently available radiopure materials and equipped with active high-energy-photon vetoes, low-energy radioactive backgrounds should be controllable to the level of coherent nuclear scattering of solar neutrinos. The majority of experimental techniques used for light-mass dark matter searches are currently limited by ``dark events'', including long-lived meta-stable electronic and lattice state transitions. For example, many crystalline scintillators like NaI have been considered for ERDM searches, but suffer from afterglow: very long-lived excited electronic states that produce an indistinguishable rate of single scintillation photons when they decay.
Another example is phonon bursts due to multi-atom lattice reconfigurations (microfractures) from high-stress energy configurations, which are likely currently limiting all nuclear-recoil light-mass dark matter searches. Mitigation of these dark events requires a case-by-case understanding of the precise causal mechanism and then development of strategies to depopulate these excited states. For example, the long relaxation time scales for meta-stable electrons in excited states could be shortened by placing the crystal in an IR photon bath. Likewise, annealing of crystalline materials has been shown to substantially decrease the lattice defect density in crystals.
\subsection{Superconducting Sensors}
\emph{Note : Contents of this section are in part taken and modified from Snowmass 2021 White Papers: Axion Dark Matter \cite{adams2022axion}, and Searches for New Particles, Dark Matter, and Gravitational Waves with SRF cavities \cite{berlin2022searches}.}
\subsubsection*{SRF Cavities}
Superconducting radio-frequency (SRF) cavities are critical components in particle accelerators. Advances in cavity performance are the result of an improved understanding of RF superconductivity and materials. In the past 50 years, new cavity processing techniques were developed to overcome limiting phenomena, such as field emission, and to enhance the superconductivity.
SRF cavities are, in essence, extremely high-quality electromagnetic resonators, devices that are now of strong active interest for quantum information science (QIS), with demonstrated record-high photon lifetimes $\tau\sim 2$\,s ($Q>10^{11}$), also in the quantum regime.
For quantum computing, quantum states can be stored and manipulated in electromagnetic resonators, and superconductors at milli-Kelvin temperatures are employed to sustain the coherence of the quantum states for long enough to perform complex computations. For quantum sensing, SRF cavities can furnish a large volume where very weak signals of radio-frequency photons
can be collected, with only a small fraction of photons being lost to heat at the cavity walls.
The main focus of the Superconducting Quantum Materials and Systems (SQMS) National QIS Research Center is to advance QIS through the understanding and mitigation of decoherence mechanisms in 2D and 3D quantum systems, i.e.\ planar and cavity-based, tackling the decoherence time as a primary limiting mechanism. This SRF cavity effort is also utilized to pursue fundamental physics questions and to push the detection sensitivity achievable with SRF cavities. The Snowmass whitepaper \cite{berlin2022searches} summarizes opportunities to search for new particles with SRF cavities at SQMS. The focus is on dark photons and axions (or axion-like particles), either as new particles or as dark matter, as well as on gravitational waves. The search for gravitational waves across the full spectrum of frequencies, particularly since their discovery by LIGO~\cite{abbott2021gwtc}, is very well motivated, potentially opening a new window onto the early Universe or new physics. In this context SRF cavities can be used to search for GWs~\cite{Berlin:2021txa}.
It is possible to explore dark photon scenarios using SRF cavities in a light-shining-through-wall setup. The conversion of some of the photons to
dark photons before the wall and conversion back to regular photons past the wall makes such a detection possible, if dark photons exist at a hypothesized mass and
coupling. Resonant cavities can be used on both sides of
the wall to increase the number of photons on the emitting side and to enhance the probability of conversion of
dark photons to visible ones on the receiver side. In particular, in an RF cavity the system can be designed to
search for the parametrically enhanced longitudinal coupling of the dark photon. The Dark SRF experiment at Fermilab plans to conduct such a search with ultra-high quality
cavities \cite{DarkSRF2, Raffelt:1990yz, DarkSRFpaper, graham2014parametrically}.
The following materials science and R\&D efforts are highlighted to expand current physics searches: enhancing the efficiency; mitigating nonlinearities in superconducting cavities due to TLS; reaching high Q with the cavity in a multi-Tesla field; and improving methods for frequency stability and tuning in SRF cavities. New schemes include searches with multiple cavity modes for axions or gravitational waves (where nonlinear effects within the cavity walls can mimic such a signal, particularly if the signal mode is near a harmonic); networks of SRF cavities; and quantum nondemolition (QND) measurements with superconducting qubits coupled to SRF cavities.
\subsubsection*{Proposals for axion searches using SRF cavities}
Axion-like particles (ALPs) are a generalization of the QCD axion that does not couple to QCD, but does couple to photons or SM fermions. ALPs are well motivated in their own right in top-down constructions \cite{svrcek2006axions, arvanitaki2010string}. As in the dark photon case, light-shining-through-wall (LSW)-type axion searches can benefit from high quality factors, which warrants harnessing the advances in SRF technology. The necessity for a background magnetic field, however, presents a challenge, as high-quality superconductivity does not survive large fields. Novel approaches to allow large magnetic fields with no degradation of the Q-factor in SRF cavities have been proposed.
\begin{description}
\item[Two cavities with Static Field:] One technique to utilize both high-Q SRF cavities and large magnetic fields for a LSW axion search is to sequester the required magnetic fields away from the production and detection cavities \cite{janish2019axion}. With this approach neither SRF cavity is subject to large magnetic fields and neither suffers a degradation of Q-factor. However, losses in the walls of the conversion region can result in a decrease of the effective Q of the entire system.
\item[Two Cavities with a pump mode:] An alternative approach is to replace the static B-field with an oscillatory B-field, which can then be directly run inside the receiver cavity. Sources of noise due to the multi-mode setup can be mitigated by using a pump with high-Q and with the pump frequency well separated from the signal mode frequency. In addition, such noise sources can be further suppressed by optimizing the cavity geometry and material science techniques to reduce nonlinearities \cite{sikivie2010superconducting, gao2021axion}.
\item[Single-Cavity Axion Search and Euler-Heisenberg:] The EH Lagrangian makes a prediction for light-by-light scattering within the SM, which has never been observed at photon frequencies below the electron mass $m_e=511$ keV because the effect is highly suppressed at low energies.
The operating principle of a proposed experiment to search for both the axion-induced and EH nonlinearities using high-Q SRF cavities is described in \cite{bogorad2019probing, eriksson2004possibility, brodin2001proposal, schwinger1951gauge}. This two-cavity scheme is less sensitive to noise sources which generate nonlinearities in the pump region.
\end{description}
\subsubsection*{Qubit-based single photon counting}
The integration of a qubit into an ultra-high-quality cavity may enable new schemes for quantum computing and synergistically allow for employing a photon-counting non-demolition measurement for DM searches. For certain DM search schemes, it would also be beneficial to have qubits that can operate successfully even in high magnetic fields \cite{dixit2021searching}.
Cavity haloscopes have traditionally extracted the DM signal via an antenna connected to a linear amplifier, such as a Josephson Parametric Amplifier (JPA). Unfortunately, linear amplifiers contribute their own noise power, and their minimum contribution is the standard quantum limit (SQL). SQL noise increases linearly with frequency, so subverting the SQL is necessary to make higher-mass searches feasible.
Several ongoing or proposed experiments utilize SRF resonators coupled to superconducting qubits to detect bosonic dark matter candidates below the SQL. Two experiments have demonstrated sub-SQL detection: HAYSTAC, by implementing vacuum squeezing \cite{backes2021quantum}, and SQuAD, by implementing qubit-based photon counting \cite{dixit2021searching}. SQMS also plans to combine SRF cavity technology and qubit-based photon counting to increase the DM search rate by several orders of magnitude.
The Superconducting Qubit Advantage for Dark Matter (SQuAD) experiment plans to perform resonant searches for dark matter axions with DFSZ sensitivity in a broad range from 10--30 GHz using high-quality-factor dielectric cavities combined with qubit-based single-photon detectors which evade the quantum zero-point noise. R\&D is ongoing to develop an analogous photon-counting readout based on Rydberg atoms, which can be operated at the higher frequencies where qubit devices become more difficult to design and fabricate.
\subsubsection*{Networks and transduction}
Recently, it was shown that the performance of a quantum network can be leveraged to further improve axion DM searches \cite{brady2022entangled}. The noise in the network, however, is incoherent among the network nodes, so one can use distributed squeezed states to exploit the coherent nature of the DM signal. Combining quantum resources (squeezing) in a distributed-network setting can allow for a scan that is faster by a factor of the square of the number of network nodes in the ideal case. The improvement is enabled by adding the signal at the amplitude level rather than adding powers, as in the classical network case.
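The amplitude-versus-power distinction above can be illustrated with a toy calculation (the values are illustrative, not a model of any specific network):

```python
# Toy illustration of why a coherent sensor network scans faster: adding M
# identical signal amplitudes gives power ~ M^2, while incoherent (classical)
# combination of M power measurements only grows ~ M.
def coherent_power(M, amplitude=1.0):
    """Signal power when M node amplitudes are added coherently."""
    return (M * amplitude) ** 2

def incoherent_power(M, amplitude=1.0):
    """Signal power when M node powers are summed classically."""
    return M * amplitude ** 2

M = 4
print(coherent_power(M), incoherent_power(M))  # 16.0 vs 4.0
```

The extra factor of $M$ in the coherent case is the source of the quadratic scan-rate improvement in the ideal limit.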
A quantum transduction project at Fermilab is exploring hybrid coherent resonance systems and bi-directional quantum transduction schemes to up/down- convert the microwave information to/from the optical regime and enhance the conversion efficiency at the quantum threshold, and below the SQL. Up/down photon conversion may also enable highly sensitive axion and dark photon haloscope searches in the THz regime, or microwave single photon counting in optical systems, taking advantage of optical sensing techniques, e.g.\ high precision counting in interferometry and reduced noise floor \cite{derevianko2022quantum}, also in the Snowmass LOI \emph{Opportunities for Optical Quantum Noise Reduction} \cite{derevianko2022quantumLOI}.
Transduction in the mm-wave regime is also proposed in the LOI \emph{Transduction for new Regimes in quantum sensing} \cite{Transduction_mm} as an effective way of linking the classical and quantum worlds, and of transporting quantum information on macroscopic scales. Low-loss mm-wave photonics could allow preservation of quantum information at room temperature for a simpler network at laboratory scales, as well as reaching the frequency range for axions above $\sim$10 GHz ($\sim$40 µeV), which is beyond the reach of current experiments such as ADMX.
\subsubsection*{Cryogenic Platform for Scaled-up Sensing Experiments}
The SQMS center at Fermilab is developing a cryogenic
platform capable of reaching millikelvin temperatures in
an experimental volume of 2 meters diameter by 1.5 meters in height \cite{hollister2021large}. The platform is designed to host a
three-dimensional qubit architecture based on SRF technology, as well as sensing experiments.
\subsubsection*{SNSPD}
Superconducting-nanowire single-photon detectors (SNSPDs) are ideally suited for sensing low-count-rate signals due to their high internal efficiency and low dark-count rates. Recent
proposals for axion search either require SNSPDs that can operate in the presence
of large magnetic fields, or require some means of carrying the light generated by the
haloscope from the high-field region to a low-field region where the detectors can operate.
The recently established robustness of SNSPDs to operation in high fields and
their ability to operate at elevated temperatures (relative to alternative superconducting
detector technologies) make them well-suited for photon detection in the mid-infrared (meV) to visible (eV) energy range. The suitability of SNSPDs to applications requiring
low dark-count rates is illustrated by recent progress in the LAMPOST prototype search for dark photon dark-matter using these devices \cite{chiles2021first}.
\subsubsection*{Other superconducting and cryogenic sensors}
Cryogenic sensors have found a large range of applications for astroparticle detection. Due to integration complexity and thermal loading from cryogenic wiring, the ability to read out multiple detectors on a single wire with cryogenic multiplexing technologies with minimal readout noise penalty is
of utmost importance as experiments are scaled to ever larger detector counts. Several variations of SQUID multiplexers have been used to field large sensor arrays including time division multiplexing (TDM) and frequency division multiplexing (FDM) systems. One FDM implementation, the microwave SQUID
multiplexer (µmux), couples an incoming detector signal to a unique GHz-frequency resonance, thus combining the multiplexability of MKIDs with the clean separation of detection and readout interfaces. This
enables multiplexing factors up to two orders of magnitude larger than conventional cryogenic multiplexing
schemes.
The wide frequency operation span enables large detector counts for low-bandwidth bolometric
applications such as CMB cosmology while maintaining clean interfaces between the detection and readout schemes. Additionally, the large frequency bandwidth and fast resonator response allow for cryogenic
particle detection, such as low-mass threshold dark matter searches, beta decay end point measurements
to determine the lightest neutrino mass, and coherent elastic neutrino-nucleus scattering.
The CUPID collaboration, in the Snowmass whitepaper \emph{Toward CUPID-1T}~\cite{armatol2022toward}, presents a series of projects underway that will provide advancements in background reduction, cryogenic readout, and physics searches, all moving toward the next-to-next-generation CUPID-1T detector.
Neutron-transmutation doped thermistors (NTDs) are expected as part of the baseline design for CUPID. Multiple modes of superconducting sensors are under development as we look toward
CUPID-1T: Microwave Kinetic Inductance Detectors (MKIDs), Metallic Magnetic Calorimeters (MMCs),
and high- and low-impedance Transition Edge Sensors (TESes).
\section{Common Areas for Development}
\begin{itemize}
\item Back action evasion:
Back-action-evading schemes and squeezing techniques can enhance the sensitivity of quantum sensors at and beyond the SQL. Many experiments (for example, NMR searches for axion dark matter) will have their sensitivity limited by quantum back action, and these techniques will need to be developed for experiments approaching fundamental projection-noise sensitivity limits. One purpose of new transduction projects is to leverage both microwave and optical sensing techniques as a means to implement back action evasion.
\item Supporting technologies (material science, laser, cavities, magnets, etc.):
Several sensing experiments are enabled by SRF cavities with high-Q. Material studies, efforts toward mitigating TLS-driven losses, and enhancing operation under multi-Tesla magnetic field can provide new resources for quantum sensors. Collaboration with non-HEP groups such as those that drive quantum computing focused material science studies may stimulate interest in developing loss mitigation strategies that further improve SRF cavity Q. Similarly, many experiments rely on the use of high-field magnets. Efforts to increase the magnitude, uniformity and scale (larger magnet bores) can result in direct improvements to the experimentally reachable parameter space.
\item Infrastructures: The same characteristics that allow quantum sensors to probe new parameter space also make them sensitive to a wide range of noise sources. In some cases, experiments may need to be placed in underground labs to avoid noise sources such as cosmic rays or to maintain the radiopurity of sensor materials. Development of shared infrastructure, e.g.\ underground facilities with cryogenic and/or magnetic capabilities, could enable the advancement of multiple experimental techniques in a single facility.
\item SBIR program, interaction with companies:
DOE programs for commercialization and technology transfer, such as SBIR/STTR, provide platforms and resources to develop technology for quantum sensors and HEP. With the rapid rise of commercial-sector quantum computing and associated technologies, new opportunities are emerging for interactions between government-sponsored researchers and the commercial sector.
\end{itemize}
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{See \autoref{sec:discussion}.}
\item Did you discuss any potential negative societal impacts of your work?
\answerYes{See \autoref{sec:discussion}.}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{See \autoref{sec:discussion}.}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerNA{}
\item Did you include complete proofs of all theoretical results?
\answerNA{}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{See Abstract, \autoref{sec:eval}, and Appendix.}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{See \autoref{sec:eval} and Appendix.}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerNo{}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{See \autoref{sec:eval}.}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{See \autoref{sec:eval}.}
\item Did you mention the license of the assets?
\answerYes{See Appendix.}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerYes{See Abstract.}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerYes{See Appendix.}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerYes{See Appendix.}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\section*{Acknowledgement}
\vspace{-0.2cm}
We thank the anonymous reviewers for their constructive comments.
This work is supported by IARPA TrojAI W911NF-19-S-0012.
Any opinions, findings, and conclusions expressed in this paper are those of the authors only and do not necessarily reflect the views of any funding agencies.
\section{Appendix}\label{sec:appendix}
\noindent
{\bf Roadmap:}
More details of Algorithm 1 are introduced in \autoref{sec:more_details_alg}.
Then, we present more details of the datasets (\autoref{sec:appendix_details_datasets}) and attacks (\autoref{sec:appendix_details_attacks}) used in the experiments.
We also perform an ablation study for the Trojan mitigation task in \autoref{sec:ablation_mitigation}.
In \autoref{sec:eval_visualization}, we visualize our reverse-engineered Trojans.
We also show the generalization (\autoref{sec:generalization}) and the efficiency (\autoref{sec:efficiency}) of \mbox{\textsc{FeatureRE}}\xspace.
Finally, we discuss the findings in the evaluation (\autoref{sec:findings_evaluation}).
\subsection{More details of Algorithm 1}\label{sec:more_details_alg}
In this section, we discuss more details of our Reverse-engineering Algorithm (\autoref{alg:detection1}).
Given a model \(\mathcal{M}\) and a small set of clean samples \(\mathcal{X}\), the output of the algorithm is a flag indicating if the model is Trojaned, and Trojaned label pairs denoting the source label and the target label of the detected Trojans.
In line 2, we iterate (source label, target label) pair from possible pairs \(K\).
\(E\) in line 3 means the maximal optimization epoch number for each pair.
It is set to 400 in this paper.
In line 4, we randomly sample a batch of inputs from the samples in source classes.
The batch size is set to 128 by default.
In lines 5 to 11, we optimize the parameters of the input space transformation \(F\), which is represented by a UNet~\cite{ronneberger2015u} model in our implementation.
In line 5, we calculate the loss value specified in \autoref{eq:re}, where \(\bm a = \mathcal{A}(\bm x)\) is the inner feature on clean samples.
By default, \(\mathcal{A}\) is the submodel from the input layer to the penultimate layer, and \(\mathcal{B}\) is the submodel from the penultimate layer to the output layer.
\(\bm m\) is the feature space trigger mask.
\(\bm{t} = mean\left(\bm m \odot \mathcal{A}(F(\mathcal{X}))\right)\) is the feature space trigger pattern. \(\mathcal{L}\) is the cross-entropy loss measuring the distance between the target label and the output of the model under inner features with feature space Trojans.
In line 6, if the input space MSE (Mean Squared Error) distance between the original inputs \(\bm x\) and the transformed inputs \(F(\bm x)\) is larger than a threshold value \(\tau_1\) (i.e., 0.15), then the regularization term \(w_1\cdot\|F(\bm x) - \bm x\|\) will be added.
Note that we calculate input space distance on the preprocessed inputs, and the details of the preprocessing are in \autoref{sec:appendix_details_datasets}.
Following NC~\cite{wang2019neural}, the coefficient value \(w_1\) is adjusted dynamically to make the reverse-engineering satisfy the constraint (i.e., \(\|F(\bm x) - \bm x\| \leq \tau_1\)).
\(w_2\) in line 9 and \(w_3\) in line 14 are also adjusted dynamically.
In lines 8-9, similarly, we add a regularization term for the standard deviation of different Trojan samples' activation values on each pixel in the hyperplane.
The default value for \(\tau_2\) is 0.25.
Lines 10-11 are the standard backward propagation process to update the parameters of the input space transformation function \(F\) based on the gradients.
The optimizer used to optimize \(F\) is Adam~\cite{kingma2014adam}.
The value of learning rate \(lr_1\) is 1e-3.
In each epoch, we optimize both the input space transformation function \(F\) and the feature space mask \(\bm m\).
Lines 12-16 describe the process for optimizing \(\bm m\).
Similar to line 5, we calculate the cross-entropy loss between the target label and the output of the model under inner features with feature space Trojans in line 12.
In lines 13-14, we add a regularization term for the size of the feature space Trojan hyperplane.
The default value for \(\tau_3\) is 5\% of the whole feature space.
Lines 15-16 describe the process of updating feature space mask \(\bm m\) via gradients.
The value of learning rate \(lr_2\) in line 16 is 1e-1.
The optimizer used is Adam~\cite{kingma2014adam}.
In line 17, we check if the Trojan is successfully reverse-engineered.
In detail, we calculate the ASR (attack success rate) on inner features with feature space Trojans (i.e., \((1-\bm m) \odot \bm a + \bm m \odot \bm t\)).
We flag that reverse-engineering is successful if the ASR is above a threshold value \(\lambda\) (i.e., 0.8).
If the Trojan is successfully reverse-engineered, we flag the model as a Trojan model and label the (source class, target class) pair as Trojaned pair.
Besides the details above, we also use K-arm scheduler~\cite{shen2021backdoor} to speed up the reverse engineering.
Lastly, we use the method of Liu et al.~\cite{liu2022complex} to distinguish injected Trojans from UAPs (Universal Adversarial Perturbations)~\cite{moosavi2017universal}.
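The cost construction in lines 5-9 of Algorithm 1 can be sketched as follows. This is a minimal numpy sketch with toy stand-ins: \texttt{feats\_clean} plays the role of \(\mathcal{A}(\bm x)\), \texttt{feats\_trojan} of \(\mathcal{A}(F(\bm x))\), and \texttt{B} is any callable returning logits; the dynamic adjustment of \(w_1\) and \(w_2\) is simplified to fixed values.

```python
import numpy as np

def cross_entropy(logits, target):
    # Softmax cross-entropy of a batch of logits against a single target label.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[:, target].mean()

def cost1(x, Fx, feats_clean, feats_trojan, m, B, y_t,
          tau1=0.15, tau2=0.25, w1=1.0, w2=1.0):
    # Feature-space trigger pattern: masked mean over Trojan features (line 5).
    t = (m * feats_trojan).mean(axis=0)
    # Inner features with the feature-space Trojan stamped in.
    blended = (1 - m) * feats_clean + m * t
    cost = cross_entropy(B(blended), y_t)
    # Line 6: input-space MSE regularizer, added only when the bound is violated.
    mse = ((Fx - x) ** 2).mean()
    if mse >= tau1:
        cost = cost + w1 * mse
    # Lines 8-9: per-neuron standard deviation of Trojan features on the mask.
    std = (m * feats_trojan).std(axis=0).mean()
    if std >= tau2:
        cost = cost + w2 * std
    return cost
```

In the actual algorithm, this cost is backpropagated through the UNet parameters; here it only illustrates how the constraint violations enter the objective.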
\subsection{Details of Datasets}\label{sec:appendix_details_datasets}
In this section, we discuss the details of the datasets used in our experiments.
We also provide the details of the preprocessing for each dataset. All datasets are open-sourced.
The license for all datasets is the MIT license.
They do not contain any personally identifiable information or offensive content.
\smallskip
\noindent
\textbf{MNIST~\cite{lecun1998gradient}.}
This dataset is used for classifying hand-written digits.
It contains 60000 training samples in 10 classes.
The number of samples in the test set is 10000.
\smallskip
\noindent
\textbf{GTSRB~\cite{stallkamp2012man}.}
This dataset is built for traffic sign classification tasks.
The number of classes is 43.
The sample numbers for the training set and test set are 39209 and 12630, respectively.
\smallskip
\noindent
\textbf{CIFAR10~\cite{krizhevsky2009learning}.}
This dataset is used for recognizing general objects, e.g., dogs, cats, and planes.
It has 50000 training samples and 10000 test samples.
This dataset has 10 classes.
\smallskip
\noindent
\textbf{ImageNet~\cite{russakovsky2015imagenet}.}
This dataset is also a general object classification benchmark.
Note that we use a subset (containing 200 classes) of the original ImageNet dataset specified in ISSBA~\cite{li2021invisible}.
The subset has 100000 training samples and 10000 test samples.
Following standard convention on the image classification task, we scale the inputs to the range [0,1] and use mean-std normalization to preprocess the images.
In detail, the preprocessing can be written as \(\bm x^{\prime} = \frac{(\frac{\bm x}{255} - Mean)}{Std}\), where \(\bm x^{\prime}\) is the normalized input and \(\bm x\) is the original inputs.
The Mean value and Std (Standard Deviation) value for each channel on different datasets are summarized in \autoref{tab:preprocess}.
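As an illustration, this preprocessing step can be sketched in a few lines of numpy. The mean/std values below are the commonly used CIFAR-10 statistics and are illustrative only; the per-dataset values actually used are listed in the table.

```python
import numpy as np

# Illustrative statistics: the commonly used CIFAR-10 per-channel values.
CIFAR10_MEAN = np.array([0.4914, 0.4822, 0.4465])
CIFAR10_STD = np.array([0.2470, 0.2435, 0.2616])

def preprocess(x, mean=CIFAR10_MEAN, std=CIFAR10_STD):
    # x' = (x / 255 - mean) / std for an H x W x 3 image with values in [0, 255].
    return (np.asarray(x, dtype=float) / 255.0 - mean) / std
```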
\input{tf/preprocess}
\subsection{Details of Attacks}\label{sec:appendix_details_attacks}
In this section, we discuss the details of the used attacks.
By default, the attacks are in the all-to-one (i.e., single-target) setting, and the target label is randomly selected when we generate Trojaned models.
\noindent
\textbf{BadNets~\cite{gu2017badnets}.}
This attack uses a fixed pattern (i.e., a patch or a watermark) as Trojan triggers, and it generates Trojan inputs by simply pasting the pre-defined trigger pattern on the input.
It compromises the victim models by poisoning the training data (i.e., injecting Trojan samples and modifying their labels to target labels).
In our experiments, we use a \(3\times 3\) yellow patch located at the upper-left corner as the Trojan trigger.
The poisoning rate we used is 5\%.
The attack can be all-to-one (i.e., single-target) or all-to-all (i.e., label-specific).
For an all-to-one attack, all Trojan samples have the same target label.
For label-specific attacks, the samples in different original classes have different target labels.
In our experiment, the target label for label-specific attack is \(y_T = \eta(y) = y+1\), where \(\eta\) is a mapping and \(y\) is the correct label of the sample.
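A minimal numpy sketch of this poisoning step is given below. The yellow patch and the \(y_T = y + 1\) mapping follow the description above; wrapping the last class back to class 0 is our assumption, not stated in the text.

```python
import numpy as np

def poison_badnets(x, y, num_classes, patch_value=(255, 255, 0), size=3):
    # Paste a size x size yellow patch at the upper-left corner of an
    # H x W x 3 image and relabel with y_T = y + 1 (the modulo wrap-around
    # for the last class is an assumption).
    x = x.copy()
    x[:size, :size, :] = patch_value
    return x, (y + 1) % num_classes
```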
\noindent
\textbf{Filter Attack~\cite{liu2019abs}.}
This attack exploits image filters as triggers and creates Trojan samples by applying selected filters on images.
Similar to BadNets, the Trojans are injected with poisoning.
Following ABS~\cite{liu2019abs}, we use a 5\% poisoning rate and apply the Nashville filter from Instagram as the Trojan trigger.
\noindent
\textbf{WaNet~\cite{nguyen2021wanet}.}
This method achieves Trojan attacks via image warping techniques.
The trigger transformation of this attack is an elastic warping operation.
Different from BadNets and Filter Attack, in this attack, the adversary needs to modify the training process of the victim models to make the attack more resistant to Trojan defenses.
It is stealthy to human inspection, and it can also bypass many existing Trojan defense mechanisms~\cite{chen2018detecting,gao2019strip,wang2019neural,liu2018fine}.
In our experiments, the warping strength and the grid size are set to 0.5 and 4, respectively.
\noindent
\textbf{Input-aware Dynamic Attack~\cite{nguyen2020input}.}
This attack generates Trojan triggers via a trained generator network.
The trigger generator is trained with a diversity loss so that two different input images do not share the same trigger.
Similar to WaNet~\cite{nguyen2021wanet}, the attacker needs to control the training process.
\noindent
\textbf{SIG~\cite{barni2019new}.}
This method uses superimposed sinusoidal signals as Trojan triggers.
In this attack, the attacker can only poison a set of training samples but cannot control the full training process.
We set the poisoning rate as 5\%.
The frequency and the magnitude of the backdoor signal in our experiments are 6 and 20, respectively.
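A sketch of the SIG trigger, assuming the horizontal sinusoid \(v(i,j) = \Delta\sin(2\pi j f / w)\) from the original formulation, with the \(\Delta = 20\) and \(f = 6\) settings above:

```python
import numpy as np

def sig_signal(height, width, delta=20.0, freq=6):
    # Horizontal sinusoidal signal v(i, j) = delta * sin(2*pi*j*freq/width);
    # every row is identical, matching the SIG formulation.
    j = np.arange(width)
    row = delta * np.sin(2 * np.pi * j * freq / width)
    return np.tile(row, (height, 1))

def add_sig_trigger(x, delta=20.0, freq=6):
    # Superimpose the signal on each channel and clip to the valid pixel range.
    v = sig_signal(x.shape[0], x.shape[1], delta, freq)[..., None]
    return np.clip(x + v, 0, 255)
```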
\noindent
\textbf{Clean Label Attack~\cite{turner2019label}.}
This attack poisons the dataset without manipulating the labels of the poisoned samples, making the attack stealthier.
The poisoned samples are generated by a trained GAN.
In our experiments, we set the poisoning rate as 5\%.
\noindent
\textbf{ISSBA~\cite{li2021invisible}.}
This attack utilizes an encoder-decoder network to generate sample-specific triggers.
The generated triggers are invisible noises.
The generated noises also contain the information of a representative string of the target label.
The threat model of this attack is that the attacker can only poison the training data, but cannot control other components in training (e.g., the loss function).
Following the original paper, we poison 10\% training data in our experiments.
\subsection{Ablation Study on Trojan Mitigation}
\label{sec:ablation_mitigation}
In this section, we study the performance of \mbox{\textsc{FeatureRE}}\xspace under different constraint values and different numbers of used clean samples.
The attack used in this section is WaNet~\cite{nguyen2021wanet}.
\noindent
\textbf{Influence of constraint values.}
To investigate the influence of the constraint values (i.e., \(\tau_1\), \(\tau_2\), and \(\tau_3\)) on the Trojan mitigation performance, we vary \(\tau_1\) from 0.05 to 0.35, \(\tau_2\) from 0.10 to 0.50, and \(\tau_3\) from 1\% to 7\% of the whole feature space.
We collect the BA and ASR of the mitigated models and report them in \autoref{tab:ablation_mitigation}.
The results show that the mitigation performance of \mbox{\textsc{FeatureRE}}\xspace is not sensitive to \(\tau_1\) and \(\tau_2\).
For \(\tau_3\), when the size of the Trojan hyperplane is extremely small (e.g., 1\% of the feature space), the ASR is high.
This is understandable because breaking an extremely small feature space Trojan hyperplane means flipping a very small number of neurons, and it is not enough to completely remove the Trojans in the model.
Therefore, we set the default value of the hyperplane's size as 5\% of the feature space.
\input{tf/ablation_mitigation}
\noindent
\textbf{Number of clean reference samples.}
To understand the influence of clean set size on the Trojan mitigation task, we vary the number of used clean samples from 5 per class to 100 per class and report the BA and ASR of the mitigated models.
The results in \autoref{tab:ablation_mitigation_datanum} demonstrate that the performance of \mbox{\textsc{FeatureRE}}\xspace is robust when the number of used samples changes.
\input{tf/ablation_mitigation_datanum}
\subsection{Visualization of Reverse-Engineered Trojans}\label{sec:eval_visualization}
To understand our method and study if it can reverse-engineer Trojans accurately, we visualize the inputs and inner features of clean samples, real Trojan samples, and reversed Trojan samples on nine randomly selected samples in \autoref{fig:reversed_triggers}.
The model is ResNet18 injected with Filter Trojan~\cite{liu2019abs}, Blend Trojan~\cite{chen2017targeted} and SIG Trojan~\cite{barni2019new}.
In the feature space, the reverse-engineered Trojan is close to the real Trojan, demonstrating the effectiveness of our reverse-engineering method.
\input{figtex/reversed_triggers.tex}
\subsection{Generalization}\label{sec:generalization}
\textbf{Performance on mitigation task for more attacks.}
To measure the effectiveness of \mbox{\textsc{FeatureRE}}\xspace on Trojan mitigation task, we use more Trojan attacks and report BA and ASR of our method.
Besides the results of BadNets~\cite{gu2017badnets}, Filter~\cite{liu2019abs}, WaNet~\cite{nguyen2021wanet} and IA~\cite{nguyen2020input} in \autoref{tab:unlearning}, in \autoref{tab:mitigation_more_attacks}, we also show the BA and ASR on LS~\cite{gu2017badnets}, CL~\cite{turner2019label} and SIG~\cite{barni2019new}.
The dataset and the model used are CIFAR-10 and ResNet18, respectively.
For LS, CL, and SIG, the ASR of \mbox{\textsc{FeatureRE}}\xspace is 1.15\%, 2.62\%, and 1.22\%, which are 80.01, 33.18, and 81.22 times lower than that of undefended models.
As can be observed, \mbox{\textsc{FeatureRE}}\xspace can effectively reduce the ASR while keeping the BA nearly unchanged.
Thus, \mbox{\textsc{FeatureRE}}\xspace is robust to different attacks on mitigation task.
\input{tf/mitigation_more_attacks}
\textbf{Generalization to different models.}
To understand the generalization of \mbox{\textsc{FeatureRE}}\xspace to different model architectures, we evaluate its detection accuracy on BadNets~\cite{gu2017badnets}, Filter~\cite{liu2019abs}, WaNet~\cite{nguyen2021wanet}, IA~\cite{nguyen2020input}, LS~\cite{gu2017badnets}, CL~\cite{turner2019label}, and SIG~\cite{barni2019new} attacks using VGG16~\cite{simonyan2014very}, ResNet18~\cite{he2016deep}, Preact-ResNet18 (PRN18)~\cite{he2016identity}, LeNet5~\cite{lecun1998gradient}, and 4Conv+2FC~\cite{xu2019detecting}.
The results are summarized in \autoref{tab:generalization_to_more_models}.
In \autoref{tab:generalization_to_large_size}, we also report \mbox{\textsc{FeatureRE}}\xspace's performance on a larger model (i.e., Wide-ResNet34~\cite{zagoruyko2016wide}).
In all settings, the detection accuracy is above 80\%, and the average detection accuracy on VGG16, ResNet18, and PRN18 is 89.26\%, 91.43\%, and 90.71\%, respectively.
\mbox{\textsc{FeatureRE}}\xspace achieves high detection accuracy on all different models, demonstrating it is generalizable to different model architectures and larger models.
\textbf{Generalization to large input size.}
To see if \mbox{\textsc{FeatureRE}}\xspace can generalize to large datasets, we report its accuracy on the ImageNette\footnote{https://github.com/fastai/imagenette} dataset under different attacks.
The input size of ImageNette is 3 \(\times \) 224 \(\times \) 224.
The model architecture used here is Wide-ResNet34~\cite{zagoruyko2016wide}.
For each attack, we have 5 Trojaned models.
We also train 5 benign models.
The results are in \autoref{tab:generalization_to_large_size}.
For all different attacks, the detection accuracy of \mbox{\textsc{FeatureRE}}\xspace is above 80\%.
The average detection accuracy on a large input size is 91.43\%.
Thus, our method can generalize to large input sizes.
\input{tf/generalization_to_large_size}
\subsection{Efficiency}\label{sec:efficiency}
In this section, we measure the efficiency of \mbox{\textsc{FeatureRE}}\xspace.
Like existing reverse-engineering methods~\cite{wang2019neural,guo2019tabor,chen2019deepinspect}, it scans all labels.
We optimize this process with a K-arm scheduler~\cite{shen2021backdoor}, which uses the Multi-Arm Bandit to iteratively and stochastically select the most promising labels for optimization.
We measure the average runtime on the CIFAR-10 and ImageNet datasets.
The model used is ResNet18.
The running time on CIFAR-10 and ImageNet are 530.8s and 8934.5s, respectively.
\subsection{Discussions}\label{sec:findings_evaluation}
One finding we have is that using later layers to conduct the reverse-engineering is relatively better than using earlier layers (more results and details can be found in \autoref{sec:split}).
We also found that \mbox{\textsc{FeatureRE}}\xspace's performance under the clean-label attack is relatively worse than that of other attacks.
We suspect this is because the benign and Trojan features of the clean-label attack are highly mixed.
As a consequence, the clean label attack has lower ASR than other attacks.
For example, the ASR of the clean-label attack and BadNets are 86.94\% and 100.00\%, respectively.
\section{Background \& Motivation}\label{sec:background}
A DNN classifier is a function \(\mathcal{M} : \mathcal{X}\mapsto \mathcal{Y}\) where \(\mathcal{X}\) is the input domain \(\mathbb{R}^{m}\) and \(\mathcal{Y}\) is a set of labels \(K\).
A Trojan (or backdoor) attack against a DNN model \(\mathcal{M}\) is a malicious way of perturbing the input so that an adversarial input \(x^\prime \) (i.e., input with the perturbation pattern) will be classified to a target/random label while the model maintains high accuracy for benign input \(x\).
The perturbation pattern is known as the Trojan trigger.
Trojan attacks can happen in training (e.g., data poisoning) or model distribution (e.g., changing model weights or supply-chain attack).
Existing works have shown Trojan attacks against different DNN models, including computer vision models~\cite{gu2017badnets,liu2017trojaning,chen2017targeted}, Graph Neural Networks (GNNs)~\cite{xi2021graph, zhang2021backdoor}, Reinforcement Learning (RL)~\cite{kiourti2020trojdrl,wang2021backdoorl}, Natural Language Processing (NLP)~\cite{chen2021badnl,chan2020poison,yang2021careful,qi2021mind,yang2021rethinking,qi2021hidden}, recommendation systems~\cite{zhang2021pipattack}, malware detection~\cite{severi2021explanation}, pretrained models~\cite{shen2021backdoor,yao2019latent,jia2021badencoder}, active learning~\cite{vicarte2021double}, and federated learning~\cite{bagdasaryan2020backdoor,xie2019dba}.
The Trojan trigger can be a simple input pattern (e.g., a yellow pad)~\cite{gu2017badnets,liu2017trojaning,chen2017targeted} or a complex input transformation function (e.g., a CycleGAN to change the input styles)~\cite{cheng2020deep,salem2022dynamic,nguyen2021wanet,nguyen2020input,li2021invisible}.
If the trigger is static input space perturbations (e.g., a yellow pad), the Trojan attack is known as \textit{input-space Trojan}, and if the trigger is an input feature (e.g., an image style), the attack is referred to as the \textit{feature-space Trojan}.
There are different types of Trojan defenses.
A line of work~\cite{chen2018detecting,tran2018spectral,hayase2021defense} attempts to remove poisoned data samples by cleaning the training dataset.
Training-based methods~\cite{li2021anti,wang2022towards,huang2022backdoor} train benign classifiers even with the poisoned dataset.
These training time approaches work for poisoning-based attacks but fail to defend against supply chain attacks where the adversary injects the Trojan after the model is trained.
Another line of work, e.g., STRIP~\cite{gao2019strip}, SentiNet~\cite{chou2018sentinet}, and Februus~\cite{doan2020februus} aim to detect Trojan inputs during runtime.
It is hard to distinguish between a misclassification and a Trojan attack for a test input.
These runtime detection methods make assumptions about the attack, which stronger attacks can violate.
For example, STRIP fails to detect the Trojan inputs when the Trojan trigger is located around the center of an image or overlaps with the main object (e.g., feature space attacks).
Another limitation is that they examine the test inputs and perform various heavyweight tests, significantly delaying the response time.
Trigger reverse engineering~\cite{wang2019neural,liu2019abs,shen2021backdoor,guo2019tabor,chen2019deepinspect,tao2022better,liu2022piccolo,shen2022constrained} makes no assumptions about the attack method (e.g., poisoning or supply-chain attacks) and does not affect the runtime performance.
It inspects the model to check if a Trojan exists before deploying.
Given a DNN model \(\mathcal{M}\) and a small set of clean samples \(\mathcal{X}\), trigger reverse engineering methods try to reconstruct injected triggers.
If reverse engineering is successful, the model is marked as malicious.
Neural Cleanse (NC)~\cite{wang2019neural} proposes to perform reverse engineering by solving \autoref{eq:re}:
\begin{equation}\label{eq:re}
\mathop{\min}\limits_{\bm{m},\bm{t}} \quad \mathcal{L} \left( \mathcal{M}\left((1-\bm m) \odot \bm{x} + \bm m \odot \bm t \right), y_t\right) + r^{\star}
\end{equation}
where \(\bm x \in \mathcal{X}\) and \(\bm m\) is the trigger mask (i.e., a binary matrix with the same size as the input that determines whether each value will be replaced by the trigger), \(\bm t\) is the trigger pattern (i.e., a matrix with the same size as the input containing trigger values), and \(r^{\star}\) denotes attack constraints (e.g., the trigger size is smaller than 1/4 of the image).
\(\mathcal{L}\) is the cross-entropy loss function.
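One evaluation of \autoref{eq:re} can be sketched in numpy as below; instantiating \(r^{\star}\) as an \(\ell_1\) penalty on the mask follows NC's concrete choice, and \(\mathcal{M}\) is a toy stand-in, i.e., any callable returning logits.

```python
import numpy as np

def nc_objective(M, x, m, t, y_t, lam=1e-2):
    # Stamp the trigger: replace masked positions of x with the pattern t.
    stamped = (1 - m) * x + m * t
    logits = M(stamped)
    # Softmax cross-entropy against the target label y_t.
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -logp[:, y_t].mean()
    # r* here is NC's l1 penalty encouraging a small trigger mask.
    return ce + lam * np.abs(m).sum()
```

Minimizing this objective over \(\bm m\) and \(\bm t\) with gradient descent yields the reverse-engineered input-space trigger.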
Most prior works~\cite{liu2019abs,shen2021backdoor,guo2019tabor,chen2019deepinspect} follow the same methodology and inherently suffer from the same limitations.
First, they assume that an input space perturbation, denoted by \((\bm{m}, \bm{t}) \), can represent a trigger.
This assumption is valid for input-space triggers but does not hold for feature space attacks.
Second, \(r^{\star}\) consists of heuristic constraints observed from existing attacks.
For example, NC observed that most triggers are small and thus limits the trigger size to be no larger than a threshold value.
Otherwise, the trigger will overlap with the main object and decrease benign accuracy.
In practice, more advanced attacks can break such heuristics.
For instance, DFST~\cite{cheng2020deep} leverages CycleGAN to transfer images from one style to another without changing its semantics.
It changes almost all pixels in a given image.
This paper proposes a novel reverse engineering method that overcomes the limitations above for image classifiers.
\section{Conclusion}
\label{sec:conclusion}
\vspace{-0.2cm}
In this paper, we identify relationships between feature space hyperplanes and Trojans in DNNs.
Moreover, we propose a new Trojaned DNN detection and mitigation method based on our findings.
Compared to the state-of-the-art methods, our method achieves better performance on both detection and mitigation tasks.
\section{Methodology}\label{sec:design}
\subsection{Threat Model}\label{sec:threat}
This work aims to determine if a given model has a Trojan or not by reverse-engineering the corresponding trigger.
Following existing works~\cite{wang2019neural,liu2019abs,xu2019detecting}, we assume access to the model and a small dataset containing correctly labeled benign samples of each label.
In practice, such datasets can be gathered from the Internet.
We make no assumptions on how the attacker injects the Trojan (poisoning or supply-chain attack).
The attack can be formally defined as: \(\mathcal{M}(\bm{x}) = y, \mathcal{M}(F(\bm{x})) = y_T, \bm{x} \in \mathcal{X}\), where \(\mathcal{M}\) is the Trojaned model, \(\bm{x}\) is a clean input sample, and \(y_T\) is the target label.
\(F\) is the function to construct Trojan samples.
Input-space triggers add static input perturbations, and feature space triggers are input transformations.
The key difference between our work and existing work is that we consider the feature space triggers.
\subsection{Observation}\label{sec:obs}
In DNNs, the neuron activation values represent the model's functionality.
The input neurons denote the input space features, and inner neurons extract inner and more abstract features.
Existing reverse-engineering methods constrain the optimization problem in the input space using domain-specific constraints or observations.
For image classification tasks, the pixel value of each image has to be a valid RGB value.
Methods like NC observe that the trigger must be small and cannot overlap with the main object, and propose corresponding constraints.
The most challenging problem for reverse-engineering feature space triggers is how to constrain the optimization properly.
Note that there exists a set of neurons that, when activated to specific values, triggers the Trojan behavior.
Due to the black-box nature of DNNs, it is hard to identify which neurons are related to the Trojan behavior.
Moreover, if all the weight values are enlarged by the same scale, the output of the DNN will be the same, and as such, it is hard to constrain concrete activation values.
Without a proper constraint, we cannot form an optimization problem.
Our key observation to solve this problem is that \textit{neuron activation values representing the Trojan behavior are orthogonal to others}.
Recall that one property of DNN Trojans is that when adding the trigger to \textit{any} given input, the model will predict the output to a specific label.
That is, the trigger will always work regardless of the actual contents of the input.
In the feature space, when the model recognizes features of the Trojan, it will predict the label to the target label regardless of the other features.
These activation values form a hyperplane in the high-dimensional feature space so that they can be orthogonal to all others.
Based on this intuition, we performed empirical experiments to confirm our idea.
Specifically, we first use six Trojan attacks (BadNets~\cite{gu2017badnets}, Clean label attack~\cite{turner2019label}, Filter attack~\cite{liu2019abs}, WaNet~\cite{nguyen2021wanet}, SIG~\cite{barni2019new}, and Input-aware dynamic attack~\cite{nguyen2020input}) to generate Trojaned ResNet18 models on CIFAR-10. We then visualize the feature space of the last convolutional layers in these models.
In \autoref{fig:observation}, three dimensions, X, Y, and Z, represent the feature space.
We first apply PCA to get two eigenvectors of the benign training set;
then, we use the obtained eigenvectors as X-axis and Y-axis.
For the Z-axis, we first construct Trojan inputs to activate the model's Trojan behavior and identify neurons highly related to the Trojans.
Then, we use the DNN interpretability technique SHAP~\cite{lundberg2017unified} to estimate each neuron's importance to the Trojan behavior.
The neurons ranked in the top 3\% are called \textit{compromised neurons}.
Z-axis denotes the activation values of compromised neurons.
Namely, \(z = \|\mathcal{A}(F(\bm x)) \odot \bm m\|\), where \(\bm m\) denotes a mask revealing the positions of the compromised neurons.
\autoref{fig:observation} shows that most Trojan inputs have a similar z-value: they form a linear hyperplane in the feature space, while benign ones do not.
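The Z-axis computation can be sketched as follows; the importance scores are stand-ins for the SHAP values, and \texttt{feats\_trojan} plays the role of \(\mathcal{A}(F(\bm x))\):

```python
import numpy as np

def compromised_mask(importance, top_frac=0.03):
    # Select the top-3% of neurons by (e.g., SHAP) importance to the Trojan.
    k = max(1, int(round(top_frac * importance.size)))
    thresh = np.sort(importance)[-k]
    return (importance >= thresh).astype(float)

def z_value(feats_trojan, m):
    # z = ||A(F(x)) . m||: norm of the compromised-neuron activations.
    return np.linalg.norm(feats_trojan * m, axis=-1)
```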
\input{figtex/observation}
\subsection{Feature Space Trojan Hyperplane Reverse-engineering}
In this paper, we use \(\mathcal{A}\) to represent the submodel from the input space to the feature space, and \(\mathcal{B}\) the submodel from the feature space to the output space.
We also use \(\bm a = \mathcal{A}(\bm x)\) to denote the inner features of the model.
Similar to the reverse-engineering in the input space, given a model \(\mathcal{M}\) and a small set of benign inputs \(\mathcal{X}\), we use a feature space mask \(\bm m\) and a feature space pattern \(\bm t\) to represent the feature space Trojan hyperplane \(H = \{\bm{a}|\bm{m}\odot\bm{a} = \bm{m}\odot\bm{t}\}\).
Specifically, we can update \(\bm m\) and \(\bm t\) via the following optimization process:
\(\mathop{\min}\limits_{\bm{m},\bm{t}} \mathcal{L} \left( \mathcal{B}\left((1-\bm m) \odot \bm a + \bm m \odot \bm t \right), y_t\right)\).
\(y_t\) is the target label.
As discussed above, reverse-engineering the feature space is challenging.
In the input space, all values have natural physical semantics and constraints, e.g., a pixel value in the RGB value range.
Values in the feature space have uninterpretable meanings and are not strictly constrained.
Whether the result will have a physically meaningful semantic is also uncertain.
We solve these challenges by simultaneously optimizing the input space trigger function \(F\) and the feature space Trojan hyperplane \(H\) to enforce that the trigger has semantic meanings.
In detail, we compute the feature space trigger pattern as the mean inner features on the samples generated by the trigger function, i.e., \(\bm t = mean\left( \bm m \odot \mathcal{A}(F(\mathcal{X})) \right)\).
We also constrain the standard deviation of \(\bm m \odot \mathcal{A}(F(\mathcal{X}))\) to make sure the features generated by the trigger function will lie on the relaxation of the reverse-engineered hyperplane.
Formally, our reverse-engineering can be written as the constrained optimization problem shown in \autoref{eq:optimize}, where \(\mathcal{X}\) is the small set of clean samples.
We use deep neural networks to model the trigger function (i.e., \(F = G_{\theta}\)) because of their expressiveness~\cite{chen2019deepinspect,hornik1989multilayer}.
Specifically, we use a representative deep neural network UNet~\cite{ronneberger2015u}.
Given a model and a small set of clean inputs, the trigger function can be smoothly reconstructed via gradient-based methods, i.e., optimizing the generative model \(G_{\theta}\).
In our default setting, \(\mathcal{A}\) and \(\mathcal{B}\) are separated at the last convolutional layer.
More discussions are in the Appendix (\autoref{sec:split}).
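As a toy illustration of this split (numpy stand-ins, not the actual architecture), the model factors as \(\mathcal{M} = \mathcal{B} \circ \mathcal{A}\):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 16))   # stand-in for the layers up to the split point
W2 = rng.normal(size=(16, 10))  # stand-in for the layers after the split point

def A(x):
    # Input space -> feature space (here: one ReLU layer as a stand-in).
    return np.maximum(x @ W1, 0.0)

def B(a):
    # Feature space -> logits.
    return a @ W2

def M(x):
    # The full model is the composition of the two submodels.
    return B(A(x))
```

In our actual setting, \(\mathcal{A}\) ends at the last convolutional layer of the network, and the blended features are forwarded through \(\mathcal{B}\) only.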
\begin{equation}
\begin{split}
& \mathop{\min}\limits_{F,\bm{m}} \text{ } \mathcal{L} \left(\mathcal{B} \left((1-\bm m) \odot \bm a + \bm m \odot \bm t \right), y_t\right)\\
& \text{where } \bm{t} = \overline{\bm m \odot \mathcal{A}(F(\mathcal{X}))} \text{, } \bm a \in \mathcal{A}(\mathcal{X}) \\
& s.t. \quad \|F(\mathcal{X}) - \mathcal{X}\| \leq \tau_1,\text{ } std(\bm m \odot \mathcal{A}(F(\mathcal{X}))) \leq \tau_2,\text{ } \|\bm m\| \leq \tau_3
\end{split}
\label{eq:optimize}
\end{equation}
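For concreteness, checking the three constraints of \autoref{eq:optimize} on a batch can be sketched as follows (numpy; \texttt{FX} stands for \(F(\mathcal{X})\) and \texttt{feats\_trojan} for \(\mathcal{A}(F(\mathcal{X}))\); reading the std constraint as the per-neuron standard deviation across the batch is our interpretation):

```python
import numpy as np

def constraints_satisfied(X, FX, feats_trojan, m,
                          tau1=0.15, tau2=0.25, tau3_frac=0.05):
    # (1) input-space distance: MSE between transformed and original inputs.
    c1 = ((FX - X) ** 2).mean() <= tau1
    # (2) per-neuron std of masked Trojan features across the batch
    #     (our reading of std(m . A(F(X))) <= tau2).
    c2 = (m * feats_trojan).std(axis=0).mean() <= tau2
    # (3) mask size: at most tau3_frac of the whole feature space.
    c3 = np.abs(m).sum() <= tau3_frac * m.size
    return bool(c1 and c2 and c3)
```

During optimization these bounds are enforced softly, via the dynamically weighted penalty terms of Algorithm 1, rather than as hard feasibility checks.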
\begin{algorithm}[tb]
\caption{Feature-space Backdoor Reverse-engineering}\label{alg:detection1}
{\bf Input:} %
\hspace*{0.05in}
Model: \(\mathcal{M}\)\\ {\bf Output:} \hspace*{0.05in} Trojaned or Not, Trojaned Pairs \(T\) \begin{algorithmic}[1] \Function {Reverse-engineering}{$\mathcal{M}$} \For{{\rm (target class} \(y_t,\) {\rm source class} \(y_s\) {\rm )} {\rm in} \(K\)} \For{\(e \leq E\)} \State \(\bm x = sample(\mathcal{X}_{y_s})\) \State \(cost_1 = \mathcal{L}\left(\mathcal{B} \left((1-\bm m) \odot \bm a + \bm m \odot \bm t \right), y_t\right)\)
\If{ \( \|F(\bm x) - \bm x\| \geq \tau_1 \) } \State \(cost_1 = cost_1 + w_1\cdot\|F(\bm x) - \bm x\|\) \EndIf
\If{ \( std(\bm m \odot \mathcal{A}(F(\bm x))) \geq \tau_2 \) } \State \(cost_1 = cost_1 + w_2 \cdot std(\bm m \odot \mathcal{A}(F(\bm x)))\) \EndIf
\State \(\Delta_{\theta_F} = \frac{\partial cost_1}{\partial \theta_F}\) \State \(\theta_F = \theta_F - lr_1\cdot \Delta_{\theta_F}\)
\State \(cost_2 = \mathcal{L}\left(\mathcal{B} \left((1-\bm m) \odot \bm a + \bm m \odot \bm t \right), y_t\right)\)
\If{ \( \|\bm m\| \geq \tau_3 \) } \State \(cost_2 = cost_2 + w_3 \cdot \|\bm m\|\) \EndIf
\State \(\Delta_{\bm m} = \frac{\partial cost_2}{\partial \bm m}\) \State \(\bm m = \bm m - lr_2 \cdot \Delta_{\bm m}\)
\EndFor \If{ \( ASR \left(\mathcal{B} \left((1-\bm m) \odot \bm a + \bm m \odot \bm t \right), y_t\right) > \lambda \) } \State \(\mathcal{M}\) is a Trojaned model, \State \(T.append((y_s, y_t))\) \EndIf \EndFor
\EndFunction \end{algorithmic} \end{algorithm}
There are several constraints in the optimization problem: \ding{172} The transformed samples should be similar to the original image due to the properties of Trojan attacks, i.e., \(\|F(\bm x) - \bm x\| \leq \tau_1\).
Typically, the Trojan samples are visually similar to original samples for stealthy purposes.
In detail, we use MSE (Mean Squared Error) to calculate the distance between \(F(\bm x)\) and \(\bm x\).
\ding{173}
The Trojan features should lie in the relaxation of the reverse-engineered feature space Trojan hyperplane, i.e., \(\mathbb{P}\left(\bm a \in H^{\star} \mid \bm x \in F(\mathcal{X}) \right)\) should be high.
To achieve this goal, we constrain the standard deviation of different Trojan samples' activation values on each pixel in the hyperplane.
\ding{174}
Similar to input space trigger reverse-engineering~\cite{liu2019abs}, we set a bound for the size of the feature space trigger mask, i.e., \(\|\bm m\| \leq \tau_3\).
Here \(\tau_1\), \(\tau_2\), and \(\tau_3\) are threshold values.
We discuss their influence in~\autoref{sec:eval_ablation}.
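To make the soft-penalty structure of the constrained optimization concrete, the following is a minimal pure-Python sketch of the three terms (the actual implementation operates on PyTorch tensors; the helper names are ours, not from the released code):

```python
def mse(x, y):
    # Constraint 1: mean squared distance between F(x) and x.
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def std(values):
    # Constraint 2: standard deviation of masked activation values across samples.
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

def mask_norm(mask):
    # Constraint 3: size (L1 norm) of the feature-space trigger mask.
    return sum(abs(m) for m in mask)

def soft_penalty(cost, value, tau, w):
    # Add a weighted penalty only when the constraint value exceeds its
    # threshold, mirroring the if-statements in Algorithm 1.
    return cost + w * value if value >= tau else cost
```

In the algorithm, each of these quantities is compared against its threshold (\(\tau_1\), \(\tau_2\), \(\tau_3\)) and, if violated, added to the loss with its coefficient (\(w_1\), \(w_2\), \(w_3\)).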
The detailed reverse-engineering algorithm can be found in~\autoref{alg:detection1}, where \(K\) is a set containing all possible (source class, target class) pairs of the model.
\mbox{\textsc{FeatureRE}}\xspace scans all labels to identify the Trojan target labels.
\(w_1\), \(w_2\) and \(w_3\) are the coefficient values used in the optimization.
Following NC~\cite{wang2019neural}, we adjust them dynamically during optimization to make sure the reverse-engineered Trojan satisfies the constraints.
\(E\) is the maximal number of epochs.
Lines 5-11 optimize the trigger function \(F\), and lines 12-16 then optimize the mask \(\bm m\) of the feature space hyperplane.
In the end, we determine that the reverse-engineering is successful and that the label \(y_t\) is a Trojan target label if the attack success rate of the reversed Trojan is above a threshold value \(\lambda\) (80\% in this paper).
\subsection{Trojan Mitigation}
After reverse-engineering a Trojan, we can mitigate it by breaking the reverse-engineered feature space Trojan hyperplane. Based on our observation, the neurons in the feature space Trojan hyperplane are highly related to the Trojan behaviors.
Thus, breaking the hyperplane mitigates the Trojan.
Inspired by Zhao et al.~\cite{zhao2021ai}, we break it by flipping the neurons on it.
Our neuron-flip process can be written as
\autoref{eq:flip}, where \(\bm m\) is the reverse-engineered feature space mask and \(\bm a\) denotes the inner features.
\(\bm a_i\) is the activation value on the \(i^{th}\) neuron.
\begin{equation}\label{eq:flip}
Flip(\bm a) = \begin{cases} -\bm a_i, & \text{when \(\bm a_i\) in \(\bm m\)} \\ \bm a_i, & \text{when \(\bm a_i\) not in \(\bm m\)} \end{cases}
\end{equation}
The mitigated model \(\mathcal{M}^{\prime}(\bm x) = \mathcal{B}\left(Flip(\mathcal{A}(\bm x))\right)\), where \(\mathcal{A}\) and \(\mathcal{B}\) are submodels of the model.
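As a concrete illustration, the flip operation in \autoref{eq:flip} amounts to negating the masked coordinates of the activation vector. A minimal pure-Python sketch (the real implementation works on activation tensors, e.g. via `torch.where`):

```python
def flip(activations, mask):
    # Negate every activation lying on the reverse-engineered hyperplane
    # (mask entry 1); leave the remaining activations unchanged.
    return [-a if m else a for a, m in zip(activations, mask)]
```
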
\section{Discussion}\label{sec:discussion}
\vspace{-0.2cm}
\textbf{Limitations of our method.}
Similar to most existing Trojaned model detection and mitigation methods~\cite{wang2019neural,liu2019abs,liu2022piccolo,shen2021backdoor,shen2022constrained,guo2019tabor,chen2019deepinspect,liu2018fine,li2021neural,tao2022better}, our method requires a small set of clean samples.
In the real world, these samples can be obtained from the Internet.
\textbf{Ethics.}
This paper proposes a technique to detect and remove Trojans in DNN models.
We believe it will help improve the security of DNNs and be beneficial to society.
\section{Experiments and Results}
\label{sec:eval}
We first introduce our experiment setup (\autoref{sec:eval_setup}).
We then evaluate the effectiveness of \mbox{\textsc{FeatureRE}}\xspace on Trojan detection (\autoref{sec:effectiveness_detection}) and mitigation tasks (\autoref{sec:effectiveness_mitigation}).
We also evaluate the robustness of \mbox{\textsc{FeatureRE}}\xspace against different settings and the impacts of configurable parameters in \mbox{\textsc{FeatureRE}}\xspace (\autoref{sec:eval_ablation}).
In \autoref{sec:split}, we discuss how to split the model.
The adaptive attack can be found in \autoref{sec:adaptive}.
\subsection{Experiment Setup}
\label{sec:eval_setup}
We implement \mbox{\textsc{FeatureRE}}\xspace with python 3.8 and PyTorch.
All experiments are conducted on a Ubuntu 18.04 machine equipped with 64 CPUs and six GeForce RTX 6000 GPUs.
\input{tf/datasets}
\noindent
\textbf{Datasets and Models.}
We use four publicly available datasets to evaluate \mbox{\textsc{FeatureRE}}\xspace, including MNIST~\cite{lecun1998gradient}, GTSRB~\cite{stallkamp2012man}, CIFAR-10~\cite{krizhevsky2009learning} and ImageNet~\cite{russakovsky2015imagenet}.
We summarize our datasets in~\autoref{tab:dataset}.
We show the dataset names, the size of each input sample, the number of samples and the number of classes in each column.
Details of the datasets can be found in the Appendix.
For model architectures, we use LeNet5~\cite{lecun1998gradient}, Preact ResNet18 (PRN18)~\cite{he2016identity}, ResNet18~\cite{he2016deep}, a VGG-style network specified in ULP~\cite{kolouri2020universal}, and a model consisting of 4 convolutional layers and 2 dense layers used in Xu et al.~\cite{xu2019detecting}.
These datasets and models are widely used in Trojan-related research~\cite{gu2017badnets, liu2017trojaning,cheng2020deep,gao2019strip, wang2019neural, liu2019abs,xu2019detecting, liu2018fine}.
\noindent
\textbf{Evaluation Metrics.}
We measure the effectiveness of the Trojan detection task by collecting the detection accuracy (Acc).
Given a set of models consisting of benign and Trojaned models, the Acc is the number of correctly classified models over the number of all models.
We also report the detailed numbers of True Positives (TP, i.e., correctly detected Trojaned models), False Positives (FP, i.e., benign models classified as Trojaned models), False Negatives (FN, i.e., Trojaned models classified as benign models) and True Negatives (TN, i.e., correctly classified benign models).
For the Trojan mitigation task, we evaluate the benign accuracy (BA) and attack success rate (ASR)~\cite{veldanda2020nnoculation}.
BA is the number of correctly classified clean inputs over the number of all clean samples.
ASR is defined as the number of Trojan samples that successfully attack models over the number of all Trojan samples.
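For clarity, the three metrics can be sketched in a few lines of plain Python (function names are ours, not from the implementation):

```python
def detection_acc(preds, labels):
    # Acc for the detection task: 1 = Trojaned, 0 = benign.
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def benign_accuracy(preds, true_labels):
    # BA: fraction of clean inputs that are classified correctly.
    return sum(p == y for p, y in zip(preds, true_labels)) / len(true_labels)

def attack_success_rate(preds_on_trojan, y_target):
    # ASR: fraction of Trojan samples predicted as the attacker's target label.
    return sum(p == y_target for p in preds_on_trojan) / len(preds_on_trojan)
```
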
\noindent
\textbf{Baselines and Attack Settings.}
We evaluate the performance of \mbox{\textsc{FeatureRE}}\xspace on Trojan detection, and compare the results with four reverse-engineering based Trojan detection methods (i.e., ABS~\cite{liu2019abs}, DeepInspect~\cite{chen2019deepinspect}, TABOR~\cite{guo2019tabor}, and K-arm~\cite{shen2021backdoor}) and two classification based methods (i.e., ULP~\cite{kolouri2020universal} and Meta-classifier~\cite{xu2019detecting}).
For Trojan mitigation task, we compare \mbox{\textsc{FeatureRE}}\xspace with two advanced mitigation methods (i.e., NAD~\cite{li2021neural} and I-BAU~\cite{zeng2021adversarial}).
We use the default parameter settings described in the original papers of our baseline methods.
To understand the performance of \mbox{\textsc{FeatureRE}}\xspace and existing methods against various attack settings, we evaluate them against BadNets~\cite{gu2017badnets}, Filter Trojans~\cite{liu2019abs}, WaNets~\cite{nguyen2021wanet}, IA (Input-dependent dynamic Trojans)~\cite{nguyen2020input}, Clean-label~\cite{turner2019label}, SIG~\cite{barni2019new} and ISSBA (Invisible sample-specific Trojans)~\cite{li2021invisible} attacks.
These attacks are state-of-the-art attack methods and are widely evaluated in Trojan defense papers~\cite{wang2019neural,liu2019abs,zeng2021adversarial,li2021anti}.
If not specified, we use the all-to-one (i.e., single-target) setting for all attacks.
Label-specific setting is discussed in~\autoref{sec:eval_ablation}.%
\subsection{Effectiveness on Trojan Detection}
\label{sec:effectiveness_detection}
To measure the effectiveness on the Trojan detection task, we generate a set of benign and Trojaned models, and then use \mbox{\textsc{FeatureRE}}\xspace and existing Trojan detection methods to classify each model.
We collect the Acc, TP, FP, FN and TN results of each method and compare them.
Specifically, we first evaluate the performance of \mbox{\textsc{FeatureRE}}\xspace and compare the results with four state-of-the-art reverse-engineering based detection methods.
We generate 20 Trojaned models as well as 20 benign models on CIFAR-10 dataset for each attack (i.e., BadNets, Filter Trojan, WaNet and Input-aware dynamic Trojan attack).
For the MNIST and GTSRB datasets, we train 10 Trojaned and 10 benign LeNet5~\cite{lecun1998gradient} models on each dataset.
We then compare \mbox{\textsc{FeatureRE}}\xspace with two state-of-the-art classification based detection methods.
Similarly, we generate 10 benign and 10 Trojaned models, and use Trojan detection methods to classify these models.
Notice that, in all Trojan detection tasks, we assume the defender can only access 10 clean samples for each class, which is a common practice~\cite{wang2019neural,liu2019abs,shen2021backdoor}.
The comparison results of reverse-engineering based methods are shown in~\autoref{tab:effectiveness}.
The results of two classification based methods are demonstrated in~\autoref{tab:effectiveness_classification_ulp} and \autoref{tab:effectiveness_classification_meta}.
In each table, we show the detailed settings, including dataset names, network architectures, and attack settings.
\input{tf/effectiveness.tex}
\input{tf/compared_to_classification.tex}
\noindent
\textbf{Comparison to Reverse-engineering based methods.}
From the results in~\autoref{tab:effectiveness}, we observe that \mbox{\textsc{FeatureRE}}\xspace achieves the best detection results compared with other methods.
The average Acc of \mbox{\textsc{FeatureRE}}\xspace is 93\%, which is %
17\%, 23\%, 35\% and 23\%
higher than those of other defense methods.
The results show the benefit of \mbox{\textsc{FeatureRE}}\xspace.
When looking into the generalization of Trojan detection methods, we find that \mbox{\textsc{FeatureRE}}\xspace can achieve excellent results on both input-space Trojans (i.e., BadNets) and feature-space Trojans (i.e., Filter, WaNet and IA attacks).
However, the performance of existing reverse-engineering methods on feature-space Trojans (i.e., Filter, WaNet and IA attacks) is significantly worse than the performance on static Trojans.
\mbox{\textsc{FeatureRE}}\xspace achieves 94\% average Acc, but the Acc of TABOR on feature-space Trojans is only 53\%, 50\% and 50\%, respectively. Moreover, \mbox{\textsc{FeatureRE}}\xspace has 15.33 TP on average, but existing methods only have 7.87 TP.
\mbox{\textsc{FeatureRE}}\xspace can generalize better than existing work because \mbox{\textsc{FeatureRE}}\xspace considers both feature and input space constraints.
Existing methods, on the contrary, only consider the input space constraints.
They cannot detect feature-space Trojans whose triggers are complex and input-dependent, and they directly classify many Trojaned models with feature-space Trojans as benign.
\noindent
\textbf{Comparison to classification based methods.}
When comparing \mbox{\textsc{FeatureRE}}\xspace with classification based methods, we notice that \mbox{\textsc{FeatureRE}}\xspace has better Acc, more TPs and TNs than classification based methods ULP and Meta-classifier.
As demonstrated in~\autoref{tab:effectiveness_classification_ulp} and~\autoref{tab:effectiveness_classification_meta}, the Acc of \mbox{\textsc{FeatureRE}}\xspace is 93\% and 95\%, which is 40\% and 15\% higher than those of ULP and Meta-classifier.
Overall, the results indicate that \mbox{\textsc{FeatureRE}}\xspace is more effective than classification based methods when detecting Trojaned models.
Different from \mbox{\textsc{FeatureRE}}\xspace, which directly inspects a model by analyzing its inherent feature space properties, classification based methods highly depend on an external dataset of trained models.
Therefore, their results are not as precise as those of \mbox{\textsc{FeatureRE}}\xspace.
\input{tf/unlearning.tex}
\subsection{Effectiveness on Trojan Mitigation}
\label{sec:effectiveness_mitigation}
We evaluate the effectiveness of \mbox{\textsc{FeatureRE}}\xspace on Trojan mitigation and compare the results with state-of-the-art methods NAD and I-BAU.
We use the Trojaned models generated by three attacks (i.e., Filter attack, WaNet and IA)
and report their average BA and ASR after Trojan mitigation.
We also show the average BA and ASR of undefended Trojaned models.
For all methods,
the defenders can access 10 clean samples for each class to conduct Trojan mitigation.
We show the results in~\autoref{tab:unlearning}.
We find that \mbox{\textsc{FeatureRE}}\xspace is the most effective method for Trojan mitigation among all methods.
Compared to state-of-the-art Trojan mitigation methods, \mbox{\textsc{FeatureRE}}\xspace achieves the lowest average ASR and the highest average BA.
On the one hand, using \mbox{\textsc{FeatureRE}}\xspace can decrease the average ASR from 96.76\% to 0.26\%.
Other methods can only decrease the average ASR to 40.14\% and 7.29\%.
The results show the advantages of \mbox{\textsc{FeatureRE}}\xspace on Trojan mitigation.
On the other hand, the BA with \mbox{\textsc{FeatureRE}}\xspace is similar to undefended models.
But the BA of other methods is significantly lower than that of undefended models.
By breaking the feature space hyperplane, \mbox{\textsc{FeatureRE}}\xspace can successfully mitigate Trojans with minimal BA loss.
Other methods, which cannot effectively find Trojan-related features, cannot achieve good results.
\subsection{Ablation Study}
\label{sec:eval_ablation}
In this section, we evaluate the resistance of \mbox{\textsc{FeatureRE}}\xspace to various Trojan attack settings and large datasets.
We also evaluate the impacts of configurable parameters in \mbox{\textsc{FeatureRE}}\xspace, including the constraint values used in \autoref{eq:optimize} and the number of used clean samples.
By default, the attack used for measuring the impacts of configurable parameters is IA.
We use 20 benign ResNet models and 20 Trojaned ResNet models on CIFAR-10 to test the detection results.
Notice that we only evaluate the performance on the Trojan detection task.
Due to the page limits, we include the ablation study on Trojan mitigation in Appendix (\autoref{sec:ablation_mitigation}).
\input{tf/more_settings.tex}
\noindent
\textbf{Resistance to various attack and dataset settings.}
To evaluate if our method is resistant to more Trojan attacks, we train 20 Trojaned ResNet18 models on CIFAR-10 for Label-specific attack (LS), Clean-label attack (CL) and SIG attack (SIG).
For the label-specific attack, we consider the all-to-all attack setting, i.e., the target label \(y_T = \eta(y) = y+1\), where \(\eta\) is a mapping and \(y\) is the correct label of the sample.
In addition, we generate five benign models and five Trojaned models with ISSBA attacks on ImageNet to evaluate if our method is compatible with large-scale datasets.
We summarize the results in~\autoref{tab:more_settings}.
In \autoref{tab:more_settings}, we find that \mbox{\textsc{FeatureRE}}\xspace is compatible with evaluated Trojan attacks, showing the generalization of our reverse-engineering based method. We also observe that our method has high Acc on the ImageNet dataset with ISSBA~\cite{li2021invisible}. Thus, our method is also applicable to large datasets.
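The all-to-all mapping \(y_T = \eta(y) = y+1\) used above can be sketched as follows (the wrap-around of the last class to class 0 is our assumption for the toy example):

```python
def all_to_all_target(y, num_classes):
    # eta: map every source class y to target class y + 1,
    # wrapping the last class back to class 0.
    return (y + 1) % num_classes
```
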
\noindent
\textbf{Influence of constraint values.}
As shown in \autoref{eq:optimize}, there are three constraint values (\(\tau_1\), \(\tau_2\), \(\tau_3\)) in our constrained optimization process.
By default, \(\tau_1 = 0.15\), \(\tau_2 = 0.25\) and \(\tau_3 = 5\%\).
We evaluate their influences.
For \(\tau_1\), we calculate input space perturbations on the preprocessed inputs, and the details of the preprocessing can be found in Appendix (\autoref{sec:appendix_details_datasets}).
We vary \(\tau_1\) from 0.05 to 0.35, change \(\tau_2\) from 0.10 to 0.50, and tune \(\tau_3\) from 3\% of the whole feature space to 10\% of the whole feature space.
The results under different hyperparameter settings are shown in
\autoref{tab:ablation}.
From the results, we observe that the performance of \mbox{\textsc{FeatureRE}}\xspace is insensitive to these three hyperparameters.
In detail, when we vary \(\tau_1\), \(\tau_2\) and \(\tau_3\), the Acc is stable.
In all cases, our method always achieves over 90\% detection accuracy.
The results further show the robustness of \mbox{\textsc{FeatureRE}}\xspace.
We also find that, when the values of all hyperparameters become lower, \mbox{\textsc{FeatureRE}}\xspace has more FN.
On the contrary, when their values are larger, more FP will be produced.
This is understandable because lower constraint values mean a stricter criterion for a successful reverse-engineering.
\input{tf/ablation.tex}
\noindent
\textbf{Number of clean reference samples.}
Our threat model and existing work assume the defender can access a set of clean samples for defense.
To investigate the influences of the number of used clean samples in Trojan detection, we choose the number from 1 to 100 in each class and report the Acc results.
The results are shown in~\autoref{fig:diff_datanum}.
From the results, we notice that the Acc decreases significantly when we use less than 10 samples for each class.
This is because the number of used samples affects the optimization process.
When the number of used samples is too small, the optimization process might be problematic, e.g., it encounters an overfitting problem.
When the number of used samples is larger than 10, \mbox{\textsc{FeatureRE}}\xspace achieves high detection accuracy (i.e., above 95\%) and the Acc will not change significantly when the number of used samples keeps increasing.
The reason is that using more data helps the optimization process converge and finally arrive at a stable state.
Note that requiring hundreds of clean samples is common for reverse-engineering based methods~\cite{wang2019neural,liu2019abs,guo2019tabor} and other types of defenses~\cite{gao2019strip,li2021neural,zeng2021adversarial,li2021anti}.
\mbox{\textsc{FeatureRE}}\xspace only requires 10 clean samples for each class, which is more efficient.
\subsection{Discussion for Model Split}\label{sec:split}
As we discussed in \autoref{sec:design}, our method splits the model \(\mathcal{M}\) into two sub-models \(\mathcal{A}\) and \(\mathcal{B}\).
In this section, we discuss the influence of using different split positions.
\autoref{tab:split} shows the results of using different \(\mathcal{A}\) and \(\mathcal{B}\) on the ResNet18 model and CIFAR-10 dataset.
In detail, we report the results of splitting the
model at the \nth{9}, \nth{11}, \nth{13}, \nth{15}, and the last convolutional layer.
The average detection accuracy for splitting at the \nth{9} layer, \nth{11} layer, \nth{13} layer, \nth{15} layer, and last layer is 86.50\%, 87.75\%, 89.50\%, 94.00\%, and 94.75\%, respectively.
As we can see, the performance of splitting at later layers is higher than the performance of splitting at earlier layers.
In our current implementation, we set \(A(x)\) as the sub-model from the input layer to the last convolution layer and \(B(x)\) as the rest.
The relationship between the input and the output of a convolutional layer \(L_n\) is \(x_{n+1} = L_n(x_n) = \sigma (\mathbf{W}_{n}^\mathbf{T}x_n+\mathbf{b}^\mathbf{T}_n)\), where \(x_n\) and \(x_{n+1}\) are the inputs and outputs of layer \(n\), \(\mathbf{W}_n\) and \(\mathbf{b}_n\) are weights and bias values, and \(\sigma \) is the activation function.
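The layer relation above can be illustrated with a toy dense version in plain Python, using ReLU for \(\sigma\) (a convolutional layer applies the same affine-then-nonlinearity pattern over sliding windows):

```python
def layer(x, W, b):
    # x_{n+1} = sigma(W x_n + b): one row of W per output neuron,
    # with sigma chosen here as ReLU.
    z = [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]
    return [max(0.0, v) for v in z]
```
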
\input{tf/split}
Based on the existing literature~\cite{zhou2018interpretable,bau2020understanding}, the features in deeper CNN layers are more disentangled than those in earlier layers.
Thus, if the orthogonal phenomenon happens in a layer \(L_{n}\), it will exist for all its subsequent layers, e.g., \(L_{n+1}\).
If the orthogonal phenomenon does not happen, the layer without this phenomenon will mix benign and backdoor features, leading to low benign accuracy or attack success rate.
The results in \autoref{tab:adaptive} confirm our analysis.
Thus, a successful backdoor attack will lead to the orthogonal phenomenon in the last convolution layer.
\vspace{-0.2cm}
\subsection{Adaptive Attack}\label{sec:adaptive}
\vspace{-0.2cm}
Our threat model assumes that the attacker can control the training process of the Trojan model.
In this section, we discuss the potential adaptive attacker that knows our defense strategy and tries to bypass \mbox{\textsc{FeatureRE}}\xspace via modifying the training process.
Our observation is that the neuron activation values representing the Trojan behavior are orthogonal to others.
One possible adaptive attack is breaking such orthogonal relationships during the Trojan injection process.
We design an adaptive attack that adds a loss term to push the Trojan features to be non-orthogonal to benign features.
This attack can be formulated as \(L = L_{ce} + L_{adv}\), where \(L_{ce}\) is the standard classification loss and \(L_{adv}\) is defined as:
\vspace{-0.2cm}
\begin{equation}\label{eq:adaptive}
L_{adv} = \text{SIM}(\mathcal{B}(m\odot a + (1-m) \odot t), \mathcal{B}(m\odot a^{\prime} + (1-m) \odot t))
\end{equation}
\vspace{-0.2cm}
Here, \(\text{SIM}\) is the cosine similarity;
\(a\) and \(a^{\prime}\) are the features of different benign samples;
\(m\) and \(t\) are the feature-space mask and pattern of the compromised neurons obtained via SHAP~\cite{lundberg2017unified}.
The loss term \(L_{adv}\) tries to enforce that the Trojan features are not orthogonal to the benign ones.
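A minimal sketch of the two ingredients of \(L_{adv}\) in plain Python (in the attack itself, SIM is computed on the outputs of the submodel \(\mathcal{B}\) applied to tensors):

```python
def cosine_sim(u, v):
    # SIM(u, v) = <u, v> / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def blend(m, a, t):
    # m ⊙ a + (1 - m) ⊙ t: masked coordinates come from the benign
    # features a, the remaining ones from the Trojan pattern t.
    return [mi * ai + (1 - mi) * ti for mi, ai, ti in zip(m, a, t)]
```
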
We conduct this adaptive attack on the CIFAR-10 dataset and ResNet18 model.
The results can be found in \autoref{tab:adaptive}.
The detection accuracy of \mbox{\textsc{FeatureRE}}\xspace under adaptive attack drops to 65\%.
Meanwhile, the average BA/ASR of the adaptive attack and BadNets (native training) is 87.36\%/94.34\% and 93.67\%/99.98\%, respectively.
The adaptive attack can reduce the detection accuracy of our method.
Both the BA and ASR of the adaptive attack are lower than those of native training.
The results confirm our analysis in \autoref{sec:split}: the model without the ``orthogonal phenomenon'' will mix benign and Trojan features, leading to low benign accuracy or attack success rate.
\vspace{-0.2cm}
\section{Introduction}\label{sec:intro}
DNNs are vulnerable to Trojan attacks~\cite{gu2017badnets, liu2017trojaning,cheng2020deep, doan2021lira,salem2022dynamic,wang2022bppattack}.
After injecting a Trojan into the DNN model, the adversary can manipulate the model prediction by adding a \textit{Trojan trigger} to the input so that the model outputs the target label.
The adversary can inject the Trojan by performing the poisoning attack or supply chain attack.
In the poisoning attack, the adversary controls the training dataset and injects the Trojan by adding samples stamped with the Trojan trigger and labeled as the target label.
In the supply chain attack, the adversary replaces a benign model in the model supply chain with a Trojaned one.
Trojan triggers are becoming more and more stealthy.
Earlier works use static patterns, e.g., a yellow pad, as the trigger; these are known as input space triggers.
Researchers recently proposed using more dynamic and input-aware techniques to generate stealthy triggers that mix with benign features, which are referred to as feature space triggers.
For example, the trigger of the feature-space Trojans can be a warping process~\cite{nguyen2021wanet} or a generative model~\cite{cheng2020deep,nguyen2020input,li2021invisible}.
The Trojan attack is a prominent threat to the trustworthiness of DNN models, especially in security-critical applications, such as autonomous driving~\cite{gu2017badnets}, malware classification~\cite{severi2021explanation}, and face recognition~\cite{sarkar2020facehack}.
Prior works have proposed several ways to defend against Trojan attacks, such as removing poisons in training~\cite{du2019robust,chen2018detecting,tran2018spectral}, detecting Trojan samples at runtime~\cite{gao2019strip,chou2018sentinet,doan2020februus,ma2019nic}, etc.
Many of the above methods only work for one type of Trojan attack.
For example, training and pre-training time defenses (e.g., removing poisoning data, training a benign model with poisoning data) fail to defend against the supply chain attack.
Trigger reverse-engineering~\cite{wang2019neural,liu2019abs,shen2021backdoor,guo2019tabor,chen2019deepinspect} is a general method to defend against different Trojan attacks under different threat models.
It works by searching the given model for an input pattern that can be used as a trigger.
If such a trigger can be found, the model contains a corresponding Trojan and is marked as malicious; otherwise, it is considered benign.
Existing reverse-engineering methods assume that the Trojan triggers are static patterns in the input space and develop an optimization problem that looks for an input pattern that can be used as the trigger.
This assumption is valid for input space attacks~\cite{gu2017badnets,liu2017trojaning,chen2017targeted} that use static triggers (e.g., a colored patch).
Feature space attacks~\cite{cheng2020deep,doan2021lira,salem2022dynamic,nguyen2021wanet,nguyen2020input,li2021invisible,lin2020composite} break this assumption.
Existing trigger reverse-engineering methods~\cite{wang2019neural,liu2019abs,shen2021backdoor,guo2019tabor,chen2019deepinspect} constrain the optimization by using heuristics or empirical observations on existing attacks, such as that pixel values are in the range \([0, 255]\) and that the trigger's size is small.
Such heuristics are also invalid for feature space triggers that change all pixels in images.
Reverse-engineering the feature space is challenging.
Unlike input space, there are no constraints that can be directly used.
\input{figtex/hyperplane_in_feature.tex}
In this paper, we propose a trigger reverse-engineering method that works for feature space triggers.
Our intuition is that \textit{features representing the Trojan are orthogonal to other features.}
Because a trigger works for a set of samples (or all of them, depending on the attack type), changing the input content without removing the Trojan features will not change the prediction.
That is, changing Trojan and benign features will not affect each other.
Trojan features will form a hyperplane in the high dimensional space, which can constrain the search in feature space.
We then develop our reverse-engineering method by exploiting the feature space constraint.
\autoref{fig:hyperplane_in_feature} demonstrates our idea.
Existing reverse-engineering methods only consider the input space constraint.
They conduct reverse-engineering by searching for a static trigger pattern in the input space.
These methods fail to reverse-engineer feature-space Trojans whose triggers are dynamic in the input space.
Instead, our idea is to exploit the feature space constraint and search for a feature space trigger under the constraint that the Trojan features form a hyperplane.
At the same time, we also reverse-engineer the input space Trojan transformation based on the feature space constraint.
To the best of our knowledge, we are the first to propose feature space reverse-engineering methods for backdoor detection.
Through reverse-engineered Trojans, we developed a Trojan detection and removal method.
We implemented a prototype \mbox{\textsc{FeatureRE}}\xspace (\textbf{FEATURE}-space \textbf{RE}verse-engineering) in Python and PyTorch and evaluated it on the MNIST, GTSRB, CIFAR, and ImageNet datasets with seven different attacks (i.e., BadNets~\cite{gu2017badnets}, Filter attack~\cite{liu2019abs}, WaNet~\cite{nguyen2021wanet}, Input-aware dynamic attack~\cite{nguyen2020input}, ISSBA~\cite{li2021invisible}, Clean-label attack~\cite{turner2019label}, Label-specific attack~\cite{gu2017badnets}, and SIG attack~\cite{barni2019new}).
Our results show that \mbox{\textsc{FeatureRE}}\xspace is effective.
On average, the detection accuracy of our method is 93\%, outperforming existing techniques.
For Trojan mitigation, our method can reduce the ASR (attack success rate) to only 0.26\% with the BA (benign accuracy) remaining nearly unchanged by using only ten clean samples for each class.
Our contributions are summarized as follows.
We first find the feature space properties of the Trojaned model and reveal the relationship between Trojans and the feature space hyperplanes.
We propose a novel Trojan trigger reverse-engineering technique leveraging the feature space Trojan hyperplane.
We evaluate our prototype on four different datasets, five different network architectures, and seven advanced Trojan attacks.
Results show that our method outperforms SOTA approaches.
\section{Introduction}
Due to successful application of machine learning, and deep learning \cite{lecun2015deep} in particular, we have lately observed great progress in traditional Computer Vision (CV) tasks such as image classification \cite{krizhevsky2012imagenet}, object detection and image segmentation \cite{girshick2016region}.
These advances have pushed researchers towards more complex tasks such as image caption generation \cite{karpathy2015deep} and VQA~\cite{antol2015vqa,malinowski2014towards}.
Both of those tasks combine Computer Vision with Natural Language Processing (NLP) and high-level reasoning, thus requiring the utilization of multi-modal knowledge beyond a single sub-domain (such as CV).
In the case of caption generation, however, there may be several valid captions describing a given image. Thus, it is hard to define a quantitative evaluation metric to track the progress.
In contrast, in VQA the image and question pairs might come with ground-truth answers, enabling measurement of the system's accuracy.
Moreover, as there might be many questions referring to the same image, the system is in fact forced to learn various reasoning types depending on the type of question.
For those reasons VQA is considered to be AI complete~\cite{antol2015vqa} and renews the hope of building machines that could pass the Turing test in open domains~\cite{malinowski2014towards}.
During the last three years several complex VQA datasets have been introduced; however, many of the solutions show only marginal improvements over the strong baselines.
As there are many influencing factors it is often hard to tell which modules of the system are working properly and why.
The CLEVR dataset~\cite{johnson2017clevr} was created with the aim of pushing forward the progress on VQA in a more systematic manner, with the main focus put on the validation of reasoning skills.
The dataset delivers a well-balanced set of images and questions, where the information associated with each image is complete and exclusive.
Along with the dataset the authors provided several, sometimes quite sophisticated baselines, and concluded that those models have not learned the semantics of spatial reasoning at all.
In this paper we decided to focus on that aspect of the problem, i.e., spatial reasoning. The hypothesis we would like to validate is whether operating on high-level and abstract facts extracted from the image might improve the accuracy of the system (as suggested e.g. in~\cite{anderson2017bottom}).
For that reason, we developed a solution operating on object-relation-object triplets~\cite{dai2017detecting}, similar to the relational network~\cite{santoro2017simple}; however, instead of using features extracted from the images, we rely on the information associated with the objects detected in the image.
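The pairwise aggregation at the heart of a relational network can be sketched as follows, with `g` standing in for the learned MLP that scores each pair of object encodings (here a toy numeric function; whether the \(i = j\) pairs are included varies between implementations):

```python
def relational_sum(objects, g):
    # Aggregate g(o_i, o_j) over all ordered pairs of distinct objects.
    return sum(g(objects[i], objects[j])
               for i in range(len(objects))
               for j in range(len(objects)) if i != j)
```
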
The achieved results are comparable with current state-of-the-art solutions in terms of overall accuracy and show significant improvement on the counting task.
\section{Conclusions}
\label{sec:conclusions}
In this paper we have focused on reasoning using high-level abstract facts.
For that purpose, instead of relying on features extracted from the images, we used facts in the form of encoded objects detected by the Faster R-CNN detector.
Those abstract facts were next passed to a reasoning module, developed with the aim of learning object-object relations.
The achieved overall accuracy is comparable with the current state-of-the-art results.
Analysis of the results unveiled that the proposed solution gives more stable results for different tasks, and, moreover, shows improvement in 2 out of 5 CLEVR tasks.
In particular, it gives a significant improvement of a few percent in the Counting task, which is currently considered one of the most complex tasks in VQA and is gaining more and more attention; e.g., in~\cite{trott2017interpretable} the authors proposed a new dataset called HowMany-QA, devoted solely to that problem.
The detailed analysis of the operation of the components of our system has shown that OD is working properly, whereas the difference in accuracy comes from the Relational Network module.
According to the results reported in~\cite{santoro2017simple}, the RN module working in isolation is supposed to give an overall accuracy around 2\% higher than the one we have managed to achieve.
We treat that as a promising indication that an RN operating on more abstract features should be able to achieve even better results.
As the reasoning module used is the bottleneck of the system, in our future work we want to focus on this part of the system and experiment with other approaches.
The most important research directions, aiming at improvement of our system and overcoming its limitations, are as follows.
The first limitation is the aggregation of relations detected by the MLPs -- as the number of objects in a scene might vary, utilization of a recurrent neural network seems natural.
An interesting solution to that problem was recently proposed in~\cite{palm2017recurrent}, where the authors developed a novel neural-based message passing mechanism called Recurrent Relational Networks and have shown its superiority on Sudoku puzzles and the BaBi textual QA.
The second limitation of our system is that the reasoning performed in the Semantic Embedding module is strictly sequential, with a hardcoded number of reasoning steps, whereas the number of reasoning steps should in fact vary depending on the given task. One possible solution addressing that problem is to employ the Adaptive Computation Time (ACT) mechanism proposed in~\cite{graves2016adaptive}.
Yet another promising direction is the utilization of external memory (i.e. memory-augmented neural networks such as the Neural Turing Machine~\cite{graves2014neural}) for learning more complex reasoning schemes and memorizing an abstract, graph-like representation of the observed scene before generating the final answer.
The latter is a natural extension for reasoning over explicit high-level representations of the contents of the image and shows very good results as reported e.g. in~\cite{wang2016vqa}.
Finally, we also want to improve our model by training it jointly on several tasks (e.g. word-level modeling, object detection and VQA) -- it was recently shown e.g. in \cite{kaiser2017one} that such a multi-task learning enables the model to develop unified representation and results in improved overall accuracy.
\section*{Acknowledgements}
We would like to thank Alexis Asseman for setting up and managing our hardware, enabling us to run our experiments smoothly, Ahmet S. Ozcan for his insights and proofread, T.S. Jayram, Vincent Albouy, Ben Lyo and other members of our Machine Intelligence team in IBM Research, Almaden, for critical feedback and discussions.
{\small
\bibliographystyle{ieee}
\section{Our solution}
\label{sec:our_solution}
The architecture of our system is presented in \fig{fig:scheme_general}. It extends the Encoder-Decoder~\cite{cho2014learning} architecture, which originally consisted of two RNNs: the first one encodes a sequence of input symbols into a fixed-length representation, and the other decodes that representation into another sequence of output symbols.
In VQA there are two input channels with different modalities, thus we need two types of encoders and an intermediate module responsible for projection of both the visual and language representations into a common semantic embedding space.
Following that scheme, Xiao et al. \cite{xiao2017weakly} adopted the convolutional layers of the VGG network followed by an average pooling across the output features as the "visual" encoder and used a two-layer stacked LSTM network as the "language" encoder.
The authors assumed that the output of the language encoder is already in the semantic space and they used a two-layer-perceptron to project the output of the visual encoder into the same space.
Similarly, the Refined Ask Your Neurons architecture~\cite{malinowski2017ask} consisted of a visual encoder, question encoder, a multimodal embedding module combining both encodings into a joint space, and an answer decoder. The authors compared results achieved for several combinations of different question (e.g. BoW, CNN, GRU and LSTM) and visual (several classical architectures including AlexNet, GoogLeNet, VGG-19, ResNet-152) encoders.
\subsection{Image encoder}
In our system the image encoder realizes two major operations: object detection (OD) and object encoding.
The diagram for object detection is shown in \fig{fig:scheme_object_detection}.
In a given $t$-th image $I^t$ we detect a set of $N$ objects:
\begin{equation}
_d\textbf{O}^t = [_{d}o^t_0,\ _{d}o^t_1,\ \ldots,\ _{d}o^t_{N-1} ],
\end{equation}
each defined as $_{d}o^t_n = < _{d}i^t_n,\ _{d}b^t_n > $, where $_{d}i^t_n$ denotes the class identifier (1 to 96) of the object and $_{d}b^t_n$ contains the four parameters defining the object bounding box.
We decided to use Faster R-CNN~\cite{ren2015faster} with ResNet-101~\cite{he2016deep} pre-trained on the COCO dataset, since it is one of the most reliable methods for object detection currently available.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.65\columnwidth]{img/scheme_object_detection.pdf}}
\caption{Dataflow diagram of object detection}
\label{fig:scheme_object_detection}
\end{figure}
In the next step (presented in \fig{fig:scheme_object_decoding}) we decode each object $_{d}o^t_n$ and retrieve the set of its attributes:
\begin{equation}
_{a}o^t_n = < c^t_n,\ m^t_n,\ s^t_n,\ f^t_n,\ x^t_n,\ y^t_n >,
\label{eq:obj_attributes}
\end{equation}
where
$c^t_n$ is the object color,
$m^t_n$ represents material it is made from,
$s^t_n$ indicates size,
$f^t_n$ represents its shape (form)
and $x^t_n$ and $y^t_n$ describe its position in the image.
Simple encodings of each of those attributes (we used one-hot encoding in the case of the first four and bucketing in the case of the positional attributes, as explained in sec. 4.4) and concatenation of the resulting vectors form the encoded object description $_{e}o^t_n$.
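As an illustration, the encoding step can be sketched as follows. This is a minimal sketch and not the actual implementation: variable names and the vocabulary ordering are ours, the attribute values are the standard CLEVR ones ($8 \cdot 2 \cdot 2 \cdot 3 = 96$ combinations, matching the 96 detector classes), and the bucket size of 20 over a $480\times 320$ image follows the setting discussed in sec. 4.4.

```python
import numpy as np

# CLEVR attribute vocabularies (ordering is an arbitrary choice of ours)
COLORS    = ["gray", "red", "blue", "green", "brown", "purple", "cyan", "yellow"]
MATERIALS = ["rubber", "metal"]
SIZES     = ["small", "large"]
SHAPES    = ["cube", "sphere", "cylinder"]
BUCKET, W, H = 20, 480, 320   # bucket size and image dimensions

def one_hot(idx, size):
    v = np.zeros(size, dtype=np.float32)
    v[idx] = 1.0
    return v

def encode_object(color, material, size, shape, x, y):
    """Concatenate one-hot attribute codes with bucketed positions."""
    parts = [
        one_hot(COLORS.index(color), len(COLORS)),
        one_hot(MATERIALS.index(material), len(MATERIALS)),
        one_hot(SIZES.index(size), len(SIZES)),
        one_hot(SHAPES.index(shape), len(SHAPES)),
        one_hot(x // BUCKET, W // BUCKET),   # 24 x-buckets
        one_hot(y // BUCKET, H // BUCKET),   # 16 y-buckets
    ]
    return np.concatenate(parts)

v = encode_object("red", "metal", "small", "cube", x=250, y=90)
print(v.shape)  # (55,) = 8 + 2 + 2 + 3 + 24 + 16
```

With a bucket size of 20 this yields $24 + 16$ position buckets, so each encoded object is a 55-dimensional binary vector with exactly six active entries.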
\begin{figure}[htbp]
\centerline{\includegraphics[width=1.0\columnwidth]{img/scheme_object_decoding.pdf}}
\caption{Dataflow diagram of object encoding}
\label{fig:scheme_object_decoding}
\end{figure}
\subsection{Question encoder}
The diagram for question processing is shown in \fig{fig:scheme_question_encoding}.
We start with the question consisting of several words:
\begin{equation}
\textbf{Q}^t = [q^t_0,\ q^t_1,\ \ldots,\ q^t_{W-1} ],
\end{equation}
where $W$ denotes the number of words constituting a given question.
Next, we use the GloVe word embedding model~\cite{pennington2014glove} to encode question words:
$_e\textbf{Q}^t = [_eq^t_0,\ _eq^t_1,\ \ldots,\ _eq^t_{W-1} ] $.
Finally, we pass the encoded words one by one as inputs to the LSTM~\cite{hochreiter1997long} to produce a list of encoded output:
\begin{equation}
_{s}\textbf{Q}^t = [_{s}q^t_0,\ _{s}q^t_1,\ \ldots,\ _{s}q^t_{W-1} ].
\end{equation}
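A minimal sketch of this pipeline is given below. It is illustrative only: the GloVe table and the LSTM weights are random placeholders (in the real system the embeddings are pretrained and the LSTM is learned), and all dimensions other than the 93-word dictionary size are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, D_EMB, D_H = 93, 50, 128   # 93-word CLEVR dictionary; other dims illustrative

# placeholder embedding table standing in for the pretrained GloVe vectors
glove = rng.normal(size=(VOCAB, D_EMB))

# single LSTM cell with random placeholder weights (gates packed as i, f, o, g)
Wx = rng.normal(size=(D_EMB, 4 * D_H)) * 0.05
Wh = rng.normal(size=(D_H, 4 * D_H)) * 0.05
b  = np.zeros(4 * D_H)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    z = x @ Wx + h @ Wh + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g
    return np.tanh(c) * o, c

question = [12, 5, 40, 7]          # word ids of a tokenized question (made up)
h = c = np.zeros(D_H)
outputs = []                       # the encoded outputs  _s q^t_0, ..., _s q^t_{W-1}
for w in question:
    h, c = lstm_step(glove[w], h, c)
    outputs.append(h)
print(outputs[-1].shape)  # (128,)
```

In the full system only the final output $_{s}q^t_{W-1}$ is consumed by the semantic embedding module.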
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.95\columnwidth]{img/scheme_question_encoding.pdf}}
\caption{Dataflow diagram of question encoding}
\label{fig:scheme_question_encoding}
\end{figure}
\subsection{Semantic embedding}
There are many ways for combining image and question features~\cite{teney2017visual}, such as concatenation, element-wise product or bilinear operation.
In our solution, similar to the relational network module reported in~\cite{santoro2017simple}, we form a set of feature vectors by concatenating pairs of vectors representing two objects with the encoded question, i.e. the last output of the LSTM ($_{s}q^t_{W-1}$).
This approach is somewhat similar to the object-relation-object triplets used in~\cite{dai2017detecting}.
\begin{figure}[htbp]
\centerline{\includegraphics[width=\columnwidth]{img/scheme_semantic_embedding.pdf}}
\caption{Dataflow diagram of semantic embedding}
\label{fig:scheme_semantic_embedding}
\end{figure}
Then, individual triplets are passed through a four-layer MLP (Multi-Layer Perceptron), each layer with 512 units and ReLU non-linearities, which results in a vector of relations:
\begin{equation}
\textbf{R}^t = [r^t_{0,0},\ r^t_{0,1},\ \ldots,\ r^t_{N-1,N-1}].
\end{equation}
Those relations are further aggregated into $_ar^t$.
We investigated several aggregation methods, but simple summation appeared to give the best results -- please refer to sec.~\ref{sec:results_clevr_baselines}.
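The pair formation, the shared MLP, and the sum aggregation can be sketched as follows. This is an illustrative sketch only: the weights are random placeholders (in the system they are learned), and all dimensions except the 512 MLP units are arbitrary choices of ours; the object and question vectors would come from the encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    # plain ReLU MLP applied to a batch of vectors
    for W, b in weights:
        x = np.maximum(x @ W + b, 0.0)
    return x

D_OBJ, D_Q, D_H, N = 55, 128, 512, 5   # illustrative dimensions
obj = rng.normal(size=(N, D_OBJ))      # N encoded objects
q   = rng.normal(size=(D_Q,))          # last LSTM output (question encoding)

# all ordered object pairs, each concatenated with the question encoding
triplets = np.stack([np.concatenate([obj[i], obj[j], q])
                     for i in range(N) for j in range(N)])

# four 512-unit ReLU layers shared across all pairs (random placeholders here)
dims = [2 * D_OBJ + D_Q] + [D_H] * 4
weights = [(rng.normal(size=(a, b)) * 0.05, np.zeros(b))
           for a, b in zip(dims[:-1], dims[1:])]

relations  = mlp(triplets, weights)    # (N*N, 512) relation vectors
aggregated = relations.sum(axis=0)     # order-invariant aggregation
print(aggregated.shape)  # (512,)
```

Summing over the $N^2$ pair activations makes the aggregate invariant to the order in which objects are enumerated, which is why summation outperformed the other aggregation methods in our experiments.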
\subsection{Answer decoder}
Finally, we cast the answering problem as a classification task and pass the aggregated relation $_ar^t$ through three MLP layers consisting of 512, 1024, and 29 units. A 2\% dropout layer was added before the last MLP layer. ReLU non-linearities were added in each layer except the last one, where we applied the softmax function. The result from the softmax was finally decoded using the word dictionary used for initial word embeddings in Question encoder (\fig{fig:scheme_answer_decoding}).
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.8\columnwidth]{img/scheme_answer_decoding.pdf}}
\caption{Dataflow diagram of answer decoding}
\label{fig:scheme_answer_decoding}
\end{figure}
\section{Related work}
\label{sec:related_work}
Research on VQA has resulted in many interesting solutions.
Those, however, could not have been developed without the existence of proper datasets and metrics for evaluation and comparison of the results.
From the historical perspective, the first dataset designed as a benchmark for the VQA task was DAQUAR (DAtaset for QUestion Answering
on Real-world images)~\cite{malinowski2014multi}.
The dataset contains 1.5k RGB-D images of indoor scenes from NYU-Depth v2 dataset~\cite{silberman2012indoor}, with annotated semantic segmentations and question/answer pairs of two types: synthetic questions/answers generated automatically on the basis of NYU annotations and human question/answers collected from 5 annotators.
The limitations of DAQUAR (i.e. restriction of answers to a predefined set and strong bias in human annotations) resulted in releases of several other datasets, most prominently: COCO-QA ~\cite{ren2015image} (based on images from MS COCO dataset~\cite{lin2014microsoft}, where substantial effort was made in order to increase the scale of training data), VQA ~\cite{antol2015vqa,goyal2016making} (consisting of two sets, VQA-real with natural images, and VQA-abstract with cartoon images, and providing 17 additional (incorrect) candidate answers for each question) and Visual Genome~\cite{krishna2017visual} (currently the largest VQA dataset, with 1.7 million question/answer pairs and with images contents additionally described by structured annotations in the form of scene graphs).
The paper which introduced DAQUAR~\cite{malinowski2014multi} also laid the foundations for evaluation of the system accuracy, and in consequence, monitoring the overall progress in the field.
The authors proposed two basic evaluation metrics: first, simply measuring the accuracy with respect to the ground truth answer using string matching; second, using the Wu-Palmer Similarity (WUPS), which evaluates the similarity between common objects by their distance in a taxonomy tree.
Other evaluation methods include the previously mentioned one in a multiple-choice setting~\cite{antol2015vqa} or the "fill in the blanks" approach proposed along with the Visual Madlibs dataset~\cite{yu2015visual}.
As the field started to mature, researchers discovered the importance of biases in the images and questions, which triggered the release of more balanced datasets (such as VQA v2~\cite{goyal2016making} and CLEVR~\cite{johnson2017clevr}). It also led to the analysis of the importance of the common-sense knowledge required to solve a given task, which in turn resulted in efforts towards delivering ground truth containing rich descriptions of the scene, facilitating both training and evaluation (e.g.~Visual Genome~\cite{krishna2017visual}).
However, the most interesting achievements in the VQA field are the novel algorithms and neural architectures.
In~\cite{wu2017visual} the authors proposed to distinguish four categories: joint embedding approaches, attention mechanisms, compositional models
and models using external knowledge bases, which also summarizes the four main research directions in VQA.
The efforts in joint embedding focus on the methods for combining multi-modal representations.
In VQA there are two distinct input modalities (image and text), which makes this problem similar to problems found in other multi-modal domains, for example the projection of user and item embeddings into a common representation space in neural recommender systems~\cite{he2017neural}.
Exemplary approaches developed for the VQA domain include the Multimodal Compact Bilinear pooling (MCB)~\cite{fukui2016multimodal} method, which performed joint embedding of visual and text features, and Relational Networks (RN)~\cite{santoro2017simple}, where the embedded question was concatenated with features extracted from pairs of image regions, enabling the system to reason about the relations between objects present in those regions.
As some questions might require more than one reasoning step, a lot of researchers focused on attention mechanisms, which were initially introduced for neural translation~\cite{bahdanau2014neural}, and later on adapted to the Question-Answering problem~\cite{weston2014memory}.
In~\cite{shih2016look} the authors introduced a simple attention model, where an embedded question was used for the generation of an attention mask (called region-question relevance), which was subsequently used for the calculation of a weighted vector of concatenated question and image features.
Yang et al. \cite{yang2016stacked} introduced Stacked Attention Networks, which are able to infer the answer iteratively.
This architecture includes two attention layers, enabling two reasoning steps (in~\cite{weston2014memory} called "hops"), each driven by a different attention mask over the image features.
The influence of the number of hops was analysed in~\cite{xu2016ask}, where the authors introduced a spatial attention-based model called Spatial Memory Network. Here, the model uses each word embedding to capture a fine-grained alignment between the image and the question.
The results indicated the superiority of the two-hop model over the model with a single hop.
The methods described above focused only on visual attention driven by the question.
In contrast to those ideas, a major innovation was proposed in~\cite{lu2016hierarchical}, where Hierarchical Co-Attention (HieCoAtt) model processed image and question symmetrically, with the image guiding the attention over the question and vice versa.
In a quest to assess the quality of the attention models, in more recent works researchers have tried to validate whether the attention models learned by neural models focus on the same image regions as humans do~\cite{das2016human}.
\begin{figure*}[htbp]
\centerline{\includegraphics[width=\textwidth]{img/scheme_general.pdf}}
\caption{General architecture of the proposed system}
\label{fig:scheme_general}
\end{figure*}
The third direction results from the observation that different tasks might require totally different reasoning processes. For example in the "query attribute" task the system should focus its attention on a single aspect of the scene, whereas in "counting" it should perform a sequence of actions, focusing on the objects of interest one by one.
This observation led to the development of Neural Module Networks~\cite{andreas2015deep}, where semantic parsers (e.g. Stanford Parser) were used for question-driven dynamic assembly of the internal structure of the system, which is different for every type of question.
Most recent advances include program-induction~\cite{johnson2017inferring} consisting of a program generator and program executor, where the former is a sequence-to-sequence model (a pair of LSTMs, i.e. Long Short Term Memory recurrent neural nets~\cite{hochreiter1997long}) responsible for constructing of an internal representation of the reasoning process to be performed, whereas the latter is responsible for execution of the resulting program in order to produce the answer.
The last direction includes research on models using external knowledge bases (i.e. external sources of data) for answering the question.
One such solution is proposed in~\cite{wu2016ask}, where the authors developed a framework that extracts image-related information from the DBpedia knowledge base, enabling it to answer a broad range of questions, often requiring knowledge that could not be inferred directly from the image.
The improvements achieved by results e.g. in the paper proposing the FVQA dataset~\cite{wang2017fvqa} strongly indicate that utilization of supporting-facts extracted from large-scale structured knowledge bases might be the key ingredient for solving the Visual Turing test.
As there are other findings that also suggest the need for operation on high-level, abstract facts extracted from the image (e.g.~\cite{anderson2017bottom}), we decided to investigate that direction further.
\section{Analysis of the results}
\label{sec:analysis_results}
\subsection{Question encoding}
We pretrained the GloVe word embedding model on the Google News corpus~\cite{mikolov2013efficient} (3 billion running words) and then generated embeddings for the dictionary formed from the 93 words present in CLEVR questions and answers.
\begin{figure}[!t]
\centerline{\includegraphics[width=0.8\columnwidth]{img/od_complex_scene_ok.png}}
\caption{Exemplary CLEVR scene with objects detected with our trained object detector}
\label{fig:od_complex_scene_ok}
\end{figure}
\subsection{Image encoding}
\label{sec:image_encoding}
Utilization of the object detector required prior training on images with ground truth bounding boxes and object classes.
Since scene descriptions in the CLEVR dataset do not provide bounding boxes, we wrote a program for their generation.
We calculated the bounding boxes for every object present in the scene relying on the associated metadata from the scene description, i.e. using object position in the image (2d pixel coordinates and rotation), its position in the Cartesian space (3d coordinates) along with its size and shape.
In addition, we have assigned each object a class identifier taking into account the possible combinations of object attributes as defined in~\eqref{eq:obj_attributes}, which resulted in 96 unique classes.
An exemplary scene is presented in \fig{fig:od_complex_scene_ok}.
We compared the reconstructed scene from the OD and the ground truth scene from the scene description to check how accurately the OD network is predicting the scenes.
We were able to get a precision of 0.99 and a recall of 0.99, thus we find the results satisfactory.
\begin{figure}[b!]
\centerline{\includegraphics[width=\columnwidth]{img/count_exist.png}}
\caption{The comparison of accuracy of our solution in Count and Exist tasks with the CLEVR baselines}
\label{fig:count_exist}
\end{figure}
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{img/query_at.png}}
\caption{The comparison of accuracy of our solution with the CLEVR baselines in Query Attribute task}
\label{fig:query_attribute}
\end{figure}
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{img/final_comp_int.png}}
\caption{The comparison of accuracy of our solution with the CLEVR baselines in Compare Integer task}
\label{fig:compare_numbers}
\end{figure}
\begin{figure}[!t]
\centerline{\includegraphics[width=\columnwidth]{img/comp_attribute.png}}
\caption{The comparison of accuracy of our solution with the CLEVR baselines in Compare Attribute task}
\label{fig:compare_attribute}
\end{figure}
\begin{figure*}[t!]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{img/CLEVR_val_011100.png}
\caption{\textbf{Question:} what number of objects are small cyan objects that are in front of the gray object or big things that are in front of the big rubber block ?
\newline\textbf{Predicted Answer:} 3
\newline\textbf{Ground Truth:} 3}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{img/CLEVR_val_011298.png}
\caption{\textbf{Question:} how many balls are either big blue matte objects or small red things ?
\newline\newline
\newline\textbf{Predicted Answer:} 1
\newline\textbf{Ground Truth:} 1}
\label{fig:counting_occluded_example1}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{img/CLEVR_val_012014.png}
\caption{\textbf{Question:} what number of things are either small shiny objects that are behind the purple metallic cylinder or tiny purple metallic balls ?
\newline\textbf{Predicted Answer:} 2
\newline\textbf{Ground Truth:} 2}
\label{fig:counting_occluded_example2}
\end{subfigure}%
\caption{Exemplary result on counting with heavily occluded object}
\label{fig:counting_examples}
\end{figure*}
\subsection{Comparison with CLEVR baselines}
\label{sec:results_clevr_baselines}
We have reproduced the majority of baselines from the original CLEVR paper~\cite{johnson2017clevr}, achieving similar results.
In our implementations we used the TensorFlow framework~\cite{abadi2016tensorflow} and the associated object detection API~\cite{huang2016speed}.
As CLEVR does not provide ground truth for the test set, we are reporting the results achieved on the validation set.
During training we have used a learning rate of $10^{-4}$.
In the last step of Semantic embedding we have tried several aggregation methods, including simple concatenation of activations of Object relation MLPs, using mean values of those activations and their sum.
The experiments have shown that summation of those activations gives the best results (accuracy of 94.6\% in comparison to e.g. 92.9\% in the case of using mean value).
The simple explanation is that summation ensures the invariance with respect to the order of objects~\cite{santoro2017simple}.
\begin{figure}[!b]
\centerline{\includegraphics[width=\columnwidth]{img/final_overall_.png}}
\caption{The comparison of overall accuracy of our solution with CLEVR baselines}
\label{fig:over_all}
\end{figure}
We have analyzed the results using the same task-oriented criteria as the original paper did.
The accuracy on "exist" and "count" tasks is presented in \fig{fig:count_exist}, where
the existence questions ask whether a certain type of object is present, while the count questions ask for the number of objects fulfilling some conditions.
The Query Attribute task contains questions asking about an attribute of a particular object (\fig{fig:query_attribute}).
The Compare Integer questions ask which of two object sets fulfilling given conditions is larger (\fig{fig:compare_numbers}).
Results for Attribute Comparison task, where questions ask whether two objects have the same value for a given attribute, are presented in \fig{fig:compare_attribute}.
Finally, we present the comparison of the overall accuracy of our solution against all the CLEVR baselines in \fig{fig:over_all}.
Our solution has shown major improvement over all the baselines, including the human accuracy collected using Mechanical Turk.
\begin{figure}[!b]
\centerline{\includegraphics[width=\columnwidth]{img/OD_vs_CNN.png}}
\caption{Comparison of our approach with results reported in the paper on Relational Networks}
\label{fig:compare_with_rn}
\end{figure}
\begin{figure*}[!t]
\centerline{\includegraphics[width=\textwidth]{img/scheme_general_scene_description.pdf}}
\caption{The architecture of the system with scene description as input used for comparison with the \textit{CLEVR with state description} baseline from~\cite{santoro2017simple}}
\label{fig:scheme_general_scene_description}
\end{figure*}
\subsection{Comparison with results from RN}
\label{sec:results_rn_baselines}
In \fig{fig:compare_with_rn} we present the comparison of our results (shown in orange) with the results reported in~\cite{santoro2017simple} (shown in blue).
As one might notice, our overall accuracy is slightly lower, by around 1\%.
However, interestingly our solution seems to give more consistent results for different tasks and achieves better results in 2 out of 5 tasks.
In particular, we achieved much better accuracy (by almost 5\%) in the counting task, which is currently considered to be one of the hardest tasks in VQA (e.g.~\cite{trott2017interpretable}).
In order to realize the counting task, the system is supposed to perform several quite different operations, i.e. understanding what type of objects to focus on, finding those objects in the image and finally counting the instances.
Our results indicate that a prior object detection naturally facilitates counting, which we treat as support for our hypothesis that operation on more abstract facts facilitates reasoning in general.
In \fig{fig:counting_examples} we present a few hard cases, where our system was able to provide the correct answer.
In particular, it managed to properly answer questions about the scenes presented in \fig{fig:counting_occluded_example1} and \fig{fig:counting_occluded_example2} despite the heavy occlusions of the objects of interest, which in~\cite{santoro2017simple} was suggested to be the main reason for failures in CLEVR.
We further investigated why our solution does not improve in the other three tasks.
After validating the accuracy of object detection module (as reported in \sec{sec:image_encoding}) we decided to reproduce the experiments from~\cite{santoro2017simple} in which the Image encoder was replaced by the information retrieved straight from the scene description.
The architecture of the resulting system is presented in \fig{fig:scheme_general_scene_description}.
Despite using the same learning hyperparameters for the relational MLPs, we achieved an overall accuracy of 94.5\%, which is almost 2\% worse than the accuracy of 96.4\% reported for that setting in~\cite{santoro2017simple}.
However, as the authors did not report how they encoded the scene, we decided to further investigate that research direction.
As the object attributes form small dictionaries, we assumed that simple one-hot encoding is sufficient.
However, the object position expressed in pixel coordinates in the images ($x^t_n$ and $y^t_n$) might be encoded in several different ways, which in turn might influence the final accuracy.
Therefore, we have performed several experiments with different methods of encoding the object positions, briefly explained below.
\subsubsection{One-hot encoding}
In this approach we simply converted the object $x^t_n$ and $y^t_n$ pixel coordinates into two separate vectors using one-hot encoding and concatenated them into one vector of length 800 (as we have 480 possible values for $x^t_n$ and 320 for $y^t_n$). As can be seen from tab.~\ref{tab:scene_enconding} this method was not able to generalize as well as the others.
\begin{table*}[htbp]
\centering
\begin{center}
\begin{tabular}{|c|c||c|c|c|c|c|c|}\hline
\textbf{Encoding} & \textbf{Bucket} & \textbf{Overall} & \textbf{Count} & \textbf{Exist} & \textbf{Compare} & \textbf{Query} & \textbf{Compare} \\
& \textbf{size} & & & & \textbf{numbers} & \textbf{attribute} & \textbf{attribute} \\
\hline
\hline
One hot& -- & 89.8 & 90.5 & 89.7 & 89.5 & 90.6 & 88.5 \\
\hline
Bucketing& 15 & 93.9 & 93.9 & 93.7 & 93.6 & 94.6 & 93.5 \\
\hline
Bucketing& 20 & \textbf{94.5} & 93.6 & 94.7 & 93.3 & 95.2 & 94.4 \\
\hline
Bucketing& 30 & 93.6 & 93.4 & 93.1 & 93.2 & 94.9 & 93.4 \\
\hline
Enumeration & -- & 93.3 & 93.2 & 93.7 & 92.9 & 93.5 & 93.1 \\
\hline
\hline
Results from~\cite{santoro2017simple} & -- & 96.4 & -- & -- & -- & -- & -- \\
\hline
\end{tabular}
\end{center}
\caption{Results on different position encodings for solution with Image encoder replaced by the information retrieved straight from the scene description}
\label{tab:scene_enconding}
\end{table*}
\subsubsection{Bucketing}
Here we grouped $n$ consecutive pixels into buckets along the x and y axes separately.
We experimented with three different bucket sizes, i.e. 15, 20, and 30, which resulted in 32, 24, and 16 buckets for the x component and 22, 16, and 11 buckets for the y component, respectively.
When the object position coordinate $x^t_n$ falls into the range of a given bucket, we convert the bucket number to a vector using one-hot encoding. We perform the same for $y^t_n$
and concatenate both results.
It can be observed in tab.~\ref{tab:scene_enconding} that the solution generalizes better when the bucket size is chosen to be 20.
\subsubsection{Object enumeration}
In this method, instead of relying on the pixel coordinates, we move to a more abstract representation and encode the position of each object in relation to the positions of the other objects in the scene. For the x and y axes we form two separate lists, representing the order of the objects along that axis.
For instance, for three objects in a scene, the very top object out of the three will have a value of one and the bottom object will have a value of three.
Finally, we encoded the positions represented this way using one-hot encoding and concatenated both encoded coordinates for every object.
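A sketch of this rank-based encoding is given below (our illustrative code, not the actual implementation; ties between coordinates are ignored, since object positions in CLEVR scenes are distinct).

```python
import numpy as np

def enumeration_encoding(xs, ys):
    """Rank-based position encoding: each object's coordinate along an axis
    is replaced by its 1-based rank among all objects in the scene,
    and the two ranks are one-hot encoded and concatenated."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    n = len(xs)
    rank_x = np.argsort(np.argsort(xs)) + 1   # 1 = leftmost object
    rank_y = np.argsort(np.argsort(ys)) + 1   # 1 = topmost object
    eye = np.eye(n, dtype=np.float32)
    return np.concatenate([eye[rank_x - 1], eye[rank_y - 1]], axis=1)

# three objects; the middle one is both leftmost and topmost
enc = enumeration_encoding(xs=[200, 40, 310], ys=[150, 60, 220])
print(enc.shape)   # (3, 6): per object, one-hot x-rank followed by one-hot y-rank
print(enc[1])      # [1. 0. 0. 1. 0. 0.]  -> ranked first on both axes
```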
Results of all the above mentioned experiments are presented in Tab.~\ref{tab:scene_enconding}.
The main finding is that our model was able to generalize well when we grouped twenty pixel values into one bucket.
Unfortunately, these experiments did not improve the accuracy to the level reported in ~\cite{santoro2017simple}.
We consider two analytic functions,
\[f_0(z) := e^{z/(1-z)} = e^{-1}\,e^{1/(1-z)}\]
and
\[f_1(z) := e^x{E_1}(x),
\text{ where }
x := 1/(1-z)
\text{ and }
{E_1}(x) := \int_x^\infty \frac{e^{-t}}{t}{\,d}t.
\]
These functions are regular in the open disk $D = \{z\in{\mathbb C}: |z| < 1\}$.
We write their Maclaurin coefficients
as $a_n := [z^n]f_0(z)$ and $b_n = [z^n]f_1(z)$.
Thus, in the disk $D$, $f_0(z) = \sum_{n\ge 0} a_n z^n$ and
$f_1(z) = \sum_{n\ge 0} b_n z^n$.
The functions $f_0(z)$ and $f_1(z)$ satisfy the same third-order
linear differential equation with polynomial coefficients.
Thus, the sequences $(a_n)$ and $(b_n)$ are D-finite and satisfy
the same recurrence relation (for sufficiently large $n$).
There are several entries in the OEIS related to the rational
sequence $(a_n)_{n\ge 0}$.
The numerators are OEIS \seqnum{A067764},
and the denominators are OEIS \seqnum{A067653}.
The integers $n!a_n$ are given by OEIS \seqnum{A000262} and,
with alternating signs, by OEIS \seqnum{A293125}.
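The connection to OEIS \seqnum{A000262} can be checked directly. The closed-form sum used below is our own derivation (not taken from the text): expanding $e^{z/(1-z)} = \sum_{k\ge 0} (z/(1-z))^k/k!$ and using $[z^n]\, z^k(1-z)^{-k} = \binom{n-1}{k-1}$ gives $a_n = \sum_{k=1}^{n} \binom{n-1}{k-1}/k!$ for $n \ge 1$.

```python
from fractions import Fraction
from math import comb, factorial

def a(n):
    """Exact Maclaurin coefficient a_n of exp(z/(1-z)),
    via a_n = sum_{k=1}^{n} C(n-1, k-1) / k!  (n >= 1), a_0 = 1."""
    if n == 0:
        return Fraction(1)
    return sum(Fraction(comb(n - 1, k - 1), factorial(k)) for k in range(1, n + 1))

# n! a_n should reproduce OEIS A000262 = 1, 1, 3, 13, 73, 501, 4051, ...
print([int(factorial(n) * a(n)) for n in range(7)])  # [1, 1, 3, 13, 73, 501, 4051]
```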
The numbers $(b_n)_{n\ge 0}$
are unlikely to be rational.%
\footnote{In particular,
$b_0 = G$, where $G := e{E_1}(1) \approx 0.596$ is the
Euler-Gompertz constant, whose decimal digits are given
by OEIS \seqnum{A073003}.
We have $b_n = a_nG - a_n'$, where
$a_n' \in {\mathbb Q}$ and $a_n'$ satisfies essentially the same recurrence
as $a_n$, but with different initial conditions.
Clearly $b_n \in {\mathbb Q}$ if and only if $G \in {\mathbb Q}$.
All that is known is that at least one of $\gamma$ and $G$ is
irrational~\cite{Aptekarev,Rivoal}.}
The numbers $a_n$ and $b_n$ may be expressed in terms of confluent
hypergeometric functions.
If $M(a,b,z) = {_1\hspace*{-0.1em}F_1}(a;b;z)
$
and $U(a,b,z)$ are standard solutions of Kummer's differential
equation, then Lemmas~\ref{lem:anM}--\ref{lem:bnU} show that
$a_n = e^{-1}M(n+1,2,1)$ and $b_n = -\Gamma(n)U(n,0,1)$.
We are interested in the asymptotics of $a_n$ and $b_n$ for large $n$.
Perron~\cite{Perron14}, following Fej\'er~\cite{Fejer08},
showed that
\[a_n \sim \frac{e^{2\sqrt{n}}}{2n^{3/4}{\sqrt{\pi e}}}\,\raisebox{2pt}{$.$}
\]
Salvy\footnote{Bruno Salvy, email to A.~J.~Guttmann et al., May 28, 2018.}
conjectured that $b_n$ is of order
$e^{-2\sqrt{n}}n^{-3/4}$. We have verified this conjecture.
In fact,
\[b_n \sim -\frac{\sqrt{\pi e}}{n^{3/4}e^{2\sqrt{n}}}\,\raisebox{2pt}{$.$}\]
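Perron's estimate is easy to probe numerically: the $a_n$ can be generated exactly from the three-term recurrence~\eqref{eq:arec1} of \S\ref{sec:Maclaurin}, and the ratio of $a_n$ to the right-hand side of Perron's formula tends to $1$. A small Python sketch (illustrative only, using exact rational arithmetic for the $a_n$):

```python
from fractions import Fraction
from math import exp, pi, sqrt, e

def a_seq(N):
    """a_0..a_N, Maclaurin coefficients of exp(z/(1-z)), computed exactly
    from the recurrence n*a_n = (2n-1)*a_{n-1} - (n-2)*a_{n-2}."""
    a = [Fraction(1), Fraction(1)]
    for n in range(2, N + 1):
        a.append(((2 * n - 1) * a[n - 1] - (n - 2) * a[n - 2]) / n)
    return a

n = 400
a_n = float(a_seq(n)[n])
perron = exp(2 * sqrt(n)) / (2 * n ** 0.75 * sqrt(pi * e))
ratio = a_n / perron       # tends to 1 as n -> infinity
```

At $n = 400$ the ratio is already within about half a percent of $1$, consistent with the $O(n^{-1/2})$ correction term $c_1 = -5/48$ computed later in the paper.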
A function of the form
$f(n) = \exp(\alpha n^{\theta + o(1)})$ for $\alpha \ne 0$,
$\theta \in (0,1)$, is called a \emph{stretched exponential}
in the physics/statistics literature
(the term \emph{sub-exponential} is used in complexity theory).
Thus, $a_n$ and $b_n$ are stretched exponentials,
with $\alpha = \pm 2$ and $\theta = 1/2$.
The motivation for this paper stems from some enumeration problems in
algebraic combinatorics and mathematical physics. Many such problems
involve ordinary generating functions, that is, power series
\hbox{$A(x)=\sum_{n \ge 0} A_n x^n$}, in which $A_n \sim c \mu^n n^g$.
In such cases, assuming that $g$ is not a negative integer, one can write
\[A(x) \sim c\,\Gamma(1+g)(1-\mu x)^{-(1+g)}\]
as $x \to 1/\mu$.
However, in recent years there have been a number of examples, such as
$Av(1324)$ pattern-avoiding permutations \cite{CGZ18}, interacting
partially-directed self-avoiding walks \cite{OPR93}, and Dyck paths
enumerated by maximum height \cite{G15}, in which the corresponding
generating function has coefficients behaving as
$B_n \sim c \mu^n \exp(\alpha n^\theta) n^g,$ with $\alpha < 0$.
The question then arises as
to the asymptotic form of the generating function. The coefficients $b_n$
considered in this paper are of the form just described, with
$\theta =1/2$,
and the underlying generating function is found. Corresponding
results for other values of $\theta$ remain to be discussed.
Theorem~\ref{thm:ckdirect} gives complete asymptotic expansions
of $a_n$ and $b_n$.
These may be written as
\[a_n = \frac{F(n^{1/2})}{2n^{3/4}\sqrt{\pi e}}\;\text{ and }\;
b_n = -\frac{\sqrt{\pi e}}{n^{3/4}}\,F(-n^{1/2}),
\]
where
$F(x) \sim e^{2x}\sum_{k \ge 0}c_k x^{-k}$,
for certain constants $c_k\in{\mathbb Q}$, $c_0 = 1$.
The $c_k$ may be computed using Theorem~\ref{thm:ckdirect}
or Lemma~\ref{lemma:ck}.
The \emph{Hadamard product}
$f_0{\,\odot} f_1$
of $f_0$ and $f_1$ is the analytic function
defined for $z\in D$ by
\[(f_0{\,\odot} f_1)(z) = \sum_{n\ge 0} a_n b_n z^n.\]
The asymptotic expansions of $a_n$ and $b_n$ imply an asymptotic
expansion for $\rho_n := a_n b_n$
of the form
\[
\rho_n \sim - \frac{1}{2n^{3/2}} \sum_{k \ge 0} d_k n^{-k},
\]
where $d_k \in {\mathbb Q}$, $d_0 = 1$ (see Corollary~\ref{cor:product}).
A \emph{dyadic rational} is a rational number of the form $p/q$, where
$q$ is a power of two. Let $Q_2 := \{j/2^k: j, k \in {\mathbb Z}\}$
denote the set of dyadic rationals.
We conjecture, from numerical evidence for $k \le 1000$, that $d_k \in Q_2$.
More precisely, defining $r_k := 2^{6k}d_k$,
Conjecture~\ref{conj:rkinteger} is that $r_k \in{\mathbb Z}$.
Remark~\ref{remark:rkinteger} gives numerical evidence for a slightly
stronger conjecture.
In Theorem~\ref{thm:factorialRkk} we prove the weaker
(but still nontrivial) result that $k!r_k \in {\mathbb Z}$.
In Remark~\ref{remark:Bessel} we mention an analogous (easily proved) result
for modified Bessel functions, where the product $I_\nu(x)K_\nu(x)$ for
fixed $\nu\in{\mathbb Z}$ has an asymptotic expansion whose coefficients are in $Q_2$.
The connection with confluent hypergeometric (Kummer) functions is discussed
in~\S\ref{sec:GG}, and asymptotic expansions for $a_n$ and $b_n$ are
considered in~\S\ref{sec:asymp_a_b}. In \S\ref{sec:Maclaurin} we mention
various recurrence relations, continued fractions, and closed-form
expressions related to $a_n$ and $b_n$.
Finally, in \S\S\ref{sec:Hadamard}--\ref{sec:dn_rec},
we consider Hadamard products
and discuss the conjecture mentioned above.
Some comments on notation: $f(x) \sim \sum_{k\ge 0}f_k x^{-k}$
means that the sum on the
right is an asymptotic series for $f(x)$ in the sense of Poincar\'e. Thus,
for any fixed $m > 0$,
$f(x) = \sum_{k=0}^{m-1} f_k x^{-k} + O(x^{-m})$ as $x \to \infty$.
The letters $j, k, m, n$ always denote integers (except for $n$ in
Remark~\ref{rem:abgen}).
The notation $(x)_n$ for $n \ge 0$
denotes the \emph{ascending factorial}
or \emph{Pochhammer symbol},
defined by $(x)_n := x(x+1)\cdots(x+n-1)$.
\section{Connection with hypergeometric functions} \label{sec:GG}
The numbers $a_n$ and $b_n$ may be expressed in terms of confluent
hypergeometric functions (Kummer functions),
for which we refer to~\cite[\S13.2]{DLMF}.
If $M(a,b,z)$
and $U(a,b,z)$ are standard solutions $w(a,b,z)$ of Kummer's differential
equation
$zw'' + (b-z)w' - aw = 0$,
then Lemmas~\ref{lem:anM}--\ref{lem:bnU} below
express $a_n$ and $b_n$ in terms of
$M(n+1,2,1)$ and $U(n,0,1)$.
Kummer~\cite{Kummer} considered
\begin{equation} \label{eq:KummerM}
M(a,b,z) = {_1\hspace*{-0.1em}F_1}(a;b;z) =
\sum_{k\ge 0} \frac{(a)_k\,z^k}{(b)_k\,k!}\,\raisebox{2pt}{$,$}
\end{equation}
which is undefined if $b$ is zero or a negative integer.
In the case $a \ne b=0$, we can use the solution
\[
zM(a+1,2,z) = \lim_{b\to 0}\, \frac{b}{a}M(a,b,z).
\]
Tricomi~\cite{Tricomi} introduced the function $U(a,b,z)$
as a second (minimal) solution of Kummer's differential equation.
For our purposes it is convenient to use the integral
representation~\cite[(13.4.4)]{DLMF}
(valid for $\Re(a) > 0$, $\Re(z) > 0$)
\begin{equation} \label{eq:NIST13.4.4}
U(a,b,z)= \frac{1}{\Gamma(a)}\int_0^\infty e^{-zt}
\, t^{a-1} \, (1+t)^{b-a-1} {\,d}t.
\end{equation}
We remark that the
functions $M$ and $U$ satisfy
recurrence relations, known as
``connection formulas''. For example,
we mention~\cite[(13.3.1) and (13.3.7)]{DLMF},
both (essentially) due to Gauss (see
Erd\'elyi~\cite[\S6.4 and \S6.6]{Erdelyi}):
\begin{small}
\begin{align}
\label{eq:Kummer13.3.1}
(b-a)M(a-1,b,z)+(2a-b+z)M(a,b,z)-aM(a+1,b,z) =&\; 0,\\
\label{eq:Kummer13.3.7}
U(a-1,b,z)+(b-2a-z)U(a,b,z)+a(a-b+1)U(a+1,b,z) =&\; 0.
\end{align}
\end{small}
\vspace*{-10pt}
Lemmas~\ref{lem:anM}--\ref{lem:bnU} express $a_n$ and $b_n$ in
terms of the Kummer functions $M$ and $U$, respectively.
Lemma~\ref{lem:anM} was stated, without proof, by Covo~\cite{Covo}.
\begin{lemma} \label{lem:anM}
If $n\in{\mathbb Z}$, $n \ge 1$, and $a_n$ is as above, then
\begin{equation} \label{eq:anM}
a_n = e^{-1}M(n+1,2,1).
\end{equation}
\end{lemma}
\begin{proof}
If we put $a=n+1$, $b=2$, and $z=1$ in the connection formula
\eqref{eq:Kummer13.3.1},
we see that $\widetilde{a_n} := e^{-1}M(n+1,2,1)$ satisfies
the same recurrence \eqref{eq:arec1} as~$a_n$.
Thus, to show that $a_n = \widetilde{a_n}$ for all
$n \ge 1$, it is sufficient to show that $a_n = \widetilde{a_n}$
for $n \in \{1,2\}$.
Now
\[
\widetilde{a_1} = e^{-1}M(2,2,1) = e^{-1}\sum_{k\ge 0}\frac{(2)_k}{(2)_k\,k!}
= 1 = a_1,
\]
and, similarly,
\[
\widetilde{a_2} = e^{-1}M(3,2,1) = e^{-1}\sum_{k\ge 0}\frac{(3)_k}{(2)_k\,k!}
= e^{-1}\sum_{k \ge 0}\frac{k+2}{2\,k!} = 3/2 = a_2,
\]
so the result follows.
\end{proof}
\begin{lemma} \label{lem:bnU}
If $n\in{\mathbb Z}$, $n \ge 1$, and $b_n$ is as above, then
\begin{equation} \label{eq:bnU}
b_n = -\Gamma(n)\, U(n,0,1).
\end{equation}
\end{lemma}
\begin{proof}
We start with~\cite[(6.7.1)]{DLMF}:
\[
I(a,b):=
\int_0^\infty \frac{ e^{-at}}{t+b}{\,d}t = e^{ab}E_1(ab),\,\,\,a,b > 0.\]
Note that, by definition,
$b_n = [z^n]I(1,1/(1-z))$.
Setting $a=1$, $b=1/(1-z),$ the term $1/(t+b)$ inside the integral
can be rearranged
as follows:
\begin{equation*}
\left ( t+\frac{1}{1-z} \right)^{-1} = \frac{1-z}{1+t-tz}
=\frac{1}{1+t}-\frac{1}{t(1+t)}\left(\frac{1}{1-{zt}/(1+t)} -1 \right),
\end{equation*}
and making the substitution $s=t/(1+t)$ gives
\[I(1,1/(1-z))= \int_0^\infty
\frac{e^{-t}}{1+t}{\,d}t -
\int_0^1 e^{-s/(1-s)}\, \left(\frac{z}{1-zs}\right){\,d}s
= \sum_{n \ge 0} b_n \, z^n.\]
Thus, $b_0=eE_1(1)$ and, for $n > 0$,
\begin{equation} \label{eq:Larrybn1}
b_n=-\int_0^1 e^{-s/(1-s)}\, s^{n-1}{\,d}s.
\end{equation}
Writing $e^{-s/(1-s)} = e^{1-1/(1-s)}$ gives, for $n > 0$,
\begin{equation} \label{eq:Larrybn2}
b_n=-e \int_0^1 e^{-1/(1-s)}\, {s^{n-1}}{\,d}s.
\end{equation}
Substitute $t=s/(1-s)$ in~\eqref{eq:NIST13.4.4}, giving
\begin{equation} \label{eq:GUabz}
\Gamma(a)U(a,b,z)= e^z\int_0^1 e^{-z/(1-s)} \, s^{a-1} \,
(1-s)^{-b}{\,d}s.
\end{equation}
Comparison of~\eqref{eq:Larrybn2} and~\eqref{eq:GUabz}
now gives $b_n = -\Gamma(n)\, U(n,0,1)$.
\end{proof}
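As an independent numerical sanity check on~\eqref{eq:Larrybn1} (a sketch, not part of the proof): for $n=1$ the substitution $t=s/(1-s)$ gives $b_1 = -\int_0^\infty e^{-t}(1+t)^{-2}{\,d}t$, which should equal $b_1 = G-1 \approx -0.40365$. A composite-Simpson evaluation in Python, truncating the (negligible) tail at $t=40$:

```python
from math import exp

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# b_1 = -int_0^1 e^{-s/(1-s)} ds = -int_0^infty e^{-t} (1+t)^{-2} dt
b1 = -simpson(lambda t: exp(-t) / (1 + t) ** 2, 0.0, 40.0, 20000)
```

The computed value agrees with $G-1$ to roughly nine decimal places.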
\begin{remark}
We could prove Lemma~\ref{lem:bnU} in the same manner as
Lemma~\ref{lem:anM},
using the connection formula~\eqref{eq:Kummer13.3.7}
instead of~\eqref{eq:Kummer13.3.1},
and the recurrence~\eqref{eq:brec1} instead of~\eqref{eq:arec1},
but in order to verify the initial conditions we would have to resort to
some explicit representation for $U$, such as the integral
representation~\eqref{eq:NIST13.4.4}, so the proof would be no simpler.
\end{remark}
\begin{remark} \label{rem:abgen}
We can generalize our definitions of $a_n$ and $b_n$ to permit $n\in{\mathbb C}$,
using Lemmas~\ref{lem:anM}--\ref{lem:bnU}.
Such generalizations do not seem particularly useful,
so in what follows we continue to assume that $n\in{\mathbb Z}$.
\end{remark}
\section[Asymptotic expansions of a(n) and b(n)]
{Asymptotic expansions of $a_n$ and $b_n$}
\label{sec:asymp_a_b}
Theorem~\ref{thm:ckdirect} gives the complete asymptotic expansions of $a_n$
and $b_n$ in ascending powers of $n^{-1/2}$. Wright~\cite{Wright0} proved
the existence of an asymptotic expansion of the form~\eqref{eq:aseries2} for
$a_n$, but did not state an explicit formula or algorithm for computing the
constants $c_m$ occurring in the expansion.
For a more ``algorithmic'' approach, see Wyman~\cite{Wyman59}.
\begin{theorem} \label{thm:ckdirect}
For positive integer $n$, if $a_n$ and $b_n$ are as above, then
\begin{equation} \label{eq:aseries2}
a_n \sim \frac{e^{2\sqrt{n}}}{2n^{3/4}\sqrt{\pi e}}
\sum_{m\ge 0} c_m n^{-m/2}
\end{equation}
and
\begin{equation} \label{eq:bseries2}
b_n \sim -\,\frac{\sqrt{\pi e}}{n^{3/4}e^{2\sqrt{n}}}
\sum_{m\ge 0} (-1)^m c_m n^{-m/2},
\end{equation}
where
\begin{equation} \label{eq:ckdirect}
c_m = (-1)^m\sum_{j=0}^m \, [h^{m-j}]\exp(\mu(h))\;
\frac{(m-2j+3/2)_{2j}}{4^j j!}
\end{equation}
and
\begin{equation} \label{eq:mu2}
\mu(h) = h^{-1} - (e^h-1)^{-1} - {\textstyle\frac{1}{2}}\,\raisebox{2pt}{$.$}
\end{equation}
\end{theorem}
\begin{remark}
The function $\mu(h)$ defined by~\eqref{eq:mu2} could also be
defined using Bernoulli numbers, since
\begin{equation} \label{eq:mu-Bernoulli}
\mu(h) = -\sum_{k=1}^\infty \frac{B_{2k}}{(2k)!} h^{2k-1}
= -\frac{h}{12} + \frac{h^3}{720} - O(h^5).
\end{equation}
The function $\exp(\mu(h))$ occurring in~\eqref{eq:ckdirect}
has the Maclaurin expansion
\begin{equation} \label{eq:mu}
\exp(\mu(h)) = 1 - \frac{h}{12} + \frac{h^2}{288}
+ \frac{67h^3}{51840} + O(h^4).
\end{equation}
The numerators and denominators of the coefficients $[h^n]\exp(\mu(h))$
have been added to the OEIS as \seqnum{A321937} and \seqnum{A321938},
respectively.
\end{remark}
\begin{proof}[Proof of Thm.~$\ref{thm:ckdirect}$]
We first prove~\eqref{eq:bseries2}. {From} Lemma~\ref{lem:bnU},
$b_n = -\Gamma(n)\, U(n,0,1)$.
Temme~\cite[Sec.~3]{Temme13} gives a general
asymptotic result for $U(a,b,z^2)$ as $a \to \infty$.
We state Temme's result for the case
$(a,b,z) = (n,0,1)$,
which is what we need.
Let $c_k' := [h^k]\exp(\mu(h))$. (Temme uses $c_k$, but this conflicts with
our notation.)
{From} Temme~\cite[(3.8)--(3.10)]{Temme13}, we have
\begin{equation} \label{eq:T3.8}
U(n,0,1) \sim \frac{\sqrt{e}}{\Gamma(n)}\sum_{k\ge 0} c_k'\Phi_k(n),
\end{equation}
where
\[\Phi_k(n) = 2n^{-(k+1)/2}K_{k+1}(2n^{1/2}),\]
and $K_\nu$ denotes the usual modified Bessel function.
{From}~\cite[(10.40.2)]{DLMF},
$K_\nu(z)$ has an asymptotic expansion
\begin{equation} \label{eq:Kasymp}
K_\nu(z) \sim e^{-z}\sqrt{\frac{\pi}{2z}}
\sum_{j \ge 0} \frac{(\nu-j+1/2)_{2j}}{j!\,(2z)^j}
\,\raisebox{2pt}{$.$}
\end{equation}
Setting $\nu = k$ and $z=2n^{1/2}$ in~\eqref{eq:Kasymp}, we obtain
\[
\Phi_{k-1}(n) = 2n^{-k/2}K_k(2n^{1/2})
\sim \frac{\sqrt{\pi}e^{-2\sqrt{n}}}{n^{1/4}}
\sum_{j\ge 0}\frac{(k-j+1/2)_{2j}}{j!\,4^j\,n^{(j+k)/2}}
\,\raisebox{2pt}{$.$}
\]
Substituting this expression into~\eqref{eq:T3.8},
and grouping like powers of $n$, we obtain
\[
b_n
= -\Gamma(n)\,U(n,0,1)
\sim -\frac{\sqrt{\pi e}}{n^{3/4} e^{2\sqrt{n}}}
\sum_{m \ge 0} \sum_{j=0}^m
\frac{c_{m-j}'\, (m-2j+3/2)_{2j}}{j!\,4^j\,n^{m/2}}
\,\raisebox{2pt}{$.$}
\]
Now, comparison with~\eqref{eq:bseries2} shows that
\[
(-1)^m c_m = \sum_{j=0}^m \frac{c_{m-j}'\, (m-2j+3/2)_{2j}}
{j!\,4^j}\,\raisebox{2pt}{$,$}
\]
which completes the proof of~\eqref{eq:bseries2}.
The proof of~\eqref{eq:aseries2} is similar. We use Lemma~\ref{lem:anM}
instead of Lemma~\ref{lem:bnU}, and Temme's asymptotic
result~\cite[(3.29)]{Temme13} for
$M(a,b,z^2)$ as $a \to \infty$
instead of~\eqref{eq:T3.8}; the modified Bessel function
$I_\nu$ replaces $K_\nu$.
{From}~\cite[(10.40.1)]{DLMF},
$I_\nu(z)$ has an asymptotic expansion
\begin{equation} \label{eq:Iasymp}
I_\nu(z) \sim \frac{e^{z}}{\sqrt{2\pi z}}
\sum_{j \ge 0} (-1)^j\, \frac{(\nu-j+1/2)_{2j}}{j!\,(2z)^j}
\,\raisebox{2pt}{$,$}
\end{equation}
which replaces~\eqref{eq:Kasymp}.
\end{proof}
Theorem~\ref{thm:ckdirect} gives an expression for $c_m$ which (indirectly)
involves Bernoulli numbers, in view of~\eqref{eq:mu-Bernoulli}.
Lemma~\ref{lemma:ck} gives a different expression
for $c_m$ that is recursive, as the expression for $c_m$ depends on the
values of $c_j$ for $j < m$, but has the advantage of avoiding reference to
Bernoulli numbers. The idea of the proof is similar to that used
in the ``method of Frobenius''~\cite{Frobenius}.
We omit the details, which may be found
in~\cite[pp.~10--11]{BGG-arXiv}.
\begin{lemma} \label{lemma:ck}
We have $c_0 = 1$ and, for all $m \ge 1$,
\begin{equation} \label{eq:ckrecursion}
m c_m = [h^{m+3}]\,
\sum_{j=0}^{m-1} c_j h^j\!
\sum_{s \in \{\pm 1\}}
(1+sh^2)^{\frac{1-2j}{4}}
\exp\left(\frac{2}{h}\left((1+sh^2)^{\frac{1}{2}}-1\right)\right).
\end{equation}
\end{lemma}
\begin{remark}
Computation using~\eqref{eq:ckdirect} and, as a
check, \eqref{eq:ckrecursion}, gives
\[
(c_k)_{k\ge 0} = \left(1, -\frac{5}{48},
-\frac{479}{4608}, -\frac{15313}{3317760},
\frac{710401}{127401984},
-\frac{3532731539}{214035333120},
\ldots\right).\]
The numerators and denominators have been added to the OEIS
as \seqnum{A321939} and \seqnum{A321940}, respectively.
With the exception of $c_0$ and $c_4$, the $c_k$ all appear to
be negative. This has been verified numerically for $k \le 1000$.
\end{remark}
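The listed values can be reproduced mechanically from~\eqref{eq:ckdirect}: compute $\exp(\mu(h))$ from the Bernoulli-number series~\eqref{eq:mu-Bernoulli} in exact rational arithmetic, then evaluate the double sum of Theorem~\ref{thm:ckdirect}. A Python sketch using the standard \texttt{fractions} module (illustrative, with no attempt at efficiency):

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(N):
    """Bernoulli numbers B_0..B_N (convention B_1 = -1/2), from the
    recurrence sum_{j=0}^{m} binom(m+1, j) B_j = 0 for m >= 1."""
    B = [Fraction(1)]
    for m in range(1, N + 1):
        B.append(-sum(comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def exp_mu(T):
    """Maclaurin coefficients of exp(mu(h)) up to h^T, where
    mu(h) = -sum_{k>=1} B_{2k}/(2k)! h^{2k-1}  (eq. (mu-Bernoulli))."""
    B = bernoulli(T + 1)
    mu = [Fraction(0)] * (T + 1)
    for k in range(1, (T + 1) // 2 + 1):
        if 2 * k - 1 <= T:
            mu[2 * k - 1] = -B[2 * k] / factorial(2 * k)
    res = [Fraction(0)] * (T + 1)
    res[0] = Fraction(1)
    term = res[:]                        # holds mu^i / i!, truncated at h^T
    for i in range(1, T + 1):
        new = [Fraction(0)] * (T + 1)
        for p in range(T + 1):
            if term[p]:
                for q in range(1, T + 1 - p):
                    new[p + q] += term[p] * mu[q]
        term = [x / i for x in new]
        res = [r + t for r, t in zip(res, term)]
    return res

def poch(x, n):
    """Ascending factorial (x)_n."""
    r = Fraction(1)
    for i in range(n):
        r *= x + i
    return r

def c_coeffs(M):
    """c_0..c_M from eq. (ckdirect) of Theorem (thm:ckdirect)."""
    cp = exp_mu(M)                       # cp[k] = [h^k] exp(mu(h))
    return [(-1) ** m * sum(cp[m - j]
                            * poch(Fraction(2 * (m - 2 * j) + 3, 2), 2 * j)
                            / (Fraction(4) ** j * factorial(j))
                            for j in range(m + 1))
            for m in range(M + 1)]

c = c_coeffs(6)
```

The output agrees with the rational values displayed above and with the coefficients of $\exp(\mu(h))$ in~\eqref{eq:mu}.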
\section[Maclaurin coefficients a(n) and b(n)]
{The Maclaurin coefficients $a_n$ and $b_n$}
\label{sec:Maclaurin}
The function $f_0(z)$ is the exponential generating function counting several
combinatorial objects, such as the number of ``sets of lists'',
i.e., the number of partitions of $\{1,2,\ldots,n\}$ into ordered subsets,
see Wallner~\cite[\S5.3]{Wallner}.
Observe that $f_0(z)$ satisfies the differential equation
\begin{equation}
(1-z)^2 f_0'(z) - f_0(z) = 0, \label{eq:f0de}
\end{equation}
and from this it is easy to see that the $a_n$ satisfy a three-term recurrence
\begin{equation} \label{eq:arec1}
na_n - (2n-1)a_{n-1} + (n-2)a_{n-2} = 0 \;\text{ for }\; n \ge 2.
\end{equation}
The initial conditions are $a_0 = a_1 = 1$.
Thus
\[(a_n)_{n\ge 0} = (1, 1, 3/2, 13/6, 73/24, 167/40, \ldots).\]
The recurrence~\eqref{eq:arec1} holds for $n \ge 0$ provided that we
define $a_n = 0$ for $n < 0$.
A closed-form expression, valid for $n \ge 1$ (but not for $n=0$), is
\[
a_n = \sum_{k=1}^n \frac{1}{k!}\,\binom{n-1}{k-1}.
\]
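The closed form and the recurrence~\eqref{eq:arec1} can be cross-checked in exact rational arithmetic; a minimal Python sketch:

```python
from fractions import Fraction
from math import comb, factorial

def a_rec(N):
    """a_0..a_N from n*a_n = (2n-1)*a_{n-1} - (n-2)*a_{n-2}, a_0 = a_1 = 1."""
    a = [Fraction(1), Fraction(1)]
    for n in range(2, N + 1):
        a.append(((2 * n - 1) * a[n - 1] - (n - 2) * a[n - 2]) / n)
    return a

def a_closed(n):
    """a_n = sum_{k=1}^n binom(n-1, k-1)/k!, valid for n >= 1."""
    return sum(Fraction(comb(n - 1, k - 1), factorial(k)) for k in range(1, n + 1))

a = a_rec(30)
ok = all(a[n] == a_closed(n) for n in range(1, 31))
```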
The constants $a_n$ may be expressed in terms of the generalized Laguerre
polynomials $L_n^{(\alpha)}(x)$
which, from~\cite[(18.12.13)]{DLMF}, have a generating function
\[\sum_{n\ge 0} z^n L_n^{(\alpha)}(x) = (1-z)^{-(\alpha+1)}e^{-xz/(1-z)}.\]
With $\alpha = x = -1$ we obtain
$\sum_{n\ge 0} z^n L_n^{(-1)}(-1) = e^{z/(1-z)}$, so
$a_n = L_n^{(-1)}(-1)$.
Using the chain rule and the definition of $f_1(z)$ in \S\ref{sec:intro},
we see that $f_1(z)$
satisfies the differential equation
\begin{equation} \label{eq:f1de}
(1-z)^2 f_1'(z) - f_1(z) = z-1,
\end{equation}
which differs from~\eqref{eq:f0de} only in the right-hand side $z-1$.
Differentiating twice more with respect to $z$,
we see that $f_0(z)$ and $f_1(z)$
both satisfy the same third-order differential equation
\[(1-z)^2f''' + (4z-5)f'' + 2f' = 0.\]
{From}~\eqref{eq:f1de}, the $b_n$ satisfy a recurrence
\begin{equation} \label{eq:brec1}
nb_n - (2n-1)b_{n-1} + (n-2)b_{n-2} =
\begin{cases}
1, & \text{if $n=2$;}\\
0, & \text{if $n \ge 3$.}
\end{cases}
\end{equation}
This is essentially
(i.e., for $n \ge 3$)
the same recurrence as~\eqref{eq:arec1}, but the initial conditions
$b_0 = G$, $b_1 = G-1$
are different.
Here $G := e{E_1}(1) \approx 0.596$ is the Euler-Gompertz
constant~\cite[\S2.5]{Lagarias}.
We remark that computation of the $b_n$ using the
recurrence~\eqref{eq:brec1} in the forward direction is
numerically unstable. A stable method of computation
is to use an adaptation of Miller's algorithm,
originally used to compute Bessel functions.
See Gautschi~\cite[\S3]{Gautschi} and Temme~\cite[\S4]{Temme75}.
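A minimal sketch of such a backward (Miller-type) scheme in Python, normalized with the known value $b_1 = G-1$, so a numerical value of $G$ is assumed as input. This is only an illustration, not the tuned algorithms of Gautschi or Temme:

```python
from math import exp, sqrt, pi, e

G = 0.5963473623231940743            # Euler-Gompertz constant (assumed known)

def b_miller(nmax, start=120):
    """b_1..b_nmax by backward recurrence from trial values at n = start,
    normalized with b_1 = G - 1.  Since b_n is the minimal solution of the
    recurrence, the backward direction is numerically stable."""
    t = [0.0] * (start + 1)
    t[start - 1] = 1.0
    for n in range(start, 2, -1):    # homogeneous part, valid for n >= 3
        t[n - 2] = ((2 * n - 1) * t[n - 1] - n * t[n]) / (n - 2)
    scale = (G - 1) / t[1]
    return [None] + [scale * t[n] for n in range(1, nmax + 1)]

b = b_miller(30)
resid = 2 * b[2] - 3 * b[1]          # inhomogeneous n = 2 case: should equal 1
```

The computed values satisfy the inhomogeneous case of~\eqref{eq:brec1} and match the leading term of~\eqref{eq:bseries2} to within a few percent already at $n=30$.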
As noted in \S\ref{sec:intro}, the $b_n$ may be expressed as
$a_nG-a_n'$, where $a_n$ is as above, and $a_n'$ satisfies essentially
the same recurrence with different initial conditions. In fact,
\[
na_n' - (2n-1)a_{n-1}' + (n-2)a_{n-2}' =
\begin{cases}
-1, & \text{if $n = 2$;}\\
\phantom{-}0, &\text{if $n \ge 3$.}
\end{cases}
\]
The initial conditions are $a_0' = 0$, $a_1' = 1$.
Thus
\[(a_n')_{n\ge 0} = (0, 1, 1, 4/3, 11/6, 5/2, 121/36, \ldots).\]
{From} \eqref{eq:bseries2}, $b_n \to 0$ as $n \to \infty$,
so the sequence $(a_n'/a_n)_{n\ge 1}$ is a convergent sequence of rational
approximations to~$G$. The sequence of approximants is
$(1, 2/3, 8/13, 44/73, 100/167, \ldots)$.
Bala~\cite{Bala} gives the continued fraction
\[1-G = 1/(3 - 2/(5 - 6/(7 - \cdots -n(n+1)/(2n+3) - \cdots))),\]
with convergents $1/3, 5/13, 29/73, 201/501$, etc.
The corresponding convergents to $G$ are
$2/3, 8/13, 44/73, 100/167$, etc. We see that the $n$-th convergent
is just $a_{n+1}'/a_{n+1}$.
Theorem~\ref{thm:ckdirect} implies that
\[
G - a_n'/a_n = b_n/a_n \sim -2\pi e^{1-4\sqrt{n}} \text{ as } n \to \infty.
\]
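This error estimate can be illustrated numerically (a sketch; the decimal value of $G$ is assumed):

```python
from fractions import Fraction
from math import exp, sqrt, pi

G = 0.5963473623231940743            # Euler-Gompertz constant (assumed known)

def approximants(N):
    """a_n and a_n' for n <= N, from their common three-term recurrence;
    the a_n' recurrence has inhomogeneous term -1 at n = 2."""
    a = [Fraction(1), Fraction(1)]
    ap = [Fraction(0), Fraction(1)]
    for n in range(2, N + 1):
        a.append(((2 * n - 1) * a[n - 1] - (n - 2) * a[n - 2]) / n)
        rhs = Fraction(-1) if n == 2 else Fraction(0)
        ap.append(((2 * n - 1) * ap[n - 1] - (n - 2) * ap[n - 2] + rhs) / n)
    return a, ap

n = 25
a, ap = approximants(n)
delta = G - float(ap[n] / a[n])      # = b_n / a_n
predicted = -2 * pi * exp(1 - 4 * sqrt(n))
```

At $n=25$ the actual error $G - a_n'/a_n \approx -3.7\times 10^{-8}$ agrees with the leading-order prediction to within a few percent.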
We have contributed the sequence $(n!a_n')_{n\ge 1}$ to the OEIS as
\seqnum{A321942}.
\section[The Hadamard product of f0 and f1]
{The Hadamard product of $f_0$ and $f_1$} \label{sec:Hadamard}
Define $\rho_n := a_n b_n$. Thus $\sum_{n=0}^\infty \rho_n z^n$
is the Hadamard product $(f_0{\,\odot} f_1)(z)$.
{From} Lemmas~\ref{lem:anM}--\ref{lem:bnU}, we have
\[
\rho_n = -e^{-1}\Gamma(n)M(n+1,2,1)U(n,0,1).
\]
Using Theorem~\ref{thm:ckdirect}, we can obtain a complete asymptotic expansion
for $\rho_n$ in decreasing powers of~$n$. This is given in
Corollary~\ref{cor:product}.
\begin{corollary} \label{cor:product}
We have
\[
\rho_n \sim -\,\frac{1}{2n^{3/2}}\sum_{k\ge 0} d_k n^{-k},
\]
where
\[d_k = \sum_{j=0}^{2k}(-1)^j c_j c_{2k-j},\]
and $c_0,\ldots,c_{2k}$ are as in Theorem~$\ref{thm:ckdirect}$.
\end{corollary}
A computation shows that
\[(d_k)_{k\ge 0} = (1, -7/32, 43/2048, -915/65536, \ldots).\]
We observe that the $d_k$ appear to be dyadic rationals.
More precisely, it appears that $2^{6k}d_k \in {\mathbb Z}$.
Define a scaled sequence $(r_k)_{k\ge 0}$ by
$r_k := 2^{6k}d_k$. Computation gives
\[
(r_k)_{k\ge 0} = (1, -14, 86, -3660 , -1042202, -247948260,
-108448540420, \ldots).
\]
This leads naturally to the following conjecture.
\pagebreak[3]
\begin{conjecture} \label{conj:rkinteger}
For all $k \ge 0$, $r_k\in{\mathbb Z}$.
\end{conjecture}
The sequence of numerators of $r_k$
has been added to the OEIS as \seqnum{A321941}.
If Conjecture \ref{conj:rkinteger} holds, then the denominators
are all~$1$, i.e., the denominators are given by \seqnum{A000012}.
\begin{remark} \label{remark:rkinteger}
Conjecture~\ref{conj:rkinteger} has been verified for all $k \le 1000$.
We also showed numerically, for $3 \le k \le 1000$,
that $r_k < 0$ and
$r_k \equiv \binom{2k}{k}$
(mod $32$).
\end{remark}
\begin{remark}
A problem that is superficially
similar to our conjecture was solved by Tulyakov~\cite{Tulyakov}.
However, we do not see how to adapt his method
to prove our conjecture.
\end{remark}
\begin{remark} \label{remark:Bessel}
Corollary~\ref{cor:product} is reminiscent of the result
\[
I_0(x) K_0(x) \sim \frac{1}{2x}\sum_{k\ge 0} e_{k,0}\, x^{-2k}
\]
in the theory of Bessel functions~\cite[(1.2)]{rpb256}.
The coefficients $e_{k,0}$ are given by
\[
e_{k,0} = \frac{(2k)!^3}{2^{6k}k!^4}\,\raisebox{2pt}{$,$}
\]
so $2^{4k}e_{k,0}\in{\mathbb Z}$.
The modified Bessel functions $I_0(x)$ and
$K_0(x)$ are solutions of the same ordinary differential
equation
$xy'' + y' -xy = 0$,
but $I_0(x)$ increases with $x$
while $K_0(x)$ decreases.
This is analogous to the behaviour of $a_n$, which increases
as $n \to \infty$, and $|b_n|$, which decreases as $n \to \infty$.
More generally, from \cite[(10.40.6)]{DLMF},
we have
\[I_\nu(x)K_\nu(x) \sim \frac{1}{2x}\sum_{k\ge 0} e_{k,\nu}x^{-2k},\]
where
\[
e_{k,\nu} = (-1)^k 2^{-2k}(\nu-k+1/2)_{2k}\binom{2k}{k},
\]
and $2^{4k}e_{k,\nu}\in{\mathbb Z}$ for $\nu\in{\mathbb Z}$.
\end{remark}
\section[Other expressions for d(n)]
{Other expressions for $d_n$} \label{sec:dn_rec}
Since $(a_n)$
and $(b_n)$ are D-finite, it follows that $(\rho_n)$ is D-finite.%
\footnote{
See Flajolet and Sedgewick~\cite[Appendix B.4]{FS}, and
Stanley~\cite[Theorem~2.10]{Stanley},
for relevant background on D-finite sequences.}
In fact, $\rho_n$
satisfies the $4$-term recurrence
\begin{align}
\nonumber
n^2(n-1)(2n-3)\rho_n =& \;\;(n-1)(2n-1)(3n^2-5n+1)\rho_{n-1}\\
\nonumber
&-(n-2)(2n-3)(3n^2-5n+1)\rho_{n-2}\\
&+ (n-2)(n-3)^2(2n-1)\rho_{n-3} \label{eq:rho-rec1}
\end{align}
for $n\ge 3$, with initial conditions
$\rho_0 = G$, $\rho_1 = G-1$, $\rho_2 = (9G-6)/4$.
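Since $b_n = a_nG - a_n'$, each $\rho_n = a_n^2G - a_na_n'$ has the exact form $pG+q$ with $p,q\in{\mathbb Q}$, so the recurrence~\eqref{eq:rho-rec1} can be verified componentwise in exact rational arithmetic. A brief Python sketch:

```python
from fractions import Fraction

def rho_pairs(N):
    """rho_n = a_n b_n written exactly as u_n*G + v_n with u_n, v_n rational,
    using b_n = a_n*G - a_n'."""
    a = [Fraction(1), Fraction(1)]
    ap = [Fraction(0), Fraction(1)]
    for n in range(2, N + 1):
        a.append(((2 * n - 1) * a[n - 1] - (n - 2) * a[n - 2]) / n)
        rhs = Fraction(-1) if n == 2 else Fraction(0)
        ap.append(((2 * n - 1) * ap[n - 1] - (n - 2) * ap[n - 2] + rhs) / n)
    return [(a[n] * a[n], -a[n] * ap[n]) for n in range(N + 1)]

N = 40
rho = rho_pairs(N)

def lhs_minus_rhs(n, comp):
    """One component (G-part or rational part) of the 4-term recurrence."""
    r = lambda m: rho[m][comp]
    return (n ** 2 * (n - 1) * (2 * n - 3) * r(n)
            - (n - 1) * (2 * n - 1) * (3 * n ** 2 - 5 * n + 1) * r(n - 1)
            + (n - 2) * (2 * n - 3) * (3 * n ** 2 - 5 * n + 1) * r(n - 2)
            - (n - 2) * (n - 3) ** 2 * (2 * n - 1) * r(n - 3))

ok = all(lhs_minus_rhs(n, c) == 0 for n in range(3, N + 1) for c in (0, 1))
```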
The recurrence~\eqref{eq:rho-rec1}
can be simplified by defining $\sigma_n := n\rho_n$. Then $\sigma_n$
satisfies the slightly simpler recurrence
\begin{align}
\nonumber
n(n-1)&(2n-3)\sigma_n = (2n-1)(3n^2-5n+1)\sigma_{n-1}\\
\label{eq:sigma-rec1}
&-(2n-3)(3n^2-5n+1)\sigma_{n-2} + (n-2)(n-3)(2n-1)\sigma_{n-3}
\end{align}
for $n\ge 3$, with initial conditions
$\sigma_0 = 0$, $\sigma_1 = G-1$, $\sigma_2 = 9G/2-3$.
Also, Corollary~\ref{cor:product} gives an asymptotic series for $\sigma_n$:
\begin{equation} \label{eq:sigma-asymp}
\sigma_n \sim -\,\frac{1}{2n^{1/2}}\sum_{k\ge 0} d_k n^{-k}.
\end{equation}
Using~\eqref{eq:sigma-rec1}, we can give a recursive algorithm for
computing the sequence $(d_n)$ (and hence $(r_n)$) directly, without
computing the sequence $(c_n)$.
\begin{lemma} \label{lemma:dn_direct2}
We have $d_0 = 1$ and, for all $k \ge 1$,
\begin{align}
\nonumber
8kd_k = -\,[h^{k+2}]\, &\Bigg(\sum_{j=0}^{k-1}
d_j h^j \bigg(
B(h)(1-h)^{-(j+1/2)} \\
\label{eq:dn_direct1a}
&+ C(h)(1-2h)^{-(j+1/2)} + D(h)(1-3h)^{-(j+1/2)}\bigg)\Bigg),
\end{align}
where
\begin{align*}
B(h) &= -6 + 13h - 7h^2 + h^3 \;\;\;\, = -(2-h)(3-5h+h^2),\\
C(h) &= +6 - 19h + 17h^2 - 3h^3 = (2-3h)(3-5h+h^2), \;\text{ and }\\
D(h) &= -2 + 11h - 17h^2 + 6h^3 = -(1-2h)(1-3h)(2-h).
\end{align*}
\end{lemma}
\begin{proof}
Define $h := n^{-1}$, so $h \to 0$ as $n \to \infty$.
{From} Corollary~\ref{cor:product}, there exists an asymptotic series
of the form
\[-2\sigma_n \sim \sum_{j \ge 0} d_j n^{-j-1/2}\]
as $n \to \infty$. Moreover, $d_0 = 1$.
Define $A(h) := (1-h)(2-3h)$ in addition to $B(h)$, $C(h)$ and $D(h)$.
Using the recurrence~\eqref{eq:sigma-rec1}
and the elementary identity
$1/(n-m) = h/(1-mh)$ for $m \in \{0,1,2,3\}$, we have
\begin{align*}
\sum_{j\ge 0} d_j \Bigg(&A(h)h^{j+1/2}
+ B(h)\left(\frac{h}{1-h}\right)^{j+1/2}\\
+ &\;C(h)\left(\frac{h}{1-2h}\right)^{j+1/2}
+ \;D(h)\left(\frac{h}{1-3h}\right)^{j+1/2}\Bigg) \sim 0.
\end{align*}
Now, dividing both sides by $h^{1/2}$, we obtain
\begin{align}
\nonumber
\sum_{j\ge 0}
d_j h^j \bigg(&A(h) + B(h)(1-h)^{-(j+1/2)} \\
\label{eq:dn_direct2}
&+ C(h)(1-2h)^{-(j+1/2)} + D(h)(1-3h)^{-(j+1/2)}\bigg) \sim 0.
\end{align}
An easy computation shows that
\begin{align*}
A(h) + B(h) + C(h) + D(h) &= -4h^2 + O(h^3),\\
B(h) + 2C(h) + 3D(h) &= 8h + O(h^2),\;\text{ and}\\
B(h) + 2^2C(h) + 3^2D(h) &= O(h).
\end{align*}
Thus, for all $j \ge 1$, the terms involving $d_j$ in~\eqref{eq:dn_direct2}
are $8jh^{j+2} + O(h^{j+3})$.
(The ``$8j$'' arises from $-4 + 8(j+1/2) = 8j$.)
This shows that the choice of $d_k$
in~\eqref{eq:dn_direct1a} is necessary and sufficient to give an asymptotic
series of the required form.
Finally, we note that
$[h^{k+2-j}]A(h) = 0$, since $j \le k-1$ and $\deg(A(h)) = 2$.
Thus, a term involving $A(h)$ has been omitted from~\eqref{eq:dn_direct1a}.
\end{proof}
Using Lemma~\ref{lemma:dn_direct2}, we computed
the sequences $(d_n)$ and $(r_n)$ for $n \le 1000$, and verified
the values previously computed (more slowly) via Corollary~\ref{cor:product}.
Since the power series occurring in~\eqref{eq:dn_direct1a} have a simple form,
we can extract the coefficients of the required powers of $h$
to obtain a recurrence for the $d_k$, as in Corollary~\ref{cor:explicit_dn2}.
This gives a third way to compute the sequence $(d_n)$.
\begin{corollary} \label{cor:explicit_dn2}
We have $d_0 = 1$ and, for all $k \ge 1$,
\[
8k\, d_k = \sum_{j=0}^{k-1} \alpha_{j,k}\,d_j.
\]
Here
\vspace*{-10pt}
\begin{align}
\nonumber
\alpha_{j,k} =&
\;\;(-1 + 3\cdot 2^{m-1} - 2\cdot 3^m) (\tau)_{m-1}/(m-1)!\\
\nonumber
&+ (7 - 17\cdot 2^m + 17\cdot 3^m) (\tau)_{m}/m!\\
\nonumber
&+ (-13 + 38\cdot 2^m - 33\cdot 3^m) (\tau)_{m+1}/(m+1)!\\
\label{eq:alphajk2}
&+ 6(1 - 4\cdot 2^m + 3\cdot 3^m) (\tau)_{m+2}/(m+2)!,
\end{align}
where $m:= k-j$ and $\tau := j+1/2$.
\end{corollary}
\begin{proof}[Proof (sketch)]
To prove Corollary~\ref{cor:explicit_dn2}, we apply the binomial
theorem to the power series in~\eqref{eq:dn_direct1a}, multiply by the
polynomials $B(h)$, $C(h)$, and $D(h)$, and extract
the coefficient of $h^{k+2-j}$.
\end{proof}
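A direct implementation of this recurrence in exact rational arithmetic (a Python sketch) reproduces the values of $r_k = 2^{6k}d_k$ listed in \S\ref{sec:Hadamard}:

```python
from fractions import Fraction
from math import factorial

def poch(x, n):
    """Ascending factorial (x)_n."""
    r = Fraction(1)
    for i in range(n):
        r *= x + i
    return r

def alpha(j, k):
    """alpha_{j,k} as in eq. (alphajk2), with m = k - j and tau = j + 1/2."""
    m, tau = k - j, Fraction(2 * j + 1, 2)
    return ((-1 + 3 * 2 ** (m - 1) - 2 * 3 ** m) * poch(tau, m - 1) / factorial(m - 1)
            + (7 - 17 * 2 ** m + 17 * 3 ** m) * poch(tau, m) / factorial(m)
            + (-13 + 38 * 2 ** m - 33 * 3 ** m) * poch(tau, m + 1) / factorial(m + 1)
            + 6 * (1 - 4 * 2 ** m + 3 * 3 ** m) * poch(tau, m + 2) / factorial(m + 2))

def d_seq(K):
    """d_0..d_K via 8k d_k = sum_{j<k} alpha_{j,k} d_j."""
    d = [Fraction(1)]
    for k in range(1, K + 1):
        d.append(sum(alpha(j, k) * d[j] for j in range(k)) / (8 * k))
    return d

d = d_seq(6)
r = [2 ** (6 * k) * d[k] for k in range(7)]
```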
The following corollary is an easy deduction from
Corollary~\ref{cor:explicit_dn2}, and gives an explicit recurrence for
$r_k = 2^{6k}d_k$.
\begin{corollary} \label{cor:explicit_rk}
We have $r_0 = 1$ and, for all $k \ge 1$,
\[k\,r_k = \sum_{j=0}^{k-1} \beta_{j,k}\,r_j,\;
\text{ where }\; \beta_{j,k} = 8^{2k-2j-1}\,\alpha_{j,k}\,.\]
\end{corollary}
Although we have not proved Conjecture~\ref{conj:rkinteger},
the following result goes part of the way.
\begin{theorem} \label{thm:factorialRkk}
For all $k \ge 0$, we have $k!\,r_k \in {\mathbb Z}$.
\end{theorem}
\begin{proof}
Let $R_k := k!r_k$. We show that $R_k\in{\mathbb Z}$.
{From} Corollary~\ref{cor:explicit_rk},
$R_0 = 1$ and, for $k \ge 1$, $R_k$ satisfies the recurrence
\begin{equation} \label{eq:Rkrec}
R_k = \sum_{j=0}^{k-1} \beta_{j,k}\,R_j\,\frac{(k-1)!}{j!}\,\raisebox{2pt}{$.$}
\end{equation}
The ratio of factorials in~\eqref{eq:Rkrec} is an integer, since
$j \le k-1$. Thus, in order to prove the result by induction on $k$,
it is sufficient to show that $\beta_{j,k} \in{\mathbb Z}$.
Now, elementary number theory shows that
$4^\ell(j+1/2)_\ell/\ell! \in {\mathbb Z}$
for all $j,\ell \ge 0$.
Thus, the expressions of the form $(\tau)_{m+\delta}/(m+\delta)!$
in~\eqref{eq:alphajk2} are in ${\mathbb Z}$ provided that $m+\delta \ge 0$.
This is true as $m = k-j \ge 1$ and $\delta \ge -1$.
To show
that $\beta_{j,k}\in{\mathbb Z}$, it is sufficient to have $8^{2m-1} \ge 4^{m+2}$,
which holds for all $m \ge 2$. In the case $m=1$, it is easy to see
that all the terms in~\eqref{eq:alphajk2} are in~${\mathbb Z}/4$,
so $\beta_{k-1,k} = 8\alpha_{k-1,k} \in{\mathbb Z}$.
Thus, $\beta_{j,k}\in{\mathbb Z}$ for $0 \le j < k$,
and the result follows by induction on~$k$.
\end{proof}
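The integrality fact $4^\ell(j+1/2)_\ell/\ell!\in{\mathbb Z}$ invoked in the proof is easy to confirm experimentally; for instance, in Python:

```python
from fractions import Fraction
from math import factorial

def poch(x, n):
    """Ascending factorial (x)_n."""
    r = Fraction(1)
    for i in range(n):
        r *= x + i
    return r

# 4^l * (j+1/2)_l / l! should be an integer for all j, l >= 0
ok = all((Fraction(4) ** l * poch(Fraction(2 * j + 1, 2), l)
          / factorial(l)).denominator == 1
         for j in range(40) for l in range(40))
```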
\begin{remark}
The proof actually shows that $\beta_{j,k}\in 2{\mathbb Z}$, which implies
that $R_k \in 2{\mathbb Z}$ for all $k > 0$.
\end{remark}
\section{Acknowledgments}
We thank Bruno Salvy for communicating his conjecture to one of us.
An anonymous referee made helpful suggestions regarding the exposition.
RPB was supported in part by ARC grant DP140101417.
AJG wishes to acknowledge support of the ARC Centre of Excellence for
Mathematical and Statistical Frontiers (ACEMS).
\pagebreak[3]
\section{Introduction}
The study of spin-orbit (SO) effects in semiconductor nanostructures has
been the object of many experimental and
theoretical investigations in the last few years, see e.g.
Refs. \onlinecite{Can99,Ric99,And99,Vos00,Mal00,Rac97,
Vos01,Fol01,Hal01,Ale01,Val02,Val202,Val302,Kon05} and Refs.
therein. It links the spin and the charge dynamics, hence
opening the possibility of spin control by means of electric
fields.\cite{Dat90,Kan98}
It has been recently shown\cite{Ton04} that the SO interaction affects
the optical properties of GaAs quantum wells by
inducing a coupling between charge density and spin density
excitations in the long wavelength limit. We extend
here this study to the influence on the Larmor resonance
of the combined effect of both
Dresselhaus\cite{Dre55} and
Bychkov-Rashba\cite{Ras84,Pik95} SO interactions,
and use our results to discuss some features
of the spin modes disclosed by inelastic light scattering\cite{Dav97,Kan00}
and electron-spin resonance experiments.\cite{Ste82,Dob88}
Our approach is based on the solution of the equation of motion up to
second order in the SO intensity
parameters.\cite{Ton04} This method has also been used to derive the Kohn
theorem,\cite{Pi04} and goes as follows. We write the
Schr\"odinger equation for a $N$-particle system as $H |n\rangle = E_n |n \rangle$,
with $|0\rangle$ and $E_0$ being the ground state (gs)
and gs energy, respectively. If one can find an operator $O^+_n$ such that
$|n\rangle= O^+_n |0\rangle$, $O_n|0\rangle=0$, it is possible to cast the
Schr\"odinger equation into an operator equation, the equation of motion,
$[H,O^+_n] = \omega_n O^+_n$, where $ \omega_n= E_n -E_0$ is the excitation
energy of the state $|n\rangle$. The solutions of this equation are used
to find the excitation energies of the system as well as
its excited states in terms of their creation
operators.
This work is organized as follows. In Sec. II we apply the equation of motion
method to the Larmor mode in the presence
of a SO coupling. The results are used in Sec. III to discuss the spin
modes in quantum wells, and are
compared with the experimental results of Refs.~\onlinecite{Dav97,Dob88}.
\section{The equation of motion approach and the Larmor mode}
The operators describing the SO Rashba and Dresselhaus interactions are
respectively given by
\begin{equation}
\label{eq1}
H_R = \frac{\lambda_R}{\hbar}\sum_{j=1}^{N} \left[\, P_y\sigma_x-P_x\sigma_y\,\right]_j
\end{equation}
and
\begin{equation}
\label{eq2}
H_D = \frac{\lambda_D}{\hbar} \sum_{j=1}^{N} \left[\, P_x\sigma_x-P_y\sigma_y\,\right]_j~,
\end{equation}
where the $\sigma$'s are the Pauli matrices and
${\bf P}=-i\hbar\nabla+\frac{e}{c}{\bf A}$ represents the
canonical momentum in terms of the vector potential ${\bf A}$, which in the
following we write in the Landau
gauge, ${\bf A} = B\,(0,x,0)$, with ${\bf B} = \nabla \times {\bf A} = B\,\hat{\bf z}$.
In the effective mass, dielectric constant approximation, the quantum well
Hamiltonian $H$ can be quite generally written as $H=H_{KS} + V_{res}$,
where $H_{KS}$ is the Kohn-Sham (KS) one-body Hamiltonian consisting of the
kinetic, Rashba, Dresselhaus,
exchange-correlation KS potential and Zeeman terms, and $V_{res}$ is
the residual Coulomb interaction. The KS Hamiltonian reads
\begin{eqnarray}
H_{KS} = \sum_{j=1}^N\left[\frac{P^+P^- + P^-P^+}{4m}+
\frac{\lambda_R}{2 i\hbar}(P^+\sigma_- -
P^-\sigma_+) +\frac{\lambda_D}{2\hbar}(P^+\sigma_+ + P^-\sigma_-)\right.
\nonumber\\
\left. +W_{xc}(n,\xi,{\cal
V})\sigma_z + \frac{1}{2}g^* \mu_B B \sigma_z\right]_j\; ,~~~~~~~~~~~~~
\label{eq3}
\end{eqnarray}
where $m=m^* m_e$ is
the effective electron mass in units of the bare electron mass $m_e$,
$P^{\pm}=P_x \pm i P_y$, and
$\sigma_{\pm}=\sigma_x \pm i\sigma_y$. Although other approaches may also be
considered, we have treated the exchange-correlation potential
$W_{xc}(n,\xi,{\cal V})$
in the local-spin current density approximation
(LSCDA).\cite{Fer94,Lip03} It depends on the density $n$,
magnetization $\xi=n^{\uparrow}- n^{\downarrow}$, and local
vorticity ${\cal V}$, and is evaluated from the exchange-correlation energy
per electron ${\cal E}_{xc}$ as $W_{xc}= \partial(n {\cal E}_{xc})/
\partial \xi$. The last term in Eq. (\ref{eq3}) is the
Zeeman energy, where
$\mu_B = \hbar e/(2 m_e c)$ is the Bohr magneton, and $g^*$ is the effective
gyromagnetic factor. For bulk GaAs,
$g^*=-0.44$, $m^*=0.067$, and the dielectric constant is $\epsilon=12.4$.
To simplify the expressions, in the following we
shall use effective atomic units
$\hbar=e^2/\epsilon=m=1$.
In the following, the residual Coulomb interaction will be treated in the adiabatic
time-dependent LSCDA (TDLSCDA).\cite{Lip03} We shall see that,
in the absence of SO coupling, not only the exact Hamiltonian,
but also the one in which the
residual interaction is treated in the TDLSCDA
fulfill the equation
\begin{equation}
\label{eq4}
[H,S_{\mp}]=\pm\omega_L S_{\mp}~,
\end{equation}
where
$S_{\mp}=1/2\sum_j \sigma^j_{\mp}$ and $\omega_L=|g^* \mu_B B|$. Thus, if $|0\rangle$
is the gs of the system, the
states $S_{\mp}|0\rangle$ are eigenstates of $H$ with excitation energies $\pm\omega_L$.
Note that a negative $g^*$
implies that the spin-up states are lower in energy than the spin-down ones,
and that the actual physical solution of
Eq. (\ref{eq4}) is that corresponding to the $S_-$ operator. This is the physical
content of the Larmor theorem. Note also that in the absence of spin-orbit coupling,
$[H, \sum^N_{j=1} P^+_j]=\omega_c \sum^N_{j=1} P^+_j$, where
$\omega_c=eB/(mc)$ is the cyclotron frequency. This is the Kohn theorem,
which also holds in the adiabatic time-dependent
local spin density approximation (TDLSDA) and in the TDLSCDA,
and can be generalized to the case of quantum wires and dots parabolically confined.
Since
\begin{equation}
\label{eq5}
[H,S_-]=
\omega_L S_- +4\sum_{j=1}^N[\lambda_D P^+ \sigma_z+i\lambda_R P^-\sigma_z]_j~,
\end{equation}
the spin-orbit terms in Eq. (\ref{eq3}) mix the transverse spin excitations
induced by the operator $S_-$ with the
spin-density excitations induced by $\sum_{j=1}^N P^{\pm}_j\sigma_z^j$,
and thus Larmor's theorem is not fulfilled. In the following, we use
the equation of motion approach to find the eigenvalues and
eigenstates of the KS Hamiltonian $H_{KS}$ Eq. (\ref{eq3}) which arise from the
SO mixing, and will evaluate the
spin wave dispersion relation $\omega(q)$ by taking into account the effect of
the residual interaction. This is done by first solving the equation of motion
\begin{equation}
\label{eq6}
[H_{KS},O^+]=\omega O^+~,
\end{equation}
and
then calculating the transverse response
$\chi_t(q,\omega)$ per unit surface ${\cal A}$ in the TDLSCDA:
\begin{equation}
\label{eq7}
\chi_t(q,\omega)={\chi_t^{KS}(q,\omega)\over1-2F_{xc}\chi_t^{KS}(q,\omega)} \; ,
\end{equation}
where $F_{xc}=W_{xc}/\xi$, and $\chi_t^{KS}(q,\omega)$ is the KS transverse
response per unit surface.\cite{Lip03} The poles of
$\chi_t(q,\omega)$ yield $\omega(q)$. The transverse spin response without
inclusion of spin-orbit coupling has been
studied in the past in the RPA\cite{Kal84} and time-dependent Hartree-Fock\cite{Mac85}
approximations.
Up to second order in $\lambda_{R,D}$, Eq. (\ref{eq6}) is straightforwardly solved by
the operator
$O^+=\sum_{j=1}^N [a \sigma_- + b P^+\sigma_z + c P^-\sigma_z +d\sigma_+]_j$.
To do so, one has to use the commutators
$[\sigma_+,\sigma_-]=4\sigma_z$ and $[P^-,P^+]=2\omega_c$. This yields a homogeneous
system of linear equations for the
coefficients $a$, $b$, $c$ and $d$ from which the energies $\omega$ are obtained by
solving the secular equation valid
up to $\lambda^2_{R,D}$ order
\begin{equation}
\label{eq8}
(\omega^2-\tilde\omega_L^2)(\omega^2-\omega_c^2)
-4\,\omega_c(\lambda_D^2+\lambda_R^2)\omega^2 -
4\,\omega_c^2\,\tilde\omega_L(\lambda_D^2-\lambda_R^2)=0 ,
\end{equation}
where $\tilde\omega_L \equiv |g^* \mu_B B+2W_{xc}|$.
This quartic equation can be exactly solved, yielding the excitation energies
(only positive solutions are physical). For
each of them, the homogeneous linear system, supplemented with the
normalization condition
$\langle0\vert[(O^+)^{\dagger},O^+]\vert0\rangle=1$, determines the coefficients
$a$, $b$, $c$ and $d$. We have found it
more convenient to discuss the solutions of the above equation in the
limits
of small and large magnetic fields, which are more transparent and easier to
compare with available experimental data. In the small $B$ limit
$(\tilde{\omega}_L, \omega_c \ll \lambda_R, \lambda_D)$
we obtain a unique solution
\begin{equation}
\label{eq9}
\omega= 2 \,\sqrt{\omega_c(\lambda_R^2+\lambda_D^2)} \; .
\end{equation}
In the large $B$ limit we obtain
\begin{equation}
\label{eq11}
\omega(S_{\mp})=\pm\left(\tilde\omega_L+2\lambda_R^2{\omega_c\over\omega_c+\tilde\omega_L}
-2\lambda_D^2{\omega_c\over\omega_c-\tilde\omega_L}\right) \; ,
\end{equation}
which are mainly excited by the operators
$S_-$ and $S_+$, and
\begin{equation}
\label{eq12}
\omega(P^{\pm}\sigma_z)=\pm\left(\omega_c-2\lambda_R^2{\omega_c\over\omega_c+\tilde\omega_L}
+2\lambda_D^2{\omega_c\over\omega_c-\tilde\omega_L}\right) \; ,
\end{equation}
which are mainly excited by the operators
$\sum_{j=1}^N [P^+\sigma_z]_j$ and $\sum_{j=1}^N [P^-\sigma_z]_j$.
By ``mainly'' we mean that the coefficient
of the corresponding operator entering the definition of $O^+$ is
$O(\lambda^0_{R,D})$, whereas
all the others are $O(\lambda^2_{R,D})$. Note that if $\lambda_{R,D}=0$,
the two physical modes in the preceding equations are uncoupled.
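Since the secular equation is biquadratic in $\omega$, its limiting formulas can be checked numerically. The sketch below uses illustrative parameter values (not taken from any experiment); in the small-$B$ check the subdominant root is of higher order in $\lambda_{R,D}$ and is discarded:

```python
import numpy as np

def secular_roots(wL, wc, lR, lD):
    # (w^2 - wL^2)(w^2 - wc^2) - 4 wc (lD^2 + lR^2) w^2
    #   - 4 wc^2 wL (lD^2 - lR^2) = 0  is quadratic in u = w^2
    a, b = wL**2, wc**2
    c = 4 * wc * (lD**2 + lR**2)
    d = 4 * wc**2 * wL * (lD**2 - lR**2)
    u = np.roots([1.0, -(a + b + c), a * b - d]).real
    return np.sort(np.sqrt(u[u > 0]))

# small-B limit: dominant root approaches 2*sqrt(wc*(lR^2 + lD^2)), Eq. (9)
lR, lD = 0.3, 0.2
w = secular_roots(wL=5e-7, wc=1e-6, lR=lR, lD=lD)[-1]
assert abs(w - 2 * np.sqrt(1e-6 * (lR**2 + lD**2))) < 1e-6

# large-B limit: the spin branch approaches Eq. (11) with the upper sign
lR, lD, wL, wc = 0.01, 0.005, 0.3, 1.0
w_spin = secular_roots(wL, wc, lR, lD)[0]
pred = wL + 2 * lR**2 * wc / (wc + wL) - 2 * lD**2 * wc / (wc - wL)
assert abs(w_spin - pred) < 1e-6
```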
Equation (\ref{eq9}) shows that,
at $B\sim0$, to order $\lambda^2_{R,D}$ there is no
spin splitting due to the SO coupling.
Indeed, when $B\to0$ not only $\omega_L$ and $\omega_c$ vary linearly
with $B$, but also $W_{xc}$ does, implying that the solution of
Eq. (\ref{eq8}) goes to zero in this limit.
Earlier electron-spin resonance measurements on GaAs quantum
wells\cite{Ste82}
seemed to indicate that a finite spin splitting was present in the
$B=0$ limit.
However, subsequent experiments carried out by the same group\cite{Dob88}
covering a broader $B$ range point out that the spin splitting of a
Landau level is an exact quadratic function of $B$, and that its
extrapolation to $B=0$ leads to a vanishing spin splitting.
Our result, which is not changed
by the effect of the residual interaction, is thus in full
agreement with the experimental findings of Dobers et al.\cite{Dob88}
{\bf We have checked that, at low $B$ fields, the dominant component of
$O^+$ corresponding to the energy Eq. (\ref{eq9})
is the spin-flip operator $\sum_{j=1}^N[\sigma_-]_j$.
In this limit, the Dresselhaus and Rashba
SO interactions act ``in phase'', whereas at high $B$ they
partially compensate each other [compare the energy given in
Eq. (\ref{eq9}) with those in Eqs. (\ref{eq11}) and (\ref{eq12})].
This arises from the structure of the secular Eq. (\ref{eq8}), where in the
low $B$ limit, the second term dominates over
the third one, whereas in the high $B$ limit both terms
are equally important, yielding the solutions shown in Eqs.
(\ref{eq11}) and (\ref{eq12}).
We have not been able to find a deeper explanation
for this different behavior at low and high magnetic fields.
It is worth mentioning that the independent particle Hamiltonian can be
exactly solved when only the Rashba or Dresselhaus SO terms are
included.\cite{Sch03} The merit of Eqs. (\ref{eq9}-\ref{eq11}) is that
they are exact to the relevant $\lambda^2_{R,D}$ order when both SO
couplings are simultaneously taken into account.
}
The excitation energy $\omega(S_-)$ is the independent particle (KS) value
for the spin splitting and violates Larmor's theorem even if the SO
coupling is neglected. On the contrary, when the residual
interaction is properly taken into account, the theorem Eq. (\ref{eq4}) is
recovered.
In the following, we will concentrate on the large $B$ limit.
It is then possible to derive the spin wave dispersion relation,
including spin-orbit effects, by solving the equation
\begin{equation}
\label{eq13}
1-2F_{xc}\chi_t^{KS}(q,\omega)=0
\end{equation}
that gives the poles of the transverse
response function Eq. (\ref{eq7}). To do so, we write the transverse spin response
as\cite{Lip03}
\begin{equation}
\label{eq14}
{\cal A}\,\chi_t^{KS}(q,\omega)= {\vert\langle\omega(S_{-})\vert{1\over2}\sum_{j=1}^N
e^{i\bf{q}\cdot\bf{r}}\sigma^j_{-}\vert0\rangle\vert^2\over\omega-\omega(S_-)}
-{\vert\langle\omega(S_{+})\vert{1\over2}\sum_{j=1}^N
e^{i\bf{q}\cdot\bf{r}}\sigma^j_{+}\vert0\rangle\vert^2\over\omega+\omega(S_+)} \; ,
\end{equation}
where
$\vert\omega(S_{\mp})\rangle \equiv O^+[\omega(S_{\mp})]\vert0\rangle$, and the
corresponding energies are given by Eqs. (\ref{eq11}) and (\ref{eq12}).
The calculation of the matrix elements in Eq. (\ref{eq14}) must be done with care since
$|0\rangle$ and $|\omega(S_{\pm})\rangle$ are not eigenstates of $S_z$
because of the spin-orbit coupling.
Neglecting terms in $\lambda^2_{R,D}/\omega_c$ or smaller, one gets\cite{note1}
\begin{equation}
\label{eq15}
\chi_t^{KS}(q,\omega)=
\xi\,{|F(q)|^2\over\omega-\omega(S_-)} \; ,
\end{equation}
where
\begin{equation}
\label{eq16}
F(q)={1\over N}\langle0\vert\sum_{j=1}^N e^{i\bf{q}\cdot\bf{r}_j}\vert0\rangle
\end{equation}
is the gs elastic form factor. Note that $F(0)=1$ and that
$F(q)$ goes to zero when $q\to\infty$.
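Explicitly, the pole condition shifts the Kohn-Sham energy by the residual interaction,
\[
1-2F_{xc}\,\chi_t^{KS}(q,\omega)=0
\quad\Longrightarrow\quad
\omega=\omega(S_-)+2F_{xc}\,\xi\,|F(q)|^2=\omega(S_-)+2W_{xc}\,|F(q)|^2 \; ,
\]
and, since for $g^*<0$ and $W_{xc}<0$ one has $\tilde\omega_L=|g^*\mu_B B+2W_{xc}|=|g^*\mu_B B|-2W_{xc}$, inserting the expression of Eq. (\ref{eq11}) for $\omega(S_-)$ yields the result below.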
From Eq. (\ref{eq13}) one finally obtains
\begin{equation}
\label{eq17}
\omega=|g^*\mu_B B|+2\lambda_R^2{\omega_c\over\omega_c+\tilde\omega_L}
-2\lambda_D^2{\omega_c\over\omega_c-\tilde\omega_L}
-2W_{xc}\left(1-|F(q)|^2\right) \; .
\end{equation}
This is the main result of our work, together with the
lack of SO splitting we have found in the small $B$ limit.
In the limit $q\to\infty$, Eq. (\ref{eq17}) yields
the independent particle spin splitting Eq. (\ref{eq11}), which
crucially depends on the actual value of $W_{xc}$ entering the
definition of $\tilde\omega_L$.
In the $q=0$ limit, neglecting terms of order
$\tilde\omega_L/\omega_c$, Eq. (\ref{eq17}) reduces to\cite{note1}
\begin{equation}
\label{eq18}
\omega=|g^*\mu_B B|+2(\lambda_R^2-\lambda_D^2) \; .
\end{equation}
This expression shows that, neglecting the SO coupling,
Larmor's theorem is fulfilled in the adiabatic
TDLSCDA (it can be shown that the same holds in the adiabatic TDLSDA),
and that at high magnetic fields, the SO
interaction yields a $B$ independent contribution to the spin splitting.
Taking e.g. $m\lambda_R^2/\hbar^2 = 27$ $\mu$eV,
$m\lambda_D^2/\hbar^2 = 6$ $\mu$eV, which have been recently used to
reproduce the spin
splitting in quantum dots\cite{Kon05} and the splitting of the cyclotron
resonance in quantum wells,\cite{Ton04} we get
$2m(\lambda_R^2-\lambda_D^2)/\hbar^2\sim40$ $\mu$eV. This is
definitely a small amount, but it may have an influence on
the fine analysis of some experimental results
(note the vertical scale in Fig. \ref{fig1}).
\section{Comparison with experiments and discussion}
Using inelastic light scattering, Davies et al.\cite{Dav97} and Kang
et al.\cite{Kan00} have measured charge
and spin density excitations in 2D electron systems confined in
GaAs quantum wells at high $B$. In the following we only discuss the
results of Ref.~\onlinecite{Dav97} because the information presented
in Fig. 2 of this reference is especially well suited for the purpose of
our work.
These results are represented in Fig. \ref{fig1}. The data labeled $S$
correspond to wave-vector allowed scattering from the $q=0$ Larmor mode;
indeed, the maximum in-plane $q$ allowed by the experimental geometry is
small, $q_{max}= 6 \times 10^4$ cm$^{-1}$, so that the $S$ mode of energy
$\epsilon_S$ should correspond to the spin splitting energy Eq. (\ref{eq18}).
The data labeled $SW$ is attributed to disorder-activated
scattering, and would correspond to $q\ne0$ excitations of energy
$\epsilon_{SW}$.\cite{Dav97} The difference between
$\epsilon_{SW}$ and $\epsilon_{S}$ is attributed
to the exchange enhancement of $\epsilon_{SW}$ above the Zeeman
energy.\cite{Dav97}
The dashed straight line represents the Larmor energy taking for $g^*$ the bulk
value, $|g^*|=0.44$. This overestimates $\epsilon_S$, especially at high $B$.
Any sensible comparison with these results must take into account the
$B$ dependence of $g^*$. This dependence has been clearly established in
magnetoresistivity experiments,\cite{Dob88}
taking advantage of the fact that the
electron-spin resonance affects the magnetoresistivity of the
2D electron gas, and this can be used to determine $g^*(B)$.
These experiments probe the one-electron energy levels and are not
influenced by many-electron interactions, contrary to magnetoquantum
oscillations, which are strongly affected by them.
The spin splitting obtained in Ref.~\onlinecite{Dob88} is
represented in Fig. \ref{fig1} as dots and crosses,
which correspond to two different samples.
In the lowest Landau level, which is the common physical situation
for all data represented in Fig. \ref{fig1}, these authors have fitted
$g^*(B)$ as $|g^*(B)|=g_0^* + r B/2$. The values of
the parameters $r$ and $g^*_0$ turn out to depend appreciably
on the experimental sample, and
the possibility of a SO shift at high $B$ values
could not be considered there. Moreover,
the $\lambda_{R,D}$ values are rather poorly known and
dependent on, e.g., the thickness of the experimental sample.
We have thus renounced using the $g^*(B)$ laws obtained in
Ref.~\onlinecite{Dob88} in conjunction with Eq. (\ref{eq18}),
to establish a clear evidence of spin-orbit effects on the
$\epsilon_S$ energy obtained from
resonant inelastic light scattering experiments, and have
satisfied ourselves with the more limited goal of
using Eq. (\ref{eq18})
as a three-parameter law to fit $\epsilon_S$ as well as the spin
splittings of Ref. \onlinecite{Dob88}, with the aim of seeing
whether a reasonable value for these parameters can be extracted.
The solid straight lines in Fig. \ref{fig1} represent the result of
such linear fits, whose parameters are collected in
Table \ref{table1}.
In the case of inelastic light scattering, the neglect of
the SO term in Eq. (\ref{eq18}) yields an
unrealistic $g^*_0=0.49$, as this value should be smaller
than that of bulk GaAs due to the penetration
of the electron wave functions into the Al$_x$Ga$_{1-x}$As
barriers.
The scatter of the electron-spin resonance data points\cite{Dob88}
seems to be smaller, and the analysis of the high $B$ data might
be used to ascertain which SO mechanism is dominating in
a given sample. This could be an alternative {\bf or complementary}
method to the one recently proposed\cite{Kon05} of using the anisotropy of
the spin splitting in single-electron resonant tunneling
spectroscopy in lateral quantum dots submitted to
perpendicular or parallel magnetic fields.
The analysis of samples 1 and 2 would indicate that in the
former, the Dresselhaus SO is the dominating mechanism, whereas
in the latter it is the Bychkov-Rashba one.
{\bf We want to stress that
we have extracted the experimental data from a careful digitization
of the original figures. Due to the smallness of the effects we are discussing, we
cannot rule out that this procedure may have had some effect on the value of the
parameters determined from the fit, and our analysis should be considered as
qualitative to some extent. However, we find it encouraging that the
parameters obtained from the fits are meaningful, and within the range of
values found in other works.\cite{Kon05,Ton04}
}
We finally discuss briefly the $q\ne0$ $SW$ mode. From Eqs. (\ref{eq17})
and (\ref{eq18}), we have that
$\epsilon_{SW}-\epsilon_S=-2W_{xc}(1-|F(q)|^2)$.
At high $q$, this difference is essentially determined by $W_{xc}$.
In our calculation, as well as in time-dependent Hartree-Fock\cite{Mac85} and
exact diagonalization\cite{Rez87} calculations, we have found
values of $W_{xc}$ of the order of $-2$ meV. Hence, $-2W_{xc}$ is about a factor
40 larger than the measured $\epsilon_{SW}-\epsilon_S$, which has been
obtained in the $q\to0$ limit where short range correlations
are very important in determining the actual value of $1-|F(q)|^2$.
This can be seen by assuming for $F(q)$ the independent particle value.
Using a Slater
determinant made of Fock-Darwin single particle wave functions to describe the gs
of the system at $B\neq0$, one finds $F(q\ell)=e^{-q^2\ell^2/4}$,
where $\ell$ is the magnetic length $\ell=(\hbar c/e B)^{1/2}$.
In the small $q$ limit, $\epsilon_{SW}-\epsilon_S\simeq-W_{xc}q^2\ell^2$.
For $B=10$ T, $q_{max}\ell\simeq 0.05$ and
$\epsilon_{SW}-\epsilon_S \sim 0.01$ meV, which is about one tenth
of the experimental result as shown in Fig. \ref{fig1}.
Light scattering experiments at small $q$ and high $B$ are thus
very sensitive to correlation
effects in the elastic form factor, which is the key quantity to reproduce
the experimental findings.
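The numbers quoted above for the experimental geometry can be reproduced directly (SI units, in which the magnetic length $\ell=(\hbar c/eB)^{1/2}$ of the Gaussian system becomes $\ell=(\hbar/eB)^{1/2}$; an illustrative check only):

```python
import numpy as np

hbar = 1.054571817e-34    # J s
e = 1.602176634e-19       # C
B = 10.0                  # T

ell = np.sqrt(hbar / (e * B))   # magnetic length, about 8.1 nm at 10 T
q_max = 6e4 * 1e2               # 6 x 10^4 cm^-1 converted to m^-1

assert abs(ell - 8.1e-9) < 0.1e-9
assert abs(q_max * ell - 0.05) < 0.01   # q_max * ell ~ 0.05
```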
\section*{ACKNOWLEDGMENTS} This work has been performed
under grants FIS2005-01414 from
DGI (Spain) and 2005SGR00343 from Generalitat de Catalunya.
E. L. has been supported by DGU (Spain), grant SAB2004-0091.
\section{Comparisons in the case of the isolated Gerschgorin disk}
\label{sec:comparison}
In this section we compare our method of cones with the Gerschgorin theorem with rescaling of the basis, when
trying to estimate an eigenvalue in an isolated Gerschgorin disk and the corresponding eigenvector. Throughout this section we will use the $\|\cdot\|_\infty$ norm.
\subsection{The isolation of the first Gerschgorin disk implies that the matrix $A-a_{11} I$ is dominating}
When applying Theorem~\ref{thm:gersz-eigenval} with the splitting $\mathbb{C} \oplus \mathbb{C}^{n-1}$ we will have two generalized Gerschgorin disks
\begin{eqnarray*}
G_1(A)&=&\overline{B}(a_{11},\|A_{12}\|_\infty)=\overline{B}(a_{11},\sum_{j\neq 1}|a_{1j}|), \\
G_2(A)&=&\{ \lambda \in \mathbb{C} \ : \ \|(A_{22} - \lambda I)^{-1}\|_{\infty}^{-1} \leq \max_{j=2,\dots,n} |a_{j1}| \}.
\end{eqnarray*}
Now we develop computable bounds for $G_2(A)$.
\begin{lemma}
\label{lem:mA-Gersz}
Let $A=[a_{ij}]\in\mathbb{C}^{n\times n}$.
Then
\begin{equation}
\min_i (|a_{ii}| - \sum_{j \neq i} |a_{ij}|) \leq \sup\{ \lambda \in \mathbb{R}\ | \ \forall x\in \mathbb{C}^n \quad \|Ax\|_\infty \geq \lambda \|x\|_\infty \}.
\label{eq:m(A)}
\end{equation}
If $A$ is invertible, then
\[
\min_i (|a_{ii}| - \sum_{j \neq i} |a_{ij}|) \leq \frac{1}{\|A^{-1}\|_\infty}.
\]
\end{lemma}
\begin{proof}
Let
\[
S:= \min_i (|a_{ii}| - \sum_{j \neq i} |a_{ij}|).
\]
Let us take any $x \in \mathbb{C}^n$, such that $\|x\|=1$. Let $i$ be such that $|x_i|=1$. We have
\[
|(Ax)_i| \geq (|a_{ii}| |x_i| - \sum_{j \neq i} |a_{ij}| \cdot |x_j|) \geq (|a_{ii}| - \sum_{j \neq i} |a_{ij}|) \geq S >0.
\]
Hence
\[
\|Ax\|_\infty \geq S.
\]
This establishes \eqref{eq:m(A)}.
For the second part observe that, if $A$ is invertible, then
\[
\sup\{ \lambda \in \mathbb{R}\ | \ \forall x\in \mathbb{C}^n \quad \|Ax\|_\infty \geq \lambda \|x\|_\infty \} = \frac{1}{\|A^{-1}\|_\infty}.
\]
\end{proof}
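The lemma can be exercised numerically on random, strictly row-diagonally dominant complex matrices (an illustrative check, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(100):
    n = 6
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    # shift the diagonal to enforce strict row dominance, so S >= 1 > 0
    A += np.diag(np.abs(A).sum(axis=1) + 1.0)
    S = min(abs(A[i, i]) - sum(abs(A[i, j]) for j in range(n) if j != i)
            for i in range(n))
    lower = 1.0 / np.linalg.norm(np.linalg.inv(A), ord=np.inf)
    assert 0 < S <= lower + 1e-12
```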
From Lemma~\ref{lem:mA-Gersz} it follows that
\begin{eqnarray*}
G_2(A) &\subset& \{ \lambda \in \mathbb{C} \ | \ \min_{i=2,\dots,n}( |a_{ii} - \lambda| - \sum_{j \neq 1,i} |a_{ij}|) \leq \max_{j=2,\dots,n} |a_{j1}| \} \\
&=& \{ \lambda \in \mathbb{C} \ | \ \exists i=2,\dots,n \ |a_{ii} - \lambda| \leq \sum_{j \neq 1,i} |a_{ij}| + \max_{j=2,\dots,n} |a_{j1}| \}.
\end{eqnarray*}
So we see that $G_1(A) \cap G_2(A) =\emptyset $ if the following condition holds
\begin{equation}
|a_{11}-a_{ii}| > \sum_{j\neq 1}|a_{1j}| + \sum_{j \neq 1,i} |a_{ij}| + \max_{j=2,\dots,n} |a_{j1}| \quad\mbox{ for all $i=2,\dots,n$.} \label{eq:GenGerszEstm}
\end{equation}
If we use the classical Gerschgorin theorem, i.e. all blocks are one-dimensional, then to have $G_1 \cap G_i=\emptyset$ we need
\begin{equation}
|a_{11}-a_{ii}| > R_1 + R_i = \sum_{j \neq 1} |a_{1j}| + \sum_{j \neq i} |a_{ij}| \quad\mbox{ for all $i=2,\dots,n$.} \label{eq:StandardGerszEstm}
\end{equation}
Observe that in both cases we have the same Gerschgorin disk $G_1$, so the bound for the first eigenvalue will be the same, provided we have
empty intersections with other disks.
Observe that (\ref{eq:GenGerszEstm}) implies (\ref{eq:StandardGerszEstm}).
Now we show one of the main results of this paper, which states that if a matrix $A=[a_{ij}]$ has an isolated Gerschgorin disk $G_1$, then $A-a_{11}I$ is dominating (relative to the splitting $\mathbb{C}\times\mathbb{C}^{n-1}$) and, under very mild assumptions, the bound obtained from the method of cones is better than the
one from the Gerschgorin theorem.
\begin{theorem}
\label{thm:gerssh-dominating}
Let $A\in\mathbb{C}^{n\times n}$ be given by the formula
\begin{equation*}
A=
\left[
\begin{array}{c|ccc}
a_{11} & a_{12} & \ldots & a_{1n} \\ \hline
a_{21} & a_{22} & \ldots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \ldots & a_{nn}
\end{array}
\right]
=
\begin{bmatrix}
A_{11} & A_{12} \\
A_{21} & A_{22}
\end{bmatrix}.
\end{equation*}
Assume that the matrix $A$ satisfies inequality \eqref{eq:StandardGerszEstm}.
Then the matrix $A-a_{11}I$ is dominating. Moreover, we have
\[
\co{A-a_{11}I}\ \leq \sum\limits_{j\neq 1}|a_{1j}| < \ex{A-a_{11}I},
\]
and if $\sum\limits_{j\neq 1}|a_{1j}| >0$, then
\[
\co{A-a_{11}I}\ < \sum\limits_{j\neq 1}|a_{1j}|.
\]
\end{theorem}
\begin{proof}
Without loss of generality we can assume that $a_{11}=0$. Let us denote $V=\mathbb{C} \oplus \mathbb{C}^{n-1}$.
In order to estimate $\co{A}$ we will first find a bound for the set $Z$ of $\x{x}$ such that $A\x{x} \in \co{V}$. Then
we will compute $\co{A}$ on $Z$.
Let
\begin{equation}
\label{eq:thm-better-1}
\delta_k=|a_{kk}| - \sum_{j \neq k}|a_{kj}| - \sum_{j \neq 1}|a_{1j}|.
\end{equation}
From our assumptions it follows that
\[
\delta=\min_{k=2,\dots,n} \delta_k >0.
\]
Let $\epsilon>0$ be such that
\begin{equation}
\label{eq:thm-better-2}
\epsilon |a_{k1}| < \delta_k, \quad k=2,\dots,n.
\end{equation}
Assume now that $\x{x}=(x_1,\x{x}_2)$ is such that $|x_1| \leq (1+\epsilon) \|\x{x}_2\|_\infty$. We will show that $A\x{x} \notin \co{V}$.
We can assume that $\|\x{x}_2\|_\infty=1$ and that $|x_k|=1$ for some $k=2,\dots,n$. Then we have
\begin{align*}
|(Ax)_k| & \geq |a_{kk}| - \sum_{j \notin \{1,k\}} |a_{kj}| - (1+\epsilon) |a_{k1}|= |a_{kk}| - \sum_{j \neq k} |a_{kj}| - \epsilon |a_{k1}| \\[0.3em]
& \overset{by\,\eqref{eq:thm-better-1}}{=} \sum_{j \neq 1} |a_{1j}| + \delta_k - \epsilon |a_{k1}|
\overset{by\,\eqref{eq:thm-better-2}}{>} \sum_{j \neq 1} |a_{1j}| \geq \|A_{12} x_2\|=\|(A\x{x})_1\|_\infty.
\end{align*}
Hence $A\x{x} \notin \co{V}$.
Therefore, if $A\x{x} \in \co{V}$, then $|x_1| > (1+\epsilon)\|\x{x}_2\|_\infty$. In particular, we obtain
\begin{equation}
\mbox{if $A\x{x} \in \co{V}$, then} \quad \norm{\x{x}}=|x_1| > (1+\epsilon) \|\x{x}_2\|_\infty. \label{eq:coA-set-estm}
\end{equation}
Now we are ready to estimate $\co{A}$. If $\x{x}=(x_1, \x{x}_2)$ is such that $A\x{x} \in \co{V}$, then
\begin{align*}
\norm{A\x{x}} & = \|A_{12}\x{x}_2\|_{\infty}\leq \|A_{12}\|_{\infty}\cdot \|\x{x}_2\|_{\infty}\\[0.3em]
& \overset{by\,\eqref{eq:coA-set-estm}}{\leq}
\|A_{12}\|_{\infty} \frac{\norm{\x{x}}}{1+\epsilon} = \frac{1}{{1+\epsilon}}\sum\limits_{j\neq 1}|a_{1j}| \norm{\x{x}}.
\end{align*}
Hence $\co{A}\ \leq \sum\limits_{j\neq 1}|a_{1j}|$, but if $\sum\limits_{j\neq 1}|a_{1j}| >0$, then $\co{A}\ < \sum\limits_{j\neq 1}|a_{1j}|$.
Now we estimate $\ex{A}$. We will use Lemma~\ref{lem:coA-exA-alter}.
Let us take an arbitrary $\x{x}=(x_1,\x{x}_2)$ such that $\|\x{x}_2\|_{\infty}=1$ and $|x_1|\leq 1$. Let $k=2,\ldots, n$ be such that $|x_k|=1$. From \eqref{eq:StandardGerszEstm} we obtain
\[
|(Ax)_k|\geq |a_{kk}|-\sum\limits_{j\neq k}|a_{kj}| \overset{by\,\eqref{eq:StandardGerszEstm}}{>} \sum\limits_{j\neq 1}|a_{1j}|.
\]
Hence $\|A\x{x}\|_{\infty}>\sum\limits_{j\neq 1}|a_{1j}|$.
Therefore we have shown that
\[
\ex{A}>\sum\limits_{j\neq 1}|a_{1j}|.
\]
\end{proof}
\begin{remark}
Observe that from Theorem~\ref{thm:gerssh-dominating} we know that our method of cones (i.e. Theorem~\ref{thm:cone-main-eigen-location}) can be used for all matrices which have an isolated Gerschgorin disk. Moreover, we obtain
\[
|\lambda-a_{11}|\leq\ \co{A-a_{11}I}\ \leq \frac{1}{1+\epsilon} R_1 = \frac{1}{1+\epsilon} \sum\limits_{j\neq 1}|a_{1j}|.
\]
This means that, if $R_1$ (the radius of the first Gerschgorin disk) is nonzero, then the estimate of the first eigenvalue from our method based on cones is better than the one obtained from the Gerschgorin theorem. This is also
valid for all possible rescalings in the application of the Gerschgorin theorem, provided we apply the same scaling in the method of cones.
\end{remark}
\subsection{Comparison of Theorem~\ref{thm:cone-single-eigenval} with the Gerschgorin theorems}
In the proof of Theorem~\ref{thm:cone-single-eigenval} we applied Theorem~\ref{thm:cone-main-eigen-location} to the matrix $A - a_{11}I$ to estimate the size of the eigenvalue, $\lambda_1$, close to $0$. We looked for possibly large parameter $r$, such that $A-a_{11}I$ is $r$-dominating and then we obtain
\[
|a_{11} - \lambda_1| \leq \co[r]{A} \leq \frac{\|A_{12}\|}{r}.
\]
This is exactly $G_1$ obtained from the Gerschgorin theorem for $\tilde{A}_r$.
The optimization with respect of $r$ performed in the proof of the Theorem~\ref{thm:cone-single-eigenval}
to obtain the formula can also be repeated by suitable rescaling using the original Gerschgorin theorem as long as $G_1(\tilde{A}_r)$ is disjoint from
other Gerschgorin disks for $\tilde{A}_r$. Therefore both approaches differ only in the range of $r$'s over which the optimization can be performed.
In fact we are only interested in the upper bound for $r$ in both methods.
Let $(1,\delta_2,\dots,\delta_n)$ be the eigenvector corresponding to $\lambda_1$. We obtain from Theorem~\ref{thm:r-dom-spectrum-gap}
the bound $1 \geq r \|(\delta_2,\dots,\delta_n)\|$, while from Theorem~\ref{thm:gen-gersz-eigenvect-estm} applied to $\tilde{A}$ after returning to the original basis we have $|\delta_i| \leq 1/r$. Hence the result is the same for the method based on cones and the Gerschgorin Theorem.
\medskip
The example below demonstrates that it is possible to use the Gerschgorin theorem to isolate and estimate the eigenvector and eigenvalue, while
assumptions of Theorem~\ref{thm:cone-single-eigenval} and also assumptions of the generalized Gerschgorin Theorems~\ref{thm:gen-gersch-thm1} and~\ref{thm:gersz-eigenval} are not satisfied. This appears to contradict Theorem~\ref{thm:gerssh-dominating}, but it does not, because
in the proof of Theorem~\ref{thm:cone-single-eigenval} we used an expression for $\co{A}$ from Lemma~\ref{thm:formulas-contr-exp}, which turns out to be
an overestimation, see also Example~\ref{ex:dominat-exactly}.
By Theorem \ref{thm:gerssh-dominating} we know that the considered matrix is dominating, hence we can estimate the eigenpair using Theorem~\ref{thm:cone-main-eigen-location}, see Example \ref{ex:not-G-better}.
\begin{example}
\label{ex:G-better}
Let $A\in{\cal L}(\mathbb{C}\times\mathbb{C}^2)$ be given by the formula
\[
A=
\begin{bmatrix}
A_{11} & A_{12} \\
A_{21} & A_{22}
\end{bmatrix}
= \left[
\begin{array}{c|cc}
0 & 1 & 0 \\ \hline
0.5 & 2 & 0 \\
50 & 0 & 100
\end{array}
\right].
\]
The classical Gerschgorin disks are
\[
G_1=\overline{B}(0,1), \quad G_2=\overline{B}(2,0.5), \quad G_3=\overline{B}(100,50).
\]
It is clear that they are mutually disjoint, hence from the Gerschgorin theorem there exists an eigenvalue $\lambda$, $|\lambda|\leq 1$.
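A quick numerical check of this example (not needed for the argument): the disks are indeed pairwise disjoint, and the eigenvalue closest to zero is $1-\sqrt{3/2}\approx-0.2247$, well inside $G_1$:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.5, 2.0, 0.0],
              [50.0, 0.0, 100.0]])

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)   # (1, 0.5, 50)

# pairwise disjoint classical Gerschgorin disks
for i in range(3):
    for j in range(i + 1, 3):
        assert radii[i] + radii[j] < abs(centers[i] - centers[j])

lam = min(np.linalg.eigvals(A), key=abs)          # eigenvalue closest to 0
assert abs(lam) <= 1                              # it lies in G_1
assert abs(abs(lam) - (np.sqrt(1.5) - 1)) < 1e-10
```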
Now, we look at our Theorem~\ref{thm:cone-single-eigenval} to estimate the eigenvalue close to $0$. We have $A_{11}=0$ and
\begin{eqnarray*}
\|A_{12}\|_\infty=1, \quad \|A_{21}\|_{\infty}=50, \quad \|(A_{22} - A_{11}\cdot I)^{-1}\|_\infty=0.5, \\
\|(A_{22} - A_{11}\cdot I)^{-1}\|_\infty^{-2} - 4 \|A_{12}\| \cdot \|A_{21}\|_\infty= 4 - 200 < 0.
\end{eqnarray*}
Therefore assumptions of Theorem~\ref{thm:cone-single-eigenval} are not satisfied.
Observe that also assumptions of the generalized Gerschgorin Theorems~\ref{thm:gen-gersch-thm1} and~\ref{thm:gersz-eigenval} for the decomposition given above are not satisfied. Our generalized Gerschgorin disks are
\begin{eqnarray*}
G_1(A)&=&\overline{B}(0,1), \\
G_2(A)&=&\{\lambda \ :\ \|(A_{22} - \lambda I)^{-1}\|_{\infty}^{-1} \leq 50\}.
\end{eqnarray*}
We have
\begin{equation*}
(A_{22} - \lambda I)^{-1}= \frac{1}{(100-\lambda)(2-\lambda)}
\begin{bmatrix}
100-\lambda & 0 \\
0 & 2-\lambda\\
\end{bmatrix},
\end{equation*}
hence
\begin{equation*}
\|(A_{22} - \lambda I)^{-1}\|_{\infty}^{-1}=\min(|2-\lambda|,|100-\lambda|).
\end{equation*}
It is easy to see that $G_1(A)\cap G_2(A)\neq\emptyset$, therefore we cannot use these theorems.
\medskip
\textbf{Rescaling:} When applying our method based on cones we should look for the largest $r$ such that $\tilde{A}_r$ is $1$-dominating and when using the Gerschgorin theorem we look for $r$
such that $G_1(\tilde{A}_r)$ has empty intersection with the other Gerschgorin disks for $\tilde{A}_r$.
For the Gerschgorin disks we need to have the following inequalities
\[
\frac{1}{r} < 2 - r/2, \quad \frac{1}{r} < 100 - 50r.
\]
We obtain $\sup r =1 + \sqrt{\frac{49}{50}} \approx 2$. Hence we obtain the bound $|\lambda| \leq 1/r \approx 1/2$.
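The supremum can be confirmed by a coarse scan over $r$ (an illustrative check):

```python
import numpy as np

r = np.linspace(1e-3, 4.0, 400001)
ok = (1/r < 2 - r/2) & (1/r < 100 - 50*r)   # disjointness conditions above
sup_r = r[ok].max()

assert abs(sup_r - (1 + np.sqrt(49/50))) < 1e-4
assert abs(1/sup_r - 0.5) < 0.01            # bound |lambda| <= 1/r ~ 1/2
```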
For the approach based on cones we need to find the largest $r$, such that $\tilde{A}_r$ is $1$-dominating. Using Theorem~\ref{thm:formulas-contr-exp} we
obtain the following condition
\[
\frac{1}{r} \|A_{12}\|= \frac{1}{r} < \|A_{22}^{-1}\|^{-1} - r \|A_{21}\| = 2 - 50r.
\]
Easy computations show that no such $r$ exists in this case.
A similar effect occurs if we use the generalized Gerschgorin theorem.
\end{example}
In the following example we show that despite the fact that the matrix $A$ from Example~\ref{ex:G-better} does not satisfy the assumptions of Theorem~\ref{thm:cone-single-eigenval}, we can use our method of cones (we apply Theorem~\ref{thm:cone-main-eigen-location}) to estimate the eigenvalue close to zero.
\begin{example}
\label{ex:not-G-better}
Recall that $A\in{\cal L}(\mathbb{C}\times\mathbb{C}^2)$ of Example~\ref{ex:G-better} was given by the formula
\[
A=
\left[
\begin{array}{c|cc}
0 & 1 & 0 \\ \hline
0.5 & 2 & 0 \\
50 & 0 & 100
\end{array}
\right].
\]
From Theorem \ref{thm:gerssh-dominating} we know that the matrix $A$ is dominating, so we can estimate the eigenvalue $\lambda$ close to zero by $|\lambda|\leq\co{A}$ (see Theorem~\ref{thm:cone-main-eigen-location}).
From Lemma~\ref{lem:coA-exA-alter} we have
\begin{equation*}
\co[]{A} = \dfrac{1}{\min\left(\|x\|_{\infty} \ \mbox{ for } \x{x}\in\mathbb{R}^3\ \mbox{ such that }\ \|A\x{x}\|_{>1}\leq \|A\x{x}\|_{\leq 1}=1 \right)},
\end{equation*}
where $\|\x{x}\|_{\leq k}:=\max\limits_{i\leq
k}|x_i|$ and $\|\x{x}\|_{>k}:=\max\limits_{i> k}|x_i|$ for $\x{x}=(x_1,\ldots, x_k,\ldots, x_n)\in\mathbb{R}^n$, see \eqref{eq:coA-alter}.
Calculating this constant comes down to solving a simple optimization problem. We obtain \[
\min\left(\|x\|_{\infty} \ \mbox{ for } \x{x}\in\mathbb{R}^3\ \mbox{ such that }\ \|A\x{x}\|_{>1}\leq \|A\x{x}\|_{\leq 1}=1 \right)=2.
\]
This minimum is realized at the points $\left(-2,1,\frac{99}{100}\right)^T$, $\left(-2,1,\frac{101}{100}\right)^T$, $\left(2,-1,-\frac{101}{100}\right)^T$ and $\left(2,-1,-\frac{99}{100}\right)^T$. Hence $\co[]{A}=\frac{1}{2}$. By Theorem \ref{thm:cone-main-eigen-location} we get that the eigenvalue close to zero satisfies
\[
|\lambda|\leq\co{A}=\frac{1}{2}.
\]
The bound $|\lambda|\leq\frac{1}{2}$ can also be obtained from the Gerschgorin theorem, see Example \ref{ex:G-better} ({\em 'Rescaling'}). Note that so far we have not improved the estimate by rescaling the matrix $A$ with
\(
X=
\begin{bmatrix}
r & 0 \\
0 & I
\end{bmatrix}
\)
for $r\in(0,\infty)$.
From Theorem \ref{thm:gerssh-dominating} and the calculations from Example \ref{ex:G-better} ({\em 'Rescaling'}) we know that our method works even if we rescale the matrix $A$ by the matrix
$X$ for $r<1+\sqrt{\frac{49}{50}}$. For $r=\frac{9}{5}$ we obtain $|\lambda|\leq\co{A}=\frac{9}{26}<\frac{1}{2}$.
\end{example}
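The estimates above can be confronted with a direct numerical computation of the spectrum. The following Python sketch (a sanity check, not part of the formal argument) does so with \texttt{numpy}:

```python
import numpy as np

# Example ex:not-G-better: the eigenvalue of A closest to zero is
# 1 - sqrt(1.5) ~ -0.2247, consistent with the cone-based bounds
# |lambda| <= 1/2 and (after rescaling with r = 9/5) |lambda| <= 9/26.
A = np.array([[0.0,  1.0,   0.0],
              [0.5,  2.0,   0.0],
              [50.0, 0.0, 100.0]])
eigvals = np.linalg.eigvals(A)
lam = eigvals[np.argmin(np.abs(eigvals))]  # eigenvalue closest to zero
assert abs(lam) <= 0.5
assert abs(lam) <= 9 / 26
```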
In the following two examples, in view of the complicated calculations involved, we will not attempt to apply the generalized Gerschgorin theorem (in both examples the assumptions of Theorems~\ref{thm:gen-gersch-thm1} and~\ref{thm:gen-gersz-eigenvect-estm} are satisfied).
In the first example we construct a matrix $A$ such that $A - a_{11} I$ is $1$-dominating, while the first Gerschgorin disk is not isolated.
\begin{example}
\label{ex:mA-better-G}
Let $A\in{\cal L}(\mathbb{C}\times\mathbb{C}^2)$ be given by the formula
\[
A=
\begin{bmatrix}
A_{11} & A_{12} \\
A_{21} & A_{22}
\end{bmatrix}
= \left[
\begin{array}{c|cc}
0 & 0.75 & 0 \\ \hline
\epsilon_1 & 1& 0.5 \\
\epsilon_2 & 0.5 & 100
\end{array}
\right],
\]
where $\epsilon_1$, $\epsilon_2$ are sufficiently small.
Observe that $G_1(A) \cap G_2(A) = \overline{B}(0,0.75)\cap\overline{B}(1,0.5+\epsilon_1) \neq \emptyset$, hence the Gerschgorin theorem does not give us that $\lambda_1 \in G_1(A)$.
It is easy to see that $A - a_{11} I$ is $1$-dominating. Indeed, from Theorem~\ref{thm:formulas-contr-exp} we have
\begin{eqnarray*}
\co[1]{A} \leq \|A_{12}\|= 3/4, \quad \ex[1]{A} \geq \|A_{22}^{-1}\|^{-1} - \|A_{21}\| \approx 1.
\end{eqnarray*}
Hence $A$ is $1$-dominating and Theorem~\ref{thm:cone-main-eigen-location} implies that $\lambda_1 \in G_1(A)$.
\medskip
\textbf{Rescaling:} We set $\epsilon_1=\epsilon_2=0.1$.
We optimize by rescaling with $r$. The Gerschgorin-disk approach leads to the inequality
\[
\frac{3}{4 r} < 0.5 - r/10.
\]
There is no $r$ for which this holds.
The approach based on cones requires that
\[
\frac{1}{r} \|A_{12}\| = \frac{3}{4 r} < \|A_{22}^{-1}\|^{-1} - r \|A_{21}\| \approx 1 - \frac{r}{10}.
\]
We obtain
\[
\sup r \approx 5 + \sqrt{\frac{35}{2}}.
\]
Hence we get the bound $|\lambda| \leq \frac{3}{4\sup r} \approx 0.0817$.
\end{example}
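For the concrete choice $\epsilon_1=\epsilon_2=0.1$ used above, the bound can be checked numerically (a sanity check, not part of the argument):

```python
import numpy as np

# Example ex:mA-better-G with epsilon_1 = epsilon_2 = 0.1: the
# eigenvalue of A closest to zero satisfies the rescaled cone-based
# bound |lambda| <= 0.0817.
A = np.array([[0.0,  0.75,   0.0],
              [0.1,  1.0,    0.5],
              [0.1,  0.5,  100.0]])
eigvals = np.linalg.eigvals(A)
lam = eigvals[np.argmin(np.abs(eigvals))]
assert abs(lam) <= 0.0817
```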
The following example illustrates the case of the matrix $A$ for which both methods discussed above can be applied.
\begin{example}
\label{ex-gersz-our}
We put
\[
A =
\begin{bmatrix}
A_{11} & A_{12} \\
A_{21} & A_{22}
\end{bmatrix}
=\left[
\begin{array}{c|cc}
\phantom{-} 0 & \frac{2}{5} & -\frac{1}{5} \\[0.2em] \hline \\[-1em]
\phantom{-} \frac{1}{5} & \frac{3}{2} & \phantom{-} \frac{2}{5} \\[0.3em]
-\frac{1}{10} & \frac{3}{10} & \phantom{-} 2 \\
\end{array}
\right].
\]
First, by Theorem~\ref{thm:cone-single-eigenval} we estimate the eigenvalue close to $0$. We have $a_{11}=0$ and
\begin{eqnarray*}
\|A_{12}\|_\infty=\frac{3}{5}, \quad \|A_{21}\|_{\infty}=\frac{1}{5}, \quad \|(A_{22} - a_{11}\cdot I)^{-1}\|_\infty=\frac{5}{6}, \\
\|(A_{22} - a_{11}\cdot I)^{-1}\|_\infty^{-2} - 4 \|A_{12}\|_\infty \cdot \|A_{21}\|_\infty= \frac{24}{25}>0.
\end{eqnarray*}
Therefore the assumptions of Theorem~\ref{thm:cone-single-eigenval} are satisfied and we obtain that the eigenvalue $\lambda$ close to $0$ satisfies $|\lambda|\leq \frac{1}{5}$.
Now we use the Gerschgorin theorems to estimate the eigenvalue close to $0$. The Gerschgorin disks are
\[
G_1(A)=\overline{B}\left(0,\frac{3}{5}\right), \quad G_2(A)=\overline{B}\left(\frac{3}{2},\frac{3}{5}\right)\quad\mbox{and}\quad
G_3(A)=\overline{B}\left(2,\frac{2}{5}\right).
\]
It is easy to see that $G_1(A)\cap G_2(A)=\emptyset$ and $G_1(A)\cap G_3(A)=\emptyset$. If we additionally rescale the matrix $A$ (with $r=3$, which is the same rescaling as in our method), we obtain the matrix
\[
\tilde{A}_r=
\begin{bmatrix}
\phantom{-} 0 & \frac{2}{15} & -\frac{1}{15} \\[0.3em]
\phantom{-} \frac{3}{5} & \frac{3}{2} & \phantom{-}\frac{2}{5} \\[0.3em]
-\frac{3}{10} & \frac{3}{10} & \phantom{-}2 \\
\end{bmatrix},
\]
and consequently $G_1(\tilde{A}_r)\cap G_2(\tilde{A}_r)=\emptyset$ and $G_1(\tilde{A}_r)\cap G_3(\tilde{A}_r)=\emptyset$. Hence from the Gerschgorin theorem there exists an eigenvalue $\lambda$ such that $|\lambda|\leq \frac{1}{5}$.
\medskip
\textbf{Rescaling:} We look for the largest $r$ for each method, which allows us to obtain the best estimation for the eigenvalue $\lambda$ close to $0$.
For Gerschgorin disks we need to solve the following inequalities
\[
\frac{3}{2}>\frac{3}{5 r}+\frac{r}{5}+\frac{2}{5}, \quad
2>\frac{3}{5 r}+\frac{r}{10}+\frac{3}{10}.
\]
We obtain $\sup r =\frac{1}{4} \left(11+\sqrt{73}\right)$. Hence we obtain the bound $|\lambda| \leq \frac{3}{5\sup r}\approx 0.1228$.
The cone based approach requires
\[
\frac{1}{r} \|A_{12}\| = \frac{3}{5 r} < \|A_{22}^{-1}\|^{-1} - r \|A_{21}\| = \frac{6}{5} - \frac{r}{5}.
\]
We obtain
\(
\sup r =3+\sqrt{6}.
\)
Hence we obtain the bound $|\lambda| \leq \frac{3}{5\sup r} \approx 0.110102$.
Performing the same calculations as above for the transpose of the matrix $A$, we obtain
\[
\sup r = \frac{1}{2} \left(3+\sqrt{6}\right) \ \mbox{ and }\ |\lambda|\leq \frac{3}{10\sup r}\approx 0.110102
\]
from the classical Gerschgorin theorem, and for the cone-based approach we get
\[
\sup r = \frac{1}{46} \left(72+\sqrt{3597}\right) \ \mbox{ and }\ |\lambda|\leq \frac{3}{10\sup r}\approx 0.104565.
\]
As one can see, the cone-based approach gives a better estimate of the eigenvalue close to zero than the classical Gerschgorin theorem with rescaling.
\end{example}
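All the bounds derived in this example can be checked against the numerically computed spectrum (a sanity check, not part of the formal argument):

```python
import numpy as np

# Example ex-gersz-our: the eigenvalue of A closest to zero satisfies
# every bound derived above, the tightest being 0.104565 (cone-based
# approach applied to the transpose of A).
A = np.array([[ 0.0, 0.4, -0.2],
              [ 0.2, 1.5,  0.4],
              [-0.1, 0.3,  2.0]])
eigvals = np.linalg.eigvals(A)
lam = eigvals[np.argmin(np.abs(eigvals))]
assert abs(lam) <= 0.104565  # cone-based bound for the transpose
assert abs(lam) <= 0.110102  # cone-based bound for A
assert abs(lam) <= 0.2       # bound of Theorem thm:cone-single-eigenval
```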
\medskip
\textbf{Conclusions:} As one can see from the above examples and theorems, our method is better than the Gerschgorin theorem and its modifications. The main advantages of our method are that it:
\begin{itemize}
\item locates the spectrum and eigenspaces of a matrix when eigenvalues of multiplicity greater than one or clusters of very close eigenvalues are present,
\item gives better estimates for isolated eigenvalues,
\item allows one to deal with compositions of matrices.
\end{itemize}
\section{Cones and dominating maps}
\label{sec:cones}
In this section we introduce the basic concepts
and tools of our method of invariant cones to locate the eigenspaces and bound the spectrum for matrices.
To this end we modify the concept of cones from \cite{KT}.
Our approach is strongly motivated by methods from the theory of hyperbolic dynamical systems, in particular by the results of Newhouse \cite{Nh}, who obtained conditions for a hyperbolic splitting on a compact invariant set of a diffeomorphism in terms of its induced action on a cone-field and its complement.
\begin{definition}
By a {\em cone-space} we understand a finite dimensional Banach
space $E$ with semi-norms $\co[]{\cdot}$ (we call it {\em
contracting}), $\ex{\cdot}$ (which we call {\em expanding}) such
that
\[
\norm{\x{x}} := \max(\co[]{\x{x}}, \ex{\x{x}})
\]
defines an equivalent norm on $E$.
For $r>0$, the {\em $r$-norm} on the cone-space $E$ is defined by
\[
\norm{\x{x}}_r:=\max(\co[]{\x{x}}, r\cdot\ex{\x{x}}).
\]
\end{definition}
\begin{definition}\label{def:1}
Let $E$ be a cone-space. We define the {\em $r$-contracting cone}
in $E$ by
\begin{align*}
\co[r]{E} & :=\{\x{x} \in E: \;\co[]{\x{x}} \geq r\ex{\x{x}}\},\\
\intertext{and the {\em $r$-expanding cone} in $E$ by}
\ex[r]{E} & :=\{\x{x} \in E:\; \co[]{\x{x}} \leq r\ex{\x{x}} \}.
\end{align*}
\end{definition}
\noindent Note that
\begin{equation}\label{eq:1}
E= \co[r]{E}\cup\ex[r]{E}.
\end{equation}
In the same way we define the $r$-contracting cone and the $r$-expanding
cone in a subspace of $E$. If $r=1$ we omit the subscript $r$;
in particular, we speak of the contracting cone. We introduce the
scaling of the semi-norms by $r$ to have better control over the size of
the cones (see Figure \ref{rys:1}), which will consequently allow
us to better locate the eigenvectors.
\begin{figure}[H]
\centering
\subfloat[The contracting cone in $\mathbb{R}\times\mathbb{R}$.]{
\begin{tikzpicture}[
scale=0.45,
axis/.style={thin, dashed, ->, >=stealth'}
]
\path [fill=jszary] (-5,5) -- (0,0) -- (-5,-5);
\path [fill=jszary] (5,5) -- (0,0) -- (5,-5);
\draw[axis] (-5,0) -- (5.4,0);
\draw[axis] (0,-5) -- (0,5.4);
\end{tikzpicture}
}
\hspace{2cm}
\subfloat[The $2$-contracting cone in $\mathbb{R}\times\mathbb{R}$.]{
\begin{tikzpicture}[
scale=0.45,
axis/.style={thin, dashed, ->, >=stealth'}
]
\path [fill=jszary] (-5,2.5) -- (0,0) -- (-5,-2.5);
\path [fill=jszary] (5,2.5) -- (0,0) -- (5,-2.5);
\draw[axis] (-5,0) -- (5.4,0);
\draw[axis] (0,-5) -- (0,5.4);
\end{tikzpicture}
}
\end{figure}
\begin{figure}[H]
\ContinuedFloat
\setcounter{figure}{2}
\centering
\subfloat[ The expanding cone in $\mathbb{R}\times\mathbb{R}$.]{
\begin{tikzpicture}[
scale=0.45,
axis/.style={thin, dashed, ->, >=stealth'}
]
\path [fill=jszary] (-5,5) -- (0,0) -- (5,5);
\path [fill=jszary] (-5,-5) -- (0,0) -- (5,-5);
\draw[axis] (-5,0) -- (5.4,0);
\draw[axis] (0,-5) -- (0,5.4);
\end{tikzpicture}
}
\hspace{2cm}
\subfloat[The $2$-expanding cone in $\mathbb{R}\times\mathbb{R}$.]{
\begin{tikzpicture}[
scale=0.45,
axis/.style={thin, dashed, ->, >=stealth'}
]
\path [fill=jszary] (-5,5) -- (-5,2.5) -- (0,0) -- (5,2.5) -- (5,5) -- (-5,5);
\path [fill=jszary] (-5,-5) -- (-5,-2.5) -- (0,0) -- (5,-2.5) -- (5,-5) -- (-5,-5);
\draw[axis] (-5,0) -- (5.4,0);
\draw[axis] (0,-5) -- (0,5.4);
\end{tikzpicture}
}
\caption{The cones in the cone-space $\mathbb{R}\times\mathbb{R}$.}\label{rys:1}
\end{figure}
If $E$ has a fixed product structure $E = E_1\times E_2$, we
introduce a natural cone-space structure on $E$ by defining
seminorms
\[
\co[]{\x{x}}:=\|x_1\|, \quad \ex{\x{x}}:=\|x_2\|\quad\text{ for }\;\x{x}=(x_1,x_2)\in E_1\times E_2.
\]
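The cone notions above are simple to state in code. The following Python sketch (the function names are ours, introduced only for illustration) implements the $r$-norm and the two cones for $E=\mathbb{R}\times\mathbb{R}$ with the product semi-norms, cf.\ Figure~\ref{rys:1}:

```python
# Illustrative sketch for E = R x R with the product seminorms
# |x|_c = |x_1| (contracting) and |x|_e = |x_2| (expanding).

def r_norm(x, r=1.0):
    """The r-norm ||x||_r = max(|x|_c, r * |x|_e)."""
    x1, x2 = x
    return max(abs(x1), r * abs(x2))

def in_contracting_cone(x, r=1.0):
    """x lies in the r-contracting cone iff |x|_c >= r * |x|_e."""
    x1, x2 = x
    return abs(x1) >= r * abs(x2)

def in_expanding_cone(x, r=1.0):
    """x lies in the r-expanding cone iff |x|_c <= r * |x|_e."""
    x1, x2 = x
    return abs(x1) <= r * abs(x2)

# Every vector lies in at least one of the two cones (their union is E).
assert all(in_contracting_cone(x) or in_expanding_cone(x)
           for x in [(1.0, 0.5), (0.5, 1.0), (1.0, 1.0), (-2.0, 3.0)])
# For r = 2 the contracting cone is narrower than for r = 1.
assert in_contracting_cone((1.0, 0.75), r=1.0)
assert not in_contracting_cone((1.0, 0.75), r=2.0)
```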
\medskip
In the proof of our main result, Theorem \ref{thm:cone-main-eigen-location}, the following proposition
will play a crucial role.
\begin{proposition}\label{prop:1}
Let $E=E_1\times E_2$ be a cone-space and let $r>0$ be given.
Assume that we have direct sum decomposition $E=V_1\oplus V_2$
such that
\[
V_1\subset\co[r]{E}\quad\text{and}\quad V_2\subset\ex[r]{E}.
\]
Then \( \mathrm{dim} V_1=\mathrm{dim} E_1\quad\text{and}\quad\mathrm{dim} V_2=\mathrm{dim} E_2. \)
\end{proposition}
\begin{proof}
Let $n:=\mathrm{dim} E_1$ and $m:=\mathrm{dim} E_2$. First we show that $\mathrm{dim}
V_1\leq n$. For an indirect proof, assume that $\mathrm{dim} V_1>n$. Then
there exist linearly independent vectors
$\x{v_1},\ldots,\x{v_{n+1}}\in V_1$. Obviously $\x{v_i}=(w_i,
z_i)$ for $i\in\{1,\ldots,n+1\}$ and unique $w_i\in E_1$, $z_i\in
E_2$. Since $w_1,\ldots,w_{n+1}\in E_1$ and $\mathrm{dim} E_1=n$ there
exists a set of $n+1$ scalars, $\alpha_1, \ldots,\alpha_{n+1}$, not all
zero, such that
\[
\alpha_1 w_1+\ldots+\alpha_{n+1}w_{n+1}=0.
\]
Note that
\[
\x{z}:=\alpha_1 z_1+\ldots+\alpha_{n+1}z_{n+1}\neq 0,
\]
because otherwise the vectors $\x{v_1},\ldots,\x{v_{n+1}}$ would
not be linearly independent. Consequently we obtain
\begin{equation*}
(0,\x{z})=\left(\sum_{i=1}^{n+1}\alpha_i w_i, \sum_{i=1}^{n+1}\alpha_i
z_i\right)\in V_1\subset\co[r]{E},
\end{equation*}
and thus $r\|\x{z}\|\leq\|0\|=0$, which implies that $\x{z}=0$. This
contradicts the linear independence of the vectors
$\x{v_1},\ldots,\x{v_{n+1}}$.
The proof that $\mathrm{dim} V_2\leq m$ is analogous. Finally, since $\mathrm{dim}
E=n+m$ and $\mathrm{dim} V_1\leq n$, $\mathrm{dim} V_2\leq m$ we obtain
\[
\mathrm{dim} V_1=n,\quad\text{and}\quad\mathrm{dim} V_2=m.
\]
\end{proof}
By an {\em operator} we mean a linear mapping between cone-spaces
$E$ and $F$. We denote the space of all operators by ${\cal L}(E,F)$.
If $F=E$, we denote ${\cal L}(E,E)$ by ${\cal L}(E)$.
\medskip
Let $A\in{\cal L}(E,F)$. We define
\begin{alignat}{2}
\co[r]{A} & := & \inf & \{R \in \mathbb{R}_+ \, | \, \norm{A\x{x}}_r \leqslant R\norm{\x{x}}_r \text{ for all } \x{x}\in E: A\x{x} \in \co[r]{F}\}, \label{eq:2}\\[0.3em]
\ex[r]{A} & :=\; &\sup &\{ R \in \mathbb{R}_+ \, | \, \norm{A\x{x}}_r \geqslant R\norm{\x{x}}_r \text{ for all } \x{x} \in E: \x{x} \in \ex[r]{E}\}. \label{eq:3}
\end{alignat}
The following lemma is obvious.
\begin{lemma}
\label{lem:coA-exA-alter}
Let $A\in{\cal L}(E,F)$.
\begin{align}
\co[r]{A} &= \left(\inf \{ \norm{\x{x}}_r \, | \, A\x{x} \in \co[r]{F}, \, \norm{ A\x{x}}_r=1 \}\right)^{-1} \; \mbox{when $A$ is invertible}, \label{eq:coA-alter} \\[0.3em]
\ex[r]{A} &= \inf \{ \norm{A\x{x}}_r \, | \, \x{x} \in \ex[r]{E}, \, \norm{\x{x}}_r=1 \}. \label{eq:exA-alter}
\end{align}
\end{lemma}
\begin{remark}\label{rem:rate}
Observe, that
\begin{alignat*}{2}
\norm{A\x{x}}_r &\leqslant \co[r]{A}\norm{\x{x}}_r &\mbox{for}\quad & \x{x}\in A^{-1}\co[r]{F},\\[0.3em]
\norm{A\x{x}}_r &\geqslant \ex[r]{A}\norm{\x{x}}_r &\quad\mbox{for}\quad & \x{x}\in \ex[r]{E}.
\end{alignat*}
\end{remark}
\noindent The above definitions of $
\co[r]{A}$ and $\ex[r]{A}$ are modifications of analogous notions
in \cite{Nh}, where $\ex[]{A}$ is called the expansion rate and
$1/\!\!\co[]{A}$ is the co-expansion rate. Using these rates
we can generalize the classical notion of dominating maps, which is relevant
to our research.
\begin{definition}\label{def:2}
We say that $A\in{\cal L}(E,F)$ is {\em $r$-dominating}, if
\[
\co[r]{A} < \ex[r]{A}.
\]
By ${\cal D}_r(E,F)$ we denote the set of all $A\in{\cal L}(E,F)$ which are
$r$-dominating. If $F=E$, we denote the space ${\cal D}_r(E,E)$ by
${\cal D}_r(E)$.
\end{definition}
\begin{observation}\label{ob:1}
Let $\tilde{E}\subset E$, $\tilde{F}\subset F$ be subspaces and
let $A\in{\cal L}(E,F)$ be such that $A(\tilde{E})\subset\tilde{F}$.
Then $A|_{\tilde E} \in {\cal L}(\tilde E,\tilde F)$ and
\[
\co[r]{A|_{\tilde{E}}}\leq\co[r]{A}\quad\text{ and
}\quad\ex[r]{A}\leq\ex[r]{A|_{\tilde{E}}}.
\]
Moreover, if $A\in{\cal D}_r(E,F)$ then $A\in{\cal D}_r(\tilde{E},\tilde{F})$.
\end{observation}
\begin{proof}
It is a consequence of \eqref{eq:2}, \eqref{eq:3} and Definition
\ref{def:2}.
\end{proof}
It turns out that the $r$-cones are invariant under $r$-dominating operators.
\begin{theorem}\label{tw:1}
Let $A\in {\cal D}_r(E,F)$ and let $\x{v}\in E$ be arbitrary. Then
\begin{align*}
\x{v}\in \ex[r]{E} & \implies A\x{v}\in \ex[r]{F} ,\\[0.3em]
A\x{v}\in \co[r]{F} & \implies \x{v}\in \co[r]{E}.
\end{align*}
\end{theorem}
\begin{proof}
The proof is a simple modification of the proof of
\cite[Proposition 2.1]{KT}.
\end{proof}
As a consequence of the above theorem we obtain that the composition of $r$-dominating maps is
$r$-dominating. Moreover, we get estimates for the expansion and contraction rates.
\begin{proposition}\label{proposition:1}
Let $A\in {\cal D}_r(F,G)$ and $B\in {\cal D}_r(E,F)$. Then $A\circ B\in
{\cal D}_r(E,G)$ and
\begin{equation}\label{eq:4}
\co[r]{A\circ B}\leq \co[r]{A}\cdot\co[r]{B},\ \ex[r]{A\circ B} \geq \ex[r]{A}\cdot\ex[r]{B}.
\end{equation}
\end{proposition}
\begin{proof}
To prove the first inequality from \eqref{eq:4}, consider an
$\x{x}\in E$ such that $(A\circ B)(\x{x})\in \co[r]{G}$. From
\eqref{eq:2} and Theorem \ref{tw:1} we know that $B\x{x}\in
\co[r]{F}$, and thus we have
\[
\norm{A\circ B(\x{x})}_r\leq\co[r]{A} \cdot \norm{B\x{x}}_r
\leq \co[r]{A} \cdot \co[r]{B} \cdot \norm{\x{x}}_r.
\]
Hence
\[
\co[r]{A\circ B} \leq \co[r]{A} \cdot \co[r]{B}.
\]
Using \eqref{eq:3} and Theorem \ref{tw:1}, we obtain the second
inequality from \eqref{eq:4}.
As a simple consequence of \eqref{eq:4} we obtain $A\circ B\in
{\cal D}_r(E,G)$.
\end{proof}
In the remainder of this section we show
how to estimate $\co[r]{A}$ and $\ex[r]{A}$.
Consider two cone-spaces $E = E_1\times E_2$ and $F = F_1\times
F_2$. Let $A\colon E\to F$ be an operator given in the matrix
form by
\[
A = \begin{bmatrix}
A_{11} &A_{12} \\[0.3em]
A_{21} & A_{22}
\end{bmatrix}.
\]
By
\[
\norm{A} _{r}:= \max\big(\|A_{11}\| + \frac{1}{r}\|A_{12}\|,
r\|A_{21}\| + \|A_{22}\|\big)
\]
we define the {\em $r$-norm} of the operator $A$, where $\|\cdot\|$ denotes an
operator norm. Observe that it satisfies
\[
\norm{A\x{x}} _{r}\leq \norm{A} _{r}\cdot \norm{\x{x}} _r \quad
\text{ for }\; \x{x}\in E.
\]
Note that, in general, it is not the operator norm induced by
$\norm{\cdot}_{r}$ (except for the case when $E_1$ is one-dimensional).
\begin{theorem} \label{thm:formulas-contr-exp}
Let $A=[A_{ij}]_{1\leq i,j\leq 2}\in{\cal L}(E_1\times E_2,F_1\times
F_2)$ and $r\in(0,\infty)$ be given.
\begin{enumerate}
\item We have
\[
\co[r]{A} \leq \| A_{11}\|+\frac{1}{r}\|A_{12}\|.
\]
\item Additionally, if $A_{22}$ is invertible, then
\[
\ex[r]{A} \geq \|A_{22}^{-1}\|^{-1} - r\|A_{21}\|.
\]
\end{enumerate}
\end{theorem}
\begin{proof}
For the proof of the first inequality, we take $\x{x}=(x_1,x_2)\in
E_1\times E_2$ such that $A\x{x}\in \co[r]{F}$. From Definition
\ref{def:1} we have
\begin{equation}\label{eq:6}
\|A_{11}x_1+A_{12}x_2\|\geq r\|A_{21}x_1+A_{22}x_2\|,
\end{equation}
and therefore
\begin{align*}
\norm{A\x{x}}_r &= \max(\|A_{11}x_1+A_{12}x_2\|, r\|A_{21}x_1+A_{22}x_2\|)\\
&\overset{by\,\eqref{eq:6}}{=}\|A_{11}x_1+A_{12}x_2\|\leq \|A_{11}\|\cdot \|x_1\| + \frac{1}{r}\|A_{12}\|\cdot r\|x_2\|\\
&\leq (\|A_{11}\| + \frac{1}{r}\|A_{12}\|)\cdot \norm{\x{x}}_r.
\end{align*}
For the proof of the second inequality, suppose that
$\x{x}=(x_1,x_2)\in \ex[r]{E}$, where $x_1\in E_1$, $x_2\in E_2$.
Then
\begin{equation}\label{eq:7}
\|x_1\|\leq r\|x_2\|=\norm{\x{x}}_r.
\end{equation}
We know that
\begin{equation}\label{eq:8}
\|A_{22}x_2\|\geq \|A_{22}^{-1}\|^{-1}\|x_2\| \geq 0.
\end{equation}
Finally, we obtain
\begin{align*}
\norm{A\x{x}}_r &\geq r\|A_{21}x_1+A_{22}x_2\|\geq r\|A_{22}x_2\| - r\|A_{21}x_1\|\\
&\overset{by\,\eqref{eq:8}}{\geq}r\|A_{22}^{-1}\|^{-1}\|x_2\| -
r\|A_{21}\|\|x_1\|\overset{by\,\eqref{eq:7}}{\geq}\left(\|A_{22}^{-1}\|^{-1}
- r\|A_{21}\|\right)\cdot\norm{\x{x}}_r.
\end{align*}
\end{proof}
\begin{example}
\label{ex:dominat-ineq}
Let us verify that the matrix $A\in{\cal L}(\mathbb{C}\times\mathbb{C},\mathbb{C}\times\mathbb{C})$, \(
A =
\begin{bmatrix}
2 & 1.5 \\
1 & 5
\end{bmatrix}
\) is dominating. By Theorem \ref{thm:formulas-contr-exp} we have \( \co[]{A} \leq
3.5 < 4 \leq\ex{A}, \) and therefore $A$ is dominating.
\end{example}
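The estimates of Theorem~\ref{thm:formulas-contr-exp} are straightforward to evaluate numerically. The following Python sketch (the helper name is ours) reproduces the computation of Example~\ref{ex:dominat-ineq} in the $\infty$-norm:

```python
import numpy as np

# Estimates of Theorem thm:formulas-contr-exp for an operator in
# 2 x 2 block form (infinity norms throughout).
def block_rate_bounds(A11, A12, A21, A22, r=1.0):
    """Return (upper bound for co_r{A}, lower bound for ex_r{A})."""
    norm = lambda M: np.linalg.norm(M, np.inf)
    co_upper = norm(A11) + norm(A12) / r
    ex_lower = 1.0 / norm(np.linalg.inv(A22)) - r * norm(A21)
    return co_upper, ex_lower

# The matrix of Example ex:dominat-ineq: co{A} <= 3.5 < 4 <= ex{A},
# hence A is dominating.
co_up, ex_low = block_rate_bounds(np.array([[2.0]]), np.array([[1.5]]),
                                  np.array([[1.0]]), np.array([[5.0]]))
assert abs(co_up - 3.5) < 1e-9
assert abs(ex_low - 4.0) < 1e-9
```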
Let us stress that the estimates from Theorem~\ref{thm:formulas-contr-exp} are sharp in general; however, there are cases in which they are strict inequalities.
\begin{example}
\label{ex:dominat-exactly}
Let $A\in{\cal L}(\mathbb{C}\times\mathbb{C},\mathbb{C}\times\mathbb{C})$ be given by the formula \(
A =
\begin{bmatrix}
2 & 3 \\
2 & 5
\end{bmatrix}.
\) We show that $A$ is dominating. Observe that Theorem \ref{thm:formulas-contr-exp} does not allow us to decide whether this matrix $A$ is dominating, since
\[
\co[]{A}\leq \| A_{11}\|+\|A_{12}\|=5 \ \mbox{ and }\ \ex[]{A}\geq \|A_{22}^{-1}\|^{-1} - \|A_{21}\|=3.
\]
We calculate $\co{A}$ and $\ex{A}$ exactly (taking the norm $\|\cdot\|_{\infty}$) from the formulas \eqref{eq:coA-alter} and \eqref{eq:exA-alter}. The minimum in \eqref{eq:exA-alter} is attained at the points $(1,-1)^T$ and $(-1,1)^T$. It is easy to see that the matrix $A$ is invertible, and the minimum in \eqref{eq:coA-alter} is attained at the points $(\frac{1}{2},0)^T$ and $(-\frac{1}{2},0)^T$.
Hence
\[
\co{A}=2 \quad\mbox{ and }\quad \ex{A}=3.
\]
Finally, we obtain that $A$ is dominating.
Observe that for this example the Gerschgorin theorem cannot be applied (it is impossible to separate the Gerschgorin disks).
\end{example}
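The exact values of $\co{A}$ and $\ex{A}$ computed in Example~\ref{ex:dominat-exactly} can be reproduced numerically from formulas \eqref{eq:coA-alter} and \eqref{eq:exA-alter}; restricting to a grid of real vectors suffices here, since the minimizers reported in the example are real. A Python sketch:

```python
import numpy as np

# Example ex:dominat-exactly revisited numerically (infinity norms).
A = np.array([[2.0, 3.0], [2.0, 5.0]])
Ainv = np.linalg.inv(A)
t = np.linspace(-1.0, 1.0, 4001)

# ex{A}: infimum of ||A x||_inf over the expanding cone with
# ||x||_inf = 1; by homogeneity it suffices to take x = (x1, 1),
# |x1| <= 1.
ex_A = min(max(abs(2.0 * x1 + 3.0), abs(2.0 * x1 + 5.0)) for x1 in t)

# co{A}: reciprocal of the infimum of ||A^{-1} y||_inf over y = A x in
# the contracting cone with ||A x||_inf = 1, i.e. y = (1, y2), |y2| <= 1.
co_A = 1.0 / min(np.linalg.norm(Ainv @ np.array([1.0, y2]), np.inf)
                 for y2 in t)

assert abs(ex_A - 3.0) < 1e-9   # matches ex{A} = 3
assert abs(co_A - 2.0) < 1e-9   # matches co{A} = 2
```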
\section{Estimations of the eigenvalues and eigenvectors }
\label{sec:eigen-estm}
In this section we develop
computable estimates for the eigenvalues and eigenspaces based on
the results from the previous section.
\begin{lemma}\label{lem:2}
Let $A\in{\cal L}(E_1\times E_2)$ be given such that
\[
A:=
\begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{bmatrix}.
\]
If $A_{22}$ is invertible, $d=\|A_{22}^{-1}\|^{-1}-\|A_{11}\|>0$
and $\Delta:=d^2-4\|A_{12}\|\|A_{21}\|>0$ then
\[
A\in{\cal D}_{r}(E_1\times E_2)\quad\text{for }\; \left\{
\begin{alignedat}{2}
&r\in\left(\frac{d-\sqrt{\Delta}}{2\|A_{21}\|}, \frac{d+\sqrt{\Delta}}{2\|A_{21}\|}\right) &&\text{ if }\;\|A_{21}\|\neq 0\\
&r\in\left(\frac{\|A_{12}\|}{d},\infty\right)&&\text{ if
}\;\|A_{21}\|=0
\end{alignedat}
\right..
\]
\end{lemma}
\begin{proof}
Let $a:=\|A_{12}\|$, $b:=\|A_{11}\|$ and $c:=\|A_{21}\|$. Making
use of Theorem \ref{thm:formulas-contr-exp} it suffices to show that
\[
b+\frac{a}{r}<(d+b)-cr.
\]
Multiplying both sides of the above inequality by the positive
number $r$ we get the inequality
\begin{equation}\label{eq:12}
cr^2-dr+a<0.
\end{equation}
If $c=0$ then we get $r>\frac{a}{d}$. Suppose now that $c\neq
0$. Since from our assumption follows that $\Delta>0$ we see
inequality \eqref{eq:12} is satisfied for
\[
r\in\left(\frac{d-\sqrt{\Delta}}{2c},
\frac{d+\sqrt{\Delta}}{2c}\right).
\]
\end{proof}
\begin{remark}
\label{rem:good-eps} Let $A$ be an operator, which satisfies the
assumptions of Lemma \ref{lem:2} (in particular $\Delta>0$). Let $a:=\|A_{12}\|$,
$b:=\|A_{11}\|$ and $c:=\|A_{21}\|\not = 0$.
It is easy to see, that
\[
\frac{d-\sqrt{\Delta}}{2c} < \frac{d}{2c} < \frac{d+\sqrt{\Delta}}{2c} < \frac{d}{c}.
\]
\end{remark}
Therefore, if $A$ satisfies the assumptions of Lemma \ref{lem:2}, $\|A_{21}\|\not = 0$, and we want to find the largest possible $r$ for which $A$ is $r$-dominating, then we can
take $r=\frac{d}{2\|A_{21}\|}$. With this choice we have $r<r_{max}< 2r$, where $r_{max}$ is the supremum of the set of $r$'s obtained in the above lemma;
therefore our choice might not be optimal, but we obtain an easily manageable expression.
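Lemma~\ref{lem:2} and Remark~\ref{rem:good-eps} translate directly into a small procedure. The following Python sketch (the helper name is ours) reproduces, for the matrix of Example~\ref{ex-gersz-our}, the interval $(3-\sqrt{6},3+\sqrt{6})$ and the convenient choice $r=3$:

```python
import numpy as np

# Interval of admissible r from Lemma lem:2 and the convenient choice
# r = d/(2c) of Remark rem:good-eps (infinity norms).
def dominating_r_interval(A11, A12, A21, A22):
    norm = lambda M: np.linalg.norm(M, np.inf)
    a, b, c = norm(A12), norm(A11), norm(A21)
    d = 1.0 / norm(np.linalg.inv(A22)) - b
    assert d > 0 and d * d - 4.0 * a * c > 0  # hypotheses of the lemma
    if c == 0:
        return (a / d, np.inf), None
    delta = np.sqrt(d * d - 4.0 * a * c)
    return ((d - delta) / (2.0 * c), (d + delta) / (2.0 * c)), d / (2.0 * c)

# Matrix of Example ex-gersz-our (a = 3/5, b = 0, c = 1/5, d = 6/5):
# the interval is (3 - sqrt(6), 3 + sqrt(6)) and the choice is r = 3,
# exactly the rescaling used in that example.
(r_lo, r_hi), r = dominating_r_interval(
    np.array([[0.0]]), np.array([[0.4, -0.2]]),
    np.array([[0.2], [-0.1]]), np.array([[1.5, 0.4], [0.3, 2.0]]))
assert abs(r_lo - (3.0 - np.sqrt(6.0))) < 1e-9
assert abs(r_hi - (3.0 + np.sqrt(6.0))) < 1e-9
assert abs(r - 3.0) < 1e-9
```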
As a corollary we obtain a well-known result for the location of an isolated eigenvalue and its eigenspace \cite[Theorem 3.11]{Stewart}. We present its statement in the form adapted to our notation.
\begin{theorem} (\cite[Theorem 3.11]{Stewart}) \label{thm:cone-single-eigenval}
Let $A=[a_{ij}]_{1\leq i,j\leq n}\in{\cal L}(\mathbb{C}\times\mathbb{C}^{n-1})$ be
given in the block form by
\[
A:=
\begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{bmatrix},
\]
where $A_{11}=a_{11}$. Assume that $A_{22}-a_{11}\cdot
I_{\mathbb{C}^{n-1}}$ is invertible and $\|(A_{22}-a_{11}\cdot
I_{\mathbb{C}^{n-1}})^{-1}\|^{-2}-4\|A_{12}\|\|A_{21}\|>0$. Then
\begin{enumerate}
\item there exists a unique eigenvalue $\lambda$ of $A$ which satisfies
\[
|\lambda-a_{11}|\leq
2\|A_{12}\|\cdot\|A_{21}\|\cdot\|(A_{22}-a_{11}\cdot
I_{\mathbb{C}^{n-1}})^{-1}\|,
\]
\item the eigenspace corresponding to $\lambda$ is one-dimensional and there exist unique $\delta_2$, $\ldots$, $\delta_n\in\mathbb{C}$,
\[
\|(0,\delta_2,\ldots,\delta_n)^T\|\leq2\|A_{21}\|\cdot\|(A_{22}-a_{11}\cdot
I_{\mathbb{C}^{n-1}})^{-1}\|\cdot\|(1,0,\ldots,0)^T\|
\]
such that \( (1,\delta_2,\ldots,\delta_n)^T \) is the eigenvector
corresponding to $\lambda$.
\end{enumerate}
\end{theorem}
\begin{proof} It is easy to see that if $A_{21}=0$, then the theorem holds. Therefore we assume that $\|A_{21}\| >0$.
In order to apply Lemma~\ref{lem:2} to matrix $A - a_{11}I_{\mathbb{C}^n}$ we set
$a:=\|A_{12}\|$, $c:=\|A_{21}\|$ and $d=\|(A_{22}-a_{11}\cdot
I_{\mathbb{C}^{n-1}})^{-1}\|^{-1}$.
By Lemma \ref{lem:2} and
Remark~\ref{rem:good-eps} we get $A-a_{11}\cdot
I_{\mathbb{C}^{n}}\in{\cal D}_{d/(2c)}(\mathbb{C}\times\mathbb{C}^{n-1})$, and from Corollary \ref{cor:2} and Theorem~\ref{thm:formulas-contr-exp} we conclude that
there exists a unique eigenvalue $\lambda$ of $A$ which satisfies
\begin{equation*}
|\lambda - a_{11}| < \co[\frac{d}{2c}]{A-a_{11}I} \leq \frac{1}{\frac{d}{2c}} \|A_{12}\| = \frac{2ac}{d}.
\end{equation*}
From Theorem \ref{thm:cone-main-eigen-location} (second point) we know that the eigenspace containing the eigenvector corresponding to $\lambda$ lies in $\co[\frac{d}{2c}]{\mathbb{C}\times\mathbb{C}^{n-1}}$. Hence (see Definition \ref{def:1}) we obtain unique $\delta_2$, $\ldots$, $\delta_n\in\mathbb{C}$, \(
\|(0,\delta_2,\ldots,\delta_n)^T\|\leq2\|A_{21}\|\cdot\|(A_{22}-a_{11}\cdot
I_{\mathbb{C}^{n-1}})^{-1}\|\cdot\|(1,0,\ldots,0)^T\| \) such that \(
(1,\delta_2,\ldots,\delta_n)^T \) is the eigenvector corresponding to
$\lambda$.
\end{proof}
Let us stress here that in the proof of Theorem~\ref{thm:cone-single-eigenval}, through Lemma~\ref{lem:2}, we used the estimates for $\co[]{A}$ and $\ex[]{A}$ provided by Theorem~\ref{thm:formulas-contr-exp}, which may fail to establish that a dominating matrix is indeed dominating. If this is
the case, we use Theorem~\ref{thm:cone-main-eigen-location} instead. This happens in Examples \ref{ex:G-better} and \ref{ex:not-G-better}.
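The bound of Theorem~\ref{thm:cone-single-eigenval} is easy to evaluate. The following Python sketch checks it on a made-up matrix (the matrix and the helper name are ours, chosen only so that the hypotheses of the theorem hold):

```python
import numpy as np

# Eigenvalue bound of Theorem thm:cone-single-eigenval (infinity norm):
# |lambda - a11| <= 2 ||A12|| ||A21|| ||(A22 - a11 I)^{-1}||.
def isolated_eigenvalue_radius(A):
    a11 = A[0, 0]
    A12, A21, A22 = A[0:1, 1:], A[1:, 0:1], A[1:, 1:]
    norm = lambda M: np.linalg.norm(M, np.inf)
    R = norm(np.linalg.inv(A22 - a11 * np.eye(A22.shape[0])))
    assert 1.0 / R**2 - 4.0 * norm(A12) * norm(A21) > 0  # hypothesis
    return 2.0 * norm(A12) * norm(A21) * R

A = np.array([[1.0, 0.1, 0.05],
              [0.2, 5.0, 0.3],
              [0.1, 0.2, 7.0]])
rho = isolated_eigenvalue_radius(A)
eigvals = np.linalg.eigvals(A)
lam = eigvals[np.argmin(np.abs(eigvals - A[0, 0]))]
assert abs(lam - A[0, 0]) <= rho  # the isolated eigenvalue obeys the bound
```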
The following lemma shows how $\|(A - z I)^{-1}\|^{-1}$ can be bounded from below in an arbitrary norm when $A$ is close to a diagonal matrix.
\begin{lemma}\label{mA-est}
Let $n\in\mathbb{N}$, $z\in\mathbb{C}$ and $A\in\mathbb{C}^{n\times n}$ be given.
Let $A$ be decomposed as $A=J+E$, where $J$ is a diagonal
matrix and $E$ vanishes on the diagonal. Assume that $J-z\cdot I_{\mathbb{C}^{n}}$ is invertible and
$\|(J-z\cdot I_{\mathbb{C}^{n}})^{-1}\|^{-1}-\|E\|>0$. Then
\[
\|(A-z\cdot I_{\mathbb{C}^n})^{-1}\|^{-1}\geq \|(J-z\cdot I_{\mathbb{C}^{n}})^{-1}\|^{-1}-\|E\|.
\]
\end{lemma}
\begin{proof}
It is well-known that for an invertible operator $B$ we
have
\[
(B-C)^{-1} = \sum^{\infty}_{k=0}(B^{-1}C)^k B^{-1}\quad\text{for
}\; C\in\mathbb{C}^{n\times n}\;:\;\|C\|<1/\|B^{-1}\|.
\]
Hence, if $\|C\|<1/\|B^{-1}\|$,
then
\begin{eqnarray*}
\|(B-C)^{-1}\| \leq \frac{\|B^{-1}\|}{1 - \|B^{-1}\| \cdot
\|C\|},
\end{eqnarray*}
so we obtain
\begin{equation}
\|(B-C)^{-1}\|^{-1} \geq \frac{1}{\|B^{-1}\|} (1-\|B^{-1}\| \cdot
\|C\|) = \frac{1}{\|B^{-1}\|} - \|C\|. \label{eq:normA-b-inv}
\end{equation}
From \eqref{eq:normA-b-inv} applied to $B=J-z
I_{\mathbb{C}^{n}}$ and $C=-E$ we get the assertion of the lemma.
\end{proof}
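A quick numerical check of the lemma on a made-up matrix (the matrix is ours) may be instructive:

```python
import numpy as np

# Lemma mA-est: for A = J + E with J diagonal and E zero on the
# diagonal, ||(A - zI)^{-1}||^{-1} >= ||(J - zI)^{-1}||^{-1} - ||E||.
n, z = 3, 0.0
J = np.diag([1.0, 5.0, 9.0])
E = np.array([[0.0, 0.1, 0.1],
              [0.1, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
A = J + E
norm = lambda M: np.linalg.norm(M, np.inf)
lhs = 1.0 / norm(np.linalg.inv(A - z * np.eye(n)))
rhs = 1.0 / norm(np.linalg.inv(J - z * np.eye(n))) - norm(E)
assert rhs > 0        # hypothesis of the lemma
assert lhs >= rhs     # assertion of the lemma
```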
Now we present results about the location of the eigenspaces.
\begin{theorem}\label{thm:block-eigenspaces}
Let $k,n\in\mathbb{N}$ be such that $0\leq k\leq n$ and let
$A\in{\cal L}(\mathbb{C}^k\times\mathbb{C}^{n-k})$ be given in the block form by
\[
A:=
\begin{bmatrix}
A_{11} & A_{12}\\
A_{21} & A_{22}
\end{bmatrix},
\]
where $A_{11}\in{\cal L}(\mathbb{C}^k)$, $A_{12}\in{\cal L}(\mathbb{C}^k,\mathbb{C}^{n-k})$, $A_{21}\in{\cal L}(\mathbb{C}^{n-k},\mathbb{C}^{k})$ and $A_{22}\in{\cal L}(\mathbb{C}^{n-k})$. Assume that $A_{22}$ is
invertible, $d:=\|A_{22}^{-1}\|^{-1}-\|A_{11}\|>0$ and
$d^2-4\|A_{12}\|\|A_{21}\|>0$. Then there exists a unique direct
sum decomposition $\mathbb{C}^k\times\mathbb{C}^{n-k}=F_1\oplus F_2$ into
$A$-invariant subspaces $F_1$, $F_2$ such that $\mathrm{dim}
F_1=k$, $\mathrm{dim} F_2=n-k$ and
\begin{equation}\label{cor_eq:1}
\begin{aligned}
F_1 & \subset\left\{(x_1, x_2)\in\mathbb{C}^k\times\mathbb{C}^{n-k} : \|x_2\|\leq\frac{2\|A_{21}\|}{d}\|x_1\| \right\},\\
F_2 & \subset\left\{(x_1, x_2)\in\mathbb{C}^k\times\mathbb{C}^{n-k} :
\frac{2\|A_{21}\|}{d}\|x_1\|\leq\|x_2\| \right\}.
\end{aligned}
\end{equation}
Moreover, we have
\begin{equation}\label{cor_eq:2}
\sigma(A|_{F_1})\subset\overline{B}\left(0,\|A_{11}\|+\frac{2\|A_{12}\|\cdot\|A_{21}\|}{d}\right),\quad
\sigma(A|_{F_2})\subset\mathbb{C}\setminus
B\left(0,\|A_{22}^{-1}\|^{-1}-\frac{d}{2} \right).
\end{equation}
\end{theorem}
\begin{proof}
Let $c:=\|A_{21}\|$. If $c=0$, the assertion holds. Assume that
$c\neq 0$. By Lemma \ref{lem:2} we get
$A\in{\cal D}_{d/(2c)}(\mathbb{C}^k\times\mathbb{C}^{n-k})$, and from Theorem
\ref{thm:cone-main-eigen-location} we know that there exists a direct sum decomposition
$\mathbb{C}^k\times\mathbb{C}^{n-k}=F_1\oplus F_2$ such that $\mathrm{dim} F_1=k$, $\mathrm{dim}
F_2=n-k$ and $F_1$, $F_2$ are $A$-invariant. The properties
\eqref{cor_eq:1} and \eqref{cor_eq:2} are consequences of Theorem
\ref{thm:cone-main-eigen-location}, combined with Definition \ref{def:1} and Theorem \ref{thm:formulas-contr-exp},
respectively.
\end{proof}
\begin{corollary}\label{cor:block-eigenspaces}
We use the same notation and decomposition of the matrix A as in Theorem~\ref{thm:block-eigenspaces}.
Assume that for some $z\in \mathbb{C}$ matrices $A_{11} - zI_{\mathbb{C}^k}$, $A_{22}-
zI_{\mathbb{C}^{n-k}}$ are invertible and $d:=\|(A_{22}-
zI_{\mathbb{C}^{n-k}})^{-1}\|^{-1}-\|A_{11}-zI_{\mathbb{C}^{k}}\|>0$,
$d^2-4\|A_{12}\|\|A_{21}\|>0$. Then there exists a unique direct
sum decomposition $\mathbb{C}^k\times\mathbb{C}^{n-k}=F_1\oplus F_2$ into
$A$-invariant subspaces $F_1$, $F_2$ such that $\mathrm{dim} F_1=k$, $\mathrm{dim}
F_2=n-k$ and
\begin{align*}
F_1 & \subset\left\{(x_1, x_2)\in\mathbb{C}^k\times\mathbb{C}^{n-k} : \|x_2\|\leq\frac{2\|A_{21}\|}{d}\|x_1\| \right\},\\
F_2 & \subset\left\{(x_1, x_2)\in\mathbb{C}^k\times\mathbb{C}^{n-k} :
\frac{2\|A_{21}\|}{d}\|x_1\|\leq\|x_2\| \right\}.
\end{align*}
Moreover, we have
\begin{align*}
\sigma(A|_{F_1}) & \subset\overline{B}\left(z,\|A_{11}-zI_{\mathbb{C}^{k}}\|+\frac{2\|A_{12}\|\cdot\|A_{21}\|}{d}\right), \\
\sigma(A|_{F_2}) & \subset\mathbb{C}\setminus B\left(z,\|(A_{22}-
zI_{\mathbb{C}^{n-k}})^{-1}\|^{-1}-\frac{d}{2} \right).
\end{align*}
\end{corollary}
\input{gersch.tex}
\subsection{Example}
In the following example we consider a matrix with multi-dimensional block for which we estimate eigenspaces.
\begin{example}\label{ex:subspace}
Consider the matrix $A\in{\cal L}(\mathbb{C}^2\times\mathbb{C}^2)$ given by
\[
A=
\begin{bmatrix}
A_{11} & A_{12} \\
A_{21} & A_{22}
\end{bmatrix}
= \left[
\begin{array}{cc|cc}
0 & 0.15 & 0.11 & 0.02 \\
0.2 & 0 & 0.1 & 0.05 \\ \hline
0.01 & 0.025 & 0 & 1.5 \\
0.15 & 0.05 & 1 & 0 \\
\end{array}
\right].
\]
We have $\|A_{11}\|_{\infty}=0.2$, $\|A_{12}\|_{\infty}=0.15$, $\|A_{21}\|_{\infty}=0.2$.
From Theorem~\ref{thm:block-eigenspaces}
($d=\|A_{22}^{-1}\|_{\infty}^{-1}-\|A_{11}\|_{\infty}=1-0.2=0.8>0$ and
$d^2-4\|A_{12}\|_{\infty}\|A_{21}\|_{\infty}=0.52>0$) we know that there exist
invariant subspaces $F_1$ and $F_2$, which satisfy
\begin{align*}
F_1 & \subset\left\{(x_1, x_2)\in\mathbb{C}^2\times\mathbb{C}^{2} : \|x_2\|\leq 0.5 \|x_1\| \right\},\\
F_2 & \subset\left\{(x_1, x_2)\in\mathbb{C}^2\times\mathbb{C}^{2} : \|x_1\|\leq
2\|x_2\|\right\}.
\end{align*}
and
$\sigma(A|_{F_1})\subset\overline{B}(0,0.275)$,
$\sigma(A|_{F_2})\subset\mathbb{C}\setminus B(0,0.6)$ (see Figure
\ref{fig2:b}).
\begin{figure}[H]
\setcounter{subfigure}{0}
\centering
\subfloat[Gerschgorin circles.]{\includegraphics[width=0.48\textwidth]{./images/kl2-a.jpg}}
\quad
\subfloat[Estimates based on Theorem~\ref{thm:block-eigenspaces}. The white annulus does not contain any eigenvalue.\label{fig2:b}]{\includegraphics[width=0.48\textwidth]{./images/kl2-b.jpg}}
\caption{Gerschgorin and our circles with approximate eigenvalues in Example \ref{ex:subspace}.}
\end{figure}
Observe that when using the Gerschgorin theorem with one-dimensional blocks and scalings, as described at the end of Section~\ref{subsec:Ger-th}, we are not able to separate the spectrum of $A$, because the centers of the Gerschgorin
circles are located at zero.
\medskip
Now we discuss what happens when we use the generalized Gerschgorin theorems from \cite{Gen}. First we rescale the matrix $A$ by
\(
X=
\begin{bmatrix}
2 & 0 \\ 0 & I
\end{bmatrix}
\)
(we take the same rescaling as in our method, see Remark \ref{rem:good-eps}) to get
\[
\tilde{A}=X^{-1}AX=
\begin{bmatrix}
A_{11} & \frac{1}{2}A_{12} \\ 2A_{21} & A_{22}
\end{bmatrix}.
\]
We apply Theorems~\ref{thm:gen-gersch-thm1} and~\ref{thm:gersz-eigenval} to the above block decomposition, and
obtain the generalized Gerschgorin disks:
\begin{eqnarray*}
G_1(\tilde{A})&=& \left\{\lambda\in\mathbb{C} : \|(A_{11} - \lambda I)^{-1}\|_{\infty}^{-1} \leq \frac{1}{2}\|A_{12}\|_\infty\right\}, \\
G_2(\tilde{A})&=&\left\{\lambda\in \mathbb{C} : \|(A_{22} - \lambda I)^{-1}\|_{\infty}^{-1} \leq 2\|A_{21}\|_\infty\right\}.
\end{eqnarray*}
We want to show that $G_1(\tilde{A}) \cap G_2(\tilde{A}) = \emptyset$. Let us check that $G_1(\tilde{A})\subset\overline{B}(0,21/80)$.
We have
\[
(A_{11}-\lambda I)^{-1}=\frac{1}{\lambda^2-0.03}
\begin{bmatrix}
-\lambda & -0.15 \\
-0.2 & -\lambda
\end{bmatrix},
\]
so we get
\[
\|(A_{11}-\lambda I)^{-1}\|_{\infty}^{-1}=\frac{|\lambda^2-0.03|}{0.2+|\lambda|}.
\]
For $\lambda\in G_1(\tilde{A})$ we have
\[
\frac{|\lambda^2-0.03|}{0.2+|\lambda|}\leq 0.075.
\]
Performing simple algebraic manipulations and passing to polar coordinates $\lambda=re^{i\varphi}$ we obtain
\[
40000 r^4 - 15 r (160 r \cos (2 \varphi)+15 r+6) +27\leq 0, \quad r=|\lambda|\in[0,\infty),\ \varphi\in[0,2\pi).
\]
Solving the above inequality we get
\[
\sup r = \frac{3}{80} \left(1+\sqrt{33}\right)< \frac{21}{80}.
\]
This means that $G_1(\tilde{A})\subset\overline{B}(0,21/80)$. Now we show that $\lambda\notin G_2(\tilde{A})$ for an arbitrary $\lambda\in\overline{B}(0,21/80)$.
Indeed we have
\[
(A_{22}-\lambda I)^{-1}=\frac{1}{\lambda^2-1.5}
\begin{bmatrix}
-\lambda & -1.5 \\
-1 & -\lambda
\end{bmatrix}.
\]
It is easy to see that for $\lambda\in\overline{B}(0,21/80)$ we have
\[
\|(A_{22}-\lambda I)^{-1}\|_{\infty}< \frac{\frac{3}{2}+\frac{21}{80}}{\frac{3}{2}-\left(\frac{21}{80}\right)^2}=\frac{3760}{3053}.
\]
Hence
\[
\|(A_{22}-\lambda I)^{-1}\|_{\infty}^{-1}>\frac{3053}{3760}>\frac{8}{10}, \qquad \lambda\in G_1(\tilde{A})\subset \overline{B}(0,21/80).
\]
Finally, we get $G_1(\tilde{A}) \cap G_2(\tilde{A}) = \emptyset$ (see Figure \ref{fig2:c}), and therefore we obtain from Theorems~\ref{thm:gen-gersch-thm1} and~\ref{thm:gersz-eigenval} that two eigenvalues belong to $G_1(\tilde{A})$ while the remaining two eigenvalues lie inside $G_2(\tilde{A})$.
As one can see, for the eigenvalues close to $0$ the generalized Gerschgorin theorem with the scaling $r=2$ gives a better estimate than Theorem~\ref{thm:block-eigenspaces}, but the generalized Gerschgorin theorem cannot give us the eigenspaces.
\begin{figure}[H]
\centering
\includegraphics[width=0.55\textwidth]{./images/kl2-c.jpg}
\caption{Generalized Gerschgorin circles: $G_1(\tilde{A})$ -- the larger circles and $G_2(\tilde{A})$ -- the smaller ones in Example \ref{ex:subspace} (compare Fig.~\ref{fig2:b}).}
\label{fig2:c}
\end{figure}
\end{example}
\subsection{Gerschgorin theorems}
\label{subsec:Ger-th}
For the convenience of the reader, in this section we recall the Gerschgorin theorem and its modifications.
We have a matrix $A$ which has a block structure
\[
A = \left[
\begin{array}{cccc}
A_{11} & A_{12} & \cdots & A_{1n} \\
A_{21} & A_{22} & \cdots & A_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
A_{n1} & A_{n2} & \dots & A_{nn} \\
\end{array}
\right],
\]
where $A_{ij}$ are matrices and $A_{ii}$ are square matrices.
Let $V=\bigoplus_{i=1}^n V_i $,
where $V_i$ are finite dimensional vector spaces over
$\mathbb{C}$, and let $A:V \to V$ be decomposed into blocks
$A_{ij}:V_j \to V_i$, $i,j=1,2,\dots,n$, so that for $v=v_1 + \dots
+ v_n$ with $v_i \in V_i$ we have
\begin{equation}
A(v_1 + \dots + v_n)= \sum_i \sum_j A_{ij}v_j.
\end{equation}
We define Gerschgorin disks $G_i(A)$ for the block matrix $A$ by
\begin{eqnarray*}
R_i(A)&=& \sum_{j,j\neq i} \|A_{ij}\| , \\
G_i(A)&=&\{\lambda \in \mathbb{C} \ : \ \mbox{$(A_{ii}-\lambda I_{V_i})^{-1}$ exists and } \|(A_{ii}-\lambda I_{V_i})^{-1}\|^{-1} \leq R_i(A)\}, \quad i=1,\dots,n,
\end{eqnarray*}
where $I_{V_i}$ is an identity map on $V_i$. If $A$ is known from the context, then we will usually drop $A$ and write just $R_i$ and $G_i$.
Similarly, we write $I$ instead of $I_{V_i}$.
In the theorems below we present the generalizations of the Gerschgorin theorem due to Feingold and Varga \cite{Gen}.
\begin{theorem} \cite[Theorem 2]{Gen}
\label{thm:gen-gersch-thm1}
\[
\sigma(A) \subset \bigcup_{i=1}^n G_i.
\]
\end{theorem}
\begin{theorem} \cite[Theorem 4]{Gen}
\label{thm:gersz-eigenval}
Assume that $J \subset \{1,\dots,n\}$ is such that
\[
\left(\bigcup_{j \in J} G_j \right) \cap \left(\bigcup_{j \notin J} G_j
\right)= \emptyset.
\]
Then the number of eigenvalues of $A$ (counting with
multiplicities) contained in $\left(\bigcup\limits_{j \in J} G_j \right)$ is
equal to $\sum\limits_{j \in J} \mbox{dim}\, V_j$.
\end{theorem}
Now we give a theorem about the location of the eigenvectors based on Wilkinson's argument~\cite{Wilk}.
\begin{theorem}\label{thm:gen-gersz-eigenvect-estm}
Assume that for some $j\in\{1,\ldots,n\}$
\[
G_j \cap G_k =\emptyset, \quad \mbox{for $k=1,2,\dots,n$, $k \neq
j$}.
\]
Then if $v=(v_1+\dots+v_n)$ is an eigenvector corresponding to
$\lambda \in G_j$, then $\|v_k\| \leq \|v_j\|$ for $k=1,\dots,n$.
\end{theorem}
\begin{proof}
To show that $\|v_k\| \leq \|v_j\|$ we reason by
contradiction. Assume that for some $i \neq j$ we have $\|v_i\|\geq
\|v_k\|$ for $k=1,\dots,n$ and $\|v_i\| > \|v_j\|$. We will apply
the basic argument from the generalized Gerschgorin theorem
(Theorem~\ref{thm:gen-gersch-thm1}) to prove that $\lambda \in
G_i$. This will lead to a contradiction, because $\lambda \in
G_j$, hence $\lambda \in G_j \cap G_i \neq \emptyset$.
We have
\begin{eqnarray*}
\lambda v_i&=&A_{ii} v_i + \sum_{k \neq i} A_{ik} v_k \\
(\lambda I - A_{ii})v_i &=& \sum_{k \neq i} A_{ik} v_k \\
\|(\lambda I - A_{ii})^{-1}\|^{-1} \|v_i\| &\leq& \sum_{k \neq i} \|A_{ik}\| \|v_k\| \\
\|(\lambda I - A_{ii})^{-1}\|^{-1} &\leq& \sum_{k \neq i} \|A_{ik}\|
\frac{\|v_k\|}{\|v_i\|} \leq \sum_{k \neq i} \|A_{ik}\|
\end{eqnarray*}
hence $\lambda \in G_i$. We obtain a contradiction, which
finishes the proof.
\end{proof}
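A small numerical illustration of Theorem~\ref{thm:gen-gersz-eigenvect-estm} with one-dimensional (scalar) blocks; the matrix below is our own example, chosen so that the disk $\overline{B}(5,0.2)$ is disjoint from the remaining Gerschgorin disks:

```python
import numpy as np

# A with 1x1 (scalar) blocks; the disk B(5, 0.2) is disjoint from
# B(1, 0.3) and B(1.5, 0.3), so by the theorem the eigenvector of the
# eigenvalue inside B(5, 0.2) has its largest component in position 0.
A = np.array([[5.0, 0.1, 0.1],
              [0.1, 1.0, 0.2],
              [0.1, 0.2, 1.5]])
w, V = np.linalg.eig(A)
k = int(np.argmax(w.real))     # the eigenvalue near 5
v = np.abs(V[:, k])
print(w[k].real, v)            # the dominant component of v is v[0]
```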
One of the easiest ways to improve the estimates of the eigenvalues obtained from the Gerschgorin theorem is to scale the basis of the domain.
This approach is well known and can be found
in the original article of Gerschgorin \cite{G}.
Assume that we have a matrix $A\in\mathbb{C}^{n\times n}$ and let
$\x{x}=(x_1,\ldots,x_n)^T\in\mathbb{R}^n$ be such that $x_i>0$ for all
$i\in\{1,\ldots, n\}$. With this vector $\x{x}$ we define the
matrix $X\in\mathbb{R}^{n\times n}$ with the elements of $\x{x}$ on the
leading diagonal and $0$ elsewhere. Note that the matrix $X$ is
nonsingular and the matrix $X^{-1}AX$ is similar to $A$; therefore
$\sigma(X^{-1}AX)=\sigma(A)$. If $A=[a_{ij}]_{1\leq i,j\leq n}$,
then
\[
X^{-1}AX=\left[\frac{a_{ij}x_j}{x_i}\right]_{1\leq i,j\leq
n}\]
and
\[
G_i=\overline{B}\Big(a_{ii},\sum\limits_{j \neq i}
\frac{|a_{ij}|x_j}{x_i}\Big) \quad\mbox{ for $i=1,\dots,n$}.
\]
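The following sketch (with an example matrix of our own) illustrates how such a scaling can separate disks that touch in the unscaled picture:

```python
import numpy as np

def gerschgorin_disks(M):
    """(center, radius) of each Gerschgorin disk of a square matrix M."""
    n = M.shape[0]
    return [(M[i, i], sum(abs(M[i, j]) for j in range(n) if j != i))
            for i in range(n)]

A = np.array([[1.0, 2.0],
              [1.0, 4.0]])
X = np.diag([1.5, 1.0])          # scaling vector x = (1.5, 1)
B = np.linalg.inv(X) @ A @ X     # similar to A, so same spectrum

print(gerschgorin_disks(A))      # B(1, 2) and B(4, 1): they touch at 3
print(gerschgorin_disks(B))      # B(1, 4/3) and B(4, 1.5): disjoint
print(sorted(np.linalg.eigvals(A)))   # (5 -+ sqrt(17))/2, one per disk
```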
\section{Introduction}
Eigenvalues and eigenvectors are basic tools used in mathematics and computer science (linear algebra, differential equations, statistics, etc.). Currently, there are many numerical methods which allow one to solve (non-rigorously) the eigenproblem \cite{golubmatrix, watkinsfundamentals, watkinsmatrix}.
However, in fields such as computer assisted proofs for PDEs, there is an increasing demand for methods that provide rigorous bounds on the eigenvalues (see \cite{wilczak}).
In the papers \cite{saad, Stewart_2} one can find perturbation theory for eigenvalues and invariant subspaces of matrices, but the techniques presented there are not very useful for these problems.
In this paper we present tools to find rigorous bounds for eigenvalues (all or some of them) and their corresponding eigenspaces.
Assume that we have a square matrix $A$ whose entries (or blocks) on the diagonal ``dominate'' the off-diagonal entries (blocks) and we want to obtain efficiently computable bounds (a formula) for the spectrum and eigenspaces of $A$.
Regarding the bounds on the spectrum, almost all of the known methods are given by the Gerschgorin theorem and its modifications, for example the Brauer ovals \cite{Br,Va} or the generalization of the Gerschgorin theorem to the case of multi-dimensional blocks by Feingold and Varga \cite{Gen}.
Estimates of isolated eigenvectors derived from Gerschgorin-type results are due to Stewart \cite{Stewart} and Wilkinson \cite{Wilk}. However, Wilkinson's result does not give the whole eigenspace
in the case of a non-simple eigenvalue or a cluster of close eigenvalues.
In \cite{Ya}, T. Yamamoto showed how to find rigorous error bounds for computed single eigenvalues and eigenvectors of real matrices on the basis of an existence theorem for solutions of nonlinear systems, using Newton's iteration.
However, Yamamoto's approach gives no theoretical estimates of the bounds for the computed eigenvalues and eigenvectors.
In this article we propose a new method for estimating eigenvalues and eigenspaces.
Our approach is based on the ideas coming from the hyperbolic dynamics \cite{Nh}
and can be illustrated by the following simple two-dimensional example.
\begin{example}
Consider the matrix $A$ defined by the formula
\(
A=
\begin{bmatrix}
0 & 2 \\
1 & 4
\end{bmatrix}.
\)
Note that if we take the gray cone (see Figure \ref{Ob:1}) and start to iterate points of this cone by the matrix $A$, then the image of the cone shrinks towards the eigenspace corresponding to one of the eigenvectors of $A$.
\begin{figure}[H]
\centering
\begin{tikzpicture}[
scale=0.25,
axis/.style={thin, ->, >=stealth'},
important line/.style={thick},
dashed line/.style={dashed, thin},
every node/.style={color=black}
]
\begin{scope}[]
\path [fill=gray] (-5,5) -- (0,0) -- (5,5);
\path [fill=gray] (-5,-5) -- (0,0) -- (5,-5);
\path (0,-6) node[below] {\scriptsize $\{x=(x_1,x_2)\ :\ |x_1|\leq|x_2|\}$};
\draw[axis] (-5,0) -- (5.3,0) node(xline)[right]{};
\draw[axis] (0,-5) -- (0,5.2) node(yline)[above] {};
\node [right](ref1) at (5,5){};
\end{scope}
\begin{scope}[xshift=15cm]
\path [fill=gray] (-3.34,-5) -- (0,0) -- (-2,-5);
\path [fill=gray] (3.34,5) -- (0,0) -- (2,5);
\draw[axis] (-5,0) -- (5.3,0) node(xline)[right]{};
\draw[axis] (0,-5) -- (0,5.2) node(yline)[above] {};
\node [left](ref2) at (-4,5){};
\node [left](ref3) at (6,5){};
\end{scope}
\begin{scope}[xshift=30cm]
\path [fill=black] (-2.3,-5) -- (0,0) -- (-2.2,-5);
\path [fill=black] (2.2,5) -- (0,0) -- (2.3,5);
\draw[thin, gray] (-2.247,-5) -- (0,0) -- (2.247, 5);
\draw[axis] (-5,0) -- (5.3,0) node(xline)[right]{};
\draw[axis] (0,-5) -- (0,5.2) node(yline)[above] {};
\node [left](ref4) at (-4,5){};
\node [left](ref5) at (6,5){};
\end{scope}
\begin{scope}[xshift=45cm]
\draw[thin, gray] (-2.247,-5) -- (0,0) -- (2.247, 5);
\draw[axis] (-5,0) -- (5.3,0) node(xline)[right]{};
\draw[axis] (0,-5) -- (0,5.2) node(yline)[above] {};
\node [left](ref6) at (-4,5){};
\end{scope}
\draw[red, dashed, very thick,->] (ref1) .. node[above] {$\pmb{A}$} controls (7,6) and (9,6) .. (ref2);
\draw[red, dashed, very thick,->] (ref3) .. node[above] {$\pmb{A}$} controls (22,6) and (24,6) .. (ref4);
\draw[red, dashed, very thick,->] (ref5) .. node[above] {$\pmb{A}$} controls (37,6) and (39,6) .. (ref6);
\end{tikzpicture}
\caption{Transformation of the cone by the matrix $A$. \label{Ob:1}}
\end{figure}
\noindent Iterating the cone $\{x=(x_1,x_2)\ :\ |x_1|\geq|x_2|\}$ backward (i.e.\ by $A^{-1}$) we obtain the second eigenspace.
\end{example}
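This contraction is easy to reproduce numerically: for $A$ above, the vector $(2,\,2+\sqrt{6})$ is an eigenvector for the dominant eigenvalue $2+\sqrt{6}$ (from $\lambda^2-4\lambda-2=0$), and iterating $A$ on any vector of the gray cone drives its direction towards this eigendirection. A brief sketch:

```python
# Iterate A = [[0, 2], [1, 4]] on a vector from the gray cone
# {|x1| <= |x2|}; the direction converges to the eigenvector
# (2, 2 + sqrt(6)) of the dominant eigenvalue 2 + sqrt(6).
def step(v):
    x1, x2 = v
    w = (2.0 * x2, x1 + 4.0 * x2)          # w = A v
    n = max(abs(w[0]), abs(w[1]))          # normalize in the max-norm
    return (w[0] / n, w[1] / n)

v = (0.3, 1.0)                             # |x1| <= |x2|: inside the cone
for _ in range(60):
    v = step(v)

slope = v[1] / v[0]
print(slope, (2 + 6 ** 0.5) / 2)           # both about 2.2247
```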
This example illustrates that we can estimate an eigenspace by using an invariant cone. We started to study this problem, and it turned out that using forward and backward invariant cones we were able to give good bounds not only for eigenvectors but also for eigenvalues. In addition, by means of this tool we can locate the eigenspaces and eigenvalues of products of many matrices.
To explain our main results we introduce some basic notations.
Let
$\|\x{x}\|:=\max\limits_j |x_j|$. For $\x{x}=(x_1,\ldots, x_k,\ldots,
x_n)\in\mathbb{R}^n$ we set
\[
\|\x{x}\|_{\leq k}=\max\limits_{i\leq
k}|x_i| \quad\mbox{ and }\quad \|\x{x}\|_{>k}=\max\limits_{i> k}|x_i|.
\]
For a linear map
$A\colon\mathbb{R}^k\times\mathbb{R}^{n-k}\to\mathbb{R}^k\times\mathbb{R}^{n-k}$ we define
the extension and contraction constants:
\begin{align*}
\co{A} & = \;\inf \{ R \in \mathbb{R}_+ \, | \, \|A\x{x}\|\leqslant R\cdot\|\x{x}\| \text{ for all } \x{x}\in\mathbb{R}^n: \|A\x{x}\|_{\leq k}\geq\|A\x{x}\|_{>k}\}, \\[0.3em]
\ex{A} & = \sup \{ R \in \mathbb{R}_+ \, | \, \|A\x{x}\| \geqslant
R\cdot\|\x{x}\| \text{ for all } \x{x}\in\mathbb{R}^n: \|\x{x}\|_{\leq k}\leq
\|\x{x}\|_{>k}\}.
\end{align*}
Observe that these constants can be obtained as optimal values of standard constrained optimization problems.
We say that $A$ is {\em dominating} if $\co{A}\, <\ex{A}$.
It turns out that composition of dominating maps is dominating, see Proposition~\ref{proposition:1}.
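For intuition, both constants can be approximated by brute-force sampling of the unit sphere of the max-norm. The sketch below (our own illustration) does this for the $2\times 2$ matrix with diagonal entries $0.5$, $1.5$ and off-diagonal entries $0.01$, with $k=1$ (the same type of matrix appears in Example~\ref{example:iteration}); the sampled values are consistent with $\co{A}\leq 0.51$ and $\ex{A}\geq 1.49$:

```python
# Approximate the contraction and expansion constants of
# A = [[0.5, 0.01], [0.01, 1.5]] (k = 1) by sampling the unit sphere
# of the max-norm in R^2.
A = ((0.5, 0.01), (0.01, 1.5))

def apply(A, x):
    return (A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1])

N = 100001
contraction, expansion = 0.0, float("inf")
for i in range(N):
    t = -1.0 + 2.0 * i / (N - 1)
    for x in ((1.0, t), (t, 1.0)):          # the unit sphere of the max-norm
        y = apply(A, x)
        ratio = max(abs(y[0]), abs(y[1]))   # = ||Ax|| / ||x||, since ||x|| = 1
        if abs(y[0]) >= abs(y[1]):          # Ax lands in the first cone
            contraction = max(contraction, ratio)
        if abs(x[0]) <= abs(x[1]):          # x lies in the second cone
            expansion = min(expansion, ratio)

print(contraction, expansion)   # about 0.5033 and 1.49, so A is dominating
```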
Now we are ready to present the two main results of our paper.
\medskip
\noindent {\bf Main Result I [Theorem \ref{thm:gerssh-dominating}].}{\em \/
Let $A\in\mathbb{C}^{n\times n}$ be a matrix with an isolated Gerschgorin disk. Then $A$ is dominating.
}
\medskip
\noindent Together with the following result, this shows that our method is a generalization of the Gerschgorin theorem in the case of an isolated Gerschgorin disk of multiplicity one.
\medskip
\noindent {\bf Main Result II [simplified version of Theorem
\ref{thm:cone-main-eigen-location}].}{\em \/ Let
$A\colon\mathbb{R}^k\times\mathbb{R}^{n-k}\to\mathbb{R}^k\times\mathbb{R}^{n-k}$ be dominating.
Then there exists a unique direct sum decomposition $F_1\oplus
F_2=\mathbb{R}^n$ into $A$-invariant subspaces $F_1$, $F_2$ such that
\[
\sigma(A|_{F_1})\subset\overline{B}(0,\co{A}),\quad
\sigma(A|_{F_2})\subset\mathbb{C}\setminus B(0,\ex{A}).
\]
Moreover, we have:
\begin{enumerate}
\item $\mathrm{dim} F_1=k$,\; $\mathrm{dim} F_2=n-k$,
\item $F_1\subset\{\x{x}\in\mathbb{R}^n : \|\x{x}\|_{\leq k}\geq \|\x{x}\|_{>k}\}$ \quad and \quad
$F_2\subset\{\x{x}\in\mathbb{R}^n : \|\x{x}\|_{\leq k}\leq
\|\x{x}\|_{>k}\}$,
\item $\|A|_{F_1}\|\leq \co{A}\quad \mbox{and}\quad \|(A|_{F_2})^{-1}\|\leq \ex{A}^{-1}$.
\end{enumerate}
}
\noindent In comparison with the Gerschgorin-type theorems, our method has the following advantages:
\begin{itemize}
\item it locates the spectrum and eigenspaces of a matrix when multiple eigenvalues or clusters of very close eigenvalues are present,
\item it gives better estimates for isolated eigenvalues,
\item it allows one to deal with compositions of matrices.
\end{itemize}
\smallskip
The content of this paper can be briefly described as follows: in Section \ref{sec:cones} we introduce the notion of cones and build the concept of a dominating matrix. In Section \ref{sec:main-res} we establish the main result, Theorem \ref{thm:cone-main-eigen-location}, which allows us to rigorously estimate eigenspaces and eigenvalues.
In Section \ref{sec:eigen-estm} we develop computable estimates for the eigenvalues and eigenspaces based on the results from Section \ref{sec:main-res}. In Section~\ref{sec:comparison} we compare the proposed method with the Gerschgorin theorem in the case of an isolated Gerschgorin disk. We show that all matrices which have an isolated Gerschgorin disk are dominating, and if the radius of this disk is nonzero, we obtain sharper bounds. This means that our approach can be used whenever a Gerschgorin disk is isolated.
We also show examples of matrices for which we cannot use the Gerschgorin theorem since the Gerschgorin disks cannot be separated, but our method still works, see Example~\ref{ex:mA-better-G}.
\section{Localization of eigenspaces based on cones and dominating maps}
\label{sec:main-res} In this section we show that the eigenspaces of the
$r$-dominating operator $A$ lie in the corresponding $r$-cones.
Moreover, we can estimate $\sigma(A)$ with
the help of $\co[r]{A}$, $\ex[r]{A}$.
\begin{lemma}\label{lem:1}
Let $A\in{\cal D}_r(E)$. Then
\begin{equation}\label{eq:9}
\lambda\in\sigma(A) \implies |\lambda|\in
[0,\!\!\co[r]{A}]\cup[\ex[r]{A},\infty).
\end{equation}
Moreover $[0,\!\!\co[r]{A}]\cap [\ex[r]{A},\infty)=\emptyset$.
\end{lemma}
\begin{proof}
Since $A\in{\cal D}_r(E)$ we get $[0,\!\!\co[r]{A}]\cap
[\ex[r]{A},\infty)=\emptyset$.
Now we show implication \eqref{eq:9}. Let $\lambda$ be an eigenvalue of
$A$ and let $\x{x}\in E$ be a corresponding eigenvector. By
\eqref{eq:1} we know that $\x{x}\in\co[r]{E}\cup\ex[r]{E}$. We
consider two cases. First suppose that $\x{x}\in\co[r]{E}$. Since
$\x{x}$ is an eigenvector, $A\x{x} = \lambda\x{x}$, and thus
$A\x{x}\in\co[r]{E}$. By \eqref{eq:2} we get
\[
|\lambda|\leq\co[r]{A}.
\]
Now suppose that $\x{x}\in\ex[r]{E}$. By \eqref{eq:3} we get
\[
|\lambda|\geq\ex[r]{A},
\]
which completes the proof.
\end{proof}
Let $E$ be a finite dimensional vector space over the field $\mathbb{C}$
and let an operator $A\colon E\to E$ be given. One can easily deduce from the Jordan theorem (see also \cite[Appendix to Chapter
4]{irwin} for the general case) that if $\sigma(A) =
\sigma_1\cup\sigma_2$ then there is a unique direct sum
decomposition $E=E_{\sigma_1}\oplus E_{\sigma_2}$ such that
$A(E_{\sigma_1})\subset E_{\sigma_1}$, $A(E_{\sigma_2})\subset
E_{\sigma_2}$ and $\sigma(A|_{E_{\sigma_1}})=\sigma_1$,
$\sigma(A|_{E_{\sigma_2}})=\sigma_2$. For $0<c<d$ we define
\[
E_{\leq c}:=E_{\{\lambda\; :\; |\lambda|\leq c\}}\quad\text{and}\quad
E_{\geq d}:=E_{\{\lambda\; :\; |\lambda|\geq d\}}.
\]
\begin{theorem}\label{thm:r-dom-spectrum-gap}
Let $E$ be a finite dimensional cone-space and let $A\in{\cal D}_r(E)$.
Then there is a direct sum decomposition
$E=E_{\leq\co[r]{A}}\oplus E_{\geq\ex[r]{A}}$ which satisfies
\[
E_{\leq\co[r]{A}} \subset \co[r]{E},\; E_{\geq\ex[r]{A}} \subset
\ex[r]{E}.
\]
\end{theorem}
\begin{proof}
From Lemma \ref{lem:1} and the comments preceding our theorem we
obtain a decomposition of $E$ into $A$-invariant subspaces
\[
E=E_{\leq\co[r]{A}}\oplus E_{\geq\ex[r]{A}},
\]
such that
\[
\sigma(A|_{E_{\leq\co[r]{A}}})=\{\lambda :
|\lambda|\in[0,\!\!\co[r]{A}]\}\quad\text{and}\quad
\sigma(A|_{E_{\geq\ex[r]{A}}})=\{\lambda :
|\lambda|\in[\ex[r]{A},\infty)\}.
\]
Now we show $E_{\leq\co[r]{A}} \subset \co[r]{E}$. Consider an
arbitrary $\x{x}\in E_{\leq\co[r]{A}}$. The case when $\x{x}=0$ is
obvious. Assume that $\x{x}\neq 0$. Without loss of generality we can assume that $\|\x{x}\| = 1$. For an
indirect proof, assume that $\x{x}\notin\co[r]{E}$. Then
by (\ref{eq:1}) we get $\x{x}\in\ex[r]{E}$. Let $\varepsilon>0$ be arbitrary. From
the fact that $\x{x}\in E_{\leq\co[r]{A}}$, we know that
\begin{equation}\label{radius}
\limsup\limits_{m\rightarrow +\infty}\sqrt[m]{\norm{A|_{E_{\leq\co[r]{A}}}^m}}=\sup\{|\lambda|\;:\;\lambda\in\sigma(A|_{E_{\leq\co[r]{A}}})\}\leq\co[r]{A}.
\end{equation}
Note that inequality \eqref{radius} holds for all norms.
In particular, for our $\x{x}\in E_{\leq\co[r]{A}}$ we obtain
\[
\limsup\limits_{m\rightarrow +\infty}\sqrt[m]{\norm{A^m
\x{x}}}\leq\co[r]{A},
\]
and thus there exists an $M\in\mathbb{N}$ such that for all $m\in\mathbb{N}$
\[
m\geq M \Rightarrow \sqrt[m]{\norm{A^m \x{x}}}\leq \co[r]{A}+\varepsilon.
\]
Since $\x{x}\in\ex[r]{E}$, from Theorem \ref{tw:1} we obtain
\[
\x{x}\in\ex[r]{E} \Rightarrow A\x{x}\in \ex[r]{E} \Rightarrow \cdots
\Rightarrow A^m\x{x}\in\ex[r]{E}.
\]
Using \eqref{eq:3} and Remark \ref{rem:rate} we get
\begin{align*}
\norm{A\x{x}}&\geq \ex[r]{A} \norm{\x{x}}, \\
\norm{A^2\x{x}} = \norm{A(A\x{x})}&\geq \ex[r]{A} \norm{A\x{x}}\geq \ex[r]{A}^2 \norm{\x{x}}, \\
&\;\, \vdots\\
\norm{A^m\x{x}}&\geq \ex[r]{A}^m \norm{\x{x}}.
\end{align*}
Finally we have
\[
\ex[r]{A} = \sqrt[m]{\ex[r]{A}^m} \leq \sqrt[m]{\norm{A^m \x{x}}} \leq
\co[r]{A}+\varepsilon.
\]
Since $\varepsilon$ was arbitrary, we get a contradiction with the fact
that $A$ is $r$-dominating.
Analogously, to prove inclusion $E_{\geq\ex[r]{A}}\subset\ex[r]{E}$,
assume that $\x{x}\in E_{\geq\ex[r]{A}}$ and $\x{x}\notin\ex[r]{E}$.
Then $\x{x}\in\co[r]{E}$. Since
$\sigma(A|_{E_{\geq\ex[r]{A}}})=\sigma_{{\geq\ex[r]{A}}}:=\{\lambda\; :\;
|\lambda|\geq\ex[r]{A}\}$ and $0\notin\sigma_{{\geq\ex[r]{A}}}$ we know that
$A|_{E_{\geq\ex[r]{A}}}\colon E_{\geq\ex[r]{A}}\to E_{\geq\ex[r]{A}}$ is
invertible. Let $\varepsilon>0$ be arbitrary. Using the fact that $\x{x}\in
E_{\geq\ex[r]{A}}$, by the dual version of \eqref{radius}, we know that
\[
\limsup\limits_{m\rightarrow +\infty}\sqrt[m]{\norm{A|_{ E_{\geq\ex[r]{A}}}^{-m} \x{x}}}\leq\ex[r]{A}^{-1},
\]
and thus there exists an $M\in\mathbb{N}$ such that for all $m\in\mathbb{N}$
\begin{equation}\label{eq:10}
m\geq M \Rightarrow\sqrt[m]{\norm{A|_{ E_{\geq\ex[r]{A}}}^{-m}
\x{x}}}\leq \ex[r]{A}^{-1}+\varepsilon.
\end{equation}
From Observation \ref{ob:1} and Theorem \ref{tw:1} we get
\[
\x{x}\in\co[r]{ E_{\geq\ex[r]{A}}} \Rightarrow A|_{
E_{\geq\ex[r]{A}}}^{-1}\x{x}\in \co[r]{ E_{\geq\ex[r]{A}}} \Rightarrow
\cdots \Rightarrow A|_{ E_{\geq\ex[r]{A}}}^{-m}\x{x}\in\co[r]{
E_{\geq\ex[r]{A}}},
\]
and from \eqref{eq:2} and Remark \ref{rem:rate} we have
\begin{align*}
\norm{\x{x}}&\leq \co[r]{A|_{ E_{\geq\ex[r]{A}}}} \norm{A|_{ E_{\geq\ex[r]{A}}}^{-1}\x{x}}, \\
\norm{A|_{ E_{\geq\ex[r]{A}}}^{-1}\x{x}} &\leq \co[r]{A|_{ E_{\geq\ex[r]{A}}}} \norm{A|_{ E_{\geq\ex[r]{A}}}^{-2}\x{x}}, \\
&\;\, \vdots\\
\norm{A|_{ E_{\geq\ex[r]{A}}}^{-m+1}\x{x}}&\leq \co[r]{A|_{
E_{\geq\ex[r]{A}}}} \norm{A|_{ E_{\geq\ex[r]{A}}}^{-m}\x{x}}.
\end{align*}
Hence
\begin{equation}\label{eq:11}
\norm{\x{x}}\leq (\co[r]{A|_{ E_{\geq\ex[r]{A}}}})^m\norm{A|_{
E_{\geq\ex[r]{A}}}^{-m}\x{x}}.
\end{equation}
Finally, from Observation \ref{ob:1} and \eqref{eq:10},
\eqref{eq:11} we obtain
\[
\co[r]{A}\geq\co[r]{A|_{ E_{\geq\ex[r]{A}}}} = \sqrt[m]{(\co[r]{A|_{
E_{\geq\ex[r]{A}}}})^{m}}\geq\sqrt[m]{\frac{1}{\norm{A|_{
E_{\geq\ex[r]{A}}}^{-m}
\x{x}}}}\geq\frac{1}{\ex[r]{A}^{-1}+\varepsilon}=\ex[r]{A}\cdot
\frac{1}{1+\varepsilon\cdot\ex[r]{A}},
\]
which gives a contradiction with the fact that $A$ is
$r$-dominating.
\end{proof}
Now we are ready to state the main result on the eigenspaces and eigenvalue location using
our method of cones and dominating maps.
\begin{theorem} \label{thm:cone-main-eigen-location}
Let $E=E_1\times E_2$ be a finite dimensional cone-space and let
$A\in{\cal D}_r(E)$. Then there exists a unique direct sum decomposition
$E=F_1\oplus F_2$ of $A$-invariant subspaces $F_1$, $F_2$ such
that
\[
\sigma(A|_{F_1})\subset\overline{B}(0,\co[r]{A}),\quad
\sigma(A|_{F_2})\subset\mathbb{C}\setminus B(0,\ex[r]{A}).
\]
Moreover, we have:
\begin{enumerate}
\item $\mathrm{dim} F_1=\mathrm{dim} E_1$,\; $\mathrm{dim} F_2=\mathrm{dim} E_2$,
\item $F_1\subset\co[r]{E}$\; and\; $F_2\subset\ex[r]{E}$,
\item $\|A|_{F_1}\|\leq \co[r]{A}\quad \mbox{and}\quad \|(A|_{F_2})^{-1}\|\leq \ex[r]{A}^{-1}$. \label{lab:1}
\end{enumerate}
\end{theorem}
\begin{proof}
From Theorem \ref{thm:r-dom-spectrum-gap} we know that there exists a unique direct sum
decomposition $E=E_{\leq\co[r]{A}}\oplus E_{\geq\ex[r]{A}}$ which
satisfies
\[
E_{\leq\co[r]{A}} \subset \co[r]{E},\; E_{\geq\ex[r]{A}} \subset
\ex[r]{E}.
\]
We take $F_1=E_{\leq\co[r]{A}}$ and $F_2=E_{\geq\ex[r]{A}}$. By
Proposition \ref{prop:1} we obtain $\mathrm{dim} F_1=\mathrm{dim} E_1$ and $\mathrm{dim}
F_2=\mathrm{dim} E_2$.
Now we show that
$\sigma(A|_{F_1})\subset\overline{B}(0,\co[r]{A})$. Let $\x{x}\in
F_1$ be an eigenvector of $A$ and let $\lambda$ be the eigenvalue of
$A$ corresponding to $\x{x}$. Since $\x{x}$ is an eigenvector
($A\x{x} = \lambda\x{x}$) and $F_1 \subset\co[r]{E}$ therefore
$A\x{x}\in\co[r]{E}$. By \eqref{eq:2} we obtain that
$|\lambda|\leq\co[r]{A}$, so we get
$\sigma(A|_{F_1})\subset\overline{B}(0,\co[r]{A})$.
Now suppose that $\x{x}\in F_2$ is an eigenvector of $A$ corresponding to an eigenvalue $\lambda$. Since $F_2\subset\ex[r]{E}$,
by \eqref{eq:3} we get $|\lambda|\geq\ex[r]{A}$. Hence $
\sigma(A|_{F_2})\subset\mathbb{C}\setminus B(0,\ex[r]{A})$.
The inequalities in item \ref{lab:1} follow from \eqref{eq:2}
and \eqref{eq:3}.
\end{proof}
As a direct consequence of the above theorem we obtain the following corollary.
\begin{corollary}\label{cor:2}
Let $r\in(0,\infty)$ and $n\in\mathbb{N}$. Assume that an operator
$A\in{\cal D}_r(\mathbb{C}\times\mathbb{C}^{n-1})$ is given. Then there
exists a unique eigenvalue $\lambda$ of $A$ such that $|\lambda|\leq\co[r]{A}$,
and the eigenspace corresponding to $\lambda$ is one-dimensional. The
unique (after rescaling) eigenvector $\x{x}$ corresponding to the
eigenvalue $\lambda$ satisfies
\[
\x{x}\in(1,0,\ldots,0)^T+\{0\}\times
\overline{B}_{\mathbb{C}}(0,1/r)^{n-1}\subset(1,0,\ldots,0)^T+\frac{1}{r}\cdot(0,\mathbb{I},\ldots,\mathbb{I})^T
+ \frac{1}{r}\cdot(0,\mathbb{I},\ldots,\mathbb{I})^Ti.
\]
\end{corollary}
\begin{proof}
It is a direct consequence of Theorem \ref{thm:cone-main-eigen-location} and Definition
\ref{def:1}.
\end{proof}
Since our approach based on cones and dominating maps has its origin in the
theory of hyperbolic dynamical systems, our method should be well suited to locating the eigenspaces and eigenvalues
of products of many matrices. In the example below we contrast our approach with a naive approach, which tries
to diagonalize a matrix obtained as a product of many matrices. The essential feature of this example is that the matrices we multiply
are known only with some accuracy.
\begin{example}\label{example:iteration}
Let the matrices $A_i\in{\cal L}(\mathbb{R}\times\mathbb{R})$, $i\in\{1,\ldots,15\}$ be
such that
\[
A_i\in
\begin{bmatrix}
\interval{0, 0.5} & \varepsilon\mathbb{I} \\
\varepsilon\mathbb{I} & \interval{1.5,2}
\end{bmatrix},
\]
where $\varepsilon=0.01$ and $\mathbb{I}=\interval{-1,1}$. Consider the matrix
$B:=A_{15}\cdot\ldots\cdot A_1$.
From Theorem \ref{thm:formulas-contr-exp} we obtain that $A_i\in{\cal D}(\mathbb{R}\times\mathbb{R})$ and
\[
\co{A_{i}}\leq 0.5+\varepsilon, \ \ex{A_i}\geq 1.5-\varepsilon.
\]
From Theorem \ref{tw:1} and Proposition \ref{proposition:1} we
conclude that $B\in{\cal D}(\mathbb{R}\times\mathbb{R})$ and
\[
\co{B}\leq \co{A_{15}}\cdot\ldots\cdot\co{A_1}, \quad
\ex{B}\geq\ex{A_{15}}\cdot\ldots\cdot\ex{A_1}.
\]
From Theorem \ref{thm:cone-main-eigen-location} we obtain that the eigenvalues $\lambda_1$ and
$\lambda_2$ of $B$ satisfy
\[
|\lambda_1| \leq (0.5+\varepsilon)^{15} \ \text{ and }\ |\lambda_2| \geq
(1.5-\varepsilon)^{15}.
\]
Now, a naive method would first compute $B$. Using interval arithmetic we obtained
\[
B\in \left[
\begin{array}{cc}
\interval{-1.45687,1.45693} & \interval{-218.543,218.544} \\
\interval{-218.543,218.544} & \interval{433.611,32782.94}
\end{array}
\right].
\]
However, there exists a matrix $B_1$ within the bounds given above
which has both eigenvalues larger than $1$. For example, let us consider
\[
B_1= \left[
\begin{array}{cc}
1 & 100 \\
-100 & 521 \\
\end{array}
\right].
\]
This matrix has the eigenvalues $\lambda_1=21$ and $\lambda_2=501$.
Consequently, no method applied to the interval enclosure of the
product matrix can give us the expected estimates
$|\lambda_1|<1$ and $|\lambda_2|>1$.
\end{example}
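The claim about $B_1$ can be verified by direct computation: its characteristic polynomial is $\lambda^2-522\lambda+10521$ with discriminant $522^2-4\cdot 10521=480^2$. A quick check:

```python
# Verify that B_1 lies inside the interval enclosure of B and that
# both of its eigenvalues are larger than 1.
B1 = ((1.0, 100.0), (-100.0, 521.0))

enclosure = (((-1.45687, 1.45693), (-218.543, 218.544)),
             ((-218.543, 218.544), (433.611, 32782.94)))
assert all(lo <= B1[i][j] <= hi
           for i in range(2) for j in range(2)
           for lo, hi in [enclosure[i][j]])

tr = B1[0][0] + B1[1][1]                         # 522
det = B1[0][0] * B1[1][1] - B1[0][1] * B1[1][0]  # 10521
d = (tr * tr - 4 * det) ** 0.5                   # sqrt(230400) = 480
print((tr - d) / 2, (tr + d) / 2)                # eigenvalues 21 and 501
```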
\section{Introduction and main result}
\label{ref-1-0}
Below $k$ is an algebraically closed field of characteristic zero and $A = k[x,y,z]$. We will consider the Hilbert scheme $\Hilb_{n}(\PP^{2})$ parametrizing zero-dimensional subschemes of length $n$ in $\PP^2$. It is well known that this is a smooth connected projective variety of dimension $2n$.
Associated to $X \in \Hilb_{n}(\PP^{2})$ there is an ideal $\Iscr_X \subset \Oscr_{\PP^2}$ and a graded ideal $I_X=\oplus_n H^0(\PP^2,\Iscr_X(n))\subset A$. The Hilbert function $h_{X}$ of $X$ is the Hilbert function of the graded ring $A(X)=A/I_X$. Classically $h_X(m)$ is the number of conditions for a curve of degree $m$ to contain $X$.
Clearly $h_X(m)=n$ for $m\gg 0$.
It seems Castelnuovo was the first to recognize the utility of the difference function (see \cite{Davis})
\[
s_X(m) = h_{X}(m) - h_{X}(m-1).
\]
Thus $s_X(m)=0$ for $m\gg 0$. Knowing $s_X$ we can reconstruct $h_X$.
It is known \cite{Davis, GMR, GP} that a function $h$ is of the form $h_X$ for $X\in \Hilb_n(\PP^2)$ if and only if $h(m)=0$ for $m<0$ and $h(m)-h(m-1)$ is a so-called \emph{Castelnuovo function of weight $n$}.
A Castelnuovo function \cite{Davis} by definition has the form
\begin{equation}
\label{ref-1.1-1}
s(0)=1,s(1)=2,\ldots,s(\sigma-1)=\sigma \mbox{ and } s(\sigma-1)\ge
s(\sigma)\ge s(\sigma+1)\ge \cdots \ge 0.
\end{equation}
for some integer $\sigma \geq 0$, and the weight of $s$ is the sum of its values.
It is convenient to visualize $s$ using the graph of the staircase function
\[
F_{s}: \RR \r \NN: x \mapsto s({\lfloor x \rfloor})
\]
and to divide the area under this graph in unit squares. We will call the result a \emph{Castelnuovo diagram} which, if no confusion arises, we also refer to as $s_{X}$.
In the sequel we identify a function $f:\ZZ\r \CC$ with its generating function $f(t)=\sum_n f(n) t^n$. We refer to $f(t)$ as a polynomial or a series depending on whether the support of $f$ is finite or not.
\begin{example}
$s(t) = 1 + 2t + 3t^{2} + 4t^{3} + 5t^{4} + 5t^{5} + 3t^{6} + 2t^{7} + t^{8} + t^{9} + t^{10}$ is a Castelnuovo polynomial of weight $28$. The corresponding diagram is
\vspace{0.5cm}
\unitlength 1mm
\begin{picture}(90.00,30.00)(0,0)
\linethickness{0.15mm}
\put(35.00,5.00){\line(1,0){55.00}}
\linethickness{0.15mm}
\put(35.00,10.00){\line(1,0){55.00}}
\linethickness{0.15mm}
\put(40.00,15.00){\line(1,0){35.00}}
\linethickness{0.15mm}
\put(35.00,5.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(40.00,5.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(45.00,5.00){\line(0,1){15.00}}
\linethickness{0.15mm}
\put(50.00,5.00){\line(0,1){20.00}}
\linethickness{0.15mm}
\put(55.00,5.00){\line(0,1){25.00}}
\linethickness{0.15mm}
\put(90.00,5.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(85.00,5.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(80.00,5.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(75.00,5.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(70.00,5.00){\line(0,1){15.00}}
\linethickness{0.15mm}
\put(65.00,5.00){\line(0,1){20.00}}
\linethickness{0.15mm}
\put(45.00,20.00){\line(1,0){25.00}}
\linethickness{0.15mm}
\put(50.00,25.00){\line(1,0){15.00}}
\linethickness{0.15mm}
\put(55.00,30.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(60.00,5.00){\line(0,1){25.00}}
\linethickness{0.15mm}
\put(65.00,25.00){\line(0,1){5.00}}
\end{picture}
\end{example}
We refer to a series $\varphi$ for which $\varphi = h_{X}$ for some $X \in \Hilb_{n}(\PP^{2})$ as a {\em Hilbert function of degree} $n$. The set of all Hilbert functions of degree $n$ (or equivalently the set of all Castelnuovo diagrams of weight $n$) will be denoted by $\Gamma_{n}$.
For $\varphi, \psi \in \Gamma_{n}$ we have that $\psi(t) - \varphi(t)$ is a polynomial, and we write $\varphi \leq \psi$ if its coefficients are non-negative. In this way $\leq$ becomes a partial ordering on $\Gamma_{n}$ and we call the associated directed graph the {\em Hilbert graph}, also denoted by $\Gamma_{n}$.
If $s,t\in \Gamma_n$ are Castelnuovo diagrams such that $s\le t$ then it is
easy to see that $t$ is obtained from $s$ by making a number of squares ``jump to the
left'' while, at each step, preserving the Castelnuovo property.
\begin{example}
\label{ref-1.2-2} There are two Castelnuovo diagrams of weight 3.
\vspace{0.5cm}
\unitlength 1mm
\begin{picture}(80.00,17.50)(0,0)
\linethickness{0.15mm}
\put(35.00,5.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(35.00,10.00){\line(1,0){15.00}}
\linethickness{0.15mm}
\put(50.00,5.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(35.00,5.00){\line(1,0){15.00}}
\linethickness{0.15mm}
\put(40.00,5.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(45.00,5.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(70.00,5.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(70.00,5.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(80.00,5.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(75.00,15.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(75.00,5.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(70.00,10.00){\line(1,0){10.00}}
\put(60.00,7.50){\makebox(0,0)[cc]{$\leq$}}
\put(30.00,17.50){\makebox(0,0)[cc]{}}
\end{picture}
These distinguish whether three points are collinear or not. The corresponding Hilbert functions are $1,2,3,3,3,3,\ldots \text{ and } 1,3,3,3,3,3,\ldots $.
\end{example}
\begin{remark}
\label{ref-1.3-3} The number of Castelnuovo diagrams with weight $n$ is equal to the number of partitions of $n$ with distinct parts (or equivalently the number of partitions of $n$ with odd parts) \cite{DV2}. In loc. cit. there is a table of Castelnuovo diagrams of weight up to $6$ as well as some associated data. The Hilbert graph is rather trivial for low values of $n$. The case $n=17$ is more typical (see Appendix \ref{ref-A-59}).
\end{remark}
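The count from the remark above is easy to reproduce: a Castelnuovo function of weight $n$ whose staircase part has height $\sigma$ consists of the staircase $1,2,\ldots,\sigma$ (using $\sigma(\sigma+1)/2$ squares) followed by a non-increasing tail with values at most $\sigma$, i.e.\ a partition of $n-\sigma(\sigma+1)/2$ into parts of size at most $\sigma$. The following sketch (an illustration of ours, not taken from \cite{DV2}) compares the resulting count with the number of partitions into distinct parts:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions_max(n, m):
    """Number of partitions of n into parts of size at most m."""
    if n == 0:
        return 1
    if m == 0:
        return 0
    return sum(partitions_max(n - part, part) for part in range(1, min(n, m) + 1))

def castelnuovo_diagrams(n):
    """Number of Castelnuovo functions of weight n >= 1: sum over the
    staircase height sigma of the partitions of the remaining weight
    into parts of size at most sigma."""
    total, sigma = 0, 1
    while sigma * (sigma + 1) // 2 <= n:
        total += partitions_max(n - sigma * (sigma + 1) // 2, sigma)
        sigma += 1
    return total

def distinct_partitions(n):
    """Number of partitions of n into distinct parts."""
    ways = [1] + [0] * n
    for part in range(1, n + 1):
        for s in range(n, part - 1, -1):   # downward: each part used once
            ways[s] += ways[s - part]
    return ways[n]

print([castelnuovo_diagrams(n) for n in range(1, 7)])   # [1, 1, 2, 2, 3, 4]
print(all(castelnuovo_diagrams(n) == distinct_partitions(n)
          for n in range(1, 25)))                       # True
```

In particular there are two diagrams of weight $3$, matching Example \ref{ref-1.2-2}.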
Hilbert functions provide a natural stratification of the Hilbert scheme. For any Hilbert function $\psi$ of degree
$n$ one defines a smooth connected subscheme \cite{DV2,Gotzmann} $H_{\psi}$ of $\Hilb_{n}(\PP^{2})$ by
\[
H_{\psi} = \{ X \in \Hilb_{n}(\PP^2) \mid h_{X} = \psi \}.
\]
The family $\{ H_{\psi} \}_{\psi \in \Gamma_{n}}$ forms a stratification of $\Hilb_{n}(\PP^{2})$ in the sense that
\[
\overline{H_\psi}\subset \bigcup_{\varphi\le \psi} H_{\varphi}
\]
It follows that if $H_\varphi\subset \overline{H_\psi}$ then $\varphi\le \psi$. The converse implication is in general false and it is still an open problem to find necessary and sufficient conditions for the existence of an inclusion $H_\varphi\subset \overline{H_\psi}$ \cite{BH, maC, CW, HRW}. This problem is sometimes referred to as the {\em incidence problem}.
In his PhD thesis \cite{Guerimand}, Guerimand introduced two additional necessary conditions for the incidence of strata, which we now discuss.
\begin{equation} \label{ref-1.2-4}
\mbox{the \emph{dimension condition}: } \dim H_{\varphi} < \dim H_{\psi}
\end{equation}
This criterion can be used effectively since there are formulas for $\dim H_\psi$ \cite{DV2,Gotzmann}.
The {\em tangent function} $t_{\varphi}$ of a Hilbert function $\varphi \in \Gamma_{n}$ is defined as the Hilbert function of $\Iscr_{X} \otimes_{\PP^2}\Tscr_{\PP^2}$, where $X\in H_\varphi$ is generic. Semi-continuity yields:
\begin{equation} \label{ref-1.3-5}
\mbox{the \emph{tangent condition}: } t_{\varphi}\ge t_{\psi}
\end{equation}
Again it is possible to compute $t_\psi$ from $\psi$ (see \cite[Lemme 2.2.4]{Guerimand} and also Proposition \ref{ref-3.3.1-23} below).
\medskip
Let us say that a pair of Hilbert functions $(\varphi, \psi)$ of degree $n$ has {\em length zero} if $\varphi< \psi$ and there are no Hilbert functions $\tau$ of degree $n$ such that $\varphi < \tau < \psi$.\footnote{This is a minor deviation from Guerimand's definition.} It is easy to see that $(\varphi, \psi)$ has length zero if and only if the
Castelnuovo diagram of $\psi$ can be obtained from that of $\varphi$ by making a minimal movement to the left of one square \cite[Proposition 2.1.7]{Guerimand}.
\begin{example}
Although in the following pair $s_\psi$ is obtained from $s_\varphi$ by moving one square, the pair does not have length zero, since $s_{\psi}$ may also be obtained from $s_{\varphi}$ by first performing movement $1$ and then movement $2$. \\
\vspace{-0.8cm}
\unitlength 1mm
\begin{picture}(105.00,43.13)(0,0)
\linethickness{0.15mm}
\put(20.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(20.00,15.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(25.00,15.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(25.00,20.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(30.00,20.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(30.00,25.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(40.00,20.00){\line(1,0){15.00}}
\linethickness{0.15mm}
\put(55.00,15.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(55.00,15.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(60.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(20.00,10.00){\line(1,0){40.00}}
\linethickness{0.15mm}
\put(25.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(30.00,10.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(35.00,10.00){\line(0,1){15.00}}
\linethickness{0.15mm}
\put(40.00,10.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(45.00,10.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(50.00,10.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(55.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(25.00,15.00){\line(1,0){30.00}}
\linethickness{0.15mm}
\put(30.00,20.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(70.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(70.00,15.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(75.00,15.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(75.00,20.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(80.00,20.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(95.00,20.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(100.00,15.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(75.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(80.00,10.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(85.00,10.00){\line(0,1){15.00}}
\linethickness{0.15mm}
\put(90.00,10.00){\line(0,1){15.00}}
\linethickness{0.15mm}
\put(95.00,10.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(100.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(105.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(75.00,15.00){\line(1,0){25.00}}
\linethickness{0.15mm}
\put(80.00,20.00){\line(1,0){15.00}}
\put(40.00,3.75){\makebox(0,0)[cc]{$\varphi$}}
\put(90.00,4.38){\makebox(0,0)[cc]{$\psi$}}
\put(40.00,5.00){\makebox(0,0)[cc]{}}
\linethickness{0.15mm}
\put(80.00,25.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\qbezier(52.50,22.50)(50.63,43.13)(39.38,25.63)
\put(53.13,31.25){\makebox(0,0)[cc]{1}}
\linethickness{0.15mm}
\qbezier(58.13,17.50)(58.13,34.38)(54.38,22.50)
\put(60.00,26.88){\makebox(0,0)[cc]{2}}
\linethickness{0.15mm}
\put(54.38,22.50){\line(0,1){1.25}}
\linethickness{0.15mm}
\multiput(54.38,22.50)(0.13,0.13){10}{\line(1,0){0.13}}
\linethickness{0.15mm}
\put(54.38,23.75){\line(0,1){0.63}}
\linethickness{0.15mm}
\put(39.38,25.63){\line(0,1){1.87}}
\linethickness{0.15mm}
\multiput(39.38,25.63)(0.37,0.12){5}{\line(1,0){0.37}}
\linethickness{0.15mm}
\put(70.00,10.00){\line(1,0){30.00}}
\linethickness{0.15mm}
\put(100.00,10.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(100.00,15.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(100.00,20.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(105.00,15.00){\line(0,1){5.00}}
\end{picture}\\
\end{example}
In general, moving a square by one column always yields a pair of length zero. A movement by more than one column yields length
zero if and only if it is of the form
\begin{equation}
\label{ref-1.4-7}
\unitlength 1mm
\begin{picture}(25.00,31.88)(0,0)
\linethickness{0.15mm}
\put(0.00,10.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(15.00,10.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(20.00,5.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(20.00,5.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(25.00,5.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\multiput(0.00,10.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(5.00,10.00)(1.82,0){6}{\line(1,0){0.91}}
\linethickness{0.15mm}
\qbezier(23.75,11.25)(13.13,31.88)(3.13,15.63)
\linethickness{0.15mm}
\multiput(3.13,15.63)(0.38,0.13){5}{\line(1,0){0.38}}
\linethickness{0.15mm}
\put(3.13,15.63){\line(0,1){1.88}}
\end{picture}
\end{equation}
The dotted lines represent zero or more squares.
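This characterisation can be tested by brute force on small $n$: enumerate $\Gamma_n$, compute the covering relations of the partial order, and check that each cover changes the diagram by $t^u - t^{v+1}$ for some $0 < u \le v$, i.e.\ moves exactly one square to the left. The Python sketch below is ours (the staircase-plus-tail normal form for diagrams is an assumption taken from the definition of Castelnuovo functions):

```python
def bounded_partitions(n, maxpart):
    """Non-increasing tuples of integers in [1, maxpart] summing to n."""
    if n == 0:
        yield ()
        return
    for first in range(min(n, maxpart), 0, -1):
        for rest in bounded_partitions(n - first, first):
            yield (first,) + rest

def castelnuovo_diagrams(n):
    """All Castelnuovo diagrams of weight n (staircase + non-increasing tail)."""
    out, sigma = [], 1
    while sigma * (sigma + 1) // 2 <= n:
        for tail in bounded_partitions(n - sigma * (sigma + 1) // 2, sigma):
            out.append(tuple(range(1, sigma + 1)) + tail)
        sigma += 1
    return out

def single_square_move(s, t, n):
    """True iff t(x) - s(x) = x^u - x^(v+1) for some 0 < u <= v."""
    get = lambda seq, i: seq[i] if i < len(seq) else 0
    diff = [get(t, i) - get(s, i) for i in range(n + 2)]
    plus = [i for i, d in enumerate(diff) if d > 0]
    minus = [i for i, d in enumerate(diff) if d < 0]
    return (len(plus) == 1 and len(minus) == 1
            and diff[plus[0]] == 1 and diff[minus[0]] == -1
            and 0 < plus[0] < minus[0])

def covers(n):
    """Covering relations (length zero pairs) of the partial order on Gamma_n."""
    ds = castelnuovo_diagrams(n)
    h = {s: [sum(s[:m + 1]) for m in range(n + 1)] for s in ds}
    lt = lambda a, b: a != b and all(x <= y for x, y in zip(h[a], h[b]))
    return [(a, b) for a in ds for b in ds
            if lt(a, b) and not any(lt(a, c) and lt(c, b) for c in ds)]
```

For all weights $n \le 8$ every covering relation found this way passes the single-square test, as expected.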
\medskip
The following theorem is the main result of this paper.
\begin{theorem} \label{ref-1.5-8}
Assume that $(\varphi, \psi)$ has length zero. Then $H_{\varphi} \subset \overline{H_{\psi}}$ if and only if
the dimension condition and the tangent condition hold.
\end{theorem}
This result may be translated into a purely combinatorial (albeit technical) criterion for the existence of an inclusion
$H_\varphi\subset\overline{H_\psi}$ (see Appendix \ref{ref-B-60}).
\medskip
Guerimand proved Theorem \ref{ref-1.5-8} under the additional hypothesis that $(\varphi, \psi)$ is not of ``type zero". A pair of Hilbert series $(\varphi, \psi)$ has {\em type zero} if it is obtained by moving the indicated square in the diagram below.\footnote{It is easy to see that this definition of type zero is equivalent to the one in~\cite{Guerimand}.} \\
\unitlength 1mm
\begin{picture}(90.00,33.44)(0,0)
\linethickness{0.15mm}
\put(50.00,25.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(60.00,15.00){\line(0,1){2.50}}
\linethickness{0.15mm}
\put(60.00,15.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(65.00,15.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(75.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(75.00,10.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(60.00,15.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(70.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(70.00,10.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(70.00,10.00){\line(1,0){5.00}}
\put(70.00,10.00){\line(0,1){5.00}}
\put(75.00,10.00){\line(0,1){5.00}}
\put(70.00,15.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(70.00,10.00){\line(1,0){5.00}}
\put(70.00,10.00){\line(0,1){5.00}}
\put(75.00,10.00){\line(0,1){5.00}}
\put(70.00,15.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\qbezier(73.13,15.94)(70.63,33.44)(63.13,20.94)
\put(70.00,30.00){\makebox(0,0)[cc]{}}
\linethickness{0.15mm}
\multiput(45.00,25.00)(2.00,0){3}{\line(1,0){1.00}}
\linethickness{0.15mm}
\multiput(85.00,10.00)(2.00,0){3}{\line(1,0){1.00}}
\linethickness{0.15mm}
\put(63.13,20.94){\line(0,1){0.93}}
\linethickness{0.15mm}
\multiput(63.13,20.94)(0.19,0.13){5}{\line(1,0){0.19}}
\linethickness{0.15mm}
\put(60.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(60.00,10.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(65.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(50.00,20.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(50.00,10.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(50.00,10.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(55.00,10.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(50.00,15.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\multiput(50.00,20.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(55.00,20.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(60.00,20.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(50.00,5.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(55.00,5.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(60.00,5.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(65.00,5.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(70.00,5.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(75.00,5.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(80.00,5.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(85.00,5.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\multiput(50.00,25.00)(0,2.00){3}{\line(0,1){1.00}}
\linethickness{0.15mm}
\put(50.00,5.00){\line(1,0){35.00}}
\linethickness{0.15mm}
\multiput(85.00,5.00)(2.00,0){3}{\line(1,0){1.00}}
\end{picture}\\
The dotted lines represent zero or more squares.
From the results in Appendix \ref{ref-B-60} one immediately deduces
\begin{proposition}
Let $\varphi, \psi$ be Hilbert functions of degree $n$ such that $(\varphi, \psi)$ has type zero. Then $H_{\varphi} \subset \overline {H_{\psi}}$.
\end{proposition}
\begin{remark}
The smallest, previously open, incidence problem of type zero seems to be \\
\unitlength 1mm
\begin{picture}(105.00,30.00)(0,0)
\linethickness{0.15mm}
\put(15.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(15.00,15.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(20.00,15.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(20.00,20.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(25.00,20.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(25.00,25.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(30.00,25.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(30.00,30.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(35.00,30.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(40.00,10.00){\line(0,1){20.00}}
\linethickness{0.15mm}
\put(15.00,10.00){\line(1,0){25.00}}
\linethickness{0.15mm}
\put(40.00,15.00){\line(1,0){15.00}}
\linethickness{0.15mm}
\put(55.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(40.00,10.00){\line(1,0){15.00}}
\linethickness{0.15mm}
\put(45.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(50.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(20.00,15.00){\line(1,0){20.00}}
\linethickness{0.15mm}
\put(25.00,20.00){\line(1,0){15.00}}
\linethickness{0.15mm}
\put(30.00,25.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(20.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(25.00,10.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(30.00,10.00){\line(0,1){15.00}}
\linethickness{0.15mm}
\put(35.00,10.00){\line(0,1){20.00}}
\linethickness{0.15mm}
\put(70.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(70.00,15.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(75.00,15.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(75.00,20.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(80.00,20.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(80.00,25.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(85.00,25.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(85.00,30.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(95.00,10.00){\line(0,1){20.00}}
\linethickness{0.15mm}
\put(70.00,10.00){\line(1,0){25.00}}
\linethickness{0.15mm}
\put(95.00,20.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(100.00,10.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(100.00,15.00){\line(1,0){5.00}}
\linethickness{0.15mm}
\put(105.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\put(95.00,10.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(75.00,15.00){\line(1,0){25.00}}
\linethickness{0.15mm}
\put(80.00,20.00){\line(1,0){15.00}}
\linethickness{0.15mm}
\put(85.00,25.00){\line(1,0){10.00}}
\linethickness{0.15mm}
\put(90.00,10.00){\line(0,1){20.00}}
\linethickness{0.15mm}
\put(85.00,10.00){\line(0,1){15.00}}
\linethickness{0.15mm}
\put(80.00,10.00){\line(0,1){10.00}}
\linethickness{0.15mm}
\put(75.00,10.00){\line(0,1){5.00}}
\linethickness{0.15mm}
\qbezier(52.50,17.50)(52.50,30.00)(45.00,22.50)
\linethickness{0.15mm}
\multiput(45.00,22.50)(0.38,0.13){5}{\line(1,0){0.38}}
\linethickness{0.15mm}
\multiput(45.00,22.50)(0.13,0.38){5}{\line(0,1){0.38}}
\put(35.00,3.75){\makebox(0,0)[cc]{$\varphi = 1,3,6,10,14,15,16,17,17,\dots$}}
\put(90.00,3.75){\makebox(0,0)[cc]{$\psi = 1,3,6,10,14,16,17,17,\dots$}}
\end{picture}
(see \cite[Exemple A.4.2]{Guerimand}).
\end{remark}
\begin{remark}
Theorem \ref{ref-1.5-8} is false without the condition that $(\varphi,\psi)$ has length zero. See \cite[Exemple A.2.1]{Guerimand}.
\end{remark}
\bigskip
The authors became interested in the incidence problem while they were studying the deformations of the Hilbert schemes of $\PP^{2}$ which come from non-commutative geometry, see \cite{NS, DV1, DV2}.
It seems that the geometric methods of Guerimand do not apply in a non-commutative context and therefore we developed an alternative approach to the incidence problem based on deformation theory (see \S\ref{ref-2-9}). In this approach the type zero condition turned out to be unnecessary. For this reason we have decided to write down our results first in a purely commutative setting. In a forthcoming paper we will describe the corresponding non-commutative theory.
\section{Outline of the proof of the main theorem}
\label{ref-2-9}
Here and in the rest of this paper we work in the graded category. Thus the notations $\Hom$, $\Ext$, etc\dots\ never have their ungraded meaning.
\subsection{Generic Betti numbers}
\label{ref-2.1-10}
Let $X \in \Hilb_{n}(\PP^{2})$. It is easy to see that the graded ideal $I_{X}$ associated to $X$ admits a minimal free resolution of the form
\begin{equation} \label{ref-2.1-11}
0 \r \oplus_{i}A(-i)^{b_{i}} \r \oplus_{i}A(-i)^{a_{i}} \r I_{X} \r 0
\end{equation}
where $(a_{i}),(b_{i})$ are sequences of non-negative integers which have finite support, called the {\em graded Betti numbers} of $I_{X}$ (and $X$). They are related to the Hilbert series of $I_{X}$ as
\begin{equation} \label{ref-2.2-12}
h_{I_{X}}(t) = h_{A}(t)\sum_{i}(a_{i} - b_{i})t^{i} = \frac{\sum_{i}(a_{i} - b_{i})t^{i}}{(1-t)^{3}}
\end{equation}
So the Betti numbers determine the Hilbert series of $I_{X}$. For generic $X$ (in a stratum $H_\psi$) the converse is true since in that case $a_{i}$ and $b_{i}$ are not both non-zero. We will call such $(a_i)_i$, $(b_i)_i$ \emph{generic Betti numbers}.
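Concretely, combining \eqref{ref-2.2-12} with $h_{I_X}(t) = 1/(1-t)^3 - \varphi(t)$ and the (assumed) relation $s_\varphi(t) = (1-t)\varphi(t)$ between a diagram and its Hilbert function gives $\sum_i(a_i-b_i)t^i = 1-(1-t)^2 s_\varphi(t)$, and genericity splits the positive and negative coefficients into the $a_i$ and $b_i$. A Python sketch (ours, purely illustrative):

```python
def generic_betti(s):
    """Generic Betti numbers of a stratum from its Castelnuovo diagram s.

    q(t) := sum_i (a_i - b_i) t^i = 1 - (1-t)^2 s(t); since generically
    a_i and b_i are not both nonzero, q determines both sequences."""
    q = [0] * (len(s) + 3)
    q[0] = 1
    for i, si in enumerate(s):          # subtract (1 - 2t + t^2) * s(t)
        q[i] -= si
        q[i + 1] += 2 * si
        q[i + 2] -= si
    a = {i: c for i, c in enumerate(q) if c > 0}
    b = {i: -c for i, c in enumerate(q) if c < 0}
    return a, b
```

For the two weight-3 diagrams this recovers the familiar resolutions $0 \r A(-4) \r A(-1)\oplus A(-3) \r I_X \r 0$ (three collinear points) and $0 \r A(-3)^2 \r A(-2)^3 \r I_X \r 0$ (three generic points).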
\subsection{Four sets of conditions}
We fix a pair of Hilbert series $(\varphi,\psi)$ of length zero. Thus for the associated Castelnuovo functions we have
\begin{equation}
\label{ref-2.3-13}
s_\psi(t)=s_\varphi(t)+t^u-t^{v+1}
\end{equation}
for some integers $0<u\le v$. To prove Theorem \ref{ref-1.5-8} we will show that four sets of conditions on $(\varphi,\psi)$ are equivalent.
{\defD{A}
\begin{condition}
$H_\varphi\subset\overline{H_\psi}$.
\end{condition}
}
{\defD{B}
\begin{condition}
The dimension and the tangent condition hold for $(\varphi,\psi)$.
\end{condition}
}
Let $(a_i)_i$ and $(b_i)_i$ be the generic Betti numbers associated to~$\varphi$. The next
technical condition restricts the values of the Betti numbers for $i=u$, $u+1$, $v+2$, $v+3$.
{\defD{C}
\begin{condition} $a_u\neq 0$, $b_{v+3}\neq 0$ and
\[
\begin{cases}
b_{u+1}\le a_u \le b_{u+1}+1\text{ and } b_{v+3}=a_{v+2}&\\
\text{or} & \text{if $v=u+1$}\\
a_u=b_{u+1}+1 \text{ and } b_{v+3}=a_{v+2}-1&\\
&\\
a_u=b_{u+1}+1\text{ and }b_{v+3}=a_{v+2}&\text{if $v\ge u+2$}
\end{cases}
\]
\end{condition}
}
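As an illustration, Condition C can be checked mechanically for the previously open type zero example from the introduction ($\varphi = 1,3,6,10,14,15,16,17,\dots$, so $s_\varphi = (1,2,3,4,4,1,1,1)$, with $u=5$, $v=6$). The Python sketch below is ours; the generic Betti numbers are computed from $\sum_i(a_i-b_i)t^i = 1-(1-t)^2 s_\varphi(t)$, which is the content of \eqref{ref-3.2-16} below:

```python
def generic_betti(s):
    q = [0] * (len(s) + 3)
    q[0] = 1
    for i, si in enumerate(s):          # q(t) = 1 - (1-t)^2 s(t)
        q[i] -= si
        q[i + 1] += 2 * si
        q[i + 2] -= si
    return ({i: c for i, c in enumerate(q) if c > 0},
            {i: -c for i, c in enumerate(q) if c < 0})

def condition_C(s_phi, u, v):
    """Condition C for a length zero pair with s_psi = s_phi + t^u - t^(v+1)."""
    a, b = generic_betti(s_phi)
    au, bu1 = a.get(u, 0), b.get(u + 1, 0)
    av2, bv3 = a.get(v + 2, 0), b.get(v + 3, 0)
    if au == 0 or bv3 == 0:
        return False
    if v == u:
        return True                     # no further constraint when v == u
    if v == u + 1:
        return (bu1 <= au <= bu1 + 1 and bv3 == av2) or \
               (au == bu1 + 1 and bv3 == av2 - 1)
    return au == bu1 + 1 and bv3 == av2
```

Here `generic_betti((1, 2, 3, 4, 4, 1, 1, 1))` gives $a_4=1$, $a_5=3$, $a_8=1$, $b_6=3$, $b_9=1$, so the first alternative of the case $v=u+1$ holds and the pair satisfies Condition C, in agreement with Theorem \ref{ref-1.5-8}.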
The last condition is of a homological nature. Let $I\subset A$ be a graded ideal corresponding to a generic point of $H_\varphi$. Put
\[
\hat{A}=\begin{pmatrix} A & A \\ 0 & A \end{pmatrix}
\]
For an ideal $J\subset I$ put
\[
\hat{J}=\begin{pmatrix} J& I \end{pmatrix}
\]
This is a right $\hat{A}$-module.
{
\defD{D}
\begin{condition} There exists an ideal $J\subset I$ with $h_J(t)=\psi(t)$ such that
\[
\dim_{k} \Ext^1_{\hat{A}}(\hat{J},\hat{J})< \dim_{k} \Ext^1_{A}(J,J)
\]
\end{condition}
}
In the sequel we will verify the implications
\[
A \Rightarrow B \Rightarrow C \Rightarrow D \Rightarrow A
\]
Here the implication $A\Rightarrow B$ is clear and the implication $B\Rightarrow C$ is purely combinatorial.
\medskip
The implication $C\Rightarrow D$ is based on the observation that $I/J$ must be a so-called truncated point module (see \S\ref{ref-4.1-36} below). This allows us to construct the projective resolution of $J$ from that of $I$ and in this way we can compute $\dim_{k} \Ext^1_A(J,J)$. To compute $\Ext^1_{\hat{A}}(\hat{J},\hat{J})$ we view it as the
tangent space to the moduli-space of pairs $(J,I)$.
\medskip
The implication $D\Rightarrow A$ uses elementary deformation theory. Assume that $D$ holds. Starting from some $\zeta\in \Ext^1_A(J,J)$ (which we view as a first-order deformation of $J$) not in the image of $\Ext^1_{\hat{A}}(\hat{J},\hat{J})$, we construct a one-parameter family of ideals $J_\theta$ such that $J_0=J$ and
$\pd J_\theta=1$ for $\theta\neq 0$. Since $I$ and $J=J_0$ have the same image in $\Hilb_n(\PP^2)$, this shows that $H_\varphi$ is indeed in the closure of $H_\psi$.
\section{The implication $B\Rightarrow C$}
\label{ref-3-14}
In this section we translate the length zero condition, the dimension condition and the tangent condition in terms of Betti numbers. As a result we obtain that Condition B implies Condition C.
To make the connection between Betti numbers and Castelnuovo diagrams we frequently use the identities
\begin{equation} \label{ref-3.1-15}
\sum_{i \leq l}(a_{i}-b_{i}) =1 + s_{l-1} - s_{l} \quad\mbox{ if }\quad l \geq 0
\end{equation}
\begin{equation} \label{ref-3.2-16}
a_{l} - b_{l} = -s_{l}+2s_{l-1}-s_{l-2} \quad \mbox{ if } \quad l > 0
\end{equation}
Throughout we fix a pair of Hilbert functions $(\varphi,\psi)$ of degree $n$ and length zero and we let $s=s_\varphi$,
$\tilde{s}=s_\psi$ be the corresponding Castelnuovo diagrams. Thus we have
\begin{equation}
\label{ref-3.3-17}
\psi(t) = \varphi(t) + t^{u} + t^{u+1} + \cdots + t^{v}
\end{equation}
and
\begin{equation}
\label{ref-3.4-18}
\tilde{s}=s+t^u-t^{v+1}
\end{equation}
for some $0<u\le v$.
The corresponding generic Betti numbers (cf.\ \S\ref{ref-2.1-10}) are written as $(a_{i}),(b_{i})$, resp.\ $(\tilde{a}_{i}),(\tilde{b}_{i})$. We also write
\begin{align*}
\sigma &= \min \{i \mid s_i\ge s_{i+1}\}=\min \{i \mid a_{i} > 0 \}\\
\quad \tilde{\sigma} &= \min \{i \mid \tilde{s}_i\ge
\tilde{s}_{i+1}\}=\ \min \{i \mid \tilde{a}_{i} > 0 \}
\end{align*}
\subsection{Translation of the length zero condition}
The proof of the following result is left to the reader.
\begin{propositions}
\label{ref-3.1.1-19} If $v\geq u+1$ then we have
\begin{eqnarray*}
\begin{array}{c|ccccccccc}
i & \ldots & u & u+1 & u+2 & \ldots & v+1 & v+2 & v+3 & \ldots \\
\hline
a_{i} \vrule height 1.2em width 0pt& \ldots & \ast & 0 & 0 & \ldots & 0 & \ast & \ast & \dots \\
b_{i} & \ldots & \ast & \ast & 0 & \ldots & 0 & 0 & \ast & \ldots
\end{array}
\end{eqnarray*}
where
\[
a_{u} \leq b_{u+1}+1, \,\, a_{v+2} > 0, \,\, b_{v+3} \leq a_{v+2}.
\]
\end{propositions}
This result is based on the identity \eqref{ref-3.2-16}. The zeroes among the Betti numbers are caused by the
``plateau'' in $s$ between the $u$th and the $(v+1)$th column (see \eqref{ref-1.4-7}).
\subsection{Translation of the dimension condition}
The following result allows us to compare the dimensions of the strata $H_\varphi$ and $H_\psi$.
\begin{propositions} \label{ref-3.2.1-20}
One has
\begin{equation}
\dim H_{\psi} = \dim H_{\varphi} +
\sum_{i=u}^{v}(a_{i}-b_{i}) - \sum_{i=u+3}^{v+3}(a_{i}-b_{i}) + e
\end{equation}
and
\begin{equation} \label{ref-3.6-21}
\begin{split}
\dim H_{\psi} & = \dim H_{\varphi}-s_{u-2}+s_{u-1}+s_{u+1}-s_{u+2} \\
& \hspace{1.5cm}+s_{v-1}-s_{v}-s_{v+2}+s_{v+3}+e \\
\end{split}
\end{equation}
where
\begin{eqnarray*}
e =
\left\{
\begin{array}{cl}
-1 & \mbox{ if } v = u \\
1 & \mbox{ if } v = u+1 \\
0 & \mbox{ if } v \geq u+2
\end{array}
\right.
\end{eqnarray*}
\end{propositions}
\begin{proof}
The proof uses only \eqref{ref-3.4-18}. One has the formula \cite{DV2}
\[
\dim H_{\varphi} = 1 + n + c_{\varphi}
\]
where $c_{\varphi}$ is the constant term of
\begin{equation*}
f_{\varphi}(t) = (t^{-1}-t^{-2})s_{\varphi}(t^{-1})s_{\varphi}(t)
\end{equation*}
We find
\begin{equation*}
\begin{split}
f_{\psi}(t) & = (t^{-1}-t^{-2})s_{\psi}(t^{-1})s_{\psi}(t) \\
& = (t^{-1}-t^{-2})(s_{\varphi}(t^{-1}) + t^{-u} - t^{-v-1})
(s_{\varphi}(t) + t^{u} - t^{v+1}) \\
& = (t^{-1}-t^{-2})\biggl(\sum_{i}s_{i}t^{-i} + t^{-u} - t^{-v-1}\biggr)
\biggl(\sum_{j}s_{j}t^{j} + t^{u} - t^{v+1}\biggr) \\
& = f_{\varphi}(t) + (t^{-1}-t^{-2})\biggl(\sum_{i}s_{i}t^{u-i} -
\sum_{i}s_{i}t^{v+1-i} \\
& \hspace{0.7cm}
+ \sum_{j}s_{j}t^{j-u} -
\sum_{j}s_{j}t^{j-v-1} - t^{v+1-u} - t^{u-v-1} + 2\biggr)
\end{split}
\end{equation*}
Taking constant terms we obtain \eqref{ref-3.6-21}. Applying \eqref{ref-3.1-15} finishes the proof.
\end{proof}
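Both the formula $\dim H_{\varphi} = 1 + n + c_{\varphi}$ and the comparison \eqref{ref-3.6-21} are easy to evaluate numerically. In the following Python sketch (ours) the constant term of $f_{\varphi}$ is written out as $c_{\varphi} = \sum_i s_i s_{i+1} - \sum_i s_i s_{i+2}$, which one reads off directly from the product defining $f_\varphi$:

```python
def dim_stratum(s):
    """dim H_phi = 1 + n + c_phi for a Castelnuovo diagram s of weight n,
    where c_phi = sum_i s_i s_{i+1} - sum_i s_i s_{i+2} is the constant
    term of (1/t - 1/t^2) s(1/t) s(t)."""
    n = sum(s)
    c = sum(s[i] * s[i + 1] for i in range(len(s) - 1)) \
      - sum(s[i] * s[i + 2] for i in range(len(s) - 2))
    return 1 + n + c

# weight 3: three collinear points (a line plus three points on it:
# 2 + 3 = 5 parameters) versus three general points (all of Hilb_3)
assert dim_stratum((1, 1, 1)) == 5
assert dim_stratum((1, 2)) == 6
```

For the length zero pair with $s_\varphi = (1,2,3,4,4,1,1,1)$ and $s_\psi = (1,2,3,4,4,2,1)$ (so $u=5$, $v=u+1=6$, $e=1$) this gives $\dim H_\varphi = 28 < 29 = \dim H_\psi$, in accordance with \eqref{ref-3.6-21}.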
We obtain the following rather strong consequence of the dimension condition.
\begin{corollarys} \label{ref-3.2.2-22}
If $v \geq u+2$ then
\[
\dim H_{\varphi} < \dim H_{\psi} \Leftrightarrow a_{u} = b_{u+1}+1 \mbox{ and } a_{v+2} = b_{v+3}
\]
and if this is the case then we have in addition
\[
\dim H_{\psi} = \dim H_{\varphi} + 1 \mbox{ and } u = \sigma, \,\, a_{u} > 0, \,\, a_{v+2} = b_{v+3} > 0
\]
\end{corollarys}
\begin{proof}
Due to Proposition \ref{ref-3.1.1-19} we have $s_{u+1} = s_{u+2}$ and $s_{v-1}=s_{v}$ so \eqref{ref-3.6-21} becomes
\[
\dim H_{\varphi} < \dim H_{\psi} \Leftrightarrow
(s_{u-2}-s_{u-1})+(s_{v+2}-s_{v+3})<0
\]
We have that $1 \leq \sigma \leq u$, which implies $s_{v+2} \geq s_{v+3}$, and either $s_{u-2} \geq s_{u-1}$ or $s_{u-1}=s_{u-2}+1$. From this it is easy to see that we have $(s_{u-2}-s_{u-1})+(s_{v+2}-s_{v+3})<0$ if and only if
$s_{u-1} = s_{u-2}+1$ and $s_{v+2} = s_{v+3}$.
First assume that this is the case. Then it follows from \eqref{ref-3.1-15} and Proposition \ref{ref-3.1.1-19} that $\sigma = u$ hence $a_{u} > 0$, $b_{u} = 0$. Equation \eqref{ref-3.1-15} together with $s_{u} = s_{u+1}$ gives $\sum_{i \leq u+1}(a_{i} - b_{i}) = 1$ and since $a_{u+1} = 0$ (see Proposition \ref{ref-3.1.1-19}) we have $a_{u} =
b_{u+1} + 1$. Further, \eqref{ref-3.1-15} together with $s_{v+2} = s_{v+3}$ gives $\sum_{i \leq v+3}(a_{i} - b_{i}) = 1$. Combined with $\sum_{i \leq u+1}(a_{i} - b_{i}) = 1$ and Proposition \ref{ref-3.1.1-19} we get $a_{v+2} + (a_{v+3} - b_{v+3}) = 0$ where $a_{v+2} > 0$. This gives $a_{v+2} = b_{v+3} > 0$.
Conversely, assume that $a_{u} = b_{u+1}+1$ and $a_{v+2} = b_{v+3}$. Observe that Proposition \ref{ref-3.1.1-19} implies $s_{u} = s_{u+1}$ and $a_{u+1} = 0$, so using \eqref{ref-3.1-15} yields
\[
1 = \sum_{i \leq u+1}(a_{i} - b_{i}) = \sum_{i \leq u-1}(a_{i} - b_{i}) + a_{u} - b_{u+1}
\]
Since we assumed that $a_{u} = b_{u+1}+1$, we find that $\sum_{i \leq u-1}(a_{i} - b_{i}) = 0$ and using \eqref{ref-3.1-15} again we get $s_{u-2} + 1 = s_{u-1}$. Next, the fact that $s_{v} = s_{v+1}$ (see Proposition \ref{ref-3.1.1-19}) together with \eqref{ref-3.1-15} yields $\sum_{i \leq v+1}(a_{i} - b_{i}) = 1$. In combination with
equation \eqref{ref-3.1-15} for $l = v+3$ and Proposition \ref{ref-3.1.1-19} we get that $s_{v+2} - s_{v+3} = a_{v+2} + (a_{v+3} - b_{v+3}) = 0$. Since we assumed that $a_{v+2} = b_{v+3}$ this implies that $s_{v+2} - s_{v+3} = a_{v+3}$. Further, since $b_{v+3} = a_{v+2} > 0$ (see Proposition \ref{ref-3.1.1-19}) we have $a_{v+3} = 0$. We conclude that $s_{u-2} + 1 = s_{u-1}$ and $s_{v+2} = s_{v+3}$ which finishes the proof.
\end{proof}
\subsection{Translation of the tangent condition}
Recall from the introduction that the tangent function $t_{\varphi}$ is the Hilbert function of $\Iscr_{X} \otimes_{\PP^{2}} \Tscr_{\PP^{2}}$ for $X\in H_\varphi$ generic.
\begin{propositions}
\label{ref-3.3.1-23}
(See also \cite[Lemme 2.2.24]{Guerimand}) We have
\begin{equation} \label{ref-3.7-24}
t_{\varphi}(t) = h_{\Tscr_{\PP^2}}(t) - (3t^{-1}-1)\varphi(t) +
\sum_{i}b_{i+3}t^{i}
\end{equation}
\end{propositions}
\begin{proof}
From the exact sequence
\[
0\r \Tscr_{\PP^{2}} \r \Oscr(2)^{3}\r \Oscr(3) \r 0
\]
we deduce
\begin{equation} \label{ref-3.8-25}
H^1(\PP^2,\Tscr_{\PP^2}(n))=
\begin{cases}
k&\text{if $n=-3$}\\
0&\text{otherwise}
\end{cases}
\end{equation}
Let $\Iscr=\Iscr_X$ ($X$ generic) and consider the associated resolution.
\begin{equation*}
0 \r \oplus_{j} \Oscr(-j)^{b_{j}} \r \oplus_{i} \Oscr(-i)^{a_{i}} \r
\Iscr \r 0
\end{equation*}
Tensoring with $\Tscr_{\PP^2}(n)$ and applying the long exact sequence for $H^\ast(\PP^2,-)$ we obtain an exact sequence
\begin{multline*}
0 \r \oplus_{j} \Gamma(\PP^2,\Tscr(n-j)^{b_{j}} )\r
\oplus_{i} \Gamma(\PP^2,\Tscr(n-i)^{a_{i}}) \r
\Gamma(\PP^2,\Iscr\otimes \Tscr(n))\r \\
\oplus_{j} H^1(\PP^2,\Tscr(n-j)^{b_{j}} )\r
\oplus_{i} H^1(\PP^2,\Tscr(n-i)^{a_{i}})
\end{multline*}
It follows from \eqref{ref-3.8-25} that the rightmost arrow is zero. This easily yields the required formula.
\end{proof}
\begin{remarks}
The previous proposition has an easy generalization which is perhaps useful and which is proved in the same way. Let $M$ be the second syzygy of a finite dimensional graded $A$-module $F$ and let $\Mscr$ be the associated coherent sheaf. Write $h_M(t)=q_M(t)/(1-t)^3$. Then the Hilbert series of $\Iscr_X\otimes\Mscr$ is given by
\[
q_M(t)h_{I_X}(t)+h_{\Tor_1^A(F,I_X)}(t)
\]
The case where $\Mscr$ is the tangent bundle corresponds to $F=k(3)$.
\end{remarks}
\begin{propositions} \label{ref-3.3.3-26}
We have
\begin{enumerate}
\item
$t_{\psi}(l) \leq t_{\varphi}(l)$ for $l \neq u-3,v$
\item
$t_{\psi} \leq t_{\varphi} \Leftrightarrow a_{u} \neq 0 \mbox{ and } b_{v+3} \neq 0$
\end{enumerate}
\end{propositions}
\begin{proof}
The proof uses only \eqref{ref-3.4-18}. Comparing \eqref{ref-3.7-24} for $\varphi$ and $\psi$ gives
\begin{equation} \label{ref-3.9-27}
t_{\varphi}(t) - t_{\psi}(t) = 3t^{u-1} + 2(t^{u} + t^{u+1} + \ldots + t^{v-1}) - t^{v} + \sum_{i}(b_{i+3} - \tilde{b}_{i+3})t^{i}
\end{equation}
where we have used \eqref{ref-3.3-17}. In order to prove the statements, we have to estimate the polynomial
$\sum_{i}(b_{i+3} - \tilde{b}_{i+3})t^{i}$. For this, substituting \eqref{ref-2.2-12} for $\varphi$ and $\psi$ in
\eqref{ref-3.3-17} gives
\begin{equation*}
\begin{split}
\sum_{i}(\tilde{a}_{i} - \tilde{b}_{i})t^{i} & = \sum_{i}(a_{i} - b_{i})t^{i}- (t^{u}-t^{v+1})(1-t)^{2} \\
& = \sum_{i}(a_{i} - b_{i})t^{i} - t^{u} + 2t^{u+1} -t^{u+2} + t^{v+1} -2t^{v+2} + t^{v+3}
\end{split}
\end{equation*}
hence
\begin{equation} \label{ref-3.10-28}
\begin{split}
\tilde{a}_{u} - \tilde{b}_{u} & = a_u - b_u - 1 \\
\tilde{a}_{u+1} - \tilde{b}_{u+1} & = a_{u+1} - b_{u+1} +
\left\{
\begin{array}{cl}
3 & \mbox{ if } v = u \\
2 & \mbox{ if } v \geq u+1
\end{array}
\right.
\\
\tilde{a}_{u+2} - \tilde{b}_{u+2} & = a_{u+2} - b_{u+2} +
\left\{
\begin{array}{cl}
-3 & \mbox{ if } v = u \\
0 & \mbox{ if } v = u+1 \\
-1 & \mbox{ if } v \geq u+2
\end{array}
\right.
\\
\tilde{a}_{v+1} - \tilde{b}_{v+1} & = a_{v+1} - b_{v+1} +
\left\{
\begin{array}{cl}
3 & \mbox{ if } v = u \\
0 & \mbox{ if } v = u+1 \\
1 & \mbox{ if } v \geq u+2
\end{array}
\right.
\\
\tilde{a}_{v+2} - \tilde{b}_{v+2} & = a_{v+2} - b_{v+2} +
\left\{
\begin{array}{cl}
-3 & \mbox{ if } v = u \\
-2 & \mbox{ if } v \geq u+1
\end{array}
\right.
\\
\tilde{a}_{v+3} - \tilde{b}_{v+3} & = a_{v+3} - b_{v+3} + 1 \\
\tilde{a}_{l} - \tilde{b}_{l} & = a_l - b_l \quad \mbox{ if } l \not\in \{ u,u+1,u+2,v+1,v+2,v+3 \}
\end{split}
\end{equation}
To obtain information about the differences $b_{i+3} - \tilde{b}_{i+3}$, we observe that for $c \geq 0$ and for all integers $l$ we have
\begin{equation} \label{ref-3.11-29}
\begin{split}
\tilde{a}_{l} - \tilde{b}_{l} & = a_l - b_l + c \Rightarrow \tilde{b}_{l}
\leq b_{l} \\
\tilde{a}_{l} - \tilde{b}_{l} & = a_l - b_l - c \Rightarrow \tilde{b}_{l}
\leq b_{l} + c
\end{split}
\end{equation}
Indeed, first let $\tilde{a}_{l} - \tilde{b}_{l} = a_l - b_l + c$. In case $0 \leq b_{l} \leq c$ then
$0 = \tilde{b}_{l} \leq b_{l}$. And in case $c < b_{l}$ then $\tilde{b}_{l} = b_{l}-c \leq b_{l}$. \\
Second, let $\tilde{a}_{l} - \tilde{b}_{l} = a_l - b_l - c$. In case $0 \leq a_{l} \leq c$ then $\tilde{a}_{l} = 0$ hence $\tilde{b}_{l} = b_{l} + c - a_{l} \leq b_{l} + c$. And in case $c < a_{l}$ then $0 = \tilde{b}_{l} \leq c = b_{l} + c$. So this proves \eqref{ref-3.11-29}. \\
Applying \eqref{ref-3.11-29} to \eqref{ref-3.10-28} yields
\begin{equation} \label{ref-3.12-30}
\begin{split}
\tilde{b}_{u} & \leq b_{u} + 1 \\
\tilde{b}_{u+1} & \leq b_{u+1} \\
\tilde{b}_{u+2} & \leq b_{u+2} +
\left\{
\begin{array}{cl}
3 & \mbox{ if } v = u \\
0 & \mbox{ if } v = u+1 \\
1 & \mbox{ if } v \geq u+2
\end{array}
\right.
\\
\tilde{b}_{v+1} & \leq b_{v+1} \\
\tilde{b}_{v+2} & \leq b_{v+2} +
\left\{
\begin{array}{cl}
3 & \mbox{ if } v = u \\
2 & \mbox{ if } v \geq u+1
\end{array}
\right.
\\
\tilde{b}_{v+3} & \leq b_{v+3} \\
\tilde{b}_{l} & \leq b_{l} \mbox{ if } l \not\in \{ u,u+1,u+2,v+1,v+2,v+3 \}
\end{split}
\end{equation}
Now we are able to prove the first statement. Combining \eqref{ref-3.12-30} and \eqref{ref-3.9-27} gives
\begin{equation}
\begin{split}
t_{\varphi}(t) - t_{\psi}(t) \geq
\left \{
\begin{array}{ll}
-t^{u-3} - t^{v} & \mbox{ if } v=u \\
-t^{u-3} + 3t^{u-1} - t^{v} & \mbox{ if } v=u+1 \\
-t^{u-3} + 2(t^{u-1} + t^{u} + \ldots + t^{v-2}) - t^{v} &
\mbox{ if } v \geq u+2
\end{array}
\right.
\end{split}
\end{equation}
and therefore $t_{\varphi}(t) - t_{\psi}(t) \geq -t^{u-3} - t^{v}$ which concludes the proof of the first statement. \\
For the second part, assume that $t_{\psi} \leq t_{\varphi}$. Equation \eqref{ref-3.9-27} implies that
\begin{equation} \label{ref-3.14-31}
\begin{split}
\tilde{b}_{u} & \leq b_{u} \\
\tilde{b}_{v+3} & \leq b_{v+3} - 1
\end{split}
\end{equation}
Since $\tilde{b}_{v+3} \geq 0$ we clearly have $b_{v+3} > 0$. Assume, by contradiction, that $a_{u} = 0$. From \eqref{ref-3.10-28} we have $\tilde{a}_{u} - \tilde{b}_{u} = a_u - b_u - 1$ hence $\tilde{a}_{u} = 0$ and $\tilde{b}_{u} = b_u + 1$. But this gives a contradiction with \eqref{ref-3.14-31}. Therefore
\[
t_{\psi} \leq t_{\varphi} \Rightarrow a_{u} > 0 \mbox{ and }
b_{v+3} > 0
\]
To prove the converse let $a_{u} > 0$ and $b_{v+3} > 0$. Due to the first part we only need to prove that $t_{\psi}(u-3) \leq t_{\varphi}(u-3)$ and $t_{\psi}(v) \leq t_{\varphi}(v)$. Equation \eqref{ref-3.9-27} gives us
\begin{equation} \label{ref-3.15-32}
\begin{split}
& t_{\varphi}(u-3) - t_{\psi}(u-3) = b_{u} - \tilde{b}_{u} \\
& t_{\varphi}(v) - t_{\psi}(v) = b_{v+3} - \tilde{b}_{v+3} - 1
\end{split}
\end{equation}
while from \eqref{ref-3.10-28} we have
\begin{equation*}
\begin{split}
& \tilde{a}_{u} - \tilde{b}_{u} = a_u - b_u - 1 \\
& \tilde{a}_{v+3} - \tilde{b}_{v+3} = a_{v+3} - b_{v+3} + 1
\end{split}
\end{equation*}
Since $a_{u} > 0$, $b_{v+3} > 0$ we have $b_{u} = 0$, $a_{v+3} = 0$ hence
\begin{equation*}
\begin{split}
& \tilde{a}_{u} - \tilde{b}_{u} = a_u - 1 \\
& \tilde{a}_{v+3} - \tilde{b}_{v+3} = - b_{v+3} + 1
\end{split}
\end{equation*}
which implies $\tilde{a}_{u} - \tilde{b}_{u} \geq 0$, $\tilde{a}_{v+3} - \tilde {b}_{v+3} \leq 0$ hence $\tilde{b}_{u} = 0$, $\tilde{a}_{v+3} = 0$. Thus $b_{u} = \tilde{b}_{u} = 0$ and $\tilde {b}_{v+3} = b_{v+3} - 1$. Combining with \eqref{ref-3.15-32} this proves that $t_{\varphi}(u-3) = t_{\psi}(u-3)$ and $t_{\varphi}(v) = t_{\psi}(v)$, finishing the proof.
\end{proof}
\subsection{Combining everything}
In this section we prove that Condition $B$ implies Condition $C$. So assume that Condition $B$ holds.
Since the tangent condition holds we have by Proposition \ref{ref-3.3.3-26}
\[
a_u\neq 0\qquad\text{and}\qquad b_{v+3}\neq 0
\]
This means there is nothing to prove if $u=v$. We discuss the two remaining cases.
\begin{case}
$v = u+1$
\end{case}
The fact that $a_{u} \neq 0$, $b_{v+3} \neq 0$ implies $b_{u} = 0$, $a_{v+3} = 0$. Proposition \ref{ref-3.2.1-20} combined with Proposition \ref{ref-3.1.1-19} now gives
\[
\dim H_{\psi} = \dim H_{\varphi} + a_{u} - b_{u+1} - a_{v+2} + b_{v+3}
+1
\]
Hence $0 \leq (a_{u} - b_{u+1}) + (b_{v+3} - a_{v+2})$. But Proposition \ref{ref-3.1.1-19} also states that $a_{u} \leq b_{u+1}+1$, $a_{v+2} > 0$ and $b_{v+3} \leq a_{v+2}$. Therefore either we have that
\[
b_{u+1} \leq a_{u} \leq b_{u+1}+1 \mbox{ and } b_{v+3} = a_{v+2}
\]
or
\[
a_{u} = b_{u+1}+1 \mbox{ and } b_{v+3} = a_{v+2}-1
\]
\begin{case}
$v \geq u+2$
\end{case}
It follows from Corollary \ref{ref-3.2.2-22} that
\[
a_{u} = b_{u+1} + 1 \mbox{ and } a_{v+2} = b_{v+3}
\]
This finishes the proof.
\begin{remarks}
\label{ref-3.4.1-33} The reader will have noticed that our proof of the implication $B\Rightarrow C$ is rather involved.
Since the equivalence of $B$ and $C$ is purely combinatorial it can be checked directly for individual $n$. Using a computer we have verified the equivalence of $B$ and $C$ for $n\le 70$.
As another independent verification we have a direct proof of the implication $C\Rightarrow B$ (i.e.\ without going through the other conditions).
\end{remarks}
\begin{remarks}
The reader may observe that in case $v = u$ we have
\begin{eqnarray}
\label{ref-3.16-34}
t_{\psi} \leq t_{\varphi} \Rightarrow \dim H_{\varphi} < \dim H_{\psi}
\end{eqnarray}
while if $v \geq u+2$ we have
\begin{eqnarray}
\label{ref-3.17-35}
\dim H_{\varphi} < \dim H_{\psi} \Rightarrow t_{\psi} \leq t_{\varphi}
\end{eqnarray}
It is easy to construct counterexamples showing that the reverse implications do not hold, and that neither \eqref{ref-3.16-34} nor \eqref{ref-3.17-35} is valid in case $v = u+1$.
\end{remarks}
\section{The implication $C\Rightarrow D$}
In this section $(\varphi,\psi)$ will have the same meaning as in \S\ref{ref-3-14} and we also keep the associated notations.
\subsection{Truncated point modules}
\label{ref-4.1-36}
A truncated point module of length $m$ is a graded $A$-module generated in degree zero with Hilbert series $1 + t + \cdots + t^{m-1}$.
If $F$ is a truncated point module of length $>1$ then there are two independent homogeneous linear forms $l_{1},l_{2}$
vanishing on $F$ and their intersection defines a point $p\in \PP^2$. We may choose basis vectors $e_i\in F_i$ such that
\[
xe_i=x_pe_{i+1}, \qquad ye_i=y_pe_{i+1}, \quad ze_i=z_pe_{i+1}
\]
where $(x_p,y_p,z_p)$ is a set of homogeneous coordinates of $p$. It follows that if $f\in A$ is homogeneous of degree $d$ and $i+d\le m-1$ then
\[
fe_i = f_p e_{i+d}
\]
where $(-)_p$ stands for evaluation in $p$ (with respect to the homogeneous coordinates $(x_p,y_p,z_p)$).
If $G=\oplus_i A(-i)^{c_i}$ then we have
\begin{equation}
\label{ref-4.1-37}
\Hom_A(G,F)=\oplus_{0\le i \le m-1} F_i^{c_i}\cong k^{\sum_{0\le i\le m-1} c_i}
\end{equation}
where the last identification is made using the basis $(e_i)_i$ introduced above.
\medskip
In the sequel we will need the minimal projective resolution of a truncated point module $F$ of length $m$. It is easy to see that it is given by
{\tiny
\begin{equation}
\label{ref-4.2-38}
0\r A(-m-2)
\xrightarrow[f_3]{
\begin{pmatrix}
l_1\\
l_2\\
\rho
\end{pmatrix}\cdot
}
A(-m-1)^2\oplus A(-2)
\xrightarrow[f_2]{
\begin{pmatrix}
0& -\rho & l_2\\
\rho& 0 & -l_1\\
-l_2 & l_1 & 0
\end{pmatrix}\cdot
}
A(-1)^2\oplus A(-m)
\xrightarrow[f_1]{\begin{pmatrix}l_1& l_2&\rho\end{pmatrix}\cdot}
A\r F\r 0
\end{equation}
}
where $l_1,l_2$ are the linear forms vanishing on $F$ and $\rho$ is a form of degree $m$ such that $\rho_p\neq 0$ for the point $p$ corresponding to $F$. Without loss of generality we may and we will assume that $\rho_p=1$.
\subsection{A complex whose homology is $J$}
\label{ref-4.2-39}
In this section $I$ is a graded ideal corresponding to a generic point in $H_\varphi$. The following lemma gives the connection between truncated point modules and Condition D.
\begin{lemmas}
If an ideal $J\subset I$ has Hilbert series $\psi$ then $I/J$ is a truncated point module of length $v+1-u$ (up to a shift in grading).
\end{lemmas}
\begin{proof}
Since $F=I/J$ has the correct (shifted) Hilbert function, it is sufficient to show that $F$ is generated in degree $u$.
If $v=u$ then there is nothing to prove. If $v\ge u+1$ then by Proposition \ref{ref-3.1.1-19} the generators of $I$ are in degrees $\le u$ and $\ge v+2$. Since $F$ lives in degrees $u,\ldots,v$ this proves what we want.
\end{proof}
Let $J,F$ be as in the previous lemma. Below we will need a complex whose homology is $J$. We write the minimal resolution of $F$ as
\[
0\r G_3 \xrightarrow{f_3} G_2 \xrightarrow{f_2} G_1 \xrightarrow{f_1} G_0 \xrightarrow{} F \r 0
\]
where the maps $f_{i}$ are as in \eqref{ref-4.2-38}, and the minimal resolution of $I$ as
\[
0\r F_1 \r F_0 \r I \r 0
\]
The map $I\r F$ induces a map of projective resolutions {\small
\begin{eqnarray} \label{ref-4.3-40}
\begin{CD}
&& && 0 @>>> F_1 @>{M}>> F_0 @>>> I @>>> 0 \\
&& && && @V{\gamma_1}VV @V{\gamma_0}VV @VVV && \\
0 @>>> G_3 @>{f_{3}}>> G_2 @>{f_2}>> G_1 @>{f_1}>> G_0 @>{f_0}>> F @>>> 0
\end{CD}
\end{eqnarray}
}
Taking cones yields that $J$ is the homology at $G_1\oplus F_0$ of the following complex
\begin{equation}
\label{ref-4.4-41}
0 \r G_3 \xrightarrow{
\begin{pmatrix}f_3\\ 0\end{pmatrix}
}G_2\oplus F_1
\xrightarrow{
\begin{pmatrix}
f_2 & \gamma_1\\
0 & -M
\end{pmatrix}
}
G_1\oplus F_0
\xrightarrow{\begin{pmatrix}f_1 & \gamma_0\end{pmatrix}}
G_0\r 0
\end{equation}
Note that the rightmost map is split here. By selecting an explicit splitting we may construct a free resolution of $J$, but it will be convenient not to do this.
\medskip
For use below we note that the map $J\r I$ is obtained from taking homology of the following map of complexes.
\begin{equation}
\label{ref-4.5-42}
\xymatrix{
0 \ar[r] & G_3 \ar[r]^(0.4)
{\begin{pmatrix}f_3\\ 0\end{pmatrix}}
& G_2\oplus F_1 \ar[d]_{\begin{pmatrix} 0 & -1\end{pmatrix}}
\ar[rr]^{
\begin{pmatrix}
f_2 & \gamma_1\\
0 & -M
\end{pmatrix}
}
&&
G_1\oplus F_0\ar[d]^{\begin{pmatrix} 0 & 1\end{pmatrix}}
\ar[rr]^(0.6){\begin{pmatrix}f_1 & \gamma_0\end{pmatrix}}
&&G_0 \ar[r]& 0\\
&0\ar[r]& F_1 \ar[rr]_M &&F_0\ar[r]& 0
}
\end{equation}
\subsection{The Hilbert scheme of an ideal}
In this section $I$ is a graded ideal corresponding to a generic point in $H_\varphi$.
Let $\Vscr$ be the Hilbert scheme of graded quotients $F$ of $I$ with Hilbert series $t^u+\cdots+t^v$. To see that $\Vscr$ exists one may realize it as a closed subscheme of
\[
\Proj S(I_u\oplus\cdots \oplus I_v)
\]
where $SV$ is the symmetric algebra of a vector space $V$. Alternatively see \cite{AZ2}.
\medskip
We will give an explicit description of $\Vscr$ by equations. Here and below we use the following convention: if $N$ is a matrix with coefficients in $A$ representing a map $\oplus_j A(-j)^{d_j}\r \oplus_i A(-i)^{c_i}$ then $N(p,q)$ stands for the submatrix of $N$ representing the induced map $ A(-q)^{d_q}\r A(-p)^{c_p}$.
\medskip
We now distinguish two cases.
\begin{itemize}
\item[$v=u$]
In this case it is clear that $\Vscr\cong \PP^{a_u-1}$.
\item[$v\ge u+1$]
Let $F\in \Vscr$ and let $p\in \PP^2$ be the associated point. Let $(e_i)_i$ be a basis for $F$ as in \S\ref{ref-4.1-36}. The map $I\r F$ defines a map
\[
\lambda:A(-u)^{a_u}\r F
\]
such that the composition
\begin{equation}
\label{ref-4.6-43}
A(-u-1)^{b_{u+1}} \xrightarrow{M(u,u+1)\cdot} A(-u)^{a_u} \r F
\end{equation}
is zero.
We may view $\lambda$ as a scalar row vector as in \eqref{ref-4.1-37}. The fact that \eqref{ref-4.6-43} has zero composition then translates into the condition
\begin{equation}
\label{ref-4.7-44}
\lambda \cdot M(u,u+1)_p=0
\end{equation}
It is easy to see that this procedure is reversible and that the equations \eqref{ref-4.7-44} define $\Vscr$ as a subscheme of $\PP^{a_u-1}\times \PP^2$.
\end{itemize}
\begin{propositions}
\label{ref-4.3.1-45} Assume that Condition C holds. Then $\Vscr$ is smooth and
\[
\dim \Vscr=
\begin{cases}
a_u-1 &\text{if $v=u$} \\
a_u+1-b_{u+1} &\text{if $v\ge u+1$}
\end{cases}
\]
\end{propositions}
\begin{proof}
The case $v=u$ is clear so assume $v\ge u+1$. If we look carefully at \eqref{ref-4.7-44} then we see that it describes
$\Vscr$ as the zeroes of $b_{u+1}$ generic sections in the very ample line bundle $\Oscr_{\PP^{a_u-1}}(1) \boxtimes \Oscr_{\PP^2}(1)$ on $\PP^{a_u-1}\times \PP^2$. It follows from Condition C that $b_{u+1}\le \dim (\PP^2\times \PP^{a_u-1})=a_u+1$. Hence by Bertini (see \cite{H}) we deduce that $\Vscr$ is smooth of dimension $a_u+1-b_{u+1}$.
\end{proof}
\subsection{Estimating the dimension of {$\Ext^1_A(J,J)$}}
In this section $I$ is a graded ideal corresponding to a generic point of $H_\varphi$. We prove the following result
\begin{propositions}
\label{ref-4.4.1-46}
Assume that Condition C holds. Then there exists $F\in \Vscr$ such that for $J=\ker(I\r F)$ we have
\begin{equation}
\label{ref-4.8-47}
\dim_{k} \Ext^1_A(J,J)
\ge
\begin{cases}
\dim H_\psi + a_{v+3} = \dim H_\psi & \text{if $v=u$} \\
\dim H_\psi + a_{v+2} - b_{v+3} + 1 & \text{if $v=u+1$} \\
\dim H_\psi + a_{v+2} - b_{v+3} + 2=\dim H_\psi + 2 & \text{if $v \ge u+2$}
\end{cases}
\end{equation}
\end{propositions}
It will become clear from the proof below that in case $v\ge u+1$ the right-hand side of \eqref{ref-4.8-47} is one higher than the expected dimension.
\medskip
Below let $J \subset I$ be an arbitrary ideal such that $h_J = \psi$. Put $F = I/J$.
\begin{propositions}
We have
\[
\dim_{k} \Ext^1_A(J,J) = \dim H_\psi + \dim_{k} \Hom_A(J,F(-3))
\]
\end{propositions}
\begin{proof}
For $M,N\in \gr A$ write
\[
\chi(M,N)=\sum_i (-1)^i \dim_{k} \Ext^i_A(M,N)
\]
Clearly $\chi(M,N)$ only depends on the Hilbert series of $M$, $N$. Hence, taking $J'$ to be an arbitrary point in $H_\psi$ we have
\[
\chi(J,J) = \chi(J',J') = 1 - \dim_{k} \Ext^1_A(J',J') = 1 - \dim H_\psi
\]
where in the third equality we have used that $\Ext^1_A(J',J')$ is the tangent space to $H_\psi$ \cite{DV2}.
Since $J$ has no socle we have $\pdim J \le 2$. Therefore $\Ext^i_A(J,J) = 0$ for $i \ge 3$. It follows that
\begin{align*}
\dim_{k} \Ext^1_A(J,J) & = -\chi(J,J) + 1 + \dim_{k} \Ext^2_A(J,J) \\
& = \dim H_\psi + \dim_{k} \Ext^3_A(F,J)
\end{align*}
By the appropriate version of Serre duality we have
\[
\Ext^3_A(F,J) = \Hom_A(J,F \otimes \omega_A)^\ast = \Hom_A(J,F(-3))^\ast
\]
This finishes the proof.
\end{proof}
\begin{proof}[Proof of Proposition \ref{ref-4.4.1-46}]
It follows from the previous result that we need to control $\dim_{k} \Hom_A(J,F(-3))$. Of course we assume throughout that Condition C holds and we also use Proposition \ref{ref-3.1.1-19}.
\setcounter{case}{0}
\begin{case}
Assume $v=u$. For degree reasons any extension between $F$ and $F(-3)$ must be split. Thus we have $\Hom_A(F,F(-3)) = \Ext^1_A(F,F(-3)) = 0$. Applying $\Hom_A(-,F(-3))$ to
\[
0 \r J \r I \r F \r 0
\]
we find
\[
\Hom_A(J,F(-3)) = \Hom_A(I,F(-3))
\]
Hence
\[
\dim_{k}\Hom_A(J,F(-3)) = a_{v+3} = 0
\]
\end{case}
\begin{case}
Assume $v=u+1$. As in the previous case we find $\Hom_A(J,F(-3)) = \Hom_A(I,F(-3))$.
Thus a map $J\r F(-3)$ is now given (using Proposition \ref{ref-3.1.1-19}) by a map
\[
\beta:A(-v-2)^{a_{v+2}}\r F(-3)
\]
(identified with a scalar vector as in \eqref{ref-4.1-37}) such that the composition
\[
A(-v-3)^{b_{v+3}} \xrightarrow{M(v+2,v+3)} A(-v-2)^{a_{v+2}} \xrightarrow{\beta} F(-3)
\]
is zero. This translates into the condition
\begin{equation}
\label{ref-4.9-48}
\beta\cdot M(v+2,v+3)_p=0
\end{equation}
where $p$ is the point corresponding to $F$. Now $M(v+2,v+3)$ is an $a_{v+2}\times b_{v+3}$ matrix. Since $b_{v+3}\le a_{v+2}$ (by Proposition \ref{ref-3.1.1-19}) we would expect \eqref{ref-4.9-48} to have $a_{v+2}-b_{v+3}$ independent solutions. To have more, $M(v+2,v+3)$ has to have non-maximal rank. I.e.\ there should be a non-zero solution to the equation
\begin{equation}
\label{ref-4.10-49}
M(v+2,v+3)_p\cdot \delta=0
\end{equation}
This should be combined with (see \eqref{ref-4.7-44})
\begin{equation}
\label{ref-4.11-50}
\lambda\cdot M(u,u+1)_p=0
\end{equation}
We view \eqref{ref-4.10-49} and \eqref{ref-4.11-50} as a system of $a_{v+2}+b_{u+1}$ equations in $\PP^{a_{u}-1}\times \PP^2\times \PP^{b_{v+3}-1}$.
Since (Condition C)
\[
a_{v+2}+b_{u+1}\le \dim (\PP^{a_{u}-1}\times \PP^2\times \PP^{b_{v+3}-1})=a_u+b_{v+3}
\]
the system \eqref{ref-4.10-49}, \eqref{ref-4.11-50} has a solution provided the
divisors in $\PP^{a_{u}-1}\times \PP^2\times \PP^{b_{v+3}-1}$ determined by the
equations of the system have non-zero intersection product.
Let $r,s,t$ be the hyperplane sections in $\PP^{a_{u}-1}$, $\PP^2$ and
$\PP^{b_{v+3}-1}$ respectively. The Chow ring of $\PP^{a_{u}-1}\times
\PP^2\times \PP^{b_{v+3}-1}$ is given by
\begin{equation}
\label{ref-4.12-51}
\ZZ[r,s,t]/(r^{a_u}, s^3, t^{b_{v+3}})
\end{equation}
The intersection product we have to compute is
\[
(s+t)^{a_{v+2}}(r+s)^{b_{u+1}}
\]
This product contains the terms
\begin{gather*}
t^{a_{v+2}-2}s^2 r^{b_{u+1}}\\
t^{a_{v+2}-1}s^2 r^{b_{u+1}-1}\\
t^{a_{v+2}}s^2 r^{b_{u+1}-2}
\end{gather*}
at least one of which is non-zero in \eqref{ref-4.12-51} (using Condition C).
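For convenience we record the binomial expansion behind this claim (this display is ours, not part of the original argument):

```latex
(s+t)^{a_{v+2}}(r+s)^{b_{u+1}}
  = \sum_{i=0}^{a_{v+2}}\sum_{j=0}^{b_{u+1}}
    \binom{a_{v+2}}{i}\binom{b_{u+1}}{j}\,
    s^{i+j}\, t^{a_{v+2}-i}\, r^{b_{u+1}-j}
```

Since $s^3=0$ in \eqref{ref-4.12-51}, only the terms with $i+j\le 2$ survive, and the three monomials listed above are those with $(i,j)=(2,0),(1,1),(0,2)$.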
\end{case}
\begin{case}
Now assume $v\ge u+2$. We compute $\Hom_A(J,F(-3))$ as the homology of the complex obtained by applying $\Hom_A(-,F(-3))$ to \eqref{ref-4.4-41}. Since $G_0=A(-u)$ we have $\Hom_A(G_0,F(-3))=0$ and hence a map $J\r F(-3)$ is given by a map
\[
G_1\oplus F_0 \r F(-3)
\]
such that the composition
\[
G_2\oplus F_1 \xrightarrow{\begin{pmatrix} f_2& \gamma_1\\ 0 & -M\end{pmatrix}}
G_1\oplus F_0\r F(-3)
\]
is zero.
Introducing the explicit form of $(G_i)_i$, $(f_i)_i$ given by \eqref{ref-4.2-38},
and using Proposition \ref{ref-3.1.1-19} we find that a map $J \r F(-3)$ is given by a pair of maps
\[
\mu:A(-v-1)\r F(-3)
\]
\[
\beta:A(-v-2)^{a_{v+2}}\r F(-3)
\]
(identified with scalar vectors as in \eqref{ref-4.1-37}) such that the composition
\[
A(-v-2)^2\oplus A(-v-3)^{b_{v+3}} \xrightarrow{
\begin{pmatrix}
-l_2 & l_1 & \gamma_1(v+1,v+3)\\
0 & 0 & -M(v+2,v+3)
\end{pmatrix}
}
A(-v-1)\oplus A(-v-2)^{a_{v+2}} \xrightarrow{
\begin{pmatrix}
\mu & \beta
\end{pmatrix}
} F
\]
is zero.
Let $p$ be the point associated to $F$. Since $(l_1)_p=(l_2)_p=0$ we obtain the conditions
\begin{equation}
\label{ref-4.13-52}
\begin{pmatrix}
\mu & \beta
\end{pmatrix}
\begin{pmatrix}
\gamma_1(v+1,v+3)_p\\M(v+2,v+3)_p
\end{pmatrix}=0
\end{equation}
To use this we have to know what $\gamma_1(v+1,v+3)$ is. From the commutative diagram \eqref{ref-4.3-40} we obtain the identity
\[
\rho \cdot \gamma_1(v+1,v+3)=\lambda\cdot M(u,v+3)
\]
where $\lambda=\gamma_0(u,u)$. Evaluation in $p$ yields
\[
\gamma_1(v+1,v+3)_p=\lambda\cdot M(u,v+3)_p
\]
so that \eqref{ref-4.13-52} is equivalent to
\[
\begin{pmatrix}
\mu & \beta
\end{pmatrix}
\begin{pmatrix}
\lambda\cdot M(u,v+3)_p\\M(v+2,v+3)_p
\end{pmatrix}=0
\]
Now $
\begin{pmatrix}
\lambda \cdot M(u,v+3)_p \\
M(v+2,v+3)_p
\end{pmatrix}
$ is an $(a_{v+2}+1)\times b_{v+3}$ matrix. Since $b_{v+3} < a_{v+2}+1$ (Proposition \ref{ref-3.1.1-19}) we would expect \eqref{ref-4.13-52} to have $a_{v+2}+1-b_{v+3}$ independent solutions. To have more, $
\begin{pmatrix}
\lambda \cdot M(u,v+3)_p \\
M(v+2,v+3)_p
\end{pmatrix}
$ has to have non-maximal rank. I.e.\ there should be a non-zero solution to the equation
\[
\begin{pmatrix}
\lambda\cdot M(u,v+3)_p\\M(v+2,v+3)_p
\end{pmatrix}\cdot \delta=0
\]
which may be broken up into two sets of equations
\begin{equation}
\label{ref-4.14-53}
\lambda\cdot M(u,v+3)_p\cdot \delta=0
\end{equation}
\begin{equation}
\label{ref-4.15-54}
M(v+2,v+3)_p\cdot \delta=0
\end{equation}
and we also still have
\begin{equation}
\label{ref-4.16-55}
\lambda\cdot M(u,u+1)_p=0
\end{equation}
We view \eqref{ref-4.14-53}, \eqref{ref-4.15-54} and \eqref{ref-4.16-55} as a system of $1+a_{v+2}+b_{u+1}$ equations in the variety $\PP^{a_{u}-1} \times \PP^2 \times \PP^{b_{v+3}-1}$. Since (Condition C)
\[
1 + a_{v+2} + b_{u+1} = \dim (\PP^{a_{u}-1} \times \PP^2 \times \PP^{b_{v+3}-1}) = a_u + b_{v+3}
\]
the number of equations equals the dimension, so a non-zero intersection product guarantees the existence of a solution. The intersection product we have to compute is
\[
(r+s+t)(s+t)^{a_{v+2}}(r+s)^{b_{u+1}}
\]
This product contains the term
\[
s^2 t^{a_{v+2}-1}r^{b_{u+1}}
\]
which is non-zero in the Chow ring (using Condition C).
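A sketch of the coefficient computation, which the text leaves implicit (this display is ours): the monomial can only arise by taking $s$ or $t$ from the first factor, so

```latex
\bigl[s^2 t^{a_{v+2}-1} r^{b_{u+1}}\bigr]\,
   (r+s+t)(s+t)^{a_{v+2}}(r+s)^{b_{u+1}}
  = \binom{a_{v+2}}{1} + \binom{a_{v+2}}{2} > 0
```

The exponent bounds $2\le 2$, $a_{v+2}-1\le b_{v+3}-1$ and $b_{u+1}\le a_u-1$ needed for non-vanishing in the Chow ring follow from Condition C in this case.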
\end{case}
\end{proof}
\subsection{Estimating the dimension of { $\Ext^1_{\hat{A}}(\hat{J},\hat{J})$}}
In this section we prove the following result.
\begin{propositions}
Assume that Condition C holds. Let $I \in H_\varphi$ be generic and let $J$ be as in Condition D. Then
\begin{equation}
\label{ref-4.17-56}
\dim_{k} \Ext^1_{\hat{A}}(\hat{J},\hat{J})\le
\begin{cases}
\dim H_\varphi+a_u-1 & \text{if $v=u$}\\
\dim H_\varphi+a_u+1-b_{u+1} & \text{if $v\ge u+1$}
\end{cases}
\end{equation}
\end{propositions}
\begin{proof}
It has been shown in \cite{DV2} that $H_\varphi$ is the moduli-space of ideals in $A$ of projective dimension one which have Hilbert series $\varphi$. Let $\tilde{I}\subset A_{H_\varphi}$ be the corresponding universal bundle. Let $\Mscr$ be the moduli-space of pairs $(J,I)$ such that $I\in H_\varphi$ and $h_J=\psi$. To show that $\Mscr$ exists one may realize it as a closed subscheme of
\[
\underline{\Proj}\, S_{H_\varphi}(\tilde{I}_u\oplus\cdots \oplus
\tilde{I}_v)
\]
Sending $(J,I)$ to $I$ defines a map $q:\Mscr\r H_\varphi$. We have
an exact sequence
\begin{equation}
\label{ref-4.18-57}
0\r T_{(J,I)}q^{-1}I\r T_{(J,I)}\Mscr\r T_{I} H_\varphi
\end{equation}
Assume now that $I$ is generic and put $\Vscr=q^{-1}I$ as above. By Proposition \ref{ref-4.3.1-45} we know that $\Vscr$ is smooth. Hence
\[
\dim T_{(J,I)} \Mscr \le \dim \Vscr + \dim H_\varphi
\]
Applying Proposition \ref{ref-4.3.1-45} again, it follows that for $I$ generic the dimension of $T_{(J,I)}\Mscr$ is bounded by the right hand side of \eqref{ref-4.17-56}.
Since $\Ext^1_{\hat{A}}(\hat{J},\hat{J})$ is the tangent space of $\Mscr$ at $(J,I)$ for $\hat{J}=(J \,\,\, I)$ this finishes the proof.
\end{proof}
\begin{remarks}
It is not hard to see that \eqref{ref-4.17-56} is actually an equality. This follows from the easily proved fact that
the map $q$ is generically smooth.
\end{remarks}
\subsection{Tying things together}
Combining the results of the previous two sections we see that if Condition C holds we have for a suitable
choice of $J$
\[
\dim_{k} \Ext^1_A(J,J) - \dim_{k} \Ext^1_{\hat{A}}(\hat{J},\hat{J})
\ge
\begin{cases}
\dim H_\psi-\dim H_\varphi+a_{v+3}-a_u+1&\text{if $v=u$}\\
\dim H_\psi-\dim H_\varphi+a_{v+2}-b_{v+3}-a_u+b_{u+1}&\text{if $v=u+1$}\\
\dim H_\psi-\dim H_\varphi+a_{v+2}-b_{v+3}-a_u+b_{u+1}+1&\text{if $v\ge u+2$}
\end{cases}
\]
We may combine this with Proposition \ref{ref-3.2.1-20} which works out as (using Proposition \ref{ref-3.1.1-19})
\[
\dim H_\psi-\dim H_\varphi=
\begin{cases}
a_u+b_{v+3}-1 &\text{if $v=u$}\\
a_u-b_{u+1}-a_{v+2}+b_{v+3}+1&\text{if $v=u+1$}\\
a_u-b_{u+1}-a_{v+2}+b_{v+3}&\text{if $v\ge u+2$}
\end{cases}
\]
We then obtain
\[
\dim_{k} \Ext^1_A(J,J)-\dim_{k} \Ext^1_{\hat{A}}(\hat{J},\hat{J})
\ge
\begin{cases}
b_{v+3}&\text{if $v=u$} \\
1&\text{if $v\ge u+1$}
\end{cases}
\]
Hence in all cases we obtain a strictly positive lower bound. This finishes the proof that Condition C implies Condition D.
\begin{remarks}
As in Remark \ref{ref-3.4.1-33} it is possible to prove directly the converse implication $D \Rightarrow C$.
\end{remarks}
\section{The implication $D \Rightarrow A$}
In this section $(\varphi,\psi)$ will have the same meaning as in \S\ref{ref-3-14} and we also keep the associated notations. We assume that Condition D holds. Let $I$ be a graded ideal corresponding to a generic point in $H_\varphi$. According to Condition D there exists an ideal $J\subset I$ with $h_J=\psi$ such that there is an $\eta \in \Ext^1_A(J,J)$ which is not in the image of $\Ext^1_{\hat{A}}(\hat{J},\hat{J})$.
We identify $\eta$ with a first order deformation $J'$ of $J$. I.e.\ $J'$ is a flat $A[\epsilon]$-module, where $\epsilon^2=0$, such that $J'\otimes_{k[\epsilon]}k\cong J$ and such that the short exact sequence
\[
0\r J \xrightarrow{\epsilon\cdot} J'\r J \r 0
\]
corresponds to $\eta$.
In \S\ref{ref-4.2-39} we have written $J$ as the homology of a complex. It follows for example from (the dual version of) \cite[Thm 3.9]{lowen3}, or directly, that $J'$ is the homology of a complex of the form
\begin{equation}
\label{ref-5.1-58}
0 \r G_3[\epsilon] \xrightarrow{
\begin{pmatrix}
f'_3 \\
P\epsilon
\end{pmatrix}
} G_2[\epsilon] \oplus F_1[\epsilon]
\xrightarrow{
\begin{pmatrix}
f'_2 & \gamma'_1 \\
Q\epsilon & -M'
\end{pmatrix}
}
G_1[\epsilon]\oplus F_0[\epsilon]
\xrightarrow{
\begin{pmatrix}
f'_1 & \gamma'_0
\end{pmatrix}}
G_0[\epsilon]\r 0
\end{equation}
where for a matrix $U$ over $A$, $U'$ means a lift of $U$ to $A[\epsilon]$. Recall that $G_3=A(-v-3)$.
\begin{lemma} \label{ref-5.1-59}
We have $P(v+3,v+3)\neq 0$.
\end{lemma}
\begin{proof}
Assume on the contrary $P(v+3,v+3)=0$. Using Proposition \ref{ref-3.1.1-19} it follows that $P$ has its image in
$F_{11}=\oplus_{j\le u+1}A(-j)^{b_{j}}$.
The fact that \eqref{ref-5.1-58} is a complex implies that $Qf_3=MP$. Thus we have a commutative diagram
\[
\begin{CD}
0 @>>> G_3 @>{f_{3}}>> G_2 @>{f_2}>> G_1 @>{f_1}>> G_0 \\
@. @VP_1VV @VVQV\\
@. F_{11}@>M_{11}>> F_0\\
@. @VP_2VV @|\\
@. F_1 @>>M> F_0
\end{CD}
\]
where $P_2$ is the inclusion and $M_{11}=MP_2$, $P=P_2P_1$. Put
\[
D=\coker(F_{11}\r F_1)
\]
Then $(P_1,Q)$ represents an element of $\Ext^2_A(F,D)=\Ext^1_A(D,F(-3))^\ast=0$, where the last equality is for
degree reasons.
It follows that there exist maps
\begin{align*}
R:G_1&\r F_0\\
T_1:G_2&\r F_{11}
\end{align*}
such that
\begin{align*}
Q&=Rf_2+M_{11}T_1\\
P_1&=T_1f_3
\end{align*}
Putting $T=P_2T_1$ we obtain
\begin{align*}
Q&=Rf_2+MT\\
P&=Tf_3
\end{align*}
We can now construct the following lifting of the commutative diagram
\eqref{ref-4.5-42}:
\[
\xymatrix{
0 \ar[r] & G_3[\epsilon] \ar[r]^(0.4)
{\begin{pmatrix}f'_3\\ P\epsilon\end{pmatrix}}
& G_2[\epsilon]\oplus F_1[\epsilon] \ar[d]_{\begin{pmatrix}T\epsilon & -1\end{pmatrix}}
\ar[rr]^{
\begin{pmatrix}
f'_2 & \gamma'_1\\
Q\epsilon & -M'
\end{pmatrix}
}
&&
G_1[\epsilon]\oplus F_0[\epsilon]\ar[d]^{\begin{pmatrix} -R\epsilon & 1\end{pmatrix}}
\ar[rr]^(0.6){\begin{pmatrix}f'_1 & \gamma'_0\end{pmatrix}}
&&G_0[\epsilon] \ar[r]& 0\\
&0\ar[r]& F_1[\epsilon] \ar[rr]_{M'+R\gamma_1\epsilon} &&F_0[\epsilon]\ar[r]& 0
}
\]
Taking homology we see that there is a first order deformation $I'$ of $I$ together with a lift of the inclusion $J\r I$ to a map $J'\r I'$. But this contradicts the assumption that $\eta$ is not in the image of $\Ext_{\hat{A}}^1(\hat{J},\hat{J})$.
\end{proof}
In particular, Lemma \ref{ref-5.1-59} implies that $b_{v+3} \neq 0$.
It will now be convenient to rearrange \eqref{ref-5.1-58}. Using the previous lemma and the fact that the rightmost map in \eqref{ref-5.1-58} is split it follows that $J'$ has a free resolution of the form
\[
0\r G_3[\epsilon]\xrightarrow{
\begin{pmatrix}
\epsilon\\
\alpha_0+\alpha_1\epsilon
\end{pmatrix}
}
G_3[\epsilon]\oplus H_1[\epsilon]
\xrightarrow{
\begin{pmatrix}
\beta_0+\beta_1\epsilon & \delta_0+\delta_1\epsilon
\end{pmatrix}
}
H_0[\epsilon]\r J' \r 0
\]
which leads to the following equations
\begin{align*}
\delta_0\alpha_0&=0\\
\beta_0+\delta_1\alpha_0+\delta_0\alpha_1&=0
\end{align*}
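These equations are obtained by expanding the composition of the two maps in the resolution and comparing the coefficients of $1$ and $\epsilon$ (using $\epsilon^2=0$); this verification is ours:

```latex
\begin{pmatrix}\beta_0+\beta_1\epsilon & \delta_0+\delta_1\epsilon\end{pmatrix}
\begin{pmatrix}\epsilon\\ \alpha_0+\alpha_1\epsilon\end{pmatrix}
 = \delta_0\alpha_0
   + \bigl(\beta_0+\delta_1\alpha_0+\delta_0\alpha_1\bigr)\epsilon = 0
```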
Using these equations we can construct the following complex $C_t$ over $A[t]$
\[
0\r G_3[t]\xrightarrow{
\begin{pmatrix}
t\\
\alpha_0+\alpha_1t
\end{pmatrix}
}
G_3[t]\oplus H_1[t]
\xrightarrow{
\begin{pmatrix}
\beta_0-\delta_1\alpha_1 t& \delta_0+\delta_1 t
\end{pmatrix}
}
H_0[t]
\]
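One checks, using the two displayed equations above, that $C_t$ is indeed a complex; the $t^2$-terms cancel (this routine verification is ours):

```latex
\begin{pmatrix}\beta_0-\delta_1\alpha_1 t & \delta_0+\delta_1 t\end{pmatrix}
\begin{pmatrix} t\\ \alpha_0+\alpha_1 t\end{pmatrix}
 = \delta_0\alpha_0
   + \bigl(\beta_0+\delta_1\alpha_0+\delta_0\alpha_1\bigr)t = 0
```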
For $\theta\in k$ put $C_\theta=C_t\otimes_{k[t]}k[t]/(t-\theta)$. Clearly $C_0$ is a resolution of $J$. By semi-continuity we find that for all but a finite number of $\theta$, $C_\theta$ is the resolution of a rank one $A$-module $J_\theta$. Furthermore we have $J_0=J$ and $\pd J_\theta=1$ for $\theta\neq 0$.
\medskip
Let $\Jscr_\theta$ be the rank one $\Oscr_{\PP^2}$-module corresponding to $J_\theta$. $\Jscr_\theta$ represents a point of $H_\psi$. Since $I/J$ has finite length, $J_0=J$ and $I$ define the same object in $\Hilb_n(\PP^2)$. Hence we have constructed a one parameter family of objects in $\Hilb_n(\PP^2)$ connecting a generic object in $H_\varphi$ to an object in $H_\psi$. This shows that indeed $H_\varphi$ is in the closure of $H_\psi$.
\section{Introduction}
\vspace{-0.3cm}
\label{sec:intro}
The mammalian brain is a complex system, mainly due to the large number of neurons that form complicated interconnected networks. Human brains contain almost 100 billion neurons, and mouse brains have about 75 million neurons \cite{azevedo2009equal}. Each neuron can receive thousands of inputs and can innervate thousands of downstream neurons. Since brain functions require coordinated activation of large groups of neurons across different brain regions, recording and understanding neural activity at the circuit level are fundamental to understanding cognitive brain functions.
In the 1950s, Hubel and Wiesel discovered links between visual stimuli and neurons in the visual cortex using implanted electrodes that monitor single cells. The current revolution in calcium imaging enables such observation of neural function for a multitude of neurons without invasive electrodes. Calcium imaging with fluorescent proteins provides a convenient optical way to record large neuron populations with precision down to the single action potential level \cite{chen2013ultrasensitive}. With progress from two parallel areas, calcium-sensitive fluorescent protein engineering \cite{chen2013ultrasensitive,dana2016sensitive} and calcium imaging techniques \cite{ghosh2011miniaturized,ahrens2013whole,zong2017fast}, \textit{in vivo} calcium imaging has become a standard method to investigate neural circuit mechanisms underlying cognitive behaviors. However, in contrast to previous sparse-neuron, short-term recordings, new calcium imaging methods generate a huge volume of data, with hundreds to thousands of neurons recorded over several hours. To analyze these large datasets, high-throughput automated image analysis methods, such as advanced image segmentation techniques, are required not only to identify single neurons but also to detect single calcium events.
Over the years, researchers have explored numerous image segmentation techniques for the analysis of biomedical images. Existing segmentation techniques \cite{acton2009biomedical} such as thresholding, edge detection, active contours \cite{mansouri2004constraining,Mukherjee2015} and morphological methods \cite{acton2000area,bosworth2003morphological} are still limited to single-neuron analysis. Recently, efforts in the image analysis research community have attempted to avoid identifying multiple components as a single component by using a graph-cut method \cite{browet2016cell} and an iterative local level set evolution \cite{Wang2017Bact3D}. Both approaches allow identification of individual cells; however, the seeded segmentation strategy in both methods is challenged in our datasets by noisy signals from non-activated neurons. Moreover, \textit{in vivo} videos make it difficult to locate seeds in individual cells. There are also post-segmentation algorithms that attempt to isolate connected components, including the watershed transform \cite{yang2014automatic} and concavity identification \cite{kong2011partitioning}. The former fails under the intensity inhomogeneity present in our datasets, while the latter suffers from the rough boundaries extracted by preliminary segmentation.
\begin{figure*}[t]
\centering
\includegraphics[width=0.86\textwidth]{FlowChart.png}
\vspace{-0.3cm}
\caption{Flow chart of the shape filtering approach.}
\label{fig:flowchart}
\vspace{-0.6cm}
\end{figure*}
In this paper, we explore a novel perspective, a post-segmentation shape filter, that transforms the preliminary segmentation produced on a temporal sequence of shapes. It provides a smooth shape evolution path targeting single-neuron activity over time by filtering out unnecessary and incomplete segments and smoothing the boundaries of the preliminary results. To realize this method, a modified smooth nonlinear regression (spline) of shapes on a Riemannian manifold is proposed, based on the shape space introduced in \cite{Anuj2011SRVD}. Although several works \cite{baust2015total,su2012fitting,hinkle2014intrinsic,kuhnel2017stochastic} proposed smooth splines for regression on a manifold by minimizing polynomial energy functionals, they are not able to automatically detect and exclude non-smooth shapes while preserving the structure of the target object. \cite{Liu2017face,Cao2012ShapeRegression} apply deep learning to shape regression for face alignment; these methods cannot split connected components and are not suitable for limited datasets.
\vspace{-0.3cm}
\section{shape filter}
\label{sec:shaperep}
\vspace{-0.3cm}
Due to the limitations of the segmentation methods, some of the results show separate cells as one single connected component. In this paper, we propose a novel method to automatically filter out such outliers and then provide a smooth segmentation of the cell of interest from the rest of the connected component. We design a shape filter that generates a smooth path on a manifold to represent shape evolution through time. Fig. \ref{fig:flowchart} shows the flow chart of the shape filter framework. In the following sections, we give a detailed description of a locally weighted smooth shape regression method that provides a smooth segmentation based on the shape evolution path fitted to the data.
\vspace{-0.6cm}
\subsection{Data preparation and shape representation}
\vspace{-0.2cm}
As shown in Fig. \ref{fig:flowchart}, the input of the shape filter is the preliminary result of any segmentation. One neuron is analyzed at a time. The shape information is extracted and represented by equidistant points located on the boundary of the segmented body in Cartesian coordinates (see Fig. \ref{fig:flowchart}C). We use the square-root velocity (SRV) representation of the shapes \cite{Anuj2007SRV,Anuj2011SRVD} in order to compare the shapes extracted from the time-indexed segmented data. Under the SRV representation, each shape is defined on a Riemannian manifold $\mathcal{M}$ with locally assembled SRV Euclidean coordinates $q$, defined as follows:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
q(s) = \frac{\dot{\beta}(s)}{\sqrt{||\dot{\beta}(s)||}}
\end{equation}
where $\beta(s)$ represents the shape, a closed curve parameterized by arclength $s\in [0,2\pi]$, $\dot{\beta}(s)$ denotes the derivative of this curve and $||\cdot||$ denotes the Euclidean norm. Suppose $\alpha: \mathbb{R} \rightarrow \mathcal{M}$ is the geodesic path between two arbitrary shapes on the defined manifold. Then, the difference of two shapes can be evaluated using the geodesic distance between the two SRV-transformed shapes:
\vspace{-0.2cm}
\begin{equation}
\vspace{-0.3cm}
d_{g} = \int_{0}^{1}\sqrt{\langle\dot{\alpha}(t),\dot{\alpha}(t)\rangle}\,dt
\end{equation}
where $\alpha(0)$ and $\alpha(1)$ represent the initial and final positions of the path, respectively, and $\dot{\alpha}(t)$ denotes the velocity along this geodesic path. In the limit of infinitesimally small $dt$, the geodesic distance is the integrated length of the velocity along the path.
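As a minimal numerical sketch of the two definitions above (assuming a closed curve sampled at equidistant arclength; the straight-line path in the ambient SRV space is used here as a first-order stand-in for the true geodesic, which would additionally require optimizing over rotations and reparameterizations):

```python
import numpy as np

def srv_transform(beta):
    """Square-root velocity (SRV) representation of a sampled curve.

    beta: (N, 2) array of points on a closed curve, assumed sampled at
    (approximately) equidistant arclength values.
    """
    # Derivative of the curve via finite differences along the samples.
    beta_dot = np.gradient(beta, axis=0)
    speed = np.linalg.norm(beta_dot, axis=1)
    # q(s) = beta_dot(s) / sqrt(||beta_dot(s)||)
    return beta_dot / np.sqrt(speed)[:, None]

def shape_distance(q1, q2):
    """Flat-metric approximation of the geodesic distance between two
    SRV-transformed shapes (straight-line path in the ambient space)."""
    n = len(q1)
    return np.sqrt(np.sum((q1 - q2) ** 2) * (2.0 * np.pi / n))

# Example: a circle vs. a slightly elongated ellipse.
s = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
circle = np.stack([np.cos(s), np.sin(s)], axis=1)
ellipse = np.stack([1.2 * np.cos(s), np.sin(s)], axis=1)
d = shape_distance(srv_transform(circle), srv_transform(ellipse))
```

The function names and the toy curves are illustrative; the paper's actual geodesic computation on $\mathcal{M}$ follows \cite{Anuj2007SRV,Anuj2011SRVD}.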
\vspace{-0.3cm}
\subsection{Weighted smoothed nonlinear regression}
\vspace{-0.2cm}
After transforming the shape data into the SRV representation, an optimization problem is solved to filter out the extra components (\textit{outliers}) in the preliminary result by interpolating the likely smooth shapes along the time-indexed calcium firing path. At the same time, the filtered path can also smooth the boundary of segmentation results by imposing several constraints.
We use $\phi: \mathbb{R} \rightarrow \mathcal{M}$ to denote the original evolution path and $\gamma: \mathbb{R} \rightarrow \mathcal{M}$ to represent the filtered new path on the manifold. Both paths are approximately differentiable and carry the geodesic distance defined above. Then, $\gamma$ is estimated by solving the following regression problem, which modifies De Boor's approach \cite{Deboor2001Spline} by introducing automatic outlier detectors ($w$) on the space of a shape manifold:
\vspace{-0.2cm}
\begin{equation}
\vspace{-0.2cm}
\label{eq:main}
\small{\min_{\gamma} \rho\underbrace{\sum_{t=start}^{end}w(t)|\phi(t)-\gamma(t)|^{2}}_{data\ term}+(1-\rho)\underbrace{\int|\mathit{D}^{2}\gamma(t)|}_{smoothness\ term}}
\end{equation}
This modified shape regression model aims to find a minimizer $\gamma$ that balances the trade-off between approaching the original data and smoothing the filtered path (using the MATLAB function csaps). In equation (\ref{eq:main}), $t$ is the time index for each shape, which is also used in the following sections of this paper. $\mathit{D}^{2}\gamma(t)$ is the second derivative of the path $\gamma$, which characterizes the changes of shapes along the fitted path. $\rho\in[0,1]$ is the smoothing parameter that reflects the emphasis on data or smoothness. Note that, when $\rho$ is close to 1, $\gamma$ becomes the spline of the input data $\phi$; and when $\rho$ becomes small, $\gamma$ will be smoother, with fewer shape changes along time in terms of the SRV-transformed shape representation. $w(t)$ is the set of local weights used to estimate the outliers automatically, which will be explained in detail in section \ref{ssec:w}.
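The trade-off in equation (\ref{eq:main}) can be illustrated with a discrete penalized least-squares solve (a hypothetical NumPy stand-in for the csaps call, working componentwise on flattened SRV coordinates and ignoring the manifold structure):

```python
import numpy as np

def weighted_smooth(phi, w, rho):
    """Minimize  rho * sum_t w[t] * |phi[t] - gamma[t]|^2
               + (1 - rho) * sum_t |D^2 gamma[t]|^2
    over gamma, where D^2 is the discrete second difference.

    phi: (T, d) array of flattened coordinates per time index.
    w:   (T,) nonnegative weights.
    rho: smoothing parameter in [0, 1].
    """
    T = len(phi)
    # Second-difference operator D2: (T-2, T).
    D2 = np.zeros((T - 2, T))
    for i in range(T - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    W = np.diag(w)
    # Normal equations of the quadratic objective.
    A = rho * W + (1.0 - rho) * D2.T @ D2
    return np.linalg.solve(A, rho * W @ phi)

# Example: a smooth path with one outlier frame at t = 5, down-weighted.
t = np.arange(10, dtype=float)
phi = np.stack([t, t ** 2 / 10.0], axis=1)
phi[5] += 8.0                       # outlier shape coordinates
w = np.ones(10); w[5] = 0.01        # small weight suppresses the outlier
gamma = weighted_smooth(phi, w, rho=0.5)
```

With the tiny weight at $t=5$, the fitted path largely ignores the outlier frame, which is the mechanism the local weights of section \ref{ssec:w} automate.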
\vspace{-0.3cm}
\subsection{Local weight selections}
\vspace{-0.2cm}
\label{ssec:w}
To detect clutter found in the segmentation of the calcium images, we propose a locally weighted shape regression strategy. Four weight selection methods are discussed in this section, where the third and the fourth are the proposed local weights for the shape filter.
(1) \textit{Unity weighting}: $\mathbf{w_{1}} = \mathbf{1}$ is a column vector with all values equal to 1, which expresses no preference in the weighting of segments.
(2) \textit{Piecewise constant weighting}: The second weighting scheme is given by $w_{2}(t)$, which is a step function (Fig. \ref{fig:w}(i)).
\vspace{-0.2cm}
\begin{equation}
\vspace{-0.2cm}
w_{2}(t) = \left\{
\begin{aligned}
C, &\text{ if not an outlier}\\
0, &\text{ otherwise}
\end{aligned}
\right.
\end{equation}
Any constant $C$ can be chosen as desired. For the outliers, which require additional steps (outside the main regression problem) to identify, the original data are ignored since the data term equals zero.
(3) \textit{Bi3 local shape weighting}: Inspired by the tricube kernel model and the robust local regression model in \cite{altman1992tricube},
the Bi3 local shape weight is defined as:
\vspace{-0.3cm}
\begin{equation}
\label{eq:bi3}
w_{3}(t) = A\left(1-\left(\frac{r(t)}{\mathrm{median}(r(t))+\tau}\right)^{3}\right)^{3}
\vspace{-0.3cm}
\end{equation}
where
\begin{equation}
r(t) = d_{g}(\delta(t), \phi(t))
\vspace{-0.2cm}
\end{equation}
\begin{equation}
\vspace{-0.2cm}
\sigma_r = \frac{1}{N}\sum_{t=1}^{N}|r(t)-\mathrm{median}(r(t))|
\end{equation}
\begin{equation}
\vspace{-0.2cm}
\tau = \sigma_r + (\sigma_r - \min(r(t)))
\end{equation}
where $r(t)$ is the residual geodesic distance from the true data to a fitted smooth spline $\delta: \mathbb{R} \rightarrow \mathcal{M}$ obtained with a small weight on the data term in equation (\ref{eq:main}). $\sigma_r$ specifies the mean deviation of the residuals from $\delta$. $\mathrm{median}(r(t))$ is used in place of the traditional mean of the sequence because outliers would bias the mean. $\tau$ is the tolerance for residual deviations; it automatically provides negative feedback to the data term when a datum lies far beyond the tolerance from the smoothed spline $\delta$. The formula for this value is inspired by the skew measure in statistical analysis. $A$ is a constant that can amplify the proportion of the data term. The cubic weight yields a wider acceptance range when the base of the inner cube is close to zero \cite{altman1992tricube}, which increases the acceptance of shapes similar to the median shape.
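Equation (\ref{eq:bi3}) and its auxiliary definitions can be transcribed directly (a sketch assuming the residual geodesic distances $r(t)$ to the pilot spline $\delta$ have already been computed; clipping negative values of the cubic term to zero is one way to realize the negative feedback described above, and the amplitude $A$ is illustrative):

```python
import numpy as np

def bi3_weights(r, A=1.0):
    """Bi3 local shape weights from residual geodesic distances r(t)."""
    r = np.asarray(r, dtype=float)
    med = np.median(r)
    sigma_r = np.mean(np.abs(r - med))      # mean deviation of residuals
    tau = sigma_r + (sigma_r - r.min())     # tolerance for deviations
    u = r / (med + tau)
    w = A * (1.0 - u ** 3) ** 3
    # Residuals far beyond the tolerance give (1 - u^3) < 0; clip these
    # "negative feedback" values to zero before use in the regression.
    return np.clip(w, 0.0, None)

# Example: one residual is much larger than the rest (an outlier frame).
r = np.array([0.10, 0.12, 0.09, 0.11, 2.50])
w = bi3_weights(r)      # the last weight is driven to zero
```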
(4) \textit{Modified shape Gaussian (sGaussian) weighting}: The Gaussian model is also a popular weighting choice in local regression problems, such as in \cite{loader2006Gaussian}. To adapt the framework of Gaussian weighting to shapes, the weighting may be computed using:
\vspace{-0.3cm}
\begin{equation}
\vspace{-0.3cm}
w_{4}(t)= \frac{1}{\sqrt{2\pi\sigma^{2}}}\exp(-\frac{d_{g}^{2}(\phi(t),q_{median})}{2\sigma^{2}})
\end{equation}
where
\begin{equation}
\vspace{-0.2cm}
\sigma^{2}= \frac{1}{N}\sum_{t=1}^{N}(d_{g}(\phi(t),q_{median}))^{2}
\end{equation}
Here, $q_{median}$ represents the Euclidean median shape along the input path $\phi$ with $N$ shapes in the SRV representation. Moreover, the Euclidean distance of the standard Gaussian case is replaced by the geodesic distance. The weighting response for a sample input datum is shown in Fig. \ref{fig:w}(iii).
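Assuming the geodesic distances $d_g(\phi(t), q_{median})$ to the median shape have been computed, the sGaussian weights follow directly from the two formulas above:

```python
import numpy as np

def sgaussian_weights(d):
    """Modified shape Gaussian weights from the geodesic distances d(t)
    between each shape and the median shape."""
    d = np.asarray(d, dtype=float)
    sigma2 = np.mean(d ** 2)   # variance estimate over the N shapes
    return np.exp(-d ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

# Example: the last frame lies far from the median shape.
d = np.array([0.10, 0.12, 0.11, 0.09, 1.50])
w = sgaussian_weights(d)   # the last weight is strongly suppressed
```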
\vspace{-0.3cm}
\begin{figure}[h]
\centering
\setlength{\tabcolsep}{0.03cm}
\begin{tabular}{ccc}
\includegraphics[width = 0.33\linewidth,height=0.27\linewidth]{W2.png} &
\includegraphics[width = 0.33\linewidth,height=0.27\linewidth]{W3.png} &
\includegraphics[width = 0.33\linewidth,height=0.27\linewidth]{W4.png} \\
\small{(i) Piecewise Constant} & \small{(ii) Bi3 local} & \small{(iii) sGaussian}
\end{tabular}
\vspace{-0.3cm}
\caption{Three weighting responses of a sample input. All of them can modify weights based on the local shape information. However, (i) needs additional analysis to detect the \textit{outliers}; (ii) and (iii) are instantaneous and automatic data-driven weightings. }
\vspace{-0.3cm}
\label{fig:w}
\end{figure}
\vspace{-0.4cm}
\section{Experimental Results}
\vspace{-0.2cm}
The analysis and comparison of the methods examined here are performed on 10 videos with $0.065$ s time sampling. Each frame in the videos has $512\times512$ resolution with $\SI{0.9}{\micro\metre}\times \SI{0.9}{\micro\metre}$ pixels. Activated neurons observed by calcium imaging have diameters that vary from $\SI{10}{\micro\metre}$ to $\SI{20}{\micro\metre}$. For experimental data, we focused on 20 regions of interest with 10 to 40 frames per sample, where neighboring neurons are activated at the same time. In these cases, separating different neurons to individual components is a challenging task. We used basic thresholding and morphological methods to produce the preliminary segmentation results. Then, we compared the experimental results on these datasets using \textit{unity weighting}, \textit{Bi3 local shape weighting} and \textit{modified shape Gaussian weighting} quantitatively and qualitatively.
\begin{table}[b]
\vspace{-0.5cm}
\centering
\caption{Comparison of different weights.}
\vspace{-0.3cm}
\label{tbl:results}
\begin{tabular}{@{}cccc@{}}
\cmidrule(l){2-4}
& Bi3 local & sGaussian & Unity \\ \midrule
Dice & \textbf{0.918} & 0.915 & 0.802 \\
MSE & 0.016 & \textbf{0.010} & 0.226 \\ \bottomrule\end{tabular}
\vspace{-0.3cm}
\end{table}
\begin{figure*}[t]
\centering
\begin{tabular}{cc}
\includegraphics[width = 0.49\textwidth,height = 0.46\textwidth]{L.png} &
\includegraphics[width = 0.49\textwidth,height = 0.46\textwidth]{R.png}\\
\small{Data oriented} & \small{Smoothness oriented}
\end{tabular}
\vspace{-0.3cm}
\caption{Comparison of data and smoothness oriented results. In each section: 1st row: input path; 2nd row: filtered path using unity weight; 3rd row: filtered path using Bi3 local; 4th row: filtered path using sGaussian. The first column shows data oriented results with larger $\rho$ values and the second column shows smoothness oriented results with smaller $\rho$.}
\label{fig:r}
\vspace{-0.2cm}
\vspace*{\floatsep}
\centering
\begin{tabular}{cccccc}
\includegraphics[width = 0.147\textwidth]{b0_01.png} &
\includegraphics[width = 0.147\textwidth]{b0_2.png} &
\includegraphics[width = 0.147\textwidth]{b0_4.png} &
\includegraphics[width = 0.147\textwidth]{b0_6.png} &
\includegraphics[width = 0.147\textwidth]{b0_8.png} &
\includegraphics[width = 0.147\textwidth]{b1.png} \\
\includegraphics[width = 0.147\textwidth]{g0_01.png} &
\includegraphics[width = 0.147\textwidth]{g0_2.png} &
\includegraphics[width = 0.147\textwidth]{g0_4.png} &
\includegraphics[width = 0.147\textwidth]{g0_6.png} &
\includegraphics[width = 0.147\textwidth]{g0_8.png} &
\includegraphics[width = 0.147\textwidth]{g1.png}
\end{tabular}
\vspace{-0.3cm}
\caption{Two-dimensional visualization of the path in shape space using isomap \cite{tenenbaum2000global} dimensionality reduction to project the path of shapes after the shape filter on the Riemannian manifold. From left to right, the values of $\rho$ are 0.01, 0.2, 0.4, 0.6, 0.8, 1. Top row: isomaps using Bi3 local; bottom row: isomaps using sGaussian. The markers on the spline show the time indices. Note that the scale of each diagram increases from left to right.}
\label{fig:iso}
\vspace{-0.4cm}
\end{figure*}
We evaluate the performance of these three weights using the \textit{Dice coefficient} and \textit{normalized mean squared error}. Table \ref{tbl:results} shows the mean scores of 20 experiments. The Dice coefficient (Dice $\in[0,1]$) compares the similarity between two sets: the ground truth segments $S_g$ and the filtered segments $S_f$. They are both 3D volumes of stacked 2D time-indexed results. Dice is calculated by $\frac{2|\mathrm{S_g} \cap \mathrm{S_f}|}{|\mathrm{S_g}|+|\mathrm{S_f}|}$, where $|\cdot|$ denotes the cardinality of the corresponding set. The mean squared error ($\text{MSE}=\frac{1}{Z}\|\mathrm{S_g}-\mathrm{S_f}\|_{2}^{2}$) measures the average squared error between ground truth and test data. The MSE is normalized by the total number of pixels $(Z)$ of all the frames. By comparing the Dice and MSE scores, we demonstrate the superiority of Bi3 local and sGaussian over the unity weighting model.
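Both scores reduce to elementary set operations on the stacked binary masks; a minimal sketch with illustrative toy volumes:

```python
import numpy as np

def dice(sg, sf):
    """Dice coefficient between two binary volumes."""
    sg, sf = sg.astype(bool), sf.astype(bool)
    return 2.0 * np.sum(sg & sf) / (np.sum(sg) + np.sum(sf))

def nmse(sg, sf):
    """Mean squared error normalized by the total number of pixels Z."""
    return np.mean((sg.astype(float) - sf.astype(float)) ** 2)

# Toy 4-frame volumes: two 4x4 squares shifted by one column, so 3/4 of
# each square overlaps -> Dice = 0.75, normalized MSE = 8/64 = 0.125.
sg = np.zeros((4, 8, 8), dtype=bool); sg[:, 2:6, 2:6] = True
sf = np.zeros((4, 8, 8), dtype=bool); sf[:, 2:6, 3:7] = True
```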
More intuitively, Fig. \ref{fig:r} visualizes the comparison of the paths before and after the shape filter in Cartesian coordinates. These results show that the application of the shape filter is twofold: 1) smoothing the non-smooth boundaries of preliminary results; 2) filtering out the extra neuron from the single-neuron segmentation path by substituting a new ``reasonable'' shape. This figure also compares the filtered shapes with different weights graphically. Fig. \ref{fig:r} shows the comparison between data- and smoothness-oriented results based on the choice of $\rho$ in equation (\ref{eq:main}). To precisely verify the influence of $\rho$ on the filtered path, we utilized the isomap dimensionality reduction method \cite{tenenbaum2000global} to project the high-dimensional path on the Riemannian manifold to trajectories on a 2D Cartesian map, as shown in Fig. \ref{fig:iso}. When $\rho$ is near 0, the projected trajectories become smoother and denser. In this case, the shape filter yields a smooth shape regression on the manifold.
Overall, the proposed algorithm is computationally efficient. The major time consumption is the local weighting calculation, with roughly $\mathcal{O}(2nt)$ computational complexity ($n$: number of sampling points; $t$: number of time-indexed data). The performance of the shape filter will degrade if the quantity of input shapes is insufficient to generate a path on the manifold that predicts the evolution of activated neurons.
\vspace{-0.5cm}
\section{Conclusion}
\vspace{-0.3cm}
The proposed SRV-based manifold shape filter provides a powerful theoretical basis for repairing time sequences of shapes obtained in calcium imaging of neurons. The method is agnostic to the initial segmentation even for the images depicting overlapping neurons. The experimental results demonstrate that the proposed weighting methods outperform the method with equal weights. Our shape filter is effective in detecting improper segmentation and addressing the splitting problem.
\bibliographystyle{IEEEbib}
\small
\section{Introduction} \label{sec:intro}
The mergers of close compact object binaries are
promising gravitational-wave (GW) sources \citep{1977ApJ...215..311C}, as demonstrated by the successful detection of the mergers of three massive black hole binaries \citep{2016PhRvX...6d1015A,2016PhRvL.116f1102A,2017PhRvL.118v1101A}. Usually no electromagnetic counterparts are expected from binary black hole mergers unless some pre-merger objects have massive accretion disks, so the information that can be directly inferred is limited. For compact object mergers involving at least one neutron star, the situation is dramatically different. These mergers are expected to launch ultra-relativistic ejecta and neutron-rich sub-relativistic outflows. The ultra-relativistic ejecta can give rise to short gamma-ray bursts (sGRBs) \citep{1989Natur.340..126E,1993ApJ...413L.101K,2004RvMP...76.1143P,2004IJMPA..19.2385Z}, while r-process nucleosynthesis takes place in the neutron-rich sub-relativistic outflows and then generates optical/infrared transients \citep[i.e., the so-called macronova or kilonova; see][]{1998ApJ...507L..59L,2005astro.ph.10256K,2010MNRAS.406.2650M,2013ApJ...774...25K,2013ApJ...775...18B,2013ApJ...775..113T,2017LRR....20....3M}. After the historic detection of the GW emission from binary black holes, the community has been looking forward to catching neutron star mergers with the advanced LIGO/Virgo. The first electromagnetic counterpart of such GW events is widely believed to be the macronova/kilonova, since its emission is almost isotropic \citep{2012ApJ...746...48M} and, moreover, a few candidates have already been reported in GRB 130603B \citep{2013Natur.500..547T,2013ApJ...774L..23B}, GRB 060614 \citep{2015NatCo...6E7323Y, 2015ApJ...811L..22J} and GRB 050709 \citep{2016NatCo...712898J}. The sGRBs, however, are widely known to be beamed with a typical half-opening angle of $\sim 0.1$ rad, which suppresses the GRB/GW association rate very effectively.
Therefore it is widely suspected that the first GRB/GW association will not be established in the 2020s, when the advanced LIGO/Virgo are running at their full sensitivity \citep{2015ApJ...809...53C, 2016ApJ...827L..16L}. Very recently it has been noticed that the GRB/GW association chance can be as high as $\sim 10\%$, since the neutron star merger events detectable by advanced LIGO/Virgo are very nearby and hence some off-beam events (if the ejecta are uniform) or off-axis events (if the ejecta are structured) can still be detectable \citep{2017arXiv170807008J}. Even so, it is still less likely that the first neutron star merger GW event would be accompanied by a sGRB.
On 2017 August 17, the LIGO and Virgo detectors simultaneously detected a transient GW signal
that is consistent with the merger of a pair of neutron stars \citep{LVC2017}. Surprisingly, at 12:41:06.47 UT on 17 August 2017,
the Fermi Gamma-Ray Burst Monitor (GBM) triggered and located GRB 170817A \citep{vonKienlin2017}, which is just about 1.7
seconds after the GW signal and the location also overlaps with the GW event \citep[][]{Blackburn2017}.
The optical/infrared/ultraviolet followup observations \citep[e.g.][]{Coulter2017,Pian2017} found a bright unpolarized source \citep{Covino2017} and the high quality spectra are
well consistent with the macronova/kilonova model (initially it was dominated by the lanthanide-free outflow region
that may be mainly contributed by the accretion disk wind or the neutrino-driven mass loss of the hypermassive neutron star formed in
the merger; and at late times it was dominated by the emission from the lanthanide-rich region). To the surprise of the community, a remarkable GW/GRB/macronova association
is firmly established in the first GW event involving neutron star(s). The long-standing prediction that neutron star mergers are the sources of
short duration GRBs \citep{1989Natur.340..126E} has thus been directly confirmed. Moreover, the GW/GRB/macronova association has some far-reaching implications
for both physics and astrophysics, which are the focus of this work.
After the claim of the possible detection of a transient associated with GW150914 by Fermi-GBM
\citep{2016ApJ...826L...6C}, we had discussed some
implications of the transient/GW association \citep{2016ApJ...827L..16L}. This work extends our previous
approaches significantly. In addition to comparing GRB 170817A to other sGRBs and measuring the velocity of GWs,
we further test the Einstein Equivalence Principle (i.e., for the specific scenario in which
photons and GWs may not follow the same trajectories in a gravitational field), and rule out the ``Dark Matter
Emulators'' and some dark energy models. Moreover, with the unambiguous detection of a large amount of r-process
elements in the macronova associated with GW170817, we show that neutron star mergers are indeed the main sites of
the very heavy elements in the Universe and the binary strange star merger model for GRB 170817A is ruled out.
\section{GRB170817A and the previous SGRBs}\label{sec:compare}
\citet{2016ApJ...827L..16L} suggested to test the merger origin of old sGRBs via the comparison with
the newly detected GRBs/GW events.
If these GW-associated GRB events are
found to be similar to the (old) events without GW observation data in
many aspects, the merger scenario for sGRBs
may be supported. Such a test is likely non-trivial;
one caveat is that the advanced LIGO/Virgo can only reach $z\leq 0.1$ for neutron star mergers.
For such local events, some merger-driven GRBs can be detectable even when
our line of sight is outside the cone of the ``uniform'' relativistic ejecta
or a bit far from the symmetric axis of the structured outflow \citep[e.g.][]{2017arXiv170807008J,2017arXiv170807488K,Yamazaki2002}.
The shock breakout of relativistic ejecta from surrounding sub-relativistic outflow launched during the
merger may also generate some under-luminous GRBs \citep[e.g.][]{Kasliwal2017}.
Therefore, the GW-associated GRBs are likely dominated by an apparently ``under-luminous'' group
and the goal outlined in \citet{2016ApJ...827L..16L} may be potentially achievable only when a sub-group of bright local sGRBs
have been detected.
Since GRB 170817A is the first short burst unambiguously associated with a GW event, it is necessary to be compared with other sGRBs.
Following \citet{2016ApJ...827L..16L} we present the $E_{\rm p,rest}-E_{\rm iso}$ and $E_{\rm p,rest}-L_\gamma$ diagrams, where $(E_{\rm p},~E_{\rm iso},~L_\gamma)$ are the (spectral peak energy, isotropic equivalent energy, luminosity) of the prompt emission, respectively, and the subscript ${\rm rest}$ represents the parameter(s) measured in the host galaxy frame of the burster. Only the sGRBs with the well measured spectra are included.
As shown in Fig.\ref{fig:relation}, GRB 170817A is the weakest sGRB detected so far and its $E_{\rm iso}$ and $L_\gamma$ are more than two orders of magnitude lower than those recorded before. However, its $E_{\rm p,rest}=187 \pm 63$ keV \citep{2017ApJ...848L..14G} is comparable to that of quite a few sGRBs \citep{2017ApJ...848L..14G}. Therefore, GRB 170817A does not follow
the regular correlations (see the solid lines in Fig.\ref{fig:relation}). One possible interpretation is that GRB 170817A is an off-beam/off-axis event or a shock breakout event.
In Fig.\ref{fig:relation} we have also compared sGRB 170817A and the long event GRB 980425, the closest bursts in each group. Surprisingly, sGRB 170817A and GRB 980425, two events with completely different progenitors, have rather similar $L_\gamma$ and $E_{\rm p,rest}$ (see Fig.\ref{fig:relation}). If not just a coincidence, this might indicate similar radiation processes. The progenitor of GRB 980425 is known to be a massive star. Its prompt radiation process is still unclear, and an attractive model is the shock breakout of a relativistic outflow from the stellar envelope with a significant density gradient \citep{Kulkarni1998}. For sGRB 170817A, originating from a neutron star binary merger, there was certainly no stellar envelope. Numerical simulations suggest that the sub-relativistic outflow launched during the merger can play a similar role, and GRB 170817A could be a shock breakout event \citep{Kasliwal2017}.
\begin{figure}[ht!]
\figurenum{1}\label{fig:relation}
\centering
\includegraphics[angle=0,scale=0.42]{f1a.eps}
\includegraphics[angle=0,scale=0.42]{f1b.eps}
\caption{The left and right panels present the correlations between the rest frame spectral
peak energy $E_{\rm p,rest}$ and the isotropic total energy $E_{\rm iso}$ and the luminosity
$L_\gamma$ of sGRBs, respectively.
The solid lines are the best-fit correlations, i.e., $\log E_{\rm p,rest}=(3.24\pm0.10)+(0.45\pm0.06) \log(E_{\rm iso}/10^{52}{\rm erg})$ and $\log E_{\rm p,rest}=(2.88\pm0.10)+(0.42\pm0.08) \log(L_\gamma/10^{52}{\rm erg~s^{-1}})$, while the dashed lines represent 3-sigma scatters. Only the sGRBs with well-measured spectra are included.
Clearly GRB 170817A does not well follow these two correlations. The data of GRB 980425 and GRB 170817A are adopted from \citet{Ghisellini2006} and \citet{2017ApJ...848L..14G}, respectively. Other data are either taken
from \citet{ZhangFW2012} and \citet{Gruber2014} or analyzed in this work.}
\hfill
\end{figure}
\section{Time lag between the GW and GRB signals: astrophysical and physical implications}\label{sec:lag}
\subsection{The astrophysical implications of the $\sim 1.7$ s time lag between the GW and GRB signals}\label{subsection:time lag}
In \citet[][Sec.3 therein]{2016ApJ...827...75L} the model-dependent time delay between the GW and GRB signals (i.e., $\Delta t_{\rm GW-GRB}$) has been extensively investigated. As summarized in their Tab.1,
the general prediction is $\Delta t_{\rm GW-GRB} \sim 0.01-{\rm a~few}$ seconds, depending on the collapse time of the hypermassive/supramassive remnant formed in the binary neutron star mergers and on the energy dissipation process/radius.
The $\sim 1.7$ s time delay between GW170817 and GRB 170817A is in agreement with the previous predictions. It could indicate that the thermally supported hypermassive/supramassive neutron star did not collapse until the neutrinos had leaked out on a timescale of $\sim 1$ s, or that the (magnetic) energy dissipation took place at $\sim 10^{15}-10^{16}$ cm, or that our line of sight is away from the ejecta edge (by an angle $\Delta \theta$) and the prompt emission started at a radius of $\sim 4.5\times 10^{12}~{\rm cm}~(\Delta \theta/0.15)^{-2}$.
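The last estimate follows from the extra path length for emission at radius $R$ viewed at an angle $\Delta\theta$ outside the ejecta edge, $\Delta t \approx R\Delta\theta^{2}/2c$; a quick numerical check with the values quoted above:

```python
# Geometric off-axis delay: Delta t ~ R * (Delta theta)^2 / (2 c).
c = 2.998e10          # speed of light, cm/s
R = 4.5e12            # quoted prompt-emission radius, cm
dtheta = 0.15         # viewing angle beyond the ejecta edge, rad
delay = R * dtheta ** 2 / (2.0 * c)   # ~1.7 s
```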
At least for GW170817/GRB 170817A, the specific model developed to explain the sGRBs with extended X-ray emission, which predicts $\Delta t_{\rm GW-GRB}\sim 10^{2}-10^{4}$ s \citep{2015MNRAS.448.2624C, 2015ApJ...802...95R}, has been ruled out. With a reasonably large GW/GRB association sample expected in the next decade, it will be extremely interesting to see whether the distribution of the intrinsic time delays is narrow or wide, or even highly structured, with which the long-$\Delta t_{\rm GW-GRB}$ models can be partly confirmed or convincingly ruled out.
\subsection{Measuring the GW velocity, testing the equivalence principle and ruling out some modified gravity models for dark matter and dark energy}\label{subsec:vg}
\subsubsection{Measuring the GW velocity}\label{subsubsec:velocity}
In some modified gravity theories aiming to explain away dark matter or dark energy, GWs travel in vacuum at velocities that can be different from the speed of light \citep[i.e., $\varsigma \equiv (c-v_g)/c\neq 0$; see e.g.][for reviews]{2012PhR...513....1C,Joyce2015}.
In this work we assume a constant $\varsigma$.
The sub-luminal movement of gravitons (i.e., $\varsigma>0$) has already been tightly constrained by the absence of gravitational Cerenkov radiation of ultra-high energy cosmic rays \citep{2001JHEP...09..023M}
\footnote{In order to ``save" some dark energy models, it is argued in some literature that currently no extra-galactic source of ultra-high energy cosmic rays has been identified yet and these particles may have a galactic origin, for which the Vainshtein screening mechanism is at play and the above constraint can not be applied to the cosmological data \citep{2017A&A...600A..40N}. However, the GW170817/macronova association sets an independent stringent constraint on the sub-luminal movement of gravitons (see eq.(\ref{eq:constr-2})).\label{footnote-1}}.
The superluminal constraints of gravitons (i.e., $\varsigma<0$) are weak and model-dependent \citep{2014PhRvD..89h4067Y,2016PhRvL.116f1101B,2015JCAP...03..016A,2016JCAP...02..053B}.
The simultaneously radiated GW and electromagnetic signals can set stringent/robust constraint on $\varsigma$ \citep{1998PhRvD..57.2061W,2014PhRvD..90d4048N,2016ApJ...827...75L} because after traveling a distance of $D\sim 10^{2}$ Mpc, even a very tiny $ \varsigma$ will induce a time delay of
$\Delta t_{\varsigma} \approx 1~{\rm s}~ ({\varsigma \over 10^{-16}}) ({D \over 100~\rm Mpc})$.
Note that in the absence of equivalence principle violation, $\Delta t_{\rm GW-GRB}=\Delta t_{\varsigma}+\Delta t_{\rm e}$, where $\Delta t_{\rm e}$ represents the intrinsic delay of the emitting times of the GW signal and the GRB.
In the merger-driven scenario, the GW signal always precedes the GRB emission, so we have $\Delta t_{\rm e}\geq 0$ and hence $\Delta t_{\rm GW-GRB}\geq \Delta t_{\varsigma}$.
For GW170817/GRB170817A with $\Delta t_{\rm GW-GRB}\sim 1.7$ s and $D\sim 40$ Mpc, the constraint reads \citep[see also][]{2017ApJ...848L..13A}
\begin{equation}
-4.3\times 10^{-16}\leq \varsigma\leq 0.
\label{eq:constr-1}
\end{equation}
Such results imply that the super-luminal movement of gravitons, if any, exceeds the speed of light by at most $1.3\times 10^{-5}~{\rm cm~s^{-1}}$.
A reliable constraint on the sub-luminal movement of GWs with $\Delta t_{\rm GW-GRB}$ for a single GW/GRB association event is less straightforward. This is because $\Delta t_{\rm e}$ could be long (for instance $\sim 10^{2}-10^{4}$ s or even longer, as speculated in \cite{2015ApJ...802...95R}), which hampers a reliable constraint on $\varsigma$. The problem can be solved in the (near) future when an NS-BH merger-driven GW/GRB event has been successfully detected, for which a small $\Delta t_{\rm e}<T_{90}$ is predicted, where $T_{90}$ is the duration of the prompt emission of the GRB \citep{2016ApJ...827...75L}. In the current case, a constraint on the sub-luminal movement of GWs is still possible, since the optical emission of the macronova/kilonova is known to appear within 1 day after the merger \citep{2013ApJ...774...25K,2013ApJ...775...18B}. The successful detection of macronova/kilonova emission at $t_{\rm mn,det}\sim 0.5$ days suggests that the time delay of the arrival of the GW signal due to its sub-luminal movement cannot be longer than $\sim 0.5$ days, and then we have
\begin{equation}
0\leq \varsigma \leq 10^{-11}(t_{\rm mn,det}/4\times 10^{4}~{\rm s}).
\label{eq:constr-2}
\end{equation}
\citet{2017ApJ...848L..13A} reported a much tighter constraint on the sub-luminal movement of gravitons by (arbitrarily) assuming a $\sim 10$s intrinsic delay between the merger and the prompt GRB emission. Our constraint is weaker but less assumption-dependent.
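Both bounds amount to simple arithmetic on the light travel time $D/c$; a quick numerical check with the values quoted above:

```python
# Bounds on varsigma = (c - v_g)/c from GW170817 timing.
Mpc = 3.086e24                     # cm
c = 2.998e10                       # cm/s
D = 40 * Mpc                       # distance to GW170817
travel = D / c                     # light travel time, ~4e15 s

# Superluminal bound: observed 1.7 s delay with Delta t_e >= 0.
varsigma_min = -1.7 / travel               # ~ -4e-16
excess_velocity = abs(varsigma_min) * c    # maximum excess, ~1e-5 cm/s

# Sub-luminal bound: macronova detected ~0.5 day after the merger.
varsigma_max = 4.0e4 / travel              # ~1e-11
```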
In Fig. \ref{fig:varsigma} we show our bound in comparison to some previous constraints.
\begin{figure}[ht!]
\figurenum{2}\label{fig:varsigma}
\centering
\includegraphics[angle=0,scale=0.5]{f2.eps}
\caption{Constraints on the sub-luminal movement of the gravitational wave. The cosmic ray constraints are adopted from \citet{2001JHEP...09..023M}, where the weaker constraint refers to the Galactic origin model and the much stronger constraint is for the extragalactic origin model of the ultra-high energy cosmic rays. The pulsar timing constraint is taken from \citet{2008PhRvD..78d4018B}. Interestingly, the Dark Matter
Emulators and a class of dark energy models (``Covariant Galileon'') have been ruled out at high confidence levels (see Section \ref{subsec:rule out}).}
\hfill
\end{figure}
\subsubsection{Testing the Einstein equivalence principle}\label{subsubsec:eep}
Another scenario yielding different arrival times of ``simultaneously'' emitted GWs and photons, two different types of massless particles, is the violation of the Einstein equivalence principle (EEP). In the framework of the parameterized post-Newtonian approximation, deviations from EEP can be described by a parameter $\gamma$, which equals 1 in general relativity. Therefore, the GW/GRB association is very suitable for testing EEP violation \citep{1999BASI...27..627S,2016PhRvD..94b4061W,2016ApJ...827...75L}. The Shapiro delay is generally calculated as
$\Delta t_{\rm gra}=-\frac{\Delta \gamma}{c^3}\int_{r_{\rm o}}^{r_{\rm e}}U(r(t); t)\,dr$ \citep{1964PhRvL..13..789S,1988PhRvL..60Q.176K,1988PhRvL..60..173L}, where the integral is along the traveling path of the photons and $U(r(t); t)$ is the gravitational potential.
The time delay caused by the Milky Way can be calculated as $\Delta t_{\rm gra}=1.7 \times 10^{7}~{\rm s}~ \Delta\gamma ( {M_{\rm MW}}/{6\times 10^{11} M_{\odot}}) (\log(D/b)/{4\log10})$ \citep{1973grav.book.....M,1988PhRvL..60..173L},
where $\Delta \gamma \equiv \gamma_{\rm photon}-\gamma_{\rm GW}$ (if $\Delta \gamma\neq 0$, the photons and GWs do not follow the same trajectories in the gravitational field of the galaxy and the EEP is violated), $M_{\rm MW}$ is the total mass of the Milky Way and $b$ is the impact parameter of the particle paths relative to the center of the Milky Way.
Now the observed $\Delta t_{\rm GW-GRB}$ should be expressed as
$\Delta t_{\rm GW-GRB}=\Delta t_{\rm e}-\Delta t_{\varsigma} + \Delta t_{\rm gra}$.
We thus need a group of GW/GRB events, in particular those driven by NS-BH mergers, at different $D$ to self-consistently constrain $\varsigma$ and $\Delta \gamma$. This is because
for NS-BH merger driven GW/GRB events it is generally expected that $\Delta t_{\rm e}\leq T_{90}$ \citep[please see][for the extensive discussion]{2016ApJ...827...75L}. For the current data and under the assumptions of $\varsigma=0$ (i.e., in vacuum the GW velocity equals the speed of light) and $\Delta t_{\rm e}=0$ (i.e., $\Delta t_{\rm GW-GRB}=\Delta t_{\rm gra}$), a rough constraint on $\Delta \gamma$ reads
\begin{equation}
\Delta \gamma \leq 10^{-7}~\left({\Delta t_{\rm GW-GRB}\over 1.7~{\rm s}}\right)\left(\frac{M_{\rm MW}}{6\times 10^{11} M_{\odot}}\right)^{-1} \left[\frac{\log(D/b)}{4\log10}\right]^{-1}.
\end{equation}
Such a constraint can be further improved. As noticed in \citet{Nusser2016}, the potential
fluctuations from the large scale structure, which can be found from the observed peculiar
velocities (deviations from a pure Hubble
flow; $v_{\rm p}$) of galaxies, are significantly larger than the gravitational
potential of the Milky Way ($U_{\rm MW}$). Peculiar velocity data yield a bulk peculiar velocity of
$v_{\rm p}\sim 300~{\rm km~s^{-1}}$ for the sphere of radius $R\sim 50$ Mpc
around us \citep{MaYZ2013}, suggesting a gravitational potential
$U\sim v_{\rm p} R H_{0} \sim 25U_{\rm MW}$ at the site of GW170817/GRB 170817A, where $H_{0}$ is the Hubble constant.
Therefore, for the current data we have a constraint
\begin{equation}
\Delta \gamma \leq 4\times 10^{-9}.
\end{equation}
This constraint takes into account the contribution of the gravitational potential of the large-scale structure and is thus stronger than the bound inferred from the better-measured gravitational potential of the Milky Way alone \citep[see also][]{2017ApJ...848L..13A,2017JCAP...11..035W}.
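The rescaling can be made explicit with a small sketch; the value $H_0\approx 70~{\rm km~s^{-1}~Mpc^{-1}}$ is an assumption not stated in the text, and the $\sim 25\,U_{\rm MW}$ ratio is taken directly from above:

```python
# Order-of-magnitude estimate of the large-scale-structure potential,
# U ~ v_p * R * H0, and the correspondingly tightened Delta gamma bound.
# H0 = 70 km/s/Mpc is an assumed value; the ~25 U_MW ratio is from the text.
v_p = 300.0   # bulk peculiar velocity [km/s]
R   = 50.0    # radius of the sphere [Mpc]
H0  = 70.0    # Hubble constant [km/s/Mpc], assumed
U_lss = v_p * R * H0          # ~1e6 (km/s)^2, dimensional estimate

dgamma_mw  = 1e-7             # Milky-Way-only bound
dgamma_lss = dgamma_mw / 25   # potential ~25x deeper than U_MW
print(f"Delta gamma <~ {dgamma_lss:.0e}")  # 4e-09
```

Dividing the Milky-Way-only bound by the $\sim 25\times$ deeper potential recovers the $\Delta\gamma \leq 4\times 10^{-9}$ limit quoted above.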
Here we simply adopt the GW/GRB association to set the bound. In the future, {\it if strong gravitational lensing of GW/GRB association events can be detected as well,
one can use the delay times of the GW/GRB signals and their corresponding lensing ``counterparts'' to set a stringent constraint on $\Delta \gamma$.} This is because the gravitational potential of the lens (in particular, for galaxy clusters) will induce an additional Shapiro delay if $\Delta \gamma \neq 0$. The main challenge for such an approach, however, is the absence/rarity of such events in the foreseeable future.
\subsubsection{Ruling out dark matter emulators and some dark energy models}\label{subsec:rule out}
In general relativity (GR), the GW velocity is the same as the speed of light.
However, major outstanding theoretical issues, such
as the nature of dark energy and dark matter, have led
researchers to consider the possibility that gravity differs from GR
in some regimes \citep[see][for reviews]{2012PhR...513....1C,Joyce2015}.
Some of these models predict very different arrival times for simultaneously radiated GW/GRB signals and hence can be accurately tested.
For example, motivated by the non-detection of dark matter particles so far, a group of modified gravity theories, known as dark matter emulators, dispense with the need for dark matter. These models have the property that weak GWs couple to the metric that would follow from general relativity without dark matter, whereas ordinary particles couple to a combination of the metric and other fields that reproduces the result of general relativity with dark matter. \citet{2008PhRvD..77l4041D} showed that there is an appreciable difference between the Shapiro delays of GWs and photons from the same source, with the GWs always arriving first.
Even for very nearby extragalactic sources, the predicted time lags between the GW signals and their electromagnetic counterparts ($\Delta t_{\rm DME}$) are several hundreds of days. An additional, comparable time lag arises during propagation in the host galaxy of the source. If this were the case, in extragalactic space the GWs should move subluminally to yield the almost simultaneous arrival of GW170817 and GRB 170817A, i.e., $\Delta t_{\varsigma}+\Delta t_{\rm DME}\approx 0$, which yields
\begin{equation}
\varsigma \sim 2.1\times 10^{-8}({D\over 40~{\rm Mpc}})({\Delta t_{\rm DME}\over 10^{3}~{\rm days}}).
\end{equation}
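The scaling here is simply the light-travel-time fraction, $\varsigma \approx c\,\Delta t_{\rm DME}/D$; a minimal numeric check (illustrative only):

```python
# Check of varsigma ~ c * Delta t_DME / D for the dark-matter-emulator case:
# the GW must lead/lag light by Delta t_DME over a distance D ~ 40 Mpc.
MPC = 3.0857e22   # meters per megaparsec
C   = 2.998e8     # speed of light [m/s]

t_travel = 40 * MPC / C      # light travel time to 40 Mpc [s]
dt_dme   = 1e3 * 86400.0     # 10^3 days [s]
varsigma = dt_dme / t_travel
print(f"varsigma ~ {varsigma:.1e}")  # ~2.1e-08
```

This reproduces the $\varsigma \sim 2.1\times 10^{-8}$ coefficient in the equation above.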
Such a $\varsigma$, however, is already about 3 orders of magnitude larger than our subluminal bound set by GW170817/AT2017gfo in Eq.~(\ref{eq:constr-2}). The tension is far stronger (i.e., the discrepancy is by a factor of $\sim 10^{7}$ or more) if the subluminal-motion constraints set by ultrahigh-energy cosmic rays apply (see however footnote \ref{footnote-1}).
We therefore conclude that dark matter emulators have been ruled out and the dark matter model is favored (see Fig.~\ref{fig:varsigma}).
There is also a large class of scalar-tensor theories that predict GWs propagating with a velocity different from the speed of light, and a difference of ${\cal O}(1)$ is possible for many models of dark energy. For example, in the ``Covariant Galileon'' model \citep{2009PhRvD..79h4003D,2014JCAP...08..059B}, the violation parameter is about $\varsigma \sim 10\% - 100\%$, and the delay between the GW and electromagnetic signals from distant events would run far beyond human time scales \citep{2016JCAP...03..031L,2017PhLB..765..382L,2017PhRvD..95h4029B}, which is clearly not the case for GW170817/GRB 170817A. The first GW/GRB association event thus places very stringent
constraints on theories allowing variations in the speed of
GWs and eliminates many contender models for cosmic acceleration (see Fig.~\ref{fig:varsigma}).
\section{Implications for the r-process element origin and constraints on the double strange star merger model}\label{sec:macronova}
\subsection{Neutron star mergers as the main site of the r-process element production}\label{subsection:r-process}
The origin of the heavy elements, forged via nucleosynthesis, is one of the mysteries of the universe \citep{2007PhR...442..237Q}. The widely discussed sites include core-collapse supernovae \citep{1957RvMP...29..547B} and neutron star mergers \citep{1974ApJ...192L.145L,1989Natur.340..126E}. Though there is increasing evidence that neutron star mergers are a significant site of heavy element production \citep[e.g.][]{2013Natur.500..547T,2015NatCo...6E7323Y,2015NatPh..11.1042H,2016Nat...531...610,2016NatCo...712898J}, the unambiguous detection of a large amount of r-process material in AT2017gfo provides the most direct evidence \citep{Pian2017,2017Natur.551...80K,2017arXiv171005443D,2017ApJ...848L..19C}. To account for the measured total mass of Galactic heavy r-process elements (i.e. $A>90$), the binary neutron star merger rate averaged over the age of the Galaxy should be $\langle R\rangle \approx 50~\mathrm{Myr^{-1}}~({M_{\rm ej,A>90}}/{0.01M_{\odot}})^{-1}$ \citep{2015NatPh..11.1042H}, where $M_{\rm ej, A>90}$ refers to the heavy element mass ejected in a single event. However, the merger rate is actually not constant, and the inferred merger rate at the present time ($R_{0}$) may be lower than the averaged one by a factor of a few (i.e., $R_{0}<\langle R\rangle$). Thus we draw the lines of $R_0=(1, ~0.5,~0.2)\langle R\rangle$ in Fig.~\ref{fig:heavy elements}.
Based on four ``nearby" sGRBs with reasonably estimated jet half-opening angles, a {\it conservative} estimate of the local ($z\leq 0.2$) neutron star merger rate is $\sim 583^{+923}_{-318}~{\rm Gpc^{-3}~yr^{-1}}$ \citep{2017arXiv170807008J}. Such a (conservative) merger rate is well consistent with that ($\sim 1540^{+3200}_{-1220}~{\rm Gpc^{-3}~yr^{-1}}$) inferred from the successful detection of a neutron star merger event by advanced LIGO/Virgo in their second observational run \citep{LVC2017}.
Since the number density of Milky Way Equivalent Galaxies in the local universe is $\sim 1.16 \times 10^{-2}~\rm Mpc^{-3}$ \citep{2008ApJ...675.1459K}, we can convert this number to a Milky Way merger rate of $\sim 50^{+80}_{-27} ~\rm Myr^{-1}$. On the other hand, the macronova spectrum modeling suggests a mass ejection for GW170817 of $M_{\rm ej}\sim 0.04\pm0.01~M_\odot$, of which the heavy r-process material may constitute $\sim 1/2$ \citep{Pian2017}. This information is presented in Fig.~\ref{fig:heavy elements}. The ``data point'' is above the line of $R_0=\langle R\rangle$, which supports the neutron star merger origin of the r-process material \citep[see also e.g.][]{2017Natur.551...80K,2017arXiv171005443D,2017ApJ...848L..19C} and furthermore implies that either the ``averaged'' rate of neutron star mergers in the Milky Way is lower than that of the ``local'' Universe or the ``typical'' mass ejection of such mergers is significantly smaller than $0.04M_{\odot}$. For the latter, one caveat is that the neutron-rich outflow masses estimated for GRB 130603B, GRB 060614 and GRB 050709 are in the range of $\sim 0.02-0.1M_\odot$ \citep{2013Natur.500..547T,2015NatCo...6E7323Y,2016NatCo...712898J}.
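The conversion from the volumetric rate to a per-galaxy rate is a one-line division; a minimal sketch with the values quoted in the text:

```python
# Convert the local volumetric neutron star merger rate into a rate per
# Milky Way Equivalent Galaxy (MWEG), using the MWEG density quoted above.
rate_vol = 583e-9   # mergers Mpc^-3 yr^-1  (583 Gpc^-3 yr^-1)
n_mweg   = 1.16e-2  # MWEG number density [Mpc^-3]

rate_mw = rate_vol / n_mweg * 1e6    # mergers per Myr per MWEG
print(f"R_MW ~ {rate_mw:.0f} Myr^-1")  # ~50 Myr^-1
```

Propagating the $^{+923}_{-318}$ uncertainty on the volumetric rate through the same division gives the $\sim 50^{+80}_{-27}~\rm Myr^{-1}$ range quoted above.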
\begin{figure}[ht!]
\figurenum{3}\label{fig:heavy elements}
\centering
\includegraphics[angle=0,scale=0.5]{f3.eps}
\caption{The binary neutron star merger rate and ejected mass inferred from current GRB/macronova observations, in comparison to what is needed to reproduce the Milky Way r-process material. The green solid, dotted, and dashed lines are $R_0=(1.0,~0.5,~0.2)\langle R \rangle$, respectively. The blue vertical dotted lines represent the ejecta mass range inferred from current macronova modeling, assuming that $\sim 1/2$ of the ejected material has $A\geq 90$. The data point represents the neutron star merger rate and heavy element mass of GW170817. If an ejecta mass of $\sim 0.04M_\odot$ is typical for the mergers, the rate in the local universe is likely higher than that ``needed'' in the Galaxy, which may imply either a merger rate of our Milky Way lower than that of other galaxies or, typically, $M_{\rm ej}\sim 0.01M_\odot$.}
\hfill
\end{figure}
\subsection{Constraining the double strange quark star merger model for GRB 170817A} \label{subsec:strange}
Strange matter made of quarks may be the ground state of matter, and neutron stars with sufficiently high central densities may become strange stars \citep{1970PThPh..44..291I,1986ApJ...310..261A}. If strange stars exist, there may be binary strange star systems. Thanks to the orbital decay resulting from energy and angular momentum loss via GW emission, some systems will merge within the Hubble timescale and give rise to GW events and short-duration gamma-ray bursts \citep{1991ApJ...375..209H,2017arXiv171004964L}. During the merger phase of binary strange stars, some strange matter is ejected into the interstellar medium. The mass distribution resulting from the fragmentation of the strange matter has been investigated \citep{2014PhLB..733..164P}, and the expected nucleosynthesis spectra for the strange star-strange star merger scenario have been calculated \citep{2017IJMPS..4560042P}. Different from neutron star mergers, no significant r-process nucleosynthesis is expected, since the high-temperature deconfinement of strange matter would produce large amounts of neutrons and protons and the mass buildup would proceed in a Big Bang nucleosynthesis-like scenario. In particular, the neutron-to-proton ratio (typically $\sim 0.7$) would allow reaching the iron peak only \citep{2017IJMPS..4560042P}. The decay of the heavy elements will still heat the outflow and yield an optical transient. Due to the absence of lanthanides, however, there is no relatively long-lasting infrared bump. Moreover, the spectrum should be significantly different from that of a neutron star merger-driven kilonova/macronova.
The high-quality kilonova/macronova spectra, in particular those measured at late times \citep{Pian2017}, are however well consistent with the synthetic spectra of the r-process material model \citep{2013ApJ...774...25K}. The double strange star merger scenario for GW170817/GRB 170817A is thus convincingly ruled out.
\section{Summary} \label{sec:discussion}
The GW/GRB/macronova association established in August 2017 directly confirms the long-standing suggestions that neutron star mergers do take place frequently and generate strong GWs, which further produce short gamma-ray flashes and launch r-process material. In this work we have discussed some far-reaching additional physical and astrophysical implications. In particular, we show that:
\begin{itemize}
\item The short time delay between the GW and GRB signals sets a very tight constraint on the possible superluminal motion of GWs: the fractional difference between their velocity and the speed of light should be within $\sim 4.3\times 10^{-16}$ \citep[see also][]{2017ApJ...848L..13A}. The GW/macronova association sets an independent constraint on the possible subluminal motion of GWs: the fractional difference between their velocity and the speed of light should be within $\sim 10^{-11}$. The underlying assumption for these constraints is that the GW velocity is independent of frequency (see Section \ref{subsubsec:velocity}). In the foreseeable future, these two constraints can be improved by (quite) a few orders of magnitude.
\item The possible violation of the weak equivalence principle is tightly constrained (under the additional assumption that in vacuum the GW velocity equals the speed of light): the difference between the gamma-ray and GW trajectories in the gravitational field of the galaxy and the local universe should be within a factor of $\sim 3.4\times 10^{-9}$ (see Section \ref{subsubsec:eep}).
\item The so-called dark matter emulators and some contender models for cosmic acceleration, such as the ``Covariant Galileon'', which predict long delays between the arrival times of simultaneously radiated GWs and photons from the same source, are ruled out (see Section \ref{subsec:rule out} and Fig.~\ref{fig:varsigma}).
\item The high neutron star merger rate (inferred from both the local sGRB data and the GW data), together with the significant ejected mass, strongly suggests that such mergers are the main sites of heavy r-process nucleosynthesis (see Section \ref{subsection:r-process} and Fig.~\ref{fig:heavy elements}, and also \citet{2017Natur.551...80K}, \citet{2017arXiv171005443D}, \citet{2017ApJ...848L..19C}). {Moreover, it is likely that the ``averaged'' rate of neutron star mergers in the Milky Way is lower than that of the ``local'' Universe.}
\item The successful identification of lanthanide elements in the macronova/kilonova spectrum also excludes the possibility that the progenitor of GRB 170817A was a binary strange quark star system (see Section \ref{subsec:strange}).
\end{itemize}
Finally, we would like to mention the puzzling fact that sGRB 170817A ($D\sim 40$ Mpc) and GRB 980425 ($D\sim 36$ Mpc), two events with completely different progenitors, have almost the same $L_\gamma$ and $E_{\rm p,rest}$ (see Fig.~\ref{fig:relation}), which might indicate similar prompt radiation processes, if not just a coincidence.
\section*{Acknowledgments}
We thank Yi-Ming Hu and Yi-Fan Wang for useful discussions and the anonymous referee for helpful suggestions.
This work was supported in part by 973 Programme of China (No. 2014CB845800), by NSFC under grants 11525313 (the National Natural Fund for Distinguished Young Scholars), 11273063 and 11433009, by the Chinese Academy of Sciences via the Strategic Priority Research Program (No. XDB09000000) and the External Cooperation Program of BIC (No. 114332KYSB20160007).
\section{Introduction}
\label{sect:intro}
The detection of gravitational waves (GW) has revolutionized our understanding of black holes (BHs) and neutron stars (NSs). Since the first discovery, the LIGO and Virgo observatories have confirmed the detection of more than ten events \citep{aasi2015,acern2015,abb2019,abb2019b}. These observations have brought several surprises, including GW190412 \citep{ligo2020}, a binary black hole (BBH) merger with a mass ratio of nearly four-to-one, GW190814 \citep{ligo2020b}, a merger between a BH and a compact object of about $2.5\msun$, and GW190425 \citep{ligo2020c}, a merger of a binary NS of total mass nearly $3.4\msun$, the most massive binary NS observed so far.
The origin of binary mergers is still highly uncertain, with several possible scenarios that could potentially account for most of the observed events. These include mergers from isolated evolution of binary stars \citep{belcz2016,demi2016,giac2018}, dynamical assembly in dense star clusters \citep{askar17,baner18,fragk2018,rod18,sams18,ham2019,krem2019}, mergers in triple and quadruple systems induced through the Kozai-Lidov mechanism \citep{antoper12,ll18,fragg2019,flp2019,fragk2019,fragrasio2020}, and mergers of compact binaries in galactic nuclei \citep{bart17,sto17,rasskoc2019,mck2020}.
Another surprise is GW190521, a BBH of total mass $\sim 150\msun$, consistent with the merger of two BHs with masses of $85^{+21}_{-14} \msun$ and $66^{+17}_{-18} \msun$ \citep{ligo2020new1,ligo2020new2}. Current stellar models predict a dearth of BHs both with masses larger than about $50\msun$ (high-mass gap) and smaller than about $5\msun$ (low-mass gap), with exact values depending on the details of the progenitor collapse \citep[e.g.,][]{fryer2012}. The high-mass gap results from the pulsational pair-instability process, which affects massive progenitors. Whenever the pre-explosion stellar core is in the range $45 - 65\msun$, large amounts of mass can be ejected, leaving a BH remnant with a maximum mass around $40 - 50\msun$ \citep{heger2003,woosley2017}. Therefore, GW190521 challenges our understanding of massive-star evolution.
BHs more massive than the limit imposed by pulsational pair-instability can be produced dynamically through repeated mergers of smaller BHs in the core of a dense star cluster, where three- and four-body interactions can catalyze the growth of a BH seed \citep[e.g.,][]{gultek2004}. A fundamental limit for repeated mergers comes from the recoil kick imparted to merger remnants as a result of anisotropic GW emission \citep{lou10,lou11}. Depending on the mass ratio and the spins of the merging objects, the recoil kick can be as high as $\sim 100 - 1000\kms$. If it exceeds the local escape speed, the merger remnant is ejected from the system and further growth is quenched. A number of studies have shown that massive globular clusters \citep[e.g.,][]{rodetal2019}, super star clusters \citep[e.g.,][]{rodr2020}, and nuclear clusters at the centers of galaxies \citep[e.g.,][]{anto2019,frsilk2020} are the only environments where the mergers of second- ($2$g) or higher-generation ($N$g) BHs could take place.
In this Letter, we explore the possibility that GW190521-like events (BBHs with total mass around $150\msun$) are the product of repeated mergers in a star cluster. The Letter is organized as follows. In Section~\ref{sect:limits}, we discuss the role of the cluster metallicity and escape speed in the assembly of massive BHs. In Section~\ref{sect:numer}, we discuss the assembly of massive BHs through repeated mergers in a variety of dynamically-active environments. In Section~\ref{sect:detect}, we discuss the detection probability for GW190521-like events. Finally, in Section~\ref{sect:concl}, we discuss the implications of our results and draw our conclusions.
\section{Limits on the hierarchical growth of Black Hole seeds}
\label{sect:limits}
Two main factors determine the ability of a BH seed to grow via repeated mergers: the environment metallicity and the host-cluster escape speed. The former sets the initial maximum seed mass, while the latter determines the maximum recoil kick that can be imparted to a merger remnant to be retained within the host cluster.
\subsection{Metallicity}
Dense star clusters form with a variety of initial masses, concentrations, and metallicities. Open clusters and super star clusters are high-metallicity environments \citep[e.g.,][]{portegies2010}, in contrast to most globular clusters \citep[e.g.,][]{harris1996}. Nuclear star clusters present both high- and low-metallicity stars, as a result of their complex history and various episodes of accretion and star formation \citep[e.g.,][]{anto2013}.
Metallicity is crucial in determining the maximum BH mass in a given environment. Low-metallicity systems can form BHs much more massive than high-metallicity systems. This difference is a result of stellar winds in massive stars. Higher-metallicity stars experience stronger winds and, as a consequence, larger mass-loss rates \citep{vink2001}, resulting in less massive BH progenitors prior to stellar collapse \citep{spera2017}. Therefore, typical globular clusters are expected to produce more massive BHs than open and super star clusters.
To demonstrate the role of metallicity, we consider a sample of stars in the mass range of BH progenitors, $[20\msun-150\msun]$, and evolve them using the stellar evolution code \texttt{SSE} \citep{hurley2000,hurley2002}. We use the updated version of \texttt{SSE} from \citet{banerjee2020}, with the most up-to-date prescriptions for stellar winds and remnant formation; it produces remnant populations consistent with those from \texttt{StarTrack} \citep{belc2008}. We choose four different values of the metallicity $Z$, namely $0.01\zsun$, $0.1\zsun$, $0.5\zsun$, and $\zsun$.
In Figure~\ref{fig:sse}, we show the final BH mass as a function of the zero-age main sequence (ZAMS) mass for single stars computed using \texttt{SSE}, for different metallicities. For solar metallicity, the maximum BH mass is about $15\msun$, increasing to about $35\msun$ and $45\msun$ for $Z=0.5\zsun$ and $Z=0.1\zsun$, respectively. Note that, for the stellar-mass range considered, metallicities lower than about $0.1\zsun$ would all produce very similar initial BH mass functions.
\begin{figure}
\centering
\includegraphics[scale=0.6]{sse.pdf}
\caption{BH mass as a function of zero-age main-sequence (ZAMS) mass for single stars, computed using \texttt{SSE} \citep{hurley2000,hurley2002} with updates from \citet{banerjee2020}. The four colors denote the four different metallicities.}
\label{fig:sse}
\end{figure}
Metallicity, therefore, limits the initial mass of the BH seed that can undergo repeated mergers. At the same time, it also constrains the maximum mass of the BHs the seed can merge with. In a low-metallicity cluster, the components of GW190521 could be 2g BHs, each the remnant of a merger of 1g BHs. On the other hand, they could be 4g or 5g BHs if their progenitors were born in a solar-metallicity environment.
We note in passing that runaway growth of a very massive star could also be triggered through physical collisions if the initial density of the host cluster is sufficiently high. Such a star could eventually collapse to form a BH in the high-mass gap, or even an intermediate-mass BH \citep{porte2004,gurk2006,pan2012,krgap2020}. However, stellar evolution is highly unconstrained in this regime. Some models suggest that only a low-metallicity star with mass $\gtrsim 200\msun$ could directly collapse to a BH more massive than about $80\msun$ \citep{spera2017,renzo2020}.
\subsection{Recoil kicks}
The escape speed $v_{\rm esc}$ from the core of a star cluster is determined by its mass and density profile: the more massive and denser the cluster, the higher the escape speed. Open clusters, globular clusters, and nuclear star clusters have typical escape speeds of $\sim 1\kms$, $\sim 10\kms$, and $\sim 100\kms$, respectively. Note, however, that the escape speed of a given environment may change over time depending on the details of its formation history and dynamical evolution \citep[see, e.g., Fig.~3 of][]{rodr2020}.
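The quoted numbers follow from $v_{\rm esc}\sim\sqrt{2GM/R}$; a rough sketch, where the cluster masses and core radii are illustrative assumptions rather than values from the text:

```python
# Order-of-magnitude escape speeds for representative cluster types.
# The masses and radii below are illustrative assumptions; only the
# resulting orders of magnitude (~1, ~10, ~100 km/s) matter here.
import math

G = 4.301e-3   # gravitational constant [pc (km/s)^2 / Msun]

def v_esc(m_sun: float, r_pc: float) -> float:
    """Escape speed [km/s] from radius r_pc [pc] of a mass m_sun [Msun]."""
    return math.sqrt(2.0 * G * m_sun / r_pc)

for name, m, r in [("open cluster",        1e3, 2.0),
                   ("globular cluster",     3e5, 3.0),
                   ("nuclear star cluster", 1e7, 3.0)]:
    print(f"{name}: ~{v_esc(m, r):.0f} km/s")
```

The three assumed (mass, radius) pairs land near the $\sim 1\kms$, $\sim 10\kms$, and $\sim 100\kms$ scales quoted above.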
\begin{figure*}
\centering
\includegraphics[scale=0.55]{probabret.pdf}
\caption{Probability to retain the merger remnant of a BBH as a function of the BBH mass ratio and BH spins, for different cluster escape speeds: $30\kms$ (top-left), $50\kms$ (top-right), $100\kms$ (center-left), $200\kms$ (center-right), $300\kms$ (bottom-left), $500\kms$ (bottom-right). BH spins are drawn from a uniform distribution in the range $[0,\chi_{\max}]$.}
\label{fig:probret}
\end{figure*}
Due to the anisotropic emission of GWs at merger, a recoil kick is imparted to the merger remnant \citep{lou12}. The host escape speed then determines the fraction of retained remnants. The recoil kick depends on the symmetric mass ratio $\eta=q/(1+q)^2$, where $q=m_2/m_1<1$ ($m_1$ and $m_2$ are the masses of the merging BHs), and on the magnitudes of the dimensionless spins, $|\mathbf{\chi}_1|$ and $|\mathbf{\chi}_2|$ (corresponding to $m_1$ and $m_2$). We model the recoil kick following \citet{lou10} as
\begin{equation}
\textbf{v}_{\mathrm{kick}}=v_m \hat{e}_{\perp,1}+v_{\perp}(\cos \xi \hat{e}_{\perp,1}+\sin \xi \hat{e}_{\perp,2})+v_{\parallel} \hat{e}_{\parallel}\,,
\label{eqn:vkick}
\end{equation}
where
\begin{eqnarray}
v_m&=&A\eta^2\sqrt{1-4\eta}(1+B\eta)\\
v_{\perp}&=&\frac{H\eta^2}{1+q}(\chi_{2,\parallel}-q\chi_{1,\parallel})\\
v_{\parallel}&=&\frac{16\eta^2}{1+q}[V_{1,1}+V_A \tilde{S}_{\parallel}+V_B \tilde{S}^2_{\parallel}+V_C \tilde{S}_{\parallel}^3]\times \nonumber\\
&\times & |\mathbf{\chi}_{2,\perp}-q\mathbf{\chi}_{1,\perp}| \cos(\phi_{\Delta}-\phi_{1})\,.
\end{eqnarray}
The $\perp$ and $\parallel$ refer to the directions perpendicular and parallel to the orbital angular momentum, respectively, while $\hat{e}_{\perp,1}$ and $\hat{e}_{\perp,2}$ are orthogonal unit vectors in the orbital plane. We have also defined the vector
\begin{equation}
\tilde{\mathbf{S}}=2\frac{\mathbf{\chi}_{2}+q^2\mathbf{\chi}_{1}}{(1+q)^2}\,,
\end{equation}
$\phi_{1}$ as the phase angle of the binary, and $\phi_{\Delta}$ as the angle between the in-plane component of the vector
\begin{equation}
\mathbf{\Delta}=M^2\frac{\mathbf{\chi}_{2}-q\mathbf{\chi}_{1}}{1+q}
\end{equation}
and the infall direction at merger. Finally, we adopt $A=1.2\times 10^4$ km s$^{-1}$, $H=6.9\times 10^3$ km s$^{-1}$, $B=-0.93$, $\xi=145^{\circ}$ \citep{gon07,lou08}, and $V_{1,1}=3678$ km s$^{-1}$, $V_A=2481$ km s$^{-1}$, $V_B=1793$ km s$^{-1}$, $V_C=1507$ km s$^{-1}$ \citep{lou12}. The final total spin of the merger product and its mass are computed following \citet{rezzolla2008}.
In Figure~\ref{fig:probret}, we show the probability (over $10^4$ realizations) of retaining the merger remnant of a BBH as a function of the BBH mass ratio ($q$), for different cluster escape speeds: $30\kms$ (top-left), $50\kms$ (top-right), $100\kms$ (center-left), $200\kms$ (center-right), $300\kms$ (bottom-left), and $500\kms$ (bottom-right). We sample BH spins from a uniform distribution in the range $[0,\chi_{\max}]$. The recoil kick depends crucially on the maximum intrinsic spin of the merging BHs: while for low spins $v_{\rm kick}\sim 100\kms$, for high spins $v_{\rm kick}\sim 1000\kms$ \citep[e.g.,][]{holl2008,fgk2018,gerosa2019,anto2019,fragrasio2020,mapell2020}. The mass ratio also plays an important role, with the recoil kick decreasing significantly in magnitude for $q\lesssim 0.1$ (both for spinning and non-spinning BHs) and for $q\gtrsim 0.9$ (non-spinning BHs). Clusters with low escape speeds ($v_{\rm esc}\lesssim 100\kms$) can only retain the merger products of very unequal-mass binaries ($q\lesssim 0.1$) and the remnants of roughly equal-mass BBH mergers with low-spinning components. On the other hand, clusters with larger escape speeds ($v_{\rm esc}\gtrsim 100\kms$) can retain, with various probabilities, remnants across a range of mass ratios and spins. The remnants of mergers of highly-spinning, equal-mass BBHs could be ejected even from very massive and dense clusters ($v_{\rm esc}\approx 500\kms$).
The host cluster escape speed plays a crucial role in the growth of a BH seed through repeated mergers. Typical small open clusters ($v_{\rm esc}\sim 1\kms$) do not provide the right environment for the growth of a BH seed. If BHs are born with low spins \citep{fullerma2019}, the recoil kick could be small enough to retain a merger product within a typical globular cluster ($v_{\rm esc}\sim 10\kms$). However, if BHs are born with high spins, only more massive and denser systems, such as nuclear star clusters ($v_{\rm esc}\sim 100\kms$), could retain the remnant. In a low-metallicity cluster, the components of GW190521 could be 2g BHs, remnants of the mergers of nearly equal-mass 1g BHs. To retain them in a cluster with $v_{\rm esc}\lesssim 200\kms$, the progenitors should have been born with low spins. Otherwise, the two components of GW190521 could have been formed through repeated mergers of a massive 1g BH ($\gtrsim 40\msun$) with low-mass 1g BHs ($\lesssim 10\msun$) in a nuclear star cluster ($v_{\rm esc}\gtrsim 200\kms$). On the other hand, in a high-metallicity environment, the components of GW190521 could be 4g or 5g BHs, since the maximum 1g BH mass is limited to about $15\msun$; in that case, they would have had to be retained after several mergers. Since there is a negligible probability of retaining a BH remnant in the region $0.2\lesssim q\lesssim 0.8$ for $v_{\rm esc}\lesssim 200\kms$, only nuclear star clusters or the most massive globular clusters and super star clusters ($v_{\rm esc}\gtrsim 200\kms$) could still form GW190521.
\section{GW190521-like events from repeated mergers}
\label{sect:numer}
\begin{figure*}
\centering
\includegraphics[scale=0.55]{probabret2_z001Sun.pdf}
\caption{Probability of forming a BBH of total mass $150\msun$ through successive mergers as a function of the BH seed mass for different cluster escape speeds: $30\kms$ (top-left), $50\kms$ (top-right), $100\kms$ (center-left), $200\kms$ (center-right), $300\kms$ (bottom-left), $500\kms$ (bottom-right). BH spins are drawn from a uniform distribution in the range $[0,\chi_{\max}]$. The cluster metallicity is fixed to $Z=0.01\zsun$.}
\label{fig:probret2a}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[scale=0.55]{probabret2_zSun.pdf}
\caption{Same as Figure~\ref{fig:probret2a}, but for $Z=\zsun$.}
\label{fig:probret2b}
\end{figure*}
GW190521 is a remarkable event, since both of its components are likely the remnants of previous BBH mergers \citep[see also][]{ligo2020new2}. In this Section, we discuss the formation of GW190521-like events, requiring that a binary of total mass $150\msun$ be formed through repeated mergers of a growing BH seed within a cluster of escape speed $v_{\rm esc}$.
We run $10^4$ Monte Carlo experiments, where we simulate the growth of a BH seed via repeated mergers. After each merger, we compute the recoil kick using Eq.~\ref{eqn:vkick}. If $\vkick>v_{\rm esc}$, we consider the BH ejected from the system, making further growth impossible; otherwise, we generate a new merger event. In our numerical experiment, the probability of forming a BBH of $150\msun$ depends mainly on four parameters:
\begin{enumerate}
\item the cluster metallicity $Z$, which fixes the maximum initial seed BH mass and the maximum mass of the BHs it can merge with;
\item the steepness of the pairing probability for BHs in binaries that merge, $\propto (m_1+m_2)^\beta$, which sets the secondary mass;
\item the maximum spin $\chi_{\max}$, which affects the maximum recoil kick;
\item the escape speed from the host cluster $v_{\rm esc}$, which fixes the maximum kick velocity for the remnant to be retained within the host cluster.
\end{enumerate}
In our study, we choose two values of the metallicity, $Z=\zsun$ and $Z=0.01\zsun$, which fix the maximum seed mass to about $15\msun$ and $45\msun$, respectively. Note that, for the stellar-mass range considered, metallicities lower than about $0.1\zsun$ would all produce very similar initial BH mass functions \citep{belcz2010}. However, as mentioned above, collisions and mergers of massive stars could produce a BH remnant in the high-mass gap, or even an intermediate-mass BH \citep{porte2004,gurk2006,krgap2020}. To explore this possibility, we also consider BH seed masses up to $100\msun$. In our models, the cluster metallicity only sets the maximum mass for the BHs the seed can merge with. We sample the intrinsic spins at birth of BHs from a uniform distribution in the range $[0,\chi_{\max}]$, with $0\le \chi_{\max} \le 1$. We set $\beta=4$, as appropriate for binaries formed via dynamical three-body processes \citep{olear2016}. Finally we consider $\vesc$ in the range $[30\kms,500\kms]$ to encompass the full range of star clusters, from small open clusters to very massive nuclear star clusters.
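The procedure can be sketched schematically as follows. The kick model in this sketch is a crude stand-in for the full prescription above (and the uniform companion-mass pairing is a toy simplification), so only the qualitative behavior is meaningful:

```python
# Schematic version of the repeated-merger Monte Carlo: grow a seed until it
# reaches the target mass or is ejected. The kick model below is a crude toy
# (NOT the full v_kick prescription); pairing is a toy uniform distribution.
import random

def growth_probability(m_seed, m_max_1g, v_esc, chi_max,
                       target=150.0, trials=2000, seed=42):
    """Fraction of trials in which the seed reaches `target` Msun."""
    rng = random.Random(seed)
    success = 0
    for _ in range(trials):
        m = m_seed
        while m < target:
            m2  = rng.uniform(5.0, m_max_1g)   # companion mass [Msun]
            chi = rng.uniform(0.0, chi_max)    # effective spin of the pair
            q   = min(m2 / m, 1.0)
            # toy spin-driven kick ~ eta^2 * chi, peaking near equal mass
            v_kick = 3000.0 * chi * 16.0 * q**2 / (1.0 + q) ** 4  # km/s
            if v_kick > v_esc:
                break                          # remnant ejected, growth ends
            m += 0.95 * m2                     # ~5% of mass radiated in GWs
        else:
            success += 1
    return success / trials

# low spins + high escape speed: growth is essentially unimpeded
print(growth_probability(45.0, 45.0, v_esc=500.0, chi_max=0.1))  # 1.0
# high spins + open-cluster escape speed: growth is quenched
print(growth_probability(45.0, 45.0, v_esc=30.0,  chi_max=1.0))  # ~0.0
```

Even this toy reproduces the qualitative trend of Figures~\ref{fig:probret2a} and \ref{fig:probret2b}: low $\chi_{\max}$ and high $v_{\rm esc}$ make hierarchical growth easy, while small clusters with highly-spinning BHs almost never retain a growing seed.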
Figure~\ref{fig:probret2a} shows the probability of forming a BBH of total mass $150\msun$ as a function of the seed mass, for different cluster escape speeds. We set the cluster metallicity to $Z=0.01\zsun$. A crucial role is played by $\chi_{\max}$: the probability of forming a BBH of total mass of about $150\msun$ is nearly $3$--$4$ times larger when $\chi_{\max}=0.2$ than when $\chi_{\max}=1$, even for clusters with large escape speeds. We find that clusters with escape speeds $\lesssim 50\kms$ cannot assemble such a massive BBH, since the recoil kick is too large to retain a growing BH seed, independent of the maximum spin at birth. Clusters with escape speeds of $100\kms$ can form a massive BBH with total mass $150\msun$ only for large initial seed masses, $\gtrsim 70\msun$, and low spins, with $\chi_{\max} < 0.4$. Only star clusters with $v_{\rm esc}\gtrsim 200\kms$ could form a BBH of $150\msun$ starting from a highly-spinning BH seed of mass $\lesssim 50\msun$, which is consistent with current stellar evolutionary models for $Z=0.01\zsun$.
In Figure~\ref{fig:probret2b}, we explore the same parameter space but with $Z=\zsun$. As a general trend, solar metallicity favors the formation of a massive BBH for a wider portion of the parameter space, since the maximum BH mass is limited to $15\msun$, thus producing mostly mergers with low mass ratios. This, in turn, leads to lower recoil kicks imparted to the merger remnant, which can be retained more easily. However, if the BH seed mass is limited to $15\msun$, about the maximum mass allowed by stellar evolutionary models at solar metallicity, the formation of a BBH of about $150\msun$ is probable only for $v_{\rm esc}\gtrsim 200\kms$.
We have also run models where we consider $\beta=3$ and $\beta=5$, to study the role of the steepness of the pairing probability for BBHs that merge. We find no significant difference from the case $\beta=4$.
\section{Detection probability for GW190521-like events}
\label{sect:detect}
\begin{figure*}
\centering
\includegraphics[scale=0.55]{pdetec.pdf}
\caption{Detection probability (Eq.~\ref{eqn:detec}) for different primary masses ($\ge 60\msun$) as a function of the secondary mass, assuming a signal-to-noise ratio threshold $\rho_{\rm thr}=8$ and a single LIGO instrument at design sensitivity. Different colors represent different primary masses. Left panel: source redshift $z=0.1$; right panel: source redshift $z=0.2$.}
\label{fig:pdetec}
\end{figure*}
We now consider the probability of detecting a BBH merger where one of the components ($m_1$) is a massive BH.
For a source with masses $m_1$ and $m_2$, merging at a luminosity distance $D_{\rm L}$, the signal-to-noise ratio (S/N) can be expressed in terms of the strain noise spectrum of a single interferometer, $S_{\rm n}(f)$, and the Fourier transform $\tilde{h}(f)$ of the GW strain received at the detector from an arbitrarily oriented and located source as \citep{oshau2010}
\begin{equation}
\rho=\sqrt{4 w^2\int_{0}^{f_{\rm ISCO}} \frac{|\tilde{h}(f)|^2}{S_n(f)} df}\,,
\label{eqn:rhof}
\end{equation}
where $w$ is a purely geometrical (and S/N-threshold-independent) function \citep[see Eq.~2 in][]{oshau2010}, which takes values between $0$ and $1$, and completely encompasses the detector- and source-orientation-dependent sensitivity, $f_{\rm ISCO}=c^3/(6^{1.5}\pi GM)$ is the ISCO frequency, and $|\tilde{h}(f)|$ is the frequency-domain waveform amplitude \citep[e.g., Eq.~3 in][]{abadie2010}
\begin{equation}
|\tilde{h}(f)|=\sqrt{\frac{5}{24\pi^{4/3}}} \frac{G^{5/6}}{c^{3/2}} \frac{M_{\rm c,z}^{5/6}}{D_{\rm L}(1+z)f_{\rm GW,z}^{7/6}} \,.
\label{eqn:htilde}
\end{equation}
In the previous equation, $f_{\rm GW,z}$ is the observed (detector-frame) GW frequency, related to the rest-frame GW frequency (twice the binary orbital frequency) by $f_{\rm GW,z}(1+z)=f_{\rm GW}=2f_{\rm orb}$, $M_{\rm c,z}$ is the redshifted chirp mass, related to the rest-frame chirp mass by $M_\mathrm{c,z}=(1+z)M_\mathrm{c}$, and
\begin{equation}
D_{\rm L}=(1+z)\frac{c}{H_0}\int_{0}^z \frac{d\zeta}{\sqrt{\Omega_{\rm M}(1+\zeta)^3+\Omega_\Lambda}}\,,
\end{equation}
where $z$ is the redshift and $c$ and $H_0$ the velocity of light and Hubble constant \footnote{We set $\Omega_{\rm M}=0.286$ and $\Omega_\Lambda=0.714$ \citep{planck2016}.}, respectively. For LIGO/Virgo we adopt a noise model from the analytical approximation of Eq.~4.7 in \citet{aji2011}
\begin{eqnarray}
S_n(f)&=&10^{-48}\,{\rm Hz}^{-1}(0.0152 x^{-4}+0.2935x^{9/4}+2.7951x^{3/2}\nonumber\\
&-&6.5080x^{3/4}+17.7622)\,,
\end{eqnarray}
where $x=f/245.4\ {\rm Hz}$, which is in excellent agreement with the publicly available Advanced LIGO design noise curve \footnote{\url{https://dcc.ligo.org/LIGO-T0900288/public}}.
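The quantities above can be combined into a numerical estimate of the optimally oriented S/N ($w=1$): the luminosity-distance integral, the analytic PSD fit, and the inspiral-only integral of $|\tilde{h}(f)|^2/S_n(f)$ up to $f_{\rm ISCO}$. The sketch below is illustrative, not the authors' code; the 10~Hz lower cutoff, the helper names, and the GW190521-like source parameters are our assumptions.

```python
import math

# Minimal end-to-end sketch of the optimal-orientation S/N (w = 1).
# SI constants; H0 in km/s/Mpc with the cosmological parameters of the text.
G, C, MSUN, MPC = 6.674e-11, 2.998e8, 1.989e30, 3.086e22
H0, OM, OL = 70.0, 0.286, 0.714

def d_lum_mpc(z, n=5000):
    """Luminosity distance (Mpc) in flat Lambda-CDM, midpoint rule."""
    dz = z / n
    s = sum(1.0 / math.sqrt(OM * (1 + (i + 0.5) * dz) ** 3 + OL)
            for i in range(n)) * dz
    return (1 + z) * (2.998e5 / H0) * s

def psd(f):
    """Analytic fit to the Advanced LIGO design noise PSD (Hz^-1)."""
    x = f / 245.4
    return 1e-48 * (0.0152 * x**-4 + 0.2935 * x**2.25
                    + 2.7951 * x**1.5 - 6.5080 * x**0.75 + 17.7622)

def snr_opt(m1, m2, z, f_low=10.0, n=20000):
    """Inspiral-only S/N for an optimally oriented source (masses in Msun)."""
    mc_z = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2 * MSUN * (1 + z)  # redshifted
    m_tot_z = (m1 + m2) * MSUN * (1 + z)                          # detector frame
    f_isco = C**3 / (6.0**1.5 * math.pi * G * m_tot_z)
    d_l = d_lum_mpc(z) * MPC
    amp2 = (5.0 / (24 * math.pi ** (4.0 / 3.0))) * G ** (5.0 / 3.0) / C**3 \
        * mc_z ** (5.0 / 3.0) / (d_l * (1 + z)) ** 2
    df = (f_isco - f_low) / n
    s = sum((f_low + (i + 0.5) * df) ** (-7.0 / 3.0)
            / psd(f_low + (i + 0.5) * df) for i in range(n))
    return math.sqrt(4.0 * amp2 * s * df)

print(snr_opt(85.0, 66.0, z=0.2))  # a GW190521-like source is loud at design sensitivity
```

Note that for such massive systems $f_{\rm ISCO}$ falls at a few tens of Hz, so only a short stretch of the sensitive band contributes, which is why merger and ringdown (neglected here) matter.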
The detection probability $p_{\rm det}(m_1,m_2,z)$ is simply the fraction of sources of a given mass located at the given redshift that exceeds the detectability threshold in S/N, assuming that sources are uniformly distributed in sky location and orbital orientation, defined as \citep[e.g.][]{domin2015}
\begin{equation}
p_{\rm det}(m_1,m_2,z)=P(\rho_{\rm thr}/\rho_{\rm opt})\,,
\label{eqn:detec}
\end{equation}
where $\rho_{\rm opt}=\rho(w=1)$. A good approximation is given by Eq.~12 in \citet{domin2015}
\begin{eqnarray}
P(\mathcal{W})&=&a_2(1-\mathcal{W}/\alpha)^2+a_4(1-\mathcal{W}/\alpha)^4\nonumber\\
&+&a_8(1-\mathcal{W}/\alpha)^8+(1-a_2-a_4-a_8)(1-\mathcal{W}/\alpha)^{10}\,,
\end{eqnarray}
where $a_2=0.374222$, $a_4=2.04216$, $a_8=-2.63948$, and $\alpha=1.0$. We assume $\rho_{\rm thr}=8$.
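The polynomial fit and the resulting detection probability can be sketched as follows; the coefficients are those quoted above (Eq.~12 of Dominik et al. 2015), and the function and argument names are ours.

```python
def p_w(w, a2=0.374222, a4=2.04216, a8=-2.63948, alpha=1.0):
    """Fraction of isotropically oriented/located sources whose projection
    factor exceeds w (polynomial fit); valid for 0 <= w <= alpha."""
    if w >= alpha:
        return 0.0
    u = 1.0 - w / alpha
    a10 = 1.0 - a2 - a4 - a8  # coefficients sum to 1 so that p_w(0) = 1
    return a2 * u**2 + a4 * u**4 + a8 * u**8 + a10 * u**10

def p_det(rho_opt, rho_thr=8.0):
    """Detection probability p_det = P(rho_thr / rho_opt)."""
    return p_w(rho_thr / rho_opt)

print(p_det(8.0))   # 0: even an optimally oriented source only just reaches threshold
print(p_det(16.0))  # threshold projection factor w = 0.5
```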
In Figure~\ref{fig:pdetec}, we show the detection probability (Eq.~\ref{eqn:detec}) for different primary masses ($m_1\ge 60\msun$) as a function of the secondary mass, assuming a signal-to-noise-ratio threshold $\rho_{\rm thr}=8$ \citep{ligo2016} and a single LIGO instrument at design sensitivity \citep{ligo2018}. We represent different primary masses in different colors, and fix the source redshift at $z=0.1$ (left panel) and $z=0.2$ (right panel). We find that the larger the secondary mass, the larger the detection probability, while it decreases for larger primary masses. For $m_1=60\msun$, we find that the detection probability is in the range $60\%$--$90\%$ at $z=0.1$, which decreases to about $20\%$--$60\%$ for $m_1=180\msun$. As expected, a larger redshift also leads to smaller detection probabilities, which are a factor of about $2$--$6$ smaller in the case $z=0.2$.
Note that we have used an inspiral-only waveform and have not taken into account the S/N contribution from merger and ringdown, which could be important for high-mass binaries such as GW190521 \citep[see e.g.,][]{khan2016}.
\section{Discussion and conclusions}
\label{sect:concl}
GW190521 challenges our current understanding of stellar evolution for massive stars. Stellar models predict that whenever the pre-collapse stellar core is approximately in the range $45\msun$--$65\msun$, large amounts of mass can be ejected following the onset of the pulsational pair-instability process, leaving a BH remnant with a maximum mass around $40\msun$--$50\msun$ \citep{woosley2017}. Since only a rare star of extremely low-metallicity and mass $\gtrsim 200\msun$ could collapse to a BH of mass $\gtrsim 80\msun$ \citep{spera2017,renzo2020}, GW190521 is unlikely to have been born as an isolated binary.
BHs more massive than the limit imposed by pulsational pair-instability could be produced dynamically through repeated mergers of smaller BHs in the core of a dense star cluster. However, the recoil kick imparted to the merger remnant, which crucially depends on the BBH mass ratio and the distribution of BH spins at birth, could eject it out of the parent cluster, terminating growth \citep{anto2019,frsilk2020}.
We have simulated the growth of massive BHs starting from different BH seeds, as a function of the maximum BH spin, $\chi_{\max}$, in host star clusters with various metallicities and escape speeds. We have found that the probability of forming GW190521-like events with total mass around $150\msun$ depends crucially on the maximum BH spin at birth. The probability of forming such massive BBHs is about $3$ times larger with $\chi_{\max}=0.2$ than with $\chi_{\max}=1$, even for clusters with large escape speeds. Almost independent of metallicity, we have demonstrated that only nuclear star clusters or the most massive globular clusters and super star clusters could form BBHs with total mass around $150\msun$. This conclusion does not change when higher-mass seeds ($\gtrsim 50\msun$) are considered.
If GW190521 was formed in a low-metallicity cluster, such as an old globular cluster, its components could be 2g BHs, remnants of previous mergers of nearly equal-mass 1g BHs. We have shown that in a cluster with $v_{\rm esc}\lesssim 200\kms$, the progenitor 1g BHs must then have been born with low spins. Otherwise, the two components of GW190521 could have been formed through repeated minor mergers of a massive 1g BH with low-mass 1g BHs ($\lesssim 10\msun$) in a nuclear star cluster ($v_{\rm esc}\gtrsim 200\kms$). On the other hand, if GW190521 was born in a high-metallicity environment, its components could be 4g or 5g BHs, which have to be retained after several mergers. Since there is a negligible probability of retaining a remnant for $0.2\lesssim q\lesssim 0.8$ for $v_{\rm esc}\lesssim 200\kms$, we have demonstrated that only a nuclear star cluster or the most massive globular clusters and super star clusters (with $v_{\rm esc}\gtrsim 200\kms$) could form GW190521.
We have also computed the detection probability for different primary masses ($\ge 60\msun$) as a function of the secondary mass, assuming a signal-to-noise-ratio threshold $\rho_{\rm thr}=8$ \citep{ligo2016} and a single LIGO instrument at design sensitivity \citep{ligo2018}. We have found that the larger the secondary mass is, the larger the detection probability becomes. On the other hand, the detection probability decreases for larger primary masses and redshifts.
GW190521 is a remarkable event that challenges our current theoretical understanding of BBH formation, opening debates about its origin and detection \citep[e.g.,][]{deluca2020,fish2020,gaya2020,liubro2020,liulai2020,rice2020,romero2020,safa2020,sakst2020,sams18}. Future detections of such massive mergers will help constrain our models for the growth of massive BHs through stellar dynamics and the formation of intermediate-mass BHs \citep{fraga2018,fragb2018,frbr2019,greene2019}.
\section*{Acknowledgements}
GF acknowledges support from a CIERA Fellowship at Northwestern University. FAR acknowledges support from NSF Grant AST-1716762. This work was supported in part by Harvard's Black Hole Initiative, which is funded by grants from JFT and GBMF.
\bibliographystyle{yahapj}
\section{INTRODUCTION}
Quantum dynamics play a significant role in many chemical physics and biochemical physics problems.
Frequently studied problems of this kind include exciton and electron transfer processes\cite{Khun95,YangTonuOliverRev2015}
that are involved in photosynthetic systems,\cite{KramerAspu14,LeeCoker2016,KramerFMO2DLorentz,SchultenJCP2009,SchultenJCP2011,SchultenJCP2012,SchultenFMO, Ishizaki2009,Renger2005,Renger2012,Renger2015, Renger2017,Valkunas2017, IshizakiJCP15, Nov2011, Nov2015, Mukamel2013, Renger06}
electron transfer,\cite{Garg1985,Wolynes1987,Mukamel1988,TanakaJPSJ09,TanakaJCP10}
DNA,\cite{SIM_MAKRI97,SIM2004,DijkstraNJP10DNA} and
photovoltaic systems.\cite{Gelinas2014, Thoss2015, Tamura2013, TamuraJPC2015, Prior2017,Mauro2020}
In these problems, the environments (baths), for example, proteins and solvents, play a central role;
these baths are complex and strongly coupled to a molecular system of interest at finite temperatures.
Recent theoretical works have demonstrated that such systems and baths are quantum mechanically entangled (bathentanglement)
and an understanding of these baths is essential to properly elucidate the quantum dynamics displayed by the system.\cite{TanimuraJPSJ06,YTpers}
For example, it has been shown that the optimal condition for excitation energy transfer in light-harvesting complexes is realized under non-Markovian system-bath interactions in a strong coupling regime,
in which the noise correlation time of the bath is comparable to the time scale of the system dynamics.\cite{Ishizaki2009}
To conduct high-accuracy simulations with reduced computational costs, some approaches have utilized machine learning methods to develop models that reproduce open quantum dynamics,\cite{Pavlo2018ML,Hartmann2019NNQD,Flurin2020RNN,zheng_excitonic_2019}
analyze two-dimensional spectroscopy images,\cite{rodriguez_machine_2019,namuduri_machine_2020}
and estimate chemical properties for classical molecular dynamics.\cite{Smith2017ANI1,MLMD01, MLMD02, MLMD03, MLMD04}
Although an irreversibility of the system dynamics results from quantum thermal activation and dissipation caused by the surrounding environment,
it is difficult to conduct a quantum molecular dynamics simulation that exhibits such a characteristic feature arising from macroscopic degrees of freedom.
Thus, we introduce a system-bath model in which the dynamics of excitons or electrons are described by a system Hamiltonian,
while the other degrees of freedom that arise from environmental molecules are described by a harmonic-oscillator bath (HOB).
The HOB, whose distribution takes a Gaussian form, exhibits wide applicability in simulating bath effects, despite its simplicity;
this is because the influence of the environment can, in many cases, be approximated by a Gaussian process due to the cumulative effect of the large number of environmental interactions.
In such a situation, the ordinary central limit theorem is applicable, and hence, the Gaussian distribution function is appropriate.\cite{TanimuraJPSJ06, Kampen81}
The distinctive features of the HOB model are determined by the spectral distribution function (SDF) of the coupling strength between the system and the bath oscillators for various frequency values.
By choosing the appropriate form of the SDF, the properties of the bath can be adjusted to represent various environments consisting of, for example, solid-state materials\cite{LipenHolns2015,ReichmanHolns2019}
and protein molecules.\cite{KramerAspu14,LeeCoker2016,KramerFMO2DLorentz}
Because the SDF can be different for different forms of a system Hamiltonian and system-bath coupling,
it is difficult to find an optimized Hamiltonian associated with an optimized SDF, in particular for a bath describing a fluctuation in site-site interaction energy.
In a previous study\cite{Ueno2020}, we employed a machine learning approach to construct a system-bath model for the intermolecular and intramolecular modes of molecular liquids
using atomic trajectories obtained from molecular dynamics (MD) simulations.
In this study, we extend the previous approach to investigate an exciton or electron transfer problem that is characterized
by electronic states embedded in the molecular environment using quantum mechanics/molecular mechanics (QM/MM) calculations to determine the atomic coordinates of molecules.
In particular, we focus on the exciton transfer process of the photosynthesis antenna system to investigate
how natural systems can realize such highly efficient yields, presumably by manipulating quantum mechanical processes.
As a demonstration, we consider a molecular dimer made of two dipole-coupled dye monomers as a model system that is often studied experimentally and theoretically\cite{Dwayne2014,tempelaar_laser-limited_2016,duan_origin_2015}.
Then, we construct a model Hamiltonian of an indocarbocyanine dimer compound.\cite{Dwayne2014}
The accuracy of this model is examined by calculating linear and two-dimensional electronic spectra.
This paper is organized as follows. In Section 2, we introduce a model that can be used for either exciton or electron transfer and is coupled to a harmonic heat bath.
We then describe the machine learning approach that we use to determine the system parameters, the system-bath interactions, and the SDFs on the basis of QM/MM simulations.
In Section 3, we present results for an indocarbocyanine dimer model constructed from the analysis of QM/MM trajectories.
Linear absorption and two-dimensional spectra are calculated from analytical expressions for the linear and nonlinear response functions.
Section 4 is devoted to concluding remarks.
\section{THEORY}
\label{sec:Theory}
\subsection{Hamiltonian}
\label{subsec:Hamiltonian}
We consider the situations in which an exciton or electron transfer system interacts with molecular environments that give rise to dissipation and fluctuation in the system.
The Hamiltonian of the system is expressed as
\begin{equation}
\hat H_{S} = \sum _{j} \hbar \omega_j | j \rangle\langle j | + \sum _{j\ne k} \hbar \Delta_{jk}| j \rangle \langle k|,
\label{eq:system_hamiltonian}
\end{equation}
where the $j$th exciton or electron states with energies $\hbar \omega_j$ are represented by bra and ket vectors as $| j \rangle$ and $\langle j |$.
The interaction energy between the $j$th and $k$th states is described by $\hbar \Delta_{jk}$.
In our model, each state is coupled to a different molecular environment (labeled as $a$) that is treated as $N_a$ harmonic oscillators.
The total Hamiltonian is then given by
\begin{align}
\label{eq:total_hamiltonian}
H_{tot} = & H_\mathrm{S} - \sum_{a} \sum_{l=1}^{N_{a}}\alpha_l^{a} \hat V^{a} \hat x_l^{a} \nonumber \\
& +\sum_{a} \sum_{l=1}^{N_{a}} \left[ \frac{(\hat p_l^{a})^2 }{2m_l^{a} } + \frac{1}{2}m_l^{a} (\omega _l^{a})^2 (\hat x_l^{a})^2 \right],
\end{align}
where the momentum, position, mass, and frequency of the $l$th oscillator in the $a$th bath are given by $\hat{p}_{l}^{a}$, $\hat{x}_{l}^{a}$, $m_{l}^{a}$, and $\omega_{l}^{a}$, respectively.
The system part of the system-bath interaction is expressed as
\begin{equation}
\hat V^{a}= \sum _{j, k} V_{jk}^{a} | j \rangle\langle k |,
\label{eq:sys_interaction}
\end{equation}
where $V_{jk}^{a}$ is the coupling constant for the $a$th bath between the $j$ and $k$ states.
The $a$th heat bath can be characterized by the spectral distribution function (SDF), defined as
\begin{equation}
J_a (\omega) \equiv \sum_{l=1}^{N_{a}}\frac{\hbar (\alpha_{l}^a)^2}{2m_{l}^a \omega_{l}^a } \delta(\omega-\omega_{l}^a),
\label{eq:J_wgeneral}
\end{equation}
and the inverse temperature is $\beta \equiv 1/k_{\mathrm{B}}T$, where $k_\mathrm{B}$ is the Boltzmann constant.
Various environments, for example, those consisting of nanostructured materials, solvents, and protein molecules, can be modeled by adjusting the form of the SDF.
For the heat bath to act as an unlimited heat source possessing an infinite heat capacity,
the number of heat-bath oscillators $N_a$ is effectively made infinitely large by replacing $J_a (\omega)$ with a continuous distribution.
The above model has been frequently used in the analysis of photosynthetic systems,\cite{SchultenJCP2009,SchultenJCP2011,SchultenJCP2012, Ishizaki2009,SchultenFMO,Renger2005,Renger2012,Renger2015, Renger2017,Valkunas2017, IshizakiJCP15, Nov2011, Nov2015, Mukamel2013, Renger06}
electron transfer,\cite{Garg1985,Wolynes1987,Mukamel1988,TanakaJPSJ09,TanakaJCP10}
DNA,\cite{SIM_MAKRI97,SIM2004,DijkstraNJP10DNA} and photovoltaic systems.\cite{Gelinas2014, Thoss2015, Tamura2013, TamuraJPC2015, Prior2017,Mauro2020}
\subsection{Learning data: QM/MM simulations}
\label{subsec:ClassicalMD}
We next consider the pigments in a molecular system, whose electric excitation or exciton states are described by Eq. \eqref{eq:system_hamiltonian}.
The electric states of the pigments depend on the configurations of the surrounding atoms at time $t$.
The time evolution of the excited states of the system and the environmental molecules is described by QM/MM simulations.
Because our goal in constructing a system-bath model is to perform a full quantum simulation of the entire system,
we should use quantum molecular dynamics (MD) simulations to provide data on the basis of all atomic coordinates.
In practice, however, it is impossible to consider large environmental degrees of freedom accurately from a quantum mechanical perspective.
Fortunately, we can expect to obtain reasonable SDFs for quantum simulations even when they are evaluated using classical MD simulations.
This is because such evaluations employ an ensemble of molecular trajectories that exhibits a Gaussian distribution, for which the difference between the quantum and classical trajectories is expected to be minor.
Further, the dynamics of harmonic oscillators are identical in the classical and quantum cases
because both the classical and quantum Liouvillian for the $l$th oscillator in the $a$th bath are expressed as
${\hat L}_l^a =-({p_l^a}/{m_l^a})({\partial }/{\partial x_l^a})+m_l^a (\omega_l^a)^2 x_l^a ({\partial }/{\partial p_l^a})$.
We thus use the classical MD simulation technique to acquire the atomic coordinates of the pigments and the molecular environment.
We then conduct quantum chemistry calculations to obtain the desired electronic states, typically the highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) states of the pigments as a function of time.
The excited energy of the $j$th pigment is denoted by $\epsilon_{jj}(t)$, and the interaction energy between the $j$th and $k$th pigments that includes the bath-induced fluctuation is denoted by $\epsilon_{jk}(t)$;
these values can be obtained using any kind of numerical program for quantum chemistry calculations.
If the main system is too large to enable evaluation of all electronic states, we evaluate the site energy $\epsilon_{jj}(t)$ and the interaction energy $\epsilon_{jk}(t)$ separately.
From the calculated $\epsilon_{jj}(t)$ and $\epsilon_{jk}(t)$,
we evaluate the system-bath coupling strength in $\hat V_{jk}^{a}$ and its SDF, in addition to the excitation energy $\hbar \omega_j$ and the interaction energy $\Delta_{jk}$, based on the machine learning approach.
While the SDFs evaluated based on the MD simulations are temperature dependent, the SDFs for the HOB are temperature independent;
we therefore eliminate the temperature dependence of optimized parameters, assuming that the sampled MD trajectories exhibit canonical ensembles at finite temperatures.
\subsection{Machine Learning}
\label{subsec:MachineLearing}
For $n$ exciton or electronic excitation sites, we express the simulated data in terms of $\epsilon_{jk}(t)$,
which describes the excitation and site-site interaction energies of interest obtained from the QM/MM simulation.
The learning Hamiltonian is then expressed as
\begin{equation}
\label{eq:hamiltonian_matrix}
H(t) = \sum _{j, k=1}^n \epsilon_{jk}(t) | j \rangle \langle k|.
\end{equation}
We then attempt to reproduce the trajectories of $\epsilon_{jk}(t)$
for the total Hamiltonian, Eq. \eqref{eq:total_hamiltonian}, with Eqs. \eqref{eq:system_hamiltonian} and \eqref{eq:sys_interaction}.
Although the system-bath model considers an infinite number of bath degrees of freedom, here,
we employ a finite number of bath oscillators to estimate the SDFs.
The samples used for machine-learning training are then regarded as averages over the classical bath oscillators
for a given choice of the system and system-bath parameters.
The site energy and interaction energy can be expressed as
\begin{equation}
\epsilon_{jj}(t) = \hbar\omega_j - \delta_{jj}(t)
\end{equation}
and
\begin{equation}
\epsilon_{jk}(t) = \hbar\Delta_{jk} - \delta_{jk}(t),
\end{equation}
respectively, where $\delta_{jk}(t)$ is expressed in terms of the linear function of the bath coordinates as
\begin{equation}
\label{eq:ml_delta_define}
\delta_{jk}(t) = \sum_a \alpha_{jk}^a x^a_{jk}(t).
\end{equation}
Here, the $a$th bath coordinate for the $jk$ site is described as a function of time as
\begin{equation}
\label{eq:ml_xbath_define}
x^a_{jk}(t) = A^a_{jk} \sin\left(\phi^a_{jk} + \omega^a_{jk} t \right),
\end{equation}
where $A^a_{jk}$ and $\phi^a_{jk}$ are the amplitude and phase of the $a$th bath oscillator for the $jk$ site, respectively.
The phase $\phi^a_{jk}$ is randomly chosen to avoid recursive oscillator motion.
Although we can consider such correlated modes separately by introducing additional baths,
here, we assume that the influences of the individual bath modes are all independent and that the correlations between the fluctuations among different modes can be ignored.
From Eqs. \eqref{eq:ml_delta_define} and \eqref{eq:ml_xbath_define}, $\delta_{jk}(t)$ can be expressed as
\begin{equation}
\delta_{jk}(t) = \sum_a c_{jk}^a
\sin\left(\phi^a_{jk} + \omega^a_{jk} t \right),
\end{equation}
where
\begin{equation}
\label{eq:def_cjk}
c_{jk}^a = \alpha_{jk}^a A^a_{jk}
\end{equation}
and we treat the system-bath coupling parameters as the product of these two variables.
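A minimal sketch of how a single fluctuation trajectory $\delta_{jk}(t)$ is synthesized from the oscillator expansion with randomized phases; the weights $c^a$ and frequencies $\omega^a$ below are placeholders, not fitted values.

```python
import math
import random

random.seed(1)  # fixed seed for reproducibility of the illustration

def delta_trajectory(c, omega, times):
    """delta(t) = sum_a c_a * sin(phi_a + omega_a * t), phi_a ~ U[0, 2*pi)."""
    phases = [random.uniform(0.0, 2.0 * math.pi) for _ in c]
    return [sum(ca * math.sin(ph + wa * t)
                for ca, ph, wa in zip(c, phases, omega))
            for t in times]

c = [0.5, 0.3, 0.1]        # placeholder weights c^a
omega = [1.0, 2.7, 5.3]    # placeholder (incommensurate) frequencies
times = [0.01 * i for i in range(1000)]
traj = delta_trajectory(c, omega, times)
print(min(traj), max(traj))  # bounded by +/- sum(c) = +/- 0.9
```

Randomizing the phases, as in the text, prevents all trajectories from recurring in lockstep; the fluctuation amplitude is bounded by $\sum_a |c^a|$.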
In the machine learning context, the bath parameters and the system-bath interactions are expressed as a set of latent variables, defined as
\begin{equation}
\label{eq:ML_parameters}
\theta = (\left\{\omega_{j}\right\},\left\{ \Delta_{jk}\right\},\left\{ c^a_{jk} \right\} ),
\end{equation}
where $\left\{... \right\}$ is the set of system and bath parameters.
The trajectories of $\epsilon_{jj}(t)$ and $\epsilon_{jk}(t)$, obtained from the QM/MM calculations,
are described as the vibrational motions of the pigment molecule and the surrounding molecules.
We then assume that the probability distribution of the pure-state energy $\lambda_i$ is determined by a Gaussian process and is described
by a set of bath parameters $\alpha^a_{jk}$ and $\phi^a_{jk}$; these parameters are obtained by optimizing the probability distribution defined as
\begin{equation}
P(\lambda_i\mid\theta) = \int \prod_{k,j,a} d\phi^a_{jk} P(\lambda_i\mid \theta; \phi^a_{jk})P(\phi^a_{jk}),
\end{equation}
which represents marginalization over the oscillator phases $\phi^a_{jk}$; this marginalization is introduced to avoid trapping in a local minimum during gradient-based optimization.
Here, $P(\phi^a_{jk})$ is the uniform distribution of $[0, 2\pi)$ and
\begin{equation}
P(\lambda_i|\theta; \phi^a_{jk}) \propto \exp\left[-\sigma\left(\lambda_i - E_i \right)^2 \right],
\end{equation}
where $E_i \equiv E_i(\theta; \phi^a_{jk})$ is the predicted energy as a function of the parameter set $\theta$ and initial phase $\phi^a_{jk}$
for the model Hamiltonian, Eq. \eqref{eq:hamiltonian_matrix}, and $\sigma$ is the error width.
Our goal in employing a machine learning method is to choose the optimal parameter set in $E_i(\theta; \phi^a_{jk})$ that maximizes the probability distribution for given data $\lambda_i$.
Among several optimization methods, we use maximum likelihood estimation (MLE),
where the loss function is expressed in terms of the negative log of the probability as
\begin{equation}
L = \sum_i (\lambda_i - E_i)^2.
\end{equation}
To minimize $L$, we employ the Adam gradient method for optimization of the parameter set as
\begin{equation}
\label{eq:Loss_gradient}
\theta \leftarrow \theta - \gamma \frac{\partial L}{\partial \theta},
\end{equation}
where $\gamma$ is the learning rate.
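As a toy illustration of the Adam-based optimization, the sketch below fits a single amplitude $c$ in a one-oscillator model $E(t)=c\sin(\omega t)$ by minimizing the squared-error loss; all parameter values are illustrative, and the actual parameter set of the model is much larger.

```python
import math

def adam_fit(times, data, omega, steps=3000, gamma=0.02,
             b1=0.9, b2=0.999, eps=1e-8):
    """Fit c in E(t) = c*sin(omega*t) to data by Adam gradient descent."""
    c, m, v = 0.0, 0.0, 0.0
    for step in range(1, steps + 1):
        # gradient of L = sum_i (data_i - c*sin(omega*t_i))^2 w.r.t. c
        g = sum(-2.0 * (d - c * math.sin(omega * t)) * math.sin(omega * t)
                for t, d in zip(times, data))
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment estimate
        m_hat = m / (1 - b1 ** step)       # bias corrections
        v_hat = v / (1 - b2 ** step)
        c -= gamma * m_hat / (math.sqrt(v_hat) + eps)  # descent step
    return c

times = [0.1 * i for i in range(100)]
data = [0.7 * math.sin(1.3 * t) for t in times]  # synthetic data, c_true = 0.7
c_fit = adam_fit(times, data, omega=1.3)
print(c_fit)  # converges to approximately 0.7
```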
In this way, we obtain the $J_{jk}$ element of the SDF for the $jk$ site.
Because the energy distribution of each bath oscillator $E^a_{jk} = \frac{1}{2}m^a_{jk}\left(\omega^a_{jk}\right)^2 \left(A^a_{jk}\right)^2$ is assumed to obey a canonical ensemble,
the oscillator amplitude can be expressed as
\begin{equation}
\left<A^a_{jk}\right> = \frac{1}{\sqrt{\pi\beta m^a_{jk}\left(\omega^a_{jk}\right)^2}}.
\label{eq:avg_Ak}
\end{equation}
Substituting Eqs. \eqref{eq:def_cjk} and \eqref{eq:avg_Ak} into Eq. \eqref{eq:J_wgeneral}, we obtain
\begin{equation}
\label{eq:J_fromML}
J_{jk} (\omega) = \sum_{a=1}^{N_{a}}\frac{1}{2}\pi\beta\hbar\omega^a_{jk} (c_{jk}^a)^2 \delta(\omega-\omega_{jk}^a).
\end{equation}
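Assembling the learned weights into a binned spectral density, as in the expression above, can be sketched as follows; the weights, frequencies, and reduced units are placeholders for illustration.

```python
import math

HBAR, BETA = 1.0, 1.0  # reduced units; in practice set by the simulation temperature

def sdf_from_weights(c, omega, bins, omega_max):
    """Bin J_jk(w) = sum_a (pi*beta*hbar/2) * w_a * c_a^2 * delta(w - w_a)
    into a histogram on [0, omega_max]."""
    width = omega_max / bins
    J = [0.0] * bins
    for ca, wa in zip(c, omega):
        idx = int(wa / width)
        if idx < bins:
            J[idx] += 0.5 * math.pi * BETA * HBAR * wa * ca * ca / width
    return J

c = [0.4, 0.2, 0.1]        # learned weights c_jk^a (placeholders)
omega = [0.5, 1.5, 2.5]    # oscillator frequencies (placeholders)
J = sdf_from_weights(c, omega, bins=3, omega_max=3.0)
print(J)
```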
Because $J_{jk}(\omega)$ rapidly changes over time in accordance with the structural changes in the pigment molecules and environments,
we evaluate $J_{jk}(\omega)$ by averaging the different sample trajectories.
From a mathematical perspective, $c_{jk}$ is the frequency domain expression of the time domain data, and $J_{jk}(\omega)$ can be obtained
by averaging the power spectra $c_{jk}^2$ using the Wiener-Khinchin theorem.
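The averaging of power spectra over sample trajectories can be sketched as below; the synthetic sinusoidal trajectories with random phases stand in for the fitted time-series data, and the brute-force DFT is used only for transparency.

```python
import cmath
import math
import random

random.seed(0)

def power_spectrum(signal):
    """|DFT|^2 / n for the first n//2 frequency bins (brute-force DFT)."""
    n = len(signal)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(signal))) ** 2 / n
            for k in range(n // 2)]

n, n_samples = 64, 20
avg = [0.0] * (n // 2)
for _ in range(n_samples):
    # each "sample trajectory" is the same mode with a randomized phase
    phi = random.uniform(0, 2 * math.pi)
    sig = [math.sin(2 * math.pi * 8 * i / n + phi) for i in range(n)]
    avg = [a + p / n_samples for a, p in zip(avg, power_spectrum(sig))]
print(avg.index(max(avg)))  # peak at the driving frequency bin (k = 8)
```

The random phases drop out of the averaged power spectrum, which is the content of the Wiener-Khinchin argument used in the text.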
It should be noted that the absolute intensity of $J_{jk}(\omega)$ cannot be determined in the framework of the present study because,
for simplicity, we do not evaluate the dipole moment of this complex material;
instead, we evaluate the intensity of $J_{jk}(\omega)$ from the width of the experimentally obtained linear absorption spectrum.
\section{NUMERICAL DEMONSTRATION}
\label{sec:numerical_demo}
\begin{figure}[htbp]
\includegraphics[width=0.4\columnwidth]{fig1_dimer_image.pdf}
\caption{ The molecular structure of the indocarbocyanine dimer.
Two pigments are connected by methylene chains.
The gray/blue/white atoms represent carbon/nitrogen/hydrogen, respectively.
The red square represents pigment 1, whereas the blue square represents pigment 2.
}
\label{fig:dimer_image}
\end{figure}
\subsection{Indocarbocyanine dimer}
We now demonstrate our numerical approach for a dimer of identical indocarbocyanine molecules.\cite{Dwayne2014}
Figure \ref{fig:dimer_image} displays the structure of the pigment molecule.
The ground and excited states of each pigment are expressed as $| 0\rangle_j$ and $| 1\rangle_j$ for $j = 1$ and 2, respectively.
The ground state energies are each set to zero.
The system Hamiltonian is then expressed as
\begin{align}
\label{eq:Hamiltonian_dimer}
\hat H = & \omega_{0}\left( | 1\rangle_1 {}_{1}\langle 1 | + | 1 \rangle_2 {}_{2}\langle 1 |\right) \nonumber \\
& + \Delta\left(| 0 \rangle_1 {}_{2} \langle 1| + | 1 \rangle_1 {}_{2}\langle 0 |\right),
\end{align}
where $\omega_{0}$ is the excitation energy of a pigment and $\Delta$ is the interaction energy between the dimers.
By diagonalizing $\hat H$, we obtain the eigenvalues $\omega_{k}$ for the $k=+$ and $-$ eigenstates of
$| 1+ \rangle =(| 1\rangle_1| 0\rangle_2+| 0\rangle_1 | 1\rangle_2)/\sqrt{2}$ and $| 1- \rangle =(| 1\rangle_1| 0\rangle_2-| 0\rangle_1 | 1\rangle_2)/\sqrt{2}$, respectively, as
\begin{equation}
\omega_{\pm} = \omega_0 \pm \Delta.
\end{equation}
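This eigenstructure can be verified directly by diagonalizing the $2\times 2$ one-exciton block $H=\begin{pmatrix}\omega_0 & \Delta\\ \Delta & \omega_0\end{pmatrix}$ in closed form; the numerical values below are arbitrary.

```python
import math

def dimer_eigs(w0, delta):
    """Closed-form eigenpairs of the symmetric 2x2 dimer Hamiltonian:
    w_pm = w0 +/- delta with symmetric/antisymmetric eigenvectors."""
    v_plus = (1 / math.sqrt(2), 1 / math.sqrt(2))
    v_minus = (1 / math.sqrt(2), -1 / math.sqrt(2))
    return (w0 + delta, v_plus), (w0 - delta, v_minus)

(wp, vp), (wm, vm) = dimer_eigs(w0=2.0, delta=0.3)
H = [[2.0, 0.3], [0.3, 2.0]]
# verify H v = w v for both eigenpairs
for w, v in ((wp, vp), (wm, vm)):
    hv = [H[0][0] * v[0] + H[0][1] * v[1], H[1][0] * v[0] + H[1][1] * v[1]]
    print(max(abs(hv[i] - w * v[i]) for i in range(2)))  # ~0
```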
The excitation energy and interaction energy fluctuations as functions of time,
arising from intramolecular motions of the pigment and intermolecular motions of surrounding molecules,
are expressed as $\delta \omega_{\pm} (t)$ and $\delta\Delta(t)$, respectively.
These functions are evaluated based on the quantum chemistry calculations for given atomic trajectories of the entire molecular system determined by MD simulations.
In our model, because each exciton state is delocalized and the effects of the environmental modes are site specific,
we employ an individual heat bath expressed as the sum of site-specific oscillators to describe the energy fluctuation at each exciton site. The distribution of the exciton-oscillator coupling strength is then evaluated based on the machine learning approach.
Although it is possible to introduce a global heat bath to induce low-frequency environmental modes that are coupled to multiple exciton states,
we find that such effects are not significant in the present case.
Therefore, the excitation energy and interaction energy fluctuations are expressed as
\begin{align}
\label{eq:delta_omega}
\delta\omega_\pm(t) = \sum_{m=1,2} w_{\pm,m}(t) \sum_a c^a_{\omega_0 m} \sin(\omega^a t + \phi^a_{\omega_0 m}),
\end{align}
and
\begin{align}
\label{eq:delta_delta}
\delta\Delta(t) = w_{\Delta}(t)\sum_a c^a_{\Delta} \sin(\omega^a t + \phi^a_{\Delta}),
\end{align}
where $c^a_{b m}$ is the amplitude (scaled by $\alpha$, as described in Eq. \eqref{eq:def_cjk})
and $\phi^a_{b m}$ is the initial phase of the $a$th oscillator for the state indices $b = 11$ (or $22$) and $12$.
We introduce the localization weight functions $w_{\pm ,m}(t)$ and $w_{\Delta}(t)$,
as obtained from the diagonalization of the pigment-based Hamiltonian, expressed in Eq. \eqref{eq:Hamiltonian_dimer},
to describe the pigment-specific environment effects in the delocalized exciton state representation.
These localization weight functions are evaluated based on the electronic states of the pigment $m = 1$ and $2$ established by the atomic orbitals (AO) obtained from quantum chemistry calculations.
Thus, the target eigenenergies to be described by the system-bath model, $\lambda_{\pm}(t; \theta)$, are expressed as
\begin{align}
\lambda_{\pm}(t; \theta) = \omega_0 + \delta\omega_\pm(t) \pm \left(\Delta + \delta\Delta(t)\right),
\end{align}
where $\theta$ is a set of parameters $\theta = \left(\omega_0, \left\{ c^a_{\pm ,m}\right\},\Delta,\left\{ c^a_{\Delta}\right\} \right)$.
As learning data, we compute the exciton energy $E_\pm(t)$, the molecular orbital (MO) coefficients for each exciton state,
and wavefunctions (atomic orbital (AO) coefficients for each MO) from quantum chemistry calculations for the given atomic coordinates as a function of time.
Additionally, the movements of all atoms in the system are evaluated from the classical MD simulation.
Using these data, we optimize the set of parameters $\theta$.
To evaluate the weight function $w_{k, m}(t)$, we calculate the exciton and hole populations $p_{k,m}^{ex}(t)$ and $p_{k,m}^{h}(t)$,
obtained by summing the absolute squares of the AO coefficients of the MOs localized on pigment $m$ for excited state $k$.
The weight functions are then evaluated as $w_{k,m}(t)=p_{k,m}^{ex}(t) p_{k,m}^{h}(t)$
and $w_\Delta (t) = \sum_{k=\pm} \left( p_{k,1}^{h}(t) p_{k,2}^{ex}(t)+p_{k,2}^{h}(t) p_{k,1}^{ex}(t) \right)$.
As these definitions indicate, the exciton states are localized when $w_{\pm,m}$ is close to 1, whereas the exciton states are distributed among the pigments when $w_{\Delta}$ is close to 1.
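The weight construction above can be sketched in a few lines of Python (a minimal stdlib illustration; the population arrays are hypothetical inputs standing in for the AO-coefficient sums described in the text):

```python
def localization_weights(p_ex, p_h):
    """p_ex[k][m], p_h[k][m]: exciton/hole populations on pigment m (0, 1)
    for exciton state k (0 -> '-', 1 -> '+'); each row is assumed to sum to 1."""
    # Site weights: large when electron and hole sit on the same pigment.
    w = [[p_ex[k][m] * p_h[k][m] for m in range(2)] for k in range(2)]
    # Delocalization weight: large when electron and hole sit on different pigments.
    w_Delta = sum(p_h[k][0] * p_ex[k][1] + p_h[k][1] * p_ex[k][0]
                  for k in range(2))
    return w, w_Delta
```

For fully localized states (each exciton confined to one pigment) this gives $w_{k,m} \to 1$ and $w_\Delta \to 0$, while for fully delocalized states (all populations equal to $1/2$) it gives $w_\Delta \to 1$, matching the limits stated in the text.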
To optimize the system and bath parameter set, we minimize the loss function
\begin{align}
\label{eq:loss_function}
L & = \sum_n\sum_t L^n(t) \nonumber \\
& = \sum_n\sum_t \left[(\lambda_- (t; \theta^n) - E^n_{-}(t))^2 + (\lambda_+(t; \theta^n) - E^n_{+}(t))^2\right],
\end{align}
where $E^n_-(t)$ and $E^n_+(t)$ are the lowest ($\left|1-\right>$) and second-lowest ($\left|1+\right>$) excitation energies, and the index $n$ labels the $n$th sample.
Using the MLE method, we optimize $c^a_{\omega_0 m}$ and $c^a_{\Delta}$ for each time series as a sample set.
To apply the machine learning algorithm, the time series of the tuple $(E^n_-(t), E^n_+(t), w^n_{k,m}(t))$ are regarded as the input feature variables.
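The loss of Eq. \eqref{eq:loss_function} is a plain sum of squared residuals over time points and samples; a minimal sketch (names are illustrative, not from the actual code) for a single sample reads:

```python
def sample_loss(lam_pairs, E_pairs):
    """Sum over time points of (lam_- - E_-)^2 + (lam_+ - E_+)^2 for one sample;
    summing the result over samples n gives the total loss L of Eq. (loss_function)."""
    return sum((lm - Em) ** 2 + (lp - Ep) ** 2
               for (lm, lp), (Em, Ep) in zip(lam_pairs, E_pairs))
```

The loss vanishes exactly when the model eigenenergies $\lambda_\pm(t;\theta^n)$ reproduce the quantum chemistry energies $E^n_\pm(t)$ at every time point.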
In the indocarbocyanine case, the two pigments are symmetric, and the bath SDFs for each pigment are considered to be identical.
Therefore, we use the averaged value $c^a_{\omega_0} = (c^a_{\omega_0 1} + c^a_{\omega_0 2})/2$.
We then evaluate $J_{jj}(\omega) (j = 1, 2)$ and $J_{12}(\omega)$, namely, $J(\omega)$ for $\omega_0$ and $\Delta$, from $c^a_{\omega_0}$ and $c^a_\Delta$, respectively, using Eq. \eqref{eq:J_fromML}.
\subsection{Fourier-based approach versus machine learning approach}
A commonly used approach for evaluating the SDFs of $\epsilon_{ij}(t)$ utilizes the Fourier transformation of the autocorrelation function
expressed as $\mathcal{F}\left[\left<\delta\epsilon_{ij}(0)\delta\epsilon_{ij}(t)\right>\right]$,
where $\delta\epsilon_{ij}(t) \equiv \epsilon_{ij}(t) - \left<\epsilon_{ij}\right>$.
In the actual calculation, using the time series $\epsilon_{ij}^n(t)$, where $n$ is the sample index, we evaluate the averaged autocorrelation function expressed as
\begin{equation}
C_{ij}(t) = \frac{1}{N}\sum_n\left<\delta\epsilon_{ij}^n(0)\delta\epsilon_{ij}^n(t)\right>,
\end{equation}
where $N$ is the total sample number.
We then obtain the SDF as
\begin{equation}
\label{eq:gen_J_FT_C}
J_{ij}(\omega) = \mathcal{F}\left[C_{ij}(t)\right].
\end{equation}
Alternatively, using the Wiener--Khinchin theorem for stationary random processes,
we can obtain the SDF as the average of the power spectra $P^n_{ij}(\omega) = \left|\mathcal{F}\left[\delta\epsilon^n_{ij}(t)\right]\right|^2$ as
\begin{equation}
\label{eq:gen_J_avg_P}
J_{ij}(\omega) = \frac{1}{N}\sum_n P^n_{ij}(\omega).
\end{equation}
Although this Fourier-based approach is simple and straightforward,
for the system-bath Hamiltonian, the obtained SDFs are not necessarily the optimal choice for describing the QM/MM data
because the exciton and interaction energies are mutually dependent; thus, $J_{ij}(\omega)$ and $J_{ik}(\omega)$ cannot be evaluated separately.
In the machine learning approach, however, it is possible to optimize not only $J_{ij}(\omega)$ and $J_{ik}(\omega)$ but also $\omega_0$ and $\Delta$
without assuming explicit relationships between the SDFs and the system parameters.
Moreover, if necessary, we can introduce additional conditions for the optimization of the SDFs and system parameters,
as we did here by employing $w_{k,m}(t)$ to account for the effects of exciton localization in the indocarbocyanine dimer.
\subsection{CALCULATION DETAILS}
\label{sec:Calculation}
\subsubsection*{Step 1: Classical MD}
We prepared a system consisting of an indocarbocyanine dimer molecule with 1024 methanol molecules as the solvent.
The classical MD simulations were carried out with the GROMACS software package. \cite{berendsen1995gromacs,abraham2015gromacs,bekker1993gromacs}
The preparation MD simulations were performed at 1 atm and 300 K in an NPT ensemble.
The equilibrium MD run was carried out for 20 ps in an NVT ensemble followed by a sampling MD run for 5 ps in an NVE ensemble.
These equilibrium MD runs and sampling MD runs were repeated 100 times.
The entire MD simulation was performed with a time step of 0.1 fs.
\subsubsection*{Step 2: Data Preparation using Quantum Chemistry Calculations}
To obtain the sample trajectories of the excitation energies, we conducted ZINDO calculations\cite{ZINDO1973,ZINDO1991}
and natural transition orbital analysis\cite{NTO2003} for a 1 fs period in one sample using the ORCA software package.\cite{neese2012orca}
We then obtained 100 $(E_-(t), E_+(t), w(t))$ samples that were 5 ps in length.
\subsubsection*{Step 3: Parameter Optimization for the Machine Learning Approach}
We arranged the data of 5 ps length obtained from step 2 according to the starting time in each of 175 steps.
We then extracted 604 trajectories, each containing 1000 data points at an interval of 4 fs.
These sampling data were used as the input feature values in the machine learning calculations.
To perform learning calculations, we developed Python codes using the TensorFlow library.\cite{tensorflow}
The training was performed with the learning rate $\alpha = 1\times 10^{-4}$ for the first 200 steps
and then the rate was reduced to $\alpha = 1\times 10^{-5}$ for the next 200 steps.
The number of epochs was chosen to avoid the overfitting problem arising from the MLE that occurs with a gradient method. In the present case, this effect appears in the very low frequency region below 10 cm$^{-1}$ of $J(\omega)$ (see Appendix \ref{subsec:overfitting}). Because such slow dynamics of the environment are not important in the present exciton transfer problem, we avoided this effect by simply choosing a smaller number of epochs, a practice known as the early-stopping technique. To minimize the loss function, we employed the Adam algorithm.
The number of bath oscillators is $N = 600$.
The frequency of the $a$th bath oscillator is $\omega^a = a\Delta\omega$ for $a = 1, 2, \cdots, N$, where $\Delta\omega$ is approximately $8.3391~\mathrm{cm}^{-1}$.
The initial values of the target optimization variables for the SDF amplitudes were set as $c^a_{\omega_0 m}=1\times 10^{-5}$ and $c^a_{\Delta}=1\times 10^{-5}$,
and the exciton and interaction energies were set as $\omega_0 = {(\left<E_+\right> +\left<E_-\right>)}/{2}$ and $\Delta = {(\left<E_+\right> - \left<E_-\right>)}/{2}$.
The initial phases $\phi^a_{b}$ were randomized 5 times for each series of samples.
The loss functions were averaged over each set of 64 samples as a minibatch, while the parameters were optimized for every minibatch.
For the 604 samples, each epoch contained 9 iterations.
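The minibatch bookkeeping above (604 samples, batches of 64, 9 iterations per epoch) and the stepped learning rate can be summarized as follows; this is our reading of the quoted numbers, not the actual training script:

```python
def learning_rate(epoch):
    """Stepped schedule described in the text: 1e-4 for the first 200 epochs, then 1e-5."""
    return 1e-4 if epoch < 200 else 1e-5

def iterations_per_epoch(n_samples=604, batch_size=64):
    """Number of full minibatches per epoch; the remainder of 604/64 is dropped."""
    return n_samples // batch_size
```

With 604 samples and a minibatch size of 64, integer division gives the 9 iterations per epoch quoted in the text.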
\subsubsection*{Step 4: Calculations of Optical Spectra}
We assumed that the dipole operator for the indocarbocyanine dimer was given by
$\hat \mu_1+\hat \mu_2 = \mu (| 0\rangle_1 {}_{1}\langle 1 |+| 1\rangle_1 {}_{1}\langle 0 | +| 0\rangle_2 {}_{2}\langle 1 |+| 1\rangle_2 {}_{2}\langle 0 |)$,
which created transitions between the ground state $|00 \rangle$ and the excited states $|1+ \rangle$ and $|11 \rangle$,
while optical transitions from these states to the state $|1- \rangle$ were forbidden.
Thus, the optical transitions in the present system were modeled by a three-level system with eigenenergies of 0, $\Omega_+$, and $2\omega_0$.
This allowed us to apply analytical expressions of the linear and nonlinear response functions, as presented in Appendix \ref{subsec:Spectroscopy}.
We then calculated the linear absorption and two-dimensional (2D) electronic spectroscopy signals using line-shape functions.
\subsection{RESULTS AND DISCUSSION}
\label{sec:Results}
\begin{figure}[htbp]
\includegraphics[width=0.6\columnwidth]{fig2_data_sample.pdf}
\caption{
Samples of the data used in learning calculations for (a) the excitation energy $E_{k}$ for $k=\pm$,
(b) the weight functions $w_\Delta$ (green dashed curve),
$w_{k,m}(t)$ for pigments $m = 1$ (blue) and $m = 2$ (orange) for $k = +$ (solid line) and $k = -$ (dotted line), respectively.
Panel (c) plots the difference between the energy levels $E_\pm$ to illustrate the relationship between the energies and the weight functions.
}
\label{fig:data_sample}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=0.6\columnwidth]{fig3_loss_function.pdf}
\caption{The learning curve of the loss function for the indocarbocyanine dimer model.
The vertical line at epoch 200 indicates the epoch where the learning rate changed,
and the vertical line at epoch 400 indicates early stopping.}
\label{fig:loss_function}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=0.8\columnwidth]{fig4_J_omega.pdf}
\caption{ The SDFs of the indocarbocyanine dimer in the methanol environment for the exciton energy $J_{11}(\omega)$ ($=J_{22}(\omega)$) (blue) and the interaction energy $J_{12}(\omega)$ (orange)
obtained with the machine learning approach.
}
\label{fig:J_omega}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=0.7\columnwidth]{fig5_triplet.pdf}
\caption{Energy-level diagram for a dimer system that undergoes random fluctuations in the excited energy and coupling strength described by $\delta \omega(t)$ and $\delta \Delta(t)$, respectively.
For the description of pure dephasing, only the difference between the energies involved in the optical excitation is important:
the frequency fluctuation between $|00 \rangle$ and $|1+ \rangle$ is given by $\delta \omega(t)+\delta \Delta(t)$,
whereas that between $|1+ \rangle$ and $|11 \rangle$ is given by $\delta \omega(t)-\delta \Delta(t)$.
For perfectly uncorrelated fluctuations, we consider $\delta \omega(t)$ and $\delta \Delta(t)$ independently.
The dashed line represents the energy level of the forbidden state $|1 - \rangle$.
\label{triplet}
}
\end{figure}
Representative examples of the prepared dataset are plotted in Fig. \ref{fig:data_sample}.
The abrupt changes in the exciton energies in Fig. \ref{fig:data_sample}(a) occur due to the exciton transfer between pigments 1 and 2 that takes place on time scales of 10--100 fs.
As illustrated in Fig. \ref{fig:data_sample}(c), the difference in the exciton energies, $E_+ - E_-$, exhibits minima coinciding with the exciton transfer processes.
Although such minima are significantly narrower and deeper than those caused by energetic fluctuations, as depicted by the red circles in Fig. \ref{fig:data_sample}(c),
it is difficult to separate the effects of exciton transfer from the energy fluctuations due to environmental motions.
By introducing the localization weight functions $w_{\pm,m}(t)$ and $w_{\Delta}(t)$ in Eqs. \eqref{eq:delta_omega} and \eqref{eq:delta_delta} to eliminate the effects of the nonenvironmental origin involved in the learning trajectories,
we can stabilize and enhance the efficiency of the machine learning process.
In Fig. \ref{fig:loss_function}, we depict the learning curve of the loss function, as defined in Eq. \eqref{eq:loss_function}.
Averaged over the random samplings of $\phi$, the loss function converged monotonically to a certain positive value, which demonstrates the efficiency of the present algorithm.
The initial parameter values of the excitation energy and the interaction energy were set as $\omega_0$ = 17736 cm$^{-1}$ and $\Delta$ = 1004 cm$^{-1}$,
whereas the optimized values of the excitation energy and the interaction energy were given by $\omega_0$ = 17794 cm$^{-1}$ and $\Delta$ = 963 cm$^{-1}$,
which are closer to the values that fit the experimentally obtained spectra.\cite{Dwayne2014} Here, to avoid overfitting problems,
we employed the early-stopping technique (see Appendix \ref{subsec:overfitting}).
In Fig. \ref{fig:J_omega}, we display the results of SDFs for the excitation energy $J_{11}(\omega)$ ($=J_{22}(\omega)$) and the interaction energy $J_{12}(\omega)$.
Various intermolecular modes below 2000 cm$^{-1}$ are observed as prominent sharp peaks near 450, 570, 1185, 1393, 1541, 1791, 1842, and 1923 cm$^{-1}$.
In the region above 2000 cm$^{-1}$, only two tiny peaks are observed at approximately 3000 cm$^{-1}$ and 3850 cm$^{-1}$.
The normal mode analysis (B3LYP/def-SV(P)) indicates that these peaks under 3300 cm$^{-1}$ arise from the intramolecular modes of the indocarbocyanine dimer,
whereas the peak at 3850 cm$^{-1}$ arises from a molecular vibration of the solvent methanol molecules.
We found that each sharp peak can be fitted by the Brownian spectral distribution,\cite{TanakaJPSJ09,TanakaJCP10}
whereas the broadened background peak in the range from 0 to 2000 cm$^{-1}$ corresponds to the intramolecular modes fitted by the Drude-Lorentz distribution.\cite{TanimruaJCP12}
The intensities of the peaks in $J_{12}(\omega)$ are considerably weaker than those in $J_{11}(\omega)$:
only the peaks near 456, 562, 1840 and 1920 cm$^{-1}$ are identified.
As we expected, the intermolecular peak positions are governed by the classical MD simulation,
whereas the heights of these peaks are predominately governed by the quantum chemistry calculation.
To verify the descriptions of the obtained SDFs and system parameters,
we computed the linear absorption and two-dimensional electronic spectra (2DES),
for the cases in which the experimentally obtained spectra were available.\cite{Dwayne2014}
In general, these spectra should be calculated in the framework of open quantum dynamics
that considers the complex interactions between the exciton sites.
However, for demonstration purposes here, we employ the analytical expressions for response functions,
ignoring transitions to the usually forbidden state.
The details of these calculations are presented in Appendix A.
The linear absorption spectrum calculated from Eqs. \eqref{eq:signal_1d} and \eqref{eq:response_1d} is presented in Fig. \ref{fig:linear_absorb}.
Here, the calculated peak is fitted by the Gaussian function $\lambda\exp\left[-\left(({\omega - \omega_c})/{\gamma}\right)^2\right]$, where the amplitude, central frequency,
and width are $\lambda=351$, $\omega_c=18583 \mathrm{cm}^{-1}$, and $\gamma=464 \mathrm{cm}^{-1}$, respectively.
Note that we could not determine the absolute SDF intensities because, for simplicity, we did not calculate the amplitude of the dipole operator.
Here, we chose to use the intensity of $J_{11}(\omega)$ to fit the experimentally obtained signal.
As presented in Fig. \ref{fig:linear_absorb}, we observe a single broadened absorption peak at $\omega_0+\Delta$
corresponding to the transition between $|00 \rangle$ and $|1+ \rangle$, while the transition between $|00 \rangle$ and the state $|1- \rangle$ is forbidden (see Fig. \ref{triplet}).
Although the experimentally observed linear absorption spectrum exhibits a $0-1$ phonon sideband peak near $\omega=19500 \mathrm{cm}^{-1}$,
here, we observe this phenomenon only as an asymmetry of the Gaussian peak in the high-frequency region.
\begin{figure}[htbp]
\includegraphics[width=8cm]{fig6_linear_absorb.pdf}
\caption{
Linear absorption spectrum of an indocarbocyanine dimer, as calculated with Eqs. \eqref{eq:signal_1d}--\eqref{eq:response_1d}
and the line-shape function Eq. \eqref{LSF} for the system parameters and SDFs obtained with the machine learning approach.
The dotted line is the fitted Gaussian peak centered at $18583 \mathrm{cm}^{-1}$, indicating that the calculated peak is asymmetric due to the $0 - 1$ phonon transition near $19500 \mathrm{cm}^{-1}$.
}
\label{fig:linear_absorb}
\end{figure}
\begin{figure}
\includegraphics[width=0.8\columnwidth]{fig7_2DES.pdf}
\caption{
2DES of an indocarbocyanine dimer, as calculated with Eqs. \eqref{resp1}--\eqref{resp3} and the line-shape function Eq. \eqref{LSF}
for the system parameters and SDFs obtained with the machine learning approach.
The waiting time $t_2$ for each signal is displayed at the top left of each panel.
The peak intensity of the signal was normalized for each $t_2$.
The waiting time $t_2$ was chosen to illustrate the maximal/minimal points of the oscillating feature of the peak elongation (see text).
}
\label{fig:2DES_sig}
\end{figure}
The 2D correlation electronic spectra calculated using the analytical expressions of the response function (Eqs. \eqref{resp1}--\eqref{resp3}) \cite{TanimuraMukamelJCP95}
are presented in Fig \ref{fig:2DES_sig}.
At $t_2 = 0$ fs, only one peak stretched near the $\omega_1 = \omega_3$ line, arising from the $|00\rangle \rightarrow |1+\rangle$ transition, is observed.
At $t_2 = 10$, $25$, and $40$ fs, the peak is elongated in the low-frequency $\omega_1$ direction due to a shift in the eigenenergy caused by the heat-bath-induced exciton-exciton interaction described by $J_{12}(\omega)$.
Because the system-bath interaction we considered here is non-Markovian and its effects appear only after a period longer than the inverse correlation time of noise, we do not observe such heat-bath effects for a small $t_2$.
Then, at approximately $t_2 = 70$ fs, the off-diagonal peak near $(\omega_1, \omega_3) = (17800, 20000)$, in units of cm$^{-1}$,
corresponding to the $|1+\rangle \rightarrow |11\rangle$ transition is observed,
whereas the peak along the $\omega_1 = \omega_3$ line shifts to $(\omega_1, \omega_3) = (21000, 20000)$
due to the $|00\rangle \rightarrow |1+\rangle$ transition that arises from the exciton-exciton interaction described by $\Delta$ and $J_{12}(\omega)$.
As $t_2$ increases, the intensities of these two peaks oscillate as a result of the population transitions
among $|10\rangle$, $|1+\rangle$, and $|11\rangle$ caused by $\Delta$ and $J_{12}(\omega)$.
This phenomenon was also observed experimentally.\cite{Dwayne2014}
The appearance of this oscillatory feature at a finite period in $t_2$ indicates the importance of the off-diagonal heat bath,
whose modeling is not easy in the framework of the existing approach.
While the off-diagonal peak still exhibits oscillatory motion at $t_2 \ge 100$ fs, the peak profile gradually elongates in the $\omega_1 = \omega_3$ direction due to the inhomogeneous broadening that arises from the diagonal bath modulation described by $J_{11}(\omega)$ and $J_{22}(\omega)$.\cite{TanimuraJPSJ06}
\section{CONCLUSION}
\label{sec:conclusion}
We introduced a machine learning approach for constructing a model that can be used to analyze the dynamics of exciton or electron transfer processes in a complex environment
based on the energy eigenstates evaluated from QM/MM simulations as functions of time.
The key feature of the present study is the system-bath model,
in which the primary exciton/electron dynamics are described by a system Hamiltonian expressed in terms of discretized energy states,
while the other degrees of freedom are described by harmonic heat baths that are characterized by SDFs.
An optimized system-bath Hamiltonian obtained from the machine learning approach allows us
to conduct time-irreversible quantum simulations that are not feasible with a full quantum MD simulation approach.
Here, we demonstrated the above features by calculating linear and nonlinear optical spectra for the indocarbocyanine dimer system in a methanol environment
in which the quantum entanglement between the system and bath plays a central role.\cite{TanimuraJPSJ06,YTpers}
The calculated results explain the experimental results reasonably well;
we found that the heat bath associated with the exciton-exciton interaction plays a key role in describing the exciton transfer process in this system.
Although here we ignored transitions to the usually forbidden state so that the analytical expressions remain applicable, we can, if necessary,
explicitly consider such transitions using the HEOM formalism.\cite{Mauro2020,TanimuraJPSJ06,YTpers,Y.Tanimura.JCP.2014,Y.Tanimura.JCP.2015}
Finally, we briefly discuss possible extensions of this study.
As shown in a previous paper,\cite{Ueno2020} the machine learning approach can be applied to a system described by reaction coordinates,
which is useful for investigating chemical reaction processes characterized by potential energy surfaces.
By combining the previous and present approaches, we can further investigate systems described by not only electronic states but also molecular configuration space,
for example, photoisomerization,\cite{T.Ikeda.JCP.2017} molecular motors,\cite{Ikeda2019} and nonadiabatic transition problems,\cite{Ikeda2018} within frameworks based on the system-bath model.
In this way, we may construct a system-bath model for entire photosynthesis reaction processes consisting of photoexcitation,\cite{Khun95,YangTonuOliverRev2015}
exciton transfer,\cite{KramerAspu14,LeeCoker2016,KramerFMO2DLorentz,SchultenJCP2009,SchultenJCP2011,SchultenJCP2012,SchultenFMO, Ishizaki2009,Renger2005,Renger2012,Renger2015, Renger2017,Valkunas2017, IshizakiJCP15, Nov2011, Nov2015, Mukamel2013, Renger06}
electron transfer,\cite{Garg1985,Wolynes1987,Mukamel1988,TanakaJPSJ09,TanakaJCP10} and proton transfer processes,\cite{Shi2009PT,Shi2011PT,Jianji2020}
including conversion processes, such as exciton-coupled electron transfer\cite{Sakamoto2017}
and electron coupled proton transfer processes.\cite{Jianji2021}
Further theoretical and computational efforts are still required, including providing learning data based on accurate, large-scale quantum simulations, improving learning algorithms,
and developing accurate and efficient open quantum dynamics theories to treat complex system-bath models.
We leave such additional endeavors to future studies in accordance with recent progress in theoretical techniques.
\begin{acknowledgement}
The authors thank Professor Yuki Kurashige for helpful discussions concerning the QM/MM simulations and for providing the indocarbocyanine dimer system.
Financial support from HPC Systems Inc. is acknowledged.
\end{acknowledgement}
\section{Introduction}\label{sec:intro}
Approximately 20~percent of the Galactic main sequence O-stars are isolated field stars \citep[e.g.,][]{Mason1998}. After correcting for clustered environments and runaways, only \mbox{4\,--\,10}~percent of all O-stars appear to be truly isolated \citep[e.g.,][]{deWit2004,deWit2005,Zinnecker2007}. Isolated field O-stars are also suggested to account for 20\,to\,30\,percent of the high-mass stellar populations in star-forming galaxies \citep{Oey2004}. The existence of these stars is perplexing when one considers two theoretical expectations: 1) the relation between the maximum stellar mass and the hosting-cluster mass excludes O-stars from forming in clusters with masses $\leq$ 250~M$_\sun$ \citep[e.g.,][]{Weidner2006}, and 2) the maximum stellar mass is set by the high-mass end of a fully-populated stellar initial mass function \citep[IMF;][]{OeyClarke2005}. In favor of {\em in situ} formation, Monte Carlo simulations of a randomly sampled IMF suggest that ``isolated'' O-stars are likely formed in clusters with numerous unseen lower-mass stars \citep{Parker2007}, while contrary to being formed {\em in situ}, field O-type stars are proposed to be explained as runaway stars that are difficult to trace back to their original cluster or are remnants of clusters that have undergone significant dissolution \citep[e.g.,][]{Pflamm2010, Gvaramadze2012}.
According to the generally accepted paradigm of star formation, stars typically form in giant molecular clouds (GMCs). \citet{Lamb2010} presented a \emph{Hubble Space Telescope} (\emph{HST}) study on isolated high-mass stars for eight main sequence OB-stars in the Small Magellanic Cloud (SMC). With a detection limit of 1 M$_\sun$, these authors found that two stars are runaways, three are in small clusters, and the remaining three appear to be isolated. Furthermore, two of these isolated OB stars are in \ion{H}{2} regions without bow-shocks, increasing the likelihood that they are in their natal environment. \cite{Oey2013} identified in the SMC 14 additional field OB stars with symmetric dense \ion{H}{2} regions around them, minimizing the likelihood that these objects have transverse runaway velocities. All stars are confirmed spectroscopically to be strong candidates for field high-mass stars that formed in situ \citep{Lamb2015}. Given that the main sequence lifetime of these particular O-stars is about an order of magnitude shorter than that of a GMC \citep[$\sim$20--40 Myr in the Local Group,][]{Kawamura2009,Miura2012}, these observations suggest that high-mass star formation may not require GMCs. Therefore, the fact that some O-stars may form in isolation allows for a new and interesting probe of high-mass star formation.
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale=0.55,angle=90]{fig01.pdf}
\caption{H$\alpha$ image (with no background subtraction) of the LMC from the Magellanic Cloud Emission Line Survey \citep{SmithMCELS1999}. As indicated in the legend, small green $\times$'s are known stars in OB associations from \citet{Lucke1970}, magenta large $\times$'s are MYSOs observed with $HST$ presented in this paper, large red filled circles are MYSOs identified in GC09, small black empty black circles are sources which were confirmed to be MYSOs in \citet{Seale2009}, and orange, yellow, and blue empty circles are definite, probable, and possible YSOs, respectively, as categorized in GC09. The cyan contours show the CO(1--0) from MAGMA \citep{Wong2011} and indicate locations of GMCs. The cyan border indicates the entire LMC NANTEN \citep{Fukui2008} CO(1--0) survey. The MYSOs observed with $HST$ are numbered, corresponding to the following names: (1) 045403.62--671618.5, (2) 050941.94--712742.1, (3) 051906.69--682137.4, (4) 052124.90-660412.9, (5) 053244.25--693005.7, (6) 053342.15--684602.8, and (7) 053431.46--683513.9. \label{LMC_YSOs}
}
\end{center}
\end{figure*}
If high-mass field stars represent a population of stars that previously formed in isolation, there should be many high-mass stars that are currently forming in isolation. Specifically, considering that there are thousands of stars with $M>$10\,M$_\sun$ that are currently in the accretion phase in the Galaxy \citep{Zinnecker2007}, and it has been proposed that 4--10\% of O-stars are formed in isolation, the Galaxy should contain 100s of isolated high-mass stars under formation. However, convincing evidence of isolated field stars that are currently in the accretion phase is lacking. The only investigation of such a candidate is that of the compact star-forming region N33 in the SMC, reported by \cite{Selier2011}. These authors did not find any traces of a stellar clustering around the region on scales $\gtrsim$\,3\,pc, while on smaller scales a marginal concentration of faint stellar sources was discovered clustered around a high-mass O6.5-O7 main-sequence star.
As pointed out by \citet{Bressert2012}, the term ``isolated high-mass star formation'' can be unclear. Specifically, these authors suggested three possible criteria that may suggest a high-mass star is \emph{not} isolated: 1) a high-mass star is forming with other high-mass stars in a molecular cloud; 2) the formation of a high-mass star may be triggered by another high-mass star; and 3) a high-mass star was gravitationally bound (within $\sim$3 pc) with another high-mass star sometime in the past. \citet{Bressert2012} were specifically interested in criterion 3, the least restrictive of the criteria, and found 15 candidates in the 30~Doradus region that may satisfy this criterion. This study is more concerned with the most restrictive of the criteria, criterion 1, and therefore it is more akin to the investigations of field O-stars by \citet{deWit2004,deWit2005} and \citet{Zinnecker2007}, who suggested that 4--10\% of all Galactic O-stars are not runaways, but formed in isolation.
Our analysis also focuses in particular on high-mass stars at early stages of their formation. During its formation, the high-mass star will typically reach the main-sequence (i.e., commencing hydrogen fusion) while still accreting \citep[e.g.,][]{YorkeSonnhalter2002,Zinnecker2007}. Since the term ``protostar" is typically reserved for pre--main-sequence (PMS) stars, we use the term young stellar object (YSO) for embedded sources. Indeed, the massive YSOs (MYSOs) targeted in this study are associated with ionized gas and are embedded \citep{Seale2009}, and thus are on the main sequence and are likely still accreting.
While observations of high-mass star forming regions can be studied at the highest resolution in our Galaxy, surveys of these regions have complications. Distances are typically measured kinematically and have high uncertainties -- especially since there is an ambiguity of assigning the velocity to a ``near arm'' distance or a ``far arm'' distance. Moreover, the Galaxy has high extinction and confusion along the line of sight, which causes significant difficulty in assigning which emission is happening at which distance. Therefore, it is very challenging to analyze Galactic emission at GMC-scales around MYSOs and to create unbiased and uniform surveys for high-mass star formation in the Galaxy. The Large Magellanic Cloud (LMC), being one of the nearest galaxies to the Milky Way, mitigates most of these problems, and therefore it is an ideal laboratory for uniform surveys of high-mass star formation. Specifically, all sources are at a similar distance of about 50~kpc \citep[][$\sim$0.25~pc per arcsecond]{Feast1999} and the nearly face-on orientation and low extinction allows for large regions to be studied unambiguously. Due to the observational advantages over the Galaxy, the entire LMC has been targeted by large surveys (e.g., \emph{Spitzer}, \citealt{Meixner2006}; \emph{Herschel}, \citealt{Meixner2013}).
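The quoted scale of $\sim$0.25~pc per arcsecond at the LMC distance of $\sim$50~kpc follows directly from the small-angle approximation; a quick stdlib-only check:

```python
import math

# Number of arcseconds in one radian (~206265).
ARCSEC_PER_RADIAN = 180.0 * 3600.0 / math.pi

def pc_per_arcsec(distance_pc):
    """Projected length subtended by one arcsecond at the given distance (small angle)."""
    return distance_pc / ARCSEC_PER_RADIAN
```

At 50~kpc this gives $\approx$0.242~pc per arcsecond, consistent with the rounded value of $\sim$0.25~pc quoted in the text.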
Based on the first criterion for isolation proposed by \citet{Bressert2012}, a high-mass star forms in isolation if it is not member of an OB association or of a runaway population.
This criterion, extended to MYSOs, requires that the isolated source be forming away from any OB association, as well as from any GMC.
The close connection between GMCs and high-mass star formation was confirmed by \citet{Wong2011} in the LMC, where more CO-luminous GMCs were found to be more likely to contain MYSOs.
Using $Spitzer$ observations, \citet[][hereafter GC09]{GC09} constructed one of the best, carefully-selected samples of MYSOs across the entirety of the LMC. Specifically, they compiled a catalog of the 248 best MYSO candidates. \citet{CG08} found that 85\% of these MYSOs are in GMCs and 65\% are in OB associations. Only 7\% of the MYSOs are outside of both GMCs and OB associations, comparable to the fraction of Galactic O-stars that appear to be isolated, non-runaway field stars.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.5]{fig02.pdf}
\caption{
Ground-based H$\alpha$ images (no continuum subtraction) in inverted grayscale illustrates that the seven isolated MYSO targets have central H$\alpha$ regions. North is up and east is left. Observations were taken with the MOSAIC2 camera on the Blanco 4~m telescope; see \citet{Stephens2014} for details of the observations. The field-of-view for each panel is 150$\arcsec$ (37.5~pc) on a side. The \citet{GC09} position is marked with an open cross. MYSO numbering is the same as Figure\,\ref{LMC_YSOs}. \label{Halpha}
}
\end{center}
\end{figure}
We employed \emph{HST} to follow up on seven of the sources identified in \citet{CG08} since they are the best candidates for isolated MYSOs in the LMC. This sample is selected based upon the fact that within 80\,pc (see Section\,\ref{ssel}), none of the sources are associated with (i) other MYSOs, (ii) OB associations, or (iii) any GMC. In all cases ground-based H$\alpha$ observations show that these MYSOs are affiliated with non-elongated, small \ion{H}{2} regions and therefore are unlikely to be part of a runaway population. We acquired WFC3 observations in the F656N, F555W, F814W, F110W, and F160W bands to examine the interstellar environment and determine the surrounding stellar populations down to $\sim$\,0.7\,M$_\sun$. The exquisite resolution of \emph{HST} immediately demonstrated in the reduced images that in fact none of the sources is single and therefore actually isolated. Instead, they are all associated with prominent stellar clusterings around them.
In this paper we present our observations from a search for ongoing isolated high-mass star formation in the LMC and describe the data reduction and point spread function (PSF) photometry applied. We present the analysis of the data for these seven MYSOs in order to characterize in depth the natal environments of high-mass stars that appear to be forming in isolation and to constrain more accurately the definition of isolated high-mass star formation. In Section\,\ref{ssandobs}, we describe our selection of the seven MYSOs, the Mopra and \emph{HST} observations, and the \emph{HST} photometry. In Section\,\ref{section3}, we characterize the isolation of each target in our sample. In Sections\,\ref{identPMS} and \ref{s:clusanl}, we identify the stellar populations associated with these seven MYSOs and characterize their clustering behavior throughout the entire \emph{HST} fields. Finally, in Section\,\ref{discussion} we discuss our findings in the general context of the phenomenon of isolated high-mass star formation.
\section{Source Selection and Observations}\label{ssandobs}
\subsection{Source selection}\label{ssel}
As discussed in Section\,\ref{sec:intro}, the LMC provides a uniform, unbiased survey of high-mass star formation. GC09 identified 248 MYSOs with [8.0~$\mu$m]~$\leq$~8 mag. Almost all of these MYSOs had follow-up $Spitzer$ IRS observations to confirm their MYSO-like spectral energy distributions \citep{Seale2009}. In Figure\,\ref{LMC_YSOs}, we show the distribution of all GC09 YSOs throughout the LMC. As per GC09, the figure includes: (1) \emph{Definite YSOs}, whose spectral features are very consistent with a YSO, (2) \emph{Probable YSOs}, which appear to be YSOs but show a hint of a feature from an alternative source, such as a background galaxy, and (3) \emph{Possible YSOs}, which are most likely other sources (i.e., stars, background galaxies, diffuse non-sources, and planetary nebulae) but cannot be ruled out as YSOs. Figure\,\ref{LMC_YSOs} also indicates the most massive definite and probable YSOs (8~$\mu$m magnitude, [8.0]~$\leq$~8, as per GC09), OB stars in known OB associations from \citet{Lucke1970}, and GMCs from the MAGMA CO(1--0) survey \citep{Wong2011}. The MAGMA survey shows the locations of LMC GMCs with molecular gas masses $M_{\rm{CO}} \gtrsim 2\times 10^4~M_\sun$. This survey used Mopra to re-observe GMCs identified in the LMC NANTEN CO(1--0) survey \citep{Fukui2008} at higher resolution (11~pc) and with approximately the same mass detection limit.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.335]{fig03.pdf}
\caption{Histogram of the projected distance (using 1$\arcsec$ = 0.25~pc) between each of the 248 GC09 MYSOs and its nearest neighbor. The red numbers indicate which MYSOs lie within each bin, using the same MYSO numbering as Figure\,\ref{LMC_YSOs}. \label{nearestneighbor}
}
\end{center}
\end{figure}
\begin{deluxetable*}{crccccccccc}
\setlength{\tabcolsep}{0pt}
\tablewidth{0pc}
\tablecaption{Magnitudes of the MYSOs in our sample in various bands.\label{tab:MYSOmags} }
\tablehead{
\colhead{No.} &
\colhead{Isolated} &
\colhead{2MASS} &
\colhead{2MASS} &
\colhead{2MASS} &
\colhead{$Spitzer$} &
\colhead{$Spitzer$} &
\colhead{$Spitzer$} &
\colhead{$Spitzer$} &
\colhead{$Spitzer$} &
\colhead{$Spitzer$} \\
\colhead{} &
\colhead{MYSO\tablenotemark{a}} &
\colhead{J} &
\colhead{H} &
\colhead{K}&
\colhead{3.6~$\mu$m} &
\colhead{4.5~$\mu$m}&
\colhead{5.8~$\mu$m}&
\colhead{8.0~$\mu$m} &
\colhead{24~$\mu$m} &
\colhead{70~$\mu$m}
}
\startdata
1 & 045403.62--671618.5 & & & $14.72\pm0.18$ & $12.12\pm0.09$ & $11.19\pm0.07$ & $9.37\pm0.08$ & $7.69\pm0.09$ & & \\
2 & 050941.94--712742.1 & $15.29\pm0.09$ & $14.77\pm0.15$ & $14.15\pm0.09$ & $11.99\pm0.06$ & $11.89\pm0.06$ & $9.43\pm0.06$ & $7.68\pm0.06$ & $3.21\pm0.11$ & $-1.62\pm0.22$ \\
3 & 051906.69--682137.4 & & & & $11.39\pm0.06$ & $10.99\pm0.06$ & $8.97\pm0.07$ & $7.10\pm0.06$ & $2.28\pm0.11$ & $-2.42\pm0.22$ \\
4 & 052124.90--660412.9 & $16.66\pm0.18$ & $15.56\pm0.17$ & $14.63\pm0.12$ & $12.35\pm0.06$ & $11.96\pm0.06$ & $9.58\pm0.06$ & $7.82\pm0.06$ & $2.69\pm0.11$ & $-1.38\pm0.22$ \\
5 & 053244.25--693005.7 & $15.65\pm0.09$ & $15.14\pm0.10$ & $14.82\pm0.14$ & $12.25\pm0.06$ & $12.26\pm0.06$ & $9.47\pm0.06$ & $7.73\pm0.06$ & $4.04\pm0.11$ & $-0.87\pm0.22$ \\
6 & 053342.15--684602.8 & & & & $11.47\pm0.08$ & $10.75\pm0.07$ & $8.73\pm0.07$ & $6.83\pm0.07$ & $0.80\pm0.10$ & $-3.95\pm0.22$ \\
7 & 053431.46--683513.9 & $15.32\pm0.08$ & $14.32\pm0.07$ & $13.14\pm0.04$ & $11.12\pm0.06$ & $10.56\pm0.05$ & $9.27\pm0.06$ & $7.74\pm0.06$ & $3.33\pm0.11$ & $-1.50\pm0.22$
\enddata
\tablecomments{All numbers are apparent magnitudes for the MYSO taken from GC09.}
\tablenotetext{a}{ The isolated MYSO name is based on the GC09 location of the right ascension ($\alpha$) and declination ($\delta$) of each MYSO; e.g., MYSO 045403.62--671618.5\ has the coordinates $\alpha$ = 4$^{\rm{h}}$54$^{\rm{m}}$3$\fs$62 and $\delta$ = --67$^\circ$16$\arcmin$18$\farcs$5.}
\end{deluxetable*}
To select the most isolated MYSOs, we reduced the \citet{CG08} sample down to seven sources based on the following criteria:
\begin{enumerate}[itemsep=0.5mm]
\item The MYSO must be spectroscopically confirmed as a MYSO with $Spitzer$ IRS \citep{Seale2009}.
\item The MYSO must have an H$\alpha$ region centered on it (see Figure\,\ref{Halpha}), confirming that the star is massive enough to produce significant hydrogen-ionizing photons.
\item The MYSO must be far from any GMC, i.e., CO emission from the NANTEN/MAGMA surveys.
\item The MYSO must be far from known OB associations.
\item The MYSO must not be near another MYSO.
\end{enumerate}
The latter three criteria are particularly important because MYSOs can achieve high velocities, and we do not want to include runaway YSOs that have been ejected from their natal environment (e.g., a GMC or OB association). Runaway stars are defined to have peculiar velocities larger than 40~km$\,$s$^{-1}$\ but can achieve velocities upward of 200~km$\,$s$^{-1}$\ \citep{Blaauw1961}, though the observed MYSOs likely have lower velocities since bow shocks are not seen in the H$\alpha$ emission in Figure\,\ref{Halpha}. Assuming these MYSOs have velocities in the plane of the sky of 40~km$\,$s$^{-1}$\ and ages of 10$^6$~yr, a MYSO can travel a projected distance of $\sim$40~pc from its natal environment. We chose MYSOs with distances of twice this value, i.e., sources that have projected distances of more than 80~pc from any GMC, OB association, or other MYSO. This reduces our sample to seven isolated MYSOs, listed\footnote{The sources listed in the table are referred to by their full GC09 names (based on their central positions). Throughout the paper, for simplicity, we refer to them by the first six digits of their right ascension, i.e., MYSOs 045403, 050941, 051906, 052124, 053244, 053342, and 053431.} in Table\,\ref{tab:MYSOmags}.
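As a sanity check on the kinematic argument above, the travel-distance estimate can be evaluated directly; a minimal calculation (plain Python; the velocity and age are the values assumed in the text, the constants are rounded):

```python
import math

# Projected distance travelled by a runaway MYSO: d = v * t,
# for v = 40 km/s (minimum runaway velocity) and t = 10^6 yr.
KM_PER_PC = 3.0857e13   # kilometers in one parsec
SEC_PER_YR = 3.156e7    # seconds in one year

v_kms = 40.0            # plane-of-sky velocity [km/s]
t_yr = 1.0e6            # assumed MYSO age [yr]

d_pc = v_kms * t_yr * SEC_PER_YR / KM_PER_PC
print(f"travel distance: {d_pc:.0f} pc")          # ~40 pc, as quoted
print(f"isolation radius (2x): {2 * d_pc:.0f} pc")
```

Doubling this distance gives the $\sim$80~pc isolation radius adopted for the sample selection.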
Given these criteria, the members of our sample are particularly far from other MYSOs, with projected distances ranging from $\sim$\,150\,pc to 600\,pc. The distribution of the projected distances between all MYSOs and their nearest neighbors is shown in Figure\,\ref{nearestneighbor}. Seventy-seven (31\%) of the 248 MYSOs in the GC09 catalog do not have another MYSO within a projected distance of 80\,pc. Of these, 14 sources have brighter 8\,$\mu$m emission than the brightest (and most likely the most massive) of our seven sources (MYSO 053342). While these sources could also be appropriate candidates for isolated high-mass star formation, they did not satisfy all of our isolation criteria and therefore were not considered for the sample. These criteria ensure that our sample targets some of the most isolated MYSOs in the LMC.
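Projected nearest-neighbor separations of this kind follow directly from catalog positions; a minimal sketch (Python/NumPy; the flat-sky approximation and the example coordinates are illustrative, not GC09 values):

```python
import numpy as np

PC_PER_ARCSEC = 0.25  # adopted LMC scale: 1 arcsec = 0.25 pc

def nearest_neighbor_pc(ra_deg, dec_deg):
    """Projected nearest-neighbor distance [pc] for each source.

    Flat-sky approximation with a cos(dec) correction at the mean
    declination; adequate for the separations considered here.
    """
    ra = np.radians(np.asarray(ra_deg, dtype=float))
    dec = np.radians(np.asarray(dec_deg, dtype=float))
    dra = (ra[:, None] - ra[None, :]) * np.cos(dec.mean())
    ddec = dec[:, None] - dec[None, :]
    sep_arcsec = np.degrees(np.hypot(dra, ddec)) * 3600.0
    np.fill_diagonal(sep_arcsec, np.inf)  # ignore self-pairs
    return sep_arcsec.min(axis=1) * PC_PER_ARCSEC

# Illustrative positions (degrees), NOT the GC09 catalog:
ra = [73.5, 77.4, 79.8, 80.3]
dec = [-67.3, -71.5, -68.4, -66.1]
print(nearest_neighbor_pc(ra, dec))
```

Applied to the full GC09 catalog, this yields the distribution shown in Figure\,\ref{nearestneighbor}.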
\subsection{Observations}\label{s:obs}
\subsubsection{\emph{HST} Observations and Photometry}\label{s:obsphot}
We acquired \emph{HST} images of the seven MYSOs in our sample in five different filters using the Wide Field Camera 3 (WFC3). The observations were performed during Cycle 20 for project GO-12941 (PI: I. Stephens).
Two broad-band filters (F555W and F814W) and one narrow-band filter (F656N) were used with the WFC3/UVIS imager, and the broad-band filters F110W and F160W were used with the WFC3/IR imager. Filters F555W, F814W, F110W, and F160W roughly correspond to
standard $V$, $I$, $J$, and $H$ bands, respectively, while the F656N filter corresponds to H$\alpha$. We simultaneously used ACS/WFC for parallel observations in the filters F555W, F658N, and F814W ($\sim$\,$V$, H$\alpha$, and $I$). The angular resolution of these observations can be calculated as $R=\sqrt{R^{\prime2}+p^{\prime2}}$, where $p^\prime$ is the pixel size in arcseconds and $R^\prime = 0\farcs21\,\lambda/D$, with the wavelength $\lambda$ in $\mu$m and the telescope diameter $D$ in meters (2.4\,m). The pixel sizes are 0\farcs04 and 0\farcs13 for WFC3/UVIS and WFC3/IR, respectively, and 0\farcs05 for ACS/WFC. The corresponding resolutions are given in Table\,\ref{tab:obs}, where the \emph{HST} observations are summarized. The integration times for 045403\ are lower than for the rest of the MYSOs due to observing restrictions enforced for \emph{HST} Cycle 20 observations.
The fields-of-view of WFC3/UVIS and WFC3/IR are 162\arcsec\,$\times$\,162\arcsec\ and 123\arcsec\,$\times$\,136\arcsec, respectively, and that of ACS/WFC is 202\arcsec\,$\times$\,202\arcsec.
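The resolution formula quoted above can be evaluated directly; a short check (Python) using the effective wavelengths and pixel sizes given in the text:

```python
import math

def hst_resolution(lam_um, pix_arcsec, d_m=2.4):
    """R = sqrt(R'^2 + p'^2), with R' = 0.21 * lambda / D in arcsec,
    lambda in microns, and the telescope diameter D in meters."""
    r_prime = 0.21 * lam_um / d_m
    return math.hypot(r_prime, pix_arcsec)

# (camera/filter, effective wavelength [um], pixel size [arcsec])
filters = [
    ("WFC3/UVIS F555W", 0.5308, 0.04),
    ("WFC3/UVIS F656N", 0.6561, 0.04),
    ("WFC3/UVIS F814W", 0.8024, 0.04),
    ("WFC3/IR   F110W", 1.1534, 0.13),
    ("WFC3/IR   F160W", 1.5369, 0.13),
    ("ACS/WFC   F555W", 0.5361, 0.05),
]
for name, lam, pix in filters:
    print(f"{name}: {hst_resolution(lam, pix):.2f} arcsec")
```

The results reproduce the resolution row of Table\,\ref{tab:obs} (e.g., 0.06\arcsec\ for WFC3/UVIS F555W and 0.19\arcsec\ for WFC3/IR F160W).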
\begin{deluxetable*}{@{}c@{}c@{}cc@{}c@{}c@{}c@{}c@{}c@{}c@{}}
\tablewidth{0pc}
\tablecaption{Summary of the \emph{HST} observations \label{tab:obs} }
\tablehead{
\colhead{} &
\colhead{WFC3/UVIS} &
\colhead{WFC3/UVIS} &
\colhead{WFC3/UVIS} &
\colhead{WFC3/IR} &
\colhead{WFC3/IR} &
\colhead{ACS/WFC\tablenotemark{a}} &
\colhead{ACS/WFC\tablenotemark{a}} &
\colhead{ACS/WFC\tablenotemark{a}} & \\
\colhead{} &
\colhead{F555W ($V$)}&
\colhead{F656N (H$\alpha$)\tablenotemark{b}}&
\colhead{F814W ($I$)} &
\colhead{F110W ($J$)} &
\colhead{F160W ($H$)} &
\colhead{F555W ($V$)}\ &
\colhead{F658N (H$\alpha$)} &
\colhead{F814W ($I$)}
}
\startdata
Effective $\lambda$ (nm) & 530.8 & 656.1 & 802.4 & 1153.4 & 1536.9 & 536.1 & 658.4 & 805.7 & \\
Resolution (\arcsec) & 0.06 & 0.07 & 0.08 & 0.16 & 0.19 & 0.07 & 0.08 & 0.09 & \\
\hline
\multicolumn{9}{c}{Exposures A (s)}\\
\hline
& $3\times400$ & $1\times400$ & $2\times1000$ & $3\times299$ & $3\times499$ & $1\times858$ & $3\times750$ & $1\times1692$& \\
& & $1\times440$ & $1\times693$ & & $1\times599$ & $1\times980$ & & $1\times2091$ & \\
& & $1\times692$ & & & & & & $1\times460$& \\
Total time (s) & 1200 & 1532 & 2693 & 898 & 2097 & 1838 & 2250 & 4243 & \\
\hline
\multicolumn{9}{c}{Exposures B (s)}\\
\hline
& $3\times400$ & $2\times400$ & $1\times1000$ & $3\times299$ & $2\times499$ & $1\times280$ & $2\times750$ & $1\times1690$ & \\
& & $1\times429$& $1\times600$& & $1\times599$ & $1\times820$ & $1\times833$ & $1\times1353$ & \\
& & & $1\times404$ & & & & & & \\
Total time (s) & 1200 & 1229 & 2004 & 898 & 1598 & 1100 & 2333 & 3043
\enddata
\tablecomments{
``Exposures A'' describe the exposure times applied for all sources except for 045403. ``Exposures B'' describe the exposure times applied for source 045403.
}
\tablenotetext{a}{Observations taken parallel to WFC3 and thus are not focused on our targets.}
\tablenotetext{b}{We included a short 10-second exposure with the F656N filter to provide measurements for the saturated sources.}
\end{deluxetable*}
The observation strategy is primarily focused on deep imaging with two filters, F814W and F160W, in order to create color-magnitude diagrams (CMDs) that are more sensitive to low-mass protostars. Deep observations were chosen for F814W over F555W because the F555W filter is usually more contaminated by diffuse nebular emission. F160W was chosen over F110W in order to detect the more embedded protostars. Our observations also include the F555W and F110W filters because they provide accurate identification of less embedded stellar populations. These filters also allow us to search for possible areas of extinction through color-color diagrams.
Moreover, at low extinction \emph{HST} WFC3 can reach significantly deeper magnitudes in F110W than in F160W. F656N was included in order to determine the locations of classical and compact \ion{H}{2} regions, Herbig Ae/Be stars, classical T\,Tauri stars, and bow shocks that can indicate possible runaway stars. The H$\alpha$ observations are also used to estimate the spectral types of the MYSOs (see Section\,\ref{s:spclass}). Since in Cycle 20 WFC3 narrow-band observations suffer from Charge Transfer Efficiency (CTE) loss, a post-flash was incorporated for F656N (using the parameter \texttt{Flash=12}). The WFC3 3-color images (using F160W, F555W, and F814W) for each MYSO are shown in Figures \ref{045403_2panel.png}\,--\,\ref{053431_2panel.png}. In these images it is immediately evident that \emph{none of these MYSOs are forming in complete isolation}. We investigate the clustering properties of stars in the vicinity of these MYSOs in Sections \ref{identPMS} and \ref{s:clusanl}.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.5]{fig04.pdf}
\caption{Three-color image of MYSO 045403.6--671618.5. The colors in red, green, and blue are F160W, F814W, and F555W ($\sim$1.5, 0.80, and 0.53\,$\mu$m), respectively. Both panels are centered on the brightest photometric source of the high-mass star forming region.The top panel shows a large field of view, with a white box indicating the zoom-in shown in the bottom panel. Each color is on an arcsinh scale and colors were adjusted in each panel to best show the stellar content. For this image, the large green/blue spike toward the south
of the YSO is due to a bright foreground star located to the east, outside the field of view. \label{045403_2panel.png}
}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.5]{fig05.pdf}
\caption{Three-color image of MYSO 050941.9--712742.1. Image description as in Figure\,\ref{045403_2panel.png}. \label{050941_2panel.png}}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.5]{fig06.pdf}
\caption{Three-color image of MYSO 051906.7--682137.4. Image description as in Figure\,\ref{045403_2panel.png}. \label{051906_2panel.png}
}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.5]{fig07.pdf}
\caption{Three-color image of MYSO 052124.9--660412.9. Image description as in Figure\,\ref{045403_2panel.png}. \label{052124_2panel.png}
}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.5]{fig08.pdf}
\caption{Three-color image of MYSO 053244.3--693005.7. Image description as in Figure\,\ref{045403_2panel.png}. \label{053244_2panel.png}
}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.5]{fig09.pdf}
\caption{Three-color image of MYSO 053342.2--684602.8. Image description as in Figure\,\ref{045403_2panel.png}. This image is centered on the star-forming region rather than the brightest source. The brightest embedded source is located slightly northwest of the image center. \label{053342_2panel.png}
}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.5]{fig10.pdf}
\caption{Three-color image of MYSO 053431.5--683513.9. Image description as in Figure\,\ref{045403_2panel.png}. \label{053431_2panel.png}
}
\end{center}
\end{figure}
Photometry was performed with the {\sc dolphot} package\footnote{{\sc dolphot} is available online
at \href{http://purcell.as.arizona.edu/dolphot}{http://purcell.as.arizona.edu/dolphot}} \citep{Dolphin2000}.
This package performs PSF fitting tailored to \emph{HST} cameras. The images were first prepared
with the {\sc dolphot} routines {\tt acsmask} and {\tt splitgroups}, which respectively apply the image defect mask
and split the original \emph{HST} \_FLT FITS files into a single FITS file per chip. The main {\sc dolphot} routine was then
used to make photometric measurements on the pre-processed images relative
to the coordinate system of the drizzled F555W image, which was used as a reference. The output
photometry from {\sc dolphot} is on the calibrated VEGAmag scale based
on the zeropoints provided on the WFC3 page.\footnote{\href{http://www.stsci.edu/hst/wfc3}{http://www.stsci.edu/hst/wfc3}} The VEGAmag zeropoints for F555W, F656N, F814W, F110W, and F160W are
25.8160, 19.8215, 24.6803, 26.0628, and 24.6949 mag, respectively. The $HST$ magnitudes of the MYSOs are indicated in Table\,\ref{t:bright}.
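For reference, the VEGAmag scale relates a measured count rate to a magnitude through the tabulated zeropoint; a minimal illustration (Python; this assumes the zeropoints are defined for a count rate of 1~e$^-$/s, as is standard for WFC3, and the count rate in the example is arbitrary):

```python
import math

# VEGAmag zeropoints (mag) quoted above for the WFC3 filters used here.
ZEROPOINTS = {"F555W": 25.8160, "F656N": 19.8215, "F814W": 24.6803,
              "F110W": 26.0628, "F160W": 24.6949}

def vegamag(count_rate, filt):
    """m = ZP - 2.5 log10(count rate), count rate in e-/s."""
    return ZEROPOINTS[filt] - 2.5 * math.log10(count_rate)

# Example: an arbitrary source with 1000 e-/s in F814W.
print(f"{vegamag(1000.0, 'F814W'):.3f} mag")
```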
For the photometric analysis presented in this paper, only the sources with the best photometric quality parameters were kept. Specifically, sources in the original {\sc dolphot} output file must meet the following criteria: {\tt Object Type}\,=\,1 (i.e., a PSF consistent with stellar, non-extended objects), {\tt signal-to-noise}$\,>$\,5, {\tt sharp}$^2<$\,0.3, {\tt crowd}\,$<$\,2, and {\tt round}$^2 <$\,1. The final stellar photometric catalog is referred to as our {\em clean photometric sample}.
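Since the {\sc dolphot} output is a plain whitespace-separated table, these cuts reduce to simple row filtering; a sketch (Python; the column mapping is a placeholder supplied by the caller, since the actual column layout depends on the {\sc dolphot} run):

```python
def passes_quality_cuts(obj_type, snr, sharp, crowd, roundness):
    """Quality criteria used to build the clean photometric sample."""
    return (obj_type == 1          # PSF consistent with a point source
            and snr > 5.0
            and sharp ** 2 < 0.3
            and crowd < 2.0
            and roundness ** 2 < 1.0)

def clean_sample(dolphot_lines, cols):
    """Filter raw dolphot output lines.

    `cols` maps parameter name -> zero-based column index; the indices
    depend on the dolphot run and are deliberately not hard-coded here.
    """
    kept = []
    for line in dolphot_lines:
        f = line.split()
        if passes_quality_cuts(int(f[cols["type"]]),
                               float(f[cols["snr"]]),
                               float(f[cols["sharp"]]),
                               float(f[cols["crowd"]]),
                               float(f[cols["round"]])):
            kept.append(line)
    return kept
```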
\begin{deluxetable*}{cccccccc}
\tablecaption{$HST$ Magnitudes of MYSOs \label{t:bright}}
\tablewidth{0pc}
\tablehead{
\colhead{MYSO} & \colhead{$\alpha$ (2000)} & \colhead{$\delta$ (2000)} & \colhead{F555W} & \colhead{F814W} & \colhead{F110W} & \colhead{F160W} & \colhead{$A_V$\tablenotemark{a}} \\
\colhead{} & \colhead{(h:m:s)} & \colhead{(d:m:s)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)}
}
\startdata
045403 & 04:54:03.61 & --67:16:18.4 & & & 15.562 & 14.456 & 0.58 \\
050941 & 05:09:42.03 & --71:27:41.6 & & & 15.923 & 15.653 & 0.20 \\
051906 & 05:19:06.75 & --68:21:36.3 & & & 16.076 & 15.622 & 0.29 \\
052124 & 05:21:24.93 & --66:04:12.7 & 22.562 & 20.072 & 18.175 & 16.862 & 0.69 \\
053244 & 05:32:44.42 & --69:30:05.5 & & & 17.149 & 16.815 & 0.23 \\
053342 & 05:33:41.90 & --68:45:57.2 & 19.093 & 18.359 & 17.816 & 17.456 & 0.24 \\
053431 & 05:34:31.47 & --68:35:14.2 & 19.717 & 18.318 & 16.868 & 15.205 & 0.85
\enddata
\tablecomments{MYSO is defined as the central source with the brightest F160W magnitude. The right ascension and declination in the table indicate the photometric location of the MYSO. MYSOs without F555W or F814W magnitudes did not have valid photometric fits. Uncertainties in the photometric magnitudes were typically $\sim$0.001 mag.}
\tablenotetext{a}{$A_V$ was calculated based on the Padova isochrones using F110W and F160W magnitudes and assuming a zero-age main sequence star.}
\end{deluxetable*}
\subsubsection{Mopra Observations}
After our \emph{HST} observations, Mopra spectra of our sources became available. The MAGMA survey \citep{Wong2011} is a CO(1--0) survey of the LMC that targeted only the locations with NANTEN CO(1--0) emission \citep{Fukui2008}. All of the isolated MYSO candidates except 050941\ were covered by the NANTEN survey (Figure\,\ref{LMC_YSOs}), but none were covered by the MAGMA survey. MAGMA mapped the LMC GMCs at a higher resolution of $\sim$\,11\,pc (NANTEN had a resolution of $\sim$\,40\,pc), but it recovers just $\sim$80\% of the total emission detected by NANTEN. Of the 248 MYSOs identified in GC09, 76 MYSOs were not covered by the MAGMA survey, among which are our seven selected sources. In 2011 June and July, members of our team (PI: T. Wong) performed follow-up single-pointing Mopra observations of all 76 of these sources, integrating for 10 minutes on each source. The sensitivity is approximately a factor of 2 better than that of the MAGMA survey. Our seven selected MYSOs were included in these runs, and we present their Mopra spectra in Figure\,\ref{COspec}.
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.45]{fig11.pdf}
\caption{Mopra CO(1--0) spectra centered on the MYSOs, based on the coordinates given in GC09. Velocities are given in the radio kinematic local standard of rest (LSRK) frame.
Spectra were smoothed with a Gaussian kernel with a width of four channels (0.36~km$\,$s$^{-1}$) after baseline removal. \label{COspec}
}
\end{center}
\end{figure}
\begin{deluxetable*}{@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c}
\tabletypesize{\scriptsize}
\tablewidth{0pc}
\tablecaption{Closest Objects to Isolated Sources in Parsecs \label{tab:isolation} }
\setlength{\tabcolsep}{1.5mm}
\tablehead{
\colhead{Isolated} &
\colhead{GMC\tablenotemark{a}} &
\colhead{Star in OB} &
\colhead{{\em Spitzer}} &
\colhead{{\em Spitzer} Intermediate-} &
\colhead{{\em Herschel}\tablenotemark{d}} &
\colhead{{\em Herschel}\tablenotemark{d}} &
\colhead{Cluster or }&
\colhead{Known} \\
\colhead{MYSO} &
\colhead{} &
\colhead{Association\tablenotemark{b}}&
\colhead{MYSO\tablenotemark{c}}&
\colhead{Mass YSO\tablenotemark{c} } &
\colhead{$L_{\rm{FIR}} > 3000~L_\odot$ }&
\colhead{$L_{\rm{FIR}} > 1000~L_\odot$ }&
\colhead{Association\tablenotemark{e}}&
\colhead{SNR\tablenotemark{e}}
}
\startdata
045403 & 140 & 43 & 350 & 12 & 170 & 88 & 60 & 67 \\
050941 & 530 & 230 & 600 & 220 & 510 & 380 & 64 & 1400\\
051906 & 170 & 360 & 260 & 25 & 130 & 26 & 66 & 460 \\
052124 & 320 & 160 & 400 & 7.1 & 320 & 310 & 27 & 330 \\
053244 & 210 & 170 & 240 & 10 & 240 & 10 & 41 & 300 \\
053342 & 83 & 160 & 150 & 9.3 & 130 & 130 & 48 & 430 \\
053431 & 140 & 230 & 180 & 44 & 180 & 180 & 28 & 560
\enddata
\tablecomments{All numbers are projected distances in pc, using 1$\arcsec = 0.25$~pc and rounding to two significant figures. The distances are lower limits since we consider projected distances. YSOs, clusters, and non-OB associations are not included if they are associated with the nearby star-forming region of the MYSO.}
\tablenotetext{a}{CO(1--0) data from MAGMA \citep[][Data Release 2]{Wong2011}, measuring the distance to the center of the nearest pixel in the masked CO integrated intensity map. 050941\ actually lies slightly outside the MAGMA and NANTEN survey region \citep[Figure\,\ref{LMC_YSOs},][]{Fukui2008}, but the $Spitzer$ 8~$\mu$m data do not show the strong emission expected from a GMC.}
\tablenotetext{b}{This is the distance to the closest OB star in the associations from \citet{Lucke1970}. Distances to the center of the OB associations can be much larger.}
\tablenotetext{c}{MYSOs have [8.0] $\leq$ 8 mag and are considered ``definite'' or ``probable'' YSOs in \citet{GC09}. Intermediate YSOs are ``definite'' or ``probable'' YSOs with lower magnitudes. For 045403, a source identified as a galaxy in \citet{GC09} is declared as a YSO here (see Section\,\ref{sec:sone}).}
\tablenotetext{d}{Dust clumps that have a $Herschel$ derived far-infrared luminosity $L_{\rm{FIR}} > 1000~L_\odot$ should have an embedded source. $Herschel$ sources are from \citet{Seale2014}. Many YSOs and clumps have failed graybody fits \citep{Seale2014}, suggesting that some of these distances may be upper limits.}
\tablenotetext{e}{ The stellar clusters, associations, and supernova remnants (SNRs) are from \citet{Bica2008} and distances are measured to the center of these sources. }
\end{deluxetable*}
In these spectra, CO(1--0) is detected toward every MYSO except perhaps MYSO 051906. The spectrum of this source has two possible CO(1--0) peaks between 250 and 300 km$\,$s$^{-1}$, but since we do not have independent velocity information for this MYSO, we cannot confirm whether these are true detections. It should be noted that while there are Mopra detections for almost all considered sources, there are no NANTEN CO(1--0) detections toward these MYSOs, which was one of the reasons for selecting them. The lack of NANTEN detections limits the molecular gas mass of any potentially associated GMCs to $M_{\rm{CO}} \lesssim 2\times 10^4~M_\sun$. A possible exception is source 050941\ because it was not covered by the NANTEN survey (see Figure\,\ref{LMC_YSOs}). However, given the lack of very bright emission from this source in the $Spitzer$ 8\,$\mu$m band \citep{Meixner2006}, with which CO would typically correlate, we deduce that there is not likely a nearby GMC.
\section{Isolation Analysis of each MYSO Based on Existing Data}\label{section3}
We characterize the isolation of the seven MYSOs observed by \emph{HST} by calculating the distances to known astronomical sources. In particular, we used the catalogs of \citet{Bica2008} and \citet{Seale2014}, which are constructed from previously known LMC catalogs. The \citet{Bica2008} catalog contains a list of known emission nebulae, star clusters, associations, and \ion{H}{1} shells and supershells, while \citet{Seale2014} used new \emph{Herschel} observations and existing YSO catalogs to find locations of active star-forming regions. \citet{Seale2014} classified {\em Herschel} sources as YSOs, dust clumps (which may or may not have cores), galaxies, or unclassified sources. \citet{Seale2014} classified sources as YSOs if they are not identified as galaxies or other sources (e.g., supernova remnants), are detected in 3 $Herschel$ bands, and have bright 24\,$\mu$m point-like emission. Dust clumps meet the same criteria, except they are not associated with a 24\,$\mu$m point source. The unclassified sources are all dim and may be very faint YSOs or dust clumps, but they may also be fluctuations in the interstellar medium (ISM). This catalog also provides far-infrared (FIR) luminosities based on {\em Herschel} graybody fits. We also used Aladin \citep{Bonnarel2000} to provide a pictorial view of the environment encompassing each MYSO. MAGMA \citep{Wong2011} was used to find the closest known GMCs, and we confirmed that these GMCs are indeed the closest based on the lower-resolution NANTEN survey \citep{Fukui2008}. The catalog of \citet{Lucke1970} was used to find the locations of the largest OB associations. All distances were measured using the GC09 positions of the MYSOs.
In Table\,\ref{tab:isolation} we summarize the distances (with $1\arcsec \simeq 0.25$\,pc) to the MYSOs. We also summarize each individual MYSO in the Appendix. It should be noted that in the table, and in the paper in general, we refer to \emph{projected} distances; these are thus minimum distances to each of the objects. Indeed, the LMC is not entirely face-on, having an inclination of approximately 35$^\circ$ \citep[e.g.,][]{vanderMarel2001}, which implies that we typically underestimate the distances by up to approximately $(1 - \cos 35^\circ) \approx 18\%$. The summary in the Appendix characterizes the isolation of each of the MYSOs, based primarily on the \citet{Bica2008} and \citet{Seale2014} catalogs. As is shown, all these sources are mostly isolated, with no nearby high-mass star formation.
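The projection correction follows from elementary geometry: a separation vector lying in the inclined disk along the line of steepest inclination is foreshortened by a factor $\cos i$. A quick evaluation (Python; the 80~pc value below is just the isolation radius used as an example):

```python
import math

i_deg = 35.0  # adopted LMC inclination
shortening = 1.0 - math.cos(math.radians(i_deg))
print(f"maximum fractional underestimate: {shortening:.0%}")

# Equivalently, a projected distance d_proj corresponds to a true
# in-disk separation of up to d_proj / cos(i):
d_proj_pc = 80.0
d_true_pc = d_proj_pc / math.cos(math.radians(i_deg))
print(f"{d_proj_pc:.0f} pc projected -> up to {d_true_pc:.0f} pc in the disk")
```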
\begin{figure*}[t!]
\begin{center}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{045403_fs_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{050941_fs_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{051906_fs_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{052124_fs_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{053244_fs_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{053342_fs_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{053431_fs_cmd.pdf}
\end{center}
\caption{F814W, F160W color-magnitude diagrams of the entire clean photometric sample for each of the seven MYSO regions. Typical stellar populations of the nearby LMC field are plotted with blue symbols. The young PMS stellar sources of each region, determined by statistically decontaminating the complete observed CMDs from the field contribution, are plotted in red. They represent the recent star formation events in each region. An indicative reddening vector for $A_{V} = 2$ mag is shown in the CMDs only to demonstrate the effect of extinction \citep{Fitzpatrick1999}; the length of the vector does not correspond to the actual interstellar extinction in the regions. }
\label{f:fs.cmds}
\end{figure*}
\begin{figure*}[t!]
\begin{center}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{045403_s_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{050941_s_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{051906_s_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{052124_s_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{053244_s_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{053342_s_cmd.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{053431_s_cmd.pdf}
\end{center}
\caption{F814W, F160W color-magnitude diagrams of the most probable young PMS stellar populations around the seven MYSOs, as determined by statistically decontaminating the complete CMDs from the field contribution. Isochrones from the Padova family of models are shown, corresponding to ages of 0.5 (red), 1 (orange), 2.5 (yellow-green), 5 and 10 (green), 20 (cyan), 50 (blue), and 100~Myr (violet). While the masses probed by an isochrone depend strongly on its age, a typical isochrone covers a stellar mass range between $\sim$0.3 and 7\,$M_\odot$. Reddening vectors (top-right) represent the minimum correction applied to the evolutionary models in order to fit the observations. This correction corresponds to an $A_{V}$ of 0.50, 0.10, 0.55, 0.35, 0.50, 0.75, and 0.30 mag for 045403, 050941, 051906, 052124, 053244, 053342, and 053431, respectively. These corrections were determined so that the models fit the blue part of the main-sequence and red-giant branch of the {\em complete} CMDs of the regions. They thus represent the {\em minimum} $A_V$ in every stellar sample.}
\label{f:s.cmds}
\end{figure*}
\section{PMS stars in the Observed Regions}\label{identPMS}
\subsection{Identification of the Pre--Main-Sequence Populations}\label{PMSstars}
We investigate the young stellar populations detected with our observations within the WFC3/UVIS field-of-view of the regions around the MYSOs. From the clean photometry (see Section\,\ref{s:obsphot}), we use the CMDs of the stars detected in the filters F814W and F160W, equivalent to standard $I$ and $H$ photometric bands. The choice of these bands is based on the fact that any young currently-forming stellar population will be visible at longer wavelengths. Stellar photometric measurements using F814W are not affected by diffuse hydrogen emission as in the F555W ($\sim$\,$V$) band. In addition, the F160W ($\sim$\,$H$) filter is less sensitive to extinction and thus allows for the selection of more embedded sources than the F110W ($\sim$\,$J$) filter.
In the CMDs of the observed regions around our target MYSOs, shown in Figure\,\ref{f:fs.cmds}, it is seen that the observed fields cover a large variety of stellar populations. The observed stellar samples comprise the evolved populations of the surrounding LMC field, designated by the prominent red giant branch and low--main-sequence features of the CMDs, as well as the young stellar populations of the regions themselves. The majority of the latter are essentially faint objects still in their PMS evolutionary stage, i.e., they have not yet started their lives on the main-sequence. They are located at the red part of the observed CMDs, almost parallel to the low--main-sequence.
In order to identify these PMS stars and distinguish them from the evolved main-sequence stars of the nearby LMC field, we decontaminate the observed CMDs from the contribution of the local LMC field with the application of a statistical field-subtraction technique based on the Monte Carlo method. Specifically, we construct the CMD of the emptiest area in each observed field, which we consider to be the best representative of the local LMC field population (we refer to it as the {\sl field CMD}). We then consider a circular subregion on the CMD around every star in the total CMD, and we subtract from the stars included in this region the corresponding number of randomly selected stars that fall in the same CMD-subregion of the field CMD. Since the area selected for the field CMD is only a portion of the complete observed area, the number of expected field stars in every CMD-subregion was scaled by the ratio of the surface of the total area over that of the field area. We thus construct the `clean' CMD of each observed field, which contains only the most probable PMS stars in the region.
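The statistical field subtraction described above can be sketched as follows. This is a minimal, illustrative realization (the function name, the subregion radius, and the star lists are hypothetical): each area-scaled field star removes one randomly selected counterpart from its circular CMD subregion.

```python
import math
import random

def field_subtract(total_cmd, field_cmd, area_ratio, dr=0.3, seed=0):
    """Statistical field-star subtraction from a CMD (illustrative sketch).

    total_cmd, field_cmd : lists of (color, magnitude) pairs for the full
                           observed CMD and the reference field CMD.
    area_ratio           : surface of the total observed area over that of
                           the field area; scales the expected number of
                           field interlopers per CMD subregion.
    dr                   : radius of the circular CMD subregion.
    Returns the 'clean' list of the most probable young (PMS) stars.
    """
    rng = random.Random(seed)
    remaining = list(total_cmd)
    # Each field star is expected to appear ~area_ratio times in the total CMD.
    draws = [star for star in field_cmd for _ in range(round(area_ratio))]
    rng.shuffle(draws)
    for fc, fm in draws:
        # Total-CMD stars inside the circular subregion around this field star
        in_region = [i for i, (c, m) in enumerate(remaining)
                     if math.hypot(c - fc, m - fm) <= dr]
        if in_region:
            # Remove one randomly selected counterpart
            remaining.pop(rng.choice(in_region))
    return remaining
```

In practice the subregion radius and the treatment of overlapping subregions are tuning choices; the version above simply guarantees that, on average, the expected field contribution is removed from every CMD locus.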
After statistically subtracting the field stars from the CMD of every area, each of the remaining red sources was visually inspected in the F814W and F160W images to ensure that it indeed corresponds to a real stellar source in at least one of the filters. Note that while a source might not be visually confirmed as stellar in one filter, the {\sc dolphot} algorithm may still be able to fit an accurate PSF to it; as discussed in Section \ref{s:obsphot}, all analyzed photometry has a {\tt signal-to-noise$\,>$\,5}. The visual inspection was primarily used to remove sources that confused the {\sc dolphot} algorithm due to diffuse emission and to halos and diffraction spikes emanating from bright stars. Moreover, this visual inspection removed a few sources that were obvious galaxies (e.g., extended sources with some structure) and artifacts in the observations. For 045403, we removed all photometric sources lying within the ``blue" glow of the diffraction spike seen in Figure\,\ref{045403_2panel.png}, since the photometry there was found to be unreliable. In the CMDs of Figure\,\ref{f:fs.cmds}, field stars are plotted in blue and the most probable PMS stellar populations (derived from our field-subtraction technique and visual inspection) are shown in red. A small fraction of the red PMS stellar sample is expected to still be contaminated by main-sequence stars, but only to a small degree; we therefore treat all these sources as true PMS stars. In Section\,\ref{s:clusanl} we investigate the clustering behavior of these stellar populations in the surroundings of our MYSOs.
Typically, the positions of the PMS stars in the CMD do not overlap with those of the main-sequence stars, and therefore it is quite straightforward for our field-subtraction technique to completely eliminate features typical of old populations from the original CMD \citep[e.g.,][]{gouliermisetal11, gouliermisetal12}. However, our field decontamination method is not optimized for regions of high differential extinction, because evolved stars in such regions strongly contaminate the CMD positions of the PMS stars due to reddening. This contamination also `hides' the main-sequence {\sl turn-on}, i.e., the position in the CMD where the PMS stars ignite hydrogen and reach the main-sequence, which thus determines the youngest age of the PMS populations. As seen in the CMDs of Figure\,\ref{f:fs.cmds}, this issue is quite prominent in the case of the observed field around MYSO\,053342, where the main-sequence {\sl turn-off} strongly contaminates the {\sl turn-on} and therefore PMS stars are not easily distinguishable from the old field stars. While this method is not optimized for differential extinction across the field, it is sufficient for identifying young clusters in the field, which we discuss in more detail in the following sections.
\subsection{PMS Color-Magnitude Diagrams}\label{s:pmss}
The CMDs of the PMS stellar sources remaining after field-subtraction and visual inspection in the regions around each of the considered MYSOs are shown in Figure\,\ref{f:s.cmds}. In these CMDs, stellar evolutionary models for various ages are also plotted. These isochrones are taken from the {\sl Padova} grid of models \citep{Bressan2012, Chen2014, Tang2014} and range from 0.5 to 100\,Myr. They are used for guidance on the evolutionary stage of the observed PMS stars in the CMDs. Isochrones younger than $\sim$\,5\,Myr are generally considered not as well-determined as the older ones. These models also cover the PMS evolutionary phase, for which they are qualitatively indistinguishable from the {\sl Pisa} family of PMS models \citep[FRANEC;][]{Tognelli2011} for the LMC metallicity ($Z=0.008$). While these isochrones provide an approximation of the age and age-spread of the PMS stars in the ensembles, they cannot be used at face value due to several physical characteristics of these PMS stars. In particular, a large fraction of these PMS stars are T\,Tauri-type stars, which are often dislocated from their theoretical CMD-positions due to, for example, rotational variability, accretion excess, and unresolved binarity \citep{Gouliermis2012, Jeffries2012, Preibisch2012}. Evolutionary models are also known to be inconsistent with each other \citep{Hillenbrand2008} to such a degree that the choice of the appropriate grid of models practically depends on the specific dataset.
The CMDs of the areas encompassing these YSOs typically have PMS stars with ages younger than $\sim$\,5\,Myr. This age-limit is more prominent for the fainter stars, while the brighter PMS stars and those of the turn-on are shown to also fit ages of up to $\sim$\,10\,Myr. Therefore, these regions host PMS stars at very similar evolutionary stages, with ages younger than 10\,Myr. However, as mentioned above, no precise age can be assigned by a simple fit on the CMD. On the Padova isochrones shown in Figure\,\ref{f:s.cmds}, we applied extinction corrections on the basis of their fit to the blue part of the upper main-sequence (above the turn-off) of the complete CMDs of Figure\,\ref{f:fs.cmds}. The extinction corrections are indicated in the caption of Figure\,\ref{f:s.cmds}, with $A_V$ varying from 0.10 to 0.75\,mag. These measurements are based on the extinction law by \citet{Schlafly2011} for a coefficient $R_V = 2.1$, which was found to fit best the two-color diagrams of the populations. Based on these isochrones, we also give the $A_V$ values for the central MYSOs in Table\,\ref{t:bright}.
Of the seven observed MYSO fields, the region encompassing 053342 is extincted the most, experiencing a strong differential (i.e., spatially-variable) reddening. This is shown in the complete CMD of Figure\,\ref{f:s.cmds}, where it can be seen that our selection of the young PMS populations was not entirely successful, including several evolved highly extincted giants seen in the bright-red part of the CMD. The isochrones plotted on this CMD are corrected for a minimum extinction of $A_V \simeq 0.75$\,mag, determined so that the upper--main-sequence fits the blue part of the total observed CMD. As a consequence, the bright main-sequence stars seen on the right of the models in Figure\,\ref{f:s.cmds} are not poor fits, but highly reddened main-sequence stars, corresponding to a maximum reddening of $A_V \sim 3.25$\,mag. We can assume that the PMS stars in the observed area also suffer from the same differential extinction and therefore their CMD-positions are shifted in a variable manner, in addition to their ``intrinsic'' dislocation due to their characteristics, as discussed above. Consequently, it is quite difficult to separate the complete PMS population in the star-forming region around MYSO\,053342 from its surrounding field population. Nevertheless, for the purpose of this paper we are only interested in clusters surrounding the MYSOs, in the area of which we expect a minimal contamination by MS stars.
It should also be noted that while we cannot evaluate accurately the age of these clusters, for the purposes of estimating their masses (Section\,\ref{s:clusterest}) we assume ages of $\sim$\,1\,Myr and 2.5\,Myr. While the ages are uncertain, based on the isochrones these values represent the best estimate of the ages of the PMS stars within the field.
\subsection{Spectral Types of the MYSOs}\label{s:spclass}
\begin{figure*}[t!]
\begin{center}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.33\textwidth]{045403_ha.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.33\textwidth]{050941_ha.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.33\textwidth]{051906_ha.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.33\textwidth]{052124_ha.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.33\textwidth]{053244_ha.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.33\textwidth]{053342_ha.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.33\textwidth]{053431_ha.pdf}
\end{center}
\caption{$HST$ F656N (H$\alpha$) observations encompassing all 7 MYSOs. The black polygon shows the area in which the flux was summed for determining the ionizing flux (Section \ref{s:spclass}). The images shown here have had their local foreground/background subtracted.}
\label{f:ha}
\end{figure*}
We determined the approximate spectral type of each MYSO using its H$\alpha$ luminosity, which is based on the {\em HST} observations in the F656N band (Figure \ref{f:ha}). These images allow for a better determination of the emission associated with the MYSO than the Blanco 4~m telescope (Figure \ref{Halpha}) since $HST$ has superior pointing accuracy and sensitivity. For each F656N image, we draw a polygon around the emission that we judge to be the H$\alpha$ emission most likely associated with the MYSO. The process of picking such emission can be ambiguous since disentangling ionization from the MYSO, other nearby stars, and the general interstellar radiation field can be very difficult (e.g., MYSO 053244). However, we find that selecting slightly larger or smaller areas than the ones shown in Figure \ref{f:ha} will not change the spectral types determined below.
For each polygon shown in Figure \ref{f:ha}, the positive H$\alpha$ pixels were summed to estimate the total flux. We subtracted the local foreground/background contribution from each of these positive pixels; this local value was determined by selecting an empty area in the F656N map and calculating the mean pixel value within it. For each region, the stellar sources were subtracted from the image in order to avoid stellar continuum contamination. For both 051906\ and 053342, the stellar subtraction was negligible compared to the total H$\alpha$ emission, whereas it was more significant for the other sources.
For these \emph{HST} observations, the signal surrounding each YSO is measured as a count rate in electrons\,s$^{-1}$. This was converted to a flux, $F_{{\rm H}\alpha}$ in erg\,s$^{-1}$\,cm$^{-2}$, by multiplying by the inverse sensitivity given in the WFC3 FITS header, PHOTFLAM ($1.632\times10^{-17}$ erg\,cm$^{-2}$\,\AA$^{-1}$\,electron$^{-1}$), and the root-mean-square bandwidth of the filter plus detector, PHOTBW (41.89\,\AA). The H$\alpha$ luminosity was then calculated via $L_{{\rm H}\alpha} = 4\pi F_{{\rm H}\alpha} D^2$, where $D$ is the distance to the LMC ($\sim$\,50\,kpc).
Spectral types can be determined from the hydrogen ionizing luminosity, $Q_0$. $L_{{\rm H}\alpha}$ and $Q_0$ can be related by
\begin{equation}
L_{{\rm H}\alpha} = V n_p n_e \alpha_{{\rm eff},{\rm H}\alpha} E_{{\rm H}\alpha}
\end{equation}
\begin{equation}
Q_0 = V n_p n_e \alpha_B ,
\end{equation}
where $V$ is the volume of the region, $n_p$ and $n_e$ are the proton and electron densities of the region, $E_{{\rm H}\alpha}$ is the energy of an H$\alpha$ photon, $\alpha_{{\rm eff},{\rm H}\alpha}$ is the H$\alpha$ recombination rate, and $\alpha_B$ is the case B hydrogen recombination rate (i.e., optically thick to ionizing radiation; excludes recombinations into the $n=1$ state). Assuming an electron temperature of $10^4$\,K \citep[e.g.,][]{MH05}, we adopt $\alpha_B = 2.59\times10^{-13}$\,cm$^3$\,s$^{-1}$ and $\alpha_{{\rm eff},{\rm H}\alpha} = 1.17\times10^{-13}$\,cm$^3$\,s$^{-1}$ \citep{Draine1992}. Therefore,
\begin{equation}
Q_0 = L_{{\rm H}\alpha}\times \frac{\alpha_B}{\alpha_{{\rm eff},{\rm H}\alpha}\times E_{{\rm H}\alpha}} \approx 7.31\times10^{11}~\rm{s}^{-1} \left(\frac{L_{{\rm H}\alpha}}{\rm{erg\,s}^{-1}}\right) .
\end{equation}
Note that no extinction correction was made, causing $Q_0$ to be underestimated.
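The full conversion chain (count rate $\rightarrow$ flux $\rightarrow$ luminosity $\rightarrow$ $Q_0$) can be checked numerically. A minimal sketch, using only the header values, recombination coefficients, and distance quoted above (the helper function name is illustrative):

```python
import math

# Physical constants (cgs)
H_PLANCK = 6.626e-27        # erg s
C_LIGHT = 2.998e10          # cm s^-1
PC_CM = 3.086e18            # cm per parsec

# WFC3/F656N header values and adopted distance quoted in the text
PHOTFLAM = 1.632e-17        # erg cm^-2 A^-1 electron^-1
PHOTBW = 41.89              # A
D_LMC = 50e3 * PC_CM        # ~50 kpc in cm

# Recombination coefficients for T_e = 1e4 K (Draine 1992)
ALPHA_B = 2.59e-13          # cm^3 s^-1, case B
ALPHA_HA = 1.17e-13         # cm^3 s^-1, effective H-alpha
E_HA = H_PLANCK * C_LIGHT / 6563e-8   # energy of an H-alpha photon (erg)

def log_q0(count_rate):
    """log10 of the ionizing photon rate Q0 from an F656N count rate (electrons/s)."""
    flux = count_rate * PHOTFLAM * PHOTBW        # erg s^-1 cm^-2
    lum = 4.0 * math.pi * D_LMC**2 * flux        # erg s^-1
    q0 = lum * ALPHA_B / (ALPHA_HA * E_HA)       # photons s^-1
    return math.log10(q0)
```

Applied to the tabulated count rates (e.g., $1.1\times10^{3}$ electrons\,s$^{-1}$ for 045403), this reproduces the $\log Q_0$ values of Table\,\ref{t:spclass} to within rounding.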
In Table\,\ref{t:spclass} we match $Q_0$ to the approximate spectral type following the observational effective temperature scales of class\,V stars in \citet{Martins2005}. Note that metallicity does not have a major effect on the spectral type of these OB-stars, affecting the classification by no more than half a spectral type \citep{Smith2002}. Each spectral type also has a corresponding mass \citep{Martins2005}, which we provide in the last column of Table\,\ref{t:spclass}. \citet{Martins2005} only estimated stellar parameters for O-stars, and 045403, 052124, 053244, and 053431\ have calculated $Q_0$ values that are notably smaller than that of an O9.5V star (log~$Q_0 = 47.56$). Interpolating $Q_0$ to later spectral types is difficult since the \citet{Martins2005} models are not well fit by a simple functional form. \citet{Hanson1997}, who calculate values of $Q_0$ very similar to those of \citet{Martins2005}, suggest that a B0V star has log~$Q_0 = 47.18$, and thus we adopt this spectral type for these four sources. The \citet{Martins2005} spectral types as a function of mass are well fit by an exponential function, and we interpolate a B0V star to have a mass of 14\,$M_\odot$.
\begin{deluxetable}{ccccc}
\tablecaption{Determination of the Spectral Type of the MYSOs.\label{t:spclass}}
\tablewidth{0pc}
\tablehead{
\colhead{MYSO} & \colhead{Flux} & \colhead{log($Q_0$)} & \colhead{Spectral} & \colhead{Mass} \\
\colhead{} & \colhead{(10$^3$ Electrons\,s$^{-1}$)} & \colhead{(Photons s$^{-1}$)} & \colhead{Type} & \colhead{($M_\odot$)}
}
\startdata
045403 & 1.1 & 47.2 & B0V & 14 \\
050941 & 4.7 & 47.8 & O9.5V & 16 \\
051906 & 16 & 48.4 & O8V & 21 \\
052124 & 1.1 & 47.2 & B0V & 14 \\
053244 & 1.6 & 47.4 & B0V & 14 \\
053342 & 57 & 48.9 & O6V & 31 \\
053431 & 1.0 & 47.2 & B0V & 14
\enddata
\end{deluxetable}
For this method of calculating spectral types, the ionizing flux is underestimated since the H$\alpha$ emission is not corrected for extinction and since we over-subtract the stellar sources. On the other hand, our calculation assumes a single main ionizing source, i.e., we do not account for multiplicity, even though high-mass stars are frequently found in binary or multiple systems. Since the relationship between ionizing luminosity and spectral type is far from linear, multiplicity does not cause us to drastically overestimate the mass of the highest-mass YSO. For example, the ionizing luminosity of 2.4 O7.5V stars (each $\sim$25\,M$_\odot$) is the same as that of a single O6V star ($\sim$31\,M$_\odot$); similarly, the ionizing luminosity of 2.4 O9V stars (each $\sim$17\,M$_\odot$) is the same as that of an O8V star of $\sim$21\,M$_\odot$ \citep{Martins2005}. It is unclear whether extinction in the H$\alpha$ band (biasing toward later spectral types) or multiplicity of the YSOs (biasing toward earlier spectral types) has the greater impact on the estimated spectral types.
We also note that we estimated the {\em current} spectral types of the sources; since these MYSOs are embedded, as indicated by their {\em Spitzer} emission, they may still be accreting and could eventually reach earlier (more massive) spectral types \citep{Zinnecker2007}.
\begin{figure*}[t!]
\begin{center}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{045403_map_asec}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{050941_map_asec}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{051906_map_asec}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{052124_map_asec}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{045403.wfc3.pms.cln_kde_contour_map3}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{050941.wfc3.pms.cln_kde_contour_map3}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{051906.wfc3.pms.cln_kde_contour_map3}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{052124.wfc3.pms.cln_kde_contour_map3}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{053244_map_asec}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{053342_map_asec}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{053431_map_asec}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{053244.wfc3.pms.cln_kde_contour_map3}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{053342.wfc3.pms.cln_kde_contour_map3}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{053431.wfc3.pms.cln_kde_contour_map3}.pdf}
\end{center}
\caption{\noindent Distribution of pre--main-sequence stars in regions encompassing the seven MYSOs. All maps
have the same scale and orientation (north is up, east is left). {\em Top Panels:}
Charts of the young stellar populations detected across the observed fields-of-view. The young stars in every region are
plotted with black points. The positions of the target MYSOs are indicated by the large red star symbols.
{\em Bottom Color Scale Panels:}
The surface stellar density maps of the observed fields-of-view constructed with the Kernel Density Estimation from the detected pre--main-sequence population. Each color in the maps shows the same clustering significance above the local pre--main-sequence population.
White contours show the 3$\sigma$ isopleth surrounding each MYSO, where 1$\sigma$ is 1.20, 2.17, 2.63, 0.99, 1.91, 2.70, and 2.06\,PMS-stars\,pc$^{-2}$ for 045403, 050941, 051906, 052124, 053244, 053342, and 053431, respectively.
These maps show that the seven sources are far from isolated, in the sense that
their immediate environments are populated by a large sample of young PMS stars, which are distributed
in a clustered fashion.
}
\label{f:kdemaps}
\end{figure*}
\section{Cluster Analysis}\label{s:clusanl}
The stellar charts of the selected young stellar populations around each of the MYSOs are shown in Figure\,\ref{f:kdemaps} (top panel). These maps show that the considered MYSOs are far from being formed in an isolated environment, as there is a large number of young stars around each of them. Note that, as discussed in Section\,\ref{PMSstars}, the field encompassing 045403\ has many photometric sources removed due to the large diffraction spike in the field, making the cluster appear more isolated than it actually is. Two facts are derived from these maps: 1)~the vast majority of PMS stars are assembled around the MYSOs, apparently in star-forming clusters, and 2)~these clusters are not entirely isolated themselves. Specifically, while in the region of 051906\ there is only one compact stellar over-density around the MYSO, the other regions show clear evidence of young stars in the field surrounding the MYSOs, loosely distributed across almost the entire observed field. This indicates that the stellar clusters around the MYSOs in all regions \emph{except} 051906\ are themselves not isolated either; they probably belong to a larger stellar constellation, related to a larger-scale star formation event and to their parental molecular clouds.
\subsection{Stellar Surface Density Maps}
We investigate the clustering behavior of the PMS stars in the observed regions by first identifying and characterizing the stellar clusters in the regions. To this aim, we build surface stellar density maps from our stellar samples with the application of the {\em Kernel Density Estimation} (KDE) method \citep{Silverman1992}. Density maps are constructed by convolving the stellar catalog with a Gaussian kernel. The main input parameter is the full-width at half-maximum (FWHM) of the kernel, which specifies the minimum size of any stellar clustering that can be identified. There is no concrete method to define the optimal KDE kernel for smoothing the stellar maps, which is thus determined through experimentation. The minimum permitted FWHM, corresponding to the typical PSF size of $\sim$\,2.5\,WFC3 pixels, is about 0.1\arcsec, which at the distance of the LMC corresponds to $\sim$\,0.025~pc. However, this limit is essentially the {\em resolution} of our photometry (depending on waveband), and therefore a KDE map of lower resolution, i.e., built with a larger FWHM, should be used for identifying statistically important stellar over-densities.
A reasonable minimum size for the identified stellar clusterings is $\sim$\,1\,pc, which corresponds to a FWHM of $\sim$\,4\arcsec\ ($\sim$\,100 pixels). Our experiments showed that kernels smaller than this limit produce very noisy maps in which density fluctuations do not allow any concrete identification. On the other hand, kernels larger than 100 pixels begin to over-smooth the data, so that the derived sizes of the detected over-densities are overestimated. Therefore, we use a kernel size of $\sim$\,4\arcsec\ for our cluster analysis. The constructed KDE maps are also shown in Figure\,\ref{f:kdemaps} (bottom panel).
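The KDE map construction amounts to smoothing the stellar positions with a Gaussian kernel of the chosen FWHM, where $\sigma = {\rm FWHM}/(2\sqrt{2\ln 2}) \approx {\rm FWHM}/2.355$. A minimal sketch with plain numpy (the grid size and function name are illustrative choices, not our actual pipeline):

```python
import numpy as np

def kde_map(x, y, fwhm, extent, npix=200):
    """Surface stellar density map via Gaussian kernel density estimation.

    x, y   : star positions (same units as extent, e.g. arcsec)
    fwhm   : kernel full-width at half-maximum (e.g. ~4 arcsec here)
    extent : (xmin, xmax, ymin, ymax) of the field of view
    Returns an (npix, npix) map in stars per unit area.
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ~ fwhm / 2.355
    xg = np.linspace(extent[0], extent[1], npix)
    yg = np.linspace(extent[2], extent[3], npix)
    gx, gy = np.meshgrid(xg, yg)
    dens = np.zeros_like(gx)
    norm = 1.0 / (2.0 * np.pi * sigma**2)   # each star integrates to 1
    for xs, ys in zip(x, y):
        dens += norm * np.exp(-((gx - xs)**2 + (gy - ys)**2) / (2.0 * sigma**2))
    return dens

# Clusters can then be identified as contiguous regions above
# mean(dens) + 3 * std(dens), i.e. the 3-sigma isopleth.
```

Because each kernel is normalized to unity, the map integrates to the total number of stars, and densities can be quoted directly in stars\,pc$^{-2}$ once positions are given in pc.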
\begin{deluxetable*}{ccccccccccccc}
\tablecaption{Characteristics of young stellar clusters detected around the seven MYSOs.\label{t:cluschar}}
\tablehead{
\colhead{MYSO} &
\colhead{Spectral\tablenotemark{a}} &
\colhead{$m_{\rm{max}}$ \tablenotemark{a}} &
\multicolumn{2}{c}{Position (J2000)} &
\colhead{$N_\star$\tablenotemark{b}} &
\colhead{$r_{\rm equiv}$\tablenotemark{c}} &
\colhead{$r_{\rm max}$\tablenotemark{c}} &
\colhead{Elong.\tablenotemark{c}} &
\multicolumn{2}{c}{$M_{\rm{ecl}}$\ ($M_\odot$) Estimated\tablenotemark{d}} &
\multicolumn{2}{c}{$M_{\rm{ecl}}$\ ($M_\odot$) Extrapolated} \\
\colhead{} &
\colhead{Type} &
\colhead{($M_\odot$)} &
\colhead{Right Ascension} &
\colhead{Declination} &
\colhead{} &
\colhead{(pc)} &
\colhead{(pc)} &
\colhead{} &
\colhead{1\,Myr} &
\colhead{2.5\,Myr} &
\colhead{IMF\tablenotemark{e}} &
\colhead{$m_{\rm{max}}$\,--\,$M_{\rm{ecl}}$\,Relation\tablenotemark{f}}
\startdata
045403 & B0V & 14 & 4$^{\rm h}$54$^{\rm m}$03.6$^{\rm s}$& $-$67$^{\circ}$16$^{\prime}$18.3$^{\prime\prime}$ & 86 & 1.9 & 2.2 & 1.16 & 110 & 170 & 210 & 200 \\
050941 & O9.5V & 16 & 5$^{\rm h}$09$^{\rm m}$41.9$^{\rm s}$& $-$71$^{\circ}$27$^{\prime}$41.7$^{\prime\prime}$ & 397 & 2.7 & 3.4 & 1.28 & 250 & 360 & 240 & 280 \\
051906 & O8V & 21 & 5$^{\rm h}$19$^{\rm m}$07.0$^{\rm s}$& $-$68$^{\circ}$21$^{\prime}$35.1$^{\prime\prime}$ & 375 & 2.4 & 2.7 & 1.14 & 350 & 510 & 360 & 490 \\
052124 & B0V & 14 & 5$^{\rm h}$21$^{\rm m}$25.6$^{\rm s}$& $-$66$^{\circ}$04$^{\prime}$12.3$^{\prime\prime}$ & 86 & 2.4 & 4.2 & 1.74 & 90 & 140 & 210 & 200 \\
053244 & B0V & 14 & 5$^{\rm h}$32$^{\rm m}$44.3$^{\rm s}$& $-$69$^{\circ}$30$^{\prime}$05.5$^{\prime\prime}$ & 122 & 1.9 & 2.0 & 1.06 & 170 & 250 & 210 & 200 \\
053342 & O6V & 31 & 5$^{\rm h}$33$^{\rm m}$41.4$^{\rm s}$& $-$68$^{\circ}$46$^{\prime}$02.6$^{\prime\prime}$ & 517 & 3.1 & 4.0 & 1.29 & 400 & 610 & 670 & 1220 \\
053431 & B0V & 14 & 5$^{\rm h}$34$^{\rm m}$31.7$^{\rm s}$& $-$68$^{\circ}$35$^{\prime}$13.6$^{\prime\prime}$ & 222 & 2.7 & 3.4 & 1.29 & 220 & 350 & 210 & 200 \enddata
\tablenotetext{a}{Spectral types and masses are derived using the \emph{HST} H$\alpha$ observations according to the effective temperature scales by \citet{Martins2005}. Spectral type estimates do not take into account multiplicity or extinction. See Section\,\ref{s:spclass} for more information.}
\tablenotetext{b}{Number of stars in the cluster from the complete clean photometric sample.}
\tablenotetext{c}{Two radii are given; the equivalent radius (defined as the radius of a circle with the same area) and the maximum radius of the cluster. Elongation is the ratio of the two radii, $r_{\rm max} / r_{\rm equiv}$.}
\tablenotetext{d}{Approximate embedded cluster mass ($M_{\rm{ecl}}$) assuming isochrone ages of 1~Myr and 2.5\,Myr for all stellar sources within the cluster. Cluster mass is likely underestimated due to differential spatial extinction and high extinction of sources at the envelope scale.}
\tablenotetext{e}{$M_{\rm{ecl}}$\ expected from analytically based on the IMF \citep{Weidner2004} for the observed $m_{\rm{max}}$\ given in column 3.}
\tablenotetext{f}{$M_{\rm{ecl}}$\ expected from the typical $m_{\rm{max}}$ -- $M_{\rm{ecl}}$\ relation \citep{Weidner2013} for the observed $m_{\rm{max}}$\ given in column 3.}
\end{deluxetable*}
\subsection{Stellar Clusters around MYSOs}\label{s:cluscat}
The identification of stellar clusterings in the KDE maps as statistically important over-densities was made for those having a density above a certain threshold in the maps. This threshold is given in $\sigma$ above the local density background, where $\sigma$ is the standard deviation of each map. Regions in the KDE maps that appear at a minimum level of 3$\sigma$ and persist at higher levels are considered as bona fide stellar clusters. The 3$\sigma$ identification threshold for each map is shown with the white isopleth line in the KDE maps of Figure\,\ref{f:kdemaps}. With this method we identified a single important concentration in each region. These concentrations correspond to the compact clusters seen around the MYSOs in both the stellar charts and KDE maps of Figure\,\ref{f:kdemaps}. For 045403, 050941, 052124, and 053244, secondary smaller clusters above the 3$\sigma$ threshold (and a tertiary one for 053244) are also found.
The characteristics of the MYSO clusterings, as defined within the 3$\sigma$ isopleth of the KDE maps, are given in Table\,\ref{t:cluschar}. Cols.\,2 and 3 show the spectral type and maximum mass, $m_{\rm{max}}$\,, for each source (see Section\,\ref{s:spclass}). Coordinates of the clusters' centers, which correspond to their KDE density peaks, are given in Cols.\,4 and 5. Col.\,6 shows the number of stars in the clean photometric sample within the borders of every cluster. The approximate size of each cluster is given by the so-called {\em equivalent radius} \citep[e.g.,][]{RomanZuniga2008} in Col.\,7. This radius is defined as the radius of a circle with the same area, $A_{\rm cl}$, as that covered by the cluster ($r_{\rm equiv} = \sqrt{A_{\rm cl}/\pi}$). We also give in Col.\,8 the radius, $r_{\rm max}$, of the smallest circle that encompasses the entire cluster, equivalent to half the distance between the two farthest PMS stars in the system. These radii imply that all seven clusters are compact. The ratio of these radii, $r_{\rm max} / r_{\rm equiv}$, provides a characterization of the {\em elongation} of each cluster \citep[e.g.,][]{Schmeja2006}, which we provide in Col.\,9. A circular distribution, with axis ratio equal to unity, has an elongation parameter of 1, while an elongated distribution with an axis ratio of 10 has an elongation parameter of $\sim$3 \citep{Schmeja2006}. The measurements of this parameter show that most of the main clusters are slightly elongated. Cols.\,10 and 11 give the estimated cluster mass based on observations, while Cols.\,12 and 13 give predicted masses based on the estimated mass of the MYSO and the studies by \citet{Weidner2004} and \citet{Weidner2013}. These cluster masses are discussed in more detail in the next section.
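The size and shape measures of Cols.\,7\,--\,9 follow directly from the isopleth area and the member positions; a minimal sketch (the function name and example inputs are hypothetical):

```python
import math
from itertools import combinations

def cluster_shape(area_pc2, positions):
    """Equivalent radius, maximum radius, and elongation of a cluster.

    area_pc2  : area enclosed by the 3-sigma isopleth (pc^2)
    positions : (x, y) positions of the member stars (pc)
    """
    r_equiv = math.sqrt(area_pc2 / math.pi)   # circle of equal area
    # r_max: half the distance between the two farthest members
    r_max = 0.5 * max(math.dist(a, b) for a, b in combinations(positions, 2))
    return r_equiv, r_max, r_max / r_equiv    # last value is the elongation

# e.g. r_equiv ~ 1.9 pc and r_max ~ 2.2 pc give elongation ~ 1.16 (cf. 045403)
```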
\begin{figure*}[t!]
\begin{center}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{045403_clustering_cmd_110_160_allstars}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{050941_clustering_cmd_110_160_allstars}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{051906_clustering_cmd_110_160_allstars}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{052124_clustering_cmd_110_160_allstars}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{053244_clustering_cmd_110_160_allstars}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{053342_clustering_cmd_110_160_allstars}.pdf}
\includegraphics[clip=true, trim = 0 0 -0.5cm 0, width=0.247\textwidth]{{053431_clustering_cmd_110_160_allstars}.pdf}
\end{center}
\caption{\noindent F110W, F160W color-magnitude diagrams of all stars (gray circles) in the clean photometric sample within the 3$\sigma$ isopleth of the KDE maps. Black circles show the locations of the brightest sources (F160W magnitudes less than 18\,mag) within 1$\arcsec$ of the central MYSO. Black circles without a corresponding gray circle are sources that did not have a valid photometric fit in the F555W/F814W band and therefore were not included in our clustering analysis. Isochrones from the Padova family of models are shown corresponding to ages of 0.01 (red), 0.1 (yellow), 0.5 and 1.0 (green), 2.5 (blue), and 5.0~Myr (violet).
}
\label{f:clustercmd}
\end{figure*}
\subsection{Estimates of Cluster Masses}\label{s:clusterest}
For the estimation of the total mass of the cluster around each MYSO, we extract all stellar sources from the clean photometry sample within the 3$\sigma$ isopleth of the density maps. The stars within each stellar clustering are shown in Figure\,\ref{f:clustercmd}. We provide two estimates of the observed cluster masses: one using the 1\,Myr isochrone and another using the 2.5\,Myr isochrone (see Section \ref{s:pmss} for justification). The masses of the individual stars are interpolated from the Padova grid of models \citep{Bressan2012, Chen2014, Tang2014} on the basis of their luminosities in the F110W and F160W filters, corrected for the average extinction in the field (see Section\,\ref{PMSstars}). These stellar masses were summed to calculate the {\em detected} mass of the embedded cluster, $M_{\rm{ecl}}$. The total cluster masses estimated from the F110W and F160W filters typically differed by less than 10\%; we adopted the average of these two measurements as the ``detected" cluster mass.
In order to add the undetected mass to the total cluster mass, we determined the stellar mass corresponding to our photometric detection limit. The detection limit for field stars in the F814W filter ($I$-band) is at an apparent magnitude $m_I \approx 27$\,mag (Figure\,\ref{f:fs.cmds}). Using a distance modulus of 18.5\,mag, the absolute magnitude detection limit is $M_I = 8.5$\,mag, which corresponds approximately to a K4 star of 0.7~$M_\odot$ \citep{Cox2004}. We assume that the IMF of the cluster behaves like a \citet{Kroupa2001} IMF, which suggests that the actual cluster mass should be 28\% higher than the detected mass calculated above. We add this missing mass to the detected cluster masses to find the final estimated cluster masses for ages of both 1\,Myr and 2.5\,Myr, which are reported as the ``Estimated $M_{\rm{ecl}}$'' in Cols.\,10 and 11 of Table\,\ref{t:cluschar}. The numbers of stars, radii, and masses of the clusters are comparable to those of typical embedded Galactic clusters \citep{LadaLada2003}.
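A completeness correction of this kind can be sketched by integrating the stellar mass over a \citet{Kroupa2001}-like broken power law. The function below is purely illustrative: the mass limits are assumptions, and the resulting factor depends sensitively on them (it need not reproduce the 28\% adopted here, which follows the specific assumptions of our analysis):

```python
def kroupa_mass_fraction_below(m_lim, m_lo=0.08, m_hi=50.0):
    """Fraction of total stellar mass residing below m_lim (solar masses)
    for a Kroupa (2001)-like IMF: dN/dm ~ m^-1.3 for 0.08-0.5 Msun and
    m^-2.3 above 0.5 Msun. The mass limits are illustrative choices."""
    # (slope a, segment lower edge, segment upper edge, coefficient k)
    segments = [(1.3, m_lo, 0.5, 1.0),
                (2.3, 0.5, m_hi, 0.5)]  # k = 0.5^(2.3-1.3) keeps dN/dm continuous
    def mass_up_to(m):
        total = 0.0
        for a, lo, hi, k in segments:
            if m > lo:
                p = 2.0 - a  # integral of m * k m^-a dm = k m^(2-a)/(2-a)
                total += k * (min(m, hi)**p - lo**p) / p
        return total
    return mass_up_to(min(m_lim, m_hi)) / mass_up_to(m_hi)

# If the detected mass covers stars above the detection limit, the total
# cluster mass is M_detected / (1 - f_below), with f_below given above.
```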
A protostar (later than O-type) of constant mass will have its luminosity decrease as it evolves from the pre--main-sequence to the main sequence. Therefore, using isochrones of different ages can drastically change the estimated mass of the cluster. As seen in Table\,\ref{t:cluschar}, the cluster masses estimated with the 2.5\,Myr isochrones are approximately 50\% higher than those from the 1\,Myr isochrones. In order to better gauge the uncertainty of the tabulated cluster masses, we also fit Padova isochrones for protostellar ages of 0.5 and 5\,Myr. For the 0.5\,Myr isochrones, cluster masses were $\sim$\,25\,--\,30\% lower than those estimated using the 1\,Myr isochrones; for the 5\,Myr isochrones, cluster masses were typically $\sim$\,15\,--\,20\% higher than those estimated using the 2.5\,Myr isochrones. Moreover, extinction, which is both differential within the cluster and local due to protostellar envelopes at small scales, makes the determination of accurate masses for the PMS stars quite difficult, and probably causes our analysis to underestimate the total mass of the clusters. Although the uncertainties are considerable, we consider the masses estimated from the 1\,Myr and 2.5\,Myr isochrones to be the best estimates of the cluster masses.
We take the MYSO mass estimates $m_{\rm{max}}$\ (Section\,\ref{s:spclass}) and extrapolate them to the total mass of each cluster, as expected analytically from the IMF \citep{Weidner2004} and from the typical relation between the mass of the most massive star in a cluster and the cluster mass \citep{Weidner2013}. These estimates of the cluster masses are also given in Table\,\ref{t:cluschar} (Cols.\,12 and 13, respectively). The estimated and extrapolated cluster masses agree well with each other; the cluster masses estimated for both the 1\,Myr and 2.5\,Myr isochrones are almost all within a factor of two of the masses extrapolated using both the IMF and the $m_{\rm{max}}$\,--\,$M_{\rm{ecl}}$\ relation. There are only two exceptions: 1)~the mass estimated with the 1\,Myr isochrones for 052124\ differs by over a factor of two from both extrapolated masses; and 2)~the mass estimated with the 1\,Myr isochrones for 053342\ differs by over a factor of two from the mass extrapolated from the $m_{\rm{max}}$\,--\,$M_{\rm{ecl}}$\ relation. Based on Figure\,\ref{f:s.cmds}, the PMS populations within the clusters encompassing 052124\ and 053342\ tend to lie closer to the 2.5\,Myr isochrone than to the 1\,Myr isochrone. Therefore, the higher masses estimated from the 2.5\,Myr isochrones are probably the more accurate estimates of the cluster masses for these two MYSOs. Given that our estimated cluster masses are similar to both of the extrapolated cluster mass estimates, we cannot declare that any of these seven isolated MYSOs have significantly unique clustering properties.
The investigated regions show differences in their stellar clustering behavior. The region around MYSO 051906\ encompasses one single centrally condensed star cluster with no other apparent stellar concentration around it (Figure\,\ref{f:kdemaps}). Approximately 35\% of all PMS stars observed in the region belong to the cluster, with the remaining stars being uniformly distributed in the field. On the other hand, the regions of the remaining MYSOs clearly show signatures of multiple clustered environments, hosting additional sparse but not uniform stellar distributions. In these regions, the clusters around the MYSOs enclose 15\,--\,25\% of the complete observed PMS sample, with the remainder forming the surrounding stellar distributions.
With the exception of 051906, the distributed populations in these regions can account for stars that may have formed in the same star formation event as the MYSO itself, but in a less clustered fashion. For most of these MYSOs, the PMS stellar distribution in the region is loose and somewhat remote from the cluster. However, that in the region of MYSO\,053342\ appears denser and directly related to the cluster it encompasses. These dispersed distributions, together with the high extinction in the region of MYSO\,053342, clearly imply the existence of molecular clouds (apparently the parental ones), which were not detected in our ancillary ISM data but are revealed through their faint PMS stars in our \emph{HST} data.
\section{Discussion}\label{discussion}
We investigate the environments of seven apparently isolated MYSOs in the LMC in order to characterize -- and eventually parametrize -- the phenomenon of isolated high-mass star formation at its earliest \emph{HST} observable stages. In the study described in the previous sections, the lack of isolation is apparent for all seven of the MYSOs. The unparalleled resolution of {\em HST} allowed
for the direct detection of a plethora of faint PMS stars clustered around the MYSOs, which were undetected from previous low-resolution {\em Spitzer}
and ground-based imaging. This discovery showed that these MYSOs are not isolated, at least not as far as their immediate environments are concerned. The observational contradiction between high- and low-resolution imaging about the isolation of high-mass stellar sources in the Magellanic Clouds is an issue that is already discussed in the literature \citep[e.g.,][]{gouliermis07, carlson11}. Including our dataset, every MYSO resolved with {\em HST} is found to be surrounded by lower-mass red sources.
While the selected targets meet some strict criteria for isolation, such as being at least 80\,pc away from known GMCs or OB associations, this discovery introduces additional constraints to the interpretation of available observations used as evidence for isolated high-mass star formation. If indeed it is the norm for such ``isolated'' high-mass stars to host compact clusters around them, one may ask the obvious question: {\em ``Does the clustering of stars around a high-mass star under formation, with no other high-mass stars in its vicinity, still account for isolated high-mass star formation?''} {\em HST} reveals populous distributions of faint sources (both clustered and dispersed) around our selected
MYSOs, which, based on previous low-resolution (and low-sensitivity) observations, are apparently isolated. {\em Does the existence of such distributions in the vicinity of a forming high-mass star challenge its supposed isolation?}
Observations suggest that roughly 4 ($\pm$\,2) percent of all O-type stars in the field of the Milky Way may have formed in isolation \citep{deWit2005}. This fraction was successfully reproduced by random sampling from a typical stellar IMF and by selecting clusters from a power-law cluster mass function (CMF) of slope $\beta=1.7$. A study by \citet{Parker2007} showed that selecting clusters from a standard CMF with $\beta=2$ \citep[see, e.g.,][]{LadaLada2003} increases the fraction of isolated O-stars (defined in \citealt{Parker2007} as a star with a mass $>$17.5\,$M_\odot$) even further, to about 17 percent. However, if \citet{Parker2007} restrict their definition of an ``isolated'' O-star to those in stellar clusters of mass less than 100\,$M_{\odot}$ that do not contain any other stars $>$10~$M_\odot$, the fraction of apparently isolated O-stars drops dramatically, to between 1 and 5 percent. This result suggests that isolated O-stars {\em ``are low-mass clusters in which massive stars have been able to form''} \citep{Parker2007}.
On the other hand, as pointed out by \citet[][and references therein]{Weidner2013}, the IMF might not necessarily be randomly sampled, as was assumed by \citet{Parker2007}. Instead the IMF could be optimally sampled in accordance to the $m_{\rm{max}}$\,--\,$M_{\rm{ecl}}$\ relation \citep{Kroupa2013}. For this optimal sampling, the IMF is scale-free and the upper mass limit $m_{\rm{max}}$\ on which the IMF is sampled changes based on the cluster mass $M_{\rm{ecl}}$. Our observations provide a basis for testing both optimal and random sampling of the IMF.
By estimating the mass of the clusters around the investigated MYSOs according to the proposed empirical relation between the mass of the most massive star and the cluster mass \citep[e.g.,][optimal sampling]{Weidner2013}, we find masses similar to those estimated from the data (Table\,\ref{t:cluschar}). In the instance of MYSO 053342, the estimated mass is considerably smaller than that predicted by the $m_{\rm{max}}$\,--\,$M_{\rm{ecl}}$\ relation. This difference may simply arise because the uncertainties in $M_{\rm{ecl}}$\ for this MYSO are especially large, owing to extinction and to the method used in determining the area encompassing the cluster. We therefore cannot rule out the $m_{\rm{max}}$\,--\,$M_{\rm{ecl}}$\ relation.
Our analysis, which is focused on MYSOs, i.e., high-mass stars {\em that are embedded and may still be accreting}, provides observational evidence that indeed apparently isolated MYSOs do form within clusters. While the cluster masses are small ($\lesssim$600~$M_\odot$; Table\,\ref{t:cluschar}, Cols.\,10 and 11), they are \emph{all} larger than 100~$M_\odot$.\footnote{Using the 1\,Myr isochrone, the cluster mass for 052124\ is only 90\,$M_\odot$. However, as discussed in Section \ref{s:clusterest}, the 2.5\,Myr isochrone is a better indicator of the actual age of this cluster.} Moreover, the clusters around the O-stars in the sample contain intermediate-mass B-stars. Therefore, none of the seven MYSOs in the sample would qualify for the \citet{Parker2007} definition of an ``isolated O-star'' occurring due to random sampling. We note that, although some of the MYSOs in our sample may not satisfy the \citet{Parker2007} definition of an O-star (i.e., we estimate four MYSOs to have masses of 14~$M_\odot$ while \citealt{Parker2007} require 17.5~$M_\odot$), random sampling of the IMF would suggest that these MYSOs are even more likely to be isolated. Based on our strict criteria for isolation, we selected MYSOs that are strong candidates to become isolated field stars. If these are indeed the most likely candidates to become field stars, then these data do not support the random sampling scenario suggested by studies of in situ formation of field O-stars. This may suggest that 1) our criteria poorly select MYSOs that will become part of the isolated field population, and/or 2) observations of isolated ``evolved'' (i.e., unembedded main-sequence) O-stars \citep[e.g.,][]{deWit2004,deWit2005,Lamb2010} do not properly characterize their initial star-forming environments.
If 1 to 5\% of MYSOs form in isolation, then of the 248 GC09 MYSOs, $\sim$\,2 to 12 MYSOs in the LMC should be isolated.\footnote{The GC09 MYSO sample contains some early B-stars. Random sampling would predict that an even higher fraction of these 248 MYSOs will be isolated.} Our selection criteria certainly eliminate many of the definitely non-isolated MYSOs in the LMC; indeed, half of the GC09 MYSOs have another MYSO within 25~pc (Figure\,\ref{nearestneighbor}). In other words, if our selection criteria for isolation work at least in part, we are not randomly selecting seven MYSOs from the 248 GC09 MYSOs; instead, one could imagine that we are ``randomly'' selecting seven MYSOs from a smaller subset. Let us arbitrarily assume that we are randomly selecting 7 MYSOs from a subset of 50 MYSOs rather than 248. If only 2.48 or 12.4 (corresponding to 1 or 5\% of all GC09 MYSOs) of these 50 sources are isolated, the probability of \emph{not} randomly selecting an isolated MYSO from this subset is 68\% or 12\%, respectively. If we assume that our selection criteria do even better and we are randomly selecting from a subset of 25 MYSOs, these probabilities become 43\% or 0.2\%, respectively. Given these scenarios, it is unlikely that 5\% of all MYSOs in the LMC are isolated; however, it is certainly possible that 1\% of the sources are isolated. In summary, if our selection criteria for MYSOs increase our chances of selecting isolated MYSOs and the IMF is randomly sampled, the LMC likely has significantly fewer than 5\% of its MYSOs forming in isolation.
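The draw probabilities quoted above can be reproduced by treating the selection as a draw without replacement. The following Python sketch is ours (not part of the original analysis); the function name is our own, and the calculation keeps the fractional expected counts of isolated sources. It reproduces the quoted percentages to within rounding.

```python
def prob_none_isolated(pool, isolated, draws=7):
    """Probability that `draws` picks without replacement from `pool` sources,
    of which `isolated` are isolated, avoid every isolated one.  Fractional
    `isolated` values (expected counts) are allowed."""
    p = 1.0
    for i in range(draws):
        p *= (pool - isolated - i) / (pool - i)
    return p

# Subsets of 50 and 25 MYSOs; 2.48 or 12.4 isolated sources
# (1% or 5% of the 248 GC09 MYSOs).  Roughly 68%, 12%, 43%, and 0.3%.
for pool in (50, 25):
    for iso in (2.48, 12.4):
        print(f"pool={pool:2d}, isolated={iso:5.2f}: "
              f"P(no isolated MYSO drawn) = {prob_none_isolated(pool, iso):.1%}")
```

The last value ($\approx$0.3\%) matches the quoted 0.2\% only to within the rounding of the fractional counts.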
Although we cannot rule out random sampling, we also cannot rule out the possibility that absolutely no high-mass stars form in isolation. The search for isolated high-mass star formation based on populations of ``evolved'' O-type stars certainly imposes limitations in identifying clusters around O-stars, because ionization and winds from the high-mass star may have erased any signature of the original gas in the cluster, and clusters are subject to ejections and dissolution \citep[e.g.,][]{Gvaramadze2012}. \citet{Weidner2013} specifically cautioned against interpreting clusters as isolated if they are known to be old ($\gtrsim$4 Myr) or gas-free, since such clusters can lose a considerable number of stars. Moreover, \citet{Pflamm2010} showed that after formation in a cluster, O-stars can be expelled via a binary ejection event coupled with a subsequent sling-shot due to a supernova explosion, making it impossible to trace them back to clusters. Furthermore, observations of the more evolved O-type stars may have been limited by the dynamic range of the telescope, since the brightness of an O-star may outshine the surrounding faint sources. In other words, the field O-stars may not represent their initial environment and may not supply any evidence of how the IMF is sampled.
Another result presented in this study is that all but one MYSO (051906) has an unambiguous detection of \mbox{CO(1--0)} in its immediate vicinity. Assuming that molecular gas in the LMC is reliably traced by CO emission, the reservoir of molecular gas associated with the MYSOs is small ($M_{\rm{CO}} \lesssim 2\times 10^4~M_\sun$), and well below the mass threshold that is usually adopted for GMCs. Our results may therefore indicate that ``isolated'' high-mass star formation can occur in low-mass gas clouds, contrary to the usual assumption that high-mass stars only form in GMCs. Alternatively, previous studies of the YSO population in the LMC have suggested that lower luminosity YSOs may outlive their natal GMCs \citep{Wong2011} and this may also be the case for MYSOs. The latter scenario suggests that GMCs in the LMC can be efficiently disrupted on $\sim$Myr timescales, which is in moderate tension with empirical arguments for GMC lifetimes of 20--30~Myr \citep[e.g.,][]{Kawamura2009}. Alternatively, such observations could suggest multiple epochs of high-mass star formation in a GMC.
With the exception of 051906, the MYSOs in our sample show a prevalence of multiple clusters in their regions, clearly indicating that the high-mass stars are not forming in isolation across extents of $\sim$\,a few tens of pc. On the other hand, in the case of MYSO\,051906, apart from its own surrounding low-mass cluster, there are no additional clusters in its vicinity within a distance $\gtrsim$\,60\,pc, suggesting that this object is an {\em isolated compact cluster}. We chose the most isolated MYSOs in the entire LMC, and only one source has been confirmed as an isolated cluster. Therefore, an isolated compact cluster around an O-star appears to be a rare phenomenon. Searching for in situ isolated high-mass stars may instead be a search for isolated compact clusters that contain an O-star.
\section{Summary and Conclusions}
A galaxy-wide search throughout the entire LMC shows that there are very few MYSOs that are forming outside of GMCs and not near other MYSOs or OB associations, i.e., they form in apparent isolation. Based on an ancillary set of imaging data from both {\em Spitzer} and ground-based telescopes, we constructed from typical star formation indicators a dataset of MYSOs that are considered to be the best candidates for forming in isolation. These sources are confirmed MYSOs with {\em Spitzer} IRS spectroscopy, and they emit enough ionizing photons to produce \ion{H}{2} regions around them, confirmed with H$\alpha$ imaging. They are also more than 80\,pc away from any other MYSO \citep{GC09}, OB association star \citep{Lucke1970}, or GMC \citep{Fukui2008,Wong2011}.
Our \emph{HST} follow-up observations clearly demonstrate that while these MYSOs appear to be in isolated environments, they are actually surrounded by a plethora of PMS stars. Our clustering analysis of these stars shows that all MYSOs are members of compact clusters. Six of the regions have significant sub-structure, with the PMS stars being both sparsely distributed and in the compact clusters. These stellar alignments appear to be the signatures of the parental molecular cloud, which is presently undetected by CO surveys. A seventh analyzed MYSO (051906) was found to be surrounded by a single isolated compact low-mass stellar cluster with no other stellar distribution being associated with it, indicating that the parental cloud of this object did not produce stars in a dispersed fashion. Moreover, 051906\ contains no known clusters within 60~pc \citep{Bica2008}. Such an isolated cluster containing an O-star is a rare occurrence in the context of high-mass star formation.
The observed population of isolated field O-stars that are expected to form in situ \citep[e.g.,][]{deWit2004,Lamb2010,Oey2013,Lamb2015} is often considered to be a phenomenon of random sampling of the IMF, which allows O-stars to form in relative isolation \citep[i.e., in clusters $<$100~$M_\odot$ with no other star $>$10~$M_\odot$;][]{Parker2007}. In other words, in situ O-stars forming in a cluster of mass $<$100~$M_\odot$ is rare but not impossible. However, while the previous confirmations of isolated high-mass star formation among field main-sequence O-type stars (after correcting for runaways) provide evidence of {\em in situ} formation, they do not provide information on the {\em environment} where formation took place; radiation and winds from the high-mass star and dynamical events may have erased the signatures of the parental gas and the clustering around the O-star.
We investigate isolated high-mass star formation at a much earlier stage, i.e., the embedded MYSO stage. Based on our selection criteria, we have selected the best candidates for in situ, isolated high-mass star formation. We find cluster masses about these MYSOs to be larger than 100~$M_\odot$, suggesting that these MYSOs are not as isolated as typical field O-stars. While we cannot entirely rule out random or optimal sampling of the IMF, we suggest that a randomly sampled IMF should find that significantly less than 5\% of LMC MYSOs are isolated.
With the present study we demonstrate that the investigation of the phenomenon referred to as ``isolated high-mass star formation'' requires the investigation of sources at earlier stages of their formation, such as MYSOs, which should still be embedded in their natal environments. Our investigation is the only observational study \citep[apart from that presented by][]{Selier2011} that approaches the issue strictly from this perspective. Based on our findings we argue that panchromatic high-resolution observations in the vicinity of apparently isolated MYSOs (and not main-sequence stars) will allow a better understanding of the conditions and the parameters that set the stage for high-mass stars to form in isolation.
\acknowledgements
I.W.S. and L.W.L. acknowledge NASA grant HST-GO-12941 06-A. D.A.G. acknowledges the German Research Foundation (Deu\-tsche For\-schungs\-ge\-mein\-schaft, DFG) grant GO\,1659/3-2. D.R.W. is supported by NASA through Hubble Fellowship grant HST-HF-51331.01 awarded by the Space Telescope Science Institute. A.H. acknowledges support from the Centre National d'\'Etudes Spatiales (CNES). Based on observations made with the NASA/ESA {\em Hubble Space Telescope}, obtained from the data archive at the Space Telescope Science Institute (STScI). STScI is operated by the Association of Universities for Research in Astronomy, Inc.\ under NASA contract NAS 5-26555. The Mopra radio telescope is part of the Australia Telescope National Facility which is funded by the Commonwealth of Australia for operation as a National Facility managed by CSIRO. The University of New South Wales Digital Filter Bank used for the observations with the Mopra Telescope was provided with support from the Australian Research Council. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France, and APLpy, an open-source plotting package for Python hosted at http://aplpy.github.com.
\section{Introduction}
We consider walks in the two-dimensional square lattice with steps (1,1), (1,-1), (2,2) and (2,-2). We assign a weight $\sqrt{z}$ for a unit distance along the $x$-axis. We constrain them to lie in the region defined by $y \geq 0$ and $y \leq w$. The motivation for considering such walks is the modelling of polymers forced to lie between plates separated by a small distance.
One would then like to calculate various combinatorial quantities. In principle, one hopes to count all possible configurations of the polymer, modelling it as a self-avoiding walk [WSCM, MTW]. Since this is a tough nut to crack, one simplifying approach is to treat the polymer as a \emph{directed walk}.
Studies of this kind have been done in the literature with simpler steps such as Dyck paths ($(1,1)$ and $(1,-1)$) which we review in the next section. See, for example, [DR, BORW]. For further developments on the subject, see [R] and the references therein.
Even though the motivation came from Physics [BORW], it later occurred to us that this is also the number of basketball games (post-1896 and pre-1961, when the three-pointer did not exist) in which the home team never trails and never leads the visitor by more than $w$ points, ending in a tie!
\section{$\mathrm{\acute{E}tude}$ - Soccer Games}
As a warm-up to the study of basketball games, let us consider soccer games with the same condition [BORW]. These are exactly Dyck walks on the square lattice restricted to $0 \leq y \leq w$ starting at the origin and ending on the $x$-axis. As is usual, we assign a weight $\sqrt{z}$ for both steps.
Let $C_w(z)$ be the generating function for such walks, and let $D_w(z)$ be the generating function for an irreducible walk, that is, one which does not touch the $x$-axis in its interior. A general walk is either the null walk or is composed of an irreducible walk followed by a smaller such walk. Thus,
\begin{eqnarray}
C_w = 1 + D_w C_w.
\end{eqnarray}
An irreducible walk starts with the $(1,1)$ step and ends with the $(1,-1)$ step, with an arbitrary walk of width $w-1$ in between.
\begin{eqnarray}
D_w & = & \sqrt{z} C_{w-1} \sqrt{z} \cr
& = & z C_{w-1},
\end{eqnarray}
\noindent
which implies
\begin{eqnarray}
C_w = \frac{1}{1- z C_{w-1}}. \label{Cw}
\end{eqnarray}
This leads to a nice continued fraction expression for $C_w$, which has the distinct aroma of Tchebyshev! Notice that $C_0 = 1$ and thus, $C_1 = 1/(1-z)$. Then
\begin{eqnarray}
C_w = \frac{1}{1-} \underbrace{\frac{z}{1-} \cdots \frac{z}{1-}}_{w-2 \, \mathrm{terms}} \frac{z}{1-z} \; \mathrm{for} \; w \geq 2
\end{eqnarray}
\noindent
Equation (\ref{Cw}) is a patently nonlinear recurrence for the generating function, but it does lead to a \emph{linear} recurrence for the numerator and denominator of $C_w$. This can be seen by setting $C_w = \frac{P_w}{Q_w}$. It is easily seen (do it!) that the linear recurrence relations
\begin{eqnarray}
P_w & = & Q_{w-1} \\
Q_w & = & Q_{w-1} - z Q_{w-2}
\end{eqnarray}
\noindent
with suitable initial conditions give rise to $C_w$. Notice that these are recurrences with \emph{constant coefficients} in $w$ but, of course, not in $z$. This explains the relationship of the denominators with the Tchebyshev polynomials of the first kind, $T_n(z)$, which satisfy a very similar second-order recurrence relation in $n$ with constant coefficients, viz.
\begin{eqnarray}
T_n(z) = 2zT_{n-1}(z) - T_{n-2}(z).
\end{eqnarray}
As an aside, note that if $w=3$, the walks ending at $(2n,0)$ are counted by the odd-indexed Fibonacci numbers $F_{2n-1}$, while for $w=\infty$ one recovers the Catalan numbers [St].
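As a quick numerical check (ours, not part of the text), the continued-fraction recurrence (\ref{Cw}) can be expanded as a truncated power series and compared against a brute-force count of Dyck paths confined to the strip; the function names below are our own.

```python
def C_series(w, nmax):
    """Coefficients of C_w(z) up to z^nmax via C_w = 1/(1 - z*C_{w-1}), C_0 = 1."""
    c = [1] + [0] * nmax                      # C_0 = 1
    for _ in range(w):
        new = [1] + [0] * nmax                # new = 1/(1 - z*c), so
        for k in range(1, nmax + 1):          # new[k] = sum_j c[j]*new[k-1-j]
            new[k] = sum(c[j] * new[k - 1 - j] for j in range(k))
        c = new
    return c

def C_brute(w, nmax):
    """Same coefficients by counting Dyck paths of semilength n with 0 <= y <= w."""
    out = []
    for n in range(nmax + 1):
        cnt = {0: 1}
        for _ in range(2 * n):
            nxt = {}
            for y, v in cnt.items():
                for y2 in (y + 1, y - 1):
                    if 0 <= y2 <= w:
                        nxt[y2] = nxt.get(y2, 0) + v
            cnt = nxt
        out.append(cnt.get(0, 0))
    return out

for w in range(1, 5):
    assert C_series(w, 6) == C_brute(w, 6)
print(C_series(3, 6))   # height <= 3: odd-indexed Fibonacci numbers
print(C_series(20, 6))  # strip too wide to matter: Catalan numbers
```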
\section{{\it The Main Result} - Basketball Games}
\begin{defn} An \emph{$[ij]$ walk} is a walk that starts at the line $y=i$ and ends at the line $y=j$.
\end{defn}
\begin{defn} An \emph{irreducible} $[ij]$ walk is an $[ij]$ walk that touches the minimum of $i$ and $j$ only at the corresponding endpoint.
\end{defn}
We will need various kinds of generating functions in the proof. Let $\gf{ij}{w}(z)$ denote the generating function of the $[ij]$ walk with width $w$. And let $\igf{ij}{w}(z)$ denote the generating function for the corresponding \emph{irreducible} version of the walk. Note that, at the end of the day, we need a recurrence relation for $F_w := \gf{00}{w}$.
\begin{theo} Let $F_w$ be defined as above. Then it satisfies the following recurrence relation.
\begin{eqnarray}
& F_w = 1- z F_w + 2 z F_w F_{w-1} + 2 z^2 F_w F_{w-1} F_{w-2}\cr
& -(z^3+z^4)F_w F_{w-1} F_{w-2} F_{w-3} + z^5 F_w F_{w-1} F_{w-2} F_{w-3} F_{w-4} \label{Fw}
\end{eqnarray}
\end{theo}
To prove this, we first write down a set of equations relating different generating functions and then try to solve for $F_w$. First off, a $[00]$ walk is either the empty walk or it is composed of an irreducible $[00]$ walk followed by a smaller $[00]$ walk.
\begin{eqnarray}
\gf{00}{w} & = & 1 + \gf{00}{w} \igf{00}{w} \label{f00}
\end{eqnarray}
Next, a $[01]$ walk is always uniquely composed of an arbitrary $[00]$ walk followed by an irreducible $[01]$ walk. Similarly, a $[10]$ walk is uniquely composed of an irreducible $[10]$ walk followed by an arbitrary $[00]$ walk.
\begin{eqnarray}
\gf{10}{w} & = & \igf{10}{w} \gf{00}{w} \label{f01}\\
\gf{01}{w} & = & \igf{01}{w} \gf{00}{w} \label{f10}
\end{eqnarray}
A $[11]$ walk either never goes below the first level, in which case it is simply the same as a $[00]$ walk with width $w-1$, or if it does, it is composed of an irreducible $[10]$ walk followed by an arbitrary $[01]$ walk.
\begin{eqnarray}
\gf{11}{w} & = & \gf{00}{w-1} + \igf{10}{w} \gf{01}{w} \label{f11}
\end{eqnarray}
Now, we go on to describe the irreducible walks. Since we have a finite width, we will describe them in terms of generating functions for lower widths. In each case, we have to consider different cases for the starting step and the ending step. First, an irreducible $[00]$ walk can begin with either the $(1,1)$ or $(2,2)$ step and end with either the $(1,-1)$ or $(2,-2)$ step. If the walk starts with $(1,1)$ and ends with $(1,-1)$, then there could be an arbitrary $[00]$ walk with width $w-1$ in between. If the walk starts with $(1,1)$ and ends with $(2,-2)$, there has to be an arbitrary $[01]$ walk with width $w-1$ in between. If the walk starts with $(2,2)$ and ends with $(1,-1)$, there has to be an arbitrary $[10]$ walk with width $w-1$ in between. And finally, if the walk starts with $(2,2)$ and ends with $(2,-2)$, there is a $[11]$ walk with width $w-1$ in between.
\begin{eqnarray}
\igf{00}{w} & = & z \gf{00}{w-1} + z^{3/2} \gf{01}{w-1} + z^{3/2} \gf{10}{w-1} + z^2 \gf{11}{w-1} \label{g00}
\end{eqnarray}
For an irreducible $[01]$ walk, we just need to consider the starting steps. If it starts with $(1,1)$, the remainder is an arbitrary $[00]$ walk with width $w-1$. If it starts with $(2,2)$, the remainder is again an arbitrary $[10]$ walk with width $w-1$. A very similar argument on the ending step yields the equation for an irreducible $[10]$ walk.
\begin{eqnarray}
\igf{01}{w} & = & z^{1/2} \gf{00}{w-1} + z \gf{10}{w-1} \label{g01}\\
\igf{10}{w} & = & z^{1/2} \gf{00}{w-1} + z \gf{01}{w-1} \label{g10}
\end{eqnarray}
First we eliminate the irreducible generating functions using equations (\ref{g00}), (\ref{g01}) and (\ref{g10}). Then equations (\ref{f00}), (\ref{f01}), (\ref{f10}) and (\ref{f11}) become
\begin{eqnarray}
\gf{00}{w} & = & 1 + \gf{00}{w} (z \gf{00}{w-1} + z^{3/2} \gf{01}{w-1} + z^{3/2} \gf{10}{w-1} + z^2 \gf{11}{w-1}) \label{2f00}\\
\gf{01}{w} & = & \gf{00}{w}(z^{1/2} \gf{00}{w-1} + z \gf{01}{w-1}) \label{2f01} \\
\gf{10}{w} & = & \gf{00}{w}(z^{1/2} \gf{00}{w-1} + z \gf{10}{w-1}) \label{2f10} \\
\gf{11}{w} & = & \gf{00}{w-1} + \gf{01}{w} (z^{1/2} \gf{00}{w-1} + z \gf{01}{w-1}) \label{2f11}.
\end{eqnarray}
We clean up our notation now. Let $F_w := \gf{00}{w},G_w := \gf{01}{w},H_w := \gf{10}{w},J_w := \gf{11}{w}$. Then
\begin{eqnarray}
F_w & = & 1 + F_w (z F_{w-1} + z^{3/2} G_{w-1} + z^{3/2} H_{w-1} + z^2 J_{w-1}) \label{F} \\
G_w & = & F_w (z^{1/2} F_{w-1} + z H_{w-1}) \label{G} \\
H_w & = & F_w (z^{1/2} F_{w-1} + z G_{w-1}) \label{H} \\
J_w & = & F_{w-1} + G_w (z^{1/2} F_{w-1} + z G_{w-1}) \label{J}.
\end{eqnarray}
Using (\ref{G}) and (\ref{H}),
\begin{eqnarray}
G_w - H_w = z F_w (H_{w-1} - G_{w-1}). \label{GH}
\end{eqnarray}
But notice that $G_0 = H_0 = 0$ by definition. Therefore, inductively, $G_w = H_w$. Thus,
\begin{eqnarray}
G_w & = & F_w (z^{1/2} F_{w-1} + z G_{w-1}) \label{G2}.
\end{eqnarray}
We now use (\ref{J}) and the fact that $G_w = H_w$ (a consequence of (\ref{GH})) to eliminate $J_{w-1}$ and $H_{w-1}$ from (\ref{F}), leaving only $F$'s and $G$'s.
\begin{eqnarray}
F_w & = & 1 + F_w (z F_{w-1} + z^2 F_{w-2} \cr
& & + z^2 G_{w-1} (z^{1/2} F_{w-2} + z G_{w-2}) + 2z^{3/2} G_{w-1}). \label{F2}
\end{eqnarray}
Multiplying (\ref{F2}) by $F_{w-1}$ and substituting (\ref{G2}),
\begin{eqnarray}
F_w F_{w-1} & = & F_{w-1} + z F_w F_{w-1}^2 + z^2 F_w F_{w-1} F_{w-2} + z^2 F_w G_{w-1}^2 \cr
& & + 2 z^{3/2} F_w F_{w-1} G_{w-1} \cr
& = & F_{w-1} +z^2 F_w F_{w-1} F_{w-2} + z F_w F_{w-1} (F_{w-1} + z^{1/2} G_{w-1}) \cr
& & + z^{3/2} F_w G_{w-1} (F_{w-1} + z^{1/2} G_{w-1}) \cr
& = & F_{w-1} +z^2 F_w F_{w-1} F_{w-2} + z^{1/2} G_w F_{w-1} + z G_w G_{w-1}. \label{F3}
\end{eqnarray}
Both (\ref{F2}) and (\ref{F3}) (the latter after replacing $w$ by $w-1$) contain the term $G_{w-1} (z^{1/2} F_{w-2} + z G_{w-2})$. From (\ref{F2}),
\begin{eqnarray}
z^2 F_w G_{w-1} (z^{1/2} F_{w-2} + z G_{w-2}) & = & F_w - 1 -z F_w F_{w-1}\cr
&- & z^2 F_w F_{w-2} - 2 z^{3/2} F_w G_{w-1}
\end{eqnarray}
\noindent
and from (\ref{F3}) with $w$ replaced by $w-1$,
\begin{eqnarray}
z^2 F_w G_{w-1} (z^{1/2} F_{w-2} + z G_{w-2}) & = & z^2 F_w (F_{w-1} F_{w-2} \cr
&- & F_{w-2} - z^2 F_{w-1} F_{w-2} F_{w-3}).
\end{eqnarray}
Equating the two,
\begin{eqnarray}
F_w & = & 1 + z F_w F_{w-1} + 2 z^{3/2}F_w G_{w-1} \cr
&+ & z^2 F_w F_{w-1} F_{w-2} - z^4 F_w F_{w-1} F_{w-2} F_{w-3}. \label{F4}
\end{eqnarray}
Substituting the term $z F_w G_{w-1}$ using (\ref{G2}), we get an expression for $G_w$ in terms of $F_w$'s only.
\begin{eqnarray}
2 z^{1/2} G_w = F_w - 1 + z F_w F_{w-1} - z^2 F_w F_{w-1} F_{w-2} + z^4 F_w F_{w-1} F_{w-2} F_{w-3} \label{G3}
\end{eqnarray}
Finally, substituting (\ref{G3}) in (\ref{F4}) gives the desired result (\ref{Fw}) $\blacksquare$
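As a numerical sanity check (ours, not part of the proof), the walks can be counted directly by dynamic programming over the strip, and the recurrence (\ref{Fw}) verified coefficient by coefficient; here for $w=5$, which uses all of $F_1,\dots,F_5$.

```python
def F_series(w, nmax):
    """Coefficients of F_w(z) up to z^nmax: walks with steps (1,1), (1,-1),
    (2,2), (2,-2) from y=0 back to y=0 inside 0 <= y <= w; a walk of total
    x-length 2m carries weight z^m."""
    L = 2 * nmax
    cnt = [[0] * (w + 1) for _ in range(L + 1)]   # cnt[x][y]
    cnt[0][0] = 1
    for x in range(L):
        for y in range(w + 1):
            if cnt[x][y]:
                for dx, dy in ((1, 1), (1, -1), (2, 2), (2, -2)):
                    if x + dx <= L and 0 <= y + dy <= w:
                        cnt[x + dx][y + dy] += cnt[x][y]
    return [cnt[2 * m][0] for m in range(nmax + 1)]

def mul(a, b):
    """Truncated power-series product (same truncation order as the inputs)."""
    n = len(a) - 1
    c = [0] * (n + 1)
    for i, ai in enumerate(a):
        for j in range(n + 1 - i):
            c[i + j] += ai * b[j]
    return c

def shift(a, k):
    """Multiply a truncated series by z^k."""
    return ([0] * k + a)[:len(a)]

nmax, w = 8, 5
F = {v: F_series(v, nmax) for v in range(w - 4, w + 1)}
prods = [F[w]]                       # F_w, F_w*F_{w-1}, ..., down to F_{w-4}
for v in range(w - 1, w - 5, -1):
    prods.append(mul(prods[-1], F[v]))
rhs = [1] + [0] * nmax               # build the right-hand side of (Fw)
rhs = [r - s for r, s in zip(rhs, shift(prods[0], 1))]
rhs = [r + 2 * s for r, s in zip(rhs, shift(prods[1], 1))]
rhs = [r + 2 * s for r, s in zip(rhs, shift(prods[2], 2))]
rhs = [r - s - t for r, s, t in zip(rhs, shift(prods[3], 3), shift(prods[3], 4))]
rhs = [r + s for r, s in zip(rhs, shift(prods[4], 5))]
assert rhs == F[w]
print("recurrence (Fw) verified for w = 5:", F[w])
```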
\begin{theo}
Let $X_w$ be the generating function for the walk with steps \\ $(1,1),(1,-1),(p,2),(p,-2)$ with $p > 0$. Then $X_w$ satisfies a similar recurrence relation
\begin{eqnarray}
& X_w = 1- z^{p/2}X_w + (z+z^{p/2})X_w X_{w-1} + (z^{1+p/2}+z^p) X_w X_{w-1} X_{w-2} \cr
& - (z^{3p/2} + z^{2p})X_w X_{w-1} X_{w-2} X_{w-3} + z^{5p/2}X_w X_{w-1} X_{w-2} X_{w-3} X_{w-4}
\end{eqnarray}
\end{theo}
The proof follows exactly the same set of ideas. To start off, we define the same set of generating functions. Equations (\ref{f00}-\ref{f11}) remain the same and equations (\ref{g00}-\ref{g10}) are slightly modified. Following the steps of the previous proof yields the result $\blacksquare$
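The generalized recurrence can also be checked numerically. The sketch below (ours) works in the variable $u = z^{1/2}$, so that every weight is an integer power of $u$, counts the walks by dynamic programming, and verifies the identity for $p=1,2,3$ at $w=5$.

```python
def X_series(w, p, nmax):
    """Coefficients of X_w in u = z^(1/2), up to u^nmax, for walks with steps
    (1,1), (1,-1), (p,2), (p,-2) confined to 0 <= y <= w (a step of x-length
    dx carries weight u^dx)."""
    cnt = [[0] * (w + 1) for _ in range(nmax + 1)]
    cnt[0][0] = 1
    for x in range(nmax):
        for y in range(w + 1):
            if cnt[x][y]:
                for dx, dy in ((1, 1), (1, -1), (p, 2), (p, -2)):
                    if x + dx <= nmax and 0 <= y + dy <= w:
                        cnt[x + dx][y + dy] += cnt[x][y]
    return [cnt[m][0] for m in range(nmax + 1)]

def mul(a, b):
    n = len(a) - 1
    c = [0] * (n + 1)
    for i, ai in enumerate(a):
        for j in range(n + 1 - i):
            c[i + j] += ai * b[j]
    return c

def shift(a, k):
    return ([0] * k + a)[:len(a)]

def verify(w, p, nmax):
    """Check the generalized recurrence for X_w as truncated series in u."""
    X = {v: X_series(v, p, nmax) for v in range(w - 4, w + 1)}
    prods = [X[w]]
    for v in range(w - 1, w - 5, -1):
        prods.append(mul(prods[-1], X[v]))
    rhs = [1] + [0] * nmax
    for sgn, poly, exps in ((-1, prods[0], (p,)),           # -z^{p/2} X_w
                            (+1, prods[1], (2, p)),         # (z + z^{p/2})
                            (+1, prods[2], (2 + p, 2 * p)),  # (z^{1+p/2} + z^p)
                            (-1, prods[3], (3 * p, 4 * p)),  # -(z^{3p/2} + z^{2p})
                            (+1, prods[4], (5 * p,))):       # z^{5p/2}
        for e in exps:
            rhs = [r + sgn * s for r, s in zip(rhs, shift(poly, e))]
    return rhs == X[w]

for p in (1, 2, 3):
    assert verify(5, p, 16)
print("generalized recurrence verified for p = 1, 2, 3 at w = 5")
```

Note that $p=2$ reduces to the basketball recurrence (\ref{Fw}), so the check subsumes Theorem 1 as well.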
\section{Numerators and Denominators of $F_w$}
Using (\ref{Fw}), we will now derive a linear recurrence relation for the numerators and denominators of $F_w$.
\begin{theo}
Let $P_w$ and $A\hspace{-0.28cm}Z_w$ be defined as follows.
\begin{eqnarray}
& P_0 = 1, & A\hspace{-0.28cm}Z_0 = 1 \cr
& P_1 = 1, & A\hspace{-0.28cm}Z_1 = 1-z \cr
& P_2 = 1-z, & A\hspace{-0.28cm}Z_2 = 1-2z-3z^2 \cr
& P_3 = 1-2z-3z^2, & A\hspace{-0.28cm}Z_3 = 1-3z-5z^2-2z^3+z^4 \cr
& P_4 = 1-3z-5z^2-2z^3+z^4, & A\hspace{-0.28cm}Z_4 = 1-4z-6z^2+2z^3 \nonumber
\end{eqnarray}
For $w \geq 5$, they are defined recursively by
\begin{eqnarray}
P_w & = & A\hspace{-0.28cm}Z_{w-1} \label{Pw}\\
A\hspace{-0.28cm}Z_w & = & (1+z) A\hspace{-0.28cm}Z_{w-1}-2z A\hspace{-0.28cm}Z_{w-2}- 2z^2 A\hspace{-0.28cm}Z_{w-3}\cr
&+& (z^3+z^4) A\hspace{-0.28cm}Z_{w-4}-z^5 A\hspace{-0.28cm}Z_{w-5} \label{Qw}
\end{eqnarray}
Then, $F_w := \frac{P_w}{A\hspace{-0.28cm}Z_w}$ is precisely the generating function for the walk defined earlier satisfying the recurrence relation (\ref{Fw}).
\end{theo}
These denominators are to basketball what Tchebyshev polynomials are to soccer.
For $w \leq 4$, the generating functions are given by
\begin{eqnarray}
F_0 & = & 1 \\
F_1 & = & \frac{1}{1-z} \\
F_2 & = & \frac{1-z}{1-2z-3z^2} \\
F_3 & = & \frac{1-2z-3z^2}{1-3z-5z^2-2z^3+z^4} \\
F_4 & = & \frac{1-3z-5z^2-2z^3+z^4}{1-4z-6z^2+2z^3}
\end{eqnarray}
\noindent
and therefore, the initial conditions give the right generating function. To see that (\ref{Pw},\ref{Qw}) imply (\ref{Fw}), divide (\ref{Qw}) by $A\hspace{-0.28cm}Z_w$. Then,
\begin{eqnarray}
1 & = & (1+z) \frac{A\hspace{-0.28cm}Z_{w-1}}{A\hspace{-0.28cm}Z_w}-2z\frac{A\hspace{-0.28cm}Z_{w-2}}{A\hspace{-0.28cm}Z_w}- 2z^2\frac{A\hspace{-0.28cm}Z_{w-3}}{A\hspace{-0.28cm}Z_w}\cr
&+& (z^3+z^4)\frac{A\hspace{-0.28cm}Z_{w-4}}{A\hspace{-0.28cm}Z_w}-z^5 \frac{A\hspace{-0.28cm}Z_{w-5}}{A\hspace{-0.28cm}Z_w}.
\end{eqnarray}
But now, using (\ref{Pw})
\begin{eqnarray}
\frac{A\hspace{-0.28cm}Z_{w-1}}{A\hspace{-0.28cm}Z_w} & = & F_w, \\
\frac{A\hspace{-0.28cm}Z_{w-2}}{A\hspace{-0.28cm}Z_w} & = & F_{w-1}F_w, \\
\frac{A\hspace{-0.28cm}Z_{w-3}}{A\hspace{-0.28cm}Z_w} & = & F_{w-2}F_{w-1}F_w, \\
\frac{A\hspace{-0.28cm}Z_{w-4}}{A\hspace{-0.28cm}Z_w} & = & F_{w-3}F_{w-2}F_{w-1}F_w, \\
\frac{A\hspace{-0.28cm}Z_{w-5}}{A\hspace{-0.28cm}Z_w} & = & F_{w-4}F_{w-3}F_{w-2}F_{w-1}F_w.
\end{eqnarray}
which implies (\ref{Fw}) $\blacksquare$
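The two theorems can be cross-checked numerically (this sketch is ours, with names of our own choosing): generate $A\hspace{-0.28cm}Z_5$ from the linear recurrence, expand $F_w = A\hspace{-0.28cm}Z_{w-1}/A\hspace{-0.28cm}Z_w$ as a power series, and confirm that the result satisfies the nonlinear recurrence (\ref{Fw}).

```python
def padd(*polys):
    """Sum of polynomials given as coefficient lists."""
    n = max(len(p) for p in polys)
    return [sum(p[i] for p in polys if i < len(p)) for i in range(n)]

def xmul(p, k, c=1):
    """c * z^k * p, as a coefficient list."""
    return [0] * k + [c * v for v in p]

def sdiv(p, q, n):
    """Power series of p/q to order n (requires q[0] == 1)."""
    inv = [1] + [0] * n
    for k in range(1, n + 1):
        inv[k] = -sum(q[j] * inv[k - j] for j in range(1, min(k, len(q) - 1) + 1))
    return [sum(p[j] * inv[k - j] for j in range(min(k, len(p) - 1) + 1))
            for k in range(n + 1)]

def smul(a, b):
    """Truncated series product."""
    n = len(a) - 1
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n + 1)]

# AZ_0 .. AZ_4 from the initial conditions, then extend by the linear recurrence
AZ = [[1], [1, -1], [1, -2, -3], [1, -3, -5, -2, 1], [1, -4, -6, 2]]
for w in range(5, 7):
    AZ.append(padd(AZ[w - 1], xmul(AZ[w - 1], 1),           # (1+z)  AZ_{w-1}
                   xmul(AZ[w - 2], 1, -2),                  # -2z    AZ_{w-2}
                   xmul(AZ[w - 3], 2, -2),                  # -2z^2  AZ_{w-3}
                   xmul(AZ[w - 4], 3), xmul(AZ[w - 4], 4),  # (z^3+z^4) AZ_{w-4}
                   xmul(AZ[w - 5], 5, -1)))                 # -z^5   AZ_{w-5}

n = 10
F = {v: sdiv(AZ[v - 1], AZ[v], n) for v in range(1, 6)}  # F_v = AZ_{v-1}/AZ_v

w = 5                                 # verify the nonlinear recurrence (Fw)
prods = [F[w]]
for v in range(w - 1, w - 5, -1):
    prods.append(smul(prods[-1], F[v]))
rhs = padd([1] + [0] * n, xmul(prods[0], 1, -1)[:n + 1],
           xmul(prods[1], 1, 2)[:n + 1], xmul(prods[2], 2, 2)[:n + 1],
           xmul(prods[3], 3, -1)[:n + 1], xmul(prods[3], 4, -1)[:n + 1],
           xmul(prods[4], 5)[:n + 1])
assert rhs == F[w]
print("AZ_5 =", AZ[5])   # [1, -5, -6, 11, 12, -4]
```

The computed $A\hspace{-0.28cm}Z_5 = 1-5z-6z^2+11z^3+12z^4-4z^5$ then supplies the next numerator $P_6$ for free, by (\ref{Pw}).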
\section{Remarks}
For the sake of completeness, we give references to the number of such basketball games for various values of $w$. For $w=2,\cdots,6$ and $w=\infty$, the sequences of games ending at $n:n$ are in [Sl]. Except for the case of $w=2$, which also arises in some other contexts, all the other sequences are new.
Let us now point out why this recurrence is so special! First of all, notice that all terms in (\ref{Fw}) involve only successive generating functions. It is precisely this property that leads to a linear recurrence relation for the denominators. Let us look at this in a little more detail.
Consider the generating functions $F_w, G_w, H_w, J_w$ defined earlier by equations (\ref{F}-\ref{J}). It will not be shown here, but it turns out that the denominators of all four of them are precisely $A\hspace{-0.28cm}Z_w$. Denote their numerators by $P_w,g_w,h_w,j_w$ respectively. Rewriting (\ref{F}-\ref{J}) gives
\begin{eqnarray}
\frac{P_w}{A\hspace{-0.28cm}Z_w} & = & 1 + \frac{P_w}{A\hspace{-0.28cm}Z_w A\hspace{-0.28cm}Z_{w-1}}(z P_{w-1} + z^{3/2} g_{w-1}+ z^{3/2} h_{w-1} + z^2 j_{w-1}) \\
\frac{g_w}{A\hspace{-0.28cm}Z_w} & = & \frac{P_w}{A\hspace{-0.28cm}Z_w A\hspace{-0.28cm}Z_{w-1}}(z^{1/2} P_{w-1} + z h_{w-1}) \\
\frac{h_w}{A\hspace{-0.28cm}Z_w} & = & \frac{P_w}{A\hspace{-0.28cm}Z_w A\hspace{-0.28cm}Z_{w-1}}(z^{1/2} P_{w-1} + z g_{w-1}) \\
\frac{j_w}{A\hspace{-0.28cm}Z_w} & = & \frac{P_{w-1}}{A\hspace{-0.28cm}Z_{w-1}} + \frac{g_w}{A\hspace{-0.28cm}Z_w A\hspace{-0.28cm}Z_{w-1}}(z^{1/2} P_{w-1} + z g_{w-1})
\end{eqnarray}
But notice that $P_w = A\hspace{-0.28cm}Z_{w-1}$ and therefore, the first three of these equations are linear but the fourth is not! In fact, if the fourth were also linear, there is no way the recurrence for $P_w$ would terminate uniformly in $w$. The nonlinearity of the fourth equation almost miraculously cancels out excess terms that arise in the fifth order recurrence.
\vspace{1cm}
{\bf REFERENCES}
[BORW] R. Brak, A.L. Owczarek, A. Rechnitzer, S.G. Whittington, {\it A directed walk model of a long chain polymer in a slit with attractive walls}, J. Phys. A, {\bf 38}, 2005, 4309-4325.
[DR] E.A. DiMarzio and R.J. Rubin, {\it Adsorption of a Chain Polymer between Two Plates}, J. Chem. Phys., {\bf 55}, 1971, 4318-36.
[MTW] Keith M. Middlemiss, Glenn M. Torrie and Stuart G. Whittington, {\it Excluded volume effects in the stabilization of colloids by polymers}, J. Chem. Phys., {\bf 66}, 1977, 3227-32.
[R] E.J. Janse van Rensburg, {\it The statistical mechanics of interacting walks, polygons, animals and vesicles}, Oxford Lecture Series in Mathematics and its Applications, 18. Oxford University Press, Oxford, 2000.
[Sl] N.J.A. Sloane, Sequences A046717, A127617--A127620, A122951 in the OEIS, \\
\texttt{http://www.research.att.com/$\sim$njas/sequences/Seis.html}
[St] Richard Stanley, Chapter 6 of {\it Enumerative Combinatorics V.2}, Cambridge Studies in Advanced Mathematics, 62. Cambridge University Press, Cambridge, 1999.
[WSCM] Frederick T. Wall, William A. Seitz, John C. Chin and Frederic Mandel, {\it Self-avoiding walks subject to boundary constraints}, J. Chem. Phys., {\bf 67}, 1977, 434-38.
\end{document}
\section{Introduction} As we mark 2012, the 125th anniversary of the Michelson-Morley experiment, we note that the historical process that established the constancy of the speed of light - the fundamental postulate of Special Relativity (SR) - as one of the cornerstones of modern physics is due not to a single experiment, but rather to a series of experimental and theoretical developments beginning in 1864 with the publication of James Clerk Maxwell's equations of electromagnetism [1, 2]. The experimental demonstration of the production and detection of electromagnetic waves by Heinrich Hertz in 1887 [1, 2], the Michelson-Morley experiment in 1887 [3], and the Lorentz transformation by Lorentz in 1904 [4] and Poincare in 1905 [5] led to the establishment of the constancy of the speed of light.\\
\\In 1905, Albert Einstein introduced Special Relativity [6] - a theoretical framework that proved immediately successful in unifying Maxwell's electrodynamics with classical Mechanics. SR maintains that all inertial frames of reference are equivalent and the velocity of light is a universal constant in these inertial frames independent of the velocity of the source or the observer. \\
\\Most of today's fundamental physics theories are based on the constancy of the speed of light. This is one of the foundation blocks upon which modern physics is built. It is well known that the Standard Model of Particle Physics ensures that it must be consistent with SR. Despite the remarkable success of the theories based on the constancy of the speed of light, several modern theoretical approaches have begun to predict variations to the constant light-speed postulate. For example, string theory which seeks to unify today's Standard-Model with general relativity predicts a violation of the constancy of the speed of light [7 - 9].\\
\\ A few years back, in 2007, the MINOS Collaboration [10] at Fermilab in the USA reported evidence for neutrinos moving faster than light. More recently, in 2011, the Oscillation Project with Emulsion-tRacking Apparatus (OPERA) collaboration reported [11] that neutrinos travel faster than the speed of light. A significant number of suggestions, arguments and counter-arguments concerning the highly publicized arXiv preprint of OPERA's claim are growing every day; the preprint has so far produced the highest number of citations within the shortest period of time (100 citations within two weeks). However, the OPERA team has now found two problems - one in its timing gear
and one in an optical fibre connection - that may have affected their tests (such as a report in BBC News on 23 February, 2012). Indeed, there are also other reasons, suggested
by different authors (such as Ref. [12]), which encourage severe scrutiny of the OPERA result. As far as we know, OPERA will perform more tests in the near
future to observe how the timing gear and the faulty connection affect the measured speeds of the neutrinos. Other teams also will, and should, perform the experiment to check
whether these possible errors account for the faster-than-light results. However, if confirmed, this finding would overturn the most fundamental postulate of Einstein's
Special Relativity, that nothing travels faster than 299,792,458 meters per second (the speed of light in vacuum). The meaning of this outcome is simple to understand, but
the consequences would be far reaching. Let us look at the neutrino data of long-baseline neutrino experiments, based on the consideration of frameworks, and what they
indicate.
\section{Frameworks}
In an ideal consideration of a frame transformation, we use rest and moving reference frames in rectilinear relative motion. We note that in a terrestrial experimental investigation the moving inertial frame is replaced by the Earth frame, which is, in fact, not an inertial frame. A point on the equator moves relative to the Earth's centre of mass at about $5\times10^{2} m/s$. As well, the Earth travels at a speed of approximately $3\times10^{4} m/s$ in its orbit around the Sun. Also, the Sun is traveling together with its planets about the galactic centre with a speed of $2.5\times10^{5} m/s$, and there are other motions at higher levels of the structure of the universe. Smoot et al [13] summarize the different velocities of our Solar system (the Earth) relative to the cosmic blackbody radiation, nearby galaxies and the Milky Way galaxy, as well as the motion of the Milky Way galaxy relative to the cosmic blackbody radiation. Therefore, the Earth experiences a significant motion relative to a rest frame, which is $\geq 600 km/s$ [13]. An illustration of the various movements of the Earth, as well as its positions in different seasons (for example on March 7 and September 7 in 2011), is shown in Fig. 1.\\
\\ We derived the time dependent components of the velocity of the laboratory along the direction of the light/neutrino propagation in our reports [14 - 16] assuming the Cosmic Microwave Background (CMB) is the rest frame of the universe. This derivation can help us to understand the shape of the change of velocity of the laboratory relative to the rest frame. As for example, following the propagation direction of neutrinos in the laboratory at CERN, the direction to the line CERN (source) - Gran Sasso (Detector) is roughly parallel to West to East [17], we derive the time dependent component of the velocity of the laboratory relative to the rest frame $[V(t)_{E-W-CMB}]$.
Therefore:
\begin{figure}[t]
\includegraphics[width=1\textwidth]{Fig1.eps}
\caption{An illustration which represents various movements of the Earth as well as its positions in different seasons, as for example, on March 7 and September 7 in 2011.}
\end{figure}
\begin{equation}
\begin{split}
V(t)_{E-W-CMB}=&V_{O}[\sin(\Omega_{S}t)\sin(\Omega_{O}t)+\cos\varepsilon \cos(\Omega_{S}t) \cos(\Omega_{O}t)] \\
&+V_{R}+V_{S}[\sin\alpha \cos\delta \sin(\Omega_{S}t)]
\end{split}
\end{equation}
where $\chi=$co-latitude, $\alpha=$ Right Ascension, $\delta=$ Declination, $\varepsilon=$ the angle between the ecliptic and the Sun centered Celestial Equatorial plane, $\Omega_{S}=$ sidereal angular rotational frequency $(= 2\pi/(23h 56\min)\cong 4.18 \times 10^{-3} deg. s^{-1})$, $V_{R}=$ the velocity due to the Earth's rotation about its axis depending on the geographical latitude $0\leq V_{R}\leq 4.5\times 10^{2} ms^{-1}$, $\Omega_{O}=$ the Earth is orbiting relative to the Sun with the angular frequency $(=2\pi/(1yr)\cong 1.14\times 10^{-5} deg.s^{-1})$, $V_{O}=$ the velocity due to the Earth's orbital motion relative to the Sun $(\approx 3\times 10^{4} m.s^{-1})$ and $V_{S}=$ the velocity of the solar system towards $(\alpha,\delta)$ relative to the rest frame. For the CMB as the rest frame we take $V_{S}=$ the velocity of the solar system towards $[(\alpha, \delta)=(168\deg, -7.22\deg)]$ relative to the CMB $(\approx 3.71 \times 10^{5} m.s^{-1})$ [12].\\
\\ Using equation (1), we derive the time dependent component of the velocity of the laboratory relative to the center of the solar system $[V(t)_{E-W}]$ as follows:
\begin{equation}
\begin{split}
V(t)_{E-W}=V_{O}[\sin(\Omega_{S}t)\sin(\Omega_{O}t)+\cos\varepsilon \cos(\Omega_{S}t) \cos(\Omega_{O}t)]+V_{R}
\end{split}
\end{equation}
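Equations (1) and (2) are straightforward to evaluate numerically. The sketch below transcribes them with the constants quoted above; the obliquity $\varepsilon \approx 23.44\deg$ and the zero point of $t$ are assumptions of this illustration, since neither is fixed explicitly in the text:

```python
import math

# Constants as quoted in the text (SI units); angular frequencies in rad/s.
OMEGA_S = 2*math.pi / (23*3600 + 56*60)   # sidereal rotation frequency
OMEGA_O = 2*math.pi / (365.25*24*3600)    # orbital frequency (1/yr)
EPSILON = math.radians(23.44)             # obliquity epsilon (assumed value)
V_O = 3.0e4      # m/s, Earth's orbital speed around the Sun
V_R = 4.5e2      # m/s, rotation speed (equatorial upper bound)
V_S = 3.71e5     # m/s, solar-system speed relative to the CMB
ALPHA = math.radians(168.0)               # right ascension of the CMB apex
DELTA = math.radians(-7.22)               # declination of the CMB apex

def v_ew_cmb(t):
    """Equation (1): East-West lab velocity component w.r.t. the CMB."""
    return (V_O*(math.sin(OMEGA_S*t)*math.sin(OMEGA_O*t)
                 + math.cos(EPSILON)*math.cos(OMEGA_S*t)*math.cos(OMEGA_O*t))
            + V_R
            + V_S*math.sin(ALPHA)*math.cos(DELTA)*math.sin(OMEGA_S*t))

def v_ew(t):
    """Equation (2): same component relative to the solar-system centre."""
    return (V_O*(math.sin(OMEGA_S*t)*math.sin(OMEGA_O*t)
                 + math.cos(EPSILON)*math.cos(OMEGA_S*t)*math.cos(OMEGA_O*t))
            + V_R)
```

Sampling these functions over a year reproduces the sidereal and seasonal modulation plotted in Fig. 2.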
For our present discussion and analysis, we present $V(t)_{E-W-CMB}$ of equation (1) for a laboratory at CERN with respect to the CMB in Fig. 2.
\begin{figure}[h]
\includegraphics[width=1\textwidth]{Fig2.eps}
\caption{Presentation of the time dependent components of the velocities of a laboratory at the CERN $V(t)_{E-W-CMB}$ of equation (1)] with respect to the CMB along the East-West direction for the Earth's positions in different seasons as shown in Fig. 1.}
\end{figure}
\\
\\ Emission theories were proposed where electrodynamics was modified by supposing that the velocity of a light wave remained associated with the source rather than with a local or universal frame [18]. Following an emission theory, we assume that the speed of neutrino $(u)$ depends on the speed of the source $V(t)_{E-W-CMB}$ or $V(t)_{E-W}$ .
\section{Long-baseline neutrino experiments}
Following Zuber [19], we know that there are three projects to perform long-baseline neutrino experiments to find out the speed of the neutrino. These projects are in Asia (sending a neutrino beam from a source at KEK to detectors at Super-Kamiokande in Japan with a baseline distance of about 250 km), North America (sending a neutrino beam from a source at Fermilab to a detector at the Soudan mine with a baseline distance about 730 km in USA) and Europe (sending a neutrino beam from a source at CERN to a detector at the Gran Sasso Laboratory with a baseline distance about 732 km). We present the outcomes of these long-baseline neutrino experiments, which were performed in different years, in Fig. 3. Here we choose the results of the neutrinos with average energy of $<40$ GeV as shown in Fig. 2 of Ref. [12] for our present analysis. The KAMIOKANDE-II Collaboration [21] data was obtained in Japan and reported in 1987. The Fermilab [I] data and the MINOS Collaboration [10] data were obtained at Fermilab in USA and reported in 1979 and 2007 respectively. The OPERA Collaboration [11] data was obtained in Europe and reported in 2011.
\begin{figure}[h]
\includegraphics[width=1\textwidth]{Fig3.eps}
\caption{Outcomes of different long-baseline neutrino ($<40$ GeV) experiments where u is the velocity of the neutrino and c is the velocity of light in vacuum. [I] As in Fig. 2 of Ref. [12] and [20].}
\end{figure}
\\
\\We already noted in section 2 that for our present analysis and discussion, as for example, we look at the frameworks of long-baseline neutrino experiment of the OPERA Collaboration in detail. This experiment lies 1,400 meters underground in the Gran Sasso National Laboratory in Italy. It is designed to study a beam of neutrinos with average energy of $\sim17$ GeV coming from the CNGS neutrino beam at CERN (CERN Neutrino beam to Gran Sasso), Europe's premier high-energy physics laboratory located 732 kilometers away near Geneva, Switzerland. Let us note the parameters of this measurement following [11]: the speed of light in vacuum, $c\cong299,792,458 m/s$; the baseline used in the OPERA measurement, $L\cong 732,000 m$; the velocity of the neutrino, $u$; the time of flight corresponding to $c$, $L/c$; the time of flight corresponding to $u$, $L/u$. \\
\\
Therefore, the difference is $(\frac{L}{c} - \frac{L}{u})=(60\pm6.9(stat.)\pm7.4(sys.)) nanosecond$.\\
\\
Fig. 4 presents the predicted differences between time-of-flights corresponding to $c$ and $u$ from CERN to LNGS $(\dfrac{L}{c+V(t)_{E-W-CMB}} -\dfrac{L}{c} )$ assuming a Galilean transformation for the speed of neutrino. We assume that the speed of neutrino is $u=c + V(t)_{E-W-CMB}$.
\begin{figure}[h]
\includegraphics[width=1\textwidth]{Fig4.eps}
\caption{Presentation of a predicted time difference between the time-of-flights corresponding to $c$ and $u$ from CERN to LNGS assuming that the speed of neutrino follows a Galilean transformation where $u=c+V(t)_{E-W-CMB}$ from equation (1). }
\end{figure}
\begin{figure}[h]
\includegraphics[width=1\textwidth]{Fig5.eps}
\caption{Presentation of a predicted time difference between the time-of-flights corresponding to c and u from CERN to LNGS assuming that the speed of neutrino follows a Galilean transformation where $u=c+V(t)_{E-W}$ from equation (2).}
\end{figure}
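The magnitude of the Galilean predictions plotted in Fig. 4 and Fig. 5 can be estimated directly from $L/c - L/(c+V) \approx LV/c^{2}$. The following sketch evaluates this difference at the approximate maxima of the frame velocities entering equations (1) and (2):

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
L = 732_000.0       # CERN - Gran Sasso baseline, m

def dt_ns(v_frame):
    """Galilean time-of-flight difference L/c - L/u (in ns) for u = c + v_frame."""
    return (L / C - L / (C + v_frame)) * 1e9

# Approximate maxima of the frame velocities from equations (1) and (2):
dt_cmb = dt_ns(3.71e5 + 3.0e4 + 4.5e2)   # CMB frame: microsecond scale
dt_sol = dt_ns(3.0e4 + 4.5e2)            # solar-system frame: ~10^2 ns scale
print(f"CMB frame: {dt_cmb:.0f} ns, solar-system frame: {dt_sol:.0f} ns")
```

This reproduces the scales visible in the figures: a few microseconds for the CMB frame and a few hundred nanoseconds for the solar-system frame, to be compared with the 60-nanosecond OPERA value.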
\section{Discussion}
The speed of the neutrino is equal to the speed of light in vacuum $(c)$ if the source of neutrinos is attached to a rest frame (say, the CMB). When the source of neutrinos is attached to a moving frame (the Earth), one can assume that the speed of the neutrino, $u=c+V(t)_{E-W-CMB}$ or $u=c+V(t)_{E-W}$, follows a Galilean transformation. Fig. 3, based on the outcomes of different neutrino experiments, presents an interesting variation of the speed of the neutrino which indicates the possibility of our claim. Therefore, one cannot rule out the possibility that the variation is due to the movement of the Earth. \\
\\ We already presented the time dependent component of the velocity of a laboratory relative to the CMB in equation (1) and in Fig. 2. As discussed in section 2, the velocity of the Earth relative to a rest frame is an unresolved problem. Therefore, we derived equation (2), the time dependent velocity of the Earth with respect to the centre of the solar system, which was used to analyse classic experiments (such as Ref. [3]) before the discovery of the CMB. We presented the predicted time difference between the time-of-flights corresponding to $c$ and $u$ from CERN to LNGS (a baseline distance of 732 km) assuming that the speed of the neutrino follows the emission theories. Based on the presentations in Fig. 4 and Fig. 5, we can estimate that the time-difference of 60 nanoseconds reported by OPERA is within the range of our prediction.
\section{Conclusions}
We know that the neutrino can escape the core of a star without incident, and it can pass through the Earth without any barrier. As far as we know, almost all of the important properties of neutrinos are still unresolved. These peculiar properties of neutrinos make it a huge challenge to interpret the outcome of a neutrino experiment unambiguously. \\
\\ However, we would like to conclude that while the measured time-difference is consistent with a frame-dependent speed of the neutrino, it is much smaller than the maximum shown in Fig. 4, although it is comparable to that shown in Fig. 5. This is true of all of the experiments shown in Fig. 3. Thus we think that the time-difference is highly unlikely to be due to a frame-dependent speed of the neutrino. Following Maccione [22], we note that no sidereal variations have been measured for OPERA, and the exact time and date of the neutrino emission are unclear in the OPERA report by Adam et al [11]. This is also true of all of the experiments shown in Fig. 3. Therefore, we propose that future tests should determine whether there is any sidereal variation in the neutrino velocity. Also, two measurements with a six-month gap, as indicated in Fig. 2, would give higher sensitivity to understand the reality.
\section*{Acknowledgements}
The author is indebted to Professor A. D. Stauffer, Department of Physics and Astronomy, York University, Toronto, Canada for valuable comments and discussions. Suggestions of Professor Brendan M. Quine, Department of Earth and Space Science and Space Engineering, York University, Toronto, Canada to consider this
problem are gratefully acknowledged. This work was supported by York University, Toronto, Canada. Also, the author is thankful to Mr. Nick Balaskas, Department of Physics and Astronomy, York University, Toronto, Canada for his encouragement of this work.
\section*{References}
1. Arya, A. P. (1974) Elementary Modern Physics (Addison-Wesley, Physics). \\ \\
2. Jenkins, F. A. and White, H. E. (1987) Fundamentals of Optics (4th edn., McGraw-Hill International Editions, Physics Series). \\ \\
3. Michelson, A. A. and Morley, E. W. (1887) Am. J. Sci. 34, 333-345. \\ \\
4. Lorentz, H. A. (1904) Proc. Acad. Sci. Amsterdam 6, 809. \\ \\
5. Poincare, H. (1905) C. Rendues Acad. Sci. Paris 140, 1504. \\ \\
6. Einstein, A. (1905) Ann. Phys. 322, 891-921. \\ \\
7. Kostelecky, V. A. and Samuel, S. (1989) Phys. Rev. D 39, 683-685. \\ \\
8. Amelino-Camelia, G., Ellis, J., Mavromatos, N. E., Nanopoulos, D. V. and Sarkar, S. (1998) Nature 393, 763-765. \\ \\
9. Gambini, R. and Pullin, J. (1999) Phys. Rev. D 59, 124021. \\ \\
10. Adamson, P. et al. [MINOS Collaboration] (2007) Phys. Rev. D 76, 072005. \\ \\
11. Adam, T. et al. (173 additional authors) (2011) arXiv:1109.4897v2. \\ \\
12. Amelino-Camelia, G., Gubitosi, G., Loret, N., Mercati, F., Rosati, G. and Lipari, P. (2011) arXiv:1109.5172v2. \\ \\
13. Smoot, G. F., Gorenstein, M. V. and Muller, R. A. (1977) Phys. Rev. Lett. 39(14), 898-901. \\ \\
14. Ahmed, M. F., Quine, B. M., Sargoytchev, S. and Stauffer, A. D. (2011a) accepted for publication in the Indian Journal of Physics, arXiv:1011.1318v2. \\ \\
15. Ahmed, M. F., Quine, B. M., Sargoytchev, S. and Stauffer, A. D. (2011b) arXiv:1103.6086v3. \\ \\
16. Ahmed, M. F. (2012) PhD thesis, York University, Toronto, Canada (in progress). \\ \\
17. van Elburg, R. A. J. (2011) arXiv:1110.2685v1. \\ \\
18. Panofsky, W. K. H. and Phillips, M. (1962) Classical Electricity and Magnetism (2nd edn., Addison-Wesley). \\ \\
19. Zuber, K. (2004) Neutrino Physics, IOP Publishing. \\ \\
20. Kalbfleisch, G. R., Baggett, N., Fowler, E. C. and Alspector, J. (1979) Phys. Rev. Lett. 43, 1361. \\ \\
21. Hirata, K. et al. [KAMIOKANDE-II Collaboration] (1987) Phys. Rev. Lett. 58, 1490. \\ \\
22. Maccione, L., Liberati, S. and Mattingly, D. M. (2011) arXiv:1110.0783v1.
\end{document}
\section{Conclusion and Outlook}\label{Sec:Conclusion}
In this work, we define the Kinematic Orienteering Problem and benchmark exact and heuristically obtained solutions against optimal solutions of the DOP. For flight time estimation, we present an improved analytical approach to calculate time-optimal multidimensional trajectories with bounded acceleration and velocity and demonstrate why the state-of-the-art approach is not valid in general. Further, we show that the obtained overall trajectories can precisely be tracked by a modern MPC-based UAV flight controller. To our knowledge, this constitutes the first approach that enables time-optimal mission planning for multirotor UAVs with consideration of their full physical capabilities. In future work, we will improve the computational performance and quality of our heuristic solution approach and investigate the KOP in three-dimensional scenarios.
\section{Heuristic Solution Approach} \label{Sec:Heuristic}
For larger problem instances, the KOP is too complex to be solved exactly. Therefore, we propose a heuristic solution approach based on a Large Neighborhood Search (LNS) framework, which is a widely used and easy-to-implement approach for route planning problems \cite{Pisinger.2018}. Starting from an initial solution generated by a construction heuristic, we iteratively destroy 50\% of the solution and then apply the construction heuristic again. After 100 iterations, we use the best-found solution as a new initial solution for the same procedure for another 100 iterations, but destroy only 20\% of the solution. In the first phase, the objective is to search globally for good solutions; in the second phase, we locally refine the best solution found so far. In the following, we present the applied construction and destruction heuristics.
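The two-phase procedure just described can be sketched generically; here `construct`, `destroy`, and `priority` are placeholder hooks standing in for the heuristics defined in this section:

```python
def lns(initial_solution, construct, destroy, priority):
    """Generic two-phase LNS loop: destroy a fraction of the current solution
    and repair it with the construction heuristic, keeping the best found."""
    best = initial_solution
    # Phase 1 destroys 50% (global search), phase 2 destroys 20% (local refinement).
    for destroy_frac in (0.5, 0.2):
        current = best  # restart each phase from the best solution so far
        for _ in range(100):
            candidate = construct(destroy(current, destroy_frac))
            if priority(candidate) > priority(best):
                best = candidate
            current = candidate
    return best
```

Whether each iteration destroys the current or the incumbent best solution is an implementation choice; the sketch accepts every repaired candidate as the new current solution.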
\subsection{Construction Heuristic}\label{Subsec:Construction_Heuristic}
We propose a construction heuristic that inserts unscheduled locations $l_p$ into an existing plan by evaluating the ratio of potentially collected priority $r_p$ and additional flight time used for insertion. Since a location $l_p$ can be traversed in multiple ways according to heading angle $h_p$ and velocity $v_p$, we only consider the best insertion possibility, i.e. where the additional flight time is minimal:
\begin{equation*}
c^\ast_{i,p,j}:=\min_{h_p \in \setHead, v_p \in \setVelo}c_{i, h_i, v_i}^{p, h_p, v_p} + c_{p, h_p, v_p}^{j, h_j, v_j} - c_{i, h_i, v_i}^{j, h_j, v_j}
\end{equation*}
In this term, $i$ and $j$ refer to the predecessor and successor locations $l_i$ and $l_j$ of location $l_p$. At this point, heading angles and velocities at the predecessor and successor are fixed. Hence, the term describes the additional costs of inserting location $l_p$ with optimal heading angle $h_p$ and velocity $v_p$ in between $l_i$ and $l_j$. The best insertion for location $l_p$ in between any predecessor $l_i$ and successor $l_j$ is associated with the highest ratio \begin{equation}\label{eq:ratio}
\ratio_{p}^\ast = r_p \left[\min_{i, j} c^\ast_{i,p,j} \right]^{-1}.
\end{equation}
This is done for all unscheduled locations. The insertion with the overall highest ratio is realized. The construction heuristic terminates when no further insertion is possible without violating the maximum flight time restriction. As post-processing for each insertion, we optimize the heading angle and velocity of the start and end location.
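A possible rendering of this ratio-greedy insertion step is sketched below. Here `cost` is a placeholder for the kinematic flight-time lookup $c_{i, h_i, v_i}^{j, h_j, v_j}$, a route is a list of (location, heading, velocity) triples with fixed first and last entries, and the small constant merely guards against a degenerate zero insertion cost:

```python
def best_insertion(route, unscheduled, priority, cost, headings, velocities):
    """Return (ratio, insert_position, (p, h_p, v_p)) for the single insertion
    with the highest priority-per-extra-flight-time ratio, or None."""
    best = None
    for p in unscheduled:
        for pos in range(len(route) - 1):
            i, j = route[pos], route[pos + 1]
            # Enumerate traversal configurations of p between fixed i and j:
            for h in headings:
                for v in velocities:
                    node = (p, h, v)
                    extra = cost(i, node) + cost(node, j) - cost(i, j)
                    ratio = priority[p] / max(extra, 1e-9)  # guard zero cost
                    if best is None or ratio > best[0]:
                        best = (ratio, pos + 1, node)
    return best
```

The caller realizes the insertion via `route.insert(best[1], best[2])` whenever the maximum flight time still permits it.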
\subsection{Destruction Heuristics}
To destroy a part of the incumbent solution, we make use of three different destruction heuristics, which are applied randomly until the predefined percentage of destruction is reached. The first heuristic removes the location from the existing solution whose ratio between gained priority and flight time used is the lowest. As a second destruction heuristic, we remove the location whose assigned heading angle and velocity do not connect it with its predecessor and successor optimally. This is checked via full enumeration. The third heuristic combines both and removes the location whose ratio between the gained priority and the difference between current flight time and best possible flight time is the lowest.
\section{Introduction}
\thispagestyle{FirstPage}
In the last decade, UAV technology has been steadily gaining momentum. With technological advancements, UAVs are proving to be extremely useful in a variety of applications. Within this work, we focus on flight planning for surveillance and data collection, which is one of the most important use cases for UAVs \cite{Otto.2018}. In practical applications, UAVs have a limited flight time due to their battery capacity, which is why a selection of the requested locations for data collection must often be made. To do this, locations can be prioritized and the problem can be modeled as an Orienteering Problem (OP), which aims to find the trajectory that maximizes the collected priorities given the maximum flight time constraint (see \cite{Penicka.2017, Faigl.2019, Fountoulakis.2020}). For related problems, current approaches consider the UAV's physics to generate efficient flight motions and use them as flight time estimation between locations. The most prominent examples are so-called Dubins paths and Bézier curves. Dubins paths are based on the principle of minimum turning radii, which are always feasible with regard to the underlying physics (see \cite{Otto.2018}). However, their disadvantage is that they are based on the assumption of a constant velocity. Especially for the mission planning of the widely used multirotor UAVs, the degree of freedom offered by longitudinal acceleration, which enables flying sharp turns, is thus omitted. Bézier curves, in turn, allow longitudinal acceleration. However, they do not intrinsically consider physical restrictions of the UAV such as maximum acceleration, which is why a feasible trajectory along the curve must be determined via post-processing. Furthermore, they do not provide any guarantees regarding time-efficiency.
\begin{figure}
\centering
\include{tikz/KOPSolution}
\vspace{-8mm}
\caption{Example solution of the KOP for the second OP instance from Tsiligirides with bounds on the allowed acceleration $a(t) \in \left[-1.5, 1.5 \right]\left(\frac{\text{m}}{\text{s}^2}\right) $ as well as bounds on the allowed velocity $v(t) \in \left[ -3, 3\right] \left(\frac{\text{m}}{\text{s}}\right)$ and a maximum flight time $C_{max}=35$\,s. The changing color of the trajectory represents the corresponding velocity. The priority at each location is indicated by its opacity, with a high opacity referring to a high priority.}
\vspace{-3mm}
\label{fig:kop}
\end{figure}
For this reason, we propose a solution approach which is able to consider acceleration in arbitrary direction and guarantee physical feasibility as well as time-efficiency. Our approach combines the generation of time-optimal trajectories with bounds on velocity and acceleration, further denoted as kinematic trajectories, with the well-known Orienteering Problem and is therefore called Kinematic Orienteering Problem (KOP). For better understanding, Fig. \ref{fig:kop} shows an example solution to the problem.
With this work, we extend our previous research \cite{Meyer.2021} and make the following contributions:
\begin{itemize}
\item We introduce the KOP as the problem of finding the kinematic trajectory that maximizes the collected priorities without exceeding the maximum allowed flight time.
\item Further, we propose a mathematical problem formulation for the KOP and propose an easy-to-implement heuristic solution approach that yields high-quality solutions in short time.
\item Additionally, we present an improved analytical approach to generate time-optimal kinematic trajectories in multiple dimensions, since we found that the state-of-the-art procedure is not generally correct.
\item We benchmark our proposed solution approach against exact solution of the DOP and show by simulation that the trajectories found by our approach can precisely be tracked by a modern MPC-based flight controller.
\end{itemize}
The outline of this paper is as follows: We give an overview of related literature in Section \ref{Sec:Related_Work}. Next, we present our approach to generating time-optimal kinematic trajectories in Section \ref{Sec:TrajectoryGeneration}, followed by the mathematical definition of the KOP in Section \ref{Sec:Routing}. Section \ref{Sec:Heuristic} presents our heuristic solution approach and Section \ref{Sec:Results} shows the results of our approach. Finally, Section \ref{Sec:Conclusion} concludes and gives an outlook.
\section{Kinematic Orienteering Problem}\label{Sec:Routing}
The objective of the KOP is to find the kinematic trajectory through a set of locations that maximizes the sum of priorities of the selected locations while considering the maximum flight time. The proposed mathematical optimization model for the KOP assumes that a UAV can move through a location with a discretized heading angle and additionally at a discretized velocity. As a generalization of the OP, the KOP is also NP-hard (see \cite{Golden1987}).
\subsection{Assumptions and Notations}
We solve the KOP for multirotor UAVs in an obstacle-free two-dimensional plane where each element in a set of locations $\setWP = \lbrace l_i: i = 1,..., \numWP\rbrace$ can be traversed with discretized heading angle and with a discretized velocity and is assigned a priority $r_i \in \mathbb{R}_0^+$. Start and end locations are represented by $l_1$ and $l_{\numWP}$. The two-dimensional case is addressed, since it is the basis of an extension into the three-dimensional space and to enable a direct comparison to the Dubins Orienteering Problem (DOP) as state-of-the-art on benchmark instances.
With $\numHead$ and $\numVelo$ representing the number of discretization levels for heading angles and velocities, we define the set of discretized heading angles as $\setHead = \left\{h_i: h_i = 2\pi i/\numHead, i = 1,...,\numHead\right\}$. The set of discretized velocities depends on the maximum allowed velocity and consists of elements $\setVelo = \lbrace v_i: v_i = (i-1)v_{max}/(\numVelo-1), i = 1,..., \numVelo\rbrace$.
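The two discretization sets can be generated directly from these definitions:

```python
import math

def heading_set(n_h):
    """Discretized heading angles h_i = 2*pi*i/n_h for i = 1, ..., n_h."""
    return [2 * math.pi * i / n_h for i in range(1, n_h + 1)]

def velocity_set(n_v, v_max):
    """Discretized velocities (i-1)*v_max/(n_v-1) for i = 1, ..., n_v,
    spanning 0 to v_max; assumes n_v >= 2."""
    return [(i - 1) * v_max / (n_v - 1) for i in range(1, n_v + 1)]
```

For instance, `heading_set(8)` yields multiples of 45 degrees and `velocity_set(3, 3.0)` yields the velocities 0, 1.5 and 3.0 m/s.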
Costs that describe the flight time to get from location $l_i$ with heading angle $h_k$ and velocity $v_g$ to location $l_j$ with heading angle $h_m$ and velocity $v_w$ are defined by $c_{ikg}^{jmw}$ and are determined by our trajectory generation method presented above. The maximum flight time is denoted as $C_{max}$.
\subsection{Mathematical Formulation of the KOP}
The mathematical programming formulation to determine the priority-maximizing sequence of locations, as well as the corresponding heading angles and velocity configurations that define the UAV's trajectory, is presented in the following.
The main decision variables of our formulation, $x_{ikg}^{jmw}$, are binary and interpreted as
\begin{equation*}
x_{ikg}^{jm\text{$w$}} = \begin{cases}
1, & \text{if location $l_i$ is left with heading angle $h_k$}\\
& \text{and velocity $v_g$ towards location $l_j$, which is}\\
& \text{entered with heading angle $h_m$ and velocity $v_w$},\\
0, & \text{otherwise}
\end{cases}
\end{equation*}
Furthermore, integer decision variables $u_i \in \left\{ 1, ..., \numWP \right\}, i=1,..., \numWP$ define the sequence of visited locations $l_i$ in the tour. The overall KOP model is given as follows.
\begin{align}
\max & \sum_{i=2}^{\numWP-1}\sum_{j=2}^{\numWP} \sum_{k=1}^{\numHead}\sum_{m=1}^{\numHead}\sum_{g=1}^{\numVelo} \sum_{w=1}^{\numVelo} x_{ikg}^{jmw}r_j \label{OP_objective} \\
\text{s.t.} \nonumber\\
& \sum_{j=2}^{\numWP} \sum_{k=1}^{\numHead}\sum_{m=1}^{\numHead}\sum_{g=1}^{\numVelo}\sum_{w=1}^{\numVelo} x_{1kg}^{jmw} = 1 \label{start_depot_left_v_zero}\\
& \sum_{i=1}^{\numWP-1} \sum_{k=1}^{\numHead}\sum_{m=1}^{\numHead}\sum_{g=1}^{\numVelo}\sum_{w=1}^{\numVelo} x_{ikg}^{\numWP mw} = 1 \label{end_depot_entered_v_zero}\\
& \sum_{i=1}^{\numWP-1} \sum_{k=1}^{\numHead}\sum_{m=1}^{\numHead}\sum_{g=1}^{\numVelo}\sum_{w=1}^{\numVelo} x_{ikg}^{jmw} \leq 1 \quad\quad \forall j = 2, ..., \numWP\label{all_locations_only_once}\\
& \sum_{i=1}^{\numWP-1} \sum_{k=1}^{\numHead}\sum_{g=1}^{\numVelo}x_{ikg}^{jmw} = \sum_{o=2}^{\numWP} \sum_{p=1}^{\numHead}\sum_{q=1}^{\numVelo}x_{jmw}^{opq} \nonumber\\
& \qquad\qquad\qquad\qquad\qquad\qquad \forall j=2,\cdots, \numWP-1 \nonumber\\
& \qquad\qquad\qquad\qquad\qquad\qquad \forall m=1,\cdots,\numHead \nonumber\\
& \qquad\qquad\qquad\qquad\qquad\qquad \forall w=1,\cdots,\numVelo\label{flow_conservation}\\
& \sum_{i=1}^{\numWP-1}\sum_{j=2}^{\numWP} \sum_{k=1}^{\numHead}\sum_{m=1}^{\numHead}\sum_{g=1}^{\numVelo} \sum_{w=1}^{\numVelo} x_{ikg}^{jmw}c_{ikg}^{jmw} \leq C_{max}\label{maximum_flight_time}
\end{align}
\begin{align}
& u_i-u_j + 1 \leq (\numWP-1)\left(1-\sum_{k=1}^{\numHead}\sum_{m=1}^{\numHead}\sum_{g=1}^{\numVelo}\sum_{w=1}^{\numVelo}x_{ikg}^{jmw}\right)\nonumber\\
&\qquad\qquad\qquad\qquad\qquad\qquad \forall i,j=1,\cdots, \numWP\label{subtour_prevention_2}\\
& u_1 = 1 \label{subtour_prevention_3}\\
& u_i \in \lbrace2, ..., \numWP\rbrace \qquad\qquad\qquad \forall i=2,\cdots, \numWP \label{subtour_prevention_1}\\
& x_{ikg}^{jmw} \in \left\lbrace 0,1 \right\rbrace \qquad\qquad\qquad \forall i,j=1,\cdots, \numWP \nonumber\\
& \qquad\qquad\qquad\qquad\qquad\qquad\, \forall k,m=1,\cdots,\numHead \nonumber\\
& \qquad\qquad\qquad\qquad\qquad\qquad \, \forall g,w=1,\cdots,\numVelo \label{binary}
\end{align}
The objective of the presented mathematical programming formulation \eqref{OP_objective} is to maximize the collected priorities. Constraint \eqref{start_depot_left_v_zero} enforces that the start location is left, whereas constraint \eqref{end_depot_entered_v_zero} enforces that the end location is entered. Constraint set \eqref{all_locations_only_once} ensures that each location is entered at most once. Flow conservation is fulfilled by constraint set \eqref{flow_conservation}. These constraints ensure that location $l_j$ is left in the same direction as it is entered as well as with the same velocity. Constraint \eqref{maximum_flight_time} ensures that the maximum allowed flight time is not exceeded. To prevent subtours, we make use of the subtour elimination constraints \eqref{subtour_prevention_2}, \eqref{subtour_prevention_3} and \eqref{subtour_prevention_1}, which are formulated according to the Miller-Tucker-Zemlin (MTZ) formulation \cite{Miller.1960}. Constraints \eqref{binary} enforce the decision variables $x_{ikg}^{jmw}$ to be binary.
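For intuition, the behavior encoded by the objective and the budget constraint can be checked by brute force on a toy instance. The sketch below is illustrative only: the cost matrix and priorities are made up, and the heading and velocity indices are collapsed into a single precomputed edge cost per pair of locations, so it enumerates visiting orders of a simplified orienteering instance rather than solving the full KOP.

```python
from itertools import permutations

def brute_force_op(travel_time, priority, C_max, start, end):
    """Enumerate all visiting orders of the intermediate locations and
    return the maximum total priority collectable within the budget.

    travel_time[i][j] plays the role of the edge costs after fixing one
    heading/velocity configuration per edge (toy setting)."""
    n = len(priority)
    middle = [i for i in range(n) if i not in (start, end)]
    best = 0
    for r in range(len(middle) + 1):
        for subset in permutations(middle, r):
            tour = [start, *subset, end]
            cost = sum(travel_time[a][b] for a, b in zip(tour, tour[1:]))
            if cost <= C_max:
                best = max(best, sum(priority[i] for i in subset))
    return best
```

On a four-location instance with priorities $(0, 10, 5, 0)$, a tighter budget forces the tour to skip the lower-priority location, mirroring the tradeoff the MILP resolves.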
\section{Related Work} \label{Sec:Related_Work}
Since this work is, to our knowledge, the first to combine the classical Orienteering Problem, which is a subclass of route planning, with the generation of time-optimal kinematic trajectories for UAVs, this section is divided into two parts. In the first part, we give an overview of current approaches for UAV route planning. In the second part, we present the latest approaches for UAV trajectory generation.
\subsection{Route Planning Problems for Multirotor UAVs}
Route planning problems for multirotor UAVs are enjoying growing interest in the scientific literature. Although algorithms for solving UAV route planning problems have until now used time-of-flight estimates based on Euclidean distances (see \cite{Fountoulakis.2020}), the trend has moved towards more precise and practically feasible metrics \cite{Henchey.2020}.
To this end, established methods include simple physical properties of UAV movements in route planning. One such characteristic is the minimum turning radius, which results from a UAV moving at constant velocity and applying maximum lateral acceleration. It is a restriction that comes into effect especially for fixed-wing UAVs but is also considered for multirotor UAVs. The resulting flight paths are known as Dubins paths and are proven to be optimal with respect to the assumed kinematic restrictions (see \cite{Dubins.1957}). This principle is used in many UAV route planning problems; examples include \cite{Penicka.2017} and \cite{Sundar.2022}. The advantage of using Dubins paths and thus assuming constant velocity is that acceleration and deceleration are avoided, making the trajectory more energy-efficient overall. Its disadvantage is that large detours may have to be accepted, since the bounded maximum acceleration must overcome the inertia of the vehicle. At some point, these detours cancel out the efficiency advantage of a constant velocity, which is why an optimal tradeoff has to be determined \cite{Meyer.2021}.
Another possibility to estimate flight times for UAV route planning problems is to use so-called Bézier curves \cite{Faigl.2018, Faigl.2019}. Here, two locations are connected by a smooth curve based on Bernstein polynomials. By setting intermediate control points appropriately, it is possible to guide the path safely around obstacles. However, the polynomial representation of a safe path through Bézier curves is purely spatial. To ensure physical feasibility, the Bézier curve must first be transferred to the time domain. Only by assigning each spatial point of the curve to a particular point in time is it possible to consider physical constraints such as maximum velocity and acceleration (see \cite{Gao.2018}). For routing problems where thousands of trajectories are calculated, this two-step procedure is computationally expensive (see \cite{Faigl.2018}). Further, it is argued in \cite{Gao.2018} that if the initial and final velocity are not both zero, a feasible solution may not exist. A further disadvantage, especially in environments without obstacles, is that the resulting trajectory is bound to the precomputed path and therefore likely to be time-suboptimal.
\subsection{Trajectory Generation for Multirotor UAVs}
Apart from Dubins paths and Bézier curves, there are many trajectory generation approaches that plan UAV motions directly within the time domain and hence may be better suited for flight time estimation. However, some of these approaches do not consider bounds on maximum velocity and acceleration either. An example is the well-known approach to generate minimum-snap trajectories, see \cite{Mellinger.2011, Richter.2016}.
An approach that considers bounds on velocity and acceleration is model-based predictive control (MPC). It is based on a discrete-time motion model of the UAV consisting of system parameters, system state, and control variables. The objective is to find an energy-efficient sequence of control inputs that minimizes the deviation of the current state from a reference, for example, described by the desired end state, while respecting limits on control input and feasible states \cite{Mueller.2013, Kamel.2017}. However, MPC does not provide time-optimality, and since MPC solves a mathematical program, usually a quadratic one, numerically, it is a computationally intensive method.
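As a minimal illustration of the kind of discrete-time motion model such an MPC scheme predicts with, consider a per-axis double integrator with clamped input. This is a generic sketch, not the exact model of the cited works, and the velocity clamp is a simplification that ignores the precise instant within a step at which the limit is reached.

```python
def step(p, v, a_cmd, dt, v_max, a_max):
    """One prediction step of a per-axis double-integrator model.

    The commanded acceleration is clamped to the input limits and the
    resulting velocity to the state limits (simplified handling)."""
    a = max(-a_max, min(a_max, a_cmd))           # input limits
    p_new = p + v * dt + 0.5 * a * dt**2         # position update
    v_new = max(-v_max, min(v_max, v + a * dt))  # velocity limits
    return p_new, v_new
```

An MPC controller would roll this model forward over the prediction horizon and optimize the sequence of `a_cmd` values.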
The last form of trajectory optimization addressed here deals with the determination of time-optimal trajectories \cite{Beul.2016, Beul.2017}. According to Pontryagin's minimum principle (see \cite{Pontryagin.1987}), time-optimality is achieved by having the system always operate at its physical limits. For this purpose, the overall motion of the UAV is divided into several time segments and for each, the control variable may be the maximum or the minimum control value or zero. This property is used to calculate the trajectories analytically.
Further, \cite{Beul.2016} argues that time- and energy-optimality are equivalent in terms of the total thrust integral. Faster trajectories might consume more energy for acceleration and deceleration, but since their flight time is shorter, their overall energy consumption is lower than that of slower trajectories.
In our previous work \cite{Meyer.2021}, we adapt this approach by using acceleration as control input. This enables us to generate trajectories considering fundamental physical properties and to obtain high-quality travel time estimates in a very short time. However, we found that the state-of-the-art procedure that we based our trajectory generation on is not generally correct. In Section \ref{Sec:TrajectoryGeneration}, we give an example for our observation and present a refined method that solves that issue.
\section{Results} \label{Sec:Results}
To evaluate our LNS solution approach and the benefit of the KOP formulation, we generate globally optimal benchmark solutions of the Dubins Orienteering Problem (DOP) and compare them to exact and heuristic solutions of the KOP. Moreover, we show that the solutions obtained can be used as reference trajectories for multirotors, since we demonstrate that they can be tracked precisely by a modern MPC-based flight controller.
\subsection{Benchmark against DOP}
We generate benchmarks on Tsiligirides dataset 2 (see \cite{Tsiligirides.1984}) with slightly modified time budget constraints to be more representative. We chose dataset 2 since it is the only OP dataset whose problem instances could be solved to optimality with Gurobi as a commercial solver within a reasonable time. The kinematic properties we assume for the evaluation are $v_{max} = 3\,\frac{\text{m}}{\text{s}}$ and $a_{max} = 1.5\,\frac{\text{m}}{\text{s}^2}$. Further, we assume that each waypoint can be traversed with eight different and equally distributed heading angles. To determine the optimal solutions for the DOP, we calculate Dubins paths and use their length divided by the constant velocity as edge costs. Dubins paths require a minimum turning radius as input, which results from the centripetal force and is calculated by $r = v_{const}^2/a_{max}$. Since $a_{max}$ is a physically given, UAV-specific constant, the turning radius can only be changed by a modification of $v_{const}$. However, decreasing the constant velocity to achieve short paths might result in higher flight times.
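The coupling between constant velocity and turning radius can be made explicit in a one-line helper (a sketch of the relation $r = v_{const}^2/a_{max}$ stated above):

```python
def dubins_turn_radius(v_const, a_max):
    """Minimum turning radius implied by the lateral-acceleration limit."""
    return v_const ** 2 / a_max
```

Halving $v_{const}$ quarters the radius, which shortens the Dubins path but slows traversal; this is exactly the tradeoff behind scanning over several values of $v_{const}$.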
To find the best instance-dependent constant velocity, we solved the DOP for each constant velocity $v_{const} \in \lbrace 0.1, 0.2, ..., 1.0\rbrace \cdot v_{max}$. The highest collected priorities for the different maximum allowed flight times $C_{max}$ over all $v_{const}$ are presented in the second column of Table \ref{tab:Solutionquality_instance_2}. To directly benchmark the KOP formulation against the DOP formulation, we solve the corresponding KOPs with $\numVelo=1$ to optimality by using the associated $v_{const}$ scaled by $\sqrt{2}^{-1}$ as traversal velocities. The scaling is applied so as not to exceed the maximum allowed velocity of a single axis, which must be bounded to $\left[-3, 3\right]\cdot \frac{1}{\sqrt{2}}\,\left(\frac{\text{m}}{\text{s}}\right)$ to guarantee that the total maximum velocity is not exceeded and hence to be comparable with the DOP. For the same reason, the bound on maximum allowed acceleration for each axis in our trajectory generation is scaled with $\sqrt{2}^{-1}$ as well. The results are given in the third column of Table \ref{tab:Solutionquality_instance_2} ($\text{KOP-1}^\ast$). It can be seen that the optimal solution of the KOP consistently collects approximately $20\%$ more priorities than the best possible solution of the DOP for all instances. Note that the overall highest collected priorities are marked bold in Table \ref{tab:Solutionquality_instance_2}.
To demonstrate the effectiveness of the proposed LNS, we again conduct the same procedure for the KOP with our heuristic solution approach as solver. For each problem instance and each traversal velocity, we conducted ten runs, each with a different random seed. The overall highest collected priorities and the average collected priorities for the associated traversal velocity in brackets, demonstrating the competitiveness with the exact approach, are given in the fourth column of Table \ref{tab:Solutionquality_instance_2}.
Lastly, we solved the KOP as modeled in Section \ref{Sec:Routing} with multiple options for traversal velocities. In the first case ($\text{KOP-3}^\text{LNS}$), we consider the set of allowed velocities to be $\setVelo = \lbrace 0, 0.5, 1\rbrace\cdot v_{max}/\sqrt{2}$. In the second case ($\text{KOP-6}^\text{LNS}$), $\setVelo = \lbrace 0, 0.2, 0.4, 0.6, 0.8, 1\rbrace\cdot v_{max}/\sqrt{2}$ holds. Table \ref{tab:Solutionquality_instance_2} shows that for many problem instances the LNS approach yields even better solutions than found by the exact approach ($\text{KOP-1}^\ast$). This is possible since a higher degree of freedom for the traversal velocity is offered, which can successfully be exploited. However, for $C_{max} = 10$\,s, no improvements compared to the optimal DOP solutions can be found. Sometimes the solution is even worse than the optimal DOP solution. We assume this to be the case since the velocity yielding the best solution for $\text{KOP-1}^\ast$ is not considered. To illustrate the resulting trajectories, an example solution for $\numVelo=6$ and $\numHead=8$ is given in Fig. \ref{fig:kop}. The efficiency of the trajectory is clearly visible since slow velocities only occur when sharp turns are required.
Our exact approach, the LNS and the trajectory generation are implemented in Python and executed on an Intel Core i7-8565U CPU. The average computation time of a single trajectory is 62\,µs. The average runtimes for each solution method are presented on a logarithmic scale in Figure \ref{fig:runtimes}. Some problem instances for the $\text{DOP}^\ast$ and $\text{KOP-1}^\ast$ could not even be solved to optimality within a runtime of 100{,}000\,s. In these cases, we set the average runtime for the ten different traversal velocities to be greater than 10{,}000\,s. Nevertheless, since the dual bounds for the suboptimal solutions provided by Gurobi are worse than the proven optimal solutions for other traversal velocities, the optimality of the values presented in Table \ref{tab:Solutionquality_instance_2} holds. Contrary to the exact approaches, the LNS finds high-quality solutions in a short time and with slowly increasing runtimes for increasing problem sizes. Therefore, we see the benefit of our heuristic solution approach when it comes to larger problem instances.
\begin{table}
\centering
\scriptsize
\begin{tabular}{cccccc}
\hline
\hline
\addlinespace[1ex]
\multirow{2}{*}{$C_{max}$}& \multicolumn{3}{c}{Best fixed traversal velocity}& \multicolumn{2}{c}{Varying traversal velocity}\\
\cmidrule(lr){2-4}\cmidrule(lr){5-6}
& $\text{DOP}^\ast$ & $\text{KOP-1}^\ast$ & $\text{KOP-1}^{\text{\tiny LNS}}$ & $\text{KOP-3}^{\text{\tiny LNS}}$& $\text{KOP-6}^{\text{\tiny LNS}}$\\
\addlinespace[0.5ex]
\hline
\addlinespace[1ex]
10\,s& 80 & \textbf{95} & 80 (80) & 70 (70) & 80 (75)\\
15\,s& 155 & \textbf{180} & \textbf{180} (153) & 175 (170) & 165 (165) \\
20\,s& 215 & \textbf{250} & 235 (225) & 230 (226.5) & \textbf{250} (237.5) \\
25\,s& 275 & 325 & 315 (295.2) & \textbf{330} (311.5) & \textbf{330} (316.5) \\
30\,s& 315 & \textbf{390} & \textbf{390} (367.5) & 385 (369.5) & \textbf{390} (377.5)\\
35\,s& 370 & 430 & 425 (411.5) & 430 (415) & \textbf{435} (422.5) \\
40\,s& 415 & \textbf{450} & \textbf{450} (447) & \textbf{450} (446) & \textbf{450} (450) \\
\hline
\hline
\end{tabular}
\caption{Results of each solution method on Tsiligirides dataset 2 containing 21 locations. An asterisk as superscript denotes that the problem instances were solved exactly, whereas LNS indicates the application of our heuristic solver. Average collected priorities for our LNS approach are given in brackets.}
\label{tab:Solutionquality_instance_2}
\end{table}
\begin{figure}[]
\centering
\include{tikz/solution_times}
\vspace{-8mm}
\caption{Average runtimes for each solution approach over all problem instances for a specific $C_{max}$.}
\vspace{-3mm}
\label{fig:runtimes}
\end{figure}
\subsection{Trackability of Solutions}
We demonstrate in simulation that the solutions can be tracked precisely by utilizing a modern MPC-based UAV trajectory tracking controller. For that, we apply a twelve-state dynamic quadrotor model from \cite{Luukkonen.2011} with squared angular velocity for each rotor as the control input to represent real dynamic behavior. Next, we apply a nonlinear MPC according to \cite{Tzorakoleftherakis.2018}, which is provided by the MPC toolbox in MATLAB, with a sampling time of $0.1$\,s and a prediction horizon of $1.8$\,s. We use the position of the trajectory shown in Figure \ref{fig:kop} as a reference for demonstration. Figure \ref{fig:traj_tracking} shows that the reference trajectory is tracked precisely with negligible overshoots in position due to the deviation between the kinematic model for generating the reference trajectory and the nonlinear dynamic quadrotor model representing real behavior. The tracking error for the position as well as the actual velocity and acceleration profiles are given in blue in Figure \ref{fig:err_and_acc}, whereas the solution of the KOP as reference is given in green. The root mean square error (RMSE) of the obtained trajectory compared with the reference trajectory is $0.035$\,m. It can be seen that the velocity and acceleration profiles of the reference exactly comply with the predefined bounds. The same holds for the actual trajectory with a few minor exceptions for the control inputs, since the applied MPC has constraints on the maximum thrust but not on the maximum acceleration.
\begin{figure}[t]
\centering
\include{tikz/trajectory_tracking}
\vspace{-8mm}
\caption{Plot of the actual and the reference position for the solution of Tsiligirides problem instance number two with $C_{max} = 35$\,s.}
\vspace{-3mm}
\label{fig:traj_tracking}
\end{figure}
\begin{figure}[h!]
\centering
\begin{minipage}[]{0.7\textwidth}
\begin{flushleft}
\begin{subfigure}{.5\textwidth}
\include{tikz/trajectory_tracking_pos_err}
\vspace{-0.9cm}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\include{tikz/trajectory_tracking_velo}
\vspace{-0.9cm}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\include{tikz/trajectory_tracking_acc}
\label{fig:traj_tracking_acc}
\end{subfigure}
\end{flushleft}
\end{minipage}
\vspace{-8mm}
\caption{Position error $\epsilon_p$, total velocity $v_{tot}$ and total acceleration $a_{tot}$ (with limits) for tracking the reference trajectory shown in Figure \ref{fig:traj_tracking}.}
\vspace{-3mm}
\label{fig:err_and_acc}
\end{figure}
\section{Trajectory Generation} \label{Sec:TrajectoryGeneration}
The state-of-the-art approach to determine time-optimal trajectories from an arbitrary initial state to an arbitrary final state with constraints on system state and control input is based on the decoupling of axes (see \cite{Beul.2017, Mueller2.2013, Hehn.2015}). This means that, in a first step, all spatial coordinate axes $i \in \left\{1,..., n\right\}$ of the trajectory in the $n$-dimensional space are treated separately. Pontryagin's minimum principle is applied for each axis, yielding a bang-zero-bang control input pattern. For the $i$-th axis with bounded acceleration as control input and a maximum allowed velocity, the resulting bang-zero-bang acceleration pattern $(a_1, a_2, a_3)$, which defines the acceleration profile
\begin{equation*}
a_i(t) = \begin{cases}
a_1, & 0 \leq t < t_{1, i} \\
a_2, & t_{1, i} \leq t < t_{1, i}+t_{2, i} \\
a_3, & t_{1, i}+t_{2, i} \leq t \leq t_{1, i}+t_{2, i}+t_{3, i} = t_e,\\
\end{cases}
\end{equation*}
is given by $(+a, 0, -a)$ with either $a = -a_{max}$ or $a = +a_{max}$ and $a_{max}$ representing the maximum allowed acceleration. The durations of each time segment of constant acceleration are described by $t_{1, i},t_{2, i}$ and $t_{3, i}$. In case the velocity limit is not reached, $t_{2, i} = 0$ holds. Consequently, the time-optimal trajectory consists of two time segments of constant maximum acceleration with opposite sign and, if the velocity limit is reached, one segment of no acceleration. The time-optimal trajectory duration $T_{opt,i}$ can be calculated analytically (e.g. see \cite{Meyer.2021}). Next, the state-of-the-art postulates that the overall duration $T_{sync}$ is defined as the largest of the time-optimal durations $T_{opt,i}$ over all axes, i.e.
\begin{equation}
T_{sync} = \max_{i\in \left\{1, ...,n\right\}}\lbrace T_{opt, i} \rbrace. \label{eq:best_traj_time}
\end{equation}
However, this procedure sometimes results in unexpected behavior since not all axes can be synchronized with the duration $T_{sync}$. The reason is that this approach does not consider the inertia of the movement properly, and therefore, sometimes results in overshooting the desired final state. This effect has not been reported in the literature so far. Next, we give an example where this approach is invalid.
Figure \ref{fig:insync} illustrates the trajectory generation in two dimensions $x$ and $y$ based on the state-of-the-art. For the $x$ axis the initial state is $p_{x, s} = 0$\,m, $v_{x,s} = 0\,\frac{\text{m}}{\text{s}}$, where $p_{x, s}$ describes the initial position and $v_{x,s}$ the initial velocity. The desired end state for the $x$ axis is $p_{x, e} = 5$\,m, $v_{x,e}=2\,\frac{\text{m}}{\text{s}}$. The time-optimal duration for a maximum allowed acceleration $a \in \left[-0.5, 0.5\right]\,(\frac{\text{m}}{\text{s}^2})$ and velocity $v \in \left[-2, 2\right]\,(\frac{\text{m}}{\text{s}})$ can be calculated as described in \cite{Meyer.2021} and is $T_{opt, x} = 4.5$\,s. For the $y$ axis the initial and end state are $p_{y, s} = 0$\,m, $v_{y,s} = 2\,\frac{\text{m}}{\text{s}}$, $p_{y, e} = 5$\,m, $v_{y,e}=2\,\frac{\text{m}}{\text{s}}$. Here, the calculation of the time-optimal duration yields $T_{opt, y} = 2.5$\,s. According to Equation \eqref{eq:best_traj_time}, both axes must be synchronized at $T_{sync} = 4.5$\,s. By utilizing MPC for trajectory synchronization with a fixed duration, it can be seen that the resulting two dimensional trajectory (blue dots in Figure \ref{fig:insync}) misses the required end state by far. This behavior is due to the inertia of the system. The high initial velocity along the $y$ axis in combination with an insufficient acceleration power leads to overshooting the end position $p_{y, e}$. As a result, although the $y$ axis has the potentially faster execution, it cannot be synchronized with the slower time-optimal duration of the $x$ axis.
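The per-axis optimal durations used above can be reproduced with a small helper. The sketch below assumes the pattern $(+a_{max}, 0, -a_{max})$ applies, i.e. a non-negative displacement with start and end velocities in $[0, v_{max}]$, as in the example; a full implementation would also try the mirrored pattern $(-a_{max}, 0, +a_{max})$.

```python
import math

def t_opt_axis(p_s, p_e, v_s, v_e, v_max, a_max):
    """Time-optimal duration of one axis for the pattern (+a_max, 0, -a_max).

    Sketch only: assumes p_e >= p_s and 0 <= v_s, v_e <= v_max so that
    accelerating first is optimal."""
    d = p_e - p_s
    # Peak velocity of the triangular (no-cruise) profile ...
    v_peak = math.sqrt(a_max * d + (v_s**2 + v_e**2) / 2)
    v_c = min(v_peak, v_max)                 # ... clamped at the limit
    t1 = (v_c - v_s) / a_max                 # accelerate v_s -> v_c
    t3 = (v_c - v_e) / a_max                 # decelerate v_c -> v_e
    d_cruise = d - (v_c**2 - v_s**2) / (2 * a_max) \
                 - (v_c**2 - v_e**2) / (2 * a_max)
    t2 = d_cruise / v_c                      # cruise (0 if triangular)
    return t1 + t2 + t3
```

For the example values above, this reproduces $T_{opt,x} = 4.5$\,s and $T_{opt,y} = 2.5$\,s.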
\begin{figure}[tb]
\centering
\include{tikz/insync}
\vspace{-8mm}
\caption{Example of an initial and end state that cannot be synchronized.}
\vspace{-3mm}
\label{fig:insync}
\end{figure}
\begin{figure}
\centering
\include{tikz/insync2}
\vspace{-8mm}
\caption{Range of valid trajectory durations shown in green. The velocity profiles that lead to the limits of the corresponding range are given by colored lines. The associated areas underneath these profiles are highlighted in the respective colors.}
\vspace{-3mm}
\label{fig:insync2}
\end{figure}
For the above scenario, Figure \ref{fig:insync2} shows feasible velocity profiles for different synchronization times $T_{sync}$ for the $y$ axis with respect to the desired end state. The property of a feasible trajectory is that the desired start and end velocity must be achieved, and the integral underneath the velocity profile must equal $p_{y,e} - p_{y,s}$. The velocity profile $v_y(t)$ of the time-optimal trajectory is shown by a solid black line. Since its initial velocity already equals the maximum allowed value, it remains constant and $T_{y} = T_{opt,y} = 2.5$\,s results. If it is required to increase the trajectory duration, i.e. $T_{y} > 2.5$\,s, the UAV first decelerates the motion and accelerates afterwards to meet the required final velocity $v_{y, e} = 2\,\frac{\text{m}}{\text{s}}$. However, this procedure works only as long as the integral underneath the velocity profile equals $p_{y,e} - p_{y,s}$. The slowest velocity profile that does not violate this condition corresponds to $T_{y} \approx 3.1$\,s (see blue line). Further increasing the required trajectory duration leads to overshooting until $T_{y} \geq 12.9$\,s (see red line). From this duration on it is possible to compensate for overshooting by flying a turn. In total, the range of feasible trajectory durations with respect to the requirements on the $y$ axis is given in green. As can be seen, a synchronization with $T_{sync} = T_{opt, x} = 4.5$\,s is not possible.
In the following, we show our general approach to design time-optimal trajectories for multiple axes. First, we present acceleration patterns needed for our approach and discuss how we can check whether these patterns can be synchronized with a particular trajectory duration $T_{sync}$. Second, we describe a general procedure to obtain the time-optimal duration.
\subsection{Synchronization Feasibility}
In this subsection, we only focus on the feasibility of synchronizing a single axis with a particular $T_{sync}$, and therefore the subscript $i$ will be discarded. To check feasibility, we first define the considered acceleration patterns. On the one hand, we consider the acceleration patterns for time-optimality in a single axis given by $(+a, 0, -a)$ with $a \in \lbrace -a_{max}, a_{max}\rbrace$. These patterns are from now on called classical patterns. However, we further consider the patterns defined by $(+a, 0, +a)$ with $a \in \lbrace -a_{max}, a_{max}\rbrace$, which we denote as synchronization patterns. The reason for the latter is that classical patterns aim at finding the time-optimal behavior; however, they are not sufficient in some cases of synchronization with large synchronization times. We give the following example for illustration (see Figure \ref{fig:insync3}): It is assumed that an axis with $p_s = 0$\,m, $p_e = 1.75$\,m and $v_s = 0\,\frac{\text{m}}{\text{s}}$, $v_e = 0.5\,\frac{\text{m}}{\text{s}}$ has to be synchronized with a large enough duration $T_{sync}$. Here, $p_s$, $p_e$, $v_s$ and $v_e$ describe initial and end position as well as velocity.
The pattern yielding time-optimality is given by $(+a_{max}, 0, -a_{max})$, with the black line giving the overall time-optimal velocity profile. However, this pattern is only applicable as long as $T_{sync} \leq 4$\,s. When $T_{sync}$ is increased, the segment of deceleration at the end of the velocity profile shortens in time (blue line) until it becomes zero for the synchronization time $T_{sync} = 4$\,s (red line). If it is required to synchronize the axis with $T_{sync} > 4$\,s using the classical pattern $(+a_{max}, 0, -a_{max})$, no solution can be found anymore since it is not possible to meet the desired final velocity and the area underneath the velocity profile at the same time. In this case, the synchronization patterns $(+a, 0, +a)$, $a \in \lbrace -a_{max}, a_{max}\rbrace$ with two phases of acceleration pointing in the same direction come into play (green line). With such a pattern it is possible to synchronize the axis with $T_{sync} > 4$\,s. We define these patterns as synchronization patterns because they are only needed for axis synchronization.
\begin{figure}
\centering
\include{tikz/insync3}
\vspace{-8mm}
\caption{Example of velocity profiles for different acceleration patterns and trajectory durations. }
\vspace{-3mm}
\label{fig:insync3}
\end{figure}
To guarantee synchronization feasibility of one axis with the trajectory time $T_{sync}$, one has to find a pattern from the set of classical and synchronization patterns where $t_1, t_2, t_3 \geq 0$\,s and $v_{min} \leq v(t) \leq v_{max}$ for all $t\in \left[0, T_{sync}\right]$. We define these inequalities as synchronization conditions. Note that, if the initial and end velocities $v_s$ and $v_e$ are within the velocity bounds, it is sufficient to show that the constant velocity $v_c=v(t_1)$ in the segment of no acceleration is within the velocity bounds to guarantee that $v_{min} \leq v(t) \leq v_{max}$. The following subsections present how the values $t_1, t_2, t_3, v_c$ are determined for classical and synchronization patterns.
\subsubsection{Classical Patterns} \label{sec:optimality_pattern}
The classical patterns are defined by the acceleration profile
\begin{equation}
a(t) = \begin{cases}
+a, & 0 \leq t < t_1 \\
0, & t_1 \leq t < t_1+t_2\\
-a, & t_1+t_2 \leq t \leq t_1+t_2+t_3
\end{cases} \label{eq:acc_profile_opt}
\end{equation}
with $a \in \lbrace a_{max}, -a_{max}\rbrace$. Based on this acceleration profile, the velocity profile results in
\begin{equation}
v(t) = \begin{cases}
v_s + a t, & 0 \leq t < t_1 \\
v_s + a t_1, & t_1 \leq t < t_1+t_2 \\
v_s + 2a t_1 - a(t - t_2), & t_1+t_2 \leq t \leq t_1+t_2+t_3.
\end{cases}\label{eq:vel_profile_opt}
\end{equation}
In order to meet the required velocity $v_e$ at time $t_1 + t_2 + t_3$ the following equation has to hold:
\begin{align}\label{eq:vel_constraint_opt}
v_e - v_s &= \int_{0}^{t_1 + t_2 + t_3}a(t)\text{d}t \nonumber\\
&= at_1 - at_3
\end{align}
To meet the required position $p_e$ at time $t_1 + t_2 + t_3$ the equation
\begin{align}\label{eq:pos_constraint_opt}
p_e - p_s &= \int_{0}^{t_1 + t_2 + t_3}v(t)\text{d}t \nonumber\\
&=v_st_1+\frac{1}{2}at_1^2+(v_s+at_1)t_2+(v_s+at_1)t_3-\frac{1}{2}at_3^2
\end{align}
has to hold as well.
Additionally, the equation
\begin{equation}\label{eq:t_constraint_opt}
t_1 + t_2 + t_3 = T_{sync}
\end{equation}
has to hold to guarantee time synchronization. Further, since the velocity $v_c$ for $t \in \left[t_1, t_2\right]$ is constant, the equation
\begin{equation}\label{eq:vel_limit}
v_c = at_1 +v_s
\end{equation}
applies as well.
Equations \eqref{eq:vel_constraint_opt}, \eqref{eq:pos_constraint_opt}, \eqref{eq:t_constraint_opt} and \eqref{eq:vel_limit} form a system of equations with variables $t_1, t_2, t_3, v_c$ whose solution is given by
\begin{align}
t_1 &= \frac{aT_{sync}+v_e-v_s \pm\sqrt{A}}{2a} \label{eq:t_1_opt}\\
t_2 &= \mp\frac{\sqrt{A}}{a}\label{eq:t_2_opt}\\
t_3 &= \frac{aT_{sync}-v_e+v_s \pm\sqrt{A}}{2a}\label{eq:t_3_opt}\\
v_c &=\frac{aT_{sync} + v_s + v_e \pm\sqrt{A}}{2} \label{eq:v_c}
\end{align}
with
\begin{equation}
A= a^2T_{sync}^2+2(v_e+v_s)aT_{sync}-4a(p_e-p_s)-(v_e-v_s)^2.
\end{equation}
Equations \eqref{eq:t_1_opt}, \eqref{eq:t_2_opt}, \eqref{eq:t_3_opt} and \eqref{eq:v_c} only depend on the trajectory duration $T_{sync}$. Hence, it is sufficient to insert $T_{sync}$ into these equations and verify that $t_1(T_{sync}), t_2(T_{sync}), t_3(T_{sync}) \geq 0$ and $v_{min}\leq v_c(T_{sync}) \leq v_{max}$ to show synchronization feasibility for the respective classical pattern.
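This solution can be checked numerically. The following sketch evaluates both sign choices directly from the defining equations (computing $v_c$ as $v_s + a t_1$) and reproduces, for the $x$-axis example above at $T_{sync} = T_{opt,x} = 4.5$\,s, the time-optimal profile $t_1 = 4$\,s, $t_2 = 0.5$\,s, $t_3 = 0$\,s, $v_c = 2\,\frac{\text{m}}{\text{s}}$.

```python
import math

def classical_pattern(p_s, p_e, v_s, v_e, a, T_sync):
    """Both roots (t1, t2, t3, v_c) of the classical pattern (+a, 0, -a)
    synchronized to T_sync; a root is feasible when t1, t2, t3 >= 0 and
    v_c respects the velocity bounds."""
    A = (a * T_sync)**2 + 2 * (v_e + v_s) * a * T_sync \
        - 4 * a * (p_e - p_s) - (v_e - v_s)**2
    if A < 0:
        return []                        # no real solution for this T_sync
    roots = []
    for s in (1.0, -1.0):                # the two sign choices (+/-)
        t1 = (a * T_sync + v_e - v_s + s * math.sqrt(A)) / (2 * a)
        t2 = -s * math.sqrt(A) / a
        t3 = (a * T_sync - v_e + v_s + s * math.sqrt(A)) / (2 * a)
        roots.append((t1, t2, t3, v_s + a * t1))
    return roots
```

Only the root with $t_1, t_2, t_3 \geq 0$ and $v_c$ within the velocity bounds is retained in practice.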
\subsubsection{Synchronization Patterns}
Synchronization feasibility for the synchronization patterns is checked analogously. The only difference is in the applied acceleration pattern
\begin{equation}
a(t) = \begin{cases}
+a, & 0 \leq t < t_1 \\
0, & t_1 \leq t < t_1+t_2\\
+a, & t_1+t_2 \leq t \leq t_1+t_2+t_3
\end{cases} \label{eq:acc_profile_sync}
\end{equation}
with $a \in \lbrace a_{max}, -a_{max}\rbrace$, which results in the following velocity profile
\begin{equation}
v(t) = \begin{cases}
v_s + a t, & 0 \leq t < t_1 \\
v_s + a t_1, & t_1 \leq t < t_1+t_2 \\
v_s + a(t - t_2), & t_1+t_2 \leq t \leq t_1+t_2+t_3.
\end{cases}\label{eq:vel_profile_sync}
\end{equation}
Analogously, this leads to a system of four equations and four variables whose solution is given by
\begin{align}
t_1 &= \frac{(-2v_sT_{sync} + 2(p_e-p_s))a - (v_e-v_s)^2}{2a(T_{sync}a-v_e+v_s)}\label{eq:t_1_sync}\\
t_2 &= \frac{aT_{sync}-v_e+v_s}{a}\label{eq:t_2_sync}\\
t_3 &= \frac{(2v_eT_{sync} - 2(p_e-p_s))a - (v_e-v_s)^2}{2a(T_{sync}a-v_e+v_s)}\label{eq:t_3_sync}\\
v_c &= \frac{2a(p_e - p_s) - v_e^2 + v_s^2}{2(aT_{sync} - v_e + v_s)} \label{eq:vc_sync}
\end{align}
Again, equations \eqref{eq:t_1_sync}, \eqref{eq:t_2_sync}, \eqref{eq:t_3_sync} and \eqref{eq:vc_sync} only depend on $T_{sync}$ and the synchronization conditions can easily be checked via insertion.
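A numerical check of the synchronization pattern, derived directly from the defining equations (with $t_3$ obtained from $v_e - v_s = a(t_1 + t_3)$): for an axis with $p_s = 0$\,m, $p_e = 1.75$\,m, $v_s = 0\,\frac{\text{m}}{\text{s}}$, $v_e = 0.5\,\frac{\text{m}}{\text{s}}$, $a = 0.5\,\frac{\text{m}}{\text{s}^2}$ and $T_{sync} = 5$\,s, as in the example of Figure \ref{fig:insync3}, this yields $t_1 = 0.75$\,s, $t_2 = 4$\,s, $t_3 = 0.25$\,s and $v_c = 0.375\,\frac{\text{m}}{\text{s}}$.

```python
def sync_pattern(p_s, p_e, v_s, v_e, a, T_sync):
    """(t1, t2, t3, v_c) of the synchronization pattern (+a, 0, +a) at
    duration T_sync; feasible when t1, t2, t3 >= 0 and v_c is within
    the velocity bounds."""
    den = 2 * a * (a * T_sync - v_e + v_s)
    t1 = ((2 * (p_e - p_s) - 2 * v_s * T_sync) * a - (v_e - v_s)**2) / den
    t2 = (a * T_sync - v_e + v_s) / a
    t3 = (v_e - v_s) / a - t1            # from v_e - v_s = a (t1 + t3)
    return t1, t2, t3, v_s + a * t1
```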
\subsection{Optimal Synchronization Time}
In this subsection, we show how to find the time-optimal duration for multiple axes. The set of valid synchronization times $\Omega$ is continuous and constrained by the synchronization conditions. To find the time-optimal duration for multiple axes, one has to investigate the boundary of $\Omega$. This is where $t_1, t_2, t_3 \geq 0$ as well as $v_{min} \leq v_c \leq v_{max}$ hold and at least one of these inequalities is fulfilled with equality. Hence, each $T_{sync}$ that fulfills equality for any of the synchronization conditions is a potential candidate for the time-optimal duration for multiple axes. To determine the best synchronization time for multiple axes, all candidates $T_{sync}\geq \max_{i\in \left\{1, ...,n\right\}}\lbrace T_{opt, i} \rbrace$, with $T_{opt, i}$ for the $i$-th axis calculated as described in \cite{Meyer.2021}, are inserted into the synchronization conditions of each axis and pattern and checked for feasibility. The lowest trajectory synchronization time that yields feasibility for at least one pattern in each axis is defined as the optimal synchronization time $T^\ast$. With the associated values $t_1, t_2, t_3$ for the corresponding patterns it is possible to reconstruct the trajectory.
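The candidate search can also be sketched as a simple numeric scan. This is an illustrative approximation only: the exact method enumerates the equality candidates analytically, whereas the sketch below steps through durations on a grid, checking all pattern solutions per axis. For the two-axis example of this section (the $x$ axis from rest to $2\,\frac{\text{m}}{\text{s}}$ and the $y$ axis at $2\,\frac{\text{m}}{\text{s}}$, each over $5$\,m) it recovers the smallest jointly feasible duration $T^\ast \approx 12.9$\,s.

```python
import math

def candidates(p_s, p_e, v_s, v_e, a_max, T):
    """(t1, t2, t3, v_c) tuples of the classical (+a, 0, -a) and
    synchronization (+a, 0, +a) patterns at duration T, for a = +/-a_max."""
    out = []
    for a in (a_max, -a_max):
        A = (a*T)**2 + 2*(v_e + v_s)*a*T - 4*a*(p_e - p_s) - (v_e - v_s)**2
        if A >= 0:                                   # classical, both roots
            for s in (1.0, -1.0):
                t1 = (a*T + v_e - v_s + s*math.sqrt(A)) / (2*a)
                t2 = -s * math.sqrt(A) / a
                out.append((t1, t2, T - t1 - t2, v_s + a*t1))
        den = 2*a*(a*T - v_e + v_s)
        if abs(den) > 1e-12:                         # synchronization
            t1 = ((2*(p_e - p_s) - 2*v_s*T)*a - (v_e - v_s)**2) / den
            t2 = (a*T - v_e + v_s) / a
            out.append((t1, t2, T - t1 - t2, v_s + a*t1))
    return out

def axis_feasible(axis, v_max, a_max, T, eps=1e-9):
    return any(t1 >= -eps and t2 >= -eps and t3 >= -eps
               and -v_max - eps <= v_c <= v_max + eps
               for t1, t2, t3, v_c in candidates(*axis, a_max, T))

def t_star(axes, v_max, a_max, t_lo, step=1e-3, t_hi=50.0):
    """Smallest scanned duration that every axis can realize."""
    T = t_lo
    while T <= t_hi:
        if all(axis_feasible(ax, v_max, a_max, T) for ax in axes):
            return T
        T += step
    return None
```

The grid resolution trades accuracy for runtime; the analytic candidate enumeration described above avoids this discretization entirely.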
\subsection*{The Mandelbrot martingales}
To get a solution to Equation~\eqref{equation} one way is to use the
Mandelbrot construction~\cite{M1}. We set
\begin{equation*}
Y_n = b^{-n}\sum_{1\le j_1\le N} W_{j_1}
\sum_{\substack{1\le j_i\le N_{j_1,j_2\dots,j_{i-1}}\\ \text{for }1\le
i \le n}} W_{j_1,j_2}W_{j_1,j_2,j_3}\dots W_{j_1,j_2,\dots,j_n},
\end{equation*}
where all the variables in the right hand side are independent, the
$W_{j_1,j_2\cdots j_n}$ are distributed according to~$\mu$, and~$N$
and the variables $N_{j_1,j_2\cdots j_n}$ are equidistributed.
One has
\begin{multline*}
Y_{n+1} = b^{-n}\sum_{1\le j_1\le N} W_{j_1} \times\\
\sum_{\substack{1\le j_i\le N_{j_1,j_2\dots,j_{i-1}}\\ \text{for
}1\le i \le n}} W_{j_1,j_2}W_{j_1,j_2,j_3}\dots
W_{j_1,j_2,\dots,j_n}\ b^{-1}\hspace{-1.5em}\sum_{1\le j_{n+1}\le
N_{n+1}} W_{j_1,j_2,\dots,j_{n+1}}.
\end{multline*}
Therefore $(Y_n)_{n\ge 1}$ is a martingale.
We also have
\begin{equation}
Y_{n+1} =b^{-1} \sum_{1\le j_1\le N_1} W_{j_1}Y_{n}(j_1) \label{recur}
\end{equation}
where all the variables are independent and the $Y_n(j)$ are
equidistributed with~$Y_n$.
\noindent So,
$\esp Y_{n+1}^2 = b^{-2}\esp W^2 \esp Y_n^2 \esp N + b^{-2} \esp
N(N-1)$, i.e.,\\[3pt]
$ \esp Y_{n+1}^2 = b^{-1}\esp W^2 \esp Y_n^2 + b^{-2} \esp N(N-1)$.
We therefore see that if $\esp W^2< b$ the martingale $(Y_n)$ is
bounded in~$L^2$. Then it has a limit $Y$, almost surely and in $L^2$,
$\esp Y =1$, and
$$\esp Y^2 = \displaystyle \frac{\esp N(N-1)}{b(b-\esp W^2)}.$$
Due to~\eqref{recur} we see that~$Y$ is a solution to
Equation~\eqref{equation}.\medskip
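The $L^2$ bound can be checked numerically by iterating the second-moment recursion $\esp Y_{n+1}^2 = b^{-1}\esp W^2 \esp Y_n^2 + b^{-2}\esp N(N-1)$. The sketch below (an illustration, not part of the argument) takes the concrete choice $N\equiv 2$ (so $b=\esp N=2$ and $\esp N(N-1)=2$) and $W$ uniform on $[0,2]$ (so $\esp W=1$ and $\esp W^2=4/3<b$).

```python
# Iterate  m_{n+1} = (EW2/b) m_n + ENN1/b^2  for N = 2 a.s. and W ~ Uniform[0, 2].
b, EN, EW2, ENN1 = 2.0, 2.0, 4.0 / 3.0, 2.0

def second_moment(n_iter):
    # E Y_1^2 = b^{-2} (EN * EW2 + EN(N-1)), since Y_1 = b^{-1} sum_{j<=N} W_j.
    m = (EN * EW2 + ENN1) / b**2
    for _ in range(n_iter):
        m = (EW2 / b) * m + ENN1 / b**2
    return m

# Closed form of the limit:  E Y^2 = EN(N-1) / (b (b - EW^2)).
limit = ENN1 / (b * (b - EW2))
```

Since $\esp W^2/b = 2/3 < 1$, the iteration contracts to the closed-form limit.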
B.~Mandelbrot~\cite{M1,M2} introduced this construction (with
constant~$N$) to give a simplified statistical description of the
dissipation of energy in a turbulent flow. Since then this model and
Equation~\eqref{equation} have been extensively studied and
generalized (for instance~\cite{KP,DL,G,L}). See~\cite{BFP,BP} for
a survey.\medskip
As a matter of fact, the necessary and sufficient condition for the
uniform integrability of the martingale~$Y_n$ is
$\esp W\log W< \log b$ (see~\cite{KP} when~$N$ is constant and
\cite{L} when~$N$ is not constant).\medskip
Let $\alpha = \proba(W\ne 0)$ and $\beta = \proba(Y=0)$. It results
from Equation~\eqref{equation} that
$\beta = \varphi(\alpha\beta + 1-\alpha)$. Due to convexity, the
function $t\mapsto \varphi(\alpha t+1-\alpha)$ has a fixed point less
than~1 if and only if its derivative at~1 is larger than~1, which
means $\alpha b>1$. Due to hypotheses $\esp W = 1$ and $\esp W^2< b$
this condition is fulfilled.
\subsection*{Laplace transform}
For $t\ge 0$ set $g(t) = \esp \mathrm{e}^{-tY}$. Then
\begin{equation}\label{init}
g(0)=-g'(0)=1.
\end{equation}
From~\eqref{equation} we get
\begin{eqnarray}
&&\hspace{-6em} \esp\bigl( \mathrm{e}^{-tY} \mid N,\ (W_j)_{j\ge 1}\bigr) = \prod_{1\le
j\le N} g\bigl( tW_j/b\bigr) \nonumber\\
&&\hspace{-6em} \esp\bigl( \mathrm{e}^{-tY} \mid N\bigr) = \left( \int g\bigl(
tx/b\bigr)\,{\mathsf d} \mu(x)\right)^N\nonumber\\
&&\hspace{-6em} g(t) = \varphi\Bigl(\int g\bigl(
tx/b\bigr)\,{\mathsf d} \mu(x)\Bigr). \label{eqLaplace}
\end{eqnarray}
Let $\displaystyle u(t) = \int g\bigl( tx/b\bigr)\,{\mathsf d} \mu(x)$. So,
$g(t) = \varphi\bigl(u(t)\bigr)$.
Relations~\eqref{init} become
\begin{equation}\label{init2}
u(0) = 1 \text{\quad and\quad } u'(0) = -\frac{1}{b}.
\end{equation}
Also $g$ and $u$ are decreasing,
$\displaystyle \lim_{t\to +\infty} g(t) = \proba(Y=0)= \beta$,
$ \displaystyle \lim_{t\to +\infty} u(t) = \varphi^{-1}(\beta) =
\alpha\beta+1-\alpha$, and $g(0)=u(0)= 1$.
\subsection*{A particular case}
Now we consider the particular case when
$$
{\mathsf d} \mu(x) = (1-\alpha )\delta({\mathsf d} x) + \alpha (1-\gamma)
b^{\gamma-1}x^{-\gamma}\mathbf{1}_{(0,b)}(x)\,{\mathsf d} x,
$$
with $\gamma<1$, $0<\alpha \le 1$, and where~$\delta$ stands for the
unit Dirac mass at~0. If $W$ is distributed according to~$\mu$, the
condition $\esp W = 1$, means $\gamma = 1-\frac1{\alpha b-1}$ and
$\esp W^2 < b$ means $\gamma>1-\frac2{\alpha b-1}$. So the only
constraints on the parameters are
$$\frac1b< \alpha\le 1 \ \text{ and}\quad \gamma = 1-\frac{1}{\alpha b-1}.
$$
\medskip
We have
$\displaystyle u(t) = 1-\alpha + \alpha (1-\gamma)b^{\gamma-1}\int_0^b
g\bigl( tx/b\bigr)x^{-\gamma}{\mathsf d} x$.\\[3pt]
Then
\begin{eqnarray*}
u'(t) &=& \alpha (1-\gamma)b^{\gamma-1} \int_0^b g'(tx/b)b^{-1}x^{1-\gamma}{\mathsf d}
x\\
&=& \frac{\alpha (1-\gamma)}{t}g(t)-\frac{\alpha(1-\gamma)^2b^{\gamma-1}}{t} \int_0^b
g(tx/b)x^{-\gamma} {\mathsf d} x\\
&=& \frac{\alpha (1-\gamma)}{t}g(t) - \frac{1-\gamma}{t}\bigl(u(t)-(1-\alpha)\bigr).
\end{eqnarray*}
We see that $u$ satisfies the following differential equation
\begin{equation}\label{ode}
(\alpha b-1)u'(t) = \frac{1}{t}\Bigl( \alpha\varphi\bigl(u(t)\bigr)-u(t) + 1 -\alpha\Bigr).
\end{equation}
Let
$\displaystyle \omega(x) = \frac{\alpha b-1}{\alpha \varphi(x)-x + 1 -
\alpha}+\frac{1}{1-x}$. As
$\varphi(x) = 1 + b(x-1) + \mathrm{O}\bigl((x-1)^2\bigr)$, $\omega$ is
bounded in a neighborhood of~1. Indeed, by continuity, we have
$\displaystyle \omega(1) = -\frac{\alpha\,\esp N(N-1)}{2(\alpha b
-1)}$. Then~\eqref{ode} rewrites as
\begin{equation}
\left(\omega(u)-\frac{1}{1-u}\right){\mathsf d} u = \frac{{\mathsf d} t}{t}.
\end{equation}
Let $\displaystyle \Omega(x) = \exp \int_1^x
\omega(\tau)\,{\mathsf d}\tau$. The function
$x \mapsto \alpha \varphi(x)-x + 1 - \alpha$ is convex and vanishes for
$x=1$ and $x=\varphi^{-1}(\beta)$; so it is negative on the interval
$\bigl(\varphi^{-1} (\beta),1\bigr)$. This means that, on this
interval, the derivative of $(1-x)\Omega(x)$ is negative.
It follows that there is a constant~$c$ such that, for $t\ge 0$,
$u(t)$ is the unique solution to equation
$$\bigl(1-u(t)\bigr)\Omega\bigl(u(t)\bigr) = ct, \text{\quad
with\quad} u(0)=1,$$
in the interval~$\bigl(\varphi^{-1} (\beta),1\bigr)$.
By taking into account the initial conditions~\eqref{init2} we see that
$c=1/b$. Finally, $u$ is implicitly defined by
$$ b(1-u)\Omega(u)=t.$$
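As a numerical illustration (not part of the derivation), $u$ can be recovered from the implicit equation $b(1-u)\Omega(u)=t$ by bisection. Take for concreteness $\varphi(x)=x^2$ and $\alpha=1$, so $b=2$ and $\beta=0$; then $\omega(x) = \frac{1}{x^2-x}+\frac{1}{1-x}$ simplifies to $-1/x$, and the interval of interest is $(0,1)$. The sketch evaluates $\Omega$ by quadrature, mimicking the generic construction.

```python
import math

# Particular case phi(x) = x^2, alpha = 1 (so b = 2, beta = 0):
# omega(x) = 1/(x^2 - x) + 1/(1 - x) = -1/x.
def omega(x):
    return -1.0 / x

def Omega(x, steps=2000):
    # Composite Simpson rule for exp( int_1^x omega(tau) dtau ).
    h = (x - 1.0) / steps
    s = omega(1.0) + omega(x)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * omega(1.0 + i * h)
    return math.exp(s * h / 3.0)

def u_of_t(t, b=2.0):
    # Bisection on b*(1 - u)*Omega(u) = t; the left side decreases in u on (0, 1).
    lo, hi = 1e-9, 1.0 - 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if b * (1.0 - mid) * Omega(mid) > t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The result agrees with the closed form $u(t) = (1+t/2)^{-1}$ obtained in Example~1 below with $n=1$.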
\subsection*{Examples}
We give six examples of computations. The fourth one is interesting
because it shows that the mapping $(N,\mu)\longmapsto \nu$ is not
one-to-one.
\subsubsection*{Example 1}
We take $\varphi(x) = x^{n+1}$ (where ~$n\ge 1$
is an integer) and $\alpha=1$. Then $b=n+1$,
$\gamma=1-\frac1n$, $\beta=0$, and
\begin{eqnarray*}
&&(1-x)\Omega(x) = \frac{1-x^n}{nx^n},\\
&&u(t) = \Bigl(1+\frac{nt}{n+1}\Bigr)^{-1/n},\\
&&g(t) =
\Bigl(1+\frac{nt}{n+1}\Bigr)^{-(n+1)/n}.
\end{eqnarray*}
This means that in this case the variable $Y$ follows the
$\displaystyle\Gamma\biggl( \frac{n+1}{n},\frac{n+1}{n}\biggr)$
distribution, i.e.,
$$\nu({\mathsf d} s) = \Gamma \left(\frac{n+1}{n}\right)^{-1} \left(\frac{n+1}{n}\right)^{\frac{n+1}{n}}s^\frac1n \exp {\displaystyle -\frac{(n+1)s}{n}}\ {\mathsf d} s. $$
\medskip
This situation has been independently studied by G.~Letac and the
author. It is mentioned in~\cite{G} (page~264), but up to now
\cite{M3} (pp.~387--388) seems to be the only written trace of this formula.
\medskip
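The identification of~$\nu$ with the Gamma distribution can be checked numerically (a sanity check, not part of the text): for $n=1$, $g(t)=(1+t/2)^{-2}$ should equal the Laplace transform of the $\Gamma(2,2)$ density $\nu({\mathsf d} s) = 4s\,\mathrm{e}^{-2s}\,{\mathsf d} s$, which the sketch below evaluates by direct quadrature.

```python
import math

def g(t, n=1):
    # g(t) = (1 + n t / (n+1))^(-(n+1)/n)
    return (1.0 + n * t / (n + 1.0)) ** (-(n + 1.0) / n)

def laplace_gamma(t, k=2.0, rate=2.0, upper=40.0, steps=200_000):
    # Trapezoid rule for int_0^upper rate^k/Gamma(k) s^(k-1) e^(-(rate+t) s) ds;
    # the tail beyond `upper` is negligible for these parameters.
    h = upper / steps
    c = rate ** k / math.gamma(k)
    f = lambda s: c * s ** (k - 1.0) * math.exp(-(rate + t) * s)
    total = 0.5 * (f(0.0) + f(upper))
    for i in range(1, steps):
        total += f(i * h)
    return total * h
```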
\subsubsection*{Example 2}
This time $\varphi(x) = 1-\rho+\rho x^2$ with $0< \rho\le 1$.\\
Then $b=2\rho$ (therefore $\alpha>\frac1{2\rho}$), $\beta = 1-\frac{2\alpha\rho-1}{\rho\alpha^2}$, $\gamma = 1- \frac{1}{2 \alpha \rho -1}$,\\
$\omega(x) = -\frac{\alpha \rho}{\alpha \rho x +\alpha \rho -1}$,
\begin{eqnarray*}
\Omega(x) &=& \frac{2\alpha\rho-1}{\alpha\rho x+\alpha-1},\\
u(t) &=& \frac{1-\alpha\rho}{\alpha\rho} + \frac{2(2\alpha\rho-1)^2}{ \alpha\rho(4 \alpha\rho +\alpha t-2)},\\
g(t) &=& \frac{\alpha^2\rho-2\alpha\rho+1}{\alpha^2\rho} + \frac{4(1-\alpha\rho)(2\alpha\rho-1)^2}{\rho\alpha^2(4\alpha\rho+\alpha t-2)}+\frac{4(2 \alpha\rho-1)^{4}}{\rho\alpha^2(4\alpha\rho+\alpha t-2)^2},
\end{eqnarray*}
and
\begin{multline*}
\nu({\mathsf d} s) =
\frac{(\alpha^2\rho-2\alpha\rho+1)}{\rho\alpha^2}\,\delta({\mathsf d} s)\\
+\frac{4(2\alpha\rho-1)^2\bigl((2\alpha\rho-1)^2s+\alpha(1-\alpha\rho)\bigr)}{\rho
\,\alpha^{4}}\,{\mathrm e}^{-4 s \rho +\frac{2 s}{\alpha}}\,{\mathsf d} s.
\end{multline*}\medskip
\subsubsection*{Example 3}
{$\varphi = (1-\rho) x+\rho x^{n+1}$, ($0< \rho\le 1$)
and $\alpha=1$}.
Then $b=\rho n+1$, $\gamma = (\rho n-1)/\rho n$,
$(1-x)\Omega(x) = (x^{-n}-1)/n,$
\begin{eqnarray*}
u(t) &=& \Bigl(1+\frac{nt}{\rho n+1}\Bigr)^{-1/n},\text{ and}\\
g(t) &=&(1-\rho)\Bigl(1+\frac{nt}{\rho n+1}\Bigr)^{-1/n}
+\rho\Bigl(1+\frac{nt}{\rho n+1}\Bigr)^{-(n+1)/n}.
\end{eqnarray*}
Finally $\nu$ is a barycenter of Gamma distributions.\medskip
\subsubsection*{Example 4}
This time $\varphi(x) = \displaystyle \frac{1-p}{1-px}$, with
$p>1/2$.\\[3pt]
Then $b = \frac{p}{1-p}\ \text{(so $\alpha> \frac{1-p}{p}$)},\ \gamma
= 1-\frac{1-p}{\alpha p +p -1},\ \omega = -\frac{\alpha
\,p^{2}}{\left(\alpha p +p x -1\right) \left(1-p \right)}$, and
$$\Omega(x) = \left(\frac{\alpha p +p x -1}{\alpha p +p -1}\right)^{-\frac{\alpha p}{1-p}}.$$
By taking $\alpha = 2(1-p)/p$ the calculation can be pushed
further. Under this condition
\begin{eqnarray*}
\Omega(x) &=& \left(\frac{1-p}{1+p(x-2)}\right)^2,\\
u(t) &=& \frac{2p-1}{p}+\frac{2(1-p)}{p(\sqrt{4 t +1}+1)},\\
g(t) &=& \frac{1}{2}+\frac{1}{2 \sqrt{4 t +1}},\\
\nu({\mathsf d} s) &=& \frac12\delta({\mathsf d} s)+\frac{{\mathrm e}^{-\frac{s}{4}}}{4 \sqrt{\pi s}}\,{\mathsf d} s.
\end{eqnarray*}
\medskip
\subsubsection*{Example 5}
This time $\varphi = \displaystyle \frac{(1-p)x}{1-px}$, with
$0< p< 1$ and $\alpha > 1-p$. Then
$b = 1+\frac{p}{1-p},\ \gamma = 1-\frac{1-p}{\alpha -1+p}$, and
\begin{eqnarray*}
\omega(x) &=& -\frac{\alpha p}{\left(p x +\alpha -1\right) \left(1-p \right)},\\ \Omega(x) &=& \left(\frac{p x +\alpha -1}{p+\alpha -1}\right)^{-\frac{\alpha}{1-p}}
\end{eqnarray*}
Let us end the computation in case when $\alpha = 2(1-p)$. Then we have
\begin{eqnarray*}
\Omega(x) &=& \left(\frac{1-p}{1+p(x -2)} \right)^{2}\\
u(t) &=& \frac{2p-1}{p}+\frac{2(1-p)}{p(\sqrt{4 p t +1}+1)},\\
g(t) &=& \frac{2p-1}{2 p}+\frac{1}{2p\sqrt{4 p t +1}},\\
\nu({\mathsf d} s) &=& \frac{2p-1}{2p}\,\delta({\mathsf d} s)+\frac{{\mathrm e}^{-\frac{s}{4 p}}}{4 p^{\frac{3}{2}} \sqrt{\pi s}}\,{\mathsf d} s.
\end{eqnarray*}
\medskip
\subsubsection*{Example 6}
This time $\displaystyle \varphi(x) = \frac{(1-p)x^2}{1-px}$. Then
$\displaystyle b=2+\frac{p}{1-p}\text{ and }\alpha
>1-\frac{1}{2-p}$. Also\\
$\gamma =1+\frac{1-p}{\alpha p -2 \alpha -p +1},\ \omega =
-\frac{\alpha}{\left(\left(\alpha +p -\alpha p\right) x +\alpha
-1\right) \left(1-p \right)}$, and
$$
\Omega(x) = \left(\frac{(\alpha +p -\alpha p) x +\alpha -1}{2\alpha+p -\alpha p-1}\right)^{-\frac{\alpha}{(1-p)(\alpha+p-\alpha p)}}.$$
Now take $\displaystyle \alpha = -\frac{2 p(1-p)}{2 p^{2}-4 p
+1}$. Due to $1-1/(2-p)< \alpha \le 1$, we have to assume that
$p>1/2$. Then $\gamma = 2(1-p)^2$,
\begin{eqnarray*}
\Omega(x) &=& \left(\frac{1-p}{1-p(x-2)}\right)^2,\\
u(t) &=& \frac{2 p -1}{p} + \frac{2(1- p)}{p\left(1+\sqrt{\frac{t}{2-p}+1}\right)},\\
g(t) &=& \frac{\left(2 p -1\right)^{2}}{2 p^{2}}+\frac{1}{2 p^2\sqrt{\frac{t}{2-p}+1}}-\frac{2(1-p)^{2}}{p^{2} \left(1+\sqrt{\frac{t}{2-p}+1}\right)},
\end{eqnarray*}
and
\begin{multline*}
\nu({\mathsf d} s) = \frac{(2 p -1)^{2}}{2p^2}\,\delta({\mathsf d} s )+ \frac{(2 p
-1) (3-2p)}{2 p^{2}}\,\sqrt{\frac{2-p}{\pi s}}\, {\mathrm
e}^{(-2+p ) s}\,{\mathsf d} s \\+\frac{2(2-p ) (1-p
)^{2}}{p^{2}}\mathrm{erfc}\bigl(\sqrt{(2-p)s}\,\bigr)\,{\mathsf d} s.
\end{multline*}
\section{Introduction}
Once a specialized field for applications that required large data sets, large-scale distributed applications have become commonplace in our globalized society.
Regardless of whether you are developing a rich-web application or a native mobile application, managing distributed data is challenging.
For simplicity, developers today typically resort to using a single database that provides a form of strong\footnote{For instance, linearizability, where a value follows the real-time order of updates.} consistency.
In essence, the database serves as shared memory for the clients in the system.
A single database is an obvious bottleneck as it introduces a serialization point for all operations; this restricts the possible throughput of the system.
Developers strive to provide a near-native experience where operations appear to happen immediately. Since not all clients can be geographically located close to the database, application performance suffers as users move farther from it; worse, clients may not be able to communicate with the database at all because they are offline.
To provide good user experience, including high availability and low latency, developers are forced to integrate replication in the system design.
Systems that favor weak consistency scale better: data items can be locally replicated, locally mutated by the application, and their state can be disseminated asynchronously, outside of the critical path. Weak consistency allows applications to continue to operate while offline. While these systems provide for high scalability and high performance, programming with weak consistency can be a challenge for the application developer as updates to data items have no guarantee on update visibility or update order. Concurrency poses an additional problem, as updates happening concurrently at different replicas may be conflicting.
Numerous systems and programming models~\cite{Sivaramakrishnan:2015:DPO:2737924.2737981, conway2012logic, terry1995managing, meiklejohn2015lasp, alvaro2011consistency, kuperjoining, burckhardt2015global, myter2016now} have been proposed for working with weak consistency; however, few have seen adoption. Many of these systems have sound theoretical foundations, but few evaluations at scale have demonstrated their benefits in practice. We believe that the lack of such results stems from the infrastructure required for large-scale experiments, and from the challenges of engineering an implementation of a theoretical model using existing software languages and libraries.
In this paper, we discuss the practical issues encountered when evaluating
one of these programming models, Lasp~\cite{lasp-implementation, lasp-documentation}, originally presented at PPDP~'15.
Lasp is designed using a holistic approach where the programming model was co-designed
with its runtime system to ensure scalability.
We examine the challenges of engineering an implementation capable of scaling to a large number of nodes running in a public cloud environment, using a real world application scenario.
Further, we report on the engineering challenges of demonstrating the scalability of the Lasp model.
Our experience report substantiates that empirically validating scalability is non-trivial, regardless of the programming model.
\section{Advertisement Counter}
Lasp was invented to ease the development of distributed applications with weak consistency. The advertisement counter scenario from Rovio Entertainment, creator of Angry Birds, is an ideal fit for Lasp. This application counts the total number of times each advertisement is displayed on all client mobile phones, up to a given threshold for each.
The application has the following properties:
\begin{itemize}
\item \textbf{Replicated data.} Data is fully replicated to every client in the system. This replicated data is under high contention by each client in the system.
\item \textbf{High scalability.} Clients resemble individual mobile phone instances of the application, so the application should scale up to millions of clients.
\item \textbf{High availability.} Clients need to continue operation when disconnected as mobile phones frequently have periods of signal loss (offline operation).
\end{itemize}
As part of the large-scale evaluation done in the SyncFree project,
and driven by the developers' own curiosity, we decided to invest resources in using industrial-strength engineering techniques to evaluate the scalability of this application running in a real-world production cloud environment.
\subsection{Lasp}
Lasp~\cite{meiklejohn2015lasp} is a programming model that allows developers to write applications with Conflict-Free Replicated Data Types (CRDTs)~\cite{shapiro2011comprehensive, DBLP:journals/corr/AlmeidaSB16}. CRDTs are abstract data types, designed for use in concurrent and distributed programming, that have a binary merge operation to join any two replicas of a single CRDT. Under concurrent modification without coordination, different replicas of a single CRDT may diverge; the merge operation supports value convergence by ensuring that given enough communication, all replicas, without coordination, will converge to a single deterministic value regardless of the order that data is received and merged.
Historically, before CRDTs were introduced, ad-hoc merge functions were used, often with few formal guarantees.
After CRDTs were introduced, programmers who wanted to use them in their applications had two choices: either use a single CRDT from the existing literature to store application state, fitting their problem to an existing data structure; or build a custom CRDT that fits their application domain, which requires ensuring that the merge operation is both deterministic and convergent.
Lasp improves this choice in two ways:
\begin{itemize}
\item \textbf{Composition.} Lasp provides set-theoretic and functional combinators for composing CRDTs into larger CRDTs.
\item \textbf{Monotonic conditional.} Lasp introduces a conditional operation that allows the execution of application logic based on monotonic conditions\footnote{Monotonicity implies that once a condition becomes true, it remains true; a monotonicity check can be done without distributed coordination.} on CRDTs.
\end{itemize}
These two concepts allow Lasp applications to be both transparently and arbitrarily distributed across a set of nodes without altering application behavior. For brevity, the reader is referred to~\cite{meiklejohn2015lasp} for a full treatment of the Lasp semantics.
The advertisement counter uses two data structures from Lasp: the Add-Wins Set CRDT\footnote{a.k.a. Observed-Remove Set}, where elements can be arbitrarily removed and inserted without coordination and under concurrent add and remove operations the add will `win'; and the Grow-Only Counter CRDT, which models a counter that only increments.
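As an illustration of the convergence property, a state-based Grow-Only Counter can be sketched in a few lines. This is a sketch only (Lasp's CRDTs are implemented in Erlang); merge is the pointwise maximum over per-replica entries, which is commutative, associative and idempotent, so any gossip order converges to the same value.

```python
class GCounter:
    """State-based Grow-Only Counter: one monotone entry per replica id."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.entries = {}

    def increment(self, amount=1):
        self.entries[self.replica_id] = self.entries.get(self.replica_id, 0) + amount

    def value(self):
        return sum(self.entries.values())

    def merge(self, other):
        # Pointwise maximum: commutative, associative, idempotent.
        for rid, n in other.entries.items():
            self.entries[rid] = max(self.entries.get(rid, 0), n)
```

Two replicas incrementing concurrently and merging in either order arrive at the same total, without coordination.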
\subsection{Overview}
The design of the advertisement counter is roughly broken into three components.
\begin{itemize}
\item \textbf{Initialization.} When the advertisement counter application is first initialized, we first create Grow-Only Counters for each unique advertisement we want to track impressions for, and we then insert references to them into an initial Add-Wins Set of advertisements.
\item \textbf{Selection of displayable advertisements.} We define a dataflow computation in Lasp that will derive an Add-Wins Set of advertisements to display to the clients based on advertisements that have valid ``contracts'': records that represent that an advertisement is allowed to be displayed at the current time
(Figure~\ref{fig:advertisement-counter-async-dataflow}).
\item \textbf{Enforcing invariants.} Since clients increment each advertisement counter as advertisement impressions occur, when the target number of impressions is reached both the client and the server will fire a trigger to remove the advertisement counter from the set of advertisements, to prevent the advertisement from being further displayed. This can be done without coordination through the use of the Add-Wins Set.
\end{itemize}
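The invariant-enforcement step can be sketched as follows. This is a simplified, single-replica model with illustrative names: `ads` stands in for the Add-Wins Set of displayable advertisements and `counters` for the per-advertisement Grow-Only Counters; in the real system both clients and servers fire the trigger on their own replicas.

```python
def record_impression(ad_id, ads, counters, thresholds):
    """Count one impression; fire the removal trigger at the threshold."""
    if ad_id not in ads:
        return  # advertisement already disabled, nothing to count
    counters[ad_id] = counters.get(ad_id, 0) + 1
    # Monotonic condition: once the impression target is reached it stays
    # reached, so the removal needs no coordination between replicas.
    if counters[ad_id] >= thresholds[ad_id]:
        ads.discard(ad_id)
```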
\begin{figure}[h]
\begin{center}
\noindent\includegraphics[scale=0.25]{advertisement-counter-async-dataflow.pdf}
\end{center}
\caption{Asynchronous dataflow computation in Lasp that derives the set of displayable advertisements.}
\label{fig:advertisement-counter-async-dataflow}
\end{figure}
The advertisement counter has two important design choices, which makes its implementation in Lasp ideal.
\begin{itemize}
\item \textbf{Offline support.} As Angry Birds is a mobile application, there will be periods without connectivity. During this time, advertisements should still be displayable.
\item \textbf{Lower-bound invariant.} Advertisements need to be displayed a minimum number of times; additional impressions are not problematic. This is a monotonic condition: once the condition is true, it remains true.
\end{itemize}
\subsection{Implementation}
The advertisement counter is broken into two components that work in concert. Both components track a single replica of a set of identifiers of displayable advertisements, and for each identifier a replica of an advertisement counter that tracks the total number of times the advertisement has been displayed to the user. Each node in our experiment runs either a single client or server process.
\begin{itemize}
\item{\textbf{Server processes.}} One or more server processes, each responsible for propagating their state to clients and disabling advertisements that have been displayed a minimum number of times by monotonically removing them from the set of displayable advertisements.
\item{\textbf{Client processes.}} Many client processes that periodically propagate their state with other nodes, and increment their counter replicas based on a synthetic workload.
\end{itemize}
The prototype implementation of the Lasp programming model is built in the Erlang programming language and exposed to the user as an application library.
The fully instrumented Lasp advertisement counter client is implemented in 276 lines of Erlang code, and the fully instrumented advertisement counter server is 333 lines of Erlang code. Around 50$\%$ of this code is for instrumentation and orchestration, to ensure we can perform a full analysis of the application during experimentation. The Lasp runtime system takes care of cluster maintenance, data synchronization and storage, which are done manually in the previous approaches (ad-hoc merge or custom CRDT design).
\section{System Architecture}
To perform a real world evaluation of the advertisement counter, we implemented an efficient, scalable runtime system for Lasp. Lasp's runtime system is a highly-scalable eventually consistent data store with two different dissemination mechanisms (state-based vs. delta-based) and two different cluster topologies (datacenter vs. hybrid gossip). Lasp's programming model, presented in~\cite{meiklejohn2015lasp}, sits above the data store and exposes a programming interface.
Datacenter Lasp~\cite{meiklejohn2015lasp} operates using a structured overlay network. Hybrid Gossip Lasp~\cite{meiklejohn2015selective} uses an unstructured overlay network, and by design should achieve greater scalability and provide better fault-tolerance~\cite{rodrigues2010peer}.
\subsection{Datacenter Lasp}
Datacenter Lasp refers to the prototype implementation of the runtime system presented with the programming model, at this conference two years ago~\cite{meiklejohn2015lasp}.
In Datacenter Lasp, all CRDT state is both partitioned and replicated across several datacenter nodes. Client processes communicate directly with server processes that are running on datacenter nodes; client processes do not communicate amongst each other. Replication is used across datacenter nodes for fault tolerance, and partitioning/sharding is used for horizontal scalability: this is achieved through the use of consistent hashing and hash-space partitioning. In our experiments this is simplified and there is no partitioning, since the data set for our experiments never exceeds a single datacenter node's available capacity.
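Although partitioning is disabled in our experiments, the consistent-hashing scheme can be sketched generically (illustrative names only; the actual implementation differs in detail). Keys and virtual nodes are hashed onto a ring, and a key is owned by the first virtual node clockwise from its hash, so removing a node only remaps the keys that node owned.

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes (illustrative only)."""

    def __init__(self, nodes, vnodes=64):
        self._ring = sorted(
            (self._hash("%s:%d" % (node, i)), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._hashes = [h for h, _ in self._ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    def lookup(self, key):
        # First virtual node clockwise from the key's hash owns the key.
        i = bisect.bisect(self._hashes, self._hash(key)) % len(self._ring)
        return self._ring[i][1]
```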
\subsection{Hybrid Gossip Lasp}
Hybrid Gossip Lasp is inspired by two Hybrid Gossip protocols, HyParView~\cite{leitao2007hyparview}, and Plumtree~\cite{leitao2007epidemic}. In Hybrid Gossip Lasp, nodes are assembled in a peer-to-peer topology, where client processes can communicate either with server processes running on datacenter nodes or client processes. State is delivered transitively through other processes in the system: there is no need to communicate directly with a server process running on a datacenter node.
Hybrid Gossip Lasp uses a membership protocol heavily inspired by HyParView, to compute an overlay network containing all of the members in the cluster. The notable differences between the HyParView protocol and our membership protocol were the results of adapting the theoretical treatment in the HyParView paper to an actual implementation that was used for this experiment.
Specifically, the original HyParView protocol was evaluated in a low-churn environment, whereas our environment has much higher churn. {\em Churn} is defined as rate of node turnover, i.e., percentage of nodes leaving and being replaced by new nodes, per time unit. The higher churn in our environment was a byproduct of attempting to reduce experimentation time to save costs when operating large clusters: this allowed experiments that would normally take hours for cluster deployment and operations to be reduced to fractional hours at significant cost savings. For details on the modifications to the protocol, the reader is referred to~\cite{meiklejohn2017loquat}.
\subsection{Dissemination Protocols}
The system supports two data dissemination protocols.
\begin{itemize}
\item{\textbf{State-based.}} Objects are locally updated through mutators that inflate the state.
Objects are periodically sent to
peers that merge the received object with their local state.
\item{\textbf{Delta-based.}} Objects are locally updated by merging the state with the result of $\delta$-mutators \cite{DBLP:journals/corr/AlmeidaSB16}, called deltas, that compactly represent changed portions of state. These deltas are buffered locally and sent to each local peer in every propagation interval.
\end{itemize}
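The difference between the two protocols can be sketched for the Grow-Only Counter case: a state-based replica ships its whole entry map, while a delta-based replica ships only the entries changed since the last propagation interval. This is a simplified sketch with assumed names; the production system follows the delta-state framework cited above.

```python
class DeltaGCounter:
    """Grow-Only Counter supporting state-based and delta-based propagation."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.state = {}          # replica id -> monotone count
        self.delta_buffer = {}   # entries changed since the last flush

    def increment(self, amount=1):
        new = self.state.get(self.replica_id, 0) + amount
        self.state[self.replica_id] = new
        self.delta_buffer[self.replica_id] = new  # delta-mutator output

    def merge(self, entries):
        # Pointwise maximum; works for a full state and for a delta alike.
        for rid, n in entries.items():
            self.state[rid] = max(self.state.get(rid, 0), n)

    def flush_deltas(self):
        # Delta-based dissemination ships only the buffered entries.
        deltas, self.delta_buffer = self.delta_buffer, {}
        return deltas

    def value(self):
        return sum(self.state.values())
```

Both propagation styles converge to the same value; deltas merely compact what is sent per interval.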
\section{Engineering Scale}
The Lasp semantics ensures that the runtime system is correct in theory for arbitrary
distribution of the computation.
However, engineering a scalable real-world system requires
a significant amount of sophisticated tooling
to ensure scalability both for deployment and
for observability during execution.
Near the end of the SyncFree project, we designed an experiment
with the goal of scaling to 10\,000 nodes.
We finally achieved a scale of 1024 nodes at a total cloud computing cost of about \euro 9000.
\subsection{Experiment Configuration}
For the purposes of the experiment, we used a total of 70 m3.2xlarge instances in the Amazon EC2 cloud computing environment, within the same region and availability zone. We used the Apache Mesos~\cite{hindman2011mesos} cluster computing framework to subdivide each of these machines into smaller, fully-isolated machines using cgroups. Each virtual machine, representing a single Lasp node, communicated with other nodes in the cluster using TCP, and given the uniform deployment across all of the allocated instances, had varying latencies to other nodes in the system depending on their physical location.
When subdividing resources for the experiment, we allocated each server task 4\,GB of memory
with 2 virtual CPUs, and each client task 1\,GB of memory, with 0.5 virtual CPUs.
Here a {\em task} is a logical unit of computation
that is executed on one virtual machine.
We consider that these numbers vastly underrepresent the capabilities of modern mobile devices in widespread deployment today and therefore will lead to conservative results in the evaluation. We allocate more resources to servers, specifically in Datacenter Lasp mode, as servers are required to maintain connections to more nodes in the system; the advertisement counter does not require more resources between Datacenter and Hybrid Gossip modes.
\subsection{Experimental Workflow}
As running experiments in an unsimulated cloud environment can be challenging due to the inherent nondeterminism across different executions of the same experiment, we created a workflow targeted at reducing nondeterminism by controlling the experiments' setup and teardown procedures with detailed instrumentation for post-experimental analysis. We describe that workflow below.
\begin{itemize}
\item{\textbf{Bootstrapping.}} Initially, all of the server and client processes are bootstrapped and joined into a single cluster.
The experiment does not begin until we ensure that all of the nodes in the system are connected and the connection graph forms a single connected component. Each node should be reachable by every other node in the system, either directly as a local neighbor, or indirectly via multi-hop. During this process, the system creates advertisement counters and the set of displayable ads.
\item{\textbf{Simulation.}} Once we ensure the cluster is connected, each node starts collecting metrics and generating its own workload, which randomly selects a counter to increment from the set of displayable advertisements at every predefined impression interval. Periodically, each process propagates its local replicas to neighboring processes.
It should be noted that each client has its own workload generator: using a centralized harness for running the experiment introduces coordination, which reduces the scalability of the system.
\item{\textbf{Convergence.}} As each of the experiments has a controlled number of events that will be generated based on the number of clients participating in the system, the experiment continues to run until each node has observed the effects of all events: we refer to this process as convergence.
\item{\textbf{Metrics aggregation and archival.}} Once convergence is reached, the experiment is complete. Each node, upon observing convergence begins uploading metrics recorded during the experiment to a central location: these logs are used for analysis of the runtime system. Once this process is complete, the experiment harness waits for the system to fully teardown the cluster before starting a subsequent run, to prevent state leakage between runs when reusing the same hardware to reduce costs.
\end{itemize}
\subsection{Experimental Infrastructure}
Evaluation of a large-scale distributed programming model is difficult. This is due to failures in the underlying frameworks that are used to provide mechanisms for deployment and operations, and because of inadequate tools required to observe the system during execution to ensure it is operating properly.
\subsubsection{Apache Mesos}
While experimentation shows Lasp scalability to 1024 nodes, we do not believe that this number is a firm upper limit. When attempting to run experiments with 2048 nodes we quickly ran into problems with the Apache Mesos cloud computing framework. One issue is that when attempting to bootstrap a cluster containing 70 instances too quickly, instances become disconnected and need to be manually reprovisioned. This required a slower cluster deployment where a cluster would be scaled from 35 instances, first to 50 instances, and then to 70 instances. As the 2048 experiment required 140 m3.2xlarge instances to operate, cluster deployment would take significantly longer.
When attempting to launch 2048 tasks in Mesos (with a single task representing a single application node),
instances would become overloaded quickly and fail to respond to heartbeat messages: this triggered these instances being marked as offline by Mesos and the tasks orphaned. This would require restarting the experiment and reallocating the cluster to account for the lost tasks.
\subsubsection{Sprinter}
Once tasks were launched by Apache Mesos, we needed a mechanism for client processes to discover other client processes in the system and connect to them.
Therefore, we built an open source service discovery library called Sprinter that was used to fetch a list of running tasks from the Mesos framework, Marathon, and supply them to the system as targets to connect to. Sprinter also performs the following functions:
\begin{itemize}
\item \textbf{Graph analysis for connectedness.} Each node uploads its local membership view to Amazon S3. The lexicographically first server periodically pulls this membership information and builds a local graph that is analyzed to determine whether it contains all clients and forms a single connected component.
\item \textbf{Delay experiment for connectedness.} Based on graph analysis, the experiment's start is delayed until the connection graph forms a single connected component.
\item \textbf{Periodic reconnection if isolated.} If a node becomes isolated from the cluster, it will rejoin the cluster, using the information provided by Marathon.
\end{itemize}
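The connectedness analysis can be sketched as follows (an illustrative Python model; Sprinter itself is an Erlang library and its analysis code is not reproduced here): the merged membership views must cover every client and must form a single connected component.

```python
# Hypothetical sketch of Sprinter's graph analysis: merge per-node membership
# views into one undirected graph, then BFS to check a single connected
# component covering all clients.
from collections import deque

def is_fully_connected(views, all_clients):
    """views: dict mapping node id -> set of peer ids it reports connections to."""
    adj = {}
    for node, peers in views.items():
        adj.setdefault(node, set())
        for peer in peers:
            adj[node].add(peer)
            adj.setdefault(peer, set()).add(node)  # treat links as undirected
    if not set(all_clients) <= set(adj):
        return False  # some client never appeared in any membership view
    # BFS from an arbitrary client; connected iff BFS reaches every client.
    start = next(iter(all_clients))
    seen, queue = {start}, deque([start])
    while queue:
        for peer in adj[queue.popleft()]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return set(all_clients) <= seen

views = {"a": {"b"}, "b": {"c"}, "c": set(), "d": set()}
print(is_fully_connected(views, {"a", "b", "c"}))       # True: a-b-c connected
print(is_fully_connected(views, {"a", "b", "c", "d"}))  # False: d is isolated
```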
To assist operators in debugging the experiments, we built a graphical tool that visualizes the graph information from Sprinter, along with extensive logging of cluster conditions on the server node.
\subsubsection{Partisan} Distributed Erlang has known scalability problems when operated in the range of 50 or more nodes as it tracks full membership information in the cluster at each node and maintains full connectivity between nodes using a single TCP connection that is used for both data transmission and heartbeat messages. Single connections are problematic because of head-of-line blocking when large messages are transmitted.
We knew that for the experiment to scale we would need: (1) to move away from Distributed Erlang, (2) to configure network topologies for both Datacenter Lasp and Hybrid Gossip Lasp in a single specification, and (3) to specify configurations at runtime without having to modify application code. To do this we built Partisan, an open source Erlang library that provides an alternative communication layer that eschews the use of Distributed Erlang. Partisan supports multiple network configurations and topologies: a client-server star topology, a full connectivity topology mirroring Distributed Erlang's, a static topology where per-node membership is explicitly maintained, and a random unstructured overlay membership protocol inspired by the HyParView membership protocol.
\subsubsection{Workflow CRDT (W-CRDT)}
In our experiments, a central task could not be used to orchestrate the execution: early experiments demonstrated that such a task quickly became a bottleneck, slowing execution down to its own speed. Therefore, we eliminated the central task.
However, without a central task performing orchestration, it becomes more difficult to control when nodes should perform certain actions. For example, after event generation is complete, we should wait for convergence before proceeding to metrics aggregation. Therefore, we needed a mechanism for asynchronously controlling the workflow of the application scenario.
We devised a novel data structure, called the \textit{Workflow-CRDT} (W-CRDT), that is disseminated between nodes for controlling when certain actions should take place. This object is not instrumented by our runtime or included in any of the application logging, to prevent the structure itself from influencing the results of the experiment. The W-CRDT is a sequence of Grow-Only Map CRDTs, where each map is a function from opaque node identifiers to booleans. The sequence is implemented with the recursive Pair CRDT
(similar to a recursive list type).
The W-CRDT operates as follows:
\begin{itemize}
\item \textbf{Per node flag.} Each node's portion of a task to be completed is modeled as a flag; each node toggles its flag when it has completed its work.
\item \textbf{Tasks as grow-only maps.} Each task that needs to be performed is represented by one grow-only map. When all the map's flags are true, the task is considered complete. This corresponds to a barrier synchronization.
\item \textbf{Sequential composition of tasks.} Each task can be sequenced with another task. A task starts when its preceding task has completed.
\item \textbf{Workflow completion.} The workflow is considered complete when all of the tasks that make up the sequential composition are complete.
\end{itemize}
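The join and barrier semantics above can be modeled in a few lines (a hypothetical Python sketch; the actual W-CRDT is an Erlang composition of Pair and Grow-Only Map CRDTs):

```python
# Illustrative model of the W-CRDT: a sequence of grow-only maps from node id
# to a one-way boolean flag. Merge is the pointwise join (logical or), so any
# two replicas converge regardless of merge order.

def merge_task(a, b):
    """Join of two grow-only maps (flags only move from False to True)."""
    return {n: a.get(n, False) or b.get(n, False) for n in set(a) | set(b)}

def merge_workflow(wa, wb):
    """Pointwise join of two equal-length sequences of tasks."""
    return [merge_task(a, b) for a, b in zip(wa, wb)]

def task_complete(task, nodes):
    return all(task.get(n, False) for n in nodes)  # barrier: all flags true

def current_task(workflow, nodes):
    """A task may start only once every preceding task has completed."""
    for i, task in enumerate(workflow):
        if not task_complete(task, nodes):
            return i
    return None  # workflow complete

nodes = ["n1", "n2"]
# Two replicas each observe a different node finishing task 0.
r1 = [{"n1": True}, {}]
r2 = [{"n2": True}, {}]
merged = merge_workflow(r1, r2)
print(current_task(merged, nodes))  # task 0 is complete on both nodes, so task 1 is next
```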
The W-CRDT is used to model the following sequential workflow in each experiment.
\begin{itemize}
\item \textbf{Perform event generation.} Once event generation is complete, nodes mark event generation complete.
\item \textbf{Blocking for convergence.} Once convergence is reached, nodes mark convergence complete.
\item \textbf{Log aggregation.} Once convergence is marked complete, nodes upload their logs to a central location and mark log aggregation complete.
\item \textbf{Shutdown.} Nodes shut down once log aggregation is complete.
\end{itemize}
\section{Evaluation}
For Datacenter Lasp, we ran experiments using state-based dissemination, with a single server and 32, 64, 128, and 256 clients, which together with the server form a star topology. For Hybrid Gossip Lasp, we ran experiments using both dissemination strategies, with a single server and 32, 64, 128, 256, 512, and 1024 clients.
Each experiment was run twice, with the advertisement impression interval fixed at 10 seconds and the propagation interval at 5 seconds. The total number of impressions was configured to ensure that, in all executions, the experiment ran for 30 minutes.
Figure~\ref{fig:transmission-modes} and Figure~\ref{fig:transmission-scale} evaluate three different operational modes for Lasp, examining the state transmission for the duration of the experiment. Two Hybrid Gossip dissemination strategies, state-based and delta-based, are evaluated using a single overlay generated by the HyParView protocol. We also evaluated Datacenter Lasp, where clients propagate changes to the server using a state-based dissemination strategy. We did not evaluate delta-based dissemination for Datacenter Lasp, as it is unrealistic to believe that the server could buffer all changes in the system. In this evaluation, we scale up to 256 client processes: this is the largest number of client processes a single server could support in Datacenter Lasp. Hybrid Gossip Lasp scaled to 1024 nodes before we ran into issues with Apache Mesos.
Datacenter Lasp performs best in terms of state transmission when compared to Hybrid Gossip Lasp using the same dissemination strategy. This results from Datacenter Lasp having no redundancy at all: the star topology has a single point of failure through which all nodes in the system communicate. Delta-based dissemination demonstrates a clear advantage for Hybrid Gossip Lasp, where redundancy is required to keep the system operating: state transmission can be reduced without sacrificing the fault-tolerance properties of the underlying overlay network. In terms of protocol transmission in Hybrid Gossip Lasp, delta-based dissemination performs better than state-based, even though it is a more complex protocol: in delta-based dissemination a process can track which updates have been seen by its neighbor processes and will not disseminate an unchanged object, while in state-based dissemination an object is always propagated.
\begin{figure}[h]
\begin{center}
\noindent\includegraphics[scale=0.6]{transmission-modes.pdf}
\end{center}
\caption{Comparison of state- and delta-based dissemination in both Datacenter and Hybrid Gossip Lasp with 32/64 clients.}
\label{fig:transmission-modes}
\end{figure}
\begin{figure}[h]
\begin{center}
\noindent\includegraphics[scale=0.6]{transmission-scale.pdf}
\end{center}
\caption{Comparison of state- and delta-based dissemination in both Datacenter and Hybrid Gossip Lasp with Datacenter Lasp $\leq$ 256 clients (limited in scalability) and Hybrid Gossip Lasp $\leq$ 1024 clients.}
\label{fig:transmission-scale}
\end{figure}
Our experiments confirm several design considerations made in Lasp. First, as demonstrated by the graphs, in the Datacenter Lasp model the transmission cost is reduced because there is no redundancy in messaging and consequently no fault-tolerance. In this model, because all communication passes through a datacenter node, an update takes two hops to reach all clients in the system. However, this model has limited scalability because a centralized point, which could be partitioned and replicated across multiple servers, is used as a coordination point for all clients.
Hybrid Gossip Lasp adds redundancy by constructing a random overlay network using the HyParView protocol and gossiping state to local peers. This model has additional cost, but provides fault-tolerance through redundancy. In the worst case, an update will be observed by all nodes $V$ after $\log \left| V \right|$ propagation intervals, since in this topology the diameter is logarithmic in the number of nodes.
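The logarithmic-diameter behavior is easy to check with a toy simulation (illustrative Python; the fixed-degree random overlay below is only a crude stand-in for a HyParView-generated overlay): an update placed at one node of a 1024-node overlay covers all nodes in a handful of propagation rounds, consistent with a diameter logarithmic in the number of nodes.

```python
# Toy gossip simulation: count propagation rounds until an update placed at
# node 0 reaches every node of a random small-degree overlay.
import math
import random

random.seed(7)

def gossip_rounds(n, degree):
    # Each node picks `degree` distinct peers; edges are made symmetric.
    adj = [set() for _ in range(n)]
    nodes = list(range(n))
    for v in nodes:
        for peer in random.sample([u for u in nodes if u != v], degree):
            adj[v].add(peer)
            adj[peer].add(v)
    infected, rounds = {0}, 0
    while len(infected) < n:
        # One propagation interval: every informed node pushes to its peers.
        infected = infected | {p for v in infected for p in adj[v]}
        rounds += 1
    return rounds

n = 1024
print(gossip_rounds(n, 5), "rounds to cover", n, "nodes; log2(n) =", int(math.log2(n)))
```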
\section{Conclusion}
Designing new programming models for building large-scale distributed applications requires not only a solid theoretical design, but a well-engineered solution to demonstrate that the system can scale as advertised. Specifically, large-scale evaluations are plagued by the following problems.
\begin{itemize}
\item \textbf{Existing tooling can be problematic.} Existing infrastructure, frameworks, and languages can be treacherous: their design choices can limit the scalability of the system.
\item \textbf{Visualizations are invaluable.} Visualizations assist in debugging the system in real time.
\item \textbf{Achieving reproducibility is non-trivial.} Clouds provide high-level abstractions over machines, removing visibility into server location and isolation, which makes controlled experiments difficult.
\item \textbf{Performance can fluctuate.} Virtual machine placement and migration, compounded by a language VM layer, are factors that make performance measurement unpredictable. Cost considerations also limit the statistical smoothing possible by running multiple experiments.
\item \textbf{Evaluations are expensive.} To provide a real world evaluation,
significant funding is required for the infrastructure resources
and significant time is required for developing deployment tools and for debugging experiments.
\end{itemize}
Lasp's scalable design was achieved by taking a holistic approach: both the runtime system and programming model were designed to accommodate one another in a way that allows scalability. However, the effort required to demonstrate Lasp as both scalable and practical remained a non-trivial challenge.
{\small
\paragraph{Acknowledgements} This work was partially funded by the SyncFree Project in the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n\textsuperscript{o} 609551, by the LightKone Project in the European Union Horizon 2020 Framework Programme for Research and Innovation (H2020/2014-2020), under grant agreement n\textsuperscript{o} 732505, by SMILES within project ``TEC4Growth – Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01- 0145-FEDER-000020'' financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF). Chris is funded by the Erasmus Mundus Doctorate Programme under grant agreement n\textsuperscript{o} 2012-0030.}
\balance
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{intro:sec}
Though they are both probabilistic theories, probability theory and
quantum mechanics have historically developed along very different lines.
Nonetheless the two theories are remarkably close, and indeed a rigorous
development of quantum probability \cite{maassen-qprob} contains classical
probability theory as a special case. The embedding of classical into
quantum probability has a natural interpretation that is central to the
idea of a quantum measurement: any set of {\it commuting} quantum
observables can be represented as random variables on some probability
space, and conversely any set of random variables can be encoded as
commuting observables in a quantum model. The quantum probability model
then describes the statistics of any set of measurements that we are
allowed to make, whereas the sets of random variables obtained from
commuting observables describe measurements that can be performed in a
single realization of an experiment. As we are not allowed to make
noncommuting observations in a single realization, any quantum measurement
yields even in principle only partial information about the system.
The situation in quantum feedback control
\cite{vanhandel-05,vanhandel-review} is thus very close to classical
stochastic control with partial observations \cite{Bensoussan1992}. A
typical quantum control scenario, representative of experiments in quantum
optics, is shown in Fig.\ \ref{fig:model}. We wish to control the state
of a cloud of atoms, e.g.\ we could be interested in controlling their
collective angular momentum. To observe the atoms, we scatter a laser
probe field off the atoms and measure the scattered light using a homodyne
detector (a cavity can be used to increase the interaction strength
between the light and the atoms). The observation process is fed into a
controller which can feed back a control signal to the atoms through some
actuator, e.g.\ a time-varying magnetic field. The entire setup can be
described by a Schr{\"o}dinger equation for the atoms and the probe field,
which takes the form of a ``quantum stochastic differential equation'' in
a Markovian limit. The controller, however, only has access to the
observations of the probe. The laser probe itself contributes quantum
fluctuations to the observations, hence the observation process can be
considered as a noisy observation of an atomic variable.
\begin{figure}
\centering
\includegraphics[width=0.72\textwidth]{Model.eps}
\caption{A typical feedback control scenario in quantum optics. A probe
laser scatters off a cloud of atoms in an optical cavity, and is
ultimately detected. The detected signal is processed by a controller
which feeds back to the system through a time varying magnetic field.
\label{fig:model}}
\end{figure}
As in classical stochastic control we can use the properties of the
conditional expectation to convert the output feedback control problem
into one with complete observations. The conditional expectation
$\pi_t(X)$ of an observable $X$ given the observations $\{Y_s:0\le s\le
t\}$ is the least mean square estimate of $X_t$ (the observable $X$ at
time $t$) given $Y_{s\le t}$. One can obtain a quantum filtering equation
\cite{belavkin,bouten,BvHJ-05} that propagates $\pi_t(X)$, or
alternatively the conditional density matrix $\rho_t$ defined by the
relation $\pi_t(X)={\rm Tr}[\rho_t X]$. This is the quantum counterpart
of the classical Kushner-Stratonovich equation, due to Belavkin
\cite{belavkin}, and plays an equivalent role in quantum stochastic
control. In particular, as $\mathbb{E}X_t=\mathbb{E}\pi_t(X)$ we can
control the expectations of observables by designing a state feedback
control law based on the filter.
Note that as the observation process $Y_{s\le t}$ is measured in a
single experimental realization, it is equivalent to a classical
stochastic process (i.e.\ the observables $Y_t$ commute with each
other at different times). As the filter depends only on the
observations, it is equivalent to a classical stochastic
equation; in fact, the filter can be expressed as a classical
(It\^o) stochastic differential equation for the conditional
density matrix $\rho_t$. Hence ultimately any quantum control
problem of this form is reduced to a classical stochastic control
problem for the filter.
In this paper we consider a class of quantum control problems of the
following form. Rather than specifying a cost function to minimize, as in
optimal control theory, we desire to asymptotically prepare a particular
quantum state $\rho_f$ in the sense that $\mathbb{E}X_t\to{\rm Tr}[\rho_f
X]$ as $t\to\infty$ for all $X$ (for a deterministic version see
e.g.~\cite{mirrahimi-et-al2-04}). As $\mathbb{E}X_t=\mathbb{E}\pi_t(X)$,
this comes down to finding a feedback control that will ensure the
convergence $\rho_t\to\rho_f$ of the conditional density $\rho_t$. In
addition to this convergence, we will show that our controllers also
render the filter stochastically stable around the target state, which
suggests some degree of robustness to perturbations. In \S\ref{single:sec}
we will discuss the preparation of states in a cloud of atoms where the
$z$-component of the angular momentum has zero variance, whereas in
\S\ref{multi:sec} we will discuss the preparation of correlated states of
two spins. Despite their relatively simple description the creation of
such states is not simple. Quantum feedback control may provide a
desirable method to reliably prepare such states in practice (though other
issues, e.g.\ the reduction of quantum filters \cite{vanhandel-05b} for
efficient real-time implementation, must be resolved before such schemes
can be realized experimentally; we refer to \cite{Geremia-science} for a
state-of-the-art experimental demonstration of a related quantum control
scenario).
Though we have attempted to indicate the origin of the control problems
studied here, a detailed treatment of either the physical or mathematical
considerations behind our models is beyond the scope of this paper; for a
rigorous introduction to quantum probability and filtering we refer to
\cite{BvHJ-05}. Instead we will consider the quantum filtering equation
as our starting point, and investigate the classical stochastic control
problem of feedback stabilization of this equation. In \S\ref{geom:sec}
we first introduce some tools from stochastic stability theory and
stochastic analysis that we will use in our proofs. In \S\ref{model:sec}
we introduce the quantum filtering equation and study issues such as
existence and uniqueness of solutions, continuity of the paths, etc. In
\S\ref{single:sec} we pose the problem of stabilizing an angular momentum
eigenstate and prove global stability under a particular control law.
It is our expectation that the methods of \S\ref{single:sec} are
sufficiently flexible to be applied to a wide class of quantum state
preparation scenarios. As an example, we use in \S\ref{multi:sec} the
techniques developed in \S\ref{single:sec} to stabilize particular
entangled states of two spins.
Additional results and numerical simulations will appear in
\cite{MvHMM-05}.
\section{Geometric tools for stochastic processes}
\label{geom:sec}
In this section we briefly review two methods that will allow us to apply
geometric control techniques to stochastic systems. The first is a
stochastic version of the classical Lyapunov and LaSalle invariance
theorems. The second, a support theorem for stochastic differential
equations, will allow us to infer properties of stochastic sample paths
through the study of a related deterministic system. We refer to the
references for proofs of the theorems.
\subsection{Lyapunov and LaSalle invariance theorems}
\label{lasalle:sec}
The Lyapunov stability theory and LaSalle's invariance theorem are
important tools in the analysis of and control design for deterministic
systems. Similarly, their stochastic counterparts will play an essential
role in what follows. The subject of stochastic stability was studied
extensively by Has'minski\u{\i} \cite{hasminskii} and by Kushner
\cite{kushner-67}. We will cite a small selection of the results that
will be needed in the following: a Lyapunov (local) stability theorem for
Markov processes, and the LaSalle invariance theorem of Kushner
\cite{kushner-67,kushner-68,kushner-72}.
\begin{definition}
Let $x_t^z$ be a diffusion process on the metric state space $X$,
started at $x_0=z$, and let $\tilde z$ denote an equilibrium
position of the diffusion, i.e.\ $x_t^{\tilde z}=\tilde z$.
Then
\begin{enumerate}
\item the equilibrium $\tilde z$ is said to be {\rm stable in probability}
if
\begin{equation}
\lim_{z\to\tilde z}\mathbb{P}\left(
\sup_{0\le t<\infty}
\|x_t^z-\tilde z\|\ge\varepsilon\right)
=0\qquad \forall\varepsilon>0.
\end{equation}
\item the equilibrium $\tilde z$ is {\rm globally stable} if it is stable
in probability and additionally
\begin{equation}
\mathbb{P}\left(\lim_{t\to\infty}x_t^z=\tilde z\right)=1\qquad
\forall z\in X.
\end{equation}
\end{enumerate}
\end{definition}
In the following theorems we will make the following assumptions.
\begin{enumerate}
\item The state space $X$ is a complete separable metric space and $x_t^z$
is a homogeneous strong Markov process on $X$ with continuous
sample paths.
\item $V(\cdot)$ is a nonnegative real-valued continuous function on $X$.
\item For $\lambda>0$, let $Q_\lambda=\{x\in X:V(x)<\lambda\}$
and assume $Q_\lambda$ is nonempty. Let $\tau_\lambda=
\inf\{t:x_t^z\not\in Q_\lambda\}$ and define the stopped process
$\tilde x_t^z=x^z_{t\wedge\tau_\lambda}$.
\item $\mathscr{A}_\lambda$ is the weak infinitesimal operator
of $\tilde x_t$ and $V$ is in the domain of $\mathscr{A}_\lambda$.
\end{enumerate}
The following theorems can be found in Kushner \cite{kushner-67,kushner-68,kushner-72}.
\begin{theorem}[Local stability]
\label{thm:localstab}
Let $\mathscr{A}_\lambda V\le 0$ in $Q_\lambda$. Then the following
hold:
\begin{enumerate}
\item $\lim_{t\to\infty}V(\tilde x_t^z)$ exists a.s., so $V(x_t^z)$
converges for a.e.\ path remaining in $Q_\lambda$.
\item $\mathbb{P}\mbox{\rm-lim}_{t\to\infty}\mathscr{A}_\lambda V(\tilde x_t^z)=0$,
so $\mathscr{A}_\lambda V(x_t^z)\to 0$ in probability as $t\to\infty$
for almost all paths which never leave $Q_\lambda$.
\item For $z\in Q_\lambda$ and $\alpha\le\lambda$ we have the
uniform estimate
\begin{equation}
\mathbb{P}\left(
\sup_{0\le t<\infty}V(x_t^z)\ge\alpha
\right)=
\mathbb{P}\left(
\sup_{0\le t<\infty}V(\tilde x_t^z)\ge\alpha
\right)\le
\frac{V(z)}{\alpha}.
\end{equation}
\item If $V(\tilde z)=0$ and $V(x)\ne 0$ for $x\ne\tilde z$,
then $\tilde z$ is stable in probability.
\end{enumerate}
\end{theorem}
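The uniform estimate in item 3 is a supermartingale inequality and can be checked numerically on a toy diffusion (our own example, not from the theorem): for $dx_t=-x_t\,dt+x_t\,dW_t$ and $V(x)=x^2$ we have $\mathscr{A}V=-2x^2+x^2=-x^2\le 0$ everywhere, so $\mathbb{P}(\sup_{t}V(x_t^z)\ge\alpha)\le V(z)/\alpha$.

```python
# Monte Carlo check of the Lyapunov estimate P(sup V(x_t) >= alpha) <= V(x0)/alpha
# for dx = -x dt + x dW, V(x) = x^2, using an Euler-Maruyama discretization.
import math
import random

random.seed(0)

def sup_exceeds(x0, alpha, T=4.0, dt=2e-3):
    """Does sup_{t <= T} V(x_t) reach alpha along one simulated path?"""
    x = x0
    for _ in range(int(T / dt)):
        x += -x * dt + x * random.gauss(0.0, math.sqrt(dt))
        if x * x >= alpha:
            return True
    return False

x0, alpha, trials = 0.5, 1.0, 2000
empirical = sum(sup_exceeds(x0, alpha) for _ in range(trials)) / trials
print(f"empirical exceedance {empirical:.3f} <= bound {x0**2 / alpha:.3f}")
```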
The following theorem is a stochastic version of the LaSalle invariance
theorem. Recall that a diffusion $x_t^z$ is said to be Feller continuous if
for fixed $t$, $\mathbb{E}G(x_t^z)$ is continuous in $z$ for any bounded
continuous $G$.
\begin{theorem}[Invariance]
\label{thm:lasalle}
Let $\mathscr{A}_\lambda V\le 0$ in $Q_\lambda$. Suppose $Q_\lambda$
has compact closure, $\tilde x_t^z$ is Feller continuous, and
that $\mathbb{P}(\|\tilde x_t^z-z\|>\varepsilon)
\to 0$ as $t\to 0$ for any $\varepsilon>0$, uniformly for
$z\in Q_\lambda$. Then $\tilde x_t^z$ converges in probability to
the largest invariant set contained in $C_\lambda=\{x\in Q_\lambda:
\mathscr{A}_\lambda V(x)=0\}$. Hence $x_t^z$ converges in
probability to the largest invariant set contained in $C_\lambda$
for almost all paths which never leave $Q_\lambda$.
\end{theorem}
\subsection{The support theorem}
\label{support:sec}
In the nonlinear control of deterministic systems an important role is
played by the application of geometric methods, e.g.\ Lie algebra
techniques, to the vector fields generating the control system. Such
methods can usually not be directly applied to stochastic systems,
however, as the processes involved are not (sufficiently) differentiable.
The support theorem for stochastic differential equations, in its original
form due to Stroock and Varadhan \cite{stroock-varadhan}, connects events
of probability one for a stochastic differential equation to the solution
properties of an associated deterministic system. One can then apply
classical techniques to the latter and invoke the support theorem to apply
the results to the stochastic system; see e.g.\ \cite{kunita-supp} for
the application of Lie algebraic methods to stochastic systems.
We quote the following form of the theorem \cite{kunita-flow,kunita-supp}.
\begin{theorem}
\label{thm:supportth}
Let $M$ be a connected, paracompact $C^\infty$-manifold and let $X_k$,
$k=0\ldots n$ be $C^\infty$ vector fields on $M$ such that all
linear sums of $X_k$ are complete.
Let $X_k=\sum_l X_k^l(x)\partial_l$ in local coordinates
and consider the Stratonovich equation
\begin{equation}
dx_t=X_0(x_t)\,dt+\sum_{k=1}^n X_k(x_t)\circ dW_t^k,
\qquad x_0=x.
\end{equation}
Consider in addition the associated deterministic control system
\begin{equation}
\frac{d}{dt}x_t^u=X_0(x_t^u)+\sum_{k=1}^n X_k(x_t^u)u^k(t),
\qquad x_0^u=x
\end{equation}
with $u^k\in\mathscr{U}$, the set of all piecewise constant
functions from $\mathbb{R}_+$ to $\mathbb{R}$. Then
\begin{equation}
\mathscr{S}_x=
\overline{\{x^u_\cdot:u\in\mathscr{U}^n\}}\subset
\mathscr{W}_x
\end{equation}
where $\mathscr{W}_x$ is the set of all continuous paths from
$\mathbb{R}_+$ to $M$ starting at $x$, equipped with the topology
of uniform convergence on compact sets, and $\mathscr{S}_x$ is the
smallest closed subset of $\mathscr{W}_x$ such that
$\mathbb{P}(\{\omega\in\Omega:x_\cdot(\omega)\in\mathscr{S}_x\})=1$.
\end{theorem}
\section{Solution properties of quantum filters}
\label{model:sec}
The purpose of this section is to introduce the dynamical equations for
a general quantum system with feedback and to establish their basic
solution properties.
We will consider quantum systems with finite dimension $1<N<\infty$. The
state space of such a system is given by the set of density matrices
\begin{equation}
\mathcal{S}=\{\rho\in\mathbb{C}^{N\times N}:
\rho=\rho^*,~{\rm Tr}\,\rho=1,~\rho\ge 0\}
\end{equation}
where $\rho^*$ denotes Hermitian conjugation. In noncommutative
probability the space $\mathcal{S}$ is the analog of the set of
probability measures of an $N$-state random variable. Finite-dimensional
quantum systems are ubiquitous in contemporary quantum physics; a
system with dimension $N=2^n$, for example, can represent the collective
state of $n$ qubits in the setting of quantum computing, and $N=2J+1$
represents a system with fixed angular momentum $J$. The following lemma
describes the structure of $\mathcal{S}$:
\begin{lemma}
\label{l:hullc}
$\mathcal{S}$ is the convex hull of
$\{\rho\in\mathbb{C}^{N\times N}:
\rho=vv^*,~v\in\mathbb{C}^N,~v^*v=1\}$.
\end{lemma}
\begin{proof}
The statement is easily verified by diagonalizing the elements of
$\mathcal{S}$. \qquad
\end{proof}
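Numerically, the lemma is just the spectral decomposition of a density matrix; the following check (illustrative Python with NumPy, our own example) reconstructs a random $\rho\in\mathcal{S}$ as a convex combination of rank-one projectors onto unit vectors:

```python
# Spectral decomposition of a random 3x3 density matrix:
# rho = sum_i lambda_i v_i v_i^*, with lambda_i >= 0 summing to one.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho).real          # Hermitian, nonnegative, unit trace

lam, V = np.linalg.eigh(rho)       # eigenvalues lam and orthonormal eigenvectors
recon = sum(l * np.outer(V[:, i], V[:, i].conj()) for i, l in enumerate(lam))

print(np.all(lam >= -1e-12), abs(lam.sum() - 1) < 1e-12)  # valid convex weights
print(np.allclose(recon, rho))                            # rho is recovered
```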
We now consider continuous measurement of such a system, e.g.\ by weakly
coupling it to an optical probe field and performing a diffusive
observation of the field. When the state of the system is conditioned on
the observation process we obtain the following matrix-valued It\^o
equation for the conditional density, which is a quantum analog of the
Kushner-Stratonovich equation of nonlinear filtering
\cite{belavkin,bouten,vanhandel-05}:
\begin{equation}
\label{eq:qfilt}
\begin{split}
d\rho_t=-i(H_t\rho_t-\rho_t H_t)\,dt
+(c\rho_t&c^* - \tfrac{1}{2}(c^*c\rho_t+\rho_tc^*c))\,dt \\
&+\sqrt{\eta}\,(c\rho_t+\rho_tc^*
-{\rm Tr}[(c+c^*)\rho_t]\rho_t)\,dW_t.
\end{split}
\end{equation}
Here we have introduced the following quantities:
\begin{itemize}
\item The Wiener process $W_t$ is the innovation
$dW_t=dy_t-\sqrt{\eta}\,{\rm Tr}[(c+c^*)\rho_t]dt$. Here $y_t$, a
continuous semimartingale with quadratic variation $\langle
y,y\rangle_t=t$, is the observation process obtained from the system.
\item $H_t=H_t^*$ is a Hamiltonian matrix which describes the action of
external forces on the system. We will consider $H_t$ of the form
$H_t=F+u_tG$ with $F=F^*$, $G=G^*$ and the (real) scalar control input
$u_t$.
\item $u_t$ is a bounded real c{\`a}dl{\`a}g process that is adapted to
$\mathcal{F}_t^y=\sigma(y_s,0\le s\le t)$, the filtration generated by the
observations up to time $t$.
\item $c$ is a matrix which determines the coupling to the external
(readout) field.
\item $0<\eta\le 1$ is the detector efficiency.
\end{itemize}
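As an illustration of (\ref{eq:qfilt}), the following Euler--Maruyama sketch integrates the filter for a single qubit with $c=\sigma_z$, $H=u\,\sigma_y$ for a constant control $u$, and $\eta=1$ (our own choice of matrices and parameters, not taken from the paper). Note that every term in (\ref{eq:qfilt}) is traceless and Hermitian, so the scheme preserves unit trace and Hermiticity up to rounding error:

```python
# Euler-Maruyama integration of the quantum filtering equation for a qubit.
import numpy as np

rng = np.random.default_rng(2)
sy = np.array([[0.0, -1j], [1j, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

def euler_step(rho, H, c, dt, eta=1.0):
    dW = rng.standard_normal() * np.sqrt(dt)  # innovation increment
    drift = (-1j * (H @ rho - rho @ H)
             + c @ rho @ c.conj().T
             - 0.5 * (c.conj().T @ c @ rho + rho @ c.conj().T @ c))
    innov = c @ rho + rho @ c.conj().T - np.trace((c + c.conj().T) @ rho) * rho
    return rho + drift * dt + np.sqrt(eta) * innov * dW

rho = np.array([[0.5, 0.25], [0.25, 0.5]], dtype=complex)
for _ in range(1000):
    rho = euler_step(rho, H=0.2 * sy, c=sz, dt=1e-3)

# Trace and Hermiticity are preserved by every term of the equation,
# so only floating-point rounding error accumulates.
print(abs(np.trace(rho).real - 1.0) < 1e-9)
print(np.allclose(rho, rho.conj().T))
```

(Positivity of $\rho_t$ holds for the continuous equation, as shown below, but is not exactly guaranteed by the explicit discretization.)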
Let us begin by studying a different form of the equation (\ref{eq:qfilt}).
Consider the linear It\^o equation
\begin{equation}
\label{eq:zakai}
d\tilde\rho_t=-i(H_t\tilde\rho_t-\tilde\rho_t H_t)\,dt
+(c\tilde\rho_tc^* -
\tfrac{1}{2}(c^*c\tilde\rho_t+\tilde\rho_tc^*c))\,dt
+\sqrt{\eta}\,(c\tilde\rho_t+\tilde\rho_tc^*)\,dy_t,
\end{equation}
which is the quantum analog of the Zakai equation. As it obeys a
global (random) Lipschitz condition, this equation has a unique strong
solution (\cite{protter}, pp.\ 249--253).
\begin{lemma}
\label{lem:invzakai}
The set of nonnegative nonzero matrices is a.s.\ invariant
for {\rm (\ref{eq:zakai})}.
\end{lemma}
\begin{proof}
We begin by expanding $\tilde\rho_0$ into its eigenstates,
i.e.\ $\tilde\rho_0=\sum_i\lambda_iv_0^{i}v_0^{i*}$ with
$v_0^{i}\in\mathbb{C}^N$ being the $i$th eigenvector and $\lambda_i$
the $i$th eigenvalue. As $\tilde\rho_0$ is nonnegative all the
$\lambda_i$ are nonnegative.
Now consider the set of equations
\begin{equation}
\label{eq:flibbered}
d\rho_t^{i}=-i(H_t\rho_t^{i}
-\rho_t^{i}H_t)\,dt
+(c\rho_t^{i}c^* -
\tfrac{1}{2}(c^*c\rho_t^{i}+\rho_t^{i}c^*c))\,dt
+(c\rho_t^{i}+\rho_t^{i}c^*)\,dW_t'
\end{equation}
with $\rho_0^{i}=v_0^{i}v_0^{i*}$. Here we have extended our
probability space to admit a Wiener process $\hat W_t$ that is
independent of $y_t$, and $W'_t=\sqrt{\eta}\,y_t+\sqrt{1-\eta}\,\hat W_t$.
The process $\tilde\rho_t$ is then equivalent in law to
$\mathbb{E}[\rho_t'|\mathcal{F}_t^y]$, where $\rho_t'=
\sum_i\lambda_i\rho_t^{i}$.
Now note that the solution of the set of equations
\begin{equation}
\label{eq:linlin}
dv_t^{i}=-iH_tv_t^{i}\,dt
-\tfrac{1}{2}c^*c\,v_t^{i}\,dt+
c\,v_t^{i}\,dW_t',
~~~~~~~ ~~~~~~~ v_t^{i}\in\mathbb{C}^N
\end{equation}
satisfies $\rho_t^{i}=v_t^{i}v_t^{i*}$, as is readily verified by
It\^o's rule. By \cite{protter}, p.\ 326 we have that
$v_t^{i}=U_tv_0^{i}$ where the random matrix $U_t$ is a.s.\
invertible for all $t$. Hence a.s.\ $v_t^{i}\ne 0$ for any
finite time unless $v_0^{i}=0$. Thus clearly $\rho_t'$ is a.s.\ a
nonnegative nonzero matrix for all $t$, and the
result follows. \qquad
\end{proof}
\begin{proposition}
Eq.\ {\rm (\ref{eq:qfilt})} has a unique strong solution
$\rho_t=\tilde\rho_t/{\rm Tr}\,\tilde\rho_t$ in $\mathcal{S}$.
\end{proposition}
Clearly this must be satisfied if (\ref{eq:qfilt}) is to propagate a density.
\begin{proof}
As the set of nonnegative nonzero matrices is invariant for
$\tilde\rho_t$, this implies in particular that
${\rm Tr}\,\tilde\rho_t>0$ for all $t$ a.s. Thus the result
follows simply from application of It\^o's rule to (\ref{eq:zakai}),
and from the fact that if $M=\sum_i\lambda_iv_iv_i^*$ is a nonnegative
nonzero matrix, written in its spectral decomposition with unit
eigenvectors $v_i$, then $M/{\rm Tr}\,M=
\sum_i(\lambda_i/\sum_j\lambda_j)v_iv_i^*\in\mathcal{S}$. \qquad
\end{proof}
\begin{proposition}
\label{pro:uniformstoch}
The following uniform estimate holds for {\rm (\ref{eq:qfilt})}:
\begin{equation}
\mathbb{P}\left(\sup_{0\le\delta\le\Delta}
\|\rho_{t+\delta}-\rho_t\|>\varepsilon\right)
\le C\Delta(1+\Delta) \qquad \forall\varepsilon>0
\end{equation}
where $0<C<\infty$ depends only on $\varepsilon$ and
$\|\cdot\|$ is the Frobenius norm. Hence the solution of {\rm
(\ref{eq:qfilt})} is stochastically continuous uniformly in $t$
and $\rho_0$.
\end{proposition}
\begin{proof}
Write $\rho_t=\rho_0+\Phi_t+\Xi_t$ where
\begin{equation}
\label{eq:uniformphit}
\Phi_t=
\int_0^t\left[-i(H_s\rho_s-\rho_s H_s)
+(c\rho_sc^* - \tfrac{1}{2}(c^*c\rho_s+\rho_sc^*c))
\right]ds,
\end{equation}
\begin{equation}\label{eq:uniformxi}
\Xi_t=
\int_0^t
\sqrt{\eta}\,
(c\rho_s+\rho_sc^*
-{\rm Tr}[(c+c^*)\rho_s]\rho_s)\,dW_s.
\end{equation}
For $\Xi_t$ we have the estimate (\cite{arnold}, p.\ 81)
\begin{equation}
\label{eq:uniformxit}
\mathbb{E}\left(\sup_{0\le\delta\le\Delta}
\|\Xi_{t+\delta}-\Xi_t\|^2\right)\le
4\eta
\int_t^{t+\Delta}\mathbb{E}\|
c\rho_s+\rho_sc^*-{\rm Tr}[(c+c^*)\rho_s]\rho_s\|^2
\,ds.
\end{equation}
As the integrand is bounded clearly this expression is bounded by
$C_1\Delta$ for some positive constant $C_1<\infty$.
For $\Phi_t$ we can write
\begin{equation}
\mathbb{E}\left(
\sup_{0\le\delta\le\Delta}\|\Phi_{t+\delta}-\Phi_t\|^2\right)
\le \mathbb{E}\left[\sup_{0\le\delta\le\Delta}\int_t^{t+\delta}
\|G_s\|\,ds\right]^2
=\mathbb{E}\left[\int_t^{t+\Delta}\|G_s\|\,ds\right]^2
\end{equation}
where $G_s$ denotes the integrand of (\ref{eq:uniformphit}).
As $\|G_s\|$ is bounded we can estimate this expression by
$C_2\Delta^2$ with $C_2<\infty$.
Using $\|A+B\|^2\le 2(\|A\|^2+\|B\|^2)$ we can write
\begin{equation}
\sup_{0\le\delta\le\Delta}\|\rho_{t+\delta}-\rho_t\|^2
\le 2\left(\sup_{0\le\delta\le\Delta}\|\Phi_{t+\delta}-\Phi_t\|^2+
\sup_{0\le\delta\le\Delta}\|\Xi_{t+\delta}-\Xi_t\|^2\right).
\end{equation}
Finally, Chebyshev's inequality gives
\begin{equation}
\mathbb{P}\left(\sup_{0\le\delta\le\Delta}
\|\rho_{t+\delta}-\rho_t\|>\varepsilon\right)\le
\frac{1}{\varepsilon^2}
\mathbb{E}\left(\sup_{0\le\delta\le\Delta}
\|\rho_{t+\delta}-\rho_t\|^2\right)
\le \frac{2C_1\Delta+2C_2\Delta^2}{\varepsilon^2}
\end{equation}
from which the result follows. \qquad
\end{proof}
{\em Remark.} The statistics of the observation process $y_t$ should of
course depend both on the control $u_t$ that is applied to the system and
on the initial state $\rho_0$. We will always assume that the filter
initial state $\rho_0$ matches the state in which the system is initially
prepared (i.e.\ we do not consider ``wrongly initialized'' filters) and
that the same control $u_t$ is applied to the system and to the filter
(see Fig.\ \ref{fig:model}). Quantum filtering theory then guarantees
that the innovation $W_t$ is a Wiener process. To simplify our proofs, we
make from this point on the following choice: for all initial states and
control policies, the corresponding observation processes are defined in
such a way that they give rise to the same innovation process
$W_t$\footnote{
This is quite contrary to the usual choice in stochastic control theory:
there the system and observation noise are chosen to be fixed Wiener
processes, and every initial state and control policy give rise to a
different innovation (Wiener) process. However, in the quantum case the
system and observation noise do not even commute with the observations
process, and thus we cannot use them to fix the innovations. In fact, the
observation process $y_t$ that emerges from the quantum probability model
is only defined in a ``weak'' sense as a $^*$-isomorphism between an
algebra of observables and a set of random variables on
$(\Omega,\mathcal{F},\mathbb{P})$ \cite{BvHJ-05}. Hence we might as well
choose the isomorphism for each initial state and control in such a way
that all observations $y_t[\rho_0,u_t]$ give rise to the fixed innovations
process $W_t$, regardless of $\rho_0,u_t$. That such an isomorphism
exists is evident from the form of the filtering equation at least in the
case that $u_t$ is a functional of the innovations (e.g.\ if
$u_t=u(\rho_t)$): if we calculate the strong solution of (\ref{eq:qfilt})
given a fixed driving process $W_t$, $\rho_0$, and $u_t[W]$, then $dy_t =
dW_t + \sqrt{\eta}\,{\rm Tr}[(c+c^*)\rho_t]dt$ must have the same law as
$y_t[\rho_0,u_t]$.
Note that the only results that depend on the precise choice of
$y_t[\rho_0,u_t]$ on $(\Omega,\mathcal{F},\mathbb{P})$ are joint
statistics of the filter sample paths for different initial states or
controls. However, such results are physically meaningless as the
corresponding quantum models generally do not commute.
}.
We now specialize to the following case:
\begin{itemize}
\item $u_t=u(\rho_t)$ with $u\in C^1(\mathcal{S},\mathbb{R})$.
\end{itemize}
In this simple feedback case we can prove several important properties of
the solutions. First, however, we must show existence and uniqueness for
the filtering equation with feedback: it is not a priori obvious that the
feedback $u_t=u(\rho_t)$ results in a well-defined c{\`a}dl{\`a}g control.
\begin{proposition}
\label{pro:feedbackxu}
Eq.\ {\rm (\ref{eq:qfilt})} with $u_t=u(\rho_t)$, $u\in C^1$ and
$\rho_0=\rho\in\mathcal{S}$ has a unique strong solution
$\rho_t\equiv\varphi_t(\rho,u)$ in $\mathcal{S}$, and
$u_t$ is a continuous bounded control.
\end{proposition}
\begin{proof}
As $\mathcal{S}$ is compact, we can find a bounded open set $\mathcal{T}\subset
\mathbb{C}^{N\times N}$ such that $\mathcal{S}$ is strictly contained in
$\mathcal{T}$. Let $C:\mathbb{C}^{N\times N}\to[0,1]$ be a smooth
function with compact support such that $C(\rho)=1$ for
$\rho\in\mathcal{T}$, and let $U(\rho)$ be a $C^1(\mathbb{C}^{N\times
N},\mathbb{R})$ function such that $U(\rho)=u(\rho)$ for
$\rho\in\mathcal{S}$. Then the equation
\begin{multline*}
d\bar\rho_t=-iC(\bar\rho_t)[F+U(\bar\rho_t)G,\bar\rho_t]\,dt
+C(\bar\rho_t)(c\bar\rho_tc^* - \tfrac{1}{2}(c^*c\bar\rho_t+\bar\rho_tc^*c))\,dt \\
+C(\bar\rho_t)\sqrt{\eta}\,(c\bar\rho_t+\bar\rho_tc^*
-{\rm Tr}[(c+c^*)\bar\rho_t]\bar\rho_t)\,dW_t,
\end{multline*}
where $[A,B]=AB-BA$, has global Lipschitz coefficients and hence has a
unique strong solution in $\mathbb{C}^{N\times N}$ and a.s.\ continuous
adapted sample paths \cite{protter}. Moreover $\bar\rho_t$ must be
bounded as $C(\rho)$ has compact support. Hence $U_t=U(\bar\rho_t)$ is an
a.s.\ continuous, bounded adapted process.
Now consider the solution $\rho_t$ of (\ref{eq:qfilt}) with
$u_t=U(\bar\rho_t)$ and $\rho_0=\bar\rho_0\in\mathcal{S}$. As both
$\rho_t$ and $\bar\rho_t$ have a unique solution, the solutions must
coincide up to the first exit time from $\mathcal{T}$. But we have
already established that $\rho_t$ remains in $\mathcal{S}$ for all $t>0$,
so $\bar\rho_t$ can certainly never exit $\mathcal{T}$. Hence
$\bar\rho_t=\rho_t$ for all $t>0$, and the result follows. \qquad
\end{proof}
In the following, we will denote by $\varphi_t(\rho,u)$ the solution of
(\ref{eq:qfilt}) at time $t$ with the control $u_t=u(\rho_t)$ and initial
condition $\rho_0=\rho\in\mathcal{S}$.
\begin{proposition}
\label{pro:feller}
If $V(\rho)$ is continuous, then $\mathbb{E}V(\varphi_t(\rho,u))$
is continuous in $\rho$; i.e., the diffusion {\rm (\ref{eq:qfilt})} is
Feller continuous.
\end{proposition}
\begin{proof}
Let $\{\rho^n\in\mathcal{S}\}$ be a sequence of points converging to
$\rho^\infty\in\mathcal{S}$. Let us write
$\rho^n_t=\varphi_t(\rho^n,u)$ and
$\rho^\infty_t=\varphi_t(\rho^\infty,u)$.
First, we will show that
\begin{equation}
\label{eq:toprove1feller}
\EE\|\rho^n_t-\rho^\infty_t\|^2 \rightarrow 0
\quad \mbox{as} \quad n \rightarrow \infty,
\end{equation}
where $\|\cdot\|$ is the Frobenius norm ($\|A\|^2=(A,A)$ with
the inner product $(A,B)=\tr{A^*B}$). We will write
$\delta_t^n=\rho^n_t-\rho^\infty_t$. Using It\^o's rule we obtain
\begin{equation}
\label{eq:estimatesfeller}
\begin{split}
\EE\|\delta_t^n\|^2 &=
\|\delta_0^n\|^2
+\int_0^t \eta\EE\tr{(c\delta_s^n+\delta_s^nc^*
-{\rm Tr}[(c+c^*)\rho_s^n]\rho_s^n
+{\rm Tr}[(c+c^*)\rho_s^\infty]\rho_s^\infty)^2}ds
\\&
+\int_0^t 2\,\EE\left[
\tr{(i[\rho_s^n,H(\rho_s^n)]
-i[\rho_s^\infty,H(\rho_s^\infty)])\delta_s^n}
+\tr{c\delta_s^n c^*\delta_s^n-c^*c(\delta_s^n)^2}
\right]ds
\end{split}
\end{equation}
where $[A,B]=AB-BA$. Let us estimate each of these terms. We have
\begin{equation}
\begin{split}
\tr{c^*c(\delta_t^n)^2} &=
\|c\delta_t^n\|^2\le C_1\|\delta_t^n\|^2 \\
\tr{c\delta_t^n c^*\delta_t^n} &=
(\delta_t^n c,c\delta_t^n)\le
\|\delta_t^n c\|~\|c\delta_t^n\|\le C_2\|\delta_t^n\|^2
\end{split}
\end{equation}
where we have used the Cauchy-Schwarz inequality and the fact
that all the operators are bounded. Next we tackle
\begin{equation}
\tr{(i[\rho_t^n,H(\rho_t^n)]
-i[\rho_t^\infty,H(\rho_t^\infty)])\delta_t^n}\le
\|i[\rho_t^n,H(\rho_t^n)]-i[\rho_t^\infty,H(\rho_t^\infty)]\|
~\|\delta_t^n\|.
\end{equation}
Now note that $S(\rho)=i[\rho,H(\rho)]=i[\rho,F+u(\rho)G]$ is
$C^1$ in the matrix elements of $\rho$, and its derivatives
are bounded as $\mathcal{S}$ is compact. Hence $S(\rho)$ is
Lipschitz continuous, and we have
\begin{equation}
\|S(\rho_t^n)-S(\rho_t^\infty)\|\le C_3\|\rho_t^n-
\rho_t^\infty\|=C_3\|\delta_t^n\|
\end{equation}
which implies
\begin{equation}
\tr{(i[\rho_t^n,H(\rho_t^n)]
-i[\rho_t^\infty,H(\rho_t^\infty)])\delta_t^n}\le
C_3\|\delta_t^n\|^2.
\end{equation}
Finally, we have $\|c\delta_t^n+\delta_t^nc^*\|\le C_4\|\delta_t^n\|$
due to boundedness of multiplication with $c$, and a similar
Lipschitz argument as the one above can be applied to
$S'(\rho)={\rm Tr}[(c+c^*)\rho]\rho$, giving
\begin{equation}
\|{\rm Tr}[(c+c^*)\rho_t^n]\rho_t^n
-{\rm Tr}[(c+c^*)\rho_t^\infty]\rho_t^\infty\|
\le C_5\|\delta_t^n\|.
\end{equation}
We can now use $\|A+B\|^2\le\|A\|^2+2\|A\|\,\|B\|+\|B\|^2$ to estimate
the last term in (\ref{eq:estimatesfeller}) by $C_6\|\delta_t^n\|^2$.
Putting all these together, we obtain
\begin{equation}
\EE\|\delta_t^n\|^2 \le
\|\delta_0^n\|^2+C\int_0^t \EE\|\delta_s^n\|^2 ds
\end{equation}
and thus by Gronwall's lemma
\begin{equation}
\EE\|\delta_t^n\|^2 \leq e^{Ct}\|\delta_0^n\|^2=
e^{Ct}\|\rho^n-\rho^\infty\|^2.
\end{equation}
As $t$ is fixed, Eq.\ (\ref{eq:toprove1feller}) follows.
We have now proved that $\rho_t^n\to\rho_t^\infty$ in mean square
as $n\to\infty$, which implies convergence in probability.
But then for any continuous $V$, $V(\rho_t^n)\to V(\rho_t^\infty)$
in probability (\cite{gikhman}, p.\ 60). As $\mathcal{S}$ is compact,
$V$ is bounded and we have
\begin{equation}
\mathbb{E}V(\rho_t^\infty)=
\mathbb{E}[
\mathop{\mathbb{P}\mbox{-lim}}_{n\to\infty}
V(\rho_t^n)]=
\lim_{n\to\infty}\mathbb{E}V(\rho_t^n)
\end{equation}
by dominated convergence (\cite{gikhman}, p.\ 72).
But as this holds for any convergent sequence $\rho^n$, the
result follows.
\qquad
\end{proof}
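The mean-square estimate behind Feller continuity is easy to observe numerically. The following sketch (our illustration, not part of the proofs) integrates the filter for a simple qubit example with $c=c^*=\sigma_z$, $H=0$, $\eta=1$ (all choices ours) from two nearby initial states driven by the \emph{same} Brownian path; the Frobenius distance between the two solutions stays of the order predicted by the Gronwall bound $\EE\|\delta_t\|^2\le e^{Ct}\|\delta_0\|^2$.

```python
import numpy as np

rng = np.random.default_rng(2)
sz = np.diag([1.0, -1.0]).astype(complex)    # measurement operator c = sigma_z (our choice)
eta, dt, steps = 1.0, 1e-4, 1000             # integrate up to t = 0.1

def em_step(rho, dW):
    # Euler-Maruyama step of (eq:qfilt) with H = 0 and c = c* = sigma_z
    lind = sz @ rho @ sz - 0.5 * (sz @ sz @ rho + rho @ sz @ sz)
    innov = sz @ rho + rho @ sz - 2.0 * np.trace(sz @ rho).real * rho
    return rho + lind * dt + np.sqrt(eta) * innov * dW

r1 = np.array([[0.60, 0.10], [0.10, 0.40]], dtype=complex)
r2 = np.array([[0.61, 0.10], [0.10, 0.39]], dtype=complex)
d0 = np.linalg.norm(r1 - r2)                 # Frobenius norm of delta_0
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal() # common driving noise for both solutions
    r1, r2 = em_step(r1, dW), em_step(r2, dW)
d1 = np.linalg.norm(r1 - r2)
```

Both discretized solutions remain hermitian with unit trace (the drift and innovation terms are traceless when ${\rm Tr}\,\rho=1$), and the final distance \texttt{d1} stays of the same order as \texttt{d0}.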
\begin{proposition}
\label{pro:markov}
$\varphi_t(\rho,u)$ is a strong Markov process in $\mathcal{S}$.
\end{proposition}
\begin{proof}
The proof of the Markov property in \cite{oksendal},
pp.\ 109--110, carries over to our case. But then
the strong Markov property follows from
Feller continuity \cite{kushner-67}.
\qquad
\end{proof}
\begin{proposition}
\label{pro:stopopen}
Let $\tau$ be the first exit time of $\rho_t$ from an
open set $Q\subset\mathcal{S}$ and consider the stopped
process $\rho_t^Q=\varphi_{t\wedge\tau}(\rho,u)$. Then $\rho_t^Q$
is also a strong Markov process in $\mathcal{S}$.
Furthermore, for $V$ s.t.\ $\mathscr{A}V$ exists and is continuous,
where $\mathscr{A}$ is the weak infinitesimal operator associated to
$\varphi_t(\rho,u)$, we have $\mathscr{A}_QV(x)=\mathscr{A}V(x)$ if
$x\in Q$ and $\mathscr{A}_QV(x)=0$ if $x\notin Q$ for the weak
infinitesimal operator $\mathscr{A}_Q$ associated to $\rho_t^Q$.
\end{proposition}
\begin{proof}
This follows from \cite{kushner-67}, pp.\ 11--12, and Proposition
\ref{pro:uniformstoch}. \qquad
\end{proof}
\section{Angular momentum systems}
\label{single:sec}
In this section we consider a quantum system with fixed angular momentum $J$
($2J\in\mathbb{N}$), e.g.\ an atomic ensemble, which is detected through a
dispersive optical probe \cite{vanhandel-review}. After conditioning,
such systems are described by an equation of the form (\ref{eq:qfilt}) where
\begin{itemize}
\item The Hilbert space dimension $N=2J+1$;
\item $c=\beta F_z$, $F=0$ and $G=\gamma F_y$ with $\beta,\gamma>0$.
\end{itemize}
Here $F_y$ and $F_z$ are the (self-adjoint) angular momentum operators
defined as follows. Let $\{\psi_k:k=0,\ldots,2J\}$ be the standard basis in
$\mathbb{C}^N$, i.e.\ $\psi_k$ is the vector whose single nonzero
element is $\psi_k^k=1$. Then \cite{merzbacher}
\begin{equation}
\begin{split}
F_y\psi_k &= ic_{k-J}\psi_{k+1}-ic_{J-k}\psi_{k-1}, \\
F_z\psi_k &= (k-J)\psi_k
\end{split}
\end{equation}
with $c_m=\tfrac{1}{2}\sqrt{(J-m)(J+m+1)}$. Without loss of generality
we will choose $\beta=\gamma=1$, as we can always rescale time and $u_t$
to obtain any $\beta,\gamma$.
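For concreteness, the matrices $F_y$ and $F_z$ defined above are straightforward to construct numerically; the following sketch (our illustration) does so for general $J$ and, for $J=1$, recovers the familiar spin-1 operators: $F_y$ is self-adjoint with spectrum $\{-1,0,1\}$, and $F_z$ has the nondegenerate spectrum $\{-1,0,1\}$.

```python
import numpy as np

def angular_momentum(J):
    """Build (F_y, F_z) for fixed angular momentum J (2J a nonnegative integer)."""
    N = int(round(2 * J)) + 1
    c = lambda m: 0.5 * np.sqrt((J - m) * (J + m + 1))   # c_m as in the text
    Fy = np.zeros((N, N), dtype=complex)
    for k in range(N):
        if k + 1 < N:
            Fy[k + 1, k] = 1j * c(k - J)     # the +i c_{k-J} psi_{k+1} component
        if k - 1 >= 0:
            Fy[k - 1, k] = -1j * c(J - k)    # the -i c_{J-k} psi_{k-1} component
    Fz = np.diag([k - J for k in range(N)]).astype(complex)
    return Fy, Fz

Fy, Fz = angular_momentum(1)   # the spin-1 case, N = 3
```

Note that the two index guards are consistent with the coefficients: $c_J=0$, so the formula itself truncates at the edges of the basis.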
Let us begin by studying the dynamical behavior of the resulting
equation,
\begin{equation}
\label{single:eq}
d\rho_t=-iu_t[F_y,\rho_t]\,dt
-\tfrac{1}{2}[F_z,[F_z,\rho_t]]\,dt
+\sqrt{\eta}\,(F_z\rho_t+\rho_t F_z
-2\,{\rm Tr}[F_z\rho_t]\rho_t)\,dW_t
\end{equation}
without feedback, i.e.\ with $u_t=0$.
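To see the reduction mechanism at work, here is a rough Euler-Maruyama simulation (our sketch, not used in any proof; all parameter values are our choices) of (\ref{single:eq}) with $u_t=0$ for $J=1$. The discretization preserves hermiticity and unit trace exactly, and the variance $v(\rho)={\rm Tr}[F_z^2\rho]-({\rm Tr}[F_z\rho])^2$, averaged over sample paths, decreases from its initial value.

```python
import numpy as np

rng = np.random.default_rng(0)
Fz = np.diag([-1.0, 0.0, 1.0]).astype(complex)   # J = 1
eta, dt, steps, paths = 0.5, 1e-4, 2000, 50      # our (arbitrary) parameter choices

def v(rho):  # variance of F_z in the state rho
    return (np.trace(Fz @ Fz @ rho) - np.trace(Fz @ rho) ** 2).real

v_final = []
for _ in range(paths):
    rho = np.eye(3, dtype=complex) / 3           # maximally mixed state, v = 2/3
    for _ in range(steps):
        dW = np.sqrt(dt) * rng.standard_normal()
        lind = Fz @ rho @ Fz - 0.5 * (Fz @ Fz @ rho + rho @ Fz @ Fz)
        innov = Fz @ rho + rho @ Fz - 2.0 * np.trace(Fz @ rho).real * rho
        rho = rho + lind * dt + np.sqrt(eta) * innov * dW
    v_final.append(v(rho))
mean_v = np.mean(v_final)
```

For a diagonal initial state every term above stays diagonal, so the simulation reduces to a classical filter on the populations; the averaged variance \texttt{mean\_v} nevertheless exhibits the supermartingale decay used in the proposition below.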
\begin{proposition}[Quantum state reduction]
\label{pro:reduction}
For any $\rho_0\in\mathcal{S}$, the solution $\rho_t$
of {\rm (\ref{single:eq})} with $u_t=0$ converges a.s.\ as $t\to\infty$
to one of the states $\psi_m\psi_m^*$.
\end{proposition}
\begin{proof}
We will apply Theorem \ref{thm:localstab} with $Q_\lambda=\mathcal{S}$.
Consider the Lyapunov function $v(\rho)=\mbox{Tr}[F_z^2\rho]-
(\mbox{Tr}[F_z\rho])^2$. One easily calculates $\mathscr{A}v(\rho)=
-4\eta\,v(\rho)^2\le 0$ and hence
\begin{equation}
\mathbb{E}v(\rho_t)=v(\rho_0)
-4\eta\int_0^t\mathbb{E}v(\rho_s)^2\,ds
\end{equation}
by using the It\^o rules. Note that $v(\rho)\ge 0$, so
\begin{equation}
4\eta\int_0^t\mathbb{E}v(\rho_s)^2\,ds
=v(\rho_0)-\mathbb{E}v(\rho_t)\le v(\rho_0)<\infty.
\end{equation}
Thus we have by monotone convergence
\begin{equation}\label{eq:mononoco}
\mathbb{E}\int_0^\infty v(\rho_s)^2\,ds<\infty \quad
\Longrightarrow \quad \int_0^\infty v(\rho_s)^2\,ds<\infty
\quad\mbox{a.s.}
\end{equation}
By Theorem \ref{thm:localstab} the limit of $v(\rho_t)$ as
$t\to\infty$ exists a.s., and hence Eq.\ (\ref{eq:mononoco}) implies
that $v(\rho_t)\to 0$ a.s. But the only states $\rho$ that satisfy
$v(\rho)=0$ are $\rho=\psi_m\psi_m^*$.
\qquad
\end{proof}
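The identification of the limit set rests on the fact that $v(\rho)\ge 0$ on $\mathcal{S}$ with equality exactly at the basis projectors $\psi_m\psi_m^*$. A quick numerical sanity check of this (ours, for $J=1$):

```python
import numpy as np

Fz = np.diag([-1.0, 0.0, 1.0]).astype(complex)   # J = 1

def v(rho):  # the Lyapunov function of the proof above
    return (np.trace(Fz @ Fz @ rho) - np.trace(Fz @ rho) ** 2).real

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
rho_mixed = X @ X.conj().T
rho_mixed /= np.trace(rho_mixed).real            # a generic density matrix
basis_projectors = [np.diag(e).astype(complex) for e in np.eye(3)]
values = [v(P) for P in basis_projectors]        # v vanishes on each psi_m psi_m^*
```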
The main goal of this section is to provide a feedback control
law that globally stabilizes \eqref{single:eq} around the equilibrium
solution $(\rho_t\equiv\rho_f,u\equiv0)$, where the target state
$\rho_f=v_fv_f^*$ is selected with $v_f=\psi_m$ for some $m$.
Stabilization of quantum state reduction for low-dimensional
angular momentum systems has been studied in~\cite{vanhandel-05}.
It is shown that the main challenge in such a stabilization
problem is due to the geometric symmetry hidden in the state space
of the system. Many natural feedback laws fail to stabilize the
closed-loop system around the equilibrium point $\rho_f$ because
of this symmetry: the $\omega$-limit set contains points other
than $\rho_f$. The approach of~\cite{vanhandel-05} uses computer
searches to find continuous control laws that break this symmetry
and globally stabilize the desired state. Unfortunately, the
method is computationally involved and can only be applied to
low-dimensional systems. Additionally, it is difficult to prove
stability in this way for arbitrary parameter values, as the
method is not analytical.
Here we present a different approach which avoids the unwanted
limit points by changing the feedback law around them. The approach is
entirely analytical and globally stabilizes the desired target state for
any dimension $N$ and $0<\eta\le 1$. The main result of this section can
be stated as follows:
\begin{theorem}
\label{single:thm}
Consider the system~\eqref{single:eq} evolving in the set $\SSS$.
Let $\rho_f=v_fv_f^*$ with $v_f=\psi_m$ for some $m$.
Consider the following control law, parametrized by $\gamma>0$:
\begin{enumerate}
\item $u_t=-\tr{i[F_y,\rho_t]\rho_f}$ if $\tr{\rho_t \rho_f} \geq \gamma$;
\item $u_t=1$ if $\tr{\rho_t \rho_f} \leq \gamma/2 $;
\item If $\rho_t\in\mathcal{B}=\{\rho:\gamma/2<\tr{\rho\rho_f}<\gamma\}$,
then $u_t=-\tr{i[F_y,\rho_t]\rho_f}$ if $\rho_t$
last entered $\mathcal{B}$ through the boundary
$\tr{\rho\rho_f}=\gamma$, and $u_t=1$ otherwise.
\end{enumerate}
Then $\exists\gamma>0$ s.t.\ $u_t$ globally stabilizes
\eqref{single:eq} around $\rho_f$ and $\mathbb{E}\rho_t\to\rho_f$ as
$t\to\infty$.
\end{theorem}
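In implementation terms, the switching law only needs a single boolean recording through which boundary the state last entered the transition region $\mathcal{B}$. The sketch below (ours; in particular, initializing the mode to the feedback branch for a state that starts inside $\mathcal{B}$ is our own convention, which the theorem does not prescribe) instantiates the law for the spin-1 operators:

```python
import numpy as np

def make_controller(Fy, rho_f, gamma):
    mode_u1 = {"on": True}                       # records which boundary B was last entered from
    def u(rho):
        p = np.trace(rho @ rho_f).real           # fidelity Tr[rho rho_f]
        if p >= gamma:
            mode_u1["on"] = True                 # case 1: feedback branch
        elif p <= gamma / 2:
            mode_u1["on"] = False                # case 2: constant branch
        # inside B = {gamma/2 < p < gamma}: keep the mode set at the last crossing (case 3)
        if mode_u1["on"]:
            return -np.trace(1j * (Fy @ rho - rho @ Fy) @ rho_f).real
        return 1.0
    return u

Fy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
rho_f = np.diag([0.0, 0.0, 1.0]).astype(complex)  # target v_f = psi_2 (J = 1)
u = make_controller(Fy, rho_f, gamma=0.1)
```

As expected, the feedback branch vanishes at the target ($u_1(\rho_f)=-\tr{i[F_y,\rho_f]\rho_f}=0$), while any state with fidelity below $\gamma/2$ receives the constant control $u=1$.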
Throughout the proofs we use the ``natural'' distance function
$$
V(\rho)=1-\tr{\rho\rho_f}:\SSS\rightarrow[0,1]
$$
from the state $\rho$ to the target state $\rho_f$. For future reference,
let us define for each $\alpha\in[0,1]$ the level set $\SSS_\alpha$ to be
$$
\SSS_\alpha=\{\rho\in\SSS:V(\rho)=\alpha\}.
$$
Furthermore, we define the following sets:
\begin{equation*}
\begin{split}
\SSS_{>\alpha} &= \{\rho\in\SSS : \alpha<V(\rho)\leq 1\}, \\
\SSS_{\ge\alpha} &= \{\rho\in\SSS : \alpha\le V(\rho)\leq 1\}, \\
\SSS_{<\alpha} &= \{\rho\in\SSS : 0\le V(\rho)< \alpha\}, \\
\SSS_{\le\alpha} &= \{\rho\in\SSS : 0\le V(\rho)\leq \alpha\}. \\
\end{split}
\end{equation*}
The proof of Theorem~\ref{single:thm} proceeds in four steps:
\begin{enumerate}
\item In the first step we show that when the initial state lies in the
set $\SSS_1$, the constant control field $u=1$ ensures that the
trajectories exit the level set $\SSS_1$, at least in expectation.
\item In the second step we use the result of step 1 to show that there
exists a $\gamma>0$ such that whenever the initial state lies inside the
set $\SSS_{>1-\gamma}$ and the control field is taken to be $u=1$, the
expectation value of the first exit time from this set takes a finite
value. Thus if we start the controlled system in the set
$\SSS_{>1-\gamma}$, it will exit this set in finite time with probability
one.
\item In the third step we show that whenever the initial state lies
inside the set $\SSS_{\le 1-\gamma}$ and the control is given by the
feedback law $u_t=-\tr{i[F_y,\rho_t]\rho_f}$, the probability that the
sample paths never exit the set $\SSS_{<1-\gamma/2}$ is uniformly bounded
below by a strictly positive value. We also show that almost all paths
that never leave $\SSS_{<1-\gamma/2}$ converge to the equilibrium point
$\rho_f$.
\item In the final step, we prove that there is a unique solution $\rho_t$
under the control $u_t$ by piecing together the solutions with fixed
controls $u=1$ and $u=-\tr{i[F_y,\rho_t]\rho_f}$. Combining the results
of the second and the third step, we show that the resulting trajectories
of the system eventually converge toward the equilibrium state $\rho_f$
with probability one.
\end{enumerate}
\subsection*{Step 1}
Let us take a fixed time $T>0$ and define the nonnegative function
$$
\chi(\rho)=\min_{t\in[0,T]}\EE V(\varphi_t(\rho,1)),\qquad
\rho \in \SSS.
$$
Recall that $\varphi_t(\rho,1)$ denotes the solution of (\ref{single:eq})
at time $t$ with the control $u_t=1$ and initial condition $\rho_0=\rho$.
The goal of the first step is to show the following result:
\begin{lemma}\label{first:lem}
$\chi(\rho)<1 ~~ \forall \rho\in \SSS_1.$
\end{lemma}
To prove this statement we will first show the following deterministic
result.
\begin{lemma}\label{add:lem}
Consider the deterministic differential equation
\begin{equation}\label{det:eq}
\frac{d}{dt}v_t=(-i F_y-F_z^2+CF_z)v_t,\qquad
v_0\in\CC^N\setminus\{0\}.
\end{equation}
For a sufficiently large constant $C$, $v_t$ exits the set
$\{v:v^*v_f=0\}$ in the interval $[0,T]$, i.e.\ there exists
$t\in[0,T]$ such that $v_t^*v_f\neq 0$.
\end{lemma}
\begin{proof}
The matrices $F_z$ and $F_y$ are of the form
$$
F_z =
\begin{pmatrix}
* & && & 0 \\
& * &&& \\
& &\ddots&& \\
&&&*& \\
0 & && & * \\
\end{pmatrix}, \qquad
F_y =
\begin{pmatrix}
0 & * & & &0 \\
* & 0 &*& &\\
& \ddots &\ddots &\ddots& \\
& &* &0 & *\\
0 & & & *&0\\
\end{pmatrix}
$$
where $F_z$ has no repeated diagonal entries ($F_z$ has a nondegenerate
spectrum) and the starred elements directly above and below the diagonal
of $F_y$ are all nonzero.
Now choose a constant $\kappa$ (playing the role of $C$ in the statement
of the Lemma) so that the matrix
$$
A=-iF_y-F_z^2+\kappa F_z
$$
admits distinct eigenvalues. This is always possible by choosing
sufficiently large $\kappa$, as $F_z$ has nondegenerate eigenvalues and
the eigenvalues of $A$ depend continuously\footnote{
Note that the coefficients of the characteristic polynomial of $A$
are continuous functions of $\kappa$, and the roots of a
polynomial depend continuously on the polynomial coefficients.
} on $\kappa$. For $k\in\{1,\ldots,N\}$ define the matrices $A_{k-1}$ and
$\tilde A_{k+1}$ to be:
$$
A_{k-1}=[A_{ij}]_{1\leq i,j\leq k-1},\qquad \tilde
A_{k+1}=[A_{ij}]_{k+1\leq i,j\leq N}.
$$
The fact that the matrices $[(F_z)_{ij}]_{1\leq i,j \leq k-1}$ and
$[(F_z)_{ij}]_{k+1\leq i,j \leq N}$ have disjoint spectra then implies
that for sufficiently large $\kappa$ the matrices $A_{k-1}$ and $\tilde
A_{k+1}$ have disjoint spectra as well.
Suppose that the solution of
$$
\dot v= A v, \qquad v|_{t=0}=v_0
$$
never leaves the set $\{v:v^*v_f=0\}$ in the interval $t\in[0,T]$.
Then in particular
$$
\frac{d^n}{dt^n}v^*v_f|_{t=0}=(A^nv_0)^*v_f=0,\qquad
n=0,1,\ldots
$$
The matrix $A$ is diagonalizable as it has distinct eigenvalues, i.e.\
$A=P D P^{-1}$ where $D$ is a diagonal matrix. Thus
\begin{equation}\label{deriv:eq}
(D^n \tilde v_0)^*\tilde v_f=0, \qquad n=0,1,\ldots
\end{equation}
where $\tilde v_0=P^{-1}v_0$ and $\tilde v_f=P^*v_f$. Eq.\
\eqref{deriv:eq} implies that $M\tilde v_0=0$ where
$$
M =
\begin{pmatrix}
(\tilde v_f)_1^* & & \ldots & (\tilde v_f)_N^* \\
(\tilde v_f)_1^* D_{11} & &\ldots& (\tilde v_f)_N^* D_{NN}\\
(\tilde v_f)_1^* D_{11}^{2}& &\ldots& (\tilde v_f)_N^* D_{NN}^{2} \\
\vdots & & \vdots & \vdots \\
(\tilde v_f)_1^* D_{11}^{N-1} &
&\ldots& (\tilde v_f)_N^* D_{NN}^{N-1} \\
\end{pmatrix}.
$$
The determinant of this Vandermonde matrix is
$$
{\rm det}\,M=
(\tilde v_f)_1^*\cdots(\tilde v_f)_N^*
\prod_{i>j}(D_{ii}-D_{jj}).
$$
As the matrix $A$ has distinct eigenvalues, all the entries
$D_{11},D_{22},\ldots,D_{NN}$ are distinct. Thus if we can show that
all the entries of the vector $\tilde v_f$ are non-zero then the
matrix $M$ must be invertible. But then $M\tilde v_0=0$ implies
that $\tilde v_0=0$ and hence $v_0=0$ is the only initial state
for which the dynamics does not leave the set $\{v:v^*v_f=0\}$ in
the interval $t\in[0,T]$, proving our assertion.
Let us thus show that in fact all elements of $\tilde v_f$ are nonzero.
Note that
$$
(\tilde v_f)_k=(P^*v_f)_k=P_{fk}^*,
$$
so it suffices to show that the eigenvectors of the matrix $A$ have
only nonzero elements. Suppose that an eigenvector $\Xi$ of $A$ admits
a zero entry, i.e.
$$
A\Xi=\lambda\Xi,\qquad \Xi_k=0 \text{ for some }
k\in\{1,\ldots,N\}.
$$
Defining $\chi_{k-1}=[\Xi_j]_{j=1,\ldots,k-1}$ and $\tilde
\chi_{k+1}=[\Xi_j]_{j=k+1,\ldots,N}$, a straightforward computation
shows that due to the structure of the matrix $A$
$$
A_{k-1}\chi_{k-1}=\lambda\chi_{k-1}\quad \text{and} \quad \tilde
A_{k+1}\tilde\chi_{k+1}=\lambda\tilde \chi_{k+1}.
$$
But by the discussion above $A_{k-1}$ and $\tilde A_{k+1}$ have
disjoint spectra, so $\Xi$ can only be an eigenvector if either
$\chi_{k-1}=0$ or $\tilde\chi_{k+1}=0$.
Let us consider the case where $\chi_{k-1}=0$; the treatment of the second
case follows an identical argument. Let $j > k$ be the index of the
first non-zero entry of $\Xi$, i.e.\
\begin{equation}\label{zero:eq}
\Xi_1=\Xi_2=\cdots=\Xi_{j-1}=0\quad \text{and} \quad \Xi_j\neq 0.
\end{equation}
As $A\Xi=\lambda\Xi$, we have that
$$
0=\lambda\Xi_{j-1}=
A_{j-1,j-2}\Xi_{j-2}+A_{j-1,j-1}\Xi_{j-1}
+A_{j-1,j}\Xi_j=A_{j-1,j}\Xi_j
=-i(F_y)_{j-1,j}\Xi_j.
$$
As $(F_y)_{j-1,j}\neq 0$ this relation ensures that $\Xi_j=0$. But this is
in contradiction with~\eqref{zero:eq} and so $\Xi$ cannot admit any zero
entry. This completes the proof. \qquad
\end{proof}
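The two key facts established in this proof, namely that a large constant makes the eigenvalues of $A$ distinct and that no eigenvector of $A$ has a zero entry (which is what makes the Vandermonde matrix $M$ invertible), are easy to spot-check numerically. A sketch of ours for $J=1$ and $\kappa=10$ (both choices arbitrary):

```python
import numpy as np

Fy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Fz = np.diag([-1.0, 0.0, 1.0]).astype(complex)
kappa = 10.0
A = -1j * Fy - Fz @ Fz + kappa * Fz              # A = -iF_y - F_z^2 + kappa F_z

evals, evecs = np.linalg.eig(A)
gaps = [abs(evals[i] - evals[j]) for i in range(3) for j in range(i)]
min_entry = np.abs(evecs).min()                  # smallest |entry| over all eigenvectors
```

The minimal eigenvalue gap is of order $\kappa$, and even the smallest eigenvector entry, though tiny (it arises only at second order in the off-diagonal coupling), is strictly nonzero.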
{\em Proof of Lemma \ref{first:lem}}.
We begin by restating the problem as in the proof of Lemma
\ref{lem:invzakai}. We can write $\varphi_t(\rho,1)=
\tilde\rho_t/{\rm Tr}\,\tilde\rho_t$ with $\tilde\rho_t=
\sum_i\lambda_i\mathbb{E}[v_t^iv_t^{i*}|\mathcal{F}_t^y]$, where
$\lambda_i$ are convex weights and $v_t^i$ are given by the equations
\begin{equation}\label{eq:zaksupp}
dv_t^i=-iF_yv_t^i\,dt-\tfrac{1}{2}F_z^2v_t^i\,dt+
F_zv_t^i\,dW_t',\qquad v_0^i\in\CC^N\setminus\{0\}.
\end{equation}
Note that $\mathbb{E}{\rm Tr}[\varphi_t(\rho,1)\rho_f]=0$ iff
$\mathbb{E}{\rm Tr}[\tilde\rho_t\rho_f]=
\sum_i\lambda_i\mathbb{E}[v_t^{i*}\rho_fv_t^i]=0$. But as
$v_t^{i*}\rho_fv_t^i\ge 0$, we obtain $\EE V(\varphi_t(\rho,1))=1$
iff $v_t^{i*}v_f=0$ a.s.\ for all $i$.
To prove the assertion of the Lemma, it suffices to show that there exists
a $t\in[0,T]$ such that $\EE V(\varphi_t(\rho,1))<1$. Thus it is
sufficient to prove that
\begin{equation}\label{exitp2:eq}
\exists t\in[0,T]\quad\mbox{s.t.}\quad\PP(v_t^{*}v_f\ne 0)>0
\end{equation}
where $v_t$ is the solution of an equation of the form (\ref{eq:zaksupp}).
To this end we will use the support theorem, Theorem \ref{thm:supportth},
together with Lemma \ref{add:lem}.
To apply the support theorem we must first take care of two preliminary
issues. First, the support theorem in the form of Theorem
\ref{thm:supportth} must be applied to stochastic differential equations
with a Wiener process as the driving noise, whereas the noise $W_t'$ of
Eq.\ (\ref{eq:zaksupp}) is a Wiener process with (bounded) drift:
\begin{equation}
dW_t'=\sqrt{\eta}\,dy_t+\sqrt{1-\eta}\,d\hat W_t=
2\eta\,{\rm Tr}[F_z\rho_t]dt
+\sqrt{\eta}\,dW_t+\sqrt{1-\eta}\,d\hat W_t.
\end{equation}
Using Girsanov's theorem, however, we can find a new measure $\mathbb{Q}$
that is equivalent to $\mathbb{P}$, such that $W_t'$ is a Wiener process
under $\mathbb{Q}$ on the interval $[0,T]$. But as the two measures are
equivalent,
\begin{equation}\label{exitp:eq}
\exists t\in[0,T]\quad\mbox{s.t.}\quad\mathbb{Q}(v_t^{*}v_f\ne 0)>0
\end{equation}
implies (\ref{exitp2:eq}). Second, the support theorem refers to an
equation in the Stratonovich form; however, we can easily find the
Stratonovich form
\begin{equation}\label{eq:suppstrat}
dv_t=-iF_yv_t\,dt-F_z^2v_t\,dt+
F_zv_t\circ dW_t'
\end{equation}
which is equivalent to (\ref{eq:zaksupp}). It is easily verified that
this linear equation satisfies all the requirements of the support
theorem.
To proceed, let us suppose that~\eqref{exitp:eq} does not hold true.
Then
\begin{equation}\label{cont:eq}
\mathbb{Q}(v_t^{*}v_f=0)=1\qquad
\forall t\in[0,T].
\end{equation}
Recall the following sets: $\mathscr{W}_{v_0}$ is the set of continuous
paths starting at $v_0$, and $\mathscr{S}_{v_0}$ is the smallest closed
subset of $\mathscr{W}_{v_0}$ such that
$\mathbb{Q}(\{\omega\in\Omega:v_\cdot(\omega)\in\mathscr{S}_{v_0}\})=1$.
Now denote by $\mathscr{T}_{v_0,t}$ the subset of $\mathscr{W}_{v_0}$ such
that $v_t^{*}v_f=0$, and note that $\mathscr{T}_{v_0,t}$ is closed in the
compact uniform topology for any $t$. Then (\ref{cont:eq}) would imply
that $\mathscr{S}_{v_0}\subset\mathscr{T}_{v_0,t}$ for all $t\in[0,T]$.
But by the support theorem the solutions of (\ref{det:eq}) are elements of
$\mathscr{S}_{v_0}$, and by Lemma \ref{add:lem} there exists a time
$t\in[0,T]$ and a constant $C$ such that the solution of (\ref{det:eq}) is
not an element of $\mathscr{T}_{v_0,t}$. Hence we have a contradiction,
and the assertion is proved.
\qquad\endproof
\subsection*{Step 2}
We begin by extending the result of Lemma \ref{first:lem} to hold
uniformly in a neighborhood of the level set $\mathcal{S}_1$.
\begin{lemma}\label{fourth:lem}
There exists $\gamma>0$ such that $\chi(\rho)<1-\gamma$ for all
$\rho\in\SSS_{\ge 1-\gamma}$.
\end{lemma}
\begin{proof}
Suppose that for every $\xi>0$ there exists a matrix
$\rho_\xi\in\SSS_{>1-\xi}$ such that
$$
1-\xi<\chi(\rho_\xi)\leq 1.
$$
By extracting a subsequence $\xi_n \searrow 0$ and using the
compactness of $\SSS$, we can assume that $\rho_{\xi_n}\rightarrow
\rho_\infty\in \SSS_1$ and that $\chi(\rho_{\xi_n})\rightarrow 1$.
But by Lemma~\ref{first:lem} $\chi(\rho_\infty)=1-\epsilon<1$.
Now choose $s\in[0,T]$ such that
$$
\EE V(\varphi_s(\rho_\infty,1))=1-\epsilon.
$$
Using Feller continuity, Prop.\ \ref{pro:feller}, we can now write
$$
1=\lim_{n\rightarrow\infty}\chi(\rho_{\xi_n})
\leq\lim_{n\rightarrow \infty}\EE
V(\varphi_s(\rho_{\xi_n},1))
=\EE V(\varphi_s(\rho_\infty,1))
=1-\epsilon<1,
$$
which is a contradiction. Hence there exists $\xi>0$ such that
$\chi(\rho)\le 1-\xi$ for all $\rho\in\mathcal{S}_{>1-\xi}$.
The result follows by choosing $\gamma=\xi/2$.
\qquad
\end{proof}
The following Lemma is the main result of the second step.
\begin{lemma}\label{fifth:lem}
Let $\tau_{\rho}(\mathcal{S}_{>1-\gamma})$ be the first exit time of
$\varphi_t(\rho,1)$ from $\mathcal{S}_{>1-\gamma}$. Then
$$
\sup_{\rho\in\mathcal{S}_{>1-\gamma}}
\EE\tau_{\rho}(\mathcal{S}_{>1-\gamma})<\infty.
$$
\end{lemma}
\begin{proof}
The following result can be found in Dynkin (\cite{dynkin-book1}, p.\
111, Lemma 4.3):
$$
\EE\tau_\rho(\SSS_{>1-\gamma})\leq
\frac{T}{
1-\sup_{\zeta\in\SSS}\PP\{\tau_\zeta(\SSS_{>1-\gamma})> T\}}.
$$
We will show that
\begin{equation}\label{prob:eq}
\sup_{\zeta \in \SSS}
\PP\{\tau_\zeta(\SSS_{>1-\gamma})>T\}<1.
\end{equation}
This holds trivially for $\zeta\in\SSS_{\le 1-\gamma}$, as then
$\tau_\zeta(\SSS_{>1-\gamma})=0$. Let us thus suppose that
$$
\forall \epsilon>0 \quad
\exists \zeta_{\epsilon}\in\SSS_{>1-\gamma}
\quad \text{such that} \quad
\PP\{\tau_{\zeta_\epsilon}(\SSS_{>1-\gamma})>T\}>1-\epsilon.
$$
Then for all $s\in[0,T]$, we have that
$$
\EE V(\varphi_s(\zeta_\epsilon,1))
> (1-\epsilon)\inf_{\rho\in\SSS_{>1-\gamma}}V(\rho)
=(1-\epsilon)(1-\gamma).
$$
By compactness there exists a sequence $\epsilon_n \searrow 0$ and
$\zeta_\infty \in \SSS_{\ge 1-\gamma}$ such that
$\zeta_{\epsilon_n}\rightarrow \zeta_\infty$ as $n\rightarrow
\infty$. Thus by Prop.\ \ref{pro:feller}
$$
\EE V(\varphi_s(\zeta_\infty,1))\ge 1-\gamma \quad \forall s\in[0,T].
$$
But this is in contradiction with the result of Lemma~\ref{fourth:lem}.
Hence there exists an $\epsilon>0$ such that
$\sup_{\zeta\in\SSS}\PP\{\tau_{\zeta}(\SSS_{>1-\gamma})>T\}=
1-\epsilon$, and we obtain
$$
\EE(\tau_{\rho}(\SSS_{>1-\gamma}))\leq
\frac{T}{1-(1-\epsilon)}
=\frac{T}{\epsilon}<\infty
$$
uniformly in $\rho$. This completes the proof. \qquad
\end{proof}
\subsection*{Step 3}
In this step we deal with the situation where the initial state lies
inside the set $\SSS_{\le 1-\gamma}$. We will denote by
$u_1(\rho)=-\tr{i[F_y,\rho]\rho_f}$ and by $\varphi_t(\rho,u_1)$ the
solution of (\ref{single:eq}) with $\rho_0=\rho$ and with
$u_t=u_1(\rho_t)$. Denote by $\mathscr{A}$ the weak infinitesimal
operator of $\varphi_{t}(\rho,u_1)$. We will apply the stochastic
Lyapunov theorems with $Q_\lambda=\mathcal{S}$.
We begin by showing that there is a non-zero probability $p>0$ that
whenever the initial state lies inside $\SSS_{\le 1-\gamma}$ the
trajectories of the system never exit the set $\SSS_{<1-\gamma/2}$.
\begin{lemma}\label{sixth:lem}
For all $\rho \in \SSS_{\le 1-\gamma}$
$$
\PP\left[
\sup_{0\le t<\infty}V(\varphi_t(\rho,u_1))
\geq 1-\gamma/2
\right]
\leq 1-p=\frac{1-\gamma}{1-\gamma/2}<1.
$$
\end{lemma}
\begin{proof}
This follows from Theorem \ref{thm:localstab} and
$\mathscr{A}V(\rho)=-u_1(\rho)^2\leq 0$.
\qquad
\end{proof}
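Since $V(\rho)=1-\tr{\rho\rho_f}$ is linear in $\rho$, it picks up no second-order It\^o correction, so $\mathscr{A}V(\rho)$ is simply minus the trace of the deterministic drift of (\ref{single:eq}) against $\rho_f$. The identity $\mathscr{A}V(\rho)=-u_1(\rho)^2$ can thus be verified numerically; the check below is ours, for $J=1$ and target $v_f=\psi_2$, on a randomly drawn state:

```python
import numpy as np

Fy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Fz = np.diag([-1.0, 0.0, 1.0]).astype(complex)
rho_f = np.diag([0.0, 0.0, 1.0]).astype(complex)

def comm(a, b):
    return a @ b - b @ a

def u1(rho):  # the feedback law u_1(rho) = -Tr(i[F_y, rho] rho_f)
    return -np.trace(1j * comm(Fy, rho) @ rho_f).real

rng = np.random.default_rng(3)
X = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
rho = X @ X.conj().T
rho /= np.trace(rho).real                        # a generic state in S

u = u1(rho)
drift = -1j * u * comm(Fy, rho) - 0.5 * comm(Fz, comm(Fz, rho))
AV = -np.trace(drift @ rho_f).real               # generator applied to V
```

The double-commutator term contributes nothing here because $\rho_f$ projects onto an eigenvector of $F_z$, so the whole drift of $V$ reduces to $-u_1(\rho)^2\le 0$.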
We now restrict ourselves to the paths that never leave
$\SSS_{<1-\gamma/2}$. We will first show that these paths converge toward
$\rho_f$ in probability. We then extend this result to prove almost
sure convergence.
\begin{lemma}\label{seventh2:lem}
The sample paths of $\varphi_t(\rho,u_1)$ that never exit the set
$\SSS_{<1-\gamma/2}$ converge in probability to $\rho_f$ as
$t\to\infty$.
\end{lemma}
\begin{proof}
Consider the Lyapunov function
$$
\VV(\rho)=1-\tr{\rho\rho_f}^2.
$$
It is easily verified that $\VV(\rho)\geq 0$ for all $\rho\in\SSS$ and
that $\VV(\rho)=0$ iff $\rho=\rho_f$. A straightforward computation gives
\begin{equation*}
\mathscr{A}\VV(\rho)=
-2u_1(\rho)^2\,\tr{\rho\rho_f}
-4\eta\,(\lambda_f-\tr{\rho F_z})^2\,\tr{\rho\rho_f}^2
\le 0
\end{equation*}
where $\lambda_f$ is the eigenvalue of $F_z$ associated to $v_f$. Now
note that all the conditions of Theorem \ref{thm:lasalle} are satisfied by
virtue of Prop.\ \ref{pro:feller} and \ref{pro:uniformstoch}. Hence
$\varphi_{t}(\rho,u_1)$ converges in probability to the largest invariant
set contained in $\mathcal{C}=\{\rho\in\mathcal{S}:\mathscr{A}\VV(\rho)=0\}$.
In order to satisfy the condition $\mathscr{A}\VV(\rho)=0$, we must have
$u_1(\rho)^2\,\tr{\rho\rho_f}=0$ as well as $(\lambda_f-\tr{\rho
F_z})^2\,\tr{\rho\rho_f}^2=0$.
The latter implies that
$$
\text{either}\quad \tr{\rho\rho_f}=0 \qquad \text{or} \quad
\tr{\rho F_z}=\lambda_f.
$$
Let us investigate the largest invariant set contained in
$\mathcal{C}'=\{\rho\in\SSS:\tr{\rho F_z}=\lambda_f\}$. Clearly this
invariant set can only contain $\rho\in\mathcal{C}'$ for which
$\tr{\varphi_{t}(\rho,u_1)F_z}$ is constant. Using It\^o's rule we obtain
$$
d\,\tr{\rho_tF_z}=-iu_1(\rho_t)\,\tr{[F_y,\rho_t]F_z}\,dt
+2\sqrt{\eta}\,(\tr{F_z^2\rho_t}-\tr{F_z\rho_t}^2)\,dW_t.
$$
Hence in order for $\tr{\varphi_{t}(\rho,u_1)F_z}$ to be constant, we must
at least have
$$
\tr{F_z^2\rho}-\tr{F_z\rho}^2=0.
$$
But as in the proof of Prop.\ \ref{pro:reduction}, this implies that
$\rho=\psi_m\psi_m^*$ for some $m$, and thus the only possibilities are
$V(\rho)=0$ (for $\rho=v_fv_f^*$) or $V(\rho)=1$.
From the discussion above it is evident that the largest
invariant set contained in $\mathcal{C}$ must be contained inside the set
$\{\rho_f\}\cup\mathcal{S}_1$. But then the paths that never exit
$\mathcal{S}_{<1-\gamma/2}$ must converge in probability to $\rho_f$.
Thus the assertion is proved.
\qquad
\end{proof}
\begin{lemma}\label{seventh:lem}
$\varphi_t(\rho,u_1)$ converges to $\rho_f$ as $t\to\infty$ for
almost all paths that never exit the set $\SSS_{<1-\gamma/2}$.
\end{lemma}
\begin{proof}
Define the event $P^\rho_{<1-\gamma/2}=\{\omega\in\Omega:
\varphi_t(\rho,u_1)\mbox{ never exits }\SSS_{<1-\gamma/2}\}$.
Then Lemma \ref{seventh2:lem} implies that
$$
\lim_{t\to\infty}
\mathbb{P}\left(\|\varphi_t(\rho,u_1)-\rho_f\|>\varepsilon
\,\left|\,P^\rho_{<1-\gamma/2}\right.\right)=0
\qquad\forall\varepsilon>0.
$$
By continuity of $V$, this also implies
$$
\lim_{t\to\infty}
\mathbb{P}\left(V(\varphi_t(\rho,u_1))>\varepsilon
\,\left|\,P^\rho_{<1-\gamma/2}\right.\right)=0
\qquad\forall\varepsilon>0.
$$
As $V(\rho)\le 1$, we have
\begin{equation*}
\begin{split}
\mathbb{E}\left(V(\varphi_t(\rho,u_1))\,\left|
\,P^\rho_{<1-\gamma/2}\right.\right)
\le &~
\mathbb{P}\left(V(\varphi_t(\rho,u_1))>\varepsilon
\,\left|\,P^\rho_{<1-\gamma/2}\right.\right) \\
& ~~+\varepsilon\left[1-
\mathbb{P}\left(V(\varphi_t(\rho,u_1))>\varepsilon
\,\left|\,P^\rho_{<1-\gamma/2}\right.\right)\right].
\end{split}
\end{equation*}
Thus
$$
\limsup_{t\to\infty}\,
\mathbb{E}\left(V(\varphi_t(\rho,u_1))\,\left|
\,P^\rho_{<1-\gamma/2}\right.\right)
\le\varepsilon\qquad\forall\varepsilon>0
$$
which implies
$$
\lim_{t\to\infty}
\mathbb{E}\left(V(\varphi_t(\rho,u_1))\,\left|
\,P^\rho_{<1-\gamma/2}\right.\right)
=0.
$$
But we know by Theorem \ref{thm:localstab} that $V(\varphi_t(\rho,u_1))$
converges almost surely. As $V$ is bounded, we obtain by dominated
convergence
$$
\mathbb{E}\left(
\lim_{t\to\infty}
V(\varphi_t(\rho,u_1))\,\left|
\,P^\rho_{<1-\gamma/2}\right.\right)
=0
$$
from which the result follows immediately. \qquad
\end{proof}
\subsection*{Step 4}
It remains to combine the results of Steps 2 and 3 to prove existence,
uniqueness and global stability of the solution $\rho_t$. We will denote
by $u$ the control law of Theorem \ref{single:thm} and by
$\varphi_t(\rho,u)$ the associated solution. Note that
$\varphi_t(\rho,u)$ is not a Markov process, as the control $u$ depends on
the past history of the solution. We will construct $\varphi_t(\rho,u)$
by pasting together the strong Markov processes $\varphi_t(\rho,1)$ and
$\varphi_t(\rho,u_1)$ at the times where the control switches.
\begin{lemma}\label{final:lem}
There is a unique solution $\varphi_t(\rho,u)$ for all
$t\in\mathbb{R}_+$. Moreover, for almost every sample path of
$\varphi_t(\rho,u)$ there exists a time $T<\infty$ after which the
path never exits the set $\mathcal{S}_{<1-\gamma/2}$ and the active
control law is $u_1$.
\end{lemma}
\begin{proof}
Fix the initial state $\rho$. We begin by constructing a solution
$\varphi_{t\wedge n}(\rho,u)$ up to (at most) an integer time
$n\in\mathbb{N}$. To this end, define the predictable stopping time
$$
\tau_1^n=\inf\{t\ge 0:\varphi_t(\rho,1)
\in\mathcal{S}_{\le 1-\gamma}\}\wedge n.
$$
Then we can define $\rho_{\tau_1^n}=\varphi_{\tau_1^n}(\rho,1)$
and $\varphi_{t\wedge n}(\rho,u)=\varphi_t(\rho,1)$ for $t<\tau_1^n$. In
the following, we will need the two-parameter solution
$\varphi_{s,t}(\rho,u')$ of the filtering equation under the simple
control $u'$, given the initial state $\rho$ at time $s$. Define
$$
\sigma_1^n=\inf\{t\ge\tau_1^n:
\varphi_{\tau_1^n,t}(\rho_{\tau_1^n},u_1)
\in\mathcal{S}_{\ge 1-\gamma/2}\}\wedge n.
$$
We can extend our solution by
$$
\varphi_{t\wedge n}(\rho,u)=\chi_{t<\tau_1^n}\varphi_{t}(\rho,1)+
\chi_{\tau_1^n\le t<\sigma_1^n}
\varphi_{\tau_1^n,t}(\rho_{\tau_1^n},u_1),
\qquad t<\sigma_1^n
$$
where $\chi_A$ is the indicator function on the set $A$. To extend the
solution further, we continue again with the control law $u=1$.
Recursively, we define an entire sequence of predictable stopping times
$$
\sigma_k^n=
\inf\{t\ge\tau_k^n:
\varphi_{\tau_k^n,t}(\rho_{\tau_k^n},u_1)
\in\mathcal{S}_{\ge 1-\gamma/2}\}\wedge n,
$$
$$
\tau_k^n=
\inf\{t\ge\sigma_{k-1}^n:
\varphi_{\sigma_{k-1}^n,t}(\rho_{\sigma_{k-1}^n},1)
\in\mathcal{S}_{\le 1-\gamma}\}\wedge n,
$$
where
$$
\rho_{\sigma_k^n}=
\varphi_{\tau_k^n,\sigma_k^n}(\rho_{\tau_k^n},u_1),\qquad
\rho_{\tau_k^n}=
\varphi_{\sigma_{k-1}^n,\tau_k^n}(\rho_{\sigma_{k-1}^n},1).
$$
We can use these times to construct the solution
$$
\varphi_{t\wedge n}(\rho,u)=\chi_{t<\tau_1^n}\varphi_t(\rho,1)
+
\sum_{k=1}^\infty
\left[\chi_{\tau_k^n\le t<\sigma_k^n}
\varphi_{\tau_k^n,t}(\rho_{\tau_k^n},u_1)
+
\chi_{\sigma_k^n\le t<\tau_{k+1}^n}
\varphi_{\sigma_k^n,t}(\rho_{\sigma_k^n},1)
\right]
$$
for all times $t<\Sigma^n=\lim_{k\to\infty}\sigma_k^n\le n$ (the limit
exists, as $\sigma_k^n$ is a nondecreasing sequence of stopping times).
Moreover, the solution is a.s.\ unique, as the segments between each two
stopping times are a.s.\ uniquely defined.
Now note that as anticipated by the notation, it is not difficult to
verify that $\varphi_{t\wedge (n+1)}(\rho,u)=\varphi_{t\wedge n}(\rho,u)$
a.s.\ for $t<\Sigma^n$, and moreover $\Sigma^n=\Sigma\wedge n$,
$\tau_k^n=\tau_k\wedge n$, $\sigma_k^n=\sigma_k\wedge n$ where
$\Sigma=\lim_{n\to\infty}\Sigma^n$ etc. Hence we can let $n\to\infty$ to
obtain the unique solution $\varphi_t(\rho,u)$ defined up to the
accumulation time $\Sigma$, where $\tau_k$, $\sigma_k$ are the consecutive
times at which the control switches. It remains to prove that the solution
exists for all time, i.e.\ that $\Sigma=\infty$ a.s. In particular, this
uniquely defines a c{\`a}dl{\`a}g control $u_t$, so that by uniqueness
$\varphi_t(\rho,u)$ must coincide with the solution of (\ref{eq:qfilt})
with the control $u_t$. Below we will prove that a.s., only finitely many
$\sigma_k$ are finite. This is sufficient to prove not only existence,
but also the second statement of the Lemma.
To proceed, we use the fact that the strong Markov property holds on each
segment between consecutive switching times $\tau_n\le t<\sigma_n$ or
$\sigma_n\le t<\tau_{n+1}$. Thus
\begin{equation*}
\begin{split}
\mathbb{P}(\sigma_n<\infty&\mbox{ and }\tau_n<\infty)= \\
&\int
\chi_{\tau_n<\infty}(\tilde\omega)\,
\mathbb{P}(\varphi_{t}(\rho_{\tau_n}(\tilde\omega),u_1)
\mbox{ exits }\mathcal{S}_{<1-\gamma/2}\mbox{ in finite time})
\,\mathbb{P}(d\tilde\omega)
\end{split}
\end{equation*}
which implies
\begin{equation*}
\begin{split}
\mathbb{P}(\sigma_n<\infty&\,|\,\tau_n<\infty)= \\
&\int
\mathbb{P}(\varphi_{t}(\rho_{\tau_n}(\tilde\omega),u_1)
\mbox{ exits }\mathcal{S}_{<1-\gamma/2}\mbox{ in finite time})
\,\mathbb{P}(d\tilde\omega\,|\,\tau_n<\infty).
\end{split}
\end{equation*}
But $\rho_{\tau_n}\in\mathcal{S}_{\le 1-\gamma}$ on a set
$\Omega_{\tau_n}$ with $\mathbb{P}(\Omega_{\tau_n}\,|\,\tau_n<\infty)=1$.
Hence by Lemma \ref{sixth:lem}
$$
\mathbb{P}(\sigma_n<\infty\,|\,\tau_n<\infty) \le 1-p.
$$
Through a similar argument, and using Lemma \ref{fifth:lem}, we obtain
$$
\mathbb{P}(\tau_n<\infty\,|\,\sigma_{n-1}<\infty) = 1.
$$
But note that by construction
$$
\mathbb{P}(\tau_n<\infty\,|\,\sigma_{n}<\infty)=
\mathbb{P}(\sigma_{n-1}<\infty\,|\,\tau_n<\infty)=1.
$$
Hence we obtain
\begin{equation*}
\begin{split}
\frac{\mathbb{P}(\sigma_n<\infty)}{\mathbb{P}(\sigma_{n-1}<\infty)}
&=
\frac{
\mathbb{P}(\tau_n<\infty\,|\,\sigma_{n}<\infty)
\mathbb{P}(\sigma_n<\infty)
}{
\mathbb{P}(\tau_n<\infty)
}\,
\frac{
\mathbb{P}(\sigma_{n-1}<\infty\,|\,\tau_n<\infty)
\mathbb{P}(\tau_n<\infty)
}{
\mathbb{P}(\sigma_{n-1}<\infty)
} \\
&=\mathbb{P}(\sigma_n<\infty\,|\,\tau_n<\infty)\,
\mathbb{P}(\tau_n<\infty\,|\,\sigma_{n-1}<\infty)\le 1-p.
\end{split}
\end{equation*}
But $\mathbb{P}(\sigma_1<\infty)=
\mathbb{P}(\sigma_1<\infty\,|\,\tau_1<\infty)\le 1-p$ as $\tau_1<\infty$
a.s. Hence
$$
\mathbb{P}(\sigma_n<\infty)\le(1-p)^n
$$
and thus
$$
\sum_{n=1}^\infty\mathbb{P}(\sigma_n<\infty)
\le \sum_{n=1}^\infty(1-p)^n=\frac{1-p}{p}<\infty.
$$
By the Borel-Cantelli lemma, we conclude that
$$
\mathbb{P}(\sigma_n<\infty\mbox{ for infinitely many }n)=0.
$$
Hence $\Sigma=\infty$ a.s.\ and for almost every sample path, there exists
an integer $N<\infty$ such that $\sigma_n=\infty$ (and hence also
$\tau_{n+1}=\infty$) for all $n\ge N$, and such that $\sigma_n<\infty$
(and hence also $\tau_{n+1}<\infty$) for all $n<N$, which implies the
assertion.
\qquad
\end{proof}
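The final estimate is an elementary geometric series. As a quick numerical sanity check (the value of $p$ below is arbitrary and purely illustrative), the partial sums of $\sum_{n\ge 1}(1-p)^n$ indeed converge to the quoted value $(1-p)/p$:

```python
# Sanity check of the geometric bound used above.
# The escape probability p is arbitrary here (p = 0.3 is purely illustrative).
p = 0.3

# Partial sums of sum_{n>=1} (1-p)^n, which bound sum_n P(sigma_n < infinity).
partial_sum = sum((1 - p) ** n for n in range(1, 500))
closed_form = (1 - p) / p  # the value quoted in the proof

assert abs(partial_sum - closed_form) < 1e-12
```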
Finally, we can now put together all the ingredients and complete the
proof of Theorem \ref{single:thm}.
{\em Proof of Theorem \ref{single:thm}}.
We must check three things: that the target state $\rho_f$ is (locally)
stable in probability; that almost all sample paths are attracted to
the target state as $t\to\infty$; and that this is also true in
expectation. Existence and uniqueness of the solution follows from
Lemma \ref{final:lem}.
(i) To study local stability, we can restrict ourselves to the stopped
process
$$ \varphi_{t\wedge\tilde\tau}(\rho,u)=
\varphi_{t\wedge\tilde\tau}(\rho,u_1),\quad
\tilde\tau=\inf\{t:\varphi_t(\rho,u)\not\in\mathcal{S}_{<1-\gamma/2}\}.
$$
Denote by $\mathscr{\tilde A}$ the weak infinitesimal operator of
$\varphi_{t\wedge\tilde\tau}(\rho,u_1)$, and note that Prop.\
\ref{pro:stopopen} allows us to calculate $\mathscr{\tilde A}V$ from
(\ref{single:eq}) in the usual way. In particular, we find
$\mathscr{\tilde A}V(\rho)=-u_1(\rho)^2\le 0$ for
$\rho\in\mathcal{S}_{<1-\gamma/2}$. Hence we can apply Theorem
\ref{thm:localstab} with $Q_\lambda=\mathcal{S}_{<1-\gamma/2}$ to conclude
stability in probability.
(ii) From Lemmas \ref{seventh:lem} and \ref{final:lem}, it follows that
$\varphi_{t}(\rho,u)\to\rho_f$ a.s.\ as $t\to\infty$.
(iii) We have shown that
$$
\mathbb{E}\left[
\lim_{t\to\infty}V(\varphi_{t}(\rho,u))
\right]=V(\rho_f)=0.
$$
But as $V$ is uniformly bounded, we obtain by dominated convergence
$$
V\left(\lim_{t\to\infty}\mathbb{E}\varphi_{t}(\rho,u)\right)=
\lim_{t\to\infty}\mathbb{E}\left[V(\varphi_{t}(\rho,u))\right]=0
$$
where we have used that $V$ is linear and continuous. Hence
$\mathbb{E}\varphi_{t}(\rho,u)\to\rho_f$.
\qquad\endproof
\section{Two-qubit systems}
\label{multi:sec}
The methods employed in the previous section can be extended to other
quantum feedback control problems. As an example, we treat the case of
two qubits in a symmetric dispersive interaction with an optical probe
field. Qubits, i.e.\ two-level quantum systems (having a Hilbert space
of dimension two), and in particular correlated (entangled) states of
multiple such qubits, play an important role in quantum information
processing. Here we investigate the stabilization of two such states in
the two-qubit system.
We begin by defining the Pauli matrices
$$
\sigma_x=\left(
\begin{array}{cc}
0 & 1 \\ 1 & 0
\end{array}
\right),\qquad
\sigma_y=\left(
\begin{array}{cc}
0 & -i \\ i & 0
\end{array}
\right),\qquad
\sigma_z=\left(
\begin{array}{cc}
1 & 0 \\ 0 & -1
\end{array}
\right)
$$
and we define the basis $\psi_\uparrow=(1~0)^*$ and
$\psi_\downarrow=(0~1)^*$ in $\mathbb{C}^2$. A system of two qubits
lives on the 4-dimensional space $\mathbb{C}^2\otimes\mathbb{C}^2$ with
the standard basis
\{$\psi_{\uparrow\uparrow}=\psi_\uparrow\otimes\psi_\uparrow$,
$\psi_{\uparrow\downarrow}=\psi_\uparrow\otimes\psi_\downarrow$,
$\psi_{\downarrow\uparrow}=\psi_\downarrow\otimes\psi_\uparrow$,
$\psi_{\downarrow\downarrow}=\psi_\downarrow\otimes\psi_\downarrow$\}.
We denote by $\sigma_{x,y,z}^1=\sigma_{x,y,z}\otimes\II$ and
$\sigma_{x,y,z}^2=\II\otimes\sigma_{x,y,z}$ the Pauli matrices on the
first and second qubit, respectively, and by
$F_{x,y,z}=\sigma_{x,y,z}^1+\sigma_{x,y,z}^2$ the (unnormalized)
collective angular momentum operators.
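As a concrete cross-check (not part of the development; a NumPy sketch with our own variable names), the operators above can be built explicitly, verifying the Pauli algebra and that $F_z$ is diagonal with spectrum $\{2,0,0,-2\}$ on the standard product basis:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Single-qubit operators embedded in the two-qubit space C^2 (x) C^2
sy1, sy2 = np.kron(sy, I2), np.kron(I2, sy)
sz1, sz2 = np.kron(sz, I2), np.kron(I2, sz)

# Collective (unnormalized) angular momentum operators
Fy = sy1 + sy2
Fz = sz1 + sz2

# Pauli algebra: sigma_x sigma_y = i sigma_z, and each sigma squares to I
assert np.allclose(sx @ sy, 1j * sz)
assert np.allclose(sz @ sz, I2)

# F_z is diagonal with eigenvalues 2, 0, 0, -2 on the standard product basis
assert np.allclose(np.diag(Fz), [2, 0, 0, -2])
```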
The quantum filtering equation for the two-qubit system is given by an
equation of the form (\ref{eq:qfilt}):
\begin{equation}\label{twoq:eq}
\begin{split}
d\rho_t&=-iu_1(t)[\sigma_y^1,\rho_t]\,dt-iu_2(t)[\sigma_y^2,\rho_t]\,dt
\\
&\qquad-\tfrac{1}{2}[F_z,[F_z,\rho_t]]\,dt
+\sqrt{\eta}\,(F_z\rho_t+\rho_t F_z-2\,\tr{F_z \rho_t}\rho_t)\,dW_t
\end{split}
\end{equation}
where $u_1$ and $u_2$ are two independent controls acting as local
magnetic fields in the $y$-direction on each of the qubits.
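The coefficients of (\ref{twoq:eq}) preserve the density-matrix structure of $\rho_t$. The following sketch (our own variable names, illustrative control values, and $\eta$ set to $1$) checks at a random state that both the drift and the diffusion coefficient are traceless and Hermitian, so the filter preserves trace and self-adjointness:

```python
import numpy as np

rng = np.random.default_rng(0)

sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
sy1, sy2 = np.kron(sy, I2), np.kron(I2, sy)
Fz = np.kron(sz, I2) + np.kron(I2, sz)

# A random two-qubit density matrix
X = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = X @ X.conj().T
rho /= np.trace(rho)

def comm(a, b):
    return a @ b - b @ a

u1, u2 = 0.7, -0.3  # arbitrary control values, purely illustrative
drift = (-1j * u1 * comm(sy1, rho) - 1j * u2 * comm(sy2, rho)
         - 0.5 * comm(Fz, comm(Fz, rho)))
innovation = Fz @ rho + rho @ Fz - 2 * np.trace(Fz @ rho) * rho

# Both coefficient terms are traceless, so d tr(rho_t) = 0
assert abs(np.trace(drift)) < 1e-10
assert abs(np.trace(innovation)) < 1e-10
# Both terms are Hermitian, preserving self-adjointness of rho_t
assert np.allclose(drift, drift.conj().T)
assert np.allclose(innovation, innovation.conj().T)
```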
The main goal of this section is to stabilize this system around
two interesting target states,
$$
\rho_s=\frac{1}{2}(\psi_{\uparrow\downarrow}+\psi_{\downarrow\uparrow})
(\psi_{\uparrow\downarrow}+\psi_{\downarrow\uparrow})^*,
\qquad
\rho_a=\frac{1}{2}(\psi_{\uparrow\downarrow}-\psi_{\downarrow\uparrow})
(\psi_{\uparrow\downarrow}-\psi_{\downarrow\uparrow})^*.
$$
Here $\rho_s$ is a symmetric and $\rho_a$ is an antisymmetric qubit state.
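Both target states are readily verified numerically (a NumPy sketch, with our own variable names): they are pure, lie in the kernel of $F_z$, and the identity $[F_y,\rho_a]=0$, which is invoked in Step 3 below, holds for the antisymmetric state only:

```python
import numpy as np

up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
ud, du = np.kron(up, down), np.kron(down, up)  # psi_{ud}, psi_{du}

# Symmetric and antisymmetric target states
v_s, v_a = (ud + du) / np.sqrt(2), (ud - du) / np.sqrt(2)
rho_s, rho_a = np.outer(v_s, v_s.conj()), np.outer(v_a, v_a.conj())

sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Fy = np.kron(sy, I2) + np.kron(I2, sy)
Fz = np.kron(sz, I2) + np.kron(I2, sz)

# Both targets are pure states lying in the kernel of F_z
for rho in (rho_s, rho_a):
    assert np.isclose(np.trace(rho), 1)
    assert np.allclose(rho @ rho, rho)
    assert np.allclose(Fz @ rho, 0)

# [F_y, rho_a] = 0 holds for the antisymmetric state only
assert np.allclose(Fy @ rho_a - rho_a @ Fy, 0)
assert not np.allclose(Fy @ rho_s - rho_s @ Fy, 0)
```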
\begin{theorem}\label{main2:thm}
Consider the following control law:
\begin{enumerate}
\item $u_1(t)=1-\tr{i[\sigma_y^1,\rho_t]\rho_a},~
u_2(t)=1-\tr{i[\sigma_y^2,\rho_t]\rho_a}$
if $\tr{\rho \rho_a}\ge\gamma$;
\item $u_1(t)=1,~u_2(t)=0$ if $\tr{\rho\rho_a}\le\gamma/2$;
\item If $\rho_t\in\mathcal{B}_a=\{\rho:\gamma/2<\tr{\rho
\rho_a}<\gamma\}$,
then take $u_1(t)=1-\tr{i[\sigma_y^1,\rho_t]\rho_a}$,
$u_2(t)=1-\tr{i[\sigma_y^2,\rho_t]\rho_a}$ if $\rho_t$
last entered the set $\mathcal{B}_a$ through the boundary
$\tr{\rho\rho_a}=\gamma$, and $u_1(t)=1,~u_2(t)=0$ otherwise.
\end{enumerate}
Then $\exists\gamma>0$ s.t.\ {\rm (\ref{twoq:eq})} is globally stable
around $\rho_a$ and $\mathbb{E}\rho_t\to\rho_a$ as $t\to\infty$.
Similarly, the control law
\begin{enumerate}
\item $u_1(t)=1-\tr{i[\sigma_y^1,\rho_t]\rho_s},~
u_2(t)=-1-\tr{i[\sigma_y^2,\rho_t]\rho_s}$
if $\tr{\rho \rho_s}\ge\gamma$;
\item $u_1(t)=1,~u_2(t)=0$ if $\tr{\rho\rho_s}\le\gamma/2$;
\item If $\rho_t\in\mathcal{B}_s=\{\rho:\gamma/2<\tr{\rho
\rho_s}<\gamma\}$,
then take $u_1(t)=1-\tr{i[\sigma_y^1,\rho_t]\rho_s}$,
$u_2(t)=-1-\tr{i[\sigma_y^2,\rho_t]\rho_s}$ if $\rho_t$
last entered the set $\mathcal{B}_s$ through the boundary
$\tr{\rho\rho_s}=\gamma$, and $u_1(t)=1,~u_2(t)=0$ otherwise.
\end{enumerate}
stabilizes the system around the symmetric state $\rho_s$.
\end{theorem}
We will prove the result for the antisymmetric case; the proof
for the symmetric case may be done exactly in the same manner.
We proceed in the same way as in the proof of Theorem~\ref{single:thm}.
\subsection*{Step 1}
The proof of Lemma \ref{first:lem} carries over directly to the two
qubit case. The proof of Lemma \ref{add:lem} also carries over after
minor modifications; in particular, in the two qubit case we can
explicitly compute that
$$
A=-i\sigma_y^1-F_z^2+2F_z=
\begin{pmatrix}
0 & -1 & 0 &0 \\
1 & 0 &0& 0\\
0 & 0& 0& -1\\
0 & 0& 1& -8\\
\end{pmatrix}
$$
admits the diagonalization $A=PDP^{-1}$ with
$$
P=
\begin{pmatrix}
1 & 1 & 0 &0 \\
-i & i &0& 0\\
0 & 0& 1& 1\\
0 & 0& .1270& 7.8730\\
\end{pmatrix},\qquad
D=\begin{pmatrix}
i & 0 & 0 &0 \\
0 & -i &0& 0\\
0 & 0& -.1270& 0\\
0 & 0& 0& -7.8730\\
\end{pmatrix}.
$$
Hence the matrix $A$ has a nondegenerate spectrum and moreover
$$
\tilde v_a=\tfrac{1}{\sqrt 2}\,
P^*(\psi_{\uparrow\downarrow}-\psi_{\downarrow\uparrow})
=\tfrac{1}{\sqrt 2}\,(i ~ -i ~ -1 ~ -1)^*
$$
has only nonzero entries. The remainder of the proof is identical to
that of Lemma~\ref{first:lem}.
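These spectral claims can be confirmed numerically. The sketch below assembles $A=-i\sigma_y^1-F_z^2+2F_z$ in the standard product basis (the ordering differs from the display above, which does not affect the spectrum), checks that the eigenvalues $\{\pm i,\,-4\pm\sqrt{15}\}$ are pairwise distinct, and verifies that $\tfrac{1}{\sqrt 2}(\psi_{\uparrow\downarrow}-\psi_{\downarrow\uparrow})$ has nonzero overlap with every eigenvector:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
sy1 = np.kron(sy, I2)
Fz = np.kron(sz, I2) + np.kron(I2, sz)

A = -1j * sy1 - Fz @ Fz + 2 * Fz

eigvals, eigvecs = np.linalg.eig(A)

# Nondegenerate spectrum {+i, -i, -4 + sqrt(15), -4 - sqrt(15)},
# numerically +-i, -0.1270, -7.8730
expected = [1j, -1j, -4 + np.sqrt(15), -4 - np.sqrt(15)]
for lam in expected:
    assert np.min(np.abs(eigvals - lam)) < 1e-8
assert min(abs(a - b) for i, a in enumerate(eigvals)
           for b in eigvals[i + 1:]) > 1e-8

# The antisymmetric vector overlaps every eigenvector nontrivially
up, down = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)
v_a = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)
overlaps = eigvecs.conj().T @ v_a
assert np.all(np.abs(overlaps) > 1e-6)
```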
\subsection*{Step 2}
The proofs of Lemmas~\ref{fourth:lem} and \ref{fifth:lem} carry over
directly.
\subsection*{Step 3}
The proofs of Lemmas \ref{sixth:lem} and \ref{seventh:lem} carry over
directly. The following replaces Lemma \ref{seventh2:lem}. We denote by
$U_1(\rho)=1-\tr{i[\sigma_y^1,\rho]\rho_a}$,
$U_2(\rho)=1-\tr{i[\sigma_y^2,\rho]\rho_a}$ and by
$\varphi_t(\rho,U_1,U_2)$ the associated solution of (\ref{twoq:eq}).
\begin{lemma}\label{qu:lem}
The sample paths of $\varphi_t(\rho,U_1,U_2)$ that never exit the set
$\SSS_{<1-\gamma/2}$ converge in probability to $\rho_a$ as
$t\to\infty$.
\end{lemma}
\begin{proof}
Consider the Lyapunov function
$$
\VV(\rho)=1-\tr{\rho\rho_a}^2.
$$
It is easily verified that $\VV(\rho)\geq 0$ for all $\rho\in\SSS$ and
that $\VV(\rho)=0$ iff $\rho=\rho_a$. A straightforward computation gives
\begin{equation*}
\mathscr{A}\VV(\rho)=
-2\left[
(U_1(\rho)-1)^2+(U_2(\rho)-1)^2
\right]\tr{\rho\rho_a}
-4\eta\,\tr{\rho F_z}^2\,\tr{\rho\rho_a}^2
\le 0
\end{equation*}
where $\mathscr{A}$ is the weak infinitesimal operator associated
to $\varphi_t(\rho,U_1,U_2)$ (here we have used $[F_y,\rho_a]=0$
in calculating this expression). Now note that all the conditions
of Theorem \ref{thm:lasalle} are satisfied by virtue of Prop.\
\ref{pro:feller} and \ref{pro:uniformstoch}. Hence
$\varphi_t(\rho,U_1,U_2)$ converges in probability to the largest
invariant set contained in
$\mathcal{C}=\{\rho\in\mathcal{S}:\mathscr{A}\VV(\rho)=0\}$.
In order to satisfy the condition $\mathscr{A}\VV(\rho)=0$ we must have
at least
$$
\text{either}\quad \tr{\rho\rho_a}=0 \qquad \text{or} \quad
\tr{\rho F_z}=0.
$$
Let us investigate the largest invariant set contained in
$\mathcal{C}'=\{\rho\in\SSS:\tr{\rho F_z}=0\}$. Clearly this
invariant set can only contain $\rho\in\mathcal{C}'$ for which
$\tr{\varphi_t(\rho,U_1,U_2)F_z}$ is constant. Using It\^o's rule we
obtain
$$
d\,\tr{\rho_tF_z}=-\sum_{j=1}^2
U_j(\rho_t)\,\tr{i[\sigma_y^j,\rho_t]F_z}\,dt
+2\sqrt{\eta}\,(\tr{F_z^2\rho_t}-\tr{F_z\rho_t}^2)\,dW_t.
$$
Hence in order for $\tr{\varphi_t(\rho,U_1,U_2)F_z}$ to be constant, we
must at least have
$$
\tr{F_z^2\rho}-\tr{F_z\rho}^2=0
$$
which implies that $\rho$ must be an eigenstate of $F_z$.
Such a state can only take one of the following forms: either
$\rho=\psi_{\uparrow\uparrow}\psi_{\uparrow\uparrow}^*$ or
$\rho=\psi_{\downarrow\downarrow}\psi_{\downarrow\downarrow}^*$, or
$\rho$ is any state of the form
\begin{equation}\label{mixedset:eq}
\rho=
\alpha\psi_{\uparrow\downarrow}\psi_{\uparrow\downarrow}^*+
\beta\psi_{\uparrow\downarrow}\psi_{\downarrow\uparrow}^*+
\beta^*\psi_{\downarrow\uparrow}\psi_{\uparrow\downarrow}^*+
(1-\alpha)\psi_{\downarrow\uparrow}\psi_{\downarrow\uparrow}^*.
\end{equation}
Let us investigate in particular the latter case. Note that any density
matrix of the form (\ref{mixedset:eq}) satisfies $F_z\rho=\rho F_z=0$.
Suppose that (\ref{twoq:eq}) with $u_1=U_1$, $u_2=U_2$ leaves the set
(\ref{mixedset:eq}) invariant; then the solution at time $t$ of
\begin{equation}\label{eq:deteffeq}
\frac{d}{dt}\rho_t=-i[F_y,\rho_t]
\end{equation}
must coincide with $\varphi_t(\rho,U_1,U_2)$ when $\rho$ is of the
form (\ref{mixedset:eq}), and in particular (\ref{eq:deteffeq})
must leave the set (\ref{mixedset:eq}) invariant (here we have
used that $U_1(\rho)=U_2(\rho)=1$ for $\rho$ of the form
(\ref{mixedset:eq})). We claim that this is only the case if
$\rho=\rho_a$, which implies that of all states of the form
(\ref{mixedset:eq}) only $\rho_a$ is in fact invariant. To see
this, note that by Lemma \ref{l:hullc} we can write any $\rho$ of
the form (\ref{mixedset:eq}) as a convex combination
$\sum_i\lambda_i\psi^i\psi^{i*}$ of unit vectors $\psi^i\in {\rm
span}\{\psi_{\uparrow\downarrow},\psi_{\downarrow\uparrow}\}$.
Thus the solution of (\ref{eq:deteffeq}) at time $t$ is given by
$\sum_i\lambda_i\psi_t^i\psi_t^{i*}$ with
\begin{equation*}
\frac{d}{dt}\psi_t^i=-iF_y\psi_t^i,\qquad\psi_0^i=\psi^i.
\end{equation*}
But $F_y\psi^i\not\in{\rm span}\{\psi_{\uparrow\downarrow},
\psi_{\downarrow\uparrow}\}$ unless $\psi^i\propto
\psi_{\uparrow\downarrow}-\psi_{\downarrow\uparrow}$, which
implies the assertion.
From the discussion above it is evident that the largest invariant
set contained in $\mathcal{C}$ must be contained inside the set
$\{\rho_a\}\cup\mathcal{S}_1$. But then the paths that never exit
$\mathcal{S}_{<1-\gamma/2}$ must converge in probability to
$\rho_a$. Thus the Lemma is proved.\qquad
\end{proof}
\subsection*{Step 4}
The remainder of the proof of Theorem~\ref{main2:thm} carries over
directly.
\section*{Acknowledgments}
The authors thank Hideo Mabuchi and Houman Owhadi for helpful
discussions.
\section{Introduction}
Chain conditions appear frequently in the study of countable groups. These are finiteness conditions that forbid certain infinite sequences of subgroups. An elementary but interesting example of such a condition is the property of being polycyclic. From a geometric group theory perspective, these finiteness conditions ought to restrict the complexity of the groups, as in the case of polycyclic groups. From a descriptive set theory perspective, however, the chain conditions are non-Borel co-analytic statements and, therefore, either admit ``nice'' non-chain-condition characterizations - e.g.\ polycyclic groups are soluble with each term of the derived series finitely generated - or describe large and wild classes. In this work, we explore this tension in four chain conditions in the space of marked groups.\par
\indent In the space of marked groups, denoted $\mathscr{G}$, we first consider three well-known chain conditions: the minimal condition on centralizers, the maximal condition on subgroups, and the maximal condition on normal subgroups. We characterize each of these in terms of well-founded descriptive-set-theoretic trees. This characterization implies the classes in question are large and wild, in the sense that they do not admit ``nice'' characterizations.
\begin{thm}
Each of the subsets of $\mathscr{G}$ defined by the minimal condition on centralizers, the maximal condition on subgroups, and the maximal condition on normal subgroups is co-analytic and not Borel. This remains true when restricting to finitely generated groups.
\end{thm}
Our techniques additionally give new ordinal-valued isomorphism invariants unbounded below the first uncountable ordinal in the cases of the minimal condition on centralizers and the maximal condition on subgroups. The ordinal-valued isomorphism invariant we obtain in the case of the maximal condition on normal subgroups is not new and has been considered in the literature; cf. \cite{C11}. However, our approach is new, and we show that this invariant is unbounded below the first uncountable ordinal.\par
\indent We next consider the set of elementary amenable marked groups. We likewise characterize these in terms of descriptive-set-theoretic trees. It follows that elementary amenability is indeed a chain condition.
\begin{thm}
A countable group $G$ is elementary amenable if and only if there is no infinite descending sequence of the form
$$G=G_0\geq G_1\geq\ldots\geq G_n \geq \ldots $$
such that for all $n\geq 0$, $G_n\neq\{e\}$ and there is a finitely generated subgroup $K_n\leq G_n$ with $G_{n+1}= [K_n,K_n]\cap H_n$, where $H_n$ is the intersection of the index-$(\leq(n+1))$ normal subgroups of $K_n$.
\end{thm}
Our characterization gives two new invariants of elementary amenable groups: the decomposition rank and decomposition degree. We further obtain
\begin{thm}
The sets of elementary amenable groups and finitely generated elementary amenable groups are co-analytic and non-Borel in the space of marked groups.
\end{thm}
It is well-known that the set of amenable groups is Borel in the space of marked groups. Our theorem thus gives a non-constructive answer to an old question of M. Day \cite{D57}, which was open until R. I. Grigorchuk \cite{G84} constructed groups of intermediate growth: \textit{Are all finitely generated amenable groups elementary amenable?}
\begin{cor}
There is a finitely generated amenable group that is not elementary amenable.
\end{cor}
The paper is organized as follows. In Section \ref{sec:Prelim}, we discuss the basic properties of $\mathscr{G}$ and introduce concepts from descriptive set theory. In Sections \ref{sec:MinCent}, \ref{sec:Max}, and \ref{sec:MaxN}, we analyze sets of groups satisfying various chain conditions. This introduces our use of descriptive-set-theoretic trees to study the structure of groups as well as the ordinal-valued invariants arising from those trees. In Section \ref{sec:EAGroups}, we use those same techniques to analyze elementary amenable groups. In Section \ref{sec:Borel}, we prove the maps used throughout the paper are indeed Borel. Those who are content to believe that our constructions are Borel can safely skip this section without missing any group-theoretic content. Finally, Section \ref{sec:Remarks} discusses some questions arising from this paper not touched upon in earlier sections.
\section{Preliminaries}\label{sec:Prelim}
\subsection{The space of marked groups}
In order to apply the techniques of descriptive set theory to groups, we need an appropriate space of groups. Let $\mathbb{F}_{\omega}$ be the free group on the letters $\{a_i\}_{i\in\mathbb{N}}$; so $\mathbb{F}_{\omega}$ is a free group on countably many generators with a distinguished set of generators. The power set of $\mathbb{F}_{\omega}$ may be naturally identified with the Cantor space $\{0,1\}^{\mathbb{F}_{\omega}}=:2^{\mathbb{F}_{\omega}}$. It is easy to check that the collection of normal subgroups of $\mathbb{F}_{\omega}$, denoted $\mathscr{G}$, is a closed subset of $2^{\mathbb{F}_{\omega}}$ and, hence, a compact Polish space. Each $N\in\mathscr{G}$ is identified with a \textbf{marked group}: that is, the group $G=\mathbb{F}_{\omega}/N$ along with a distinguished generating set $\{f_N(a_i)\}_{i\in \mathbb{N}}$, where $f_N:\mathbb{F}_{\omega}\rightarrow G$ is the usual projection; we always denote this projection by $f_N$. For a marked group $G$, we abuse notation and say $G\in \mathscr{G}$; of course, we formally mean $G=\mathbb{F}_{\omega}/N$ for some $N\in \mathscr{G}$. Since every countable group is a quotient of $\mathbb{F}_{\omega}$, $\mathscr{G}$ gives a compact Polish space of all countable groups. A sub-basis for this topology is given by sets of the form
$$ O_{\gamma} := \left\{N\in\mathscr{G} \mid \gamma\in N\right\}, $$
where $\gamma\in\mathbb{F}_{\omega}$, along with their complements.
\indent Similar reasoning leads us to define the space of \textbf{$m$-generated marked groups} as
\[
\mathscr{G}_m := \bigcap_{i\geq m}\{N \trianglelefteq \mathbb{F}_{\omega} \mid a_i\in N\}.
\]
This is a closed subset of $\mathscr{G}$ and so is a compact Polish space in its own right. We further let $\mathscr{G}_{fg}:=\cup_{m\geq 1} \mathscr{G}_m$ be the space of finitely generated marked groups. As this is an $F_\sigma$ subset of $\mathscr{G}$, it is a standard Borel space, with Borel sets precisely those sets of the form $\mathscr{G}_{fg}\cap B$ with $B$ Borel in $\mathscr{G}$; a \textbf{standard Borel space} is a Borel space which admits a Polish topology that induces the Borel structure. We can thus also talk about Borel functions with domain $\mathscr{G}_{fg}$.\par
\indent It is convenient to give the marked groups $G=\mathbb{F}_{\omega}/N$ a preferred enumeration. To this end, we fix an enumeration $\bfs{\gamma}:=(\gamma_i)_{i\in \mathbb{N}}$ of $\mathbb{F}_{\omega}$. Each $G$ is thus taken to come with an enumeration $f_N(\bfs{\gamma}):=(f_N(\gamma_i))_{i\in \mathbb{N}}$; note the enumeration of $G$ may have many repetitions. When we write $G$ as $G=\{g_0,g_1,\ldots\}$, we will always mean this enumeration. Later in the paper we will work with $\mathbb{N}^{<\mathbb{N}}$, i.e. the set of finite sequences of natural numbers. If $(s_0,\ldots,s_n)=:s\in\mathbb{N}^{<\mathbb{N}}$, we will write $\{g_s\}$ for the set $\{g_{s_0},\ldots,g_{s_n}\}$. Note that this set may have fewer than $n+1$ elements, e.g.\ if $s_0=s_1=\ldots=s_n$, or even if the $s_i$ are distinct but enumerate the same element.\par
\indent We will often discuss quotients of groups or particular subgroups of groups, and of course we wish to view these as elements of $\mathscr{G}$. A quotient of a marked group is obviously again a marked group. However, subgroups of marked groups do not have an obvious marking. The enumeration gives us a preferred way to select markings for subgroups. If $H\leq \mathbb{F}_{\omega}/N=G\in \mathscr{G}$, let $\pi_H\colon\mathbb{F}_{\omega}\to\mathbb{F}_{\omega}$ be induced by mapping the generators $(a_i)_{i\in \mathbb{N}}$ of $\mathbb{F}_{\omega}$ as follows:
\[
\pi_H(a_j):=
\begin{cases}
\gamma_j, & \text{ if }f_N(\gamma_j)\in H \\
e, & \text{ else.}
\end{cases}
\]
We then identify $H$ with $\mathbb{F}_{\omega}/\ker (f_N\circ\pi_H)$. In the case $H$ has a distinguished finite generating set $\{g_{i_0},\dots,g_{i_n}\}$, we instead define $\pi_H(a_{i_j})=\gamma_{i_j}$ and $\pi_H(a_j)=e$ for $j\neq i_k$; this streamlines our proofs later. We often appeal to this convention implicitly.\par
\indent We will consider maps from and on $\mathscr{G}$. A slogan from descriptive set theory is ``Borel = explicit'' meaning if you describe a map ``explicitly'', i.e. without an appeal to something like the axiom of choice, it should be Borel. All of the maps we discuss in the next few sections will be ``explicit'' in this sense, so we will not prove they are Borel when we define them, in order to keep the focus on the group-theoretic aspects of our constructions. We will often use enumerations of groups in our constructions, but this will not require choice since every marked group comes with a preferred enumeration. For those who are interested in the details, we discuss the descriptive-set-theoretic aspects of our constructions in Section \ref{sec:Borel}.
\subsection{Descriptive set theory}
\indent We are interested in certain types of non-Borel subsets of $\mathscr{G}$. The following definitions and theorems are all fundamental in descriptive set theory; a standard reference is \cite{K95}.
\begin{defn}
Let $X,Y$ be uncountable standard Borel spaces. Then $A\subseteq Y$ is \textbf{analytic} (denoted $\Sigma^1_1$) if there is a Borel set $B \subseteq X\times Y$ such that $\operatorname{proj}_Y(B)=A$. A set $C\subseteq Y$ is \textbf{co-analytic} (denoted $\Pi^1_1$) if $Y\setminus C$ is analytic.
\end{defn}
Every Borel set is analytic, but any uncountable standard Borel space contains non-Borel analytic sets. It follows that there are non-Borel co-analytic sets. The collection of analytic sets is closed under countable unions, countable intersections, and Borel preimages. It follows the collection of co-analytic sets is closed under countable unions, countable intersections, and Borel preimages. We remark that sets defined using a single existential quantifier which ranges over an uncountable standard Borel space are often analytic as such quantification can typically be rewritten as a projection of a Borel set. Thus sets defined by using a universal quantifier over an uncountable set are often co-analytic.
\begin{defn}
Let $X,Y$ be standard Borel spaces, and $A\subseteq X$, $B\subseteq Y$. We say that $A$ \textbf{Borel reduces} to $B$ if there is a Borel map $f\colon X\to Y$ such that $f^{-1}(B)=A$.
\end{defn}
If $A$ Borel reduces to $B$ and $B$ is Borel, analytic, or co-analytic, then so is $A$. This gives us a method for proving that sets are, for example, co-analytic simply by showing they Borel reduce to a co-analytic set. One important example comes from the space of (descriptive-set-theoretic) trees.
\begin{defn}
A set $T\subseteq\mathbb{N}^{<\mathbb{N}}$ of finite sequences of natural numbers is a \textbf{tree} if it is closed under initial segments. A sequence $x\in\mathbb{N}^\mathbb{N}$ is a branch of $T$ if for all $n\in\mathbb{N}$, $x\restriction n\in T$. For $s\in T$, $T_s:=\{r\in \mathbb{N}^{<\mathbb{N}}\mid s^{\smallfrown} r\in T\}$ where ``$^{\smallfrown}$" indicates concatenation of finite sequences.
\end{defn}
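Concretely, a tree may be represented as a set of tuples; the following Python sketch (the representation and helper names are ours) checks closure under initial segments and computes the subtree $T_s$:

```python
def is_tree(T):
    """Check closure under initial segments: every prefix of a member is a member."""
    return all(t[:k] in T for t in T for k in range(len(t)))

def subtree(T, s):
    """T_s = { r : s^r in T }, the tree of extensions of s."""
    return {t[len(s):] for t in T if t[:len(s)] == s}

# A small example tree: root, two children 0 and 1, and one grandchild (1,0).
T = {(), (0,), (1,), (1, 0)}
assert is_tree(T)
assert not is_tree({(1, 0)})           # missing the prefixes () and (1,)
assert subtree(T, (1,)) == {(), (0,)}  # T_(1) contains the empty node and (0,)
```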
As with groups, we may identify $X\subseteq\mathbb{N}^{<\mathbb{N}}$ with an element $f_X\in 2^{\mathbb{N}^{<\mathbb{N}}}$. We define
\[
Tr := \{ x\in 2^{\mathbb{N}^{<\mathbb{N}}} \mid x \text{ is a tree }\}.
\]
The set $Tr$ is a closed subset of $2^{\mathbb{N}^{<\mathbb{N}}}$ and so is a compact Polish space. A sub-basis for the topology on $Tr$ is given by sets of the form
$$ O_t := \left\{T\in Tr \mid t\in T\right\}, $$
where $t\in\mathbb{N}^{<\mathbb{N}}$, along with their complements.
\indent There are two subsets of $Tr$ of particular interest to us:
\[
IF := \{ T\in Tr \mid T \text{ has a branch } \}
\]
and $WF := Tr\setminus IF$. We call $WF$ the set of \textbf{well-founded} trees and $IF$ the set of \textbf{ill-founded} trees. One can check that $IF$ is analytic, so $WF$ is co-analytic. The importance of these sets comes from the following fact.
\begin{thm}\cite[Theorem 27.1]{K95}\label{thm:WFComplete}
Every analytic set Borel reduces to $IF$. Therefore, every co-analytic set Borel reduces to $WF$.
\end{thm}
\noindent Thus a set $A$ is co-analytic if and only if it Borel reduces to $WF$.\par
\indent We are interested in $WF$ for a second reason. Let $ORD$ denote the class of ordinals. For any $T\in WF$, we can define a function $\rho_T\colon T\to ORD$ inductively as follows: If $t\in T$ has no extensions in $T$, let $\rho_T(t)=0$. Otherwise let $\rho_T(t) = \sup\{\rho_T(s)+1 \mid s\in T,\ t\subsetneq s \}$. We may then define a rank function $\rho\colon Tr\to ORD$ by
\[
\rho(T) =
\begin{cases}
\rho_T(\emptyset)+1, & \text{if }T\in WF\\
\omega_1, & \text{else.}
\end{cases}
\]
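To illustrate the rank function, consider the finite tree $T=\{\emptyset,(0),(1),(1,0)\}$. The nodes $(0)$ and $(1,0)$ have no extensions in $T$, so $\rho_T((0))=\rho_T((1,0))=0$; then $\rho_T((1))=\rho_T((1,0))+1=1$, $\rho_T(\emptyset)=\sup\{\rho_T(s)+1\mid \emptyset\subsetneq s\in T\}=2$, and $\rho(T)=3$. Well-founded trees may also have infinite rank: the tree
\[
T'=\{\emptyset\}\cup\{(n,n-1,\ldots,n-k)\mid n\in\mathbb{N},\ 0\leq k\leq n\}
\]
has no infinite branch, yet $\rho_{T'}((n))=n$ for each $n\in\mathbb{N}$, so $\rho_{T'}(\emptyset)=\omega$ and $\rho(T')=\omega+1$.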
For $T=\emptyset$, we define $\rho(T)=0$. The function $\rho$ is bounded above by $\omega_1$, the first uncountable ordinal. Furthermore, this rank function has a special property:
\begin{defn}\label{def:PiRank}
Let $X$ be a standard Borel space and $A\subseteq X$. A function $\phi\colon A\to ORD$ is a \textbf{$\Pi^1_1$-rank} if there are relations $\leq_\phi^\Pi$, $\leq_\phi^\Sigma\subseteq X\times X$ such that $\leq_\phi^\Pi$ is co-analytic, $\leq_\phi^\Sigma$ is analytic, and for all $x\in X$ and $y\in A$,
\begin{align*}
x\in A \wedge \phi(x)\leq \phi(y) &\Leftrightarrow x \leq_\phi^\Sigma y \\
&\Leftrightarrow x \leq_\phi^\Pi y.
\end{align*}
\end{defn}
Given any rank function on $A$, one may use it to define an order $\leq_\phi$ on $A$. The idea of the above definition is that if $\phi$ is a $\Pi^1_1$-rank, then the initial segments of $\leq_\phi$ are Borel, and this is witnessed in a uniform way.
\begin{thm}\cite[Exercise 34.6]{K95}
The function $\rho\colon WF\to ORD$ is a $\Pi^1_1$-rank.
\end{thm}
\noindent We may use this fact to create other $\Pi^1_1$-ranks in an easy way: Let $X$ be a standard Borel space. If $A\subseteq X$ Borel reduces to $WF$ via $f$, then the map $x \mapsto \rho(f(x))$ is a $\Pi^1_1$-rank on $A$. \par
\indent The most important fact about $\Pi^1_1$-ranks for this paper is the following (\cite[Theorem 35.23]{K95}):
\begin{thm}[The Boundedness Theorem for $\Pi^1_1$-ranks]\label{thm:BddnessThm}
Let $X$ be a standard Borel space, $A\subseteq X$ co-analytic, and $\phi\colon A\to\omega_1$ a $\Pi^1_1$-rank. Then
$$A \text{ is Borel} \;\Longleftrightarrow\; \sup \{ \phi(x) \mid x\in A \} < \omega_1.$$
\end{thm}
We will use the Boundedness Theorem to show that certain $\Pi^1_1$ sets are not Borel by showing that they carry $\Pi^1_1$-ranks whose images are unbounded below $\omega_1$. To this end, we will often use the following fact about the ranks of trees, which follows immediately from the definition.
\begin{lem}\label{lem:TrRkMonotone}
Suppose $S,T$ are trees and $\phi\colon S\to T$ is a map such that $s\subsetneq t \Rightarrow \phi(s) \subsetneq \phi(t)$. (We call such a map \textbf{monotone}.) Then $\rho_S(s)\leq \rho_T(\phi(s))$ for all $s\in S$. In particular $\rho(S)\leq\rho(T)$.
\end{lem}
\section{The minimal condition on centralizers}\label{sec:MinCent}
We wish to show that certain chain conditions give rise to sets of marked groups which are $\Pi^1_1$ and not Borel in $\mathscr{G}$. We begin by looking at the following chain condition.
\begin{defn}
A subgroup $H$ of $G$ is a \textbf{centralizer} in $G$ if $H=C_G(A)$ for some $A\subseteq G$.
\end{defn}
\begin{defn}
A group $G$ satisfies the \textbf{minimal condition on centralizers} if there is no strictly decreasing infinite chain $C_0 > C_1 > \ldots$ of centralizers in $G$. We denote the class of countable groups satisfying the minimal condition on centralizers by $\mc M_C$.
\end{defn}
The class $\mc M_C$ is large, containing abelian groups, linear groups, and finitely generated abelian-by-nilpotent groups; see \cite{Br79} for further discussion. It is not hard to check that a group $G$ satisfies the minimal condition on centralizers if and only if it satisfies the maximal condition on centralizers, but our analysis is easier if we think about the minimal version of the chain condition.
Given a group $G\in\mathscr{G}$, we construct a tree $T_G\subseteq\mathbb{N}^{<\mathbb{N}}$ and associated groups $G_s\in \mathscr{G}$ for each $s\in T_G$. Here and below, for $s=(s_0,\ldots,s_n)\in\mathbb{N}^{<\mathbb{N}}$ we write $\{g_s\}:=\{g_{s_0},\ldots,g_{s_n}\}$, where $G=\{g_0,g_1,\ldots\}$ is the enumeration of $G$ coming from its marking. Each $G_s$ will be a centralizer in $G$.
\begin{enumerate}[$\bullet$]
\item Put $\emptyset\in T_G$ and let $G_\emptyset:=G=C_G(\emptyset)$.
\item Suppose that $s\in T_G$ and $G_s=C_G(\{g_s\})$ has already been defined. If $C_G(\{g_s\}\cup\{g_i\})\neq C_G(\{g_s\})$, then put $s^{\smallfrown} i\in T_G$ and $G_{s^{\smallfrown} i}:=C_G(\{g_s\}\cup\{g_i\})$.
\end{enumerate}
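As a first example, suppose $G$ is abelian. Then $C_G(X)=G$ for every $X\subseteq G$, so the condition $C_G(\{g_s\}\cup\{g_i\})\neq C_G(\{g_s\})$ never holds, $T_G=\{\emptyset\}$, and $\rho(T_G)=1$. If instead $G$ is nonabelian and $g_i$ is noncentral, then $C_G(\{g_i\})\neq G=C_G(\emptyset)$, so $(i)\in T_G$ and $\rho(T_G)\geq 2$. In general, a node of $T_G$ records a strictly decreasing chain of centralizers in $G$.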
\begin{lem}\label{lem:CentMapBorel}
The map $\Phi_C\colon\mathscr{G}\to Tr$ given by $G\mapsto T_G$ is Borel.
\end{lem}
\noindent Intuitively, Lemma~\ref{lem:CentMapBorel} holds since our construction is explicit; we delay a rigorous proof until Section \ref{sec:Borel}.
\begin{lem}\label{lem:CentMapReduction}
$T_G$ is well-founded if and only if $G\in\mc M_C$.
\end{lem}
\begin{proof}
If $G\in\mc M_C$, then $T_G$ contains no infinite branches by definition. If $G\notin\mc M_C$, then there is some infinite $A\subseteq G$ such that for all finite $B\subseteq A$, $C_G(A)\neq C_G(B)$. Let $a_0<a_1<a_2<\ldots$ be such that $A=\{g_{a_0},g_{a_1},\ldots\}$. By moving to a subsequence if necessary, we may assume that $C_G(\{g_{a_0},\ldots,g_{a_n}\}) \gneq C_G(\{g_{a_0},\ldots, g_{a_{n+1}}\})$ for all $n\in\mathbb{N}$. Then $(a_0,\ldots,a_n)\in T_G$ for all $n\in\mathbb{N}$, so $T_G$ has an infinite branch.
\end{proof}
\begin{lem}\label{lem:CentSubRank}
Let $H,G\in\mathscr{G}$. If $H\hookrightarrow G$, then $\rho(T_H)\leq\rho(T_G)$.
\end{lem}
\begin{proof}
Let $\alpha\colon H\hookrightarrow G$ and let $\psi\colon\mathbb{N}\to\mathbb{N}$ be such that $\alpha(h_k)=g_{\psi(k)}$. We now define a map $\phi:T_H\rightarrow \mathbb{N}^{<\mathbb{N}}$: Let $\phi(\emptyset)=\emptyset$. If $s\in T_H$ and $s=(s_0,\ldots,s_n)$, let $\phi(s)=(\psi(s_0),\ldots,\psi(s_n))$. Clearly $\phi$ is monotone. Further, if $s\in T_H$, then $H_s = C_H(\{h_s\}) \hookrightarrow C_G(\{g_{\phi(s)}\})$. Since $C_G(\{g_{\phi(s)}\})\cap \alpha(H) \cong C_H(\{h_s\})$, we have that $C_G(\{g_{\phi(s\restriction k)}\}) \neq C_G(\{g_{\phi(s\restriction (k+1))}\})$ for all $k<|s|$. Thus $\phi(s)\in T_G$. It follows $\phi(T_H)\subseteq T_G$, and by Lemma \ref{lem:TrRkMonotone}, $\rho(T_H)\leq\rho(T_G)$.
\end{proof}
\begin{cor}\label{cor:CentIsoInv}
If $G,G'\in\mathscr{G}$ and $G\cong G'$, then $\rho(T_G)=\rho(T_{G'})$.
\end{cor}
We thus see that $\rho(T_G)$ is an isomorphism invariant, so it makes sense to talk about the rank of a group $G$ with the minimal condition on centralizers, even when not considering a specific marking.
\begin{defn}
If $G$ has the minimal condition on centralizers, then $\rho(T_G)$ for some (any) marking of $G$ is called the \textbf{centralizer rank} of $G$.
\end{defn}
We also mention that the above results, except for Lemma \ref{lem:CentMapBorel}, work with arbitrary enumerations of the group $G$, not just those that can arise from viewing $G$ as a marked group. Certain enumerations may be easier to use to calculate $\rho(T_G)$, and Corollary \ref{cor:CentIsoInv} assures us that using these enumerations will not affect the answer. The same will be true of our later constructions. Of course, in this paper Lemma \ref{lem:CentMapBorel} and analogous results are of central importance, so we will continue to work with groups as elements of $\mathscr{G}$.
We now argue that the centralizer rank is unbounded below $\omega_1$.
\begin{lem}\label{lem:CentRankSucc}
For $A,B\in \mc{M}_C$ with $A$ nonabelian, $A\times B\in \mc{M}_C$ and $\rho(T_B)<\rho(T_{A\times B})$.
\end{lem}
\begin{proof}
It is easy to see $A\times B\in \mc{M}_C$. Let $a\in A$ be noncentral and let $i\in\mathbb{N}$ be such that $g_i=(a,e)$ in the enumeration of $A\times B$. Then
\[
C_{A\times B}(\{(a,e)\})=(A\times B)_{(i)}
\]
with $(i)\in T_{A\times B}$, since this centralizer is not all of $A\times B$. Further,
$$B\cong\{e\}\times B\leq C_{A\times B}(\{(a,e)\}),$$
so by Lemma \ref{lem:CentSubRank}, $\rho((T_{A\times B})_{(i)})=\rho(T_{(A\times B)_{(i)}})\geq\rho(T_B)$. Since $\rho(T_{A\times B})>\rho((T_{A\times B})_{(i)})$, the result follows.
\end{proof}
\begin{lem}\label{lem:CentRankLim}
Let $\{A_i\}_{i\in\mathbb{N}}$ be countable groups. If $A_i\in\mc M_C$ for all $i\in\mathbb{N}$, then there is a group $A\in \mc M_C$ such that $\rho(T_A)\geq\rho(T_{A_i})$ for all $i\in\mathbb{N}$.
\end{lem}
\begin{proof}
Let $A=\ast_{i\in\mathbb{N}} A_i$. By \cite[Corollary 4.1.6]{MKS66}, which says that the centralizer of a nontrivial element of a free product is either cyclic or a centralizer in a conjugate of a free factor, we infer that $A\in\mc M_C$. Lemma \ref{lem:CentSubRank} now implies that $\rho(T_A)\geq\rho(T_{A_i})$ for all $i\in\mathbb{N}$, as desired.
\end{proof}
\begin{lem}\label{lem:CentRankUnbdd}
For all $\alpha<\omega_1$, there is $G\in\mc M_C$ such that $\rho(T_G)\geq\alpha$.
\end{lem}
\begin{proof}
We prove this inductively. Clearly the lemma holds for $\alpha=0$. Suppose $\alpha=\beta+1$ and the lemma holds for $\beta$. Let $G\in\mc M_C$ be such that $\rho(T_G)\geq\beta$ and let $A\in\mc M_C$ be nonabelian. Applying Lemma~\ref{lem:CentRankSucc}, we see $\rho(T_{A\times G})\geq\beta+1$.
Suppose $\alpha$ is a limit ordinal. Since $\alpha$ is countable, there is an increasing sequence of ordinals $\alpha_i<\alpha$ such that $\sup_{i\in\mathbb{N}} \alpha_i =\alpha$. Let $G_i\in\mc M_C$ be such that $\rho(T_{G_i})>\alpha_i$. Applying Lemma~\ref{lem:CentRankLim}, there is some $G\in\mc M_C$ such that $\rho(T_G)>\alpha_i$ for all $i\in\mathbb{N}$. It now follows that $\rho(T_G)\geq\alpha$.
\end{proof}
\begin{lem}\label{lem:CentFG}
For all $\alpha<\omega_1$, there is a finitely generated $G\in\mc M_C$ such that $\rho(T_G)\geq\alpha$.
\end{lem}
\begin{proof}
Let $H\in\mc M_C$ be a group such that $\rho(T_H)\geq\alpha$. Then \cite[Corollary on pg.~949]{KS71} implies that $H$ embeds into a 3-generated group $G\in\mc M_C$. By Lemma \ref{lem:CentSubRank}, $\rho(T_G)\geq\rho(T_H)\geq\alpha$, verifying the lemma.
\end{proof}
We remark that the proof of the result cited in the previous proof uses nothing more complicated than free products with amalgamation and is similar to the classical Higman-Neumann-Neumann embedding result \cite{HNN49}.
\begin{thm}\label{thm:MCNotBorel}
$\mc M_C$ is $\Pi^1_1$ and not Borel in $\mathscr{G}$, and $\mc M_C\cap\mathscr{G}_{fg}$ is $\Pi^1_1$ and not Borel in $\mathscr{G}_{fg}$.
\end{thm}
\begin{proof}
Let $\Phi_C$ be the Borel map from Lemma~\ref{lem:CentMapBorel}. By Lemma \ref{lem:CentMapReduction}, $\Phi_C^{-1}(WF)=\mc M_C$, and since $\Phi_C$ is Borel, $\mc M_C$ is $\Pi^1_1$. Lemma \ref{lem:CentRankUnbdd} implies the ranks of the trees in $\Phi_C(\mc M_C)$ are unbounded below $\omega_1$, so the $\Pi^1_1$-rank on $\mc M_C$ given by $G\mapsto \rho(\Phi_C(G))$ is unbounded below $\omega_1$. By Theorem \ref{thm:BddnessThm}, we conclude that $\mc M_C$ is not Borel. Lemma \ref{lem:CentFG} implies the ranks of the trees in $\Phi_C(\mc M_C\cap\mathscr{G}_{fg})$ are also unbounded below $\omega_1$, and by Theorem \ref{thm:BddnessThm}, we conclude that $\mc M_C\cap\mathscr{G}_{fg}$ is also not Borel.
\end{proof}
\section{The maximal condition on subgroups}\label{sec:Max}
We next consider a more basic chain condition. Proving the analogue of Lemma \ref{lem:CentRankLim} in this context is more complicated, which is why we present it after the previous section.
\begin{defn}
A group $G$ satisfies the \textbf{maximal condition on subgroups}, abbreviated by saying a group satisfies max, if there is no strictly increasing chain $H_0 < H_1 < H_2 < \ldots$ of subgroups of $G$. Equivalently, a group $G$ satisfies max if all of its subgroups are finitely generated. We denote the class of groups satisfying max as $\mc M_{\max{}}$.
\end{defn}
Given a group $G\in \mathscr{G}$, we construct a tree $T_G\subseteq\mathbb{N}^{<\mathbb{N}}$ and associated groups $G_s\in \mathscr{G}$ for each $s\in T_G$.
\begin{enumerate}[$\bullet$]
\item Put $\emptyset\in T_G$ and let $G_\emptyset:=\{e\}$.
\item Suppose that $s\in T_G$ and $G_s=\<\{g_s\}\>$ has already been defined. If $\<\{g_s\}\cup\{g_i\}\>\neq\<\{g_s\}\>$, then put $s^{\smallfrown} i\in T_G$ and $G_{s^{\smallfrown} i}:=\<\{g_s\}\cup\{g_i\}\>$.
\end{enumerate}
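As a quick sanity check on the construction, suppose $G$ is a finite group. Along any node $s=(s_0,\ldots,s_n)\in T_G$ the associated subgroups form a strictly increasing chain $\{e\}\lneq\<g_{s_0}\>\lneq\<g_{s_0},g_{s_1}\>\lneq\ldots$, and by Lagrange's theorem each proper inclusion at least doubles the order. Hence every node of $T_G$ has length at most $\log_2|G|$, and the subgroup rank of a finite group is finite.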
\begin{lem}\label{lem:MaxMapBorel}
The map $\Phi_M\colon\mathscr{G}\to Tr$ given by $G\mapsto T_G$ is Borel.
\end{lem}
We will prove Lemma~\ref{lem:MaxMapBorel} in Section \ref{sec:Borel}.
\begin{lem}\label{lem:MaxMapReduction}
$T_G$ is well-founded if and only if $G\in \mc M_{\max{}}$.
\end{lem}
\begin{proof}
If $G\in \mc M_{\max{}}$, then $T_G$ contains no infinite branches by definition. If $G\notin \mc M_{\max{}}$, then there is some infinitely generated subgroup $H\leq G$. There is some increasing sequence $a_0<a_1<\ldots$ of natural numbers such that $H=\<g_{a_0},g_{a_1},\ldots\>$. By passing to a subsequence if necessary, we may assume that $\<g_{a_0},\ldots,g_{a_n}\> \lneq \<g_{a_0},\ldots,g_{a_{n+1}}\>$ for all $n\in\mathbb{N}$. Then $(a_0,\ldots,a_n)\in T_G$ for all $n\in\mathbb{N}$, so $T_G$ has an infinite branch.
\end{proof}
\begin{lem}\label{lem:MaxSubRank}
Let $H,G\in\mathscr{G}$. If $H\hookrightarrow G$, then $\rho(T_H)\leq\rho(T_G)$.
\end{lem}
\begin{proof}
Identifying $H$ with its image in $G$, let $\psi\colon\mathbb{N}\to\mathbb{N}$ be such that $h_k=g_{\psi(k)}$. We now define a map $\phi\colon T_H\rightarrow \mathbb{N}^{<\mathbb{N}}$: Let $\phi(\emptyset)=\emptyset$. If $s\in T_H$ and $s=(s_0,\ldots,s_n)$, let $\phi(s)=(\psi(s_0),\ldots,\psi(s_n))$. Clearly $\phi$ is monotone. Furthermore, if $s\in T_H$, then $H_s \cong G_{\phi(s)}$, hence $\phi(T_H)\subseteq T_G$. Lemma \ref{lem:TrRkMonotone} now implies $\rho(T_H)\leq\rho(T_G)$.
\end{proof}
The previous lemma implies $\rho(T_G)$ is an isomorphism invariant.
\begin{cor}
If $G,G'\in\mathscr{G}$ and $G\cong G'$, then $\rho(T_G)=\rho(T_{G'})$.
\end{cor}
\begin{defn}
If $G$ has the maximal condition on subgroups, then $\rho(T_G)$ for some (any) marking of $G$ is called the \textbf{subgroup rank} of $G$.
\end{defn}
\begin{lem}\label{lem:MaxRankSucc}
For all groups $G\in \mc M_{\max{}}$, $G\times\mathbb{Z}\in \mc M_{\max{}}$ and $\rho(T_G)<\rho(T_{G\times\mathbb{Z}})$.
\end{lem}
\begin{proof}
It is easy to see $G\times \mathbb{Z}$ satisfies max. For the rank inequality, let $G=\{g_0,g_1,\ldots\}$ and $G\times\mathbb{Z}=\{a_0,a_1,\ldots\}$. There is some $k\in\mathbb{N}$ such that $a_k=(e_G,z)$, where $\mathbb{Z}=\<z\>$. Let $\psi\colon\mathbb{N}\to\mathbb{N}$ be defined such that $a_{\psi(m)}=(g_m,e_\mathbb{Z})$. The map $\phi\colon T_G\rightarrow \mathbb{N}^{<\mathbb{N}}$ given by $(s_0,\ldots,s_n)\mapsto(\psi(s_0),\ldots,\psi(s_n))$ is clearly monotone, and further, $\phi(T_G)\subseteq (T_{G\times\mathbb{Z}})_{(k)}$. By Lemma \ref{lem:TrRkMonotone}, $\rho(T_G) \leq \rho((T_{G\times\mathbb{Z}})_{(k)})<\rho(T_{G\times\mathbb{Z}})$.
\end{proof}
\begin{lem}\label{lem:MaxRankLim}
Let $\{A_i\}_{i\in\mathbb{N}}$ be countable groups. If $A_i\in \mc M_{\max{}}$ for each $i\in\mathbb{N}$, then there is a group $A\in \mc M_{\max{}}$ such that $\rho(T_A)\geq\rho(T_{A_i})$ for all $i\in\mathbb{N}$.
\end{lem}
\begin{proof}
This is a consequence of \cite[Theorem 2]{Ol89} due to A. Y. Olshanskii. This result gives a 2-generated group $A$ containing each of the $A_i$ such that every proper subgroup of $A$ is either contained in a conjugate of some $A_i$, is infinite cyclic, or is infinite dihedral. Thus if every subgroup of each $A_i$ is finitely generated, then every subgroup of $A$ is finitely generated, and so $A\in\mc M_{\max{}}$. Since each $A_i$ is a subgroup of $A$, Lemma \ref{lem:MaxSubRank} implies that $\rho(T_A)\geq\rho(T_{A_i})$ for all $i\in\mathbb{N}$ as desired.
\end{proof}
\begin{lem}\label{lem:MaxRankUnbdd}
For all $\alpha<\omega_1$, there is $G\in \mc M_{\max{}}$ such that $\rho(T_G)\geq\alpha$.
\end{lem}
\begin{proof}
The proof is the same as that of Lemma \ref{lem:CentRankUnbdd}, with Lemmas \ref{lem:MaxRankSucc} and \ref{lem:MaxRankLim} referenced at the appropriate places.
\end{proof}
\begin{thm}
$\mc M_{\max{}}$ is $\Pi^1_1$ and not Borel in $\mathscr{G}$ and $\mathscr{G}_{fg}$.
\end{thm}
\begin{proof}
The proof is the same as that of Theorem \ref{thm:MCNotBorel} using Lemmas \ref{lem:MaxMapBorel} and \ref{lem:MaxRankUnbdd} where appropriate. The statement is true for $\mathscr{G}_{fg}$ simply because $\mc M_{\max{}}\subseteq\mathscr{G}_{fg}$.
\end{proof}
\section{The maximal condition on normal subgroups}\label{sec:MaxN}
Given a group $G$ and a set $S\subseteq G$, we write $\<\<S\>\>_G$ to denote the normal closure of $S$ in $G$. We suppress the subscript $G$ when the group is clear from context.
\begin{defn}
A group $G$ satisfies the \textbf{maximal condition on normal subgroups}, abbreviated by saying a group satisfies max-n, if there is no infinite strictly increasing chain of normal subgroups of $G$. Equivalently, a group $G$ satisfies max-n if each of its normal subgroups is the normal closure of finitely many elements of $G$. We denote the class of groups satisfying max-n as $\mc M_n$.
\end{defn}
Given $G\in \mathscr{G}$, we construct a tree $T_G\subseteq\mathbb{N}^{<\mathbb{N}}$ and associated groups $G_s\in \mathscr{G}$ for each $s\in T_G$.
\begin{enumerate}[$\bullet$]
\item Put $\emptyset\in T_G$ and let $G_\emptyset:=G$.
\item Suppose that $s\in T_G$ and $G_s=G/\ngrp{\{g_s\}}$ has already been defined. If $\ngrp{\{g_s\}\cup\{g_i\}}\neq \ngrp{\{g_s\}}$, then put $s^{\smallfrown} i\in T_G$ and $G_{s^{\smallfrown} i}:=G/\ngrp{\{g_s\}\cup\{g_i\}}$.
\end{enumerate}
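For example, suppose $S\in\mathscr{G}$ is an infinite simple group. Whenever $g_i\neq e$, we have $\ngrp{\{g_i\}}=S\neq\{e\}=\ngrp{\emptyset}$, so $(i)\in T_S$, and every such node is terminal since $\ngrp{\{g_i\}\cup\{g_j\}}=S=\ngrp{\{g_i\}}$ for all $j$. Hence $\rho_{T_S}(\emptyset)=1$ and $\rho(T_S)=2$.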
\begin{lem}\label{lem:MaxNMapBorel}
The map $\Phi_{M_n}\colon\mathscr{G}\to Tr$ given by $G\mapsto T_G$ is Borel.
\end{lem}
We prove Lemma~\ref{lem:MaxNMapBorel} in Section \ref{sec:Borel}.
\begin{lem}\label{lem:MaxNMapReduction}
$T_G$ is well-founded if and only if $G\in\mc M_n$.
\end{lem}
\begin{proof}
If $G\in\mc M_n$, then $T_G$ contains no infinite branches by definition. If $G\notin\mc M_n$, then there is a normal subgroup $N\trianglelefteq G$ which is not the normal closure of finitely many elements. We may therefore choose $a_0<a_1<\ldots$ such that $N=\ngrp{g_{a_0},g_{a_1},\ldots}$ and
$$\ngrp{g_{a_0},\ldots,g_{a_n}} \lneq \ngrp{g_{a_0},\ldots,g_{a_{n+1}}}$$
for all $n\in\mathbb{N}$. Thus the sequence $(a_0,\ldots, a_n)$ is an element of $T_G$ for every $n\in\mathbb{N}$, so $T_G$ has an infinite branch.
\end{proof}
\begin{lem}\label{lem:MaxNImRank}
If $G\in\mc M_n$ and $f:G\twoheadrightarrow G'$, then $\rho(T_{G})\geq \rho(T_{G'})$ with equality if and only if $f$ is injective.
\end{lem}
\begin{proof}
Since $G$ satisfies max-n, $\ker(f)=\ngrp{S}$ for some finite $S=\{g_{s_0},\ldots,g_{s_n}\}$. We may assume that $n$ is minimal, so no element of $S$ is in the normal closure of the others. Setting $s:=(s_0,\dots,s_n)\in\mathbb{N}^{<\mathbb{N}}$, the minimality of $S$ implies $s\in T_G$; in the case $\ker(f)=\{e\}$, we take $s=\emptyset$. Let $\psi\colon\mathbb{N}\to\mathbb{N}$ be a map such that $g'_k=f(g_{\psi(k)})$. Then for all $i_0,\ldots,i_k\in\mathbb{N}$,
\[
G'/\<\<g'_{i_0},\ldots,g'_{i_k}\>\>_{G'} \cong G/\<\<g_{\psi(i_0)},\ldots,g_{\psi(i_k)},S\>\>_G,
\]
and the monotone map $\phi:T_{G'}\rightarrow \mathbb{N}^{<\mathbb{N}}$ given by $(r_0,\ldots,r_n)\mapsto(\psi(r_0),\ldots,\psi(r_n))$ sends $T_{G'}$ into $(T_G)_s$. By Lemma \ref{lem:TrRkMonotone}, $\rho(T_{G'}) \leq \rho((T_G)_s)\leq\rho(T_G)$, and the rightmost inequality is strict if and only if $s\neq \emptyset$.
\end{proof}
We conclude this rank is also isomorphism invariant.
\begin{cor}\label{cor:MaxNRankIsoInv}
If $G,G'\in\mathscr{G}$ and $G\cong G'$, then $\rho(T_G)=\rho(T_{G'})$.
\end{cor}
Recall that a group is \textbf{hopfian} if it is not isomorphic to any of its proper quotients. The following corollary is easy enough to prove directly, but it follows immediately from Lemma \ref{lem:MaxNImRank} and Corollary~\ref{cor:MaxNRankIsoInv}.
\begin{cor}
If $G\in\mc M_n$, then $G$ is hopfian.
\end{cor}
Unlike the previous invariants, this rank has appeared before in the literature; cf.\ \cite{C11}.
\begin{defn}
If $G$ has the maximal condition on normal subgroups, then $\rho(T_G)$ for some (any) marking of $G$ is called the \textbf{length} of $G$.
\end{defn}
If we were to follow our template from previous sections, we would move on to analogues of Lemmas \ref{lem:CentRankSucc} and \ref{lem:CentRankLim}. However, we were unable to prove an analogue of Lemma \ref{lem:CentRankLim} which would take advantage of Lemma \ref{lem:MaxNImRank}. Such a result would be a sort of dual version of the result of Olshanskii cited in the proof of Lemma \ref{lem:MaxRankLim}. Specifically, the following question is open to the best of the authors' knowledge:
\begin{quest}
Suppose $\{A_i\}_{i\in\mathbb{N}}$ is a set of normally $k$-generated max-n groups. Is there a max-n group $A$ such that $A\twoheadrightarrow A_i$ for all $i\in\mathbb{N}$?
\end{quest}
A positive answer to this question would give us exactly the right analogue of Lemma \ref{lem:CentRankLim}. Lacking this, we will use a construction involving (restricted) wreath products. Recall that the wreath product of $H$ and $G$ is $H\wr G:=H^{<G}\rtimes G$, where $H^{<G}$ denotes the direct sum $\bigoplus_{g\in G}H$ and $G\curvearrowright H^{<G}$ by shift; in the case $G\curvearrowright X$ for some set $X$, we write $H\wr_X G:=H^{<X}\rtimes G$. We will see that we can relate $\rho(T_{H\wr G})$ to both $\rho(T_H)$ and $\rho(T_G)$, while Lemma \ref{lem:MaxNImRank} alone only gives us information about how $\rho(T_{H\wr G})$ and $\rho(T_G)$ relate.
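To make the wreath product concrete, the multiplication can be written out in coordinates. For instance, in the lamplighter group $(\mathbb{Z}/2\mathbb{Z})\wr\mathbb{Z}$, elements are pairs $(f,m)$ with $f\colon\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}$ finitely supported and $m\in\mathbb{Z}$, and the product is
\[
(f,m)(f',n)=\bigl(f+f'(\,\cdot-m\,),\ m+n\bigr),
\]
so the $\mathbb{Z}$-coordinate shifts the support of the second lamp configuration before the coordinatewise sum is taken.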
We will focus on perfect max-n groups with no central factors; let us call the set of such groups $\mc M'_n$. A group $G$ is said to have a \textbf{central factor} if there are normal subgroups $L\trianglelefteq M $ in $G$ such that $M/L$ is nontrivial and central in $G/L$. Since $\mc M'_n\subseteq \mc M_n$, it is enough for our purposes to show that $\rho$ is unbounded below $\omega_1$ on $\mc M'_n$ and $\mc M'_n\cap \mathscr{G}_{fg}$.
\begin{lem}\label{lem:MaxNRankSucc}
Let $S$ be an infinite simple group. For all groups $G\in\mc M_n$, $G\times S\in\mc M_n$ and $\rho(T_G)<\rho(T_{G\times S})$. If $G\in\mc M'_n$, then so is $G\times S$.
\end{lem}
\begin{proof}
It is easy to see that $G\times S\in\mc M_n$, and since $G$ is a quotient of $G\times S$, Lemma \ref{lem:MaxNImRank} implies $\rho(T_G)<\rho(T_{G\times S})$. If $G$ is perfect, then $G\times S$ is perfect, so for the last statement we need only check that if $G$ has no central factors, then $G\times S$ has no central factors. Suppose toward a contradiction that normal subgroups $L\trianglelefteq M$ of $G\times S$ give a central factor. Let $\pi\colon G\times S \to G$ be the usual projection. Since $G$ has no central factors, $\pi(M)=\pi(L)$. Thus $MS=LS$, so $M=L(S\cap M)$. Since $S$ has no central factors, $S\cap M=S\cap L$. We conclude that $M=L$, contradicting that $M/L$ is nontrivial. Hence $G\times S$ has no central factors.
\end{proof}
Lemma~\ref{lem:MaxNRankSucc} allows us to find a group in $\mc M'_n$ with rank greater than that of a given group in $\mc M'_n$. However, we also need to be able to find a group in $\mc M'_n$ whose rank dominates the ranks of a countable family of groups from $\mc M'_n$. We begin by looking at properties of the ranks of wreath products.
\begin{lem}\label{lem:max-n}
Suppose $H$ and $G$ are groups satisfying max-n. Then $\rho(T_{H\wr G})\geq \rho(T_G)+\rho(T_H)$.
\end{lem}
\begin{proof}
For each $h\in H$ define $f_h\in H^{<G}$ by
\[
f_h(g)=
\begin{cases}
h, & \text{ if } g=e\\
e, & \text { else.}
\end{cases}
\]
Let $H=\{h_0,h_1,\ldots\}$ and let $\psi\colon\mathbb{N}\to\mathbb{N}$ be a map such that $f_{h_i}=g_{\psi(i)}$. We now define a monotone $\phi:T_H\rightarrow \mathbb{N}^{<\mathbb{N}}$: Put $\phi(\emptyset)=\emptyset$. For non-empty $s\in T_H$, define $\phi$ by
$$(s_0,\dots,s_k)\mapsto \left(\psi(s_0),\dots, \psi(s_k)\right).$$
\indent We argue $\phi$ maps $T_H$ into $T_{H\wr G}$ by induction on the length of $s\in T_H$. As the base case is immediate, say $s\in T_H$ and $s^{\smallfrown} k\in T_H$. By construction, it is the case that $\ngrp{\{h_s\}\cup \{h_k\}}_H\neq\ngrp{\{h_s\}}_H$. For all $t\in T_H$, $\ngrp{\{g_{\phi(t)}\}}_{H\wr G}=\ngrp{\{h_t\}}_H^{<G}$, hence
\[
\ngrp{\{g_{\phi(s)}\}\cup\{g_{\psi(k)}\}}_{H\wr G}\neq \ngrp{\{g_{\phi(s)}\}}_{H\wr G}.
\]
We conclude that $\phi(s^{\smallfrown} k)\in T_{H\wr G}$, so $\phi$ maps $T_H$ into $T_{H\wr G}$.\par
Now if $s=(s_0,\ldots,s_n)\in T_H$ is a terminal node, then $\ngrp{\{h_s\}}_H = H$. In this case $(H\wr G)/\ngrp{\{g_{\phi(s)}\}}_{H\wr G} \cong G$, so $\rho(T_G)=\rho((T_{H\wr G})_{\phi(s)})$ by Corollary \ref{cor:MaxNRankIsoInv}. The desired result now follows.
\end{proof}
In general, $H\wr G$ need not satisfy max-n. A theorem of P. Hall provides a sufficient condition for it to do so.
\begin{thm}[Hall, {\cite[Theorem 4]{H54}}]\label{thm:max-n_wreath}
Let $H$ and $G$ be groups satisfying max-n. If $H$ has no central factors, then $H\wr G$ satisfies max-n.
\end{thm}
Our next lemma allows us to iterate wreath products and remain in $\mc M'_n$.
\begin{lem}\label{lem:central_factors}
If $G$ and $H$ have no central factors, then $H\wr G$ has no central factors.
\end{lem}
\begin{proof}
Suppose $L\trianglelefteq M$ gives a central factor of $H\wr G$. Let $\pi:H\wr G\rightarrow G$ be the usual projection. Since $G$ has no central factors, it must be the case that $\pi(L)=\pi(M)$, so $LH^{<G}=MH^{<G}$. Thus, $M=L(H^{<G}\cap M)$, and it suffices to show $H^{<G}\cap M\leq L$. Since $H$ has no central factors, it follows similarly to the proof of Lemma~\ref{lem:MaxNRankSucc} that $H^{F}\cap M=H^{F}\cap L$ for all finite $F\subseteq G$. We conclude that $H^{<G}\cap M\leq L$, verifying the lemma.
\end{proof}
It is easy to see the wreath product of two perfect groups is perfect, so using Theorem \ref{thm:max-n_wreath} and Lemma \ref{lem:central_factors}, the class $\mc M'_n$ is closed under wreath products. With the following fact from the literature, we are equipped to prove the desired lemma.
\begin{lem}[{\cite[Lemma 3.6]{GKO14}}]\label{lem:wreath}
Suppose $A,B$ are countable groups and form $G=A\wr B$. If $N\trianglelefteq G$ meets $B$ non-trivially, then $[A,A]^{<B}\leq N$.
\end{lem}
\begin{lem}\label{lem:MaxNRankLim}
Let $\{A_i\}_{i\in\mathbb{N}}$ be countable groups. If $A_i\in\mc M'_n$ for all $i\in\mathbb{N}$, then there is a group $A\in\mc M'_n$ such that $\rho(T_{A_i})\leq\rho(T_A)$ for all $i\in\mathbb{N}$.
\end{lem}
\begin{proof}
For each $n$, put $G_n:=A_n\wr\left(\dots \wr A_0\right)$. By making the natural identification, we may assume $G_{n}\leq G_{n+1}$ for all $n$ and form $A:=\bigcup_{n\in \mathbb{N}} G_n$. (Alternatively, one may take the direct limit.) \par
\indent Consider a non-trivial $N\trianglelefteq A$. Then $N\cap G_n$ is non-trivial for some $n$. Fix such an $n$ and take $k>n$. We now see $N\cap G_{k}\trianglelefteq G_{k}=A_{k}\wr G_{k-1}$ is a normal subgroup that meets $G_{k-1}$ non-trivially. Applying Lemma~\ref{lem:wreath}, $[A_{k},A_{k}]^{<G_{k-1}}\leq N\cap G_{k}$. Since $A_{k}$ is perfect, we have that $A_{k}^{<G_{k-1}}\leq N$. It now follows that $A/N$ is isomorphic to a quotient of $G_n$.
\indent Suppose $(N_i)_{i\in \mathbb{N}}$ is an increasing sequence of normal subgroups of $A$; we may assume each $N_i$ is non-trivial, as otherwise we may pass to a tail of the sequence. By the previous paragraph, $A/N_0$ is a quotient of $G_n$ for some $n$, and Theorem~\ref{thm:max-n_wreath} implies that each $G_n$ is a max-n group. Letting $\pi:A\rightarrow A/N_0$ be the usual projection, it is thus the case that $\pi(N_i)=\pi(N_j)$ for all sufficiently large $i$ and $j$. Therefore, $N_i=N_j$ for all sufficiently large $i$ and $j$, and $A$ satisfies max-n. \par
\indent For each $n$ and $k>n$, define
\[
L_n^k:=A_k\wr_{G_{k-1}}\left( \dots\wr_{G_{n+2}}\left(A_{n+2}\wr_{G_{n+1}}A_{n+1}^{<G_n}\right)\right)
\]
and put $L_n:=\bigcup_{k>n}L_n^k$. We see $L_n\trianglelefteq A$ and $A/L_n\cong G_n$. By Lemmas \ref{lem:max-n} and \ref{lem:MaxNImRank}, $\rho(T_{A_n})\leq\rho(T_{G_n})\leq\rho(T_A)$ for all $n$.
\indent We finally verify $A$ is perfect and has no central factors. That $A$ is perfect is immediate. It follows from Lemma~\ref{lem:central_factors} and induction that each $G_n$ has no central factors. Since any central factor of $A$ is a central factor of $G_n$ for some $n$, $A$ has no central factors.
\end{proof}
\begin{lem}\label{lem:MaxNRankUnbdd}
For all $\alpha<\omega_1$, there is $G\in\mc M'_n$ such that $\rho(T_G)\geq\alpha$.
\end{lem}
\begin{proof}
The proof is the same as that of Lemma \ref{lem:CentRankUnbdd}, with Lemmas \ref{lem:MaxNRankSucc} and \ref{lem:MaxNRankLim} referenced at the appropriate places.
\end{proof}
The groups given by Lemma~\ref{lem:MaxNRankUnbdd} are not, in general, finitely generated. For finitely generated examples, another result of Hall is needed.
\begin{lem}[Hall, {\cite[cf. Theorem 4]{H61}}]\label{lem:Hall}
Let $H$ be a countable group. Then there exists a short exact sequence
\[
\{e\}\rightarrow M\rightarrow G\rightarrow \mathbb{Z}\rightarrow \{e\}
\]
where $G$ is 2-generated, $[M,M]=[H,H]^{<\mathbb{Z}}$, and there is $t\in G$ so that the conjugation action of $t$ on $[M,M]$ is by unit shift.
\end{lem}
It is useful to sketch Hall's construction of $G$. Let $\{h_i\}_{i\in \mathbb{N}}$ list $H$ and form the unrestricted wreath product $H^{\mathbb{Z}}\rtimes \mathbb{Z}$. Define $\sigma\in H^{\mathbb{Z}}$ by
\[
\sigma(i):=
\begin{cases}
h_n, & \text{ if }i=2^n\\
e, & \text{ else}
\end{cases}
\]
and let $t$ be a generator for $\mathbb{Z}$ in $H^{\mathbb{Z}}\rtimes \mathbb{Z}$. The desired group is then $G:=\grp{t,\sigma}$. The subgroup $M$ equals $\grp{g\sigma g^{-1}\mid g\in G}$.\par
\indent We point out a consequence of the construction for later use: Suppose $H$ is perfect and $h\in H$. Taking $f_h\in [H,H]^{<\mathbb{Z}}=H^{<\mathbb{Z}}$ as defined in the proof of Lemma \ref{lem:max-n}, the construction of $G$ implies $\ngrp{f_h}_G=\ngrp{h}_H^{<\mathbb{Z}}$.
\begin{cor}\label{cor:MaxNFG}
For each $\alpha<\omega_1$, there is a finitely generated group $G\in\mc M_n$ with \mbox{$\rho(T_G)\geq \alpha$}.
\end{cor}
\begin{proof}
Fix $\alpha<\omega_1$ and apply Lemma~\ref{lem:MaxNRankUnbdd} to find a group $H\in\mc M'_n$ with $\rho(T_H)\geq \alpha$. We now apply Lemma~\ref{lem:Hall} to find a 2-generated group $G$ with a short exact sequence
\[
\{e\}\rightarrow M\rightarrow G\rightarrow \mathbb{Z}\rightarrow \{e\}
\]
where $[M,M]=[H,H]^{<\mathbb{Z}}=H^{<\mathbb{Z}}$. \par
\indent The group $G/[M,M]$ is a finitely generated metabelian group, hence it satisfies max-n by \cite[Theorem 3]{H54}. On the other hand, any normal subgroup of $G$ that lies in $[M,M]=H^{<\mathbb{Z}}$ is shift-invariant because $G$ contains an element that acts by shift on $[M,M]$. Since $H\wr\mathbb{Z}$ is max-n, it follows that $H^{<\mathbb{Z}}$ is max-$G$; that is to say $H^{<\mathbb{Z}}$ has the maximal condition on subgroups invariant under the conjugation action by $G$. We conclude the group $G$ is max-n.\par
\indent It remains to compute a lower bound for $\rho(T_G)$. Using again the notation from the proof of Lemma \ref{lem:max-n}, find $\psi:\mathbb{N}\rightarrow \mathbb{N}$ such that for each $k\in \mathbb{N}$ we have $f_{h_k}=g_{\psi(k)}$. Since $\ngrp{f_h}_G=\ngrp{h}_H^{<\mathbb{Z}}$, the map $\psi$ induces a monotone $\phi\colon T_H\to T_G$ as in the proof of Lemma \ref{lem:max-n}, and Lemma~\ref{lem:TrRkMonotone} gives $\alpha\leq \rho(T_H)\leq \rho(T_G)$.
\end{proof}
\begin{thm}
$\mc M_n$ is $\Pi^1_1$ and not Borel in $\mathscr{G}$, and $\mc M_n\cap \mathscr{G}_{fg}$ is $\Pi^1_1$ and not Borel in $\mathscr{G}_{fg}$.
\end{thm}
\begin{proof}
This follows from Theorem \ref{thm:BddnessThm}, Lemma \ref{lem:MaxNMapBorel}, and Corollary \ref{cor:MaxNFG}.
\end{proof}
\section{Elementary amenable groups}\label{sec:EAGroups}
Perhaps surprisingly, the property of being elementary amenable may also be characterized by well-founded trees. This in turn gives a chain condition equivalent to elementary amenability.
\subsection{Preliminaries} We study the collection of elementary amenable groups. This class is typically defined as follows:
\begin{defn}\label{def:EA}
The collection of \textbf{elementary amenable groups}, denoted $\mathop{\rm EG} \nolimits$, is the smallest collection of countable groups such that
\begin{enumerate}[(i)]
\item $\mathop{\rm EG} \nolimits$ contains all finite groups and abelian groups.
\item $\mathop{\rm EG} \nolimits$ is closed under group extensions.
\item $\mathop{\rm EG} \nolimits$ is closed under countable increasing unions.
\item $\mathop{\rm EG} \nolimits$ is closed under taking subgroups.
\item $\mathop{\rm EG} \nolimits$ is closed under taking quotients.
\end{enumerate}
\end{defn}
Our results here require a fairly well-known embedding result, which is based on a generalization of Lemma~\ref{lem:Hall}.
\begin{prop}[Hall; Neumann, Neumann {\cite[Theorem 5.1]{NN59}}]\label{prop:EAembedding}
Suppose $K\in \mathop{\rm EG} \nolimits$. Then there exists $H\in \mathop{\rm EG} \nolimits$ and a short exact sequence
\[
\{e\}\rightarrow M\rightarrow G\rightarrow \mathbb{Z}\rightarrow \{e\}
\]
where $G$ is 2-generated, $G\in \mathop{\rm EG} \nolimits$, $[M,M]=[H,H]^{<\mathbb{Z}}$, and $K$ embeds into $[H,H]$.
\end{prop}
\subsection{Decomposition trees}
We now define a tree associated to a marked group $G$. Just as in the previous sections, we will see that this tree being well-founded or not gives group-theoretic information about $G$, in this case characterizing being an elementary amenable group.\par
Let $G\in\mathscr{G}$. For $n\geq 0$, put $R_n(G):=\<g_0,\ldots,g_n\>$ and for $k\geq 1$, define
\[
S_k(G):=[G,G]\cap\bigcap\mc{N}_{k}(G)
\]
where $\mc{N}_k(G):=\{N\trianglelefteq G\;|\;|G:N|\leq k+1\}$. For each $l\geq 1$, we now define a tree $T^l(G)\subseteq\mathbb{N}^{<\mathbb{N}}$ and associated groups $G_s\in\mathscr{G}$ as follows:
\begin{enumerate}[$\bullet$]
\item Put $\emptyset\in T^l(G)$ and let $G_{\emptyset}:=G$.
\item Suppose we have $s\in T^l(G)$ and $G_s$. If $G_s\neq\{e\}$, then for each $n\in\mathbb{N}$, put $s^{\smallfrown} n\in T^l(G)$ and set $G_{s^{\smallfrown} n}:=S_{|s|+l}\left(R_n\left(G_s\right)\right)$.
\end{enumerate}
We call $T^l(G)$ the \textbf{decomposition tree} of $G$ with offset $l$. This tree is always non-empty, and if $s\in T^l(G)$ is terminal, then $G_s=\{e\}$. Since the subtree below a node $s$ is built from $G_s$ by the same procedure with offset $|s|+l$, we obtain a useful observation:
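For a concrete illustration, consider $G=\operatorname{Sym}(3)$ marked so that $g_0=(1\,2)$, $g_1=(1\,2\,3)$, and $g_i=e$ for $i\geq 2$, and take $l=1$. Since $[G,G]=A_3$ and the only normal subgroups of $G$ of index at most $2$ are $G$ and $A_3$, we have $G_n=S_1(G)=A_3$ for every $n\geq 1$, while $G_0=S_1(\grp{g_0})=\{e\}$. As $A_3$ is abelian, each $G_{n^{\smallfrown} m}=S_2(R_m(G_n))$ is trivial, so every node of length two is terminal and $\rho(T^1(G))=3$.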
\begin{obs}\label{prop:child_rank}
For $s\in T^l(G)$, $T^l(G)_s=T^{|s|+l}(G_s)$, and for each $r\in T^{|s|+l}(G_s)$, $(G_{s})_r=G_{s^{\smallfrown} r}$ as marked groups. This implies, in particular, that if $T^l(G)$ is well-founded, then so is $T^{|s|+l}(G_s)$.
\end{obs}
\begin{lem}\label{lem:Phi_borel}
For each $l\geq 1$, the map $\Phi^l\colon\mathscr{G}\to Tr$ given by $G\mapsto T^l(G)$ is Borel.
\end{lem}
As usual, we postpone the proof of this lemma to Section \ref{sec:Borel}.
\begin{lem}\label{lem:xi_indp}
Let $G,H\in\mathscr{G}$ and $H\hookrightarrow G$. Then for all $l\geq k \geq 1$,
\[
\rho\left(T^l(H)\right)\leq\rho\left(T^k(G)\right).
\]
In particular, for $G,G'\in\mathscr{G}$, if $G\cong G'$, then
\[
\rho\left(T^l(G)\right)=\rho\left(T^l(G')\right)
\]
for all $l\geq 1$.
\end{lem}
\begin{proof}
We induct on $\rho(T^k(G))$ simultaneously for all $k$. If $\rho(T^k(G))=1$, then $G=\{e\}$, so $H=\{e\}$. Suppose the lemma holds for all $G$ and $k$ with $\rho(T^k(G))\leq\beta$. Suppose that $f\colon H\to G$ is an embedding and $\rho(T^k(G))=\beta+1$. For all $n\geq 0$, there is some $k(n)$ so that $f(R_n(H))\leq R_{k(n)}(G)$. It follows that $f(H_n)\leq G_{k(n)}$ for all $n\geq 0$, since $f(S_l(R_n(H)))\leq S_l(R_{k(n)}(G))\leq S_k(R_{k(n)}(G))=G_{k(n)}$ as $l\geq k$. By the inductive hypothesis and Observation~\ref{prop:child_rank},
\[
\rho\left(T^l(H)\right)=\sup_{n\in\mathbb{N}}\left\{\rho\left(T^{l+1}(H_n)\right)\right\}+1 \leq \sup_{n\in\mathbb{N}}\left\{\rho\left(T^{k+1}(G_{k(n)})\right)\right\}+1 \leq \rho\left(T^k(G)\right)
\]
completing the induction.
\end{proof}
\begin{cor}\label{cor:some_all}
For $G\in \mathscr{G}$, $T^l(G)$ is well-founded for some $l\geq 1$ if and only if $T^l(G)$ is well-founded for all $l\geq 1$.
\end{cor}
\begin{proof} Suppose $G\in \mathscr{G}$ is such that $T^l(G)$ is well-founded. In view of Lemma~\ref{lem:xi_indp}, $T^k(G)$ is well-founded for all $k\geq l$. For $1\leq n\leq l$, consider any $s\in T^n(G)$ with $|s|=l$; if no such $s$ exists, then $T^n(G)$ is plainly well-founded. There is an injection $G_s\hookrightarrow G$, so applying Lemma~\ref{lem:xi_indp} once again,
\[
\rho\left(T^{n+|s|}(G_s)\right)\leq \rho\left(T^{n+|s|}(G)\right).
\]
By choice of $s$, $n+|s|\geq l$, hence $\rho(T^{n+|s|}(G))<\omega_1$. Since $T^n(G)_s=T^{n+|s|}(G_s)$, we conclude $T^{n}(G)_s$ is well-founded for each $s$ of length $l$. The tree $T^n(G)$ is therefore well-founded, and the corollary follows.
\end{proof}
Define $\mathrm{W}:=\cup_{l=1}^{\infty} (\Phi^l)^{-1}(WF)$; that is, $\mathrm{W}$ is the collection of marked groups so that some decomposition tree is well-founded. By Corollary~\ref{cor:some_all}, every decomposition tree of a group in $\mathrm{W}$ is well-founded; that is to say, $\mathrm{W}=\cap_{l=1}^{\infty} (\Phi^l)^{-1}(WF)$.\par
\indent Lemma~\ref{lem:xi_indp} shows the rank of a decomposition tree is independent of the marking. We thus define
\begin{defn} The \textbf{decomposition rank} of $G\in\mathrm{W}$ is defined to be
\[
\xi(G):=\min_{k\geq 1}\rho\left(T^k(G)\right)
\]
for some (any) marking of $G$. The \textbf{decomposition degree} is defined to be
\[
\deg(G):=\min\left\{k \mid \xi(G)=\rho\left(T^k(G)\right)\right\}
\]
for some (any) marking of $G$.
\end{defn}
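For example, if $G$ is a non-trivial abelian group, then every $G_n$ is trivial, so $\rho(T^k(G))=2$ for each $k\geq 1$; hence $\xi(G)=2$ and $\deg(G)=1$. For $G=\operatorname{Sym}(3)$, we have $S_k(G)=A_3$ for $1\leq k\leq 4$, so $\rho(T^k(G))=3$ for these $k$, whereas for $k\geq 5$ the trivial group has index at most $k+1$ in every subgroup of $G$, so $\rho(T^k(G))=2$. Thus $\xi(\operatorname{Sym}(3))=2$ and $\deg(\operatorname{Sym}(3))=5$.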
\begin{cor}\label{lem:sgrp_xi}
If $G,H\in\mathscr{G}$ and $H\hookrightarrow G$, then $\xi(H)\leq \xi(G)$.
\end{cor}
\begin{rmk}
The decomposition rank tracks, in a fairly straightforward manner, the number of extensions and unions applied to produce the group. The decomposition degree, on the other hand, is currently mysterious. It somehow tracks the size of the finite groups ``appearing" in the construction of an elementary amenable group. We do not consider the decomposition degree further, as it is tangential to our goal. We do study the decomposition rank in detail.
\end{rmk}
We now show that $\mathrm{W} \subseteq \mathop{\rm EG} \nolimits$ and $\mathrm{W}$ enjoys the same closure properties as $\mathop{\rm EG} \nolimits$, so that in fact $\mathop{\rm EG} \nolimits=\mathrm{W}$.
\begin{thm}
If $G\in\mathrm{W}$, then $G\in\mathop{\rm EG} \nolimits$.
\end{thm}
\begin{proof}
We induct on $\xi(G)$. For the base case, if $\xi(G)=1$, then $G=\{e\}$ and $G\in\mathop{\rm EG} \nolimits$. Suppose the theorem holds for all $\alpha<\beta$ and $\xi(G)=\rho(T^l(G))=\beta$. Consider $R_i(G)$. Since $R_i(G)$ is finitely generated, $\mc{N}_{l}(R_i(G))$ is finite, so
\[
\left|[R_i(G),R_i(G)]:G_i\right| <\infty.
\]
We infer $R_i(G)/G_i$ is finite-by-abelian and, therefore, elementary amenable.\par
\indent On the other hand, Observation~\rm\ref{prop:child_rank} gives $\rho(T^{1+l}(G_i))=\rho(T^l(G)_i)$. Hence, $\rho(T^{1+l}(G_i))<\beta$, and we conclude that $G_i\in \mathop{\rm EG} \nolimits$ from the inductive hypothesis. As $\mathop{\rm EG} \nolimits$ is closed under group extensions and countable increasing unions, $R_i(G)\in\mathop{\rm EG} \nolimits$ for all $i\in\omega$, whereby $G\in\mathop{\rm EG} \nolimits$.
\end{proof}
The family $\mathrm{W}$ also has the same closure properties as $\mathop{\rm EG} \nolimits$. Lemma~\ref{lem:xi_indp} already shows $\mathrm{W}$ is closed under taking subgroups. For the other closure properties, we require several lemmas.
\begin{lem}\label{lem:finandab}
$\mathrm{W}$ contains all finite groups and all abelian groups.
\end{lem}
\begin{proof}
If $G$ is abelian, then $\rho(T^1(G))\leq 2$. If $G$ is finite with size $m$, then $\rho(T^m(G))\leq 2$.
\end{proof}
We next consider increasing unions.
\begin{lem}\label{lem:union_xi}
If $G=\cup_{i\in\mathbb{N}} H_i$ and each $H_i\in\mathrm{W}$, then $G\in\mathrm{W}$.
\end{lem}
\begin{proof}
For each $i\in\mathbb{N}$, let $\alpha_i:=\rho(T^{1}(H_i))<\omega_1$. Since each $R_n(G)$ is finitely generated, there is some $m_n\in\mathbb{N}$ such that $R_n(G)\leq H_{m_n}$. By Lemma \ref{lem:xi_indp}, $\rho(T^{1}(G_n))\leq\rho(T^{1}(H_{m_n}))=\alpha_{m_n}$. We conclude $\rho(T^1(G))\leq \sup_{i\in\mathbb{N}} (\alpha_{m_i}) + 1 < \omega_1$, and thereby, $G\in\mathrm{W}$.
\end{proof}
In our construction, given $G$ and $k\geq 1$, we are particularly interested in the $G_i$ associated with $i\in T^k(G)$. We will see that their decomposition rank is related to that of $G$ in a simple way; this observation is necessary for showing $\mathrm{W}$ is closed under taking extensions and quotients.
\begin{lem}\label{lem:rk_xi}
Suppose $G\in\mathrm{W}$ is non-trivial and $\deg(G)=k$. Then
\[
\sup_{i\in \omega}\xi(G_i)+1\leq \xi(G)
\]
where $G_i$ is the subgroup of $G$ associated to $i\in T^k(G)$. In particular, $\xi(G_i)<\xi(G)$ for all $i\in\mathbb{N}$.
\end{lem}
\begin{proof}
By construction, for all $i\in\mathbb{N}$,
\begin{align*}
\rho\left(T^{k+1}(G_i)\right)+1 &=\rho\left(T^k(G)_i\right)+1 \\
&\leq \rho\left(T^k(G)\right).
\end{align*}
Hence,
\begin{align*}
\sup_{i\in \mathbb{N}}\xi(G_i)+1 &= \sup_{i\in \mathbb{N}} \left\{ \min_{l\geq 1} \rho\left(T^l(G_i)\right)\right\}+1 \\
&\leq \sup_{i\in\mathbb{N}} \left\{ \rho\left(T^{k+1}(G_i)\right) \right\} +1\\
&= \rho\left(T^k(G)\right) \\
&= \xi(G)
\end{align*}
as desired.
\end{proof}
\noindent The inequality in Lemma~\rm\ref{lem:rk_xi} may be strict; for example, consider $\operatorname{Sym}_{fin}(\mathbb{N})$, the group of finitely supported permutations of $\mathbb{N}$. We also point out that Lemma~\rm\ref{lem:rk_xi} \emph{does not} hold for choices of $k$ such that $\rho(T^k(G))\neq\xi(G)$.\par
We next show that $\mathrm{W}$ is closed under extensions. We will first prove a weaker statement. This approach is inspired by \cite{Os02}.
\begin{lem}\label{lem:finorab_xi}
Suppose that $N\in\mathrm{W}$, $B$ is finite or abelian, and there is a short exact sequence
\[
1 \rightarrow N \rightarrow G \rightarrow B \rightarrow 1 .
\]
Then $G\in\mathrm{W}$.
\end{lem}
\begin{proof}
Suppose first that $B$ is abelian. Thus, $[G,G] \leq N$, so for any $l\geq 1$ and all $n\in T^l(G)$, $G_n\leq N$. It follows from Lemma \ref{lem:xi_indp} that for all $n\in\mathbb{N}$,
\[
\rho(T^{l+1}(G_n))\leq\rho(T^{l+1}(N))<\omega_1.
\]
Appealing to Observation \ref{prop:child_rank}, we infer $\rho(T^l(G))<\omega_1$, so $G\in \mathrm{W}$.
\par\indent Suppose that $B$ is finite and $|G\colon N|=k$. For all $n\in T^k(G)$, $G_n\leq N$, so as above, $T^k(G)$ is well-founded. Hence, $G\in \mathrm{W}$.
\end{proof}
\begin{lem}\label{lem:ext_xi}
Suppose the group $G$ is the extension of a group $B\in\mathrm{W}$ by a group $N\in\mathrm{W}$. Then $G\in\mathrm{W}$. The family $\mathrm{W}$ is thus closed under group extensions.
\end{lem}
\begin{proof}
We first establish the following claim.
\begin{claim*}
If $N\in\mathrm{W}$ and $B$ is finite-by-abelian, then the extension of $B$ by $N$ is in $\mathrm{W}$.
\end{claim*}
\begin{proof}[Proof of claim.]
Suppose that $B$ is the extension of an abelian group $A$ by a finite group $F$. Let $F_0$ be the preimage of $F$ in $G$. Then $G/F_0 \cong B/F \cong A$, so $G/F_0$ is abelian. Since $F_0$ is the extension of the finite group $F$ by $N$, Lemma \ref{lem:finorab_xi} implies that $F_0\in\mathrm{W}$. Applying Lemma \ref{lem:finorab_xi} a second time, $G\in\mathrm{W}$.
\end{proof}
We now prove the lemma by induction on $\beta=\xi(B)$. If $\beta=1$, then $B=\{e\}$ and the induction claim holds trivially. Suppose the result holds for all $\delta<\beta$. First, assume that $B$ is finitely generated, let $\deg(B)=l$, and form the decomposition tree $T^l(B)$. By finite generation, there is some $m\in\mathbb{N}$ such that for all $k\geq m$, $R_k(B)=B$, so $B_k=[B,B]\cap\bigcap\mc N_l(B)$. We now consider $K\trianglelefteq G$ the preimage of $B_k$ under the projection map. The group $K$ is the extension of $B_k$ by $N$, and $\xi(B_k)<\xi(B)$ by Lemma~\ref{lem:rk_xi}. The inductive hypothesis therefore implies $K\in\mathrm{W}$. On the other hand, $G/K$ is finite-by-abelian, so $G\in\mathrm{W}$ by our claim.\par
\indent If $B$ is not finitely generated, then $B=\cup_{n\in\mathbb{N}} R_n(B)$, and $\xi(R_n(B))\leq\xi(B)$ for all $n\in\mathbb{N}$. Letting $C_n$ be the preimage in $G$ of $R_n(B)$, the previous paragraph implies $C_n\in\mathrm{W}$. Since $G=\cup_{n\in\mathbb{N}} C_n$, Lemma \ref{lem:union_xi} ensures that $G\in\mathrm{W}$.
\end{proof}
Finally, we show that $\mathrm{W}$ is closed under quotients.
\begin{lem}\label{lem:xi_quot}
If $G\in\mathrm{W}$ and $L\trianglelefteq G$, then $G/L\in \mathrm{W}$.
\end{lem}
\begin{proof} We argue by induction on $\xi(G)$. As the base case is immediate, suppose the lemma holds up to $\beta$ and let $G$ be such that $\xi(G)=\beta+1$. In view of Lemma~\ref{lem:union_xi}, we may assume $G$ is finitely generated, so $R_n(G)=G$ for all suitably large $n$. Say $k=\deg(G)$ and let $G_n$ be the subgroup corresponding to $n\in T^k(G)$.\par
\indent By the inductive hypothesis and Lemma~\ref{lem:rk_xi}, $G_nL/L\cong G_n/(G_n\cap L)\in \mathrm{W}$ for each $n$. On the other hand, for suitably large $n$, the group $G/G_n$ is finite-by-abelian and $G/G_n\twoheadrightarrow(G/L)/(G_nL/L)$, so $(G/L)/(G_nL/L)$ is finite-by-abelian and hence in $\mathrm{W}$. It now follows from Lemma \ref{lem:ext_xi} that $G/L\in \mathrm{W}$.
\end{proof}
Combining Lemmas \ref{lem:xi_indp}, \ref{lem:finandab}, \ref{lem:union_xi}, \ref{lem:ext_xi}, and \ref{lem:xi_quot}, we obtain the following corollary.
\begin{cor}
If $G\in \mathop{\rm EG} \nolimits$, then $G\in\mathrm{W}$.
\end{cor}
We thus produce a characterization of elementary amenable groups.
\begin{thm}\label{thm:EA_char}
Let $G$ be a marked group. Then the following are equivalent:
\begin{enumerate}[(1)]
\item $G\in\mathop{\rm EG} \nolimits$ .
\item $T^l(G)$ is well-founded for all $l\geq 1$.
\item $T^l(G)$ is well-founded for some $l\geq 1$.
\end{enumerate}
\end{thm}
We can rephrase this to have the form of a chain condition independent of the marking. This corollary may thus be taken to be a definition of elementary amenability.
\begin{cor}
A countable group $G$ is elementary amenable if and only if there is no infinite descending sequence of the form
\[
G=G_0\geq G_1\geq\ldots
\]
such that for all $n\geq 0$, $G_n\neq\{e\}$ and there is a finitely generated subgroup $K_n\leq G_n$ with $G_{n+1}= [K_n,K_n]\cap H_n$ where $H_n$ is the intersection of the index-$(\leq(n+1))$ normal subgroups of $K_n$.
\end{cor}
\begin{proof}
Suppose $G\in\mathscr{G}$ and there is an infinite descending sequence
\[
G=G_0\geq G_1\geq\ldots
\]
as in the statement. Form $T^1(G)$, the decomposition tree of $G$ with offset $1$. We now proceed by induction to build $s_0\subsetneq s_1\subsetneq \dots$ with $s_i\in T^1(G)$ and $|s_i|=i$ such that $G_i\hookrightarrow G_{s_i}$. The base case is immediate: set $s_0=\emptyset$. Suppose we have defined $s_n$, so $G_n\hookrightarrow G_{s_n}$. Let $K_n\leq G_{n}$ be such that $G_{n+1}= [K_n,K_n]\cap H_n$ where $H_n$ is the intersection of the index-$(\leq n+1)$ normal subgroups of $K_n$. Since $K_n$ is finitely generated, there is $m\in\mathbb{N}$ such that $K_n\hookrightarrow R_{m}(G_{s_n})$. It follows that $G_{n+1}\hookrightarrow G_{s_n^{\smallfrown} m}$. Setting $s_{n+1}=s_n^{\smallfrown} m$, we have verified the inductive claim. The tree $T^1(G)$ thus has an infinite branch, so by Theorem~\rm\ref{thm:EA_char}, $G\notin\mathop{\rm EG} \nolimits$.
\medskip
Suppose there are no infinite descending sequences as in the statement and form $T^1(G)$. If $T^1(G)$ had an infinite branch $s_0\subsetneq s_1\subsetneq \dots$ with $|s_i|=i$, then by construction $G_{s_0}\geq G_{s_1}\geq\dots$ would be a descending sequence of subgroups as in the chain condition, which is impossible. The tree $T^1(G)$ is therefore well-founded, so Theorem~\rm\ref{thm:EA_char} implies $G\in\mathop{\rm EG} \nolimits$.
\end{proof}
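To see the chain condition at work on a familiar non-example, let $G=F_2$ be the free group of rank $2$. Taking $K_0=G$, the group $G_1=[K_0,K_0]\cap H_0=[F_2,F_2]$ is free of infinite rank. Inductively, if $G_n$ contains a free subgroup $K_n$ of rank $2$, then $H_n$ has finite index in $K_n$, so $G_{n+1}=[K_n,K_n]\cap H_n$ has finite index in the infinite rank free group $[K_n,K_n]$ and again contains a free subgroup of rank $2$. The sequence therefore never terminates, witnessing that $F_2$ is not elementary amenable.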
There are two main differences between this chain condition and the chain conditions explored in the earlier sections of this paper. First, $G_{n+1}$ is related to $G_n$ by more than merely being a subgroup. This is not unheard of; for example, weak chain conditions require that $G_{n+1}$ be an \emph{infinite-index} subgroup of $G_n$. The second difference is that the definition of $H_n$ changes with $n$. As far as we are aware, no widely-studied chain conditions are defined in this way. That elementary amenability can be recast in this form suggests that there may be other interesting chain conditions with this property.
\subsection{$\mathop{\rm EG} \nolimits$ is not Borel}
We now study the descriptive-set-theoretic properties of $\mathop{\rm EG} \nolimits$. We show that on $\mathop{\rm EG} \nolimits$ the decomposition rank is unbounded below $\omega_1$.
\begin{lem}\label{lem:xiRankSucc}
For every $K\in \mathop{\rm EG} \nolimits$, there is $L\in \mathop{\rm EG} \nolimits$ with $\xi(K)<\xi(L)$.
\end{lem}
\begin{proof}
Let $G\in \mathop{\rm EG} \nolimits$ be as given by Proposition~\rm\ref{prop:EAembedding} for $K$ and form $L:=G\wr \mathbb{Z}$. Let $k=\deg(L)$, and take $L_i$ to be the subgroup of $L$ corresponding to $i\in T^k(L)$. Since $L$ is finitely generated, we may find $n$ such that $L=R_n(L)$. \par
\indent We now consider $L_n$. The group $[L,L]=[R_n(L),R_n(L)]$ certainly contains $[M,M]=[H,H]^{<\mathbb{Z}}$. On the other hand, if $N\trianglelefteq L$ has index at most $k+1$, there is a non-zero $t\in \mathbb{Z}$ so that $t\in N$. Applying Lemma~\ref{lem:wreath}, $[G,G]\leq N$, so $[H,H]^{<\mathbb{Z}}\leq L_n$. Since $K$ embeds into $[H,H]\leq L_n$, Corollary~\rm\ref{lem:sgrp_xi} implies $\xi(K)\leq \xi(L_n)$. Appealing to Lemma~\rm\ref{lem:rk_xi}, we conclude $\xi(K)< \xi(L)$, proving the lemma.
\end{proof}
Our next lemma follows immediately from Corollary~\rm\ref{lem:sgrp_xi} by taking the direct sum.
\begin{lem}\label{lem:xiRankLimit}
Let $\{A_i\}_{i\in \mathbb{N}}$ be countable groups. If $A_i\in \mathop{\rm EG} \nolimits$ for all $i\in \mathbb{N}$, then there is $A\in \mathop{\rm EG} \nolimits$ with $\xi(A)\geq \xi(A_i)$ for all $i\in \mathbb{N}$.
\end{lem}
\begin{lem}
For all $\beta<\omega_1$, there is $G\in \mathop{\rm EG} \nolimits$ such that $\xi(G)\geq \beta$.
\end{lem}
\begin{proof}
The proof is the same as that of Lemma \ref{lem:CentRankUnbdd}, with Lemmas \ref{lem:xiRankSucc} and \ref{lem:xiRankLimit} referenced at the appropriate places.
\end{proof}
\begin{lem}\label{lem:xi_unbounded}
For each $\beta<\omega_1$, there is a finitely generated $G\in \mathop{\rm EG} \nolimits$ such that $\xi(G)\geq \beta$.
\end{lem}
\begin{proof}
Let $H\in \mathop{\rm EG} \nolimits$ be a group such that $\xi(H)\geq\beta$. Proposition~\rm\ref{prop:EAembedding} implies that $H$ embeds into a 2-generated group $G\in \mathop{\rm EG} \nolimits$. By Corollary~\ref{lem:sgrp_xi}, $\xi(G)\geq\xi(H)\geq\beta$.
\end{proof}
\begin{thm}\label{thm:EA_nonborel}
$\mathop{\rm EG} \nolimits$ is a non-Borel $\Pi^1_1$ set in $\ms{G}$, and $\mathop{\rm EG} \nolimits \cap \mathscr{G}_{fg}$ is a non-Borel $\Pi^1_1$ set in $\ms{G}_{fg}$.
\end{thm}
\begin{proof}
This follows from Theorem \ref{thm:BddnessThm}, Lemma \ref{lem:Phi_borel}, and Lemma~\rm\ref{lem:xi_unbounded} along with the facts that $\xi(G)\leq \rho(T^1(G))$ and that $\rho\circ \Phi^1$ is a $\Pi^1_1$-rank on $\mathop{\rm EG} \nolimits$.
\end{proof}
Let $\mathop{\rm AG} \nolimits\subseteq\mathscr{G}$ denote the class of countable amenable groups. Via Theorem~\rm \ref{thm:EA_nonborel}, we may now give a non-constructive answer to an old question of Day \cite{D57}, which remained open until Grigorchuk \cite{G84} constructed groups of intermediate growth: \textit{Is it the case that every amenable group is elementary amenable?}
\begin{cor}
There is a finitely generated amenable group that is not elementary amenable.
\end{cor}
\begin{proof}
It is well-known that $\mathop{\rm AG} \nolimits$ is Borel; see Lemma~\rm\ref{lem:AG_borel} for a proof. The set $\mathop{\rm AG} \nolimits\cap\mathscr{G}_{fg}$ is thus Borel. On the other hand, Theorem~\rm \ref{thm:EA_nonborel} gives that $\mathop{\rm EG} \nolimits\cap \mathscr{G}_{fg}$ is not Borel. We conclude that $\mathop{\rm EG} \nolimits\cap \mathscr{G}_{fg} \subsetneq \mathop{\rm AG} \nolimits\cap \mathscr{G}_{fg}$.
\end{proof}
\subsection{Further observations}
By a result of C. Chou \cite[Proposition 2.2]{Ch80}, the class of elementary amenable groups is the smallest class of countable discrete groups satisfying (i), (ii), and (iii) of Definition~\rm\ref{def:EA}. Chou's theorem suggests a natural ranking of elementary amenable groups different from our decomposition rank. Indeed, after \cite{Os02}, define
\begin{enumerate}[$\bullet$]
\item $G\in \mathop{\rm EG} \nolimits_0$ if and only if $G$ is finite or abelian.
\item Suppose $\mathop{\rm EG} \nolimits_{\alpha}$ is defined. Put $G\in \mathop{\rm EG} \nolimits_{\alpha}^e$ if and only if there exists $N\trianglelefteq G$ such that $N\in \mathop{\rm EG} \nolimits_{\alpha}$ and $G/N\in \mathop{\rm EG} \nolimits_{0}$. Put $G\in \mathop{\rm EG} \nolimits_{\alpha}^l$ if and only if $G=\bigcup_{i\in \mathbb{N}}H_i$ where $(H_i)_{i\in \mathbb{N}}$ is an $\subseteq$-increasing sequence of subgroups of $G$ with $H_i\in \mathop{\rm EG} \nolimits_{\alpha}$ for each $i\in \mathbb{N}$. Set $\mathop{\rm EG} \nolimits_{\alpha+1}:=\mathop{\rm EG} \nolimits_{\alpha}^e\cup \mathop{\rm EG} \nolimits_{\alpha}^l$.
\item For $\lambda$ a limit ordinal, $\mathop{\rm EG} \nolimits_{\lambda}:=\bigcup_{\beta<\lambda}\mathop{\rm EG} \nolimits_{\beta}$.
\end{enumerate}
By a result of D. Osin \cite[Lemma 3.2]{Os02}, $\bigcup_{\alpha <\omega_1}\mathop{\rm EG} \nolimits_{\alpha}$ is closed under group extension. It now follows from Chou's theorem that $\mathop{\rm EG} \nolimits=\bigcup_{\alpha <\omega_1}\mathop{\rm EG} \nolimits_{\alpha}$. One may then define for $G\in \mathop{\rm EG} \nolimits$
\[
\mathop{\rm rk}(G):=\min\{\alpha\;|\;G\in \mathop{\rm EG} \nolimits_{\alpha}\}.
\]
\noindent We call $\mathop{\rm rk}(G)$ the \textbf{construction rank} of $G$.
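For example, the lamplighter group $L=(\mathbb{Z}/2\mathbb{Z})\wr\mathbb{Z}$ lies in $\mathop{\rm EG} \nolimits_0^e$, being the extension of $\mathbb{Z}$ by the abelian group $\bigoplus_{\mathbb{Z}}\mathbb{Z}/2\mathbb{Z}$; since $L$ is neither finite nor abelian, $\mathop{\rm rk}(L)=1$.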
We now compare $\xi$ and $\mathop{\rm rk}$ and in the process mostly recover a theorem of Olshanskii and Osin.
\begin{prop}\label{prop:rk_xi}
For $G\in \mathop{\rm EG} \nolimits$, $\mathop{\rm rk}(G)\leq 3\xi(G)$.
\end{prop}
\begin{proof}
We induct on $\xi(G)$; note that $\xi(G)$, being the rank of a non-empty well-founded tree, is always a successor ordinal. For the base case, if $\xi(G)=1$, then $G=\{e\}$, and the inductive claim obviously holds. Suppose the proposition holds up to $\beta$. Say $\xi(G)=\beta+1$ and $\deg(G)=k$. Then $\xi(G_i)\leq \beta$ for each $G_i$ associated to $i\in T^k(G)$, and applying the inductive hypothesis, $\mathop{\rm rk}(G_i)\leq 3\xi(G_i)$.\par
\indent On the other hand, $R_i(G)/G_i$ is finite-by-abelian, say an extension of an abelian group $A$ by a finite group $F$. Letting $F_0$ be the inverse image of $F$ in $R_i(G)$ under the usual projection, $\mathop{\rm rk}(F_0)\leq\mathop{\rm rk}(G_i)+1$, and $R_i(G)/F_0\cong A$. Hence,
\[
\mathop{\rm rk}(R_i(G))\leq (\mathop{\rm rk}(G_i)+1)+1 \leq 3\xi(G_i)+2.
\]
We conclude
\[
\mathop{\rm rk}(G)\leq \sup_{i\in \mathbb{N}}\left(3\xi(G_i)+2 \right)+1\leq 3(\beta+1)=3\xi(G).
\]
This finishes the induction.
\end{proof}
Bounding $\xi$ from above by $\mathop{\rm rk}$ involves a bit more work. We begin with a general lemma for well-founded trees.
\begin{lem}\label{lem:wf_tree}
Suppose $T$ is a well-founded tree. Then for each $k\geq 1$,
\[
\rho(T)\leq \sup_{|s|=k}\rho(T_s) + k.
\]
\end{lem}
\begin{proof}
We argue by induction on $k$. For the base case $k=1$,
\[
\rho(T)=\rho_T(\emptyset)+1=\sup_{i\in T}\left(\rho_T(i)+1\right)+1=\sup_{i\in T}\rho(T_i)+1.
\]
\indent Supposing the lemma holds up to length $k$,
\[
\rho(T)\leq \sup_{|s|=k}\rho(T_s)+k\leq \sup_{|s|=k}\left(\sup_{s^{\smallfrown} i\in T}\rho(T_{s^{\smallfrown} i})+1\right)+k\leq\sup_{|s|=k+1}\rho(T_s)+k+1
\]
completing the induction.
\end{proof}
\begin{prop}\label{prop:xi_rk_ub}
For $G\in \mathop{\rm EG} \nolimits$,
\[
\rho\left(T^1(G)\right)\leq \omega(\mathop{\rm rk}(G)+1).
\]
In particular, $\xi(G)\leq \omega(\mathop{\rm rk}(G)+1)$.
\end{prop}
\begin{proof}
We argue by induction on $\mathop{\rm rk}(G)$. For the base case $\mathop{\rm rk}(G)=0$, the group $G$ is either finite or abelian. There is thus $m\geq 1$ such that every element of $T^1(G)$ has length at most $m$. It follows that $\rho(T^1(G))$ is finite, which proves the base case.\par
\indent Suppose the lemma holds up to $\alpha$ and $\mathop{\rm rk}(G)=\alpha+1$. Let us consider first the case that the construction rank is given by a countable increasing union; say $G=\bigcup_{n\in \omega}H_n$ with $\mathop{\rm rk}(H_n)\leq \alpha$ for each $n$. Since $R_i(G)$ is finitely generated, there is $n(i)$ for which $G_i\leq H_{n(i)}$. We apply the inductive hypothesis and Lemma~\rm\ref{lem:xi_indp} to conclude
\[
\rho\left(T^2(G_i)\right)\leq \rho\left(T^1(H_{n(i)})\right)\leq \omega(\alpha+1).
\]
Hence,
\[
\rho\left(T^1(G)\right)=\sup_{i\in \omega}\rho\left(T^2(G_i)\right) +1 \leq \omega\cdot \alpha +\omega+1\leq \omega(\alpha+2),
\]
verifying the inductive claim in this case. \par
\indent We now consider the case $\mathop{\rm rk}(G)$ is given by a group extension. Suppose $H\trianglelefteq G$ is such that $\mathop{\rm rk}(H)=\alpha$ and $\mathop{\rm rk}(G/H)=0$. If $G/H$ is abelian, $G_i\leq H$ for each $i$. Hence, $\mathop{\rm rk}(G_i)\leq \alpha$, and the desired result follows just as in the increasing union case. Suppose $G/H$ is finite. We may find $k$ such that for all $s\in T^1(G)$ with $|s|=k$, $G_s\leq H$. Applying the inductive hypothesis and Lemma~\rm\ref{lem:xi_indp},
\[
\rho\left(T^{k+1}(G_s)\right)\leq \rho\left(T^{1}(G_s)\right)\leq \omega(\alpha+1).
\]
Lemma~\rm\ref{lem:wf_tree} now implies
\[
\rho\left(T^1(G)\right)\leq \sup_{|s|=k}\rho\left(T^1(G)_s\right)+k\leq \omega(\alpha+1)+k\leq \omega(\alpha+2).
\]
This completes the induction, and we conclude the proposition.
\end{proof}
As a corollary to Lemma~\rm\ref{lem:xi_unbounded} and Proposition~\rm\ref{prop:xi_rk_ub}, we obtain a less detailed version of a theorem from the literature.
\begin{cor}[Olshanskii, Osin {\cite[Corollary 1.6]{OO13}}]\label{cor:OlOs}
For every ordinal $\alpha<\omega_1$, there is $G\in \mathop{\rm EG} \nolimits\cap \mathscr{G}_{fg}$ such that $\alpha\leq\mathop{\rm rk}(G)$. The function $\mathop{\rm rk}\colon\mathop{\rm EG} \nolimits\cap \mathscr{G}_{fg}\to ORD$ is thus unbounded below $\omega_1$.
\end{cor}
\begin{proof} Suppose for contradiction that $\alpha <\omega_1$ is such that $\mathop{\rm rk}(G)< \alpha$ for all $G\in \mathop{\rm EG} \nolimits\cap\mathscr{G}_{fg}$. By Proposition~\rm\ref{prop:xi_rk_ub}, $\xi(G)\leq \omega(\alpha+1)<\omega_1$ for all $G\in\mathop{\rm EG} \nolimits\cap\mathscr{G}_{fg}$, contradicting Lemma~\rm\ref{lem:xi_unbounded}.
\end{proof}
In our proof of Theorem~\rm\ref{thm:EA_nonborel}, we use that $\rho\circ\Phi^1$ is a $\Pi^1_1$-rank. It is natural to ask if $\xi$ itself is a $\Pi^1_1$-rank. This is indeed the case.
\begin{thm}
The decomposition rank is a $\Pi^1_1$-rank on $\mathop{\rm EG} \nolimits$.
\end{thm}
\begin{proof}
Each of the ranks $\phi_l:=\rho\circ \Phi^l$ is a $\Pi^1_1$-rank on $\mathop{\rm EG} \nolimits$ where $\Phi^l$ is as defined in Lemma~\rm\ref{lem:Phi_borel}. Let $\leq_l^{\Pi}\subseteq \mathscr{G}\times \mathscr{G}$ and $\leq_l^{\Sigma} \subseteq \mathscr{G}\times \mathscr{G}$ be the relations given by $\phi_l$ as a $\Pi^1_1$-rank. We now consider the following relations:
\[
\leq_{\xi}^{\Pi}:=\bigcup_{N\in \mathbb{N}}\bigcap _{l\geq N}\leq_l^{\Pi}\text{ and } \leq_{\xi}^{\Sigma}:=\bigcup_{N\in \mathbb{N}}\bigcap _{l\geq N}\leq_l^{\Sigma}.
\]
Since co-analytic and analytic sets are closed under countable unions and intersections, $\leq_{\xi}^{\Pi}$ is co-analytic and $\leq_{\xi}^{\Sigma}$ is analytic. To conclude $\xi$ is a $\Pi^1_1$-rank, it thus remains to show for $H\in \mathop{\rm EG} \nolimits$,
\begin{align*}
G\in \mathop{\rm EG} \nolimits \,\wedge\, \xi(G)\leq \xi(H) &\Leftrightarrow G \leq_{\xi}^\Sigma H \\
&\Leftrightarrow G \leq_\xi^\Pi H.
\end{align*}
Suppose $G\in \mathop{\rm EG} \nolimits$ and $\xi(G)\leq \xi(H)$. Letting $M:=\max\{\deg(G),\deg(H)\}$, we see that $\rho\left(T^k(G) \right)\leq \rho \left(T^k(H) \right)$ for all $k\geq M$ via Lemma~\rm\ref{lem:xi_indp}, hence $\phi_k(G)\leq \phi_k(H)$ for $k\geq M$. We conclude that $G \leq_{\xi}^{\Pi} H$ and $G\leq_{\xi}^{\Sigma} H$.\par
\indent Conversely, suppose $G \leq_{\xi}^{\Pi} H$ or $G\leq_{\xi}^{\Sigma} H$, and let $M\geq 0$ be such that, respectively, $G \leq_{k}^{\Pi} H$ or $G \leq_{k}^{\Sigma} H$ for all $k\geq M$. In either case, $G\in \mathop{\rm EG} \nolimits$ immediately. For each $k\geq M$, we further see $\phi_k(G)\leq \phi_k(H)$, and taking $k=\max\{\deg(G),\deg(H),M\}$,
\[
\xi(G)=\phi_k(G)\leq \phi_k(H)=\xi(H).
\]
Therefore, $\xi$ is a $\Pi^1_1$-rank.
\end{proof}
Propositions \ref{prop:rk_xi} and \ref{prop:xi_rk_ub} combine to give us
$$ \xi(G) \leq \omega(\mathop{\rm rk}(G)+1) \leq \omega(3\xi(G)+1), $$
so $\mathop{\rm rk}$ is closely related to a $\Pi^1_1$-rank. Given this close relationship, it is natural to ask whether or not $\mathop{\rm rk}$ is a $\Pi^1_1$-rank. We suspect, however, that $\mathop{\rm rk}$ is \textit{not} a $\Pi_1^1$-rank as the sets $\mathop{\rm rk}^{-1}(\alpha)$ are likely analytic and non-Borel for suitably large $\alpha$; in fact, we believe $\mathop{\rm rk}^{-1}(2)$ is analytic and non-Borel. Indeed, if $\mathop{\rm EG} \nolimits_\alpha$ is Borel and uncountable, then $\mathop{\rm EG} \nolimits^e_{\alpha}$ is defined by quantifying over $\mathop{\rm EG} \nolimits_\alpha$. We thus expect $\mathop{\rm EG} \nolimits_{\alpha}^e$ to be analytic and, barring some clever argument, non-Borel. (We remark that one can make such a clever argument in the case of $\mathop{\rm EG} \nolimits^e_0$, but it does not seem to work beyond that.) We do not pursue this question further as it is tangential to the aim of this work and somewhat technical.
\section{Borel functions and sets}\label{sec:Borel}
In previous sections, we claimed that certain maps and sets were Borel, and from this and the Boundedness Theorem \ref{thm:BddnessThm}, we concluded that certain subsets of $\mathscr{G}$ were not Borel. A slogan from descriptive set theory is ``Borel = explicit'': if one describes a map or set without appealing to something like the axiom of choice or quantifying over an uncountable space, it should be Borel. As the maps and sets from previous sections are ``explicit'' in this sense, we were content to state that they were Borel without further proof. For readers less familiar with descriptive set theory, we offer this section to verify our previous claims.
Recall that $\mathscr{G}=\{ N\trianglelefteq\mathbb{F}_{\omega} \}$ and that we identify $N$ with the group $\mathbb{F}_{\omega}/N$. We make frequent use of the usual projection from $\mathbb{F}_{\omega}$ to $\mathbb{F}_{\omega}/N$ and always denote this projection by $f_N$. Every countable group is identified with an element of $\mathscr{G}$; in fact, a given group $G$ corresponds to many distinct elements of $\mathscr{G}$ as there are many different surjections of $\mathbb{F}_{\omega}$ onto $G$. We fix an enumeration $(\gamma_i)_{i\in \mathbb{N}}$ for $\mathbb{F}_{\omega}$, and this gives rise to an enumeration of $G$ in the obvious way. Let us also enumerate the generators for $\mathbb{F}_{\omega}$ as $(a_i)_{i\in \mathbb{N}}$. Recall finally that $\mc G_{fg} = \cup_{n\in\mathbb{N}} \{ N \trianglelefteq \mathbb{F}_{\omega} \mid \forall k\geq n \;a_k\in N \}$. This is an $F_\sigma$ subset of $\mathscr{G}$. In particular, its Borel sets as a Borel space are precisely those sets of the form $B\cap\mathscr{G}_{fg}$ where $B\subseteq\mathscr{G}$ is Borel.
\subsection{Borel functions}
The sub-basic open sets of $\mathscr{G}$ are those of the form $O_{\gamma}=\{ N \mid \gamma \in N\}$ and their complements. The Borel $\sigma$-algebra on $\mathscr{G}$ is thus generated by the $O_\gamma$, so in order to show a map $\psi \colon\mathscr{G}\to\mathscr{G}$ is Borel, we need only check that $\psi^{-1}(O_\gamma)$ is Borel for all $\gamma\in\mathbb{F}_{\omega}$.\par
\indent We begin with the easier examples of Borel maps.
\begin{lem}\label{lem:QuotientBorel}
For each $\delta\in\mathbb{F}_{\omega}$, there is a Borel map $Q_\delta\colon\mathscr{G}\to\mathscr{G}$ such that if $N\in\mathscr{G}$ with $\mathbb{F}_{\omega}/N\cong G$, then $\mathbb{F}_{\omega}/Q_\delta(N)\cong G/\ngrp{f_N(\delta)}$.
\end{lem}
\begin{proof}
Since $G/\ngrp{f_N(\delta)}\cong \mathbb{F}_{\omega}/\ngrp{N,\delta}$, the map $Q_\delta(N):=\ngrp{\delta}N$ meets our requirements. We need only check that it is Borel. For this,
\begin{align*}
Q_\delta^{-1}(O_\gamma) &= \{ N\in\mathscr{G} \mid \gamma\in\ngrp{\delta}N \} \\
&= \{ N\in\mathscr{G} \mid \exists g\in\ngrp{\delta} \; g^{-1}\gamma\in N \} \\
&= \bigcup_{g\in\ngrp{\delta}} \{ N\in\mathscr{G} \mid g^{-1}\gamma\in N\}
\end{align*}
which is open, so we have verified the lemma.
\end{proof}
We can now easily prove Lemma \ref{lem:MaxNMapBorel}.
\begin{proof}[Proof of Lemma \ref{lem:MaxNMapBorel}]
By repeated composition, we may define $Q_s\colon\mathscr{G}\to\mathscr{G}$ for all $s\in\mathbb{N}^{<\mathbb{N}}\setminus \{\emptyset\}$ so that
\[
Q_s(N)=\ngrp{\gamma_s}N;
\]
we define $Q_{\emptyset}:=id$. The previous lemma ensures these maps are Borel.\par
\indent Now suppose $t\in\mathbb{N}^{<\mathbb{N}}$ is of the form $v^{\smallfrown} i$ with $v\in\mathbb{N}^{<\mathbb{N}}$ and $i\in\mathbb{N}$ and consider the basic open set $O_t:=\{T\in Tr\mid t\in T\}$ of $Tr$. We see that
\[
\Phi_{M_n}^{-1}(O_t)=\{ N\in\mathscr{G} \mid Q_v(N)\neq Q_t(N)\},
\]
which is Borel. The map $\Phi_{M_n}$ is thus Borel.
\end{proof}
\begin{lem}\label{lem:Rn_borel}
For each $n\geq 0$, there is a Borel map $R_n\colon\mathscr{G}\to\mathscr{G}$ such that if $N\in\mathscr{G}$ with $\mathbb{F}_{\omega}/N\cong G$, then $\mathbb{F}_{\omega}/R_n(N)\cong\<g_0,\ldots,g_n\>$.
\end{lem}
\begin{proof}
Let $\pi_n\colon\mathbb{F}_{\omega}\to\mathbb{F}_{\omega}$ be induced by mapping the generators $(a_i)_{i\in \mathbb{N}}$ as follows:
\[\pi_n(a_i) =
\begin{cases}
\gamma_i, & 0\leq i\leq n \\
e, & \text{otherwise.}
\end{cases}
\]
Suppose that $N\in\mathscr{G}$ with $\mathbb{F}_{\omega}/N=G$. The function $f_N\circ\pi_n\colon\mathbb{F}_{\omega}\to \grp{g_0,\dots,g_n}$ is then a surjection. We thus define $R_n$ to be the map sending $N$ to $\ker (f_N\circ\pi_n)$. Since $\mathbb{F}_{\omega}/\ker (f_N\circ\pi_n)\cong \grp{g_0,\dots,g_n}$, this works as intended. \par
\indent As $\gamma\in\ker (f_N\circ\pi_n)$ iff $\pi_n(\gamma)\in\ker (f_N)$, we conclude that
\[
R_n^{-1}(O_\gamma) = \{ M \in\mathscr{G} \mid \pi_n(\gamma)\in M \},
\]
which is the open set $O_{\pi_n(\gamma)}$. The map $R_n$ is thus Borel.
\end{proof}
The above proof works for subgroups generated by any fixed collection of elements of $G$; that is to say, the same proof shows the maps $G\mapsto G_s$ defined in Section \ref{sec:Max} are Borel, so as before, we get Lemma \ref{lem:MaxMapBorel} as a corollary.
We now move on to proving Lemma \ref{lem:CentMapBorel}; it follows from the next lemma, whose proof is more involved than the previous two.
\begin{lem}
For each $s\in\mathbb{N}^{<\mathbb{N}}\setminus \{\emptyset\}$, there is a Borel map $C_s\colon\mathscr{G}\to\mathscr{G}$ such that if $N\in\mathscr{G}$ with $\mathbb{F}_{\omega}/N\cong G$, then $\mathbb{F}_{\omega}/C_s(N)\cong C_G(\{g_s\})$.
\end{lem}
\begin{proof}
Suppose that $N\in\mathscr{G}$ and $\mathbb{F}_{\omega}/N\cong G$. Define $\pi_N\colon\mathbb{F}_{\omega}\to\mathbb{F}_{\omega}$ by
\[
\pi_N(a_j):=
\begin{cases}
\gamma_j, & \text{ if }f_N(\gamma_j)\in C_G\left(\{f_N(\gamma_s)\}\right) \\
e, & \text{ else.}
\end{cases}
\]
The map $f_N\circ \pi_N\colon\mathbb{F}_{\omega}\to C_G(\{g_s\})$ is then a surjection, so the map $N\mapsto\ker(f_N\circ\pi_N)$ works as intended. In order to check it is Borel, we introduce the set
\[
S_j:=\{N\in\mathscr{G} \mid \pi_N(a_j)=\gamma_j\}.
\]
Since $f_N(\gamma_j)\in C_G(\{f_N(\gamma_s)\})$ iff $[\gamma_j,\gamma_{s_i}]\in N$ for each $0\leq i \leq |s|-1$,
\[
S_j=\{N\in\mathscr{G} \mid [\gamma_j,\gamma_{s_i}]\in N \text{ for each }0\leq i \leq |s|-1\},
\]
which is an open set.
We now fix a word $\delta=\delta(a_0,\dots,a_m)\in \mathbb{F}_{\omega}$ and consider the pre-image of the basic open set $O_{\delta}$. Our notation $\delta(a_0,\dots,a_m)$ indicates the word $\delta$ only uses the letters appearing in the parentheses. We may evaluate $\pi_N(\delta)$ by substituting in the images of $a_0,\dots,a_m$, so $\pi_{N}(\delta)=\delta(x_0,\dots,x_m)$ for some $\ol{x}:=(x_0,\dots,x_m)\in \Omega:=\prod_{i=0}^m\{\gamma_i,e\}$ that depends on $N$. The set of $N\in \mathscr{G}$ such that $\pi_N(\delta(a_0,\ldots,a_m))=\delta(\ol{x})$ for some fixed $\ol{x}$ is the Borel set
\[
S_{\ol{x}}:=\bigcap_{x_j=\gamma_j} S_j \cap \bigcap_{x_k=e} S_k^c.
\]
Since $\delta\in\ker(f_N\circ \pi_N)$ iff $\pi_N(\delta)\in\ker f_N=N$, we now see that
\begin{align*}
C_s^{-1}(O_\delta) &= \{N\in\mathscr{G} \mid \pi_N(\delta)\in N\} \\
&= \bigcup_{\ol{x}\in\Omega} \left( \{N\in\mathscr{G} \mid \delta(\ol{x})\in N\} \cap S_{\ol{x}} \right)
\end{align*}
which is Borel.
\end{proof}
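The substitution step in these proofs, evaluating $\pi_N(\delta)$ by replacing each generator with its image and freely reducing, can be made concrete. The toy sketch below is our own illustration, not part of the paper's formalism: a word in the free group is encoded as a list of signed $1$-based generator indices, so that killing a generator amounts to substituting the empty word.

```python
# Toy encoding (ours, not the paper's): a word in the free group is a list
# of signed 1-based generator indices, e.g. [1, -2] stands for a_1 a_2^{-1}.
def reduce_word(w):
    """Freely reduce a word by cancelling adjacent inverse pairs."""
    out = []
    for letter in w:
        if out and out[-1] == -letter:
            out.pop()
        else:
            out.append(letter)
    return out

def substitute(w, images):
    """Evaluate delta(x_1, ..., x_m): replace generator i (1-based) by the
    word images[i-1]; the empty word [] encodes the identity e."""
    result = []
    for letter in w:
        img = images[abs(letter) - 1]
        result.extend(img if letter > 0 else [-g for g in reversed(img)])
    return reduce_word(result)

# Killing a_2 (images[1] = []) collapses a_1 a_2 a_1^{-1} to the identity:
print(substitute([1, 2, -1], [[1], []]))   # []
```

This is exactly why the preimage computations above decompose over the finitely many choices of $\ol{x}\in\Omega$: once each generator's image is fixed, $\pi_N(\delta)=\delta(\ol{x})$ is determined.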
We next show the maps $S_k$ from Section \ref{sec:EAGroups} are Borel. The main idea is the same as in the previous lemma.
\begin{lem}\label{lem:Sk_Borel}
For each $k\geq 1$, there is a Borel map $S_k:\mathscr{G}_{fg}\rightarrow \mathscr{G}$ such that if $\mathbb{F}_{\omega}/N=G$, then
\[
\mathbb{F}_{\omega}/S_k(N)\cong [G,G]\cap \bigcap \mc{N}_k(G)
\]
where $\mc{N}_k(G):=\{M\trianglelefteq G\mid |G:M|\leq k+1\}$.
\end{lem}
\begin{proof}
Suppose $N\in \mathscr{G}_{fg}$ and $G\cong\mathbb{F}_{\omega}/N$. Similarly to the previous lemma, we define $\pi_N\colon\mathbb{F}_{\omega}\to\mathbb{F}_{\omega}$ by
\[
\pi_N(a_i):=
\begin{cases}
\gamma_i, & \text{ if }f_N(\gamma_i)\in [G,G]\cap\bigcap\mc{N}_k(G)\\
e, & \text{ else.}
\end{cases}
\]
Define $S_k:\mathscr{G}_{fg}\rightarrow \mathscr{G}$ by $N\mapsto \ker(f_N\circ\pi_{N})$; this map behaves as desired. We claim this map is also Borel. \par
\indent Define
\[
\mc N_k := \left\{ M\in\mathscr{G}_{fg} \mid |\mathbb{F}_{\omega} : M|\leq k+1 \right\}.
\]
If $N\in\mathscr{G}_{fg}$, then the collection of index-$\leq k+1$ subgroups of $\mathbb{F}_{\omega}/N$ is precisely $\{ MN/N \mid M\in\mc N_k \}$. Therefore, $f_N(\gamma_i)\in [G,G]\cap\bigcap\mc N_k(G)$ iff $\gamma_i\in [\mathbb{F}_{\omega},\mathbb{F}_{\omega}]N\cap \bigcap_{M\in\mc N_k} MN $. As in the previous lemma, we may define
\begin{align*}
S_i &:= \left\{N\in \mathscr{G}_{fg} \mid \pi_N(a_i)=\gamma_i \right\}\\
&= \left\{N\in \mathscr{G}_{fg} \mid \gamma_i \in [\mathbb{F}_{\omega},\mathbb{F}_{\omega}]N\cap \bigcap_{M\in\mc N_k} MN \right\}\\
&= \bigcup_{\delta\in[\mathbb{F}_{\omega},\mathbb{F}_{\omega}]} \left\{N\in\mathscr{G}_{fg} \mid \delta^{-1}\gamma_i\in N \right\} \cap \bigcap_{M\in \mc{N}_k}\bigcup_{\delta\in M}\left\{N\in\mathscr{G}_{fg} \mid \delta^{-1}\gamma_i\in N\right\}.
\end{align*}
The last set is Borel since $\mc{N}_k$ is countable. Given $\ol{x}:=(x_0,\dots,x_m)\in \Omega:=\prod_{i=0}^m\{\gamma_i,e\}$, we define $S_{\ol{x}}$ as before. \par
\indent Fixing a word $\delta=\delta(a_0,\dots,a_m)\in \mathbb{F}_{\omega}$, we now consider the pre-image of the basic open set $O_{\delta}$. We see
\begin{align*}
S_k^{-1}(O_\delta) &= \{N\in\mathscr{G} \mid \pi_N(\delta)\in N\} \\
&= \bigcup_{\ol{x}\in\Omega} \big( \{N\in\mathscr{G} \mid \delta(\ol{x})\in N\} \cap S_{\ol{x}} \big)
\end{align*}
which is Borel.
\end{proof}
\indent Using Lemmas~\ref{lem:Rn_borel} and~\ref{lem:Sk_Borel}, we build Borel maps $\Psi^l_s:\ms{G}\rightarrow \ms{G}$ for each $l\in\mathbb{N}$ and $s\in \mathbb{N}^{<\mathbb{N}}$. For $s=\emptyset$, put $\Psi^l_{\emptyset}=id$. Supposing we have defined $\Psi^l_s$, define $\Psi^l_{s^{\smallfrown} n}$ by
\[
\Psi^l_{s^{\smallfrown} n}(N):=S_{|s|+l}\circ R_n(\Psi^l_s(N)).
\]
It follows that if $s\in T^l(G)$ with $G=\mathbb{F}_{\omega}/N$, then $\mathbb{F}_{\omega}/\Psi^l_s(N)= G_s$. If $s\notin T^l(G)$, then $\mathbb{F}_{\omega}/\Psi^l_s(N)=\{e\}$.
\begin{proof}[Proof of Lemma \ref{lem:Phi_borel}]
\indent Fixing $s\in \mathbb{N}^{<\mathbb{N}}$ and $l\in\mathbb{N}$,
\[
(\Phi^l)^{-1}(O_s)=\left\{N\in \ms{G} \mid s\in T^l(\mathbb{F}_{\omega}/N)\right\}.
\]
If $s=\emptyset$, then $(\Phi^l)^{-1}(O_s)=\ms{G}$ which is plainly Borel. Else, say $s=r^{\smallfrown} n$, so
\[
\begin{array}{ccl}
(\Phi^l)^{-1}(O_s) & = & \left\{N\in \ms{G} \mid r^{\smallfrown} n \in T^l(\mathbb{F}_{\omega}/N)\right\}\\
& = & \left\{N\in \ms{G} \mid (\mathbb{F}_{\omega}/N)_r\neq \{e\}\right\}\\
& = & (\Psi^l_r)^{-1}(\ms{G}\setminus \{e\}),
\end{array}
\]
which is Borel.
\end{proof}
\subsection{Borel sets}
Recall that $\mathop{\rm AG} \nolimits$ denotes the class of countable amenable groups.
\begin{lem}[Folklore]\label{lem:AG_borel}
The set $\mathop{\rm AG} \nolimits$ is Borel in $\ms{G}$, and therefore, $\mathop{\rm AG} \nolimits\cap \mathscr{G}_{fg}$ is Borel.
\end{lem}
\begin{proof}
Amenable groups are characterized by F\o lner's property: A countable group $G$ is amenable if and only if for every finite $F\subseteq G$ and every $n\geq 1$, there is a finite non-empty subset $K\subseteq G$ such that
\[
\frac{|xK\Delta K|}{|K|}\leq \frac{1}{n}
\]
for all $x\in F$ where $\Delta$ denotes the symmetric difference.\par
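As a concrete illustration of the F\o lner condition (our own example, not part of the proof): in $\mathbb{Z}$, the intervals $K_n=\{0,\dots,n-1\}$ satisfy $|xK_n\,\Delta\,K_n|/|K_n|=2/n$ for $x=1$, so the ratio can be made smaller than any $1/n$.

```python
# Folner's condition in Z (illustration, not part of the proof): for the
# interval K_n = {0, ..., n-1} and x = 1, the translate x + K_n differs
# from K_n only at the two endpoints, so the ratio is 2/n.
def folner_ratio(x, K):
    K = set(K)
    xK = {x + k for k in K}          # the group operation in Z is addition
    return len(xK ^ K) / len(K)      # |xK symmetric-difference K| / |K|

print(folner_ratio(1, range(10)))    # 0.2
print(folner_ratio(1, range(1000)))  # 0.002
```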
\indent Letting $P_f(\mathbb{F}_{\omega})$ be the collection of finite subsets of $\mathbb{F}_{\omega}$, we infer
\[
\mathop{\rm AG} \nolimits=\bigcap_{F\in P_f(\mathbb{F}_{\omega})}\bigcap_{n\geq 1}\bigcup_{K\in P_f(\mathbb{F}_{\omega})}\bigcap_{x\in F}\left\{N\in \mathscr{G} \mid \frac{|f_N(x)f_N(K)\Delta f_N(K)|}{|f_N(K)|}\leq \frac{1}{n}\right\}.
\]
It thus suffices to show
\[
\Omega:=\left\{N\in \mathscr{G} \mid \frac{|f_N(x)f_N(K)\Delta f_N(K)|}{|f_N(K)|}\leq \frac{1}{n}\right\}
\]
is Borel. It is easy to see that the conditions $|f_N(K)|=m$ and $|f_N(x)f_N(K)\Delta f_N(K)|=l$ define Borel sets, hence
\[
\Omega=\bigcup_{\frac{l}{m}\leq \frac{1}{n}}\left\{N\mid |f_N(x)f_N(K)\Delta f_N(K)|=l\text{ and } |f_N(K)|=m \right\}
\]
is Borel. The set $\mathop{\rm AG} \nolimits$ is thus Borel.
\end{proof}
\section{Further remarks}\label{sec:Remarks}
Our results give tools to study groups enjoying any of the other chain conditions in the literature. Perhaps more interestingly, our results suggest new questions concerning elementary amenable groups and groups with the minimal condition on centralizers, maximal condition on subgroups, and maximal condition on normal subgroups. \par
\indent Most immediately, one desires a better understanding of the various rank functions. In the case of max groups, there are no infinite subgroup rank two groups, the infinite groups with subgroup rank 3 are Tarski monsters, and $\mathbb{Z}$ has rank $\omega+1$. In the case of max-n, examples of finite rank groups are easy to produce and understand; however, transfinite rank examples are somewhat mysterious. Following Olshanskii and Osin, cf. \cite[Corollary 1.6]{OO13}, we ask
\begin{quest}
For which ordinals $\alpha$ is there an infinite group in $\mc M_C$ ($\mc M_{\max{}}, \mc M_n$) such that the centralizer rank (subgroup rank, length) is $\alpha$?
\end{quest}
\indent In a different direction, showing a set is non-Borel in $\mathscr{G}$ demonstrates there is no ``simple'' definition of the class. Our techniques give a way to determine if a subset of $\mathscr{G}$ (or of a Borel subset of $\mathscr{G}$) given by a chain condition is not Borel and hence to determine if it does not admit a ``simple'' characterization. In the setting of max-n groups, there is a particularly intriguing question along these lines. By an old result of Hall, a two-step solvable group is max-n if and only if it is finitely generated; this is certainly a Borel condition. On the other hand, no such nice characterization of three-step solvable groups with max-n is known. We thus ask
\begin{quest}
Is the set of max-n three-step solvable marked groups Borel?
\end{quest}
\indent In a similar vein, our results on elementary amenable groups, in a sense, show elementary amenable groups are not ``elementary''. One naturally asks
\begin{quest}[Hume] Is there an intermediate ``elementary'' Borel set between $\mathop{\rm EG} \nolimits\cap \mathscr{G}_{fg}$ and $\mathop{\rm AG} \nolimits\cap \mathscr{G}_{fg}$? More precisely, is there an elementary class $\ms{E}(B)$ in the sense of Osin \cite{Os02} with $B$ ``small'' such that $\mathop{\rm EG} \nolimits\cap \mathscr{G}_{fg}\subseteq \ms{E}(B)\subsetneq \mathop{\rm AG} \nolimits\cap\mathscr{G}_{fg}$ and $\ms{E}(B)$ is Borel?
\end{quest}
We also arrive at new questions with a descriptive-set-theoretic flavor.
\begin{defn}
Let $Y$ be an uncountable Polish space. A set $A\subseteq Y$ is \textbf{$\Pi^1_1$-complete} if $A$ is $\Pi^1_1$ and for every co-analytic $B\subseteq X$ with $X$ an uncountable Polish space, $B$ Borel reduces to $A$.
\end{defn}
The idea is that $\Pi^1_1$-complete sets are as complicated as they possibly could be; Theorem \ref{thm:WFComplete} says that $WF\subseteq Tr$ is $\Pi^1_1$-complete.
\begin{quest}
Are any of $\mc M_C,\mc M_{\max{}},\mc M_n,$ or $\mathop{\rm EG} \nolimits$ $\Pi^1_1$-complete?
\end{quest}
Note that for a positive answer it suffices to show that $WF$ (or some other $\Pi^1_1$-complete set) Borel reduces to these sets. Under an extra set-theoretic assumption known as $\Sigma^1_1$-Determinacy, every $\Pi^1_1$ set which is not Borel is in fact $\Pi^1_1$-complete. We do not expect that extra set-theoretic assumptions should be necessary to prove any of the sets are $\Pi^1_1$-complete; we mention this as evidence that the positive answer is indeed the correct one. It is worth noting the question is a problem in group theory. For example, in the case of $\mathop{\rm EG} \nolimits$ one must devise a method of building a group from a tree so that well-founded trees give rise to elementary amenable groups and ill-founded trees give rise to non-elementary-amenable groups.
\subsection*{Acknowledgments}
The authors would like to thank Alexander Kechris and Andrew Marks for helpful mathematical discussions.
J. Williams was partially supported by NSF Grant 1044448, Collaborative Research: EMSW21-RTG: Logic in Southern California.
\bibliographystyle{bibgen}
The nonlinear forces between colliding beams are one of the main
performance limitations in modern colliders. Electron lenses have been
proposed as a tool for mitigation of beam-beam
effects~\cite{Shiltsev:PRSTAB:1999}. It was demonstrated that the
pulsed electron current can produce different betatron tune shifts in
different proton or antiproton bunches, thus cancelling bunch-to-bunch
differences generated by long-range beam-beam
forces~\cite{Shiltsev:PRL:2007, Shiltsev:NJP:2008,
Shiltsev:PRSTAB:2008}. In these experiments, the electron beam had a
flat transverse current-density distribution, and the beam size was
larger than the size of the circulating beam. To first order, the
effect of the electron lens was a bunch-by-bunch linear betatron tune
shift.
The present research went a step further. We studied the feasibility
of using the magnetically confined, nonrelativistic beam in the
Tevatron electron lenses to compensate nonlinear head-on beam-beam
effects in the antiproton beam. For this purpose, the transverse
density distribution of the electron beam must mimic that of the
proton beam, so that the space charge force acting on the antiprotons
is partially canceled. The betatron phase advance between the
interaction points and the electron lens should be close to an integer
multiple of~$\pi$.
During regular Tevatron operations, both stochastic and electron
cooling were used to reduce the transverse emittance of
antiprotons. Under these conditions, antiprotons were transversely
much smaller than protons, making head-on effects essentially
linear. Intensity loss rates of antiprotons due to beam-beam were
caused by long-range interactions and rarely exceeded 5\% per
hour. While an improvement of the Tevatron performance by head-on
beam-beam compensation was not foreseen, we were interested in the
feasibility of the concept and in providing the experimental basis for
the simulation codes used in the planned application of electron
lenses to the RHIC collider at BNL~\cite{Fischer:IPAC:2012,
Gu:IPAC:2012, Luo:PRSTAB:2012}.
\section{EXPERIMENTAL APPARATUS}
\begin{figure}[b!]
\centering
\begin{tabular}{cc}
\emph{side view} & \emph{top view} \\
\includegraphics[height=0.48\columnwidth]{Fig_gun_side} &
\includegraphics[width=0.48\columnwidth]{Fig_gun_top} \\
\includegraphics[width=0.48\columnwidth]{Fig_gun_prof3D} &
\includegraphics[width=0.48\columnwidth]{Fig_gun_prof1D} \\
\multicolumn{2}{c}{\emph{measured current-density profiles}}
\end{tabular}
\caption{The 10.2-mm (0.4-in) Gaussian electron gun: the assembled gun
(top left); a detail of the copper cylindrical anode and of the
convex tungsten dispenser cathode surface (top right); example of
current-density measurements (bottom).}
\label{fig:gun}
\end{figure}
\begin{figure}[b!]
\centering
\includegraphics[width=\columnwidth]{Fig_TEL2_layout}
\caption{Layout of the beams in the Tevatron electron lens. (Dimensions
are in millimeters.)}
\label{fig:TEL}
\end{figure}
\begin{table}[t!]
\caption{Tevatron lattice functions (amplitude~$\beta$,
dispersion~$D$, and betatron phase~$\phi$) at the interaction points and at
the electron lens.}
\label{tab:lattice}
\begin{center}
\begin{tabular}{lrrrrrr}
\toprule
& $\beta_x$ & $\beta_y$ & $D_x$ & $D_y$ & $\phi_x$ & $\phi_y$ \\
& \multicolumn{2}{c}{[m]} & \multicolumn{2}{c}{[m]} &
\multicolumn{2}{c}{[$2\pi$]} \\
\midrule
CDF & 0.30 & 0.30 & 0.0 & 0.0 & 6.63 & 6.85 \\
DZero & 0.50 & 0.50 & 0.0 & 0.0 & 13.77 & 13.85 \\
TEL2 & 68 & 153 & 1.2 & $-$1.0 & 3.17 & 3.22 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
An electron gun based on a convex tungsten dispenser cathode operating
at a temperature of 1400~K was designed and
built~\cite{Kamerdzhiev:EPAC:2008}. The diameter of the cathode was
10.2~mm (0.4~in). Its shape and the geometry of the electrodes were
chosen to produce a current density profile close to a Gaussian
distribution. Figure~\ref{fig:gun} shows pictures of the electron gun
and an example of a current density measurement. The maximum peak
current yield was 0.5~A at a cathode-anode voltage of 4.6~kV. The
standard deviation (rms) of the current profile distribution was
$\sigma_g = \q{2.0}{mm}$ at the gun.
The electron gun was installed in the second Tevatron electron lens
(TEL2) in June~2009 (Figure~\ref{fig:TEL}). In the electron lens, the
beam was generated inside the gun solenoid (0.1--0.4~T) and guided by
a superconducting solenoid (1--6~T) through the 3-m overlap region,
where it interacted with the circulating beams (protons or
antiprotons) before being extracted and dumped in the collector. The
size~$\sigma_m$ of the electron beam in the overlap region was
controlled by the ratio between the magnetic field in the gun
solenoid~$B_g$ and in the main solenoid~$B_m$: $\sigma_m = \sigma_g
\cdot \sqrt{B_g/B_m}$. Distortions of the electron beam profile due to
its space-charge evolution were mitigated by the large axial field
($B_m > \q{1}{T}$).
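As a quick numerical check of the compression relation $\sigma_m = \sigma_g\sqrt{B_g/B_m}$: the field values below are illustrative choices within the quoted ranges, not operational settings reported in the text.

```python
import math

# Check of sigma_m = sigma_g * sqrt(B_g / B_m).  sigma_g = 2.0 mm is quoted
# above; the field values are illustrative picks inside the quoted ranges.
def beam_size_mm(sigma_g_mm, B_g, B_m):
    return sigma_g_mm * math.sqrt(B_g / B_m)

print(beam_size_mm(2.0, 0.3, 3.0))   # ~0.63 mm, comparable to the 0.6 mm
                                     # overlap-region size reported later
```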
In the Tevatron, 36 proton bunches (referred to as P1--P36) collided
with 36 antiproton bunches (A1--A36) at the center-of-momentum energy
of 1.96~TeV. There were 2 head-on interaction points (IPs),
corresponding to the CDF and the DZero experiments. Protons and
antiprotons circulated in the same vacuum pipe on helical
orbits. Their separation at TEL2 was 9~mm (about 6~mm both
horizontally and vertically). Each particle species was arranged in
3~trains of 12~bunches each, circulating at a revolution frequency of
47.7~kHz. The bunch spacing within a train was 396~ns, or 21 rf
buckets at 53~MHz. The bunch trains were separated by 2.6-$\mu$s
abort gaps. The synchrotron frequency was 34~Hz, or $7\times 10^{-4}$
times the revolution frequency. The machine operated with betatron
tunes near 20.58. The relevant lattice functions are reported in
Table~\ref{tab:lattice}. Thanks to the special 5-kV high-voltage
modulator (200-ns rise time), the electron beam could be synchronized
with any bunch or group of bunches, and its intensity could be varied
bunch by bunch~\cite{Pfeffer:JINST:2011}.
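The timing figures quoted above are mutually consistent, as a short arithmetic check shows (all numbers are taken directly from this paragraph):

```python
# Consistency check of the Tevatron timing numbers quoted above.
rf_freq = 53e6            # Hz, rf frequency
rev_freq = 47.7e3         # Hz, revolution frequency
sync_freq = 34.0          # Hz, synchrotron frequency

bunch_spacing_s = 21 / rf_freq        # 21 rf buckets between bunches
print(bunch_spacing_s * 1e9)          # ~396 ns, as stated
print(sync_freq / rev_freq)           # ~7e-4 of the revolution frequency
```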
\section{RESULTS}
Experiments on beam-beam compensation with Gaussian electron beams
were carried out between September~2009 and July~2010. Preliminary
results were discussed in Refs.~\cite{Valishev:IPAC:2010,
Valishev:PAC:2011}.
\subsection{Beam Alignment and Loss Patterns}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Fig_losses}
\caption{Measured loss rates (red) and calculated intensity decay
rates (blue) during a vertical electron beam scan across the
antiproton beam. The antiproton vertical tune was lowered by 0.003
to enhance the effect. No losses caused by the electron beam were
observed with nominal tunes.}
\label{fig:vscan}
\end{figure}
Because of the nonlinear fields, alignment between electrons and
antiprotons was critical. We performed several position scans to
ensure that the response of the beam position monitors was accurate
for both fast signals from antiproton bunches and for slower signals
from electron pulses. These position scans were also useful to assess
the effects of misalignments on losses and to compare the experimental
results with numerical calculations. We simulated losses during a
vertical alignment scan using the weak-strong numerical tracking code
Lifetrac~\cite{Shatilov:PAC:2005}. The model included the full
collision pattern for the relevant antiproton bunch and a thin-kick
Gaussian electron beam implemented via an analytical formula. The beam
parameters corresponded to the conditions at the time of the
measurement at the end of Store~7718. We tracked a bunch of 5\,000
macroparticles for $3\times 10^6$~turns for various vertical electron
beam misalignments and evaluated the intensity loss rate. The
simulation reproduced several features observed in experiments. First,
the simulation performed at the nominal antiproton working point
(tunes set to $Q_x=0.575$, $Q_y=0.581$) predicted no losses for any
value of the vertical misalignment. This was also observed
experimentally: at the nominal working point, the electron beam did
not cause any additional beam loss. Similarly to the experiment, the
vertical tune in the simulation had to be lowered by 0.003 to produce
particle losses. Moreover, the simulation at the modified working
point demonstrated the characteristic double-hump structure of the
loss rate as a function of offset. The position of peaks was in good
agreement with the measurements. Figure~\ref{fig:vscan} shows the
measured loss rates (red crosses) and the simulated decay rates (blue
crosses and lines). Both electron and antiproton vertical rms beam
sizes in the overlap region were equal to 0.6~mm.
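The ``thin-kick Gaussian electron beam implemented via an analytical formula'' is presumably the standard thin-lens kick of a round Gaussian charge distribution; the sketch below shows that formula as an assumption about the model's internals, with the overall prefactor left symbolic.

```python
import math

# Thin-lens kick from a round Gaussian charge distribution (assumed form of
# the analytical electron-lens kick in the tracking model; the prefactor k
# is left symbolic).  The radial kick is k * (1 - exp(-r^2/2sigma^2)) / r.
def gaussian_kick(x, y, sigma, k=1.0):
    r2 = x * x + y * y
    if r2 == 0.0:
        return 0.0, 0.0                      # kick vanishes on axis
    factor = k * (1.0 - math.exp(-r2 / (2.0 * sigma ** 2))) / r2
    return factor * x, factor * y            # (kick_x, kick_y)
```

Near the axis the kick is linear in amplitude (factor $\to k/2\sigma^2$, a pure tune shift), while far from the axis it falls off as $1/r$; this amplitude dependence is what produces nonlinear detuning and, for a misaligned beam, loss patterns like the double-hump structure above.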
\subsection{Incoherent Tune Shifts and Tune Spread}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Fig_Schottky}
\caption{Schottky spectra vs. electron lens current.}
\label{fig:Schottky}
\end{figure}
The effect of the electron lens on the incoherent tune distribution
could be observed directly during dedicated antiproton-only stores,
when there was no contamination from protons in the 21-MHz Schottky
signal. Figure~\ref{fig:Schottky} shows the vertical Schottky signal
as a function of electron lens current. The vertical tick marks
indicate the expected magnitude of the linear beam-beam
parameter~$\xi_e$ due to $N_e$~electrons with Gaussian standard
deviation~$\sigma_e$ and velocity~$\beta_e c$ at a location where the
amplitude function is~$\beta$:
\begin{equation}
\xi_e = -\frac{N_e r_p \beta (1+\beta_e)}{4\pi \gamma_p \sigma_e^2}.
\end{equation}
Here, $r_p$ represents the classical radius of the proton and
$\gamma_p$ is the relativistic factor of the circulating beam. As
expected, a downward shift and widening of the antiproton tune
distribution is observed. The width of the vertical tune line agrees
well with the hypothesis that $\xi_e$ represents the maximum tune
shift.
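For a rough numerical feel of this formula: apart from $\sigma_e\approx\q{0.6}{mm}$ and $\beta\approx\q{150}{m}$ (cf.\ Table~\ref{tab:lattice}), the parameter values below are our illustrative assumptions, not numbers reported in the text.

```python
import math

# Order-of-magnitude evaluation of xi_e.  Apart from sigma_e ~ 0.6 mm and
# beta ~ 150 m, the values below are illustrative assumptions.
r_p = 1.535e-18       # classical proton radius [m]
gamma_p = 1045        # Lorentz factor of 980-GeV (anti)protons
beta = 150.0          # amplitude function at the lens [m]
beta_e = 0.13         # electron v/c for a ~4.6-kV beam (assumption)
sigma_e = 0.6e-3      # electron beam rms size [m]

def xi_e(N_e):
    return -N_e * r_p * beta * (1 + beta_e) / (4 * math.pi * gamma_p * sigma_e ** 2)

print(xi_e(1e11))     # ~ -0.0055: ~1e11 electrons in the overlap region
                      # give the order of the xi_e = -0.006 quoted later
```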
\subsection{Effects on Coherent Beam-beam Modes}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Fig_tcm}
\caption{Spectra of transverse coherent modes.}
\label{fig:tcm}
\end{figure}
A system for bunch-by-bunch measurements of transverse coherent
beam-beam oscillations was developed~\cite{Stancari:BIW:2010,
Stancari:PRSTAB:2012}. It was based on the signal from a single
beam-position monitor in a region of the ring with high amplitude
functions. Because of its high frequency resolution and its
single-bunch capability, this system complemented the Schottky
detectors and direct-diode-detection base-band tune monitor. It was
conceived as a possible tool to monitor beam-beam compensation
effects.
Figure~\ref{fig:tcm} shows the signal from a single antiproton bunch
towards the end of a regular collider store (Store~7719). The top plot
shows the spectrum of coherent modes under nominal conditions. The
linear beam-beam parameter per interaction point was 0.0050 for
antiprotons and 0.0023 for protons. The middle plot corresponds to the
electron lens acting on the bunch, with $\xi_e = -0.006$. For
comparison, the bottom plot shows the effect of lowering the vertical
antiproton tune by~0.0022. In the middle plot, one can see a downward
shift of the first eigenmode and a suppression of the second. This
suppression could be caused in part by the antiproton tune moving away
from the proton tune. A considerable change in the width of the first
coherent mode was also observed, but relating the reduced width of the
coherent mode to a narrower tune distribution (as one would expect if
there was beam-beam compensation) requires further investigation and
numerical simulations.
\subsection{Tune Scans with Dedicated Head-on-only Stores}
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Fig_scan}
\caption{Measured decay rates of the 3 antiproton bunches during a
diagonal tune scan in a special 3-on-3 collider store.}
\label{fig:3x3meas}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Fig_scansim}
\caption{Numerical simulation of a diagonal tune scan.}
\label{fig:3x3sim}
\end{figure}
To enhance head-on effects and to suppress long-range forces in the
Tevatron, two special 3-on-3 collider stores were attempted. In these stores,
3~proton bunches collided with 3~antiproton bunches. The bunches were
equally spaced around the machine. Antiprotons were intentionally
heated to increase their emittance and approach the size of proton
bunches. Unfortunately, during the first experiment, the emittances
of two proton bunches increased dramatically between the beta squeeze
and collisions, before the beginning of the study. Hence, the store
could not be used for our purposes.
A smaller blow up of proton emittances occurred before the second
study as well, making conditions far from ideal: the antiproton
beam-beam parameter was less than 0.015, electron sizes could not be
matched to proton sizes, and the attempt to increase the size of the
electron beam resulted in a reduced compensation strength ($\xi_e =
-0.002$). Nevertheless, several tune scans were performed, both
vertically and diagonally in the tune diagram.
Figure~\ref{fig:3x3meas} shows the measured decay rates for the 3
antiproton bunches as a function of the average tune (from the 1.7-GHz
Schottky detector) during a diagonal scan: the bunch affected by the
electron lens (A25, magenta), the control bunch (A13, dark blue), and
the bunch colliding with the two least dense proton bunches (A1,
green). Lifetimes and tune space were obviously better for~A1. The
tune shift of the affected bunch with respect to the control bunch is
compatible with the expected amount (0.002), but it is too small to be
clearly observed. Some resonances (4/7 and 7/12, for instance) appear
stronger with the lens on, whereas the 3/5 is weaker (or shifted). One
may observe that, as expected, beam-beam forces appear to drive the
even resonance 7/12 (large difference between the green and the blue
points), but not the odd resonance 4/7 (control bunch and
low-beam-beam bunch have similar lifetimes). There are regions of the
working point where the bunch affected by the electron lens had better
lifetime (0.560--0.568 and 0.592--0.598), but this special 3-on-3
store was not enough to clearly see a reduction in tune spread or an
improvement in the available tune space.
Nevertheless, these measurements provided useful information on the
available tune space for comparisons with simulation
codes. Figure~\ref{fig:3x3sim} shows the antiproton intensity decay
rates and emittance growth rates calculated with Lifetrac as a
function of tune in a diagonal scan. The horizontal scale is the bare
lattice tune plus half of the beam-beam parameter, in order to
simulate the average of the incoherent tune distribution. As the tune
approaches the 7th order resonance (0.571) from above, loss rates
increase dramatically. Increasing the tune towards the 5th order
resonance (0.6) causes emittance growth. According to this
calculation, with the nonideal experimental conditions described
above, the electron lens does not cause harm in the stable region, but
it can make things worse outside. The region of available tune space
is well reproduced by the simulations.
\section{CONCLUSIONS}
The first studies of beam-beam compensation with Gaussian electron
lenses were carried out at the Tevatron.
We found that, in spite of the very different time structure of the
antiproton bunch and of the electron pulse, alignment of the electron
beam with the circulating beam using a common beam position monitor
was accurate to within 0.1~mm and reproducible from store to store.
We observed the effects of the electron lens on beam lifetimes and
tunes. At the nominal working point in tune space, the electron lens
did not have any adverse effects on the circulating beam, even when
intentionally misaligned. With only antiprotons in the machine, the
tune shift and tune spread caused by the electron lens were clearly
seen.
Dedicated collider stores with only 3 bunches per species (no
long-range interactions) were attempted, but the experimental
conditions were not ideal. The data was used for code
benchmarking. Moreover, tune scans conducted during these special
stores provided a direct comparison between the lifetimes of a control
antiproton bunch, a bunch affected by the electron lens, and a bunch
experiencing reduced beam-beam forces.
The machine was not ideal for a direct demonstration of the beam-beam
compensation concept for two main reasons: head-on nonlinearities for
cooled antiprotons were weak during normal operations; and the lattice
requirements (zero dispersion, phase advance close to an integer
multiple of~$\pi$) were not exactly met at the electron
lens. Nevertheless, several key experimental observations were made.
\section{ACKNOWLEDGMENTS}
The authors would like to thank W.~Fischer and C.~Montag (BNL) for
their suggestions on experiment design and for participating in part
of the studies, and V.~Shiltsev (Fermilab) for discussions and
insights. We are grateful to the Operations Department in Fermilab's
Accelerator Division for making these experiments possible.
Fermi Research Alliance, LLC operates Fermilab under Contract
No.~DE-AC02-07CH11359 with the United States Department of
Energy. This work was partially supported by the US LHC Accelerator
Research Program (LARP).
Generalizability is one of the most important problems in research on artificial general intelligence. Ideally, we want our algorithms to be able to generalize to unseen circumstances. In the context of reinforcement learning, we hope that our agents can master games in such a way that, when the objects are spawned in a different configuration, they can still play the game. An even more challenging direction is task transfer, where the agent can adapt to new games with similar rules with little or no training.
In this work, we propose a novel task framework in which a variety of different tasks can be constructed under the same set of simple rules. This is a first step towards ``generalizable reinforcement learning''. The problem can also be rephrased as one of transfer learning: how do trained models behave when planning for unseen circumstances? What causes them to generalize, or fail to generalize? In the context of physical problem solving, generalization clearly requires a certain level of scene understanding and an intuitive understanding of physics: it has been argued that data-driven approaches that perform pattern recognition may fail to generalize beyond the training data and are therefore different from human learning \cite{tenenbaum2011grow}. The ultimate goal of artificial intelligence is therefore a learning framework under which agents in simulated environments can generalize well to unseen scenarios with no or minimal additional training.
Our main contribution is threefold:
\begin{itemize}
\item A principle for testing the generalizability of artificial agents outside the environment they are trained in
\item A series of environments of increasing difficulty that test the agent's capability to generalize
\item A collection of baselines on these environments that either succeed or fail to learn the task
\end{itemize}
\section{Related Works}
\subsection{OpenAI Robotics Environments}
Learning control schemes is an important and practical topic in reinforcement learning. However, most reinforcement learning algorithms are extremely sample inefficient. If trained in the real world with these methods, a robot would have to fail millions of times to learn what the right thing to do is. It is therefore greatly beneficial to train the model in a simulator and then transfer the policy to the real world. For this purpose, OpenAI released a series of robotics control environments \cite{gym} based on the physics simulation engine MuJoCo \cite{mujoco}.
The environments involve two types of robots: \textit{Fetch}, a robotic arm with 7 degrees of freedom, and \textit{ShadowHand}, a robotic hand with 20 degrees of freedom. The Fetch environments consist of tasks such as reaching a certain position in space, pushing a block to a certain position on the table, and sliding a puck to a position the robot arm cannot reach. The ShadowHand environments involve orienting objects of various shapes to a desired orientation in the hand.
All of these environments provided by OpenAI are goal-oriented, which is to say that there is a fixed goal on which the agents' success rate is evaluated. In particular, these goals can be expressed as a simple vector, which can be compared with the vector corresponding to the current state to determine whether the goal is reached. For example, for the \textit{FetchReach-v0} environment, the goal is the target position in space that the robot is trying to reach, and the currently achieved goal is the position of the robot's grippers. In the sparse reward setting of the environments, a reward is given only when the goal is reached; in the dense reward setting, rewards are given according to the distance between the desired goal and the currently achieved goal. This formulation of goals makes possible the Hindsight Experience Replay (HER) method \cite{andrychowicz2017hindsight}, which treats failed episodes as potentially successful episodes for a different goal. This greatly benefits training, since it substantially increases the reward signal in these environments.
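As a concrete illustration, the goal-relabeling idea behind HER can be sketched in a few lines. This is an illustrative sketch of the ``future'' relabeling strategy, not the reference implementation; the episode format and \texttt{reward\_fn} are our own assumptions:

```python
import numpy as np

def her_relabel(episode, reward_fn, k=4, rng=np.random):
    """'Future' relabeling: for each transition, add k copies whose goal is
    a goal actually achieved later in the same episode."""
    relabeled = []
    T = len(episode)
    for t, (obs, action, achieved, desired) in enumerate(episode):
        # the original transition, with its original (possibly failed) goal
        relabeled.append((obs, action, desired, reward_fn(achieved, desired)))
        for _ in range(k):
            future = rng.randint(t, T)           # a later time step
            new_goal = episode[future][2]        # goal achieved at that step
            relabeled.append((obs, action, new_goal,
                              reward_fn(achieved, new_goal)))
    return relabeled

# sparse reward as in the environments: 0 on success, -1 otherwise
reward_fn = lambda ag, g: 0.0 if np.allclose(ag, g, atol=0.05) else -1.0
```

A failed episode thus still yields transitions with informative (zero rather than uniformly negative) rewards for the goals it happened to achieve.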
\subsection{Curriculum Learning}
Of the many methods to guide the agent into the desired behavior, curriculum learning is one of the most general and successful frameworks \cite{Curriculum}. It is also the most intuitive since it is how humans learn most subjects in school. For our environments, we choose the curriculum learning framework where the task is fixed and the distribution of the starting state varies \cite{reverse_curriculum}. Let $\rho_0:\mathbb{S}\rightarrow\mathbb{R}_+$ be the distribution of the start state that we evaluate the agent on. In training, we use different distributions $\rho_i$ such that it is easier for the agent to get reward signals to learn useful information. Once the agent reaches a certain level of performance on distribution $\rho_i$, we switch to the next distribution $\rho_{i+1}$. In the case where the hardness of $\rho_i$ increases smoothly and converges to $\rho_0$, we expect the agent to be able to learn to perform the task well on the test distribution $\rho_0$.
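The threshold-based switching between the distributions $\rho_i$ described above can be sketched as follows (an illustrative sketch; the class and method names are our own, and the default $h=0.7$ is the value used in our experiments):

```python
class Curriculum:
    """Switch the start-state distribution from rho_i to rho_{i+1} once
    the measured success rate passes the threshold h."""
    def __init__(self, distributions, h=0.7):
        self.distributions = distributions   # [rho_1, ..., rho_0], easy to hard
        self.level = 0
        self.h = h

    def sample_start_state(self, rng):
        return self.distributions[self.level](rng)

    def report(self, success_rate):
        """Call once per evaluation round with the current success rate."""
        if success_rate >= self.h and self.level < len(self.distributions) - 1:
            self.level += 1
```

The last entry of the list is the test distribution $\rho_0$, so the agent ends up being trained on the distribution it is evaluated on.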
\subsection{Imitation Learning: AggreVaTeD}
In some sufficiently difficult tasks, we perform imitation learning. However, in some cases we do not have a near-optimal teacher policy that our network can learn entirely from demonstration; a mix of imitation learning and reinforcement learning is then needed \cite{aggravated}. In the best case, the AggreVaTeD algorithm can provide up to exponentially lower sample complexity than pure reinforcement learning. While the theory is involved, the algorithm is simple to state. We first define a teacher-forcing ratio $a$, according to which our agent is either trained by supervised learning on the output of the teacher, or by the reward signal from reinforcement learning. The teacher offers the most help early in training and less later on, so the algorithm anneals the teacher-forcing ratio from $a_{max}$ to $a_{low}$ through a predefined scheduling function. In short, an unbiased, variance-reduced estimate of the loss gradient is:
\begin{equation}
\hat{\nabla_{\theta_n}} = \frac{1}{HK} \sum^K_{i=1}\sum^H_{t=1}\frac{\nabla_{\theta_n} \pi_{\theta_n} (a_t^{i,n}|s_t^{i,n})}{\pi_{\theta_n}(a_t^{i,n}|s_t^{i,n})} A^*_t(a_t^{i,n}|s_t^{i,n})
\end{equation}
where we have used importance sampling because the action space is continuous. For more detail, see \cite{aggravated}. The algorithm is very simple:
\begin{algorithm}
\SetKwInOut{Input}{Input}
\Input{The given MDP and expert $\pi^*$, learning rate $\eta_n$, schedule rate $a_i$, where $a_n \to 0$ as $n \to \infty$}
Initialize policy $\pi_{\theta_i}$\;
\For{$n=1$ to N}
{
Mixing policies: $\hat{\pi}_n = a_n\pi^* + (1-a_n)\pi_{\theta_n}$\;
Starting from $\rho_0$, roll in by executing $\hat{\pi}_n$ on the given MDP to generate $K$ trajectories $\{\tau_i^n\}$ \;
Using $Q^*$ and $\{\tau_i^n\}_i$, compute the descent direction $\delta_{\theta_n}$\;
$\theta_{n+1}=\theta_n - \eta_n\delta_{\theta_n}$ \;
}
\Return the best hypothesis $\{\hat{\pi}_n\}$ on validation\;
\caption{Differentiable AggreVaTe}
\end{algorithm}
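The roll-in with the mixed policy $\hat{\pi}_n$ amounts to letting the expert act with probability $a_n$ at each step. A minimal sketch, assuming simple callable policies and a gym-like environment interface (all names illustrative):

```python
import numpy as np

def mixed_rollout(env, expert, learner, a_n, horizon, rng):
    """Roll in with the mixture pi_hat_n = a_n * expert + (1 - a_n) * learner:
    at every step the expert acts with probability a_n."""
    s = env.reset()
    traj = []
    for _ in range(horizon):
        actor = expert if rng.random() < a_n else learner
        a = actor(s)
        s_next, r, done = env.step(a)
        traj.append((s, a, r))
        s = s_next
        if done:
            break
    return traj
```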
\subsection{Deep Deterministic Policy Gradients}
Deep Deterministic Policy Gradients (DDPG) is a policy gradient algorithm that uses a stochastic behavior policy for good exploration but estimates a deterministic target policy, which is much easier to learn \cite{lillicrap2015continuous}. DDPG is an actor-critic algorithm: it uses two neural networks, one for the actor and one for the critic. These networks compute action predictions for the current state and generate a temporal-difference error signal at each time step. The input of the actor network is the current state, and the output is a real-valued action chosen from a continuous action space. The critic is trained with the loss function:
\begin{equation}
L = \frac{1}{N}\sum_i (y_i - Q(s_i, a_i| \theta^Q))^2
\end{equation}
Minimizing this loss gives the update rule for the critic network, while the actor is updated with the deterministic policy gradient \cite{Silver:2014:DPG:3044805.3044850}:
\begin{equation}\label{ddpg}
\nabla_{\theta^\mu} \mu \approx \mathrm{E}[\nabla_a Q(s, a | \theta^Q) |_{s=s_t, a=\mu(s_t)} \nabla_{\theta^\mu} \mu(s|\theta^\mu )|_{s=s_t}]
\end{equation}
In fact, this is true as long as the Markov decision process satisfies some appropriate conditions; for more detail see \cite{Silver:2014:DPG:3044805.3044850}. It tells us that the stochastic policy gradient is equivalent to the deterministic policy gradient. The pseudo-code for DDPG is:
\begin{algorithm}
\SetKwInOut{Input}{Input}
Randomly initialize critic network $Q(s,a|\theta^Q)$ and actor $\mu(s|\theta^\mu)$ with weights $\theta^Q$ and $\theta^\mu$ \;
Initialize target network $Q'$ and $\mu'$ with weights $\theta^{Q'} = \theta^Q$, $\theta^{\mu'} = \theta^\mu$ \;
Initialize replay buffer $R$\;
\For{episode = 1 to M}
{
Initialize a random process $\mathcal{N}$ for action exploration\;
Receive initial observation state $s_1$\;
\For{t=1 to T}{
Select action $a_t = \mu(s_t|\theta^\mu) + \mathcal{N}_t$ according to the current policy and exploration noise\;
Execute action $a_t$ and observe reward $r_t$ and observe new state $s_{t+1}$\;
Store transition $(s_t, a_t, r_t, s_{t+1})$ in $R$\;
Sample a random minibatch of $N$ transitions $(s_i, a_i, r_i, s_{i+1})$ from $R$\;
Set $y_i = r_i + \gamma Q'(s_{i+1}, \mu'(s_{i+1}| \theta^{\mu'}) | \theta^{Q'})$\;
Update critic by minimizing the loss: $L=\frac{1}{N}\sum_i(y_i-Q(s_i, a_i|\theta^Q))^2$\;
Update the actor policy using the sampled policy gradient according to eq.~\ref{ddpg}\;
Update the target networks:
\[\theta^{Q'} = \tau \theta^Q + (1 -\tau)\theta^{Q'}\]
\[\theta^{\mu'} = \tau \theta^\mu + (1 -\tau)\theta^{\mu'}\]
}
}
\caption{Deep Deterministic Policy Gradient}
\end{algorithm}
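Two ingredients of the pseudo-code, the TD target and the soft update of the target networks, can be sketched in NumPy (an illustrative sketch with network parameters held as plain arrays; the function names are our own):

```python
import numpy as np

def td_targets(r, s_next, actor_target, critic_target, gamma=0.99):
    """y_i = r_i + gamma * Q'(s_{i+1}, mu'(s_{i+1})), as in the pseudo-code."""
    return r + gamma * critic_target(s_next, actor_target(s_next))

def soft_update(target_params, source_params, tau=0.001):
    """Polyak averaging: theta' <- tau * theta + (1 - tau) * theta', in place."""
    for tp, sp in zip(target_params, source_params):
        tp *= (1.0 - tau)
        tp += tau * sp
```

The slowly moving targets ($\tau \ll 1$) are what keeps the bootstrapped regression of the critic stable.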
\section{Environment}
\begin{figure}
\centering
\includegraphics[width=0.99\textwidth]{fig1.jpeg}
\caption{Illustration of the designed task. In each case the robot aims to put the green and blue object into contact, under the constraint that the red and blue object should not touch.}
\label{fig:env}
\end{figure}
\subsection{Task Framework}
Based on the preceding discussion, we propose a novel framework consisting of simple rules that supports a wide range of robotics tasks. In the framework, objects are colored with four colors: red, blue, green and grey. When a red (constraint) object comes into contact with a blue (manipulation) object, the task is considered failed. When all blue objects have been in contact with a green (goal) object at some point, the task is considered complete. Grey objects are neutral and do not contribute to the success condition when touching other objects. Nothing special happens when a red object contacts a green object. Note that the rules for contacts between differently colored blocks can be made more general; we consider only the simplest set from which interesting tasks can be constructed.
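These success and failure conditions reduce to a simple check over the set of contact pairs observed so far. A minimal sketch, assuming contacts are reported as pairs of object indices (names illustrative):

```python
def evaluate_contacts(contacts, colors):
    """Apply the color rules to the contact pairs (i, j) seen so far:
    any red-blue contact fails the task; the task succeeds once every
    blue object has touched a green one; grey objects are neutral."""
    blues = {i for i, c in colors.items() if c == 'blue'}
    touched_green = set()
    for i, j in contacts:
        pair = {colors[i], colors[j]}
        if pair == {'red', 'blue'}:
            return 'failure'
        if pair == {'blue', 'green'}:
            touched_green.add(i if colors[i] == 'blue' else j)
    return 'success' if blues <= touched_green else 'ongoing'
```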
Under the color scheme in the aforementioned framework, consider the above situations (fig.~\ref{fig:env}). The first situation corresponds to a block contact problem, which, with more constraints specified, can become a problem for block stacking. The second one corresponds to a task of toppling the block tower so that the blue block falls onto the green table. The third situation corresponds to a path planning problem where the robot arm has to reach the green block while avoiding the obstacles. These are just simple examples of what tasks the framework is able to cover, and there can be many more complicated variations and combinations of these tasks. In fact, similar versions of many previously studied robotics tasks such as fetching, pushing and stacking blocks, along with more sophisticated tasks such as path planning with obstacles and toppling a block tower in a certain direction can all be implemented in this framework.
Note that different arrangements of blocks and colors, such as those shown in the figure, are considered different tasks. However, since they obey the same set of rules, it is natural to assume that if the agent learns how these simple rules work, as well as skills for manipulating blocks, from the tasks seen during training, it should be able to generalize to other unseen tasks and solve them as well. Indeed, humans with basic motor skills, scene understanding and physical intuition can solve many of these unseen tasks with ease. We hypothesize that current RL methods still rely on large numbers of training scenarios and will overfit the environments they are trained on; as a consequence, they will fail to generalize to unseen types of tasks under our framework. Generalization or transfer-learning performance across tasks within this framework is therefore very challenging and a good measure of a system's scene understanding and intuitive-physics capabilities. In this paper, we kickstart the process towards fully solving the task framework by applying existing methods to two simple environments within it.
\subsection{Tested Environments}
Here we describe our environments \textit{BlocksTouch-v0} and \textit{BlocksChoose-v0}, on which we performed experiments. The environments are built in the MuJoCo simulator \cite{mujoco} and are based on the robotics tasks in the OpenAI gym environment \cite{gym}. In the OpenAI Fetch robotics environments, the agent is a robotic arm with 7 degrees of freedom and a clamp to pick up objects. For the purpose of our experiments, several degrees of freedom, including control of the gripper, are locked, and the agent only has to output a 4-dimensional action. The actions $\mathcal{A}\in \mathbb{R}^4$ are real-valued torques applied to the joints of the robot, each normalized to $[-1, 1]$. The observations contain the position, velocity and gripper state of the robot, as well as the position, orientation, velocities and color of each block, concatenated sequentially.
The \textit{BlocksTouch-v0} environment has two blocks, a green one and a blue one, which need to come into contact, while the \textit{BlocksChoose-v0} environment has an extra block to interfere with the agent. To lower the hardness of the task, we always put the grey block in the last few dimensions of the observation. Screenshots taken from runs of our environment are given in fig.~\ref{fig:video}.
\subsection{Curriculum Settings}
As an example, the 2-block task is implemented in the following way. We first fix the arm to start at a default position for every episode. Now define a maximum radius $R$, such that the first block appears uniformly at random within radius $R$ of the arm; we then sample the second block uniformly at random within radius $R$ of the first. This completes the setup, and the episode starts from here. We start from a very small $R$ and gradually increase it to include the whole table at the highest difficulty (while making sure that the blocks are not sampled outside the table). For the 3-block case, we also include a minimum radius $R_{min}$: the grey block does not spawn within distance $R_{min}$ of the center point between the colored blocks. This value starts high and is gradually decreased to 0. We also define a level threshold $h$: for every epoch, if the training success rate passes the threshold $h$, we increase the difficulty to the next level. In practice, we found $h=0.7$ to work well. We noticed that curriculum training greatly increased the training speed and convergence rate of the baseline agents; a 3-layer baseline model takes fewer than 50 epochs to converge in this training regime.
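The 2-block sampling procedure can be sketched as rejection sampling within a disk of radius $R$ (an illustrative sketch; the table bounds are placeholder values, not the ones used in the environment):

```python
import numpy as np

def sample_blocks(arm_xy, R, rng, table=((0.0, 1.0), (0.0, 1.0))):
    """Sample the first block uniformly within radius R of the arm and the
    second within radius R of the first, rejecting positions off the table."""
    def uniform_disk(center):
        while True:
            p = center + rng.uniform(-R, R, size=2)
            on_table = all(lo <= v <= hi for v, (lo, hi) in zip(p, table))
            if on_table and np.linalg.norm(p - center) <= R:
                return p
    b1 = uniform_disk(np.asarray(arm_xy))
    b2 = uniform_disk(b1)
    return b1, b2
```

Raising $R$ over the curriculum widens both disks until they cover the whole table.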
\section{Methods}
\subsection{Policy Gradient Methods}
In the experiments we tried two variants of policy gradient: Deep Deterministic Policy Gradient (DDPG) as provided by OpenAI \cite{gym}, and our own variant that produces a normal distribution over the space of continuous actions, which we call Policy Gradient with Gaussian Distribution (PGGD). The OpenAI implementation of DDPG uses the actor-critic scheme. The actor's objective is simply the negative value function $-Q(s,a)$ predicted by the critic, while the critic's objective is the commonly used TD-learning objective. The inclusion of the critic reduces the variance and makes training more stable, but it also complicates the implementation of imitation learning, since both the actor and the critic need to learn from the expert. By contrast, PGGD has no critic model, and the actor produces a Gaussian distribution over the action space, from which one action is sampled and executed. This can be represented as $a(s, \theta)\sim\mathcal{N}(\mu(s,\theta), \sigma^2(s, \theta))$. The update rule for PGGD is the same as in standard policy gradient.
Note that during training, the variance of the distribution produced by PGGD never decreases to 0, and while the stochasticity is good for exploration, it is not good for evaluating the performance of the algorithm. We therefore divide performance measures into three categories: training, testing and finals. In testing and finals, we use the mean of the Gaussian for a more stable evaluation of the performance. In training and testing, the evaluations are performed on the current level of the curriculum, and the success rate in testing determines whether the agent is ready for the next level of difficulty. The finals are evaluated at maximum difficulty and reflect the true training progress.
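The PGGD action head, stochastic in training and deterministic in testing/finals, can be sketched as follows (illustrative; \texttt{policy} is assumed to return the mean and the softplus standard deviation):

```python
import numpy as np

def pggd_action(state, policy, rng, deterministic=False):
    """PGGD head: `policy` returns (mu, sigma); sample a ~ N(mu, sigma^2)
    during training, and return the mean in testing/finals."""
    mu, sigma = policy(state)
    if deterministic:
        return mu
    return mu + sigma * rng.standard_normal(mu.shape)

def score_mu(a, mu, sigma):
    """d/d(mu) of log N(a; mu, sigma^2), the score used in the PG update."""
    return (a - mu) / sigma**2
```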
\subsection{Imitation Learning}
In our experiments we found that the 3-block environment is significantly harder than the 2-block environment, even with curriculum learning. Since these tasks are similar in structure, and the third block mainly serves as a distraction and hindrance in completing the task, we wish to transfer useful knowledge of the learned agent from the 2-block case to the 3-block environment.
We used the AggreVaTeD framework to transfer knowledge between the expert and the agent being trained. The expert used in our experiments is a DDPG policy trained on the 2-block environment, and the learner policy is a PGGD policy in the 3-block environment. During the training of the learner policy, the expert takes control of the robot with a probability $\beta$ which is annealed exponentially:
$$\beta=\beta_0+(1-\beta_0)e^{-\frac{t}{t_0}}$$
where $t_0$ controls the rate of annealing and $\beta_0$ controls the independence of the learner policy. Whenever the expert is in control, the state from the environment is processed so that the grey block is removed from the observation; the expert thus performs actions without seeing the grey block. This scheme produces experiences with good reward signals, since early in the curriculum the grey block is far away from the colored ones and does not interfere with the task. However, as the curriculum gets harder, there is an increasing chance that the grey block is spawned between the colored blocks, making the expert fail to complete the task.
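The schedule is a direct transcription of the formula above; with the values used in our experiments ($\beta_0=0$, $t_0=50$) the expert's share of control decays from 1 towards 0:

```python
import numpy as np

def beta(t, beta0=0.0, t0=50.0):
    """Probability that the expert is in control at epoch t:
    beta = beta0 + (1 - beta0) * exp(-t / t0)."""
    return beta0 + (1.0 - beta0) * np.exp(-t / t0)
```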
\section{Experiments}
The hyperparameters we used for DDPG are mostly identical to those reported in \cite{plappert2018multi}. The actor and critic networks are both MLPs with 3 layers and 256 hidden units each, with ReLU activation \cite{Nair_relu}. The input is the normalized state observation. Tanh activation of the output is used for the actor to produce a valid action. We tried variants in network depth and learning rate, and found the original choices to be optimal. For PGGD we used linear activation for the mean and softplus for the standard deviation, with a learning rate of $0.0001$. Both policies are trained off-policy with experience replay \cite{Lin1992}, with a batch size of $256$. For imitation learning we used $\beta_0=0$ and $t_0=50$, which we empirically found to yield good results.
Each epoch of training consists of 50 cycles of training, in which every MPI worker records rollouts in the replay memory and trains on 40 batches sampled from the memory. All experiments are trained to a maximum of 200 epochs.
We originally performed our experiments on an AWS machine with 2 cores, using 2 rollouts per MPI worker. We discovered that the number of workers does not impact the learning of PGGD significantly. However, when we ran experiments on another machine with 20 cores, we discovered a significant performance boost for DDPG, yielding more competitive results when learning from scratch than more sophisticated methods like PGGD+AggreVaTeD on the 3-block environment. Despite this, PGGD+AggreVaTeD still outperforms vanilla DDPG under a low sample budget, and can therefore be more sample efficient in those circumstances.
\subsection{Baselines}
We found the 2-block environment fairly easy for DDPG to learn. The policy reached 70\% accuracy after 75 epochs when trained on 2 cores, and 95\% after 20 epochs when trained on 20 cores (fig.~\ref{fig:plots}, top left). Watching the behaviour of the agents, we found that the agent learned to always push one of the two blocks towards the other, instead of reaching for the closest one. This sometimes leads to failure when one of the blocks is a little too far away for the robot to retrieve. This suggests that although a high level of performance is reached, the agent still has no concept that both blocks can be moved to achieve the same goal.
The 3-block baseline is much harder to train on 2 cores with DDPG, as the algorithm cannot even get past the first difficulty level, where the grey block is positioned far away from the colored blocks so that it does not interfere at all. Tweaking the spawning position of the third block, we discovered that even the tiniest variation of its spawning position produces great fluctuations in training. This is likely because, without enough experience as evidence, the network does not know that the grey block is irrelevant and should be ignored at this point. We tried changing our curriculum so that the algorithm could eventually learn this, but it turned out to be too slow to be meaningful.
\subsection{Imitation Learning}
Here we cover the results obtained with PGGD+AggreVaTeD, with the agent trained in the 2-block environment in the previous section as the expert. We notice that with the expert policy as guidance, the learner quickly picks up on what the right thing to do is, passing the lower difficulty levels with ease. Eventually the training reaches a 67\% success rate at around 250 epochs (fig.~\ref{fig:plots}, top right). Note that this is much better than training PGGD alone with the same hyperparameters, where the agent barely passes the first difficulty level after 200 epochs. The performance is also better than that of the expert, which succeeds merely 44\% of the time because the colored blocks are further apart and the grey block gets in the way from time to time. Also note that although the data was produced with 20 cores, a similar level of performance can be reached with only 2 cores. These facts show that PGGD+AggreVaTeD is very effective at kickstarting training and obtaining higher performance by learning from an imperfect expert.
\subsection{DDPG with 20 Cores}
During our final run of the experiment, we discovered the surprising fact that DDPG with 20 cores actually outperforms PGGD+AggreVaTeD by a large margin, with a success rate of 90\% at 200 epochs, reaching a peak of 97\% at 500 epochs (fig.~\ref{fig:plots}, bottom left/right). We suspect that this is the result of having a critic that can learn from a more independently distributed set of data to guide the actor. In particular, we hypothesized that DDPG was able to learn to avoid the third block by moving other blocks around it. To test this hypothesis, we constructed a final challenge level for the agents, where the colored blocks are at least 0.15 apart and the grey block spawns at the center point between the two colored blocks. We observed that PGGD+AggreVaTeD does not know to avoid the grey block and can end up pushing all three blocks off the table, while DDPG learned to maneuver around the grey block almost every single time (fig.~\ref{fig:video}). Note that this configuration of blocks is rare even at the highest difficulty level of the training scenarios. This indicates that the agent is already capable of generalizing to a different distribution of test cases, suggesting a minimal understanding of the physics of this block puzzle. The final results are summarized in Table~\ref{table:results}.
\begin{figure}[H]
\centering
\includegraphics[width=175pt]{2Blocks-DDPG.png}
\includegraphics[width=180pt]{3Blocks-PGGD.png}
\includegraphics[width=175pt]{3Blocks-all.png}
\includegraphics[width=180pt]{3Blocks-DDPG.png}
\caption{Training plots of our experiments on the two environments.}
\label{fig:plots}
\end{figure}
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=350pt]{good.jpeg}
\includegraphics[width=350pt]{bad.jpeg}
\caption{Examples of the agent dealing with the grey block.
Top: The agent learns to push the blue block around the grey block; Bottom: The agent does not avoid the grey block, leading to a failure.}
\label{fig:video}
\end{figure}
\begin{table}[H]
\begin{center}
\begin{tabular}{l|ll}
\multicolumn{1}{c}{\bf Method} &\multicolumn{1}{c}{\bf Normal} &\multicolumn{1}{c}{\bf Challenge}
\\ \hline
DDPG(2 blocks) &0.44 &0.24\\
\textbf{DDPG(3 blocks)} &\textbf{0.97} &\textbf{0.90}\\
PGGD+AggreVaTeD &0.64 &0.40\\
\end{tabular}
\end{center}
\caption{Final evaluation of different agents on the 3 blocks environment.}
\label{table:results}
\end{table}
\section{Discussion}
In this paper, we propose a novel task framework under which a variety of tasks can be formalized. We constructed two simple environments in MuJoCo and successfully solved them with the help of curriculum learning and imitation learning. However, much remains to be done. The two environments are simplified for the purpose of the experiment, and the task would be much harder if, in the 3-block environment, the colors were shuffled instead of fixed. A possible next step is building a more sophisticated network to handle any number of blocks.
We believe that the proposed environment is a novel framework for assessing whether a learning agent has an understanding of physical reasoning, as opposed to mere pattern matching. In the future, we wish to develop algorithms that can not only solve one of the tasks with sparse rewards, but also use that knowledge and understanding of the task structure to transfer to other tasks in the framework.
\clearpage
\bibliographystyle{alpha}
\section{Introduction}
\subsection{Our study}
Let us consider $p$ {\em diagonalizable} matrices $M_1, \cdots, M_p$ in
$\mathbb{C}^{n \times n}$ which pairwise commute. A classical result states
that these matrices are simultaneously diagonalizable, i.e., there exists
an invertible matrix $E$ and diagonal matrices $\Sigma_i$, $1 \leqslant
i \leqslant p$, such that $EM_i E^{- 1} = \Sigma_i$, $1 \leqslant
i \leqslant p$, see e.g. \cite{HJ12}. The aim of this paper is to numerically compute a solution $(E,F,\Sigma)$ of the system of equations
\begin{eqnarray}
f(E,F,\Sigma):=\left(\begin{array}{c}
FE - I_n\\
FME - \Sigma
\end{array}\right) & = & 0 \label{eq1}
\end{eqnarray}
where $\Sigma=(\Sigma_1,\ldots,\Sigma_p)$ and $FME-\Sigma:=(FM_1E -
\Sigma_1,\ldots,FM_pE - \Sigma_p)$. Notice that this system is
multi-linear in the unknowns $E,F, \Sigma$. We verify that when $p=1$
and $M_{1}$ is a generic matrix, this system has a solution set of dimension
$2\,n^{2} - n^{2} -(n^{2}-n)= n$. However, for $p>1$ and generic
matrices $M_{i}$, there is no solution. To have a solution, the pencil
$M$ must be on the manifold $\mathcal{D}_{p}$ of $p$-tuples of simultaneously
diagonalizable matrices.
The system \eqref{eq1} can be generalized to the following system:
\begin{eqnarray}
f'(E,F,\Sigma'):=\left(\begin{array}{c}
FM_{0}E - \Sigma_{0}\\
FME - \Sigma
\end{array}\right) & = & 0 \label{eqh2}
\end{eqnarray}
where $\Sigma'=(\Sigma_{0},\Sigma_1,\ldots,\Sigma_p)$, $M_{0}\in \mathbb{C}^{n\times n}$ is replacing $I_{n}$ and $\Sigma_{0}$ is
a diagonal matrix replacing $I_{n}$ in the first equation.
When the pencil $M'=(M_{0}, M_{1}, \ldots, M_{p})$ contains an invertible
matrix, the solutions of the two systems are closely related. If
$M_{0}$ is invertible, a
solution $(E,F,\Sigma')$ of \eqref{eqh2} for $M'=(M_{0}, M_{1}, \ldots,
M_{p})$ gives the solution $(E \Sigma_{0}^{-1}, F M_{0}, \Sigma
\Sigma_{0}^{-1})$ of \eqref{eq1} for $M = (M_{0}^{-1} M_{1}, \ldots,
M_{0}^{-1} M_{p})$.
A similar correspondence between the solution sets can be obtained if
a linear combination $M_{0}'= \sum_{i=1}^{p} \lambda_{i} M_{i}$ is
invertible.
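This correspondence is easy to verify numerically. The sketch below builds a commuting pencil from a common eigenbasis, reads off a trivial solution of the homogenized system, and checks that $F M_{0}$ (as the new left factor) and $E \Sigma_{0}^{-1}$ (as the new right factor) solve the first system for $M = M_{0}^{-1} M_{1}$ (an illustrative NumPy check with randomly generated matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
V = rng.standard_normal((n, n)) + 2.0 * np.eye(n)   # generic invertible basis
Vi = np.linalg.inv(V)
S0 = np.diag([1.0, 2.0, 3.0, 4.0])                  # Sigma_0 (invertible)
S1 = np.diag([5.0, -1.0, 0.5, 2.0])                 # Sigma_1
M0, M1 = V @ S0 @ Vi, V @ S1 @ Vi                   # commuting pencil (M0, M1)

E, F = V, Vi          # then F M0 E = S0 and F M1 E = S1: homogenized system solved

# mapped factors for the first system, with M = M0^{-1} M1
En, Fn, Sn = E @ np.linalg.inv(S0), F @ M0, S1 @ np.linalg.inv(S0)
M = np.linalg.inv(M0) @ M1
res1 = np.linalg.norm(Fn @ En - np.eye(n))          # should vanish: Fn En = I
res2 = np.linalg.norm(Fn @ M @ En - Sn)             # should vanish: Fn M En = Sn
```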
As \eqref{eqh2} can be seen as a homogenization
of \eqref{eq1} and appears in several contexts and applications, we
will also study Newton-type methods for this homogenized system.
To solve the system of equations \eqref{eq1}, we propose to apply a
Newton-like method and to analyse the Newton map associated to an iteration. These ideas
have also been developed in a technical report on the fast computation of the
singular value decomposition \cite{JVDHYak}.
The classical Newton map defines $(E+X,F+Y,\Sigma+S)$ from $(E,F,\Sigma)$
in order to cancel the linear part in the Taylor expansion of
$f(E+X,F+Y,\Sigma+S)$. An easy computation shows that the
perturbations $X$, $Y$ and $S$ are solutions of the following Sylvester-type linear system
\begin{eqnarray}
\left(\begin{array}{c}
FE - I_n+FX+YE\\
F M E - \Sigma-S+FMX+YME
\end{array}\right) & = & 0. \label{eq1_bis}
\end{eqnarray}
The technical background for solving this linear system is the Kronecker
product, see \cite{HJ91}. In this approach the size of the linear system
that one needs to invert is $n^2$.
On the other hand, if we consider a Newton map defined by
$(E (I_{n}+X), (I_{n}+Y)F,\Sigma+S)$ from $(E,F,\Sigma)$ such that $X$, $Y$ and $S$
cancel the linear part of the Taylor expansion of
$f(E(I_n+X),(I_n+Y)F,\Sigma+S)$, we can produce explicit solutions
for the linear system in $X$, $Y$ and $S$ given by:
\begin{eqnarray}
\left(\begin{array}{c}
Z+X+Y\\
\Delta-S+\Sigma X+Y\Sigma
\end{array}\right) & = & 0. \label{eq1_ter}
\end{eqnarray}
where $Z = F E - I_{n}$ and $\Delta = F M E - \Sigma$.
We will see that the linear system \eqref{eq1_ter} admits an explicit solution
$(X,Y,S)$ with respect to $Z$ and $\Delta$ for $p=1,2$. This is due to the fact that $\Sigma$ is a diagonal matrix. From these considerations we define and analyse a sequence which converges quadratically
towards a solution of the system \eqref{eq1} without inverting a linear
system at each step of this Newton-like method.
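For $p=1$, one inversion-free iteration can be sketched as follows. Solving \eqref{eq1_ter} entrywise (with $(\Sigma X)_{ij} = \sigma_i X_{ij}$ and $(Y\Sigma)_{ij} = Y_{ij}\sigma_j$) gives $X_{ij} = (\sigma_j Z_{ij} - \Delta_{ij})/(\sigma_i - \sigma_j)$ off the diagonal, $Y = -Z - X$, and $S_{ii} = \Delta_{ii} - \sigma_i Z_{ii}$. The NumPy sketch below is our own transcription of this computation, assuming the diagonal entries of $\Sigma$ are pairwise distinct:

```python
import numpy as np

def newton_like_step(M, E, F, Sigma):
    """One inversion-free Newton-like step for p = 1 (sketch).

    Solves Z + X + Y = 0 and Delta - S + Sigma X + Y Sigma = 0 explicitly,
    assuming the diagonal entries of Sigma are pairwise distinct."""
    n = M.shape[0]
    sig = np.diag(Sigma)
    Z = F @ E - np.eye(n)
    Delta = F @ M @ E - Sigma
    denom = sig[:, None] - sig[None, :]        # sigma_i - sigma_j
    np.fill_diagonal(denom, 1.0)               # dummy value; diagonal fixed below
    X = (Z * sig[None, :] - Delta) / denom     # off-diagonal entries of X
    np.fill_diagonal(X, 0.0)                   # free diagonal: choose X_ii = 0
    Y = -Z - X                                 # first equation: Z + X + Y = 0
    S = np.diag(np.diag(Delta) - sig * np.diag(Z))
    return E @ (np.eye(n) + X), (np.eye(n) + Y) @ F, Sigma + S
```

Starting from a rough diagonalization, a few steps drive the residuals $\|FE-I_n\|$ and $\|FME-\Sigma\|$ to machine precision without solving any linear system.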
We say that we have a quadratic sequence associated to a system of
equations if the sequence converges quadratically towards a solution.
\subsection{Related works} Simultaneous matrix diagonalization is
required by many algorithms, as pointed out in \cite{BBM92}. A
numerical analysis for two normal commuting matrices is proposed in
\cite{BBM93} using Jacobi-like methods. Their method adapts the
classical Jacobi method by successively solving $\frac{n(n-1)}{2}$
two-real-variable optimization problems at each sweep of the
algorithm. Their main result states a local quadratic convergence and
can be summarized as follows. Let $\textrm{off}_2 (A,B)^2=\sum_{i\ne j}\abs{A_{i,j}}^2+\abs{B_{i,j}}^2$. Let $\{\alpha_1, \dots, \alpha_n\}$ (resp. $\{\beta_1, \dots, \beta_n\}$) be the set of the eigenvalues of $A$ (resp. $B$). Let $A^k$ and $B^k$ be the matrices obtained at step $k$ of the Jacobi-like method, and let $\rho_k=\textrm{off}_2 (A^k,B^k)$. If
$$\rho_0<\frac{1}{2}\delta:=\frac{1}{4}\min_{i\ne j}{(\abs{\alpha_i-\alpha_j}, \abs{\beta_i-\beta_j})}$$
then
$$\rho_{k+1}<2n(9n-13)\,\frac{\rho^2_k}{\delta}.$$
We will see in Theorems~\ref{theo2} and~\ref{th-quad-conv} that
the local conditions for quadratic convergence do not depend on
$n$. Many other papers study so-called Jacobi-like methods (see
e.g.~\cite{luc-alb},~\cite{mes-bel} and references therein).
In \cite{Hoeven}, an iteration with a proof of convergence towards a
numerical solution of the system \eqref{eq1} when $p=1$, i.e. for a
single matrix $M_1$ assumed to be diagonalizable, is presented. It
requires matrix inversion. Furthermore, under some extra
assumptions, its quadratic convergence is established.
For a pencil of real {\em symmetric} matrices $C=(C_1, \ldots, C_s)$, several
algorithms based on Riemannian optimization methods (see
\cite{AbsMahSep2008}) have been developed in order to find an {\em
approximate joint diagonalizer} (see
e.g. \cite{bouchard,absil1,alg1,JM}). The idea is to find a local
minimizer $B\in\mathbb{R}^{n\times n}$ of an objective function $f$
which measures the degree of non-diagonality of the pencil $(BC_1B^T,
\ldots, BC_sB^T)$ over a Riemannian manifold (see
\cite{objfunc,bouchard,Afsari} for some examples of objective
functions). This Riemannian manifold is defined according to the
geometric constraints considered on $B$. For instance, the
diagonalizer is supposed to be orthogonal in some of these algorithms
after a pre-whitening step (see
e.g. \cite{blind,blind1,blind3,alg1,alg2,JM,alg3,alg4}). Due to
inaccuracies in the computation of the diagonalizer with orthogonality
constraints (see \cite{inacc}), {\em oblique} constraints, i.e. the requirement that all the rows of
the diagonalizer have unit Euclidean norm, have also been considered in more recent works
(see e.g. \cite{absil1,bouchard}). These algorithms can
be used when the pencil of symmetric matrices is simultaneously
diagonalizable. In this case we aim to find a zero of the objective
function $f$. However, these algorithms have a computational
complexity higher than that of the Newton-type algorithm we propose (see
Proposition \ref{complexity}). For instance, most of them combine line search
\cite[Ch4]{AbsMahSep2008} or trust region \cite[Ch7]{AbsMahSep2008}
methods, and matrix inversions at each iteration (see the exact
Riemannian Newton iteration in \cite{absil1}). Moreover, the points on
the Riemannian manifold are updated using a retraction operator (see
\cite[Ch4]{AbsMahSep2008} or \cite{bouchard} for an example of a
retraction operator on the oblique manifold). In the Newton-type
method described in \Cref{sec-p=1,sec-p=2}, the points are updated
by direct and explicit formulas, which have a lower complexity than the Riemannian optimization based algorithms and are well adapted to computation with high precision.
Simultaneous diagonalisation of matrix pencils appears in many
applications.
In the solution of multivariate polynomial equations by algebraic
methods, the isolated roots of the system are obtained from the
computation of common eigenvectors of commuting operators of
multiplication in the quotient ring and from their eigenvalues
\cite{CoxUsingalgebraicgeometry2005},
\cite{elkadi_introduction_2007}. In the case of simple roots, this
reduces to simultaneous diagonalisation of a matrix pencil.
The approach of approximate joint diagonalization for a pencil of real
{\em symmetric} matrices is used to solve the Blind Source Separation
(BSS) problem, with potential applications in wide domains of
engineering (see e.g. \cite{BSS}).
Simultaneous matrix diagonalization of pencils of general matrices
also appears in the rank (or canonical) decomposition of tensors
\cite{lath06}. Under certain conditions this rank decomposition is
unique \cite{si-bro}. In this case simultaneous matrix diagonalization
allows one to compute this rank decomposition, which plays a crucial role
in numerous applications such as psychometrics \cite{ca-ch}, signal
processing and machine learning \cite{ci-ma}, \cite{si-lath}, sensor
array processing \cite{so-do}, arithmetic complexity \cite{bu13}, wireless communications \cite{sovan}, multidimensional
harmonic retrieval \cite{so-lath17-1}, \cite{so-lath17-2},
chemometrics \cite{bro97}, and principal component
analysis \cite{jolli}.
\subsection{Outline}
Sections \ref{sec-inverse-pb}, \ref{sec-p=1}, \ref{sec-p=2} and \ref{sec-cvg-family} are devoted to giving conditions under which a quadratically convergent sequence numerically approximates a solution of, respectively,
\begin{itemize}
\item $FE- I_n=0$,
\item the system \eqref{eq1} when $p=1$,
\item the system \eqref{eqh2} when $p=1$,
\item the system \eqref{eq1} for any $p$.
\end{itemize}
Moreover, we provide for these cases a certification that the
sequence converges to a nearby solution and a test to detect, from an
initial point, when this convergence is quadratic. In Section~\ref{sec-exp} we present numerical experiments. The final section is devoted to our conclusions and future work.
\subsection{Notation and preliminaries}
Throughout this work, we will use the infinity vector norm and the
corresponding matrix norm. For a given vector $v \in \mathbb{C}^n$ and matrix
$M \in \mathbb{C}^{n \times n}$, they are respectively given by:
\begin{eqnarray*}
\| v \| & = & \max \{ \abs{v_1}, \ldots, \abs{v_n} \}\\
\| M \| & = & \max_{\| v \| = 1} \| M v \| .
\end{eqnarray*}
Explicitly, $\| M \| = \max \{\abs{m_{i,1}}+\ldots+\abs{m_{i,n}}: 1\le i\le n\}.$\\
For a second matrix $N \in \mathbb{C}^{n \times n}$, we have
\begin{eqnarray*}
\| M + N \| & \leqslant & \| M \| + \| N \| ~\text{(sub-additivity)}\\
\| M N \| & \leqslant & \| M \| \| N \| ~\text{(sub-multiplicativity)}.
\end{eqnarray*}
Moreover, for a given matrix $M\in\mathbb{C}^{n\times n}$, we denote by
$$\|M\|_{\mathrm{L}, \mathrm{Tri}}:=\max_{\substack{{1\le i \le n}\\{1\le j\le i-1}}}\abs{m_{i,j}}$$
the max matrix norm of the strictly lower triangular part of $M$, and by
$$\|M\|_{Frob}:=\sqrt{\sum_{i=1}^{n}\sum_{j=1}^{n}{\abs{m_{i,j}}^2}}$$
the Frobenius norm of $M$.
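For readers following along in code, the three norms above can be computed concretely as follows (a small NumPy sketch of ours, not part of the formal development; the helper names are our own):

```python
import numpy as np

def norm_inf(M):
    # Infinity matrix norm: the maximum absolute row sum, max_i sum_j |m_ij|.
    return np.abs(M).sum(axis=1).max()

def norm_low_tri(M):
    # ||M||_{L,Tri}: maximum modulus over the strictly lower triangular part.
    return np.abs(np.tril(M, k=-1)).max()

def norm_frob(M):
    # Frobenius norm: square root of the sum of all squared moduli.
    return np.sqrt((np.abs(M) ** 2).sum())

M = np.array([[1.0, -2.0],
              [3.0,  4.0]])
```

For this $2\times 2$ example, $\|M\| = 7$, $\|M\|_{\mathrm{L},\mathrm{Tri}} = 3$ and $\|M\|_{Frob} = \sqrt{30}$.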
Furthermore, we consider in this paper the regular case of diagonalizable matrices, that is, matrices which are diagonalizable with simple eigenvalues. Thus we will use the following notation:
$$\mathcal{W}_n:=\{M\in\mathbb{C}^{n\times n} \mid M\text{ has pairwise distinct eigenvalues}\}.$$ It is well-known that $\mathcal{W}_n$ is dense in $\mathbb{C}^{n\times n}$.
The Lie group of $n\times n$ invertible matrices, denoted by $GL_n$, is the so-called general linear group \cite{lineargrp}. We denote by $\mathcal{D}_n$ the vector space of diagonal matrices of size $n$, and $\mathcal{D}_n'$ denotes the subset of $\mathcal{D}_n$ consisting of diagonal matrices with $n$ distinct diagonal entries. Let $E, F\in GL_n$ and $\Sigma\in\mathcal{D}_n'$. The tangent space of $GL_n$ at $E$ (resp. $F$) is denoted by $T_EGL_n$ (resp. $T_FGL_n$), and the tangent space of $\mathcal{D}_n'$ at $\Sigma$ is denoted by $T_\Sigma\mathcal{D}_n'$. The perturbations of $E$, $F$ and $\Sigma$ that we consider in this paper are respectively of the following form: $E+\dot{E}$, $F+\dot{F}$ and $\Sigma+\dot{\Sigma}$, where $\dot{E}$ and $\dot{F}$ are respectively in $T_EGL_n$ and $T_FGL_n$ and $\dot{\Sigma}$ is in $T_\Sigma\mathcal{D}_n'$.\\
As $GL_n$ is a Lie group, $\dot{E}$ and $\dot{F}$ can be written as $EX$ and $YF$ with $X, Y$ in the Lie algebra of $GL_n$, which is equal to $\mathbb{C}^{n\times n}$ (since this Lie algebra is $T_{I_n}GL_n$ and $GL_n$ is an open subset of $\mathbb{C}^{n\times n}$).\\
As $\mathcal{D}_n'$ is open in $\mathcal{D}_n$, we have $T_\Sigma\mathcal{D}_n'=\mathcal{D}_n$; hence we write $\dot{\Sigma}=S\in\mathcal{D}_n$.
Finally, the perturbations of $E$, $F$ and $\Sigma$ that we consider are as follows:\\
$E+EX$, $F+YF$ and $\Sigma+S$, where $X$ and $Y$ are in $\mathbb{C}^{n\times n}$ and $S$ is a diagonal matrix in $\mathbb{C}^{n\times n}$.
For a matrix $M\in\mathbb{C}^{n\times n}$, let $\tmop{diag} (M)$ be the diagonal matrix with the same
diagonal as $M$ and let $\tmop{off} (M)$ be the matrix where the diagonal terms
of $M$ are replaced by $0$. We have $M = \tmop{diag} (M) + \tmop{off} (M)$. We
say that $M$ is an off-matrix if $M = \tmop{off} (M)$. In addition, for $(\lambda_1, \dots, \lambda_n)\in\mathbb{C}^n$, $\mathrm{diag}(\lambda_1, \dots, \lambda_n)$ denotes the diagonal matrix in $\mathbb{C}^{n\times n}$ with diagonal entries $\lambda_1, \dots, \lambda_n$.
The superscripts $.^t$, $.^*$ and $.^{-1}$ are used respectively for the transpose, the Hermitian conjugate and the inverse of a matrix.
We state the following lemma which will be used in some of the proofs.
\begin{lemma}
\label{lem-eps-u}Let $\varphi (\varepsilon, u) = \frac{\prod_{j \geqslant 0}
(1 + u \varepsilon^{2^j}) - 1}{\varepsilon u}$. Given $\varepsilon
\leqslant \frac{1}{2}$, $u \leqslant 1$, and $i \geqslant 0$, we have
\begin{eqnarray}
\prod_{j \geqslant 0} (1 + u \varepsilon^{2^{j + i}}) & \leqslant & 1 + 2
u \varepsilon^{2^i} \label{ineq-lem-eps}
\end{eqnarray}
\end{lemma}
\begin{proof}
Modulo taking $\varepsilon^{2^i}$ instead of $\varepsilon$, it suffices to
consider the case when $i = 0$. Now $\varphi (\varepsilon, u)$ is an
increasing function of $\varepsilon$ and $u$, since its power series
expansion in $\varepsilon$ and $u$ admits only positive coefficients.
Consequently, $\varphi (\varepsilon, u) \leqslant \varphi (\frac{1}{2}, 1) = 2$, and therefore
$\prod_{j \geqslant 0} (1 + u \varepsilon^{2^{j}}) - 1 = \varepsilon u \, \varphi (\varepsilon, u) \leqslant 2 u \varepsilon$.
\end{proof}
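The inequality of \Cref{lem-eps-u} is also easy to check numerically: the infinite product can be truncated, because the exponents $2^{j+i}$ grow doubly exponentially and the tail factors round to $1$ in floating point. A small Python sketch of ours, for illustration only:

```python
def truncated_product(eps, u, i, terms=30):
    # Truncation of prod_{j>=0} (1 + u * eps**(2**(j+i))).  For eps <= 1/2
    # the exponents grow doubly exponentially, so the omitted tail is far
    # below machine precision after a handful of factors.
    p = 1.0
    for j in range(terms):
        e = 2.0 ** (j + i)
        if e > 1e4:          # eps**e has long since underflowed to 0
            break
        p *= 1.0 + u * eps ** e
    return p
```

For $\varepsilon = \tfrac12$, $u = 1$, $i = 0$ the product telescopes to $1/(1-\varepsilon) = 2$, which matches the bound $1 + 2u\varepsilon = 2$ of the lemma.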
\section{Newton-type method for the system \texorpdfstring{$FE - I_n = 0$}{FE - In = 0}.}\label{sec-inverse-pb}
Let $f: GL_n\times GL_n \to \mathbb{C}^{n\times n},~(E,F)\mapsto FE-I_n$. We consider the following perturbations $E+EX$, $F+YF$ of respectively $E$ and $F$ where $X,~Y\in\mathbb{C}^{n\times n}$.\\
To define the Newton sequence we have to solve the linear system obtained by canceling the linear part in the Taylor expansion of $f(E+EX, F+YF)$. The same methodology will be adopted in the next sections for the other considered systems. Hereafter, we detail the computation of the Newton sequence associated to the system $FE-I_n=0$. Moreover, a sufficient condition on the initial point for the quadratic convergence of this Newton sequence will be established.\\
Let $Z = FE - I_n$. We observe that
\begin{eqnarray}
f(E+EX, F+YF)&=&(F + YF) (E + EX) - I_n \\
&=& Z + (Z + I_n) X + Y (Z + I_n) + Y (Z + I_n) X.
\label{FYF-EEX}
\end{eqnarray}
We assume here that $Z$ has small norm, i.e. we start from an initial point $(E_0, F_0)$ close to the solution of the system $FE-I_n=0$.\\
Consequently, the linear system of first order terms to solve is
\begin{equation}\label{linear}
Z+X+Y=0.
\end{equation}
Hence $X = Y = -\frac{Z}{2}$ is a solution of \Cref{linear}. Moreover, substituting $-\frac{Z}{2}$ for $X$ and $Y$ in Equation (\ref{FYF-EEX}), we get
\begin{eqnarray}
(F + YF) (E + EX) - I_n & = & Z^2 \left( - \frac{3}{4} I_n + \frac{Z}{4}
\right) . \label{eq2}
\end{eqnarray}
\begin{proposition}
Let $Z_0 = F_0 E_0 - I_n$. Define $X_0 = - \frac{Z_0}{2}$, $E_1 = E_0 (I_n + X_0)$, $F_1 = (I_n + X_0) F_0$ and $Z_1=F_1E_1-I_n$. Assume that $\| Z_0 \| \leqslant 1$. Then
\begin{eqnarray}
\| Z_1 \| & \leqslant & \| Z_0 \|^2 \label{eq3}
\end{eqnarray}
\end{proposition}
\begin{proof}
It follows easily from (\ref{eq2}).
\end{proof}
\begin{theorem}
Let $E_0$ and $F_0$ be two complex square matrices of size $n$. Let $Z_0 = F_0 E_0 - I_n$
and assume that $\varepsilon = \| Z_0 \| < \frac{1}{2}$. The sequences
defined for $i \geqslant 0$ by
\begin{eqnarray*}
Z_i & = & F_i E_i - I_n\\
X_i & = & - \frac{Z_i}{2}\\
E_{i + 1} & = & E_i (I_n + X_i)\\
F_{i + 1} & = & (I_n + X_i) F_i
\end{eqnarray*}
converge quadratically towards a solution of $FE-I_n=0$. Each $E_i$ and each $F_i$ is invertible and, if $E_{\infty}$
and $F_{\infty}$ are respectively the limits of the sequences $(E_i)_{i
\geqslant 0}$ and $(F_i)_{i \geqslant 0}$, we have for $i \geqslant 0$,
\begin{eqnarray*}
\| E_i - E_{\infty} \| & \leqslant & (1 + 2 \varepsilon) 2^{- 2^{i + 1} +
1} \varepsilon \| E_0 \|,\\
\| F_i - F_{\infty} \| & \leqslant & (1 + 2 \varepsilon) 2^{- 2^{i + 1} +
1} \varepsilon \| F_0 \| .
\end{eqnarray*}
\end{theorem}
\begin{proof}
Let us prove by induction that $\| Z_k \| \leqslant 2^{- 2^k + 1}
\varepsilon$. Since $\varepsilon < \frac{1}{2}$, we have
\begin{eqnarray*}
\| Z_{k + 1} \| & \leqslant & \| Z_k \|^2 \qquad \tmop{from} \left(
\ref{eq3} \right)\\
& \leqslant & 2^{- 2^{k + 1} + 2} \varepsilon^2\\
& \leqslant & 2^{- 2^{k + 1} + 1} \varepsilon \quad \text{since } \varepsilon < \tfrac{1}{2}.
\end{eqnarray*}
Consequently $Z_{\infty} = 0.$ Since $X_k = - \frac{Z_k}{2}$ we deduce
\begin{eqnarray*}
\| X_k \| & \leqslant & 2^{- 2^k} \varepsilon .
\end{eqnarray*}
It follows $X_{\infty} = 0$. We have
\begin{eqnarray*}
E_k & = & E_{k - 1} (I_n + X_{k - 1})\\
& = & E_0 (I_n + X_0) \cdots (I_n + X_{k - 1}) .
\end{eqnarray*}
Denoting $W_i = \prod_{0 \leqslant k \leqslant i} (I_n + X_k)$, $W_{\infty}
= \prod_{k \geqslant 0} (I_n + X_k)$ we compute
\begin{eqnarray*}
\| W_{\infty} - I_n \| & \leqslant & \prod_{k \geqslant 0} (1 + 2^{- 2^k}
\varepsilon) - 1\\
& \leqslant & 2 \varepsilon\qquad\text{by using \Cref{lem-eps-u}}.
\end{eqnarray*}
Then $W_{\infty}$ is invertible and $\| W_{\infty}^{- 1} \| \leqslant
\dfrac{1}{1 - 2 \varepsilon}$. Let $E_{\infty} = E_0 W_{\infty}$. Hence $E_0
= E_{\infty} W_{\infty}^{- 1}$. In the same way $F_0 = W_{\infty}^{- 1}
F_{\infty}$. Finally, the identity $F_{\infty} E_{\infty} - I_n = 0$ permits
to conclude that $E_0$ and $F_0$ are invertible. In the same way we prove
easily that $\| W_i - I_n \| \leqslant 2 \varepsilon$. It follows that $W_i$
is invertible. Since $E_i = E_0 W_i$ we deduce that $E_i$ is invertible.
Moreover
\begin{eqnarray*}
\| W_i - W_{\infty} \| & \leqslant & \| W_i \| \left( \prod_{k
\geqslant i + 1} (1 + \| X_k \|) - 1 \right)\\
& \leqslant & (1 + \| W_i - I_n \|) \left( \prod_{k \geqslant 0} (1 +
2^{- 2^{k + i + 1}} \varepsilon) - 1 \right) \\
& \leqslant & (1 + 2 \varepsilon) 2^{- 2^{i + 1} + 1} \varepsilon \qquad\text{by using \Cref{lem-eps-u}}.
\end{eqnarray*}
We deduce that
\begin{eqnarray*}
\| E_i - E_{\infty} \| & \leqslant & (1 + 2 \varepsilon) 2^{- 2^{i + 1} +
1} \varepsilon \| E_0 \|.
\end{eqnarray*}
These properties also hold for the $F_i$'s. The theorem is proved.
\end{proof}
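To illustrate the theorem, the iteration is only a few lines of NumPy; the starting pair below, a well-conditioned matrix together with a slightly perturbed inverse, is our own arbitrary choice and not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Starting pair: F0 is a perturbed inverse of E0, so ||Z_0|| = ||F0 E0 - I_n||
# is small and well below the hypothesis ||Z_0|| < 1/2.
E = np.eye(n) + 0.1 * rng.standard_normal((n, n))
F = np.linalg.inv(E) + 1e-4 * rng.standard_normal((n, n))

I = np.eye(n)
residuals = []
for _ in range(5):
    Z = F @ E - I                                  # Z_i = F_i E_i - I_n
    residuals.append(np.abs(Z).sum(axis=1).max())  # infinity (max-row-sum) norm
    X = -Z / 2                                     # X_i = -Z_i / 2
    E = E @ (I + X)                                # E_{i+1} = E_i (I_n + X_i)
    F = (I + X) @ F                                # F_{i+1} = (I_n + X_i) F_i
```

The recorded residuals $\|Z_i\|$ roughly square at each step, as predicted by (\ref{eq3}), until they reach machine precision.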
\section{ Newton-like method for diagonalizable matrices.}\label{sec-p=1}
Let $M\in\mathcal{W}_n$, $\Sigma\in\mathcal{D}_n'$, $E,~F\in GL_n$. We aim to construct Newton sequences which converge towards the numerical solution of $f(E, F, \Sigma)=0$ where $f: GL_n\times GL_n\times\mathcal{D}_n'\to \mathbb{C}^{n\times n}\times \mathbb{C}^{n\times n},~(E, F, \Sigma)\mapsto(FE-I_n, FME-\Sigma)$.
We consider in the same way as before the perturbations $E + EX$ and $F + YF$ of respectively $E$ and $F$, and in addition the perturbation $\Sigma+S$ of $\Sigma$ with $S\in\mathcal{D}_n$. With $Z=FE-I_n$ and $\Delta = FME - \Sigma$, we get:
\begin{align*}
&(F + YF) (E + EX) - I_n \\
&= Z + (Z + I_n) X + Y (Z + I_n) + Y (Z + I_n) X \numberthis \label{FE-In} \\
&(F + YF) M (E + EX) - \Sigma - S \\
&= FME - \Sigma-S + FMEX + YFME + YFMEX\nonumber\\
& = \Delta - S + \Sigma X + Y \Sigma + \Delta X + Y \Delta + Y (\Delta +\Sigma) X \numberthis \label{FME-S}
\end{align*}
As in the previous section we assume that $(E, F, \Sigma)$ is sufficiently close to the solution of $f(E, F, \Sigma)=0$, thus the linear system that we obtain from (\ref{FE-In}) and (\ref{FME-S}) is
\begin{equation*}\begin{cases} Z+X+Y&=0 \\ \Delta-S+\Sigma X+Y\Sigma&=0 \end{cases}\end{equation*}
The following lemma gives a solution of this linear system.
\begin{lemma}
\label{lem-SXY3}Let $\Sigma = \tmop{diag} (\sigma_1, \cdots, \sigma_n)$, $Z =
(z_{i, j})_{1\le i, j\le n}$ and $\Delta = (\delta_{i, j})_{1\le i, j\le n}$ be given matrices in $\mathbb{C}^{n\times n}$. Assume that
$\sigma_i \neq \sigma_j$ for $i \neq j$. Let $S$, $X$ and $Y$ be matrices
defined by
\begin{eqnarray}
S & = & \tmop{diag} (\Delta - Z \Sigma) \label{SXY-1}\\
x_{i, i} & = & 0 \\
x_{i, j} & = & \frac{- \delta_{i, j} + z_{i, j} \sigma_j}{\sigma_i -
\sigma_j}, \qquad i \neq j \\
y_{i, i} & = & - z_{i, i} \\
y_{i, j} & = & \frac{\delta_{i, j} - z_{i, j} \sigma_i}{\sigma_i -
\sigma_j}, \qquad i \neq j. \label{SXY-5}
\end{eqnarray}
Then we have
\begin{eqnarray}
Z + X + Y & = & 0 \label{Z+X+Y=0}\\
\Delta - S + \Sigma X + Y \Sigma & = & 0 \label{Delta-S-etc3}
\end{eqnarray}
Moreover
\begin{eqnarray}
\| X \|, \| Y \| & \leqslant & \kappa \varepsilon (K + 1)
\label{bnd-NX-NY}
\end{eqnarray}
where $\varepsilon \geqslant \max (\| Z \|, \| \Delta \|)$, $\kappa = \max
\left( 1, \max_{i \neq j} \dfrac{1}{\abs{ \sigma_i - \sigma_j }} \right)$
and $K = \max (1, \max_i \abs{ \sigma_i })$.
\end{lemma}
\begin{proof}
It is easy to verify that $X + Y + Z = 0.$ Hence the equation
(\ref{Delta-S-etc3}) is equivalent to
\begin{eqnarray*}
\Delta - S - Z \Sigma + \Sigma X - X \Sigma & = & 0.
\end{eqnarray*}
Since $\tmop{diag} (\Delta - S - Z \Sigma) = \tmop{diag} (\Sigma X - X
\Sigma) = 0$, the formulas which define $X$ follow easily. The bounds
(\ref{bnd-NX-NY}) are also straightforward to establish.
\end{proof}
In the next theorem we introduce the Newton sequences associated to the system $f(E, F, \Sigma)=0$ with a sufficient condition on the initial point for its quadratic convergence.
\begin{theorem}\label{theo2}
Let $E_0, F_0\in GL_n$ and $\Sigma_0\in\mathcal{D}_n'$ be given and define the sequences for $i
\geqslant 0$,
\begin{eqnarray*}
Z_i & = & F_i E_i - I_n\\
\Delta_i & = & F_i ME_i - \Sigma_i\\
S_i & = & \tmop{diag} (\Delta_i - Z_i \Sigma_i)\\
E_{i + 1} & = & E_i (I_n + X_i)\\
F_{i + 1} & = & (I_n + Y_i) F_i\\
\Sigma_{i + 1} & = & \Sigma_i + S_i,
\end{eqnarray*}
where $S_i$, $X_i$ and $Y_i$ are defined by the formulas
(\ref{SXY-1}--\ref{SXY-5}). Let us define $\varepsilon_0 = \max (\| Z_0 \|,
\| \Delta_0 \|)$, $\kappa_0 = \max \left( 1, \max_{i \neq j} \dfrac{1}{\abs{
\sigma_{0, i} - \sigma_{0, j} }} \right)$\quad and $K_0 = \max (1,
\max_i \abs{ \sigma_{0, i} })$. Assume that
\begin{eqnarray}
u := \kappa_0^2 (K_0 + 1)^3 \varepsilon_0 & \leqslant & 0.136.
\label{cond-convergence}
\end{eqnarray}
Then the sequences $(\Sigma_{i}, E_i, F_i)_{i \geqslant 0}$ converge
quadratically to the solution of $(FE - I_n, FME - \Sigma) = 0$. More precisely,
$E_0$ and $F_0$ are invertible and
\begin{eqnarray*}
\| E_i - E_{\infty} \| & \leqslant & 0.61 \times 2^{1 - 2^{i + 1}} \| E_0
\| u\\
\| F_i - F_{\infty} \| & \leqslant & 0.61 \times 2^{1 - 2^{i + 1}} \| F_0
\| u.
\end{eqnarray*}
\end{theorem}
\begin{proof}
For each~$i \geqslant 0$, let us denote
\[ \begin{array}{rclcrcl}
\varepsilon_{} & = & \varepsilon_0 & & \varepsilon_i & = & \max (\|
Z_i \|, \| \Delta_i \|)\\
\kappa_{} & = & \kappa_0 & \qquad & \kappa_i & = & \max \left( 1, \;
\max_{1 \leqslant j < k \leqslant n} \frac{1}{\abs{ \sigma_{i,
k^{}}^{} - \sigma_{i, j}^{} }} \right)\\
K_{} & = & K_0 & & K_i & = & \max_{1\le k\le n} \left( 1, \; \abs{
\sigma_{i, k}^{} } \right),
\end{array} \]
where $\sigma_{i, 1}^{}, \ldots, \sigma_{i, n}^{}$ denote the diagonal
entries of $\Sigma_i^{}$. Let us show by induction on $i$ that
\begin{eqnarray}
\varepsilon_i & \leqslant & 2^{1 - 2^i} \varepsilon \label{main-ind-13}\\
\| \Sigma_i - \Sigma_0 \| & \leqslant & (2 - 2^{2 - 2^i}) \varepsilon
\label{main-ind-23}\\
\kappa_i & \leqslant & \frac{\kappa}{1 - 4 \kappa \varepsilon}
\label{main-ind-3}\\
K_i & \leqslant & K + 2 \varepsilon \label{main-ind-43}
\end{eqnarray}
These inequalities clearly hold for $i = 0$. Assume that the induction
hypothesis holds for a given $i$; let us prove it for $i + 1$. First we
have
\begin{eqnarray*}
Z_{i + 1} & = & Z_i X_i + Y_i Z_i + Y_i (Z_i + I_n) X_i .
\end{eqnarray*}
Hence
\begin{eqnarray*}
\| Z_{i + 1} \| & \leqslant & (2 + \kappa_i (K_i + 1) (1 + \varepsilon_i))
\kappa_i (K_i + 1) \varepsilon_i^2\\
& \leqslant & 3 \kappa_i^2 (K_i + 1)^3 \varepsilon_i^2 .
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
\Delta_{i + 1} & = & \Delta_i X_i + Y_i \Delta_i + Y_i (\Delta_i +
\Sigma_i) X_i .
\end{eqnarray*}
\begin{eqnarray*}
\| \Delta_{i + 1} \| & \leqslant & (2 + \kappa_i (K_i + 1) (K_i +
\varepsilon_i)) \kappa_i (K_i + 1) \varepsilon_i^2\\
& \leqslant & 3 \kappa_i^2 (K_i + 1)^3 \varepsilon_i^2 .
\end{eqnarray*}
It follows
\begin{eqnarray*}
\varepsilon_{i + 1} & \leqslant & \frac{3 \kappa^2 (K + 1 + 2
\varepsilon)^3}{(1 - 4 \kappa \varepsilon)^2} \varepsilon_i^2\\
& \leqslant & \frac{3 \left( 1 + \frac{u}{8} \right)^3}{\left( 1 -
\frac{u}{2} \right)^2} \kappa^2 (K + 1)^3 \varepsilon_i^2 \quad
\tmop{since} \quad \varepsilon \leqslant \frac{u}{8}\\
& \leqslant & \frac{3 \left( 1 + \frac{u}{8} \right)^3}{\left( 1 -
\frac{u}{2} \right)^2} \kappa^2 (K + 1)^3 \varepsilon \, 2^{2
- 2^{i + 1}} \varepsilon\\
& \leqslant & 2^{1 - 2^{i + 1}} \varepsilon \quad \tmop{since} \quad
\frac{3 \left( 1 + \frac{u}{8} \right)^3}{\left( 1 - \frac{u}{2}
\right)^2} \kappa^2 (K + 1)^3 \varepsilon \leqslant 2^{- 1}
\ \tmop{for}\ u \leqslant 0.136.
\end{eqnarray*}
Next we prove (\ref{main-ind-23}) for $i + 1$. We have :
\begin{eqnarray*}
\| \Sigma_{i + 1} - \Sigma_0 \| &\leqslant&
\|S_i \| + \| \Sigma_i - \Sigma_0 \| \\&\leqslant & 2^{1 - 2^i} \varepsilon
+ (2 - 2^{2 - 2^i}) \varepsilon =
(2 - 2^{1 - 2^i}) \varepsilon\\
& \leqslant & (2 - 2^{2 - 2^{i + 1}}) \varepsilon .
\end{eqnarray*}
We then deduce (\ref{main-ind-43}) for $i + 1$ :
\begin{eqnarray*}
K_{i + 1} = \max (1, \| \Sigma_{i + 1}^{} \|) & \leqslant & \max (1, \| \Sigma_0^{} \|)
+ (2 - 2^{2 - 2^{i + 1}}) \varepsilon \leqslant K + 2 \varepsilon .
\end{eqnarray*}
Let us finally prove (\ref{main-ind-3}) for $i + 1$. The $\sigma_{i + 1,
j}$'s are the diagonal values of $\Sigma_{i + 1}^{}$. The bound \cite{Weyl1912} implies that
\[ \abs{ \sigma_{i + 1, j} - \sigma_{0,
j} } \leqslant \| \Sigma_{i + 1} - \Sigma_0 \| \leqslant 2
\varepsilon \hspace{3em} \text{for} ~1 \leqslant j \leqslant n. \]
So that for $1 \leqslant j < k \leqslant n$, we obtain, using $\kappa
\varepsilon \leqslant \frac{u}{8}$:
\begin{eqnarray*}
\abs{ \sigma_{i + 1, k} - \sigma_{i + 1, j} } & \geqslant & \abs{ \sigma_{0, k} -
\sigma_{0, j} } - \abs{ \sigma_{i + 1, k} - \sigma_{0, k} } - \abs{ \sigma_{i + 1,
j} - \sigma_{0, j} }\\
& \geqslant & \abs{ \sigma_{0, k} - \sigma_{0, j} } (1 - \kappa \abs{ \sigma_{i
+ 1, k} - \sigma_{0, k} } - \kappa \abs{ \sigma_{i + 1, j} - \sigma_{0, j}
})\\
& \geqslant & \abs{ \sigma_{0, j} - \sigma_{0, k} } (1 - 4 \kappa
\varepsilon) \hspace{1.2em}\\
& \geqslant & \abs{ \sigma_{0, j} - \sigma_{0, k} } \Big(1 - \frac{u}{2}\Big)
\geqslant 0.
\end{eqnarray*}
Finally, we get:
\begin{eqnarray*}
\kappa_{i + 1} & \leqslant & \dfrac{\kappa}{1 - 4 \kappa \varepsilon} .
\end{eqnarray*}
This completes the proof of the four induction
hypotheses~(\ref{main-ind-13}--\ref{main-ind-43}) at order $i + 1$.
Let $W_i = \prod_{k = 0}^i (I_n + X_k)$. Since
\begin{eqnarray*}
\| X_k \| & \leqslant & \kappa_k (K_k + 1) \varepsilon_k\\
& \leqslant & \frac{1 + \frac{u}{8}}{1 - \frac{u}{2}} \kappa_{} (K_{} +
1) \varepsilon_{} 2^{1 - 2^k}\\
& \leqslant & \frac{\left( 1 + \frac{u}{8} \right) u}{4 \left( 1 -
\frac{u}{2} \right)} 2^{1 - 2^k}\\
& \leqslant & 0.28 \times 2^{1 - 2^k} u \quad \tmop{since} u \leqslant
0.136.
\end{eqnarray*}
Consequently,
\begin{eqnarray*}
\| W_{\infty} - I_n \| & \leqslant & \prod_{i \geqslant 0} (1 + 0.28 u
2^{1 - 2^i}) - 1\\
& \leqslant & 0.56 u \quad \text{from \Cref{lem-eps-u}}\\
& \leqslant & 0.56 \times 0.136 \leqslant 0.0762.
\end{eqnarray*}
Hence $W_{\infty}$ is invertible and $E_0 = E_{\infty} W_{\infty}^{- 1}$.
This implies that $E_0$ is invertible. Moreover,
\begin{eqnarray*}
\| W_i - W_{\infty} \| & \leqslant & \| W_i \| \left( \prod_{k
\geqslant i + 1} (1 + \| X_k \|) - 1 \right)\\
& \leqslant & (1 + \| W_i - I_n \|) \left( \prod_{k \geqslant 0} (1 +
0.28 \times 2^{1 - 2^{k + i + 1}} u) - 1 \right)\\
& \leqslant & (1 + 0.0762) \times 0.56 \times 2^{1 - 2^{i + 1}} u\quad \text{from \Cref{lem-eps-u}}\\
& \leqslant & 0.61 \times 2^{1 - 2^{i + 1}} u.
\end{eqnarray*}
We deduce that
\begin{eqnarray*}
\| E_i - E_{\infty} \| & \leqslant & 0.61 \times 2^{1 - 2^{i + 1}} \| E_0
\| u.
\end{eqnarray*}
In the same way we show that $F_0$ is invertible and
\begin{eqnarray*}
\| F_i - F_{\infty} \| & \leqslant & 0.61 \times 2^{1 - 2^{i + 1}} \| F_0
\| u.
\end{eqnarray*}
The theorem is proved.
\end{proof}
\begin{proposition}\label{complexity}
The complexity of the Newton iteration in \Cref{theo2} is in $\mathcal O(n^\omega)$, where $\omega$ is the exponent of matrix multiplication.
\end{proposition}
\begin{proof}
The computation of all the entries $x_{i,j}$, $y_{i,j}$ of $X_i$ and $Y_i$ by the formulas (\ref{SXY-1}--\ref{SXY-5}) requires in total $\mathcal O(n^2)$ arithmetic operations. The computation of $Z_i, \Delta_i, S_i, E_{i+1}, F_{i+1}$, which requires $6$ matrix multiplications and diagonal matrix operations, has a complexity in $\mathcal O(n^\omega)$.
Consequently, the complexity of each iteration is in $\mathcal O(n^\omega)$.
\end{proof}
\begin{remark}
It is possible to generalize this approach in the case where the diagonal matrices are
replaced by Jordan matrices.
\end{remark}
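As an illustration of \Cref{theo2}, the following NumPy sketch implements one step of the iteration via the formulas (\ref{SXY-1}--\ref{SXY-5}) and runs it on a small example. The test matrix with spectrum $\{1, 2, 3, 4\}$ and the perturbation sizes are our own choices, not data from the paper:

```python
import numpy as np

def newton_diag_step(M, E, F, Sigma):
    # One step of the Newton-like iteration; Sigma is stored as the vector
    # of diagonal entries of the diagonal matrix.
    n = M.shape[0]
    I = np.eye(n)
    Z = F @ E - I
    Delta = F @ M @ E - np.diag(Sigma)
    S = np.diag(Delta - Z @ np.diag(Sigma))   # S = diag(Delta - Z*Sigma), as a vector
    d = Sigma[:, None] - Sigma[None, :]       # sigma_i - sigma_j
    np.fill_diagonal(d, 1.0)                  # dummy value; diagonals are overwritten below
    X = (-Delta + Z * Sigma[None, :]) / d     # x_ij = (-delta_ij + z_ij sigma_j)/(sigma_i - sigma_j)
    np.fill_diagonal(X, 0.0)                  # x_ii = 0
    Y = (Delta - Z * Sigma[:, None]) / d      # y_ij = (delta_ij - z_ij sigma_i)/(sigma_i - sigma_j)
    np.fill_diagonal(Y, -np.diag(Z))          # y_ii = -z_ii
    return E @ (I + X), (I + Y) @ F, Sigma + S

rng = np.random.default_rng(1)
n = 4
V0 = np.eye(n) + 0.2 * rng.standard_normal((n, n))
M = V0 @ np.diag([1.0, 2.0, 3.0, 4.0]) @ np.linalg.inv(V0)  # known, well-separated spectrum
# Starting point: the exact diagonalizer slightly perturbed (our choice).
E = V0 + 1e-5 * rng.standard_normal((n, n))
F = np.linalg.inv(V0) + 1e-5 * rng.standard_normal((n, n))
Sigma = np.array([1.0, 2.0, 3.0, 4.0]) + 1e-5 * rng.standard_normal(n)
errs = []
for _ in range(6):
    E, F, Sigma = newton_diag_step(M, E, F, Sigma)
    errs.append(max(np.abs(F @ E - np.eye(n)).max(),
                    np.abs(F @ M @ E - np.diag(Sigma)).max()))
```

In accordance with the theorem, the recorded errors shrink roughly quadratically and $\Sigma_i$ converges to the eigenvalues of $M$.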
\section{Newton-like method for two simultaneously diagonalizable matrices.} \label{sec-p=2}
Let $M_1, M_2$ be two commuting matrices in $\mathcal{W}_n$; then $M_1$ and $M_2$ are simultaneously diagonalizable. We aim to find $E, F\in GL_n$ which simultaneously diagonalize $M_1$ and $M_2$, i.e. such that $FM_kE=\Sigma_k$ for $k \in\{1, 2\}$, with $\Sigma_1, \Sigma_2\in\mathcal{D}_n'$. This is equivalent to finding a numerical solution of $f(E, F, \Sigma_1, \Sigma_2)=0$ where $f:(E, F, \Sigma_1, \Sigma_2)\mapsto(FM_1E-\Sigma_1, FM_2E-\Sigma_2)$.
We consider as before perturbations $E+EX$, $F+ YF$ and
$\Sigma_k + S_k$ of respectively $E$, $F$ and $\Sigma_k$ for $k \in \{
1, 2 \}$. Letting $Z_k = F M_k E - \Sigma_k$ for $k = 1, 2$, we have:
\begin{align*}
&(F + YF) M_k (E + EX) - (\Sigma_k+S_k) \\
&= Z_k - S_k + \Sigma_k
X + Y \Sigma_k + Z_k X + Y Z_k + Y (Z_k + \Sigma_k) X \numberthis \label{k=1,2}
\end{align*}
Assuming that $Z_1$ and $Z_2$ have small norm, the linear system to solve, obtained from Equation (\ref{k=1,2}), is the following:
\begin{eqnarray}
Z_k - S_k + \Sigma_k X + Y \Sigma_k & = & 0, \qquad k = 1, 2
\label{Delta}
\end{eqnarray}
A solution of (\ref{Delta}) is given in the following lemma.
\begin{lemma}
\label{lem-SXY1} Let $\Sigma_k = \tmop{diag} (\sigma_1^k, \cdots,
\sigma_n^k)$, $Z_k = (z^k_{i, j})_{1\le i, j\le n}$ be given matrices in $\mathbb{C}^{n\times n}$ for $k\in\{1, 2\}$. Assume that
$\begin{vmatrix}
\sigma_j^1 & \sigma_j^2\\
\sigma_i^1 & \sigma_i^2
\end{vmatrix} \neq 0$ for $i \neq j$. Let $X$, $Y$, and $S_k$ be
matrices defined by
\begin{eqnarray}
x_{i, i} & = & 0 \label{eq:23}\\
x_{i, j} & = & \frac{\begin{vmatrix}
\sigma_j^1 & z_{i, j}^1\\
\sigma_j^2 & z_{i, j}^2
\end{vmatrix}}{\begin{vmatrix}
\sigma_i^1 & \sigma_j^1\\
\sigma_i^2 & \sigma_j^2
\end{vmatrix}}, \qquad i \neq j \label{eq:24}\\
y_{i, i} & = & 0 \\
y_{i, j} & = & - \frac{\begin{vmatrix}
\sigma_i^1 & z_{i, j}^1\\
\sigma_i^2 & z_{i, j}^2
\end{vmatrix}}{\begin{vmatrix}
\sigma_i^1 & \sigma_j^1\\
\sigma_i^2 & \sigma_j^2
\end{vmatrix}}, \qquad i \neq j \label{eq:26}\\
S_k & = & \tmop{diag} (Z_k), \qquad k = 1, 2. \label{SXY-11}
\end{eqnarray}
Then we have
\begin{eqnarray}
Z_k - S_k + \Sigma_k X + Y \Sigma_k & = & 0, \qquad k = 1, 2
\label{Delta-S-etc4}
\end{eqnarray}
Moreover
\begin{eqnarray}
\| X \|, \| Y \| & \leqslant & 2 \kappa \varepsilon K \label{eq:bnd-29}
\end{eqnarray}
where $\varepsilon = \max (\| Z_1 \|, \| Z_2 \|)$, $\kappa = \max \left( 1,
\max_{i \neq j} \dfrac{1}{\left|\begin{vmatrix}
\sigma_i^1 & \sigma_j^1\\
\sigma_i^2 & \sigma_j^2
\end{vmatrix}\right|} \right)$ and $K = \max (1, \max_{i, k}
\abs{ \sigma_i^k })$.
\end{lemma}
\begin{proof}
It is easy to verify that the equation (\ref{Delta-S-etc4}) implies that for
$i \neq j$,
\begin{eqnarray*}
\sigma_i^k x_{i, j} + \sigma_j^k y_{i, j} + z_{i, j^{}}^k & = & 0
\end{eqnarray*}
and that the solution of these equations is given by the formulas
{\eqref{eq:24}} and {\eqref{eq:26}}. Choosing $x_{i, i} = y_{i, i} = 0$, we take
$S_k = \tmop{diag} (Z_k + \Sigma_k X + Y \Sigma_k) = \tmop{diag} (Z_k)$,
since $\Sigma_k X + Y \Sigma_k$ is an off-matrix, so that the equation
(\ref{Delta-S-etc4}) is satisfied. The bounds (\ref{eq:bnd-29}) easily follow
from {\eqref{eq:24}} and {\eqref{eq:26}}.
\end{proof}
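The formulas of \Cref{lem-SXY1} can also be checked numerically: with generic diagonals and small off-diagonal data, the residuals of (\ref{Delta-S-etc4}) vanish up to rounding. A short NumPy sketch with example data of our own choosing:

```python
import numpy as np

# Generic diagonals: every 2x2 determinant sig1_i*sig2_j - sig1_j*sig2_i
# (i != j) is nonzero for this choice.
sig1 = np.array([1.0, 2.0, 3.0, 4.0])
sig2 = np.array([1.0, 3.0, -2.0, 5.0])
rng = np.random.default_rng(2)
Z1 = 1e-2 * rng.standard_normal((4, 4))
Z2 = 1e-2 * rng.standard_normal((4, 4))

D = sig1[:, None] * sig2[None, :] - sig1[None, :] * sig2[:, None]  # denominators
np.fill_diagonal(D, 1.0)                             # dummy; diagonals set below
X = (sig1[None, :] * Z2 - sig2[None, :] * Z1) / D    # x_ij, formula (eq:24)
np.fill_diagonal(X, 0.0)
Y = -(sig1[:, None] * Z2 - sig2[:, None] * Z1) / D   # y_ij, formula (eq:26)
np.fill_diagonal(Y, 0.0)
S1, S2 = np.diag(np.diag(Z1)), np.diag(np.diag(Z2))  # S_k = diag(Z_k)

# Both residuals of (Delta-S-etc4) vanish up to rounding:
R1 = Z1 - S1 + np.diag(sig1) @ X + Y @ np.diag(sig1)
R2 = Z2 - S2 + np.diag(sig2) @ X + Y @ np.diag(sig2)
```

This is exactly the $2\times 2$ Cramer solve of $\sigma_i^k x_{i,j} + \sigma_j^k y_{i,j} + z_{i,j}^k = 0$ carried out entrywise.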
\begin{theorem}\label{theo3}
\label{th-quad-conv}Let $E_0$, $F_0\in GL_n$ and $\Sigma_{0, k} = \tmop{diag}
(\sigma_{0, 1}^k, \ldots, \sigma_{0, n}^k)\in\mathcal{D}_n'$, $k = 1, 2$, be given and
define the sequences for $i \geqslant 0$ and $k = 1, 2$:
\begin{eqnarray*}
Z_{i, k} & = & F_i M_k E_i - \Sigma_{i, k} \quad\\
S_{i, k} & = & \tmop{diag} (Z_{i, k})\\
E_{i + 1} & = & E_i (I_n + X_i)\\
F_{i + 1} & = & (I_n + Y_i) F_i\\
\Sigma_{i + 1, k} & = & \Sigma_{i, k} + S_{i, k},
\end{eqnarray*}
where \ $X_i$, $Y_i$ are defined by the formulas (\ref{eq:23}--\ref{eq:26}).
Let $\varepsilon_0 = \max (\| Z_{0, 1} \|, \| Z_{0, 2} \|)$, $\kappa_0 =
\max \left( 1, \max_{i \neq j} \dfrac{1}{\left|\begin{vmatrix}
\sigma_{0, i}^1&\sigma_{0, j}^1\\
\sigma_{0, i}^2&\sigma_{0, j}^2
\end{vmatrix}\right|} \right)$ and $K_0 = \max (1,
\max_{j, k} \abs{ \sigma_{0, j}^k })$. Assume that
\begin{eqnarray}
u := 4 \varepsilon_0 \kappa_0^2 K_0^3 & \leqslant & 0.094.
\end{eqnarray}
Then the sequences $(\Sigma_{i, k}, E_i, F_i)_{i \geqslant 0}$ converge
quadratically to the solution of $FM_k E - \Sigma_k = 0$ for $k = 1, 2$. More
precisely, $E_0$ and $F_0$ are invertible and
\begin{eqnarray*}
\| E_i - E_{\infty} \| & \leqslant & 1.46 \times 2^{1 - 2^{i + 1}} \| E_0
\| u\\
\| F_i - F_{\infty} \| & \leqslant & 1.46 \times 2^{1 - 2^{i + 1}} \| F_0
\| u.
\end{eqnarray*}
\end{theorem}
\begin{proof}
For each~$i \geqslant 0$, let us denote
\[ \begin{array}{rclcrcl}
\varepsilon_{} & = & \varepsilon_0 & & \varepsilon_i & = & \max (\|
Z_{i, 1} \|, \| Z_{i, 2} \|)\\
\kappa_{} & = & \kappa_0 & \qquad & \kappa_i & = & \max \left( 1, \;
\max_{1 \leqslant j < k \leqslant n} \dfrac{1}{\left|\begin{vmatrix}
\sigma_{i, j}^1 & \sigma_{i, k}^1\\
\sigma_{i, j}^2 & \sigma_{i, k}^2
\end{vmatrix}\right|} \right)\\
K & = & K_0 & & K_i & = & \max (1, \max_{j, k} (\abs{
\sigma_{i, j}^k } )) ,
\end{array} \]
where $\sigma_{i, 1}^k, \ldots, \sigma_{i, n}^k$ are the diagonal entries
of $\Sigma_{i, k}^{}$. Let us show by induction on $i$ that
\normalsize{\begin{eqnarray}
\varepsilon_i & \leqslant & 2^{1 - 2^i} \varepsilon \label{main-ind-14}\\
\| \Sigma_{i, k} - \Sigma_{0, k} \| & \leqslant & (2 - 2^{2 - 2^i})
\varepsilon \label{main-ind-24}\\
\kappa_i & \leqslant & \frac{\kappa}{1 - 8 \kappa \varepsilon (K +
\varepsilon)} \label{main-ind-34}\\
K_i & \leqslant & K + 2 \varepsilon \label{main-ind-44}
\end{eqnarray}}
These inequalities clearly hold for $i = 0$. Assume that the induction
hypothesis holds for a given $i$; let us prove it for $i + 1$. First, we
have
\begin{eqnarray*}
Z_{i + 1, k} & = & Z_{i, k} X_i + Y_i Z_{i, k} + Y_i (Z_{i, k} +
\Sigma_{i, k}) X_i .
\end{eqnarray*}
\begin{eqnarray*}
\| Z_{i + 1, k} \| & \leqslant & 2 \varepsilon_i^2 \kappa_i K_i + 2
\varepsilon_i^2 \kappa_i K_i + 4 \varepsilon_i^2 \kappa_i^2 K_i^2
(\varepsilon_i + K_i)\\
\text{} & \leqslant & 4 \varepsilon_i^2 \kappa_i^2 K_i + 4 \varepsilon_i^2
\kappa_i^2 K_i^2 (1 + K_i) \quad \text{since $\varepsilon_i \leqslant 1
\text{and $\kappa_i \geqslant 1$}$}\\
& \leqslant & 3 \times 4 \varepsilon_i^2 \kappa_i^2 K_i^3 = 12
\varepsilon_i^2 \kappa_i^2 K_i^3 \qquad \text{since $K_i \geqslant 1.$}
\hspace{3em}
\end{eqnarray*}
It follows
\begin{eqnarray*}
\varepsilon_{i + 1} & \leqslant & \frac{12 \kappa^2 (K + 2
\varepsilon)^3}{(1 - 8 \kappa \varepsilon (K + \varepsilon))^2}
\varepsilon_i^2 \quad \leqslant \quad \frac{12 \varepsilon \kappa^2 (K + 2
\varepsilon)^3}{(1 - 8 \kappa \varepsilon (K + \varepsilon))^2} 2^{2 -
2^{i + 1}} \varepsilon\\
& \leqslant & 3 \frac{\left( 1 + \frac{u}{2} \right)^3}{\left( 1 - 2 u
\left( 1 + \frac{u}{4} \right) \right)^2} u \, 2^{2 - 2^{i + 1}}
\varepsilon \quad \tmop{since} \quad \frac{\varepsilon}{K} \leqslant
\frac{u}{4},\ \kappa \varepsilon \leqslant \frac{u}{4}\\
& \leqslant & 2^{1 - 2^{i + 1}} \varepsilon \quad \tmop{since} \quad 3
\frac{\left( 1 + \frac{u}{2} \right)^3}{\left( 1 - 2 u \left( 1 +
\frac{u}{4} \right) \right)^2} u \leqslant 2^{- 1} \ \tmop{for}\ u \leqslant
0.094.
\end{eqnarray*}
The proof of (\ref{main-ind-24}) is the same as that of (\ref{main-ind-23}) in the proof of \Cref{theo2}, and
(\ref{main-ind-44}) follows as (\ref{main-ind-43}) does, since $K_{i + 1} \leqslant \max (1, \| \Sigma_{i + 1, 1} \|, \| \Sigma_{i + 1, 2} \|) \leqslant K + 2
\varepsilon$. Moreover, we have
$\abs{ \sigma_{i + 1, j}^k - \sigma_{0, j}^k } \leqslant \| \Sigma_{i + 1, k} - \Sigma_{0, k}
\| \leqslant 2 \varepsilon$ for $1 \leqslant j \leqslant n$ and $k = 1, 2$.
Let us finally prove (\ref{main-ind-34}) for $i + 1$. First we have:
\begin{eqnarray*}
\abs{ \sigma_{i + 1, j}^1 \sigma_{i + 1, k}^2 - \sigma_{0, j}^1
\sigma_{0, k}^2 } & = & \abs{ \sigma_{i + 1, j}^1
\sigma_{i + 1, k}^2 - \sigma_{0, j}^1 \sigma_{i + 1, k}^2 + \sigma_{0,
j}^1 \sigma_{i + 1, k}^2 - \sigma_{0, j}^1 \sigma_{0, k}^2 }\\
& = & \abs{ \sigma_{i + 1, k}^2 (\sigma_{i + 1, j}^1 - \sigma_{0, j}^1) +
\sigma_{0, j}^1 ( \sigma_{i + 1, k}^2 - \sigma_{0, k}^2) }\\
& \leqslant & 2 \varepsilon \abs{ \sigma_{i + 1, k}^2 } + 2 \varepsilon \abs{
\sigma_{0, j}^1 }\\
& \leqslant & 2 \varepsilon (K + 2 \varepsilon) + 2 \varepsilon K = 4
\varepsilon (K + \varepsilon).
\end{eqnarray*}
Now,
\begin{eqnarray*}
\abs{\sigma_{i + 1, j}^1 \sigma_{i + 1, k}^2 - \sigma_{i + 1, k}^1 \sigma_{i
+ 1, j}^2 } & \geqslant & \abs{ \sigma_{0, j}^1 \sigma_{0, k}^2 - \sigma_{0,
k}^1 \sigma_{0, j}^2 } - \abs{ \sigma_{0, j}^1 \sigma_{0, k}^2 - \sigma_{i +
1, j}^1 \sigma_{i + 1, k}^2 } - \abs{ \sigma_{i + 1, k}^1 \sigma_{i + 1, j}^2
- \sigma_{0, k}^1 \sigma_{0, j}^2 }\\
& \geqslant & \abs{ \sigma_{0, j}^1 \sigma_{0, k}^2 - \sigma_{0, k}^1
\sigma_{0, j}^2 } (1 - 8 \kappa \varepsilon (K + \varepsilon)) .
\end{eqnarray*}
Finally, we get:
\begin{eqnarray*}
\kappa_{i + 1} & \leqslant & \dfrac{\kappa}{1 - 8 \kappa \varepsilon (K
+ \varepsilon)} .
\end{eqnarray*}
This completes the proof of the four induction
hypotheses~(\ref{main-ind-14}--\ref{main-ind-44}) at order $i + 1$.
Let $W_i = \prod_{k = 0}^i (I_n + X_k)$. We have
\begin{eqnarray*}
\| X_k \| & \leqslant & 2 \kappa_k K_k \varepsilon_k\\
& \leqslant & 2 \frac{\kappa}{1 - 8 \kappa \varepsilon (K + \varepsilon)}
(K + 2 \varepsilon) \varepsilon 2^{1 - 2^k}\\
& \leqslant & \frac{\left( 1 + \frac{u}{2} \right) u}{2 \left( 1 - 2 u
\left( 1 + \frac{u}{4} \right) \right)} 2^{1 - 2^k}\\
& \leqslant & 0.65 \times 2^{1 - 2^k} u \quad \text{since } u \leqslant
0.094.
\end{eqnarray*}
Consequently,
\begin{eqnarray*}
\| W_{\infty} - I_n \| & \leqslant & \prod_{i \geqslant 0} (1 + 0.65
\times 2^{1 - 2^i} u) - 1\\
& \leqslant & 1.3 u \quad \text{from \Cref{lem-eps-u}}\\
& \leqslant & 1.3 \times 0.094 = 0.1222.
\end{eqnarray*}
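The numerical estimate on the infinite product above is easy to verify directly; the following quick sanity check (ours, not part of the proof) confirms that $\prod_{i \geqslant 0}(1 + 0.65 \times 2^{1-2^i}u) - 1 \leqslant 1.3\,u$ for several values of $u \leqslant 0.094$:

```python
# Sanity check of the infinite-product bound:
# prod_{i >= 0} (1 + 0.65 * 2**(1 - 2**i) * u) - 1 <= 1.3 * u  for u <= 0.094.
def product_bound(u, terms=8):
    # the factors tend to 1 doubly exponentially fast, so a few terms suffice
    p = 1.0
    for i in range(terms):
        p *= 1.0 + 0.65 * 2.0 ** (1 - 2 ** i) * u
    return p - 1.0

for u in (0.094, 0.05, 0.01, 0.001):
    assert product_bound(u) <= 1.3 * u
```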
Hence $W_{\infty}$ is invertible and $E_0 = E_{\infty} W_{\infty}^{- 1}$.
This implies that $E_0$ is invertible. Moreover,
\begin{eqnarray*}
\| W_i - W_{\infty} \| & \leqslant & \| W_i \| \left| 1 - \prod_{k
\geqslant i + 1} (1 + \| X_k \|) \right|\\
& \leqslant & (1 + \| W_i - I_n \|) \left| \prod_{k \geqslant 0} (1 +
0.65 \times 2^{1 - 2^{k + i + 1}} u) - 1 \right|\\
& \leqslant & (1 + 0.1222) \times 1.3 \times 2^{1 - 2^{i + 1}} u\\
& \leqslant & 1.46 \times 2^{1 - 2^{i + 1}} u.
\end{eqnarray*}
We deduce that
\begin{eqnarray*}
\| E_i - E_{\infty} \| & \leqslant & 1.46 \times 2^{1 - 2^{i + 1}} \| E_0
\| u.
\end{eqnarray*}
In the same way we show that $F_0$ is invertible and
\begin{eqnarray*}
\| F_i - F_{\infty} \| & \leqslant & 1.46 \times 2^{1 - 2^{i + 1}} \| F_0
\| u.
\end{eqnarray*}
The theorem is proved.
\end{proof}
\section{Convergence for a family of simultaneously diagonalizable matrices.}\label{sec-cvg-family}
In this section we present two strategies to solve the system (\ref{eq1}) for a family of commuting matrices $(M_i)_{1\le i\le p}$ in $\mathcal{W}_n$. The first strategy is straightforward and consists of finding the common diagonalizers $E$ and $F$ of the family by numerically solving one of the systems $(FE - I_n, FM_1 E - \Sigma_1) = 0$ or $(FM_1 E - \Sigma_1, FM_2 E - \Sigma_2) = 0$ using \Cref{theo2} or \Cref{theo3}. Next we deduce the remaining diagonal matrices $\Sigma_i$ using the formulas
\begin{eqnarray*}
\Sigma_{i, k} & = & \frac{E (:, k)^{\ast} M_i E (:, k)}{E (:, k)^{\ast} E
(:, k)} \qquad 1 \leqslant k \leqslant n, \quad 2~\text{or}~3 \leqslant i \leqslant p,
\end{eqnarray*}
where $E (:, k)$ is the \emph{k-th} column of $E$.\\ In this strategy we use the fact that a diagonalizer of one or two matrices of the family also diagonalizes the other matrices of the family. We note that, in general, simultaneously diagonalizable matrices do not have this property: for instance, it is possible to find a diagonalizer of $M_1$ which is not a common diagonalizer for the other matrices of the family. Nevertheless, this property holds here since we suppose that the matrices $M_i$ have simple eigenvalues.
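This first strategy can be sketched as follows (the $2\times2$ commuting matrices below are our own toy data; in the real case $\ast$ is a transpose). Once the common eigenvectors are known, each remaining diagonal is recovered column by column via the quotient above:

```python
# Recover sigma_{i,k} = E(:,k)^* M_i E(:,k) / (E(:,k)^* E(:,k)) from the
# columns of a common diagonalizer E (illustrative real 2x2 data).
def rayleigh(M, v):
    Mv = [sum(M[r][c] * v[c] for c in range(len(v))) for r in range(len(v))]
    return sum(v[r] * Mv[r] for r in range(len(v))) / sum(x * x for x in v)

# Common eigenvectors (columns of E): (1, 1) and (1, -1).
cols = [[1.0, 1.0], [1.0, -1.0]]
M1 = [[3.5, -1.5], [-1.5, 3.5]]   # eigenvalues 2 and 5 on those columns
M2 = [[1.0, -2.0], [-2.0, 1.0]]   # eigenvalues -1 and 3 on the same columns

sigma1 = [rayleigh(M1, v) for v in cols]   # -> [2.0, 5.0]
sigma2 = [rayleigh(M2, v) for v in cols]   # -> [-1.0, 3.0]
```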
Another strategy is to find a ``good'' linear combination of the $M_i$'s. This
is based on \Cref{good-combination} and \Cref{theo6}.
\begin{lemma}
\label{good-combination}Let us suppose that the $M_i$ commute pairwise and
are linearly independent, i.e., that $\sum_{i = 1}^p a_i M_i = 0 \Rightarrow
a_i = 0, i = 1 : p$. Let $E\in GL_n$ and $\Sigma_i\in\mathcal{D}_n'$ be such that
\begin{eqnarray*}
E^{- 1} M_i E - \Sigma_i & = & 0, \quad i = 1 : p.
\end{eqnarray*}
Let $S \in \mathbb{C}^{n \times p}$
be the matrix whose column $i$ is the diagonal of $\Sigma_i$. Let $\sigma =
(\sigma_1, \ldots, \sigma_n)$ and $\Sigma = \tmop{diag} (\sigma)$. Then
the matrix $S$ has full rank and $\alpha = (S^{\ast} S)^{- 1} S^{\ast}
\sigma$ satisfies
\begin{eqnarray*}
\sum_{i = 1}^p \alpha_i E^{- 1} M_i E - \Sigma & = & 0.
\end{eqnarray*}
\end{lemma}
\begin{proof}
Since the matrices $M_i$ are simultaneously diagonalizable, there exists $E$ such that
$E^{- 1} M_i E - \Sigma_i = 0$. The condition
\begin{eqnarray*}
\sum_{i = 1}^p \alpha_i \Sigma_i - \Sigma & = & 0
\end{eqnarray*}
can be written as $S \alpha = \sigma$, where $S \in \mathbb{C}^{n \times p}$ is as above. The
assumption $\sum_{i = 1}^p a_i M_i = 0 \Rightarrow a_i = 0, i = 1 : p$,
implies that $S$ has full rank. Consequently
\begin{eqnarray*}
\alpha & = & (S^{\ast} S)^{- 1} S^{\ast} \sigma .
\end{eqnarray*}
The lemma follows.
\end{proof}
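The computation of $\alpha = (S^\ast S)^{-1} S^\ast \sigma$ can be illustrated on tiny data (the matrices below are ours, chosen so that everything is exact; since this $S$ is square and invertible, the normal equations solve $S\alpha = \sigma$ exactly):

```python
from fractions import Fraction as F

# Column i of S is the diagonal of Sigma_i; alpha = (S^T S)^{-1} S^T sigma
# gives the coefficients of the "good" linear combination (real toy data).
S = [[F(2), F(-1)],
     [F(5), F(3)]]             # diag(Sigma_1) = (2, 5), diag(Sigma_2) = (-1, 3)
sigma = [F(1), F(-1)]          # target diagonal

St = [[S[j][i] for j in range(2)] for i in range(2)]             # S^T
G = [[sum(St[i][k] * S[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]                                           # S^T S
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[ G[1][1] / det, -G[0][1] / det],
        [-G[1][0] / det,  G[0][0] / det]]
rhs = [sum(St[i][k] * sigma[k] for k in range(2)) for i in range(2)]
alpha = [sum(Ginv[i][k] * rhs[k] for k in range(2)) for i in range(2)]

# alpha = (2/11, -7/11): indeed 2*(2/11) - (-7/11) = 1 and
# 5*(2/11) + 3*(-7/11) = -1, so the combined matrix has diagonal sigma.
```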
\begin{theorem}\label{theo6}
Let $M_1, \ldots, M_p \in \mathbb{C}^{n\times n}$ be $p$ simultaneously diagonalizable matrices satisfying the
linear independence assumption. Let us consider matrices $E_0$, $F_0$
and $\Sigma_{0, i} = \tmop{diag} (F_0 M_i E_0)$, $i = 1 : p$. Define the
matrix $S \in \mathbb{C}^{n \times p}$ whose column $i$ is the
diagonal of $\Sigma_{0, i}$. Let \normalsize{$\sigma =\left( 1, e^{\frac{2 i \pi}{n}},
\ldots, e^{\frac{2 i (n - 1) \pi}{n}} \right)$}, $\Sigma = \tmop{diag}
(\sigma)$ and $\alpha = (S^{\ast} S)^{- 1} S^{\ast} \sigma$. We consider
the system
\begin{eqnarray}
\left( \begin{array}{c}
EF - I_n\\
FME - \Sigma
\end{array} \right) & = & 0 \label{eq-normalisé}
\end{eqnarray}
where $M = \sum_{i = 1}^p \alpha_i M_i$. Let $\varepsilon = \| F_0 ME_0 -
\Sigma \|$. If
\begin{eqnarray*}
n^2 \varepsilon & \leqslant & 0.272
\end{eqnarray*}
then $(F_0, E_0, \Sigma)$ satisfies the condition (\ref{cond-convergence})
of \Cref{theo2}.
\end{theorem}
\begin{proof}
In this case the quantity $\kappa$ defined in the \Cref{theo2}
is equal to
\begin{eqnarray*}
\kappa & = & \frac{1}{2 \,\abs{ \sin \left( \frac{\pi}{n} \right) }}\\
& \leqslant & \frac{n}{4} \quad \text{since} \quad \abs{ \sin \left(
\frac{\pi}{n} \right) } \geqslant \frac{2}{n} \quad \text{for } n
\geqslant 2.
\end{eqnarray*}
Since $K_0 = 1$ we get
\begin{eqnarray*}
\kappa^2 (K_0 + 1)^3 \varepsilon & \leqslant & \frac{n^2}{2} \varepsilon .
\end{eqnarray*}
The condition $\dfrac{n^2}{2} \varepsilon \leqslant 0.136$ gives the result.
\end{proof}
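The elementary bound on $\kappa$ used in this proof, equivalent to $\sin(\pi/n) \geqslant 2/n$ for $n \geqslant 2$ (with equality at $n=2$), can be checked numerically (a sanity check, not a proof):

```python
import math

# kappa = 1 / (2 |sin(pi/n)|) <= n/4 for the roots-of-unity target Sigma.
for n in range(2, 2000):
    assert 1.0 / (2.0 * math.sin(math.pi / n)) <= n / 4.0
```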
\section{Numerical illustration}\label{sec-exp}
We use a Julia implementation of the Newton sequences in the numerical experiments. The experiments were run on a Dell desktop under Windows with 8 GB of memory and a 2.3 GHz Intel CPU.
\subsection{Simulation}
In this section we apply the Newton iterations presented in \Cref{theo2} (resp. \Cref{theo3}) to examples of diagonalizable matrices (resp. of two simultaneously diagonalizable matrices). We validate experimentally that the condition established in \Cref{theo2} (resp. \Cref{theo3}) is sufficient for quadratic convergence (\Cref{table1,table2,table5,table6}). On the other hand, since this condition is sufficient but not necessary, we show through other examples how a Newton sequence starting from an initial point that does not satisfy the condition can still converge quadratically (\Cref{table3,table4,table7,table8}). This gives a heuristic picture of how strongly the convergence of the Newton sequences depends on this condition in practice. Furthermore, these examples reveal the possibility of carrying out such computations with high precision. For example, in the case of a diagonalizable matrix with simple eigenvalues, we can compute its eigenvalues using a solver that works in double precision, and then take the result as the initial point of the Newton sequence of \Cref{theo2} in order to increase the precision. Hereafter we give some details about the tests considered in this section: \emph{Test1} for \Cref{theo2} and \emph{Test2} for \Cref{theo3}.\\
\textbf{\emph{Test1}.} Let $\mathbb{K}=\mathbb{R}~\text{or}~\mathbb{C}$, $M=E\Sigma E^{-1}+10^{-\mathrm{e}}A$, where $\mathrm{e}\in\{3, 6\}$. The matrices $E$, $\Sigma$, and $A\in\mathbb{K}^{n\times n}$ are chosen randomly following a standard normal distribution, such that $E$ is invertible, $\Sigma$ is diagonal with $n$ distinct diagonal entries, and $A$ is a random square matrix of size $n$ with Frobenius norm equal to 1. Since $M$ is a small perturbation of $E\Sigma E^{-1}$, more precisely $\|M-E\Sigma E^{-1}\|_{Frob}=10^{-\mathrm{e}}$, $M$ is a diagonalizable matrix with simple eigenvalues. Herein we apply the Newton iteration of \Cref{theo2} to $M$ with initial point $E_0=E$, $F_0=E^{-1}$ and $\Sigma_0=\Sigma$. The residual error reported in this test at iteration $k$ is given by: \begin{center}
$\text{err}_{res}=\max(\|F_kE_k-I_n\|, \|F_kME_k-\Sigma_k\|).$
\end{center}
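A minimal deterministic instance of \emph{Test1} can be sketched as follows (fixed $2\times2$ matrices in place of the random ones, so every quantity is reproducible; the data are our own, not taken from the experiments, which use Julia):

```python
import math

# Test1 in miniature: M = E Sigma E^{-1} + 1e-6 * A with ||A||_F = 1, and the
# initial residual err_res = max(||F0 E0 - I||, ||F0 M E0 - Sigma0||)
# at the starting point E0 = E, F0 = E^{-1}, Sigma0 = Sigma.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def frob(A):
    return math.sqrt(sum(x * x for row in A for x in row))

E    = [[1.0, 1.0], [1.0, -1.0]]
Einv = [[0.5, 0.5], [0.5, -0.5]]
Sig  = [[2.0, 0.0], [0.0, 5.0]]
A    = [[0.5, 0.5], [0.5, 0.5]]            # Frobenius norm 1
ESEi = mul(mul(E, Sig), Einv)
M = [[ESEi[i][j] + 1e-6 * A[i][j] for j in range(2)] for i in range(2)]

R1 = [[mul(Einv, E)[i][j] - (1.0 if i == j else 0.0) for j in range(2)]
      for i in range(2)]
R2 = [[mul(mul(Einv, M), E)[i][j] - Sig[i][j] for j in range(2)]
      for i in range(2)]
err_res = max(frob(R1), frob(R2))          # equals the perturbation size 1e-6
```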
\textbf{\emph{Test2}.} Let $\mathbb{K}=\mathbb{R}~\text{or}~\mathbb{C}$, $M_1=F^{-1}\Sigma_1E^{-1},~M_2=F^{-1}\Sigma_2E^{-1}$, where $E$, $F$, $\Sigma_1$ and $\Sigma_2\in\mathbb{K}^{n\times n}$ are randomly sampled according to a standard normal distribution, such that $E$ and $F$ are invertible, and $\Sigma_1$ and $\Sigma_2$ are diagonal with $n$ distinct diagonal entries. The Newton iteration in \Cref{theo3} is applied to $M_1$ and $M_2$ with initial point $E_0$, $F_0$, $\Sigma_{0,1}$ and $\Sigma_{0,2}$, where these matrices are obtained by applying a small perturbation to $E$, $F$, $\Sigma_1$ and $\Sigma_2$, respectively, as follows:\\
$E_0=E+10^{-\mathrm{e}}A$, $F_0=F+10^{-\mathrm{e}}B$, $\Sigma_{0,1}=\Sigma_1+10^{-\mathrm{e}}C$, $\Sigma_{0,2}=\Sigma_2+10^{-\mathrm{e}}D$, where $\mathrm{e}\in\{3, 6\}$, $A$ and $B$ (resp. $C$ and $D$) are random square matrices (resp. random diagonal matrices with distinct diagonal entries) of size $n$ and Frobenius norm equal to 1, with entries in $\mathbb{K}$ following a standard normal distribution. The residual error reported in this test at iteration $k$ is given by: \begin{center}
$\text{err}_{res}=\max(\|F_kM_1E_k-\Sigma_{k,1}\|, \|F_kM_2E_k-\Sigma_{k,2}\|).$
\end{center}
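Similarly, a minimal deterministic instance of \emph{Test2} (our toy data, with fixed "perturbation" matrices in place of random ones) illustrates the setup and the initial residual:

```python
import math

# Test2 in miniature: M1 = F^{-1} Sigma1 E^{-1}, M2 = F^{-1} Sigma2 E^{-1};
# the initial point perturbs (E, F, Sigma1, Sigma2) by about 1e-6 and we
# measure err_res = max(||F0 M1 E0 - Sigma01||, ||F0 M2 E0 - Sigma02||).
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def frob(A):
    return math.sqrt(sum(x * x for row in A for x in row))

F    = [[0.5, 0.5], [0.5, -0.5]]
Finv = [[1.0, 1.0], [1.0, -1.0]]
E, Einv = Finv, F                          # choose F = E^{-1}, so F^{-1} = E
S1 = [[2.0, 0.0], [0.0, 5.0]]
S2 = [[-1.0, 0.0], [0.0, 3.0]]
M1 = mul(mul(Finv, S1), Einv)
M2 = mul(mul(Finv, S2), Einv)

eps = 1e-6                                  # all perturbations have norm ~eps
E0  = [[E[i][j] + 0.5 * eps for j in range(2)] for i in range(2)]
F0  = [[F[i][j] + 0.5 * eps for j in range(2)] for i in range(2)]
S01 = [[2.0 + eps, 0.0], [0.0, 5.0 - eps]]
S02 = [[-1.0 + eps, 0.0], [0.0, 3.0 - eps]]

r1 = [[mul(mul(F0, M1), E0)[i][j] - S01[i][j] for j in range(2)] for i in range(2)]
r2 = [[mul(mul(F0, M2), E0)[i][j] - S02[i][j] for j in range(2)] for i in range(2)]
err_res = max(frob(r1), frob(r2))           # on the order of the perturbation
```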
We notice that the condition established in \Cref{theo2} (resp. \Cref{theo3}) is satisfied in \emph{Test1} (resp. \emph{Test2}) for matrices of size $10$ with a perturbation of order $10^{-6}$, and we can see in \Cref{table1,table2,table5,table6} that the Newton sequences whose initial point satisfies the condition of the associated theorem converge quadratically. We also note that when the perturbation is increased to $10^{-3}$ (so that the initial point no longer satisfies the condition of the associated theorem), the Newton sequences still converge quadratically for matrices of sizes $n=10,~50,~100$ (see \Cref{table3,table4,table7,table8}).
\begin{table}[ht!]
\centering
\caption{The computational results throughout 7 iterations of an example run of \emph{Test1} with $\mathbb{K}=\mathbb{R},~n=10~\text{and}~\mathrm{e}=6$.}
\begin{tabular}{ |c|c|c| }
\hline
Iteration & $\kappa^2(K+1)^3\varepsilon\le0.136$&$\text{err}_{res}$ \\ \hline
1 &0.07915&5.51$e-6$ \\
2 &2.52$e-6$&1.76$e-10$ \\
3 &9.29$e-16$&6.47$e-20$ \\
4 &1.11$e-34$&7.78$e-39$\\
5 &1.83$e-72$&1.28$e-76$ \\
6&4.31$e-148$&3.01$e-152$\\
7& 1.16$e-287$&8.08$e-292$ \\
\hline
\end{tabular}
\label{table1}
\end{table}
\begin{table}[ht!]
\centering
\caption{The computational results throughout 7 iterations of an example run of \emph{Test1} with $\mathbb{K}=\mathbb{C},~n=10~\text{and}~\mathrm{e}=6$.}
\begin{tabular}{ |c|c|c| }
\hline
Iteration & $\kappa^2(K+1)^3\varepsilon\le0.136$&$\text{err}_{res}$ \\ \hline
1 &0.00735 &1.14$e-5$ \\
2 & 2.14$e-8$ & 3.35$e-11$ \\
3 & 5.11$e-19$& 7.99$e-22$ \\
4 &6.88$e-40$ & 1.07$e-42$\\
5 &7.31$e-82$ & 1.14$e-84$ \\
6&9.70$e-166$ & 1.51$e-168$\\
7& 4.28$e-284$ & 6.69$e-287$ \\
\hline
\end{tabular}
\label{table2}
\end{table}
\begin{table}[ht!]
\centering
\caption{The residual error throughout 7 iterations given by the implementation of \emph{Test1} with $\mathbb{K}=\mathbb{R}, \mathrm{e}=3$ and $n=10, 50, 100$.}
\begin{tabular}{ |c|c|c|c| }
\hline
Iteration & $n=10$ & $n=50$&$n=100$ \\ \hline
1 &0.00857 &0.07931 &0.03226 \\
2 &0.00019 &0.05761&0.01380\\
3 & 1.58$e-8$&0.00619&0.00061\\
4 & 4.79$e-16$&8.74$e-5$&5.42$e-7$\\
5 & 3.56$e-31$&1.31$e-8$&3.83$e-13$\\
6& 1.39$e-61$&2.39$e-16$&1.80$e-25$\\
7&1.91$e-122$&7.03$e-32$&3.81$e-50$ \\
\hline
\end{tabular}
\label{table3}
\end{table}
\begin{table}[ht!]
\centering
\caption{The residual error throughout 7 iterations given by the implementation of \emph{Test1} with $\mathbb{K}=\mathbb{C}, \mathrm{e}=3$ and $n=10, 50, 100$.}
\begin{tabular}{ |c|c|c|c| }
\hline
Iteration & $n=10$ & $n=50$&$n=100$ \\ \hline
1 &0.00884 &0.00975&0.01600 \\
2 &8.59$e-6$ &6.39$e-5$&0.00010\\
3 &3.91$e-11$ &3.99$e-9$&4.68$e-9$\\
4 &9.87$e-22$ &1.87$e-17$&3.13$e-17$\\
5 &7.60$e-43$ &4.42$e-34$&8.84$e-34$\\
6&5.14$e-85$ &2.50$e-67$&9.45$e-67$\\
7&2.64$e-169$ &8.28$e-134$&1.05$e-132$ \\
\hline
\end{tabular}
\label{table4}
\end{table}
\begin{table}[ht!]
\centering
\caption{The computational results throughout 7 iterations of an example run of \emph{Test2} with $\mathbb{K}=\mathbb{R},~n=10~\text{and}~\mathrm{e}=6$.}
\begin{tabular}{ |c|c|c| }
\hline
Iteration & $4\kappa^2K^3\varepsilon\le0.094$&$\text{err}_{res}$ \\ \hline
1 &0.07650&6.72$e-6$ \\
2 &1.73$e-7$&1.52$e-11$ \\
3 &5.58$e-18$&4.90$e-22$ \\
4 &5.49$e-39$&4.82$e-43$\\
5 & 3.10$e-81$& 2.73$e-85$ \\
6& 2.28$e-165$& 2.01$e-169$\\
7& 2.20$e-279$& 1.94$e-283$ \\
\hline
\end{tabular}
\label{table5}
\end{table}
\begin{table}[ht!]
\centering
\caption{The computational results throughout 7 iterations of an example run of \emph{Test2} with $\mathbb{K}=\mathbb{C},~n=10~\text{and}~\mathrm{e}=6$.}
\begin{tabular}{ |c|c|c| }
\hline
Iteration & $4\kappa^2K^3\varepsilon\le0.094$&$\text{err}_{res}$ \\ \hline
1 &0.00686&9.16$e-6$ \\
2 &7.14$e-9$&9.53$e-12$ \\
3 &9.51$e-21$&1.26$e-23$ \\
4 &6.69$e-44$&8.92$e-47$\\
5 & 3.77$e-90$& 5.04$e-93$ \\
6& 2.59$e-182$& 3.45$e-185$\\
7& 1.65$e-281$& 2.20$e-284$ \\
\hline
\end{tabular}
\label{table6}
\end{table}
\begin{table}[ht!]
\centering
\caption{The residual error throughout 7 iterations given by the implementation of \emph{Test2} with $\mathbb{K}=\mathbb{R}, \mathrm{e}=3$ and $n=10, 50, 100$.}
\begin{tabular}{ |c|c|c|c| }
\hline
Iteration & $n=10$ & $n=50$&$n=100$ \\ \hline
1 &0.02901 &0.00457 &0.01004 \\
2 &7.97$e-5$ & 1.03$e-6$&1.31$e-6$ \\
3 &4.21$e-9$ &1.69$e-11$ &3.71$e-11$ \\
4 &1.07$e-16$ &2.42$e-23$ &1.23$e-22$ \\
5 &3.92$e-33$ &1.18$e-44$ &1.46$e-43$ \\
6&2.63$e-64$ &1.02$e-89$ &1.67$e-86$ \\
7&1.71$e-128$ &3.20$e-177$ &9.01$e-172$ \\
\hline
\end{tabular}
\label{table7}
\vspace{0.3cm}
\caption{The residual error throughout 7 iterations given by the implementation of \emph{Test2} with $\mathbb{K}=\mathbb{C}, \mathrm{e}=3$ and $n=10, 50, 100$.}
\begin{tabular}{ |c|c|c|c| }
\hline
Iteration & $n=10$ & $n=50$&$n=100$ \\ \hline
1 &0.00733 &0.00314 &0.00552 \\
2 &3.49$e-6$ &7.48$e-7$ &1.35$e-6$ \\
3 &2.91$e-12$ &1.11$e-13$ &1.19$e-13$ \\
4 &2.04$e-24$ &2.54$e-27$ &1.68$e-27$ \\
5 &8.23$e-49$ &3.04$e-54$ &2.19$e-54$ \\
6&1.88$e-97$ &3.41$e-108$ &1.50$e-108$ \\
7&1.31$e-194$ &1.91$e-215$ &4.53$e-216$ \\
\hline
\end{tabular}
\label{table8}
\end{table}
\subsection{Wilkinson polynomial}\label{wilkinson}
For $n\in\mathbb{N}^*$, the polynomial given by:
\begin{equation}\label{wilk}
P(x)=\prod_{i=1}^{n}{(x-i)}
\end{equation}
is the so-called \emph{n-th} Wilkinson polynomial. It is a monic polynomial of degree $n$ with $n$ simple roots, $1$ to $n$. Let $P(x)=x^n+a_{n-1}x^{n-1}+\cdots+a_0$. It is known that the roots of $P(x)$ are the eigenvalues of its companion matrix $C(P)$. It is possible to compute the roots of the Wilkinson polynomial in high precision. The process is to compute the eigenvalues and eigenvectors of $C(P)$ with the standard Julia solver, and then use the result as an initial point of the Newton sequences in \Cref{sec-p=1} to increase the precision. However, we noticed that this strategy works only up to $n=19$. For $n\geq20$ some numerical inaccuracy issues appear in the computation of the initial point. More concretely, if we take for instance $n=20$, the $n$ eigenvalues given by the standard Julia solver are as follows: \\
\small{\texttt{0.9999999999981168 + 0.0im\\
2.0000000001891918 + 0.0im\\
2.9999999926196894 + 0.0im\\
4.000000196012741 + 0.0im\\
4.999996302203527 + 0.0im\\
6.000048439601834 + 0.0im\\
6.999557630040994 + 0.0im\\
8.002891069857936 + 0.0im\\
8.986693042189247 + 0.0im\\
10.049974037139467 + 0.0im\\
10.886016935269065 + 0.0im\\
12.358657519230299 + 0.0im\\
12.561193394139806 + 0.0im\\
14.51895930872283 - 0.2133045589544431im\\
14.51895930872283 + 0.2133045589544431im\\
16.206794587063147 + 0.0im\\
16.885716688231323 + 0.0im\\
18.030097274474777 + 0.0im\\
18.993902180590464 + 0.0im\\
20.000542093702702 + 0.0im}}.\\
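The expanded coefficients of $P(x)$ and the companion matrix can be generated exactly with integer arithmetic; the sketch below (ours, in Python rather than the Julia of the experiments) builds them and checks that every integer $1,\ldots,20$ is an exact root of the expanded polynomial. The huge coefficients of the expanded form (the constant term is $20! \approx 2.4\times10^{18}$) are what make the double-precision eigenvalue computation on $C(P)$ so inaccurate.

```python
# Exact integer construction of the n-th Wilkinson polynomial
# P(x) = prod_{i=1}^n (x - i) and of its companion matrix C(P).
def wilkinson_coeffs(n):
    # coefficients a_0, ..., a_n of P, lowest degree first, computed exactly
    p = [1]
    for i in range(1, n + 1):   # multiply by (x - i)
        p = [(p[k - 1] if k > 0 else 0) - i * (p[k] if k < len(p) else 0)
             for k in range(len(p) + 1)]
    return p

def companion(p):
    # companion matrix of the monic polynomial with coefficients p (lowest first)
    n = len(p) - 1
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1          # subdiagonal of ones
    for i in range(n):
        C[i][n - 1] = -p[i]      # last column carries the coefficients
    return C

assert wilkinson_coeffs(3) == [-6, 11, -6, 1]      # x^3 - 6x^2 + 11x - 6
p20 = wilkinson_coeffs(20)
# every integer 1..20 is an exact root of the expanded polynomial
assert all(sum(c * k**j for j, c in enumerate(p20)) == 0 for k in range(1, 21))
C = companion(p20)
assert len(C) == 20 and C[0][19] == -p20[0]
```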
Since the problem comes from the matrix whose eigenvalues we compute in double precision, it is natural to ask whether we can replace the companion matrix by another matrix which has the same characteristic polynomial (in this case the Wilkinson polynomial). In fact, as discussed by M.~Fiedler in \cite{FIEDLER1990265}, we can construct a symmetric matrix whose characteristic polynomial is $P(x)$. We recall this construction from \cite{FIEDLER1990265}: Let $b_1, \dots, b_{n-1}$ be distinct numbers such that $P(b_i)\neq0$. Let $Q(x)=\prod_{i=1}^{n-1}{(x-b_i)}$, and let
\begin{eqnarray*}
c_{i}&=&-\sqrt{-\frac{P\left(b_{i}\right)}{Q^{\prime}\left(b_{i}\right)}}\\
c^{t}&=&\left(c_{1}, \ldots, c_{n-1}\right)\\
B&=&\operatorname{diag}\left(b_{1}, \ldots, b_{n-1}\right)\\
d&=&-a_{n-1}-\sum_{i=1}^{n-1}{b_{i}},
\end{eqnarray*}
then the characteristic polynomial of $
A=\left(\begin{array}{ll}
B & c \\
c^{t} & d
\end{array}\right)
$
is equal to $P(x)$. Since $P(x)$ has real coefficients and its roots are simple and real, we can choose the $b_i$'s so that they interlace the roots, i.e. $1<b_1<2<b_2<\ldots<19<b_{n-1}<20$, so that, as shown in \cite{FIEDLER1990265}, the symmetric matrix has real entries. For instance, we take in our construction $b_i=i+0.5,~\forall i\in\{1, \ldots, n-1\}$. Now, by computing the matrix $A$ in high precision (1024 bits) and applying the standard Julia solver to compute the eigenvalues of $A$ rounded to Float64, we found:\\
\small{\texttt{1.0000000000000036\\
2.000000000000007\\
2.9999999999999964\\
4.0\\
5.0\\
6.0\\
7.000000000000011\\
7.999999999999998\\
8.999999999999998\\
9.999999999999998\\
11.0\\
12.0\\
12.999999999999998\\
13.999999999999996\\
15.0\\
16.0\\
17.000000000000007\\
18.0\\
19.000000000000004\\
19.999999999999996}}\\
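The arrowhead construction can be sketched on a small case, say $n=4$ with our illustrative choice $b_i = i + 0.5$. Note that with interlacing $b_i$ the quantity $-P(b_i)/Q'(b_i)$ is nonnegative, and it is this quantity that must sit under the square root for the $c_i$ to be real:

```python
import math

# Fiedler's arrowhead construction for n = 4, with b_i interlacing the roots.
def P(x):
    return (x - 1) * (x - 2) * (x - 3) * (x - 4)

b = [1.5, 2.5, 3.5]                         # interlace the roots 1..4
Qp = [math.prod(b[i] - b[j] for j in range(3) if j != i) for i in range(3)]
c = [-math.sqrt(-P(b[i]) / Qp[i]) for i in range(3)]
a3 = -(1 + 2 + 3 + 4)                       # coefficient a_{n-1} of P
d = -a3 - sum(b)                            # here d = 2.5

A = [[b[0], 0.0,  0.0,  c[0]],
     [0.0,  b[1], 0.0,  c[1]],
     [0.0,  0.0,  b[2], c[2]],
     [c[0], c[1], c[2], d   ]]

def det(M):
    # Gaussian elimination with partial pivoting, enough for a 4x4 check
    M = [row[:] for row in M]
    n, result = len(M), 1.0
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            result = -result
        result *= M[k][k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for j in range(k, n):
                M[r][j] -= f * M[k][j]
    return result

# det(xI - A) should agree with P(x); check at a few sample points
for x in (0.0, 5.0, -1.0):
    xIA = [[(x if i == j else 0.0) - A[i][j] for j in range(4)] for i in range(4)]
    assert abs(det(xIA) - P(x)) < 1e-9
```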
We take these eigenvalues with their eigenvectors as an initial point of the Newton sequences in \Cref{sec-p=1}. We work with a precision of 1024 bits. The residual error is defined as in the previous subsection. The initial residual error at this initial point is equal to 8.49$e-14$. We report the residual error throughout 4 iterations:\\
\emph{iter1:} 2.04$e-27$\\
\emph{iter2:} 3.21$e-55$\\
\emph{iter3:} 1.16$e-110$\\
\emph{iter4:} 1.28$e-221$\\
Finally, we find that the 20 eigenvalues computed by the Newton iterations give the 20 roots of the Wilkinson polynomial in high precision. We notice that the process was very fast (about 0.3 seconds). This example highlights the importance of high-precision computation for the accuracy of the polynomial's roots.\\
\subsection{QR algorithm with Newton condition}
The aim of this experiment is to illustrate the use of the condition given by \Cref{theo2} within an iterative eigenvalue method such as the QR method. Practical implementations of eigensolvers in linear algebra libraries use many ingredients. For simplicity we only consider here the classical basic QR algorithm to compute the eigenvalues (and eigenvectors if the matrix is symmetric) \cite{10.5555/264989}. The QR algorithm generates a sequence $(A_k)_k$ such that $A_0=A$; at the \emph{k-th} step the QR decomposition $A_k=Q_kR_k$, where $Q_k$ is an orthogonal matrix and $R_k$ is an upper triangular matrix, is computed, and $A_{k+1}=R_kQ_k$. Under some conditions, this sequence converges to the Schur form of $A$, whose diagonal entries are the eigenvalues of $A$. If $A$ is symmetric then the columns of $Q=\prod_{k}Q_k$ give the eigenvectors of $A$. The QR decomposition at each step can be computed using Householder transformations. The classical QR algorithm in its crude form is given in pseudo-code in \Cref{algo1}.
\begin{algorithm}
\caption{QR algorithm}\label{algo1}
\begin{algorithmic}[1]
\State \textbf{Input:} $A\in\mathbb{C}^{n\times n}$.
\State Compute the QR decomposition of $A$: $A=QR$;
\State Set $k=0$, $A_0=A$, $Q_0=Q$, $R_0=R$;
\State Set $A_1=R_0Q_0$;
\State Set $\mathrm{err}_1=\|A_1\|_{\mathrm{L}, \mathrm{Tri}}$;
\While{$\mathrm{err}_k=\|A_k\|_{\mathrm{L}, \mathrm{Tri}}> {threshold}$}
\State $A_k=Q_kR_k$;
\State $A_{k+1}=R_kQ_k$.
\EndWhile
\State \textbf{Output:} $\mathrm{diag}(A_{k^*})$, $\prod_{0\le k\le k^*}Q_k$.
\end{algorithmic}
\end{algorithm}
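A crude transcription of the iteration in \Cref{algo1} on a tiny symmetric example (our sketch in Python; real implementations add Hessenberg reduction and shifts, and the QR factorization would use Householder transformations rather than the Gram--Schmidt used here for brevity):

```python
import math

# Unshifted QR iteration A_{k+1} = R_k Q_k on a symmetric 2x2 matrix.
def qr_decompose(A):
    # classical Gram-Schmidt on the columns of A (fine for this tiny example)
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            for i in range(n):
                v[i] -= R[k][j] * Q[i][k]
        R[j][j] = math.sqrt(sum(x * x for x in v))
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2.0, 1.0], [1.0, 2.0]]              # eigenvalues 3 and 1
for _ in range(60):
    Q, R = qr_decompose(A)
    A = matmul(R, Q)                       # A_{k+1} = R_k Q_k

eig = sorted(A[i][i] for i in range(2))    # approximately [1.0, 3.0]
```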
We can use \Cref{algo1} to construct an initial point for the Newton sequence in \Cref{sec-p=1}. Indeed, since it is sufficient that the initial point satisfies the condition established in \Cref{theo2} for the Newton sequences to converge to the eigenvalue decomposition, we introduce this condition into the QR algorithm (see \Cref{algo2,algo3}). The resulting algorithm stops as soon as the Newton condition is satisfied, handing over to the fast Newton sequence. This step noticeably reduces the number of iterations of the QR algorithm (see \cref{fig1}).
\begin{algorithm}
\caption{Test for Newton (\texttt{Test\_for\_Newton})}\label{algo2}
\begin{algorithmic}[1]
\State \textbf{Input:} $M, \Sigma=\tmop{diag}(\sigma_1, \ldots, \sigma_n), E, F\in\mathbb{C}^{n\times n}$.
\State $Z=F E - I_n$, $\Delta=F M E-\Sigma$;
\State $\varepsilon=\max (\| Z \|, \| \Delta \|)$; \State $\kappa = \max \left( 1, \max_{i \neq j} \dfrac{1}{\abs{
\sigma_{i} - \sigma_{j} }} \right)$;
\State $K = \max (1,
\max_i \abs{ \sigma_{i} })$;
\State \textbf{Output:} $\kappa^2 (K + 1)^3 \varepsilon$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{QR algorithm with Newton test}\label{algo3}
\begin{algorithmic}[1]
\State \textbf{Input:} $A\in\mathbb{C}^{n\times n}$.
\State Compute the QR decomposition of $A$: $A=QR$;
\State Set $k=0$, $A_0=A$, $Q_0=Q$, $R_0=R$;
\State Set $A_1=R_0Q_0$, $\Sigma_k=\tmop{diag}(A_k)$, $E_k=\prod_{0\le i\le k-1}Q_i$, $F_k=E_k^*$;
\State Set $\mathrm{err}_1=\texttt{Test\_for\_Newton}(A, \Sigma_1, E_1, F_1)$;
\While{$\mathrm{err}_k=\texttt{Test\_for\_Newton}(A, \Sigma_k, E_k, F_k)>0.136$}
\State $A_k=Q_kR_k$;
\State $A_{k+1}=R_kQ_k$.
\EndWhile
\State \textbf{Output:} $\Sigma$, $E$ and $F$.
\end{algorithmic}
\end{algorithm}
\begin{figure}[ht]%
\centering
\includegraphics[width=0.9\textwidth]{fig1.eps}
\caption{The number of iterations of respectively \Cref{algo1} (with $threshold=1.e-6$) and \Cref{algo3} applied to randomly sampled symmetric positive semi-definite matrices following Gaussian distributions, of size $n=3, \ldots, 20$.}\label{fig1}
\end{figure}
Going back to the symmetric matrix $A$ of size 20 whose characteristic polynomial is the Wilkinson polynomial ($n=20$) from \Cref{wilkinson}, we apply \Cref{algo3} to $A$; it needs 230 iterations to provide an initial point satisfying the Newton condition, with an initial residual error of 1.45$e-5$. Starting from this point, the residual errors of 6 iterations of the Newton sequences are:\\
\emph{iter1:} 3.25$e-9$\\
\emph{iter2:} 4.07$e-19$\\
\emph{iter3:} 6.21$e-39$\\
\emph{iter4:} 1.37$e-78$\\
\emph{iter5:} 6.68$e-158$\\
\emph{iter6:} 3.84$e-295$\\
The process took about 0.7 seconds. It took more time than the previous approach in \Cref{wilkinson}, not only because there are two more Newton iterations, but also because, as mentioned before, the QR algorithm implemented in the Julia solver from which we take the initial point is more sophisticated. For instance, the QR decomposition is applied to a Hessenberg reduction of $A$. We could also use these techniques to enhance \Cref{algo3}. However, the main idea that we want to underline here is that the use of the Newton condition in a QR-type algorithm can reduce the number of steps needed to provide an initial point to the Newton method, and yields an efficient algorithm to compute simple eigenvalues with high precision.
\section{Conclusion}
Taking a Newton approach to the systems of equations describing the simultaneous diagonalization problem of diagonalizable matrices has led us to new algorithmic insights. We exhibit a Newton-type method that does not solve a linear system at each step, as a classical Newton method would. The numerical experiments corroborate the quadratic convergence predicted by the theoretical analysis. Moreover, by incorporating the test given by \Cref{theo2}, the classical QR method gains in efficiency and allows one to compute eigenvalues and eigenvectors with high precision.
\\
We focused on the regular case. Some improvements and extensions can be considered, such as the treatment of clusters of eigenvalues. Another direction that can be explored is the construction of higher-order methods.
In this paper we study a classical game-theoretic problem: $n$ players want to divide a resource among themselves. Is it always possible to do so in a way that is fair in some sense?
We consider a simple case, in which the resource is the line segment $[0,1]$, and allow partitions of it into $n$ closed (possibly empty) segments with pairwise disjoint interiors. For each partition of $[0,1]$, each of the players would be satisfied to take one of the partition pieces; the choice of a player need not be unique. As a simple example, every player may rate the pieces with her/his own integrable ``value'' function $f_i$ on $[0,1]$, and prefer any of those partition segments that maximize the value of the integral of $f_i$ over the partition segment. However, we do not assume that the players have such ``value'' functions; in fact, they may rate the segments of the partition using arbitrarily complicated logic or no logic at all.
A partition (of the segment) is called \emph{envy-free} if the players can be matched with the pieces of the partition so that each player is satisfied with the matching piece. Following Gale \cite{gale1984} and other classical results, we also make a natural ``continuity'' assumption: a player prefers the $i$th piece of a given partition whenever she/he prefers the $i$th piece in partition configurations arbitrarily close to the given one.
One may additionally assume that the players are never satisfied with the empty pieces of any partition; this is the so-called ``something is better than nothing'' assumption. In other words, any player prefers nonempty parts over empty parts. Assuming that ``something is better than nothing'', the existence of an envy-free segment partition is guaranteed by Gale's theorem (see Theorem \ref{theorem:gale} below for the precise statement).
Without the ``something is better than nothing'' assumption the situation becomes somewhat more complicated. In terms of the original economy problem, we may be considering the resource which comes with an additional cost. For some partitions, the cost of every nonempty piece might exceed its value, in which case a player might prefer to take an empty piece instead.
For the segment partitioning problem without the ``something is better than nothing'' assumption, in \cite{segal2018} it was proved that envy-free segment partitions exist for $n=3$ (the case $n=2$ is an easy exercise). In \cite{meunier-zerbib2018} the result was extended to $n=4$, or any prime $n$. In this work we give a complete solution to the same problem: \textit{If $n$ is a prime power then an envy-free segment partitioning with the possibility to choose the empty part always exists (equivalent to Theorem~\ref{theorem:prime-power}). Conversely, for every $n$ which is not a prime power, there exists an instance of this problem with no solution (equivalent to Theorem~\ref{theorem:n-even}).}
We need some preparations and setting up the notation in order to give mathematically precise statements of our results; in the introduction we only give informal statements. The rest of the paper is organized as follows. In Section \ref{section:classical} we outline the classical results and reductions of the envy-free division problems to precise mathematical questions. We start from the mapping version of the Knaster--Kuratowski--Mazurkiewicz theorem, Theorem \ref{theorem:kkm}, and then proceed to Gale's theorem, Theorem~\ref{theorem:gale}. In Section \ref{section:easy} we review some easy results that prepare the reader to understanding the substantially new results in subsequent sections.
For classical results in Section \ref{section:classical} and for new results in Section \ref{section:segment} we emphasize that the natural way to handle the envy-free segment partition problem is to analyze necessary and sufficient conditions that a continuous map of a simplex to itself hits its center; which amounts to determining possible mapping degrees of maps between spheres under some additional assumptions, analogous to \emph{equivariance} with respect to a group action.
In Section \ref{section:borsuk-ulam} we show another fundamental result: \textit{For $n$ odd and not a prime power, there is no Borsuk--Ulam theorem for equivariant maps from a Hausdorff compactum $X$ with a free action of the permutation group $\mathfrak S_n$ to $\mathbb R^n$ with the permutation action of $\mathfrak S_n$.} This result is not related to the original segment envy-free division problem, but it prevents using some of the well-known general techniques for other envy-free division or fair partition problems. Its analogues and their consequences are developed in \cite{avku2019,aks2019}.
\subsection*{Acknowledgments}
The authors thank Shira~Zerbib, Fr\'ed\'eric~Meunier, Alfredo~Hubard, Oleg~Musin, Arkadiy~Skopenkov, Peter~Landweber, Pavle Blagojevi\'c, and the unknown referee for useful remarks and corrections to the text.
\section{Classical KKM-type results and partition problems}
\label{section:classical}
We recall some classical results around the Knaster--Kuratowski--Mazurkiewicz theorem~\cite{kkm1929} with modifications from \cite{gale1984,bapat1989}. Let $\Delta^{n-1}$ be the $(n-1)$-dimensional simplex, which we usually parametrize as
\[
\Delta^{n-1} = \left\{(t_1,\ldots,t_n)\in\mathbb R^n \mathop{\ |\ }\nolimits t_1,\ldots, t_n \ge 0,\ t_1 + \dots + t_n = 1 \right\}.
\]
We also denote by $\Delta^{n-1}_i$ the facet of $\Delta^{n-1}$ given by the additional constraint $t_i=0$. Sometimes, when we know the dimension $n$, we will denote the simplex by $\Delta$ and its facets by $\Delta_i$.
In the above notation the KKM theorem reads: \textit{Let $A_1,\ldots, A_n$ be closed subsets of $\Delta^{n-1}$, covering the simplex, such that for every $i=1,\ldots,n$ the intersection $\Delta^{n-1}_i\cap A_i$ is empty. Then the intersection $A_1\cap A_2\cap \dots \cap A_n$ is not empty.} We will also use the KKM theorem in the mapping form:
\begin{theorem}[The mapping KKM theorem]
\label{theorem:kkm}
Assume $f : \Delta^{n-1} \to \Delta^{n-1}$ is a continuous map such that for all $i$ we have $f(\Delta^{n-1}_i) \subset \Delta^{n-1}_i$. Then $f$ is surjective.
\end{theorem}
\begin{proof}
Let us approximate $f$ with a PL map in order to treat the mapping degree geometrically. A PL map assumes a subdivision of $\Delta$; in order to refer to the faces of the original (tautological) triangulation of $\Delta$ we use the term \emph{big faces}.
We may assume that the approximating PL map has the same property that any big facet (and hence any big face of arbitrary dimension) is mapped to itself. Considering $\Delta$ as a PL manifold with boundary, we notice that $f$ takes the boundary to the boundary. Therefore the mapping degree of $f$ is well defined and is equal to the mapping degree of its restriction $f|_{\partial \Delta} : \partial\Delta\to\partial\Delta$. This is clear either from the geometric definition of the mapping degree, or from the exact homology sequence of the pair $(\Delta, \partial\Delta)$ and the action of $f_*$ on it, or from the Stokes theorem for differential forms in $\Omega^{n-1}(\Delta)$ (this may sound strange at this point, but we will essentially use differential forms in the proof of Theorem~\ref{theorem:prime-power} below).
Then we prove by induction on the dimension that the mapping degree of $f$ equals $1$. The case of dimension $n=1$ is clear. For the induction step we note that the restriction to a big facet, $f|_{\Delta_i} : \Delta_i\to\Delta_i$, satisfies the same assumptions (big faces go to themselves), and hence by the induction hypothesis the degree of $f|_{\Delta_i}$ equals $1$. From the geometric description of the degree of a PL map, this degree is the same as the degree of $f|_{\partial \Delta}$, which in turn equals the degree of $f$. Finally, a map of nonzero degree is surjective, since a point outside the image would force the degree to vanish.
\end{proof}
\begin{proof}[Reduction of the classical KKM to its mapping version]
Replace each $A_i$ with a continuous function $g_i : \Delta \to\mathbb R$ such that $g_i \equiv 1$ on $A_i$ and $g_i(x)=0$ for $x$ outside an $\epsilon$-neighborhood of $A_i$. When $\epsilon>0$ is sufficiently small, we have $g_i(\Delta_i) = 0$ by the assumption $\Delta_i\cap A_i = \emptyset$.
Since the $A_1,\ldots,A_n$ cover the simplex, we conclude that $g_1(x) + \dots + g_n(x) \ge 1 > 0$ for every $x\in\Delta$. Dividing every $g_i$ by this sum, we obtain non-negative continuous functions $f_1,\ldots,f_n$ with unit sum everywhere in the simplex. Such $f_i$ are coordinates of a map
\[
f : \Delta \to \Delta,
\]
and the property $f_i(\Delta_i) = 0$ means that every facet goes to itself. Hence by the mapping KKM theorem $f$ is surjective, and therefore there exists $x\in\Delta$ such that $f_i(x)=1/n$ for every $i$. Such a point $x$ lies in the $\epsilon$-neighborhood of $A_i$ for every $i$. Passing to the limit as $\epsilon\to 0$ and using the compactness of $\Delta$, we may assume that the points $x = x_\epsilon$ converge to a point of $\Delta$. Since all the $A_i$ are closed, this limit point belongs to $A_1\cap \dots \cap A_n$, which shows that the intersection is not empty.
\end{proof}
Now we proceed to a generalization of the KKM theorem, useful for proving the existence of equilibria in questions relevant to economics.
\begin{theorem}[Gale's theorem]
\label{theorem:gale}
Let $A_{ij}$ be closed subsets of $\Delta^{n-1}$, indexed by $i=1,\ldots,n$ and $j=1,\ldots,n$. Assume that for every fixed $j$ the family of sets $\{ A_{1j}, A_{2j},\ldots, A_{nj} \}$ covers the simplex, and $A_{ij}\cap \Delta^{n-1}_i$ is empty for every $i$ and $j$. Then there exists a permutation $\sigma$ of $\{1,\ldots,n\}$ such that the intersection $A_{1\sigma(1)}\cap A_{2\sigma(2)}\cap\dots\cap A_{n\sigma(n)}$ is not empty.
\end{theorem}
\begin{proof}
We essentially reproduce the (sketch of the) proof in \cite[Proof of the lemma on page 63]{gale1984}, giving more details. Replace each set $A_{ij}$ by a function $g_{ij}$. Using the covering assumption, we may normalize $g_{ij}$ to obtain $f_{ij}$ such that
\[
f_{1j} + \dots + f_{nj} = 1
\]
at any point of the simplex and any $j$, and also $f_{ij}(\Delta_i)=0$. Now introduce non-negative functions
\[
h_i = \frac{f_{i1} + \dots + f_{in}}{n},
\]
which still satisfy $h_1 + \dots + h_n = 1$ everywhere in the simplex, and $h_i(\Delta_i)=0$. Hence there appears a continuous map $h : \Delta\to\Delta$ sending each facet to itself and by the mapping KKM theorem we conclude that there exists $x\in \Delta$ such that $h_i(x) = 1/n$ for every $i$.
Evaluating our original matrix of functions $f_{ij}$ at the point $x$, we conclude that
\[
\sum_i f_{ij}(x) = 1,\quad \sum_j f_{ij}(x) = 1.
\]
The matrix $(f_{ij}(x))$ is doubly stochastic and the Birkhoff--von Neumann theorem \cite{birkhoff1946} (see also the textbook \cite[pages 56--58]{barvinok2002}) asserts that this matrix is a convex combination of permutation matrices. In particular, there exists a permutation $\sigma$ such that $f_{i\sigma(i)}(x) > 0$ for every $i$. Alternatively, this can also be deduced with a little effort from Hall's marriage theorem \cite[Theorem~1]{hall1935}. Going to the limit and using compactness and closedness again, we obtain that $A_{1\sigma(1)}\cap\dots\cap A_{n\sigma(n)}\neq\emptyset$.
\end{proof}
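As a small computational illustration of the last step of the proof (not needed for the argument), the extraction of a permutation $\sigma$ with $f_{i\sigma(i)}(x)>0$ from a doubly stochastic matrix can be brute-forced for small $n$; the following Python sketch, with our hypothetical helper \texttt{positive\_permutation}, does exactly that:

```python
from itertools import permutations

def positive_permutation(M):
    """Search for a permutation sigma with M[i][sigma[i]] > 0 for all i.
    For a doubly stochastic M such a sigma exists by the
    Birkhoff--von Neumann theorem (or via Hall's marriage theorem
    applied to the bipartite graph of positive entries)."""
    n = len(M)
    for sigma in permutations(range(n)):
        if all(M[i][sigma[i]] > 0 for i in range(n)):
            return sigma
    return None

# a doubly stochastic 3x3 matrix with some zero entries
M = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]]
sigma = positive_permutation(M)  # -> (0, 2, 1)
```

The brute force over $n!$ permutations is of course only for illustration; Hall's theorem yields a polynomial-time matching algorithm.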
For far-reaching generalizations of these theorems, see for example \cite[Theorem 3.1]{musin2017}, which provides a Gale-type theorem corresponding to homotopy classes of maps from topological spaces to spheres, of which the degree of a map between spheres of equal dimensions is a particular case.
The meaning of Gale's theorem in economics can be illustrated as follows. The simplex $\Delta^{n-1}$ (sometimes) parametrizes partitions of a resource into $n$ parts. The set $A_{ij}$ corresponds to the partitions where the player $j$ would be satisfied to take the $i$th part of the resource and leave the rest to the other players. The other assumptions of the theorem mean that in every partition every player would be satisfied with some part, and nobody would be satisfied to take the empty part with $t_i=0$. The conclusion of the theorem then means that there exists a partition and an assignment $\sigma$ of the parts to the players such that every player will be satisfied.
The basic case that we mostly study below is when the simplex $\Delta^{n-1}$ parametrizes partitions of a segment $[0,1]$ into parts
\[
[0, t_1], [t_1, t_1 + t_2],\dots, [t_1+\dots + t_{n-1}, 1].
\]
The facet $\Delta_i$ then corresponds to the situation when $t_i=0$ and hence the $i$th partition segment degenerates to one point. We will identify such a one-point segment with the empty set in the subsequent sections.
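For concreteness, the parametrization of segment partitions by points of the simplex can be sketched in a few lines of Python (the function name \texttt{segment\_partition} is ours):

```python
from itertools import accumulate

def segment_partition(t):
    """Turn barycentric coordinates t = (t_1, ..., t_n), summing to 1,
    into the parts [0, t_1], [t_1, t_1 + t_2], ..., [t_1+...+t_{n-1}, 1],
    returned as pairs of endpoints."""
    cuts = [0.0] + list(accumulate(t))
    return list(zip(cuts[:-1], cuts[1:]))

# with t_2 = 0 the second part degenerates to the single point {0.25}
parts = segment_partition((0.25, 0.0, 0.75))
# -> [(0.0, 0.25), (0.25, 0.25), (0.25, 1.0)]
```

Note that permuting the zero coordinates of $t$ (here, moving the $0$ to another position while keeping the order of the positive coordinates) yields the same subdivision of $[0,1]$; this is the identification exploited in Section~\ref{section:segment}.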
\section{When some players may choose nothing}
\label{section:easy}
\subsection{Assume that some parts may be dropped}
What happens when $A_{ij}\cap \Delta_i$ is non-empty in Gale's theorem, or, in terms of the envy-free segment partition problem, if some players sometimes prefer to take nothing from the resource partition? This question was left as an exercise to the reader in \cite[middle of page 3]{meunier-zerbib2018}; let us perform this exercise here.
We may obtain a result about this by adjusting the situation to the assumption of Gale's theorem. Let us remove from $A_{ij}$ the part where $t_i < \epsilon$. This will satisfy the assumption $A_{ij}\cap \Delta_i=\emptyset$ of Gale's theorem, but will break the assumption that $\{A_{ij}\}_{i=1}^n$ cover the simplex for every $j$.
In order to restore the covering assumption, given $j$, let us add every $t\in \Delta$ that did not belong to any $A_{ij}$ to $A_{i_{\max}j}$, where $t_{i_{\max}}$ is a maximal coordinate of the point $t$ (there may be several maximal coordinates). Such a modification of $A_{ij}$ preserves the property that the coordinate $t_i$ is no smaller than $\epsilon$ on $A_{ij}$, since a maximal coordinate is always at least $1/n\ge\epsilon$.
Now apply Gale's theorem to the modified sets to obtain a permutation $\sigma$ and a point $x_\epsilon\in \bigcap A_{i\sigma(i)}$. If all the coordinates of $x_\epsilon$ are greater than $\epsilon$ then we are in the range where we did not modify anything and the problem is solved.
Otherwise some coordinates of $x_\epsilon$ are at most $\epsilon$. In this case we pass to the limit $\epsilon\to+0$; by compactness we may assume that $x_\epsilon\to x$ and that the permutation $\sigma$ is the same for all $\epsilon$ in the sequence. Some coordinates $x_i$ of the limit configuration will then be zero, for otherwise we are in the first case.
In this limit configuration, speaking in terms of the envy-free segment partition problem, some player $j=\sigma(i)$ may be dissatisfied with the assignment of the part $i$ to her/him. But this may only happen when this player preferred parts with some $t_{i'} < \epsilon$ in a neighborhood of $x$; we may assume $i'$ fixed here. By the closedness of the preference set $A_{i'j}$ we obtain that $x_{i'}=0$ for the limit point $x$, so the player $j$ does prefer the empty set in the partition $x$.
Now we conclude:
\begin{corollary}
Under the assumptions of Gale's theorem, modified so that some players may sometimes prefer nothing, it is possible to find a partition, assign some parts to the players, drop some unwanted parts, and assign nothing to some of the players, so that all players will be satisfied.
\end{corollary}
\subsection{General observations when no part may be dropped}
In our argument it is crucial that whenever the player is satisfied with the part $i$ such that $t_i=0$, he/she will also be satisfied with any other part $i'$ such that $t_{i'}=0$. In other words, there is only one sort of ``nothing''.
Now we return to the setting when it is not allowed to drop parts in a partition. Let us explain why any problem of KKM--Gale type is roughly equivalent to the study of continuous maps $f : \Delta^{n-1}\to\Delta^{n-1}$. We will always use the covering assumption: in terms of the envy-free segment partition problem, in every partition every player is satisfied with some of the parts.
In one direction, we start from the preference sets $A_{ij}$ and pass to functions $f_{ij}$, as in the proof of Theorem~\ref{theorem:gale} above. If certain assumptions on $A_{ij}$ imply certain other assumptions on $f_{ij}$ that, in turn, allow us to conclude that the map hits the center of the simplex, then we are done by essentially the same argument.
In the other direction, having a continuous map $f : \Delta^{n-1}\to \Delta^{n-1}$, we put
\[
A_{ij} = \left\{t\in \Delta^{n-1} \mathop{\ |\ }\nolimits \forall i'\ f_i(t)\ge f_{i'}(t) \right\}.
\]
This definition does not depend on $j$, that is, the players have precisely the same preferences; hence we put $A_i = A_{ij}$. The family of closed sets $A_1,\ldots, A_n$ covers the simplex. Note that in the case when all the players have the same preferences, the setting of Gale's theorem degenerates to the setting of the KKM theorem. Now we observe that the $A_i$ have a common point if and only if
\[
f_1(t) = \dots = f_n(t) = \frac{1}{n}
\]
for some $t$.
Since it is easy to build a continuous map $f:\Delta^{n-1}\to\Delta^{n-1}$ missing the center of the simplex, it is now clear that in order to have a Gale-type theorem, we need some assumption like ``no player is satisfied with an empty part''. Here we give a very explicit example:
\begin{example}
One may ask if it is sufficient to have the assumption ``if somebody prefers nothing then he/she does not care in which position this nothing occurs'' and prove a KKM--Gale-type theorem, without using any equivariance or other similar assumptions. This fails already for the KKM theorem. Take the triangle $\Delta^2$ and put
\[
A_1=\Delta^2,\quad A_2=\{t_1=t_2=0\},\quad A_3=\{t_1=t_3=0\}.
\]
In terms of the envy-free segment partition problem, in all cases the player prefers part $1$. When parts $1$ and $2$ are empty, the player also prefers part $2$. When parts $1$ and $3$ are empty, the player also prefers part $3$. But there is no configuration where the player prefers all three parts; or in case of Gale's theorem, where the preferences of three identical players are met.
\end{example}
\subsection{Using permutation equivariance}
One possible way is to introduce an assumption of ``equivariance on the boundary'' with respect to the action of the permutation group $\mathfrak S_n$ on the simplex $\Delta^{n-1}$ by permuting the coordinates. For example, in Gale's theorem we may require, for every $i,j=1,\ldots,n$ and any permutation $\sigma$,
\[
\sigma A_{ij}\cap \partial \Delta^{n-1}= A_{\sigma(i)j} \cap \partial \Delta^{n-1}.
\]
In terms of the envy-free segment partition problem, this means that when a partition has empty parts (the boundary of the simplex) and the parts of a partition are permuted, then the players trace the parts they prefer and continue preferring them. When a partition has $n$ non-empty parts, the players may take the order into account. Perhaps the formulation here is not very natural from the point of view of economics, but it serves us as a mathematically natural example, which we can handle. Here we give a positive result for this setting:
\begin{theorem}
\label{theorem:equivariant-gale}
The KKM theorem and Gale's theorem are valid when it is allowed to choose empty parts if we impose the ``equivariance on the boundary'' assumption and also assume that $n$ is a prime power.
\end{theorem}
This theorem follows from well-known results on degrees of equivariant maps between spheres, see for example \cite{marzantowicz1989} or the textbook \cite[Sketch of the proof of Theorem 6.2.5]{matousek2003using}. The technique of the latter reference shows that the homological trace of a $G$-equivariant map $f : S^{n-1}\to S^{n-1}$ is divisible by a prime $p$ (and hence its degree is $\pm 1$ modulo $p$) if all the $G$-orbits in $S^{n-1}$ have size divisible by $p$. Here we provide a similar explicit argument proving this theorem, because we will use modifications of this argument to establish further results. In particular, Theorem~\ref{theorem:n-odd} asserts that dropping the assumption that $n$ is a prime power, at least for odd $n$, leads to an opposite conclusion.
\begin{lemma}
\label{lemma:finite-to-one}
Assume $G$ is a finite group acting on a polyhedron $P$ and acting linearly on a vector space $V$. Assume that for any subgroup $H\subseteq G$ the inequality $\dim P^H \le \dim V^H$ holds for the subspaces of $H$-fixed points. Then for any $G$-invariant triangulation of $P$ its second barycentric subdivision has the following property: The set of $G$-equivariant PL maps $f : P \to V$, linear on faces of the second barycentric subdivision, has an open dense subset consisting of maps with finite fibers $f^{-1}(y)$ for any $y\in V$.
\end{lemma}
\begin{proof}
Let us first make one barycentric subdivision. The vertices of the barycentric subdivision are marked by the dimension of the faces they originate from and those marks are preserved by $G$. Hence the action of $G$ has the following property: \emph{For any $g\in G$ and any face $\phi$ we have $g(\phi) = \phi$ if and only if $g$ is the identity map on $\phi$.} We now assume that the triangulation of $P$ has this property.
Now consider $G$-equivariant maps, linear on faces of the barycentric subdivision $P'$ (this may be the second barycentric subdivision we make). We show that a dense open subset of such maps (that is, a \emph{generic} map of this kind) has the required property. Such a map $f : P \to V$ is defined whenever we define it equivariantly on vertices of the subdivision $P'$, and we argue by induction on the poset of the vertices of $P'$, which is the same as the poset of faces of $P$.
Assume we have a vertex $\phi\in P'$ and consider possible values $f(\phi)$. Let $H$ be the stabilizer of $\phi$ as a vertex of $P'$; by our assumption in the beginning of the proof this is also the stabilizer of every point of $\phi$ as a face of $P$. The value $f(\phi)$ must be chosen in $V^H$ and $f(\phi)\in V^H$ is the only constraint needed to extend $f$ to the orbit $G\phi$ equivariantly. For any face of $P'$, given by a chain of vertices of $P'$
\[
\phi_1 < \phi_2 < \dots < \phi_k < \phi
\]
of faces of $P$, we assume by induction that $f(\phi_1), \ldots, f(\phi_k)$ are affinely independent and form a $(k-1)$-dimensional simplex in $V$ (otherwise $f$ is not finite-to-one on $\phi_k$).
The dimension assumption in the statement of the lemma means that $k \le \dim \phi \le \dim V^H$ (speaking of dimensions, we consider $\phi$ as a face of $P$ and note that $\phi\subseteq P^H$); hence for a generic choice of $f(\phi)\in V^H$ the points $f(\phi_1), \ldots, f(\phi_k), f(\phi)$ are affinely independent. This applies to all chains that end in $\phi$, which completes the induction step and the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theorem:equivariant-gale}]
Consider any $\mathfrak S_n$-equivariant map $\partial \Delta^{n-1} \to \partial \Delta^{n-1}$ and compose it with the inclusion $\partial \Delta^{n-1}\subset W_n$ into the affine span of $\Delta^{n-1}$ to obtain a $\mathfrak S_n$-equivariant map
\[
f_1 : \partial \Delta^{n-1} \to W_n.
\]
Let $f_0 : \partial \Delta^{n-1}\to W_n$ be the standard $\mathfrak S_n$-equivariant inclusion. Connect them by an equivariant homotopy
\[
h : \partial \Delta^{n-1}\times [0,1] \to W_n,
\]
which can be chosen as $h(x,t) = (1-t) f_0(x) + t f_1(x)$.
Note that the difference in the degrees of $f_0$ and $f_1$ as maps of $\partial \Delta^{n-1}$ to itself equals the degree of $h$ over the center $c\in \Delta^{n-1}$, which may be considered as the origin $0\in W_n$. This follows from the fact that the degree of a map between compact connected oriented manifolds with boundary $h : M\to N$ satisfying $h(\partial M)\subset \partial N$ is well defined and equals the degree of the restriction $h|_{\partial M} : \partial M\to \partial N$. Here $M=\partial \Delta^{n-1}\times [0,1]$ and $N=\Delta^{n-1}$.
Lemma \ref{lemma:finite-to-one} applies because
\[
\left( \partial \Delta^{n-1} \times [0,1] \right)^H = \left( \partial \Delta^{n-1} \right)^H \times [0,1],
\]
it allows us to assume, after a perturbation of $h$, that $h^{-1}(0)$ is finite and the degree can be counted geometrically as the sum of local degrees at the points $x\in h^{-1}(0)$. The local degree at a point $x\in\partial \Delta^{n-1}$ equals the local degree at any other point $\sigma x$ for $\sigma\in\mathfrak S_n$, because $\sigma$ acts on the orientations of the domain and the range by the same permutation sign.
Hence we are interested in the size of the orbit of a point $x$, which is counted as follows: Split the barycentric coordinates of $x$ into blocks of equal coordinates, let $k_1,\ldots,k_\ell$ be the sizes of the blocks, note that for the boundary points $x$ we have at least two blocks. Then the stabilizer of $x$ has size $k_1!\cdots k_\ell!$ and the size of the orbit is
\[
\frac{n!}{k_1! \cdots k_\ell!} = \binom{n}{k_1\ k_2\ \cdots\ k_\ell}.
\]
Since the multinomial coefficient is the product of the binomial coefficients
\[
\binom{n}{k_1\ k_2\ \cdots\ k_\ell} = \binom{n}{k_1}\cdot \binom{n-k_1}{k_2}\dots \binom{n-k_1 - \dots - k_{\ell - 1}}{k_\ell},
\]
the Lucas theorem \cite{lucas1878} on divisibility of the binomial coefficients by primes implies that the size of the orbit is divisible by $p$ when $n=p^\alpha$, because the first factor in the above formula is already divisible by $p$. Hence the degree of $h$ over zero is always divisible by $p$ and the degree of $f_1$ as a map of $\partial \Delta^{n-1}$ to itself is $1$ modulo $p$.
\end{proof}
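The divisibility claim in the last step of the proof can be checked numerically for a small prime power. The following Python sketch (helper names \texttt{multinomial} and \texttt{compositions} are ours) verifies that for $n=p^\alpha$ every orbit size of a boundary point, a multinomial coefficient with at least two blocks, is divisible by $p$:

```python
from math import factorial

def multinomial(ks):
    """Multinomial coefficient (sum ks)! / (k_1! ... k_l!), the orbit size
    of a point whose equal-coordinate blocks have sizes ks."""
    r = factorial(sum(ks))
    for k in ks:
        r //= factorial(k)
    return r

def compositions(total, parts):
    """All ordered tuples of `parts` positive integers summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(1, total - parts + 2):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

p, n = 3, 9  # n = p^2
all_divisible = all(
    multinomial(ks) % p == 0
    for ell in range(2, n + 1)      # boundary points have at least 2 blocks
    for ks in compositions(n, ell)
)
```

This is consistent with the Lucas theorem argument: the first binomial factor $\binom{n}{k_1}$ with $0<k_1<p^\alpha$ is already divisible by $p$.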
\section{A segment partition problem with choosing nothing}
\label{section:segment}
One particular setting, which we borrow from \cite{segal2018,meunier-zerbib2018}, is when a point $(t_1,\ldots, t_n)\in \Delta^{n-1}$ is interpreted as a partition of a unit segment; in this case different points of the simplex may in fact give the same partition. More precisely, in the vector $(t_1,\ldots, t_n)$ we may move zero coordinates of this vector to any position, only keeping the order of positive coordinates, and the actual partition of the segment will be the same. Hence the preferences of the players have to follow these permutations, which gives us a modification of the equivariance assumptions.
\subsection{Pseudo-equivariance assumptions}
\label{section:pseudo-equivariant}
Now it is natural to introduce \emph{the segment partition problem with the possibility of choosing nothing} so that preferences are in accordance with the above described identifications. Those identifications can be described by identifying the proper faces of $\Delta^{n-1}$ by linear maps. Those maps $\sigma_{FGZ} : F \to G$ may be viewed as permutations $\sigma_{FGZ} : \Delta^{n-1} \to \Delta^{n-1}$ of the coordinates of the simplex that move the nonzero coordinates of a face $F$ to the nonzero coordinates of another face $G$, preserving their order, and move the zero coordinates of $F$ to the zero coordinates of $G$ by an arbitrary bijection, which we denote by $Z$. In particular, for given $F$ and $G$ of dimension $k$ there are $(n-k-1)!$ bijections $Z$. The possibility to permute the zero coordinates arises because those permutations do not change the actual partition of the segment.
We also assume that a player is not allowed to take nothing in the presence of $n$ non-empty parts, otherwise we would have to drop a part, as we did in the previous section. This keeps the covering property: for every $j=1,\ldots,n$,
\[
\Delta^{n-1} = \bigcup_{i=1}^n A_{ij}
\]
and allows, as in the proof of Theorem~\ref{theorem:gale}, passing to the setting of a continuous map $f : \Delta^{n-1}\to \Delta^{n-1}$. In terms of the continuous map, we then have the restrictions
\begin{equation}
\label{equation:pseudo-eq}
f \circ \sigma_{FGZ} = \sigma_{FGZ} \circ f\quad\text{valid on the face}\quad F.
\end{equation}
Let us clarify this relation. For given $F,G,Z$ it is only applied to points $x\in F\subset\Delta^{n-1}$. The image $\sigma_{FGZ}(x)$ on the left-hand side then belongs to $G$, and then $f$ is applied to it. On the right-hand side we first apply $f$ to $x$ to obtain a point of the simplex that need not belong to any specific face; after that we apply $\sigma_{FGZ}$, defined as a permutation, taking its $Z$ part into account.
Note that this setting resembles a certain equivariance assumption on the map $f$, at least on the boundary of $\Delta^{n-1}$. But it is not quite an equivariance, because the permutations $\sigma_{FGZ}$ do not constitute a group and the commutation restrictions \eqref{equation:pseudo-eq} are only applied to points lying on the face $F$. For brevity, let us call a continuous map $f:\Delta^{n-1}\to\Delta^{n-1}$ satisfying the commutation restrictions \eqref{equation:pseudo-eq} \emph{pseudo-equivariant}.
Of course, we need to explain how to pass from sets to continuous functions in the pseudo-equivariant case. Relations \eqref{equation:pseudo-eq} in terms of the closed sets $A_{ij}$ read
\begin{equation}
\sigma_{FGZ} \left( A_{ij} \cap F \right) = A_{\sigma_{FGZ}(i)j}\cap G,
\end{equation}
which takes the form \eqref{equation:pseudo-eq} when we pass from the closed sets $A_{ij}$ to their upper semicontinuous indicator functions $\chi_{ij} = \chi_{A_{ij}}$. If we approximate the indicator functions by continuous functions without due caution, the assumptions \eqref{equation:pseudo-eq} may fail at a point $x$ in a face $F$, because during the approximation of the $\chi_{ij}$ by continuous functions $f_{ij}$ the values $f_{ij}(\sigma_{FGZ}(x))$ may be influenced by nearby points not belonging to $F$ and not subject to the relation \eqref{equation:pseudo-eq}.
In order to pass to continuous functions correctly, we put our $\Delta$ into a slightly enlarged concentric simplex $\widetilde\Delta$, and first extend the upper semicontinuous indicator functions $\chi_{ij}$ to $\widetilde\Delta$ by composing them with the metric projection $\pi : \widetilde\Delta\to\Delta$, that is, $\chi_{\widetilde A_{ij}} = \chi_{ij}\circ \pi$. This does not affect the existence of solutions of the partition problem, but allows us to conclude that \eqref{equation:pseudo-eq} now holds not only on a face $\widetilde F\subset \widetilde\Delta$, but also in some $\epsilon$-neighborhood of $\widetilde F$, for some $\epsilon>0$, because the new $\widetilde F$ projects to the corresponding original $F$ along with its neighborhood. After that we choose a single $\epsilon>0$ for all faces, take the continuous functions
\[
g_{ij}(x) = \max\left\{1 - \frac{\dist(x, \widetilde A_{ij})}{\epsilon}, 0 \right\},
\]
and then normalize
\[
f_{ij}(x) = \frac{g_{ij}(x)}{\sum_{i'} g_{i'j}(x)}.
\]
The relations \eqref{equation:pseudo-eq} will hold for such functions on respective faces of $\widetilde\Delta$, since they only depend on the behavior of $\widetilde A_{ij}$ in the $\epsilon$-neighborhood of $x$.
\subsection{A positive solution when $n$ is a prime power}
The arguments in the previous section reduce the segment partition problem with the possibility of choosing nothing to proving that a pseudo-equivariant map $f : \Delta^{n-1}\to\Delta^{n-1}$ sends some point to the center of the simplex.
\begin{theorem}
\label{theorem:prime-power}
When $n=p^\alpha$, for a prime $p$, any pseudo-equivariant map $f : \Delta^{n-1}\to\Delta^{n-1}$ in the sense of \eqref{equation:pseudo-eq} hits the center $c\in\Delta^{n-1}$.
\end{theorem}
\begin{proof}
We fix $n=p^\alpha$ and omit it from the notation where appropriate. As in the proof of Theorem~\ref{theorem:equivariant-gale}, it is sufficient to show that $f(\partial \Delta)$ either has nonzero linking number with the center of $\Delta$, or touches the center. If it touches the center then the problem is solved; hence assume that the center is not touched by $f(\partial \Delta)$ and study the linking number.
Similarly to the proof of Theorem \ref{theorem:equivariant-gale}, in order to obtain information about the linking number we start with the identity map $f_0 : \Delta\to\Delta$, which is pseudo-equivariant and for which the linking number of $f_0(\partial \Delta)$ with the center equals $1$. It then remains to show that once we deform this $f_0$ pseudo-equivariantly to an arbitrary $f_1$, the linking number may only change by a multiple of $p$, thus always remaining nonzero.
The linking number changes when a point in the boundary $x\in \partial \Delta$ passes through the center $c$ under a pseudo-equivariant homotopy $h_t$ with parameter $t$. If $x$ lies in the relative interior of a $k$-dimensional face $F$ of $\Delta$ then we may apply the relations \eqref{equation:pseudo-eq} to $x$ with different $G$ and $Z$. Those relations show that in total $\binom{n}{k+1}$ images $h_t(\sigma_{FGZ}(x))$ pass through $c$ together with $x$. Let us call the points $\sigma_{FGZ}(x)$ for different $G$ of dimension $k$ (they do not depend on $Z$) the \emph{pseudo-orbit} of $x$.
The change in the linking number corresponds to the sum of mapping degrees of the homotopy
\[
h : \partial \Delta\times [0,1] \to \Delta
\]
at the points of $h^{-1}(0)$. To make the argument correct, we may assume $h$ piece-wise linear and perturb it generically, keeping the pseudo-equivariance conditions. For any point $x$ in the relative interior of a face $F$, the relations \eqref{equation:pseudo-eq} restrict the image $h(x,t)$ to the linear span of $F$ (``linear'' in the sense that we put the origin at the center of $\Delta$), whose dimension is no less than the dimension of $F\times [0,1]$. Hence, exactly as in the proof of Lemma \ref{lemma:finite-to-one}, a generic pseudo-equivariant PL map $h$ has the property that the preimage of the center under $h$ is a discrete point set, consisting of several pseudo-orbits; and the local mapping degrees are correctly defined.
If we had an equivariance for $h$ under a group action making this pseudo-orbit a real orbit, and permuting their neighborhoods in $\partial\Delta$ accordingly, then we would have that the change in the linking number equals $\binom{n}{k+1}$ times an integer, which would do the job since such a binomial coefficient is divisible by $p$ when $n=p^\alpha$. But we only have pseudo-equivariance in \eqref{equation:pseudo-eq}, whose equations with $\sigma_{FGZ}$ are only applied on the respective face $F$.
In order to use the pseudo-equivariance correctly, we notice that any point of the considered pseudo-orbit belongs to $n-k-1$ facets of $\Delta$ and its disk neighborhood in $\partial \Delta$ splits into $n-k-1$ parts. Some of those parts of neighborhoods of the points in the pseudo-orbit are identified by the maps $\sigma_{\Delta_i\Delta_j}$, corresponding to pairs of facets (the bijection $Z$ in this case is always unique). Since we have $n$ facets in total, we in fact split the parts of neighborhoods of the pseudo-orbit into identified $n$-tuples.
We may calculate the sum of mapping degrees of $h$ over the pseudo-orbit (or over all points mapped to the center of $\Delta$) by choosing a radially symmetric differential form $\nu\in \Omega^{n-1}(\Delta)$ supported near the center of $\Delta$ with unit integral and integrating its pull-back over the neighborhoods of our pseudo-orbit points. The integration is possible, since we consider a piece-wise linear $h$. We essentially use the mapping degree formula (see \cite[page 188]{guillemin-pollack2010}, for example)
\[
\int_{\partial \Delta\times [0,1]} h^*\nu = (\deg h) \int_{\Delta} \nu = \deg h,
\]
taking into account that the image of the boundary of $\partial \Delta\times [0,1]$ does not hit the support of $\nu$, the neighborhood of the center of $\Delta$. From the assumption that the piece-wise linear map $h$ is in general position, the integral on the left-hand side is in fact the integral over neighborhoods of points in the preimage of the center of $\Delta$, if we choose the support of $\nu$ sufficiently small. Hence we may assume that we are now studying one pseudo-orbit of such points and integrate over a union of their neighborhoods, split into parts, in order to estimate the corresponding part of the mapping degree of $h$.
Once we split the neighborhoods into parts according to the facets of $\partial\Delta$, we may integrate $h^*\nu$ over every part $P$ of a neighborhood of a point in the pseudo-orbit to obtain a \emph{partial mapping degree} of $P$,
\[
\deg_P h = \int_P h^*\nu.
\]
Here we assume that the parts of neighborhoods $P$ are oriented according to the orientation of $\partial\Delta$. Then the sum over all parts of neighborhoods will be the degree of $h$ in the neighborhood of the pseudo-orbit in question. Note that a partial mapping degree is a real number, not necessarily an integer. The identifications $\sigma_{\Delta_i\Delta_j}$ show that among the numbers $\deg_P h$ obtained by such integration some are equal; the whole collection of these partial mapping degrees in fact splits into $n$-tuples of equal real numbers. Those equalities hold with no sign, since $\nu$ is radially symmetric and only changes its sign according to the sign of a permutation of coordinates, which occurs simultaneously in the domain, where the orientation of $\partial\Delta$ also changes according to the sign of the permutation, and in the image of $h$.
Another relation for the partial mapping degrees $\deg_P h$ is that the sum of partial mapping degrees over the parts of the neighborhood of every point in the pseudo-orbit is an integer (possibly depending on the point), namely the ordinary local mapping degree.
We want to use the two types of equalities described above and show that the sum of all partial mapping degrees for the pseudo-orbit in question is an integer divisible by $p$. After the summation over all pseudo-orbits going to the center of $\Delta$ under $h$, this will show that the full mapping degree of $h$ is divisible by $p$ and therefore the degree of $f|_{\partial \Delta}$ as a map from $\partial \Delta$ to $\Delta\setminus \{c\} \sim \partial\Delta$ is always $1$ modulo $p$, as it is for the identity map $f_0$. From this we can conclude that $f$, as a map $\Delta\to\Delta$, always touches the center of the simplex.
Let us introduce some notation in order to work with partial mapping degrees and their sum. Consider a point $x$ in the pseudo-orbit and describe its \emph{kind} by the sequence $[y_1, \ldots, y_{k+2}]$, where $y_i$ is the number of zero coordinates between the $(i-1)$th and $i$th nonzero coordinates of $x$. More precisely, if $x_{i_1}, \ldots, x_{i_{k+1}}$ are the nonzero coordinates of $x$ then the kind of $x$ is $[i_1-1, i_2-i_1-1, \ldots, i_{k+1}-i_k-1, n-i_{k+1}]$. For example, the point $(0, x_2, 0, 0, x_5)$ has the kind $[1, 2, 0]$. To any sequence $y_1, \ldots, y_{k+2}$ of non-negative integers summing up to $n-k-1$ there corresponds a unique point of kind $[y_1, \ldots, y_{k+2}]$ in the pseudo-orbit of a given point $x$ in the relative interior of a $k$-dimensional face of the simplex. Hence we may use the kinds to enumerate points in a pseudo-orbit.
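As a consistency check, the number of such sequences is the number of weak compositions of $n-k-1$ into $k+2$ non-negative parts,
\[
\binom{(n-k-1)+(k+2)-1}{(k+2)-1} = \binom{n}{k+1},
\]
which is exactly the number of $k$-dimensional faces of $\Delta$; thus a pseudo-orbit contains one point in the relative interior of every $k$-face.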
Let $P$ be a part of the neighborhood of a point of the kind $[y_1, \ldots, y_{k+2}]$ in the facet given by $t_i=0$. The $i$th coordinate of the point is $0$ and there is some $y_j$ to which it corresponds. Hence $P$ is uniquely described by $[y_1, \ldots, y_{k+2}]$ with sum $n-k-1$ and the choice of the index $j$ of the position of the zero. We may view the points of $P$ as having $k+1$ big coordinates, $n-k-2$ small coordinates (which were zero for the original pseudo-orbit points in $k$-faces), and one zero. The sequence $[y_1, \ldots, y_{j-1}, y_j-1, y_{j+1}, \ldots, y_{k+2}]$ then describes the positions of the small coordinates among the big coordinates and ignores the zero. The identifications of $n$ such parts of neighborhoods in a pseudo-orbit correspond to inserting a zero into an arbitrary position of a given sequence of big and small coordinates; therefore it is natural to call $[y_1, \ldots, y_{j-1}, y_j-1, y_{j+1}, \ldots, y_{k+2}]$ the \emph{kind} of a pseudo-orbit of parts of neighborhoods. Then to each sequence $y_1, \ldots, y_{k+2}$ of non-negative integers summing up to $n-k-2$ there corresponds a unique kind of a part of a neighborhood.
Moreover, we denote by $\deg[y_1, \ldots, y_{j-1}, y_j-1, y_{j+1}, \ldots, y_{k+2}]$ the partial mapping degree of any part of a neighborhood of the given kind; this degree indeed depends only on the kind. In order to prove the theorem, we need to show that the sum of all such degrees, multiplied by $n$, is an integer divisible by $p$. We split this sum into several parts: for any integer $0\leq r\leq n-k-2$, put
\[
S_r=\sum_{r+y_2 + \dots + y_{k+2}=n-k-2} \deg[r, y_2, \ldots, y_{k+2}],
\]
and put $S_{-1}=0$ for consistency. What we need to prove then translates to
\begin{equation}
\label{eq:goal}
n\sum_{r=0}^{n-k-2} S_r \equiv 0 \mod p.
\end{equation}
Summing up the partial mapping degrees in the neighborhood of the point of the kind $[y_1, \ldots, y_{k+2}]$ we get
\begin{equation}
\label{eq:ngb}
\sum_i y_i \deg[y_1, \ldots, y_i-1, \ldots, y_{k+2}] \in {\mathbb Z}.
\end{equation}
Summing up the formulas \eqref{eq:ngb} over all kinds with $y_1=r$, we get
\begin{equation}
\label{eq:sumr}
rS_{r-1} + (n-r-1)S_r\in{\mathbb Z}.
\end{equation}
Indeed, each $\deg[r-1, y_2, \ldots, y_{k+2}]$ contributes with coefficient $r$ in \eqref{eq:ngb} for the neighborhood of the point of the kind $[r, y_2, \ldots, y_{k+2}]$. And each $\deg[r, y_2, \ldots, y_{k+2}]$ contributes with coefficient $y_2+1$ in \eqref{eq:ngb} for the neighborhood of the point of the kind $[r, y_2+1, \ldots, y_{k+2}]$, with the coefficient $y_3+1$ in \eqref{eq:ngb} for the neighborhood of the point of the kind $[r, y_2, y_3+1, \ldots, y_{k+2}]$, and so on. Its total contribution then is
\[
(y_2 + 1) + \dots + (y_{k+2} + 1),
\]
which is equal to $n-k-2 - r + (k+1)=n-r-1$.
Let us prove by induction that
\begin{equation}
\label{eq:sumr_ind}
(r+1)\binom{n-1}{r+1}S_r\in{\mathbb Z}.
\end{equation}
The base case $r=0$ follows from \eqref{eq:sumr} with $r=0$, since $S_{-1}=0$. Suppose we have proved \eqref{eq:sumr_ind} for some $r$. Writing \eqref{eq:sumr} for $r+1$, we get
\[
(r+1)S_{r} + (n-r-2)S_{r+1}\in{\mathbb Z}.
\]
Multiply by $\binom{n-1}{r+1}$ to get
\[
(r+1)\binom{n-1}{r+1}S_{r} + (n-r-2)\binom{n-1}{r+1}S_{r+1}\in{\mathbb Z}.
\]
By the induction assumption, the first summand is an integer, hence
\[
(n-r-2)\binom{n-1}{r+1}S_{r+1}\in{\mathbb Z}.
\]
Substituting $\binom{n-1}{r+1}=\frac{r+2}{n-r-2}\binom{n-1}{r+2}$, we get the desired result
\[
(r+2)\binom{n-1}{r+2}S_{r+1}\in{\mathbb Z}.
\]
Since $n=p^{\alpha}$ is a prime power, all digits of $n-1$ in base $p$ are equal to $p-1$. Hence, by Lucas' theorem \cite{lucas1878}, $\binom{n-1}{r+1}$ is not divisible by $p$. This means that $(r+1)\binom{n-1}{r+1}$ is not divisible by $p^{\alpha}$ for all $0\leq r\leq n-k-2$, since $r+1\le n-k-1 < p^{\alpha}$. Therefore, the least common multiple $m$ of the numbers $(r+1)\binom{n-1}{r+1}$ over all $0\leq r\leq n-k-2$ is also not divisible by $p^{\alpha}$.
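For example, for $n=9=3^{2}$ we have $n-1=8=(22)_3$, so by Lucas' theorem $\binom{8}{j}\equiv\binom{2}{j_1}\binom{2}{j_0}\not\equiv 0\pmod 3$ for every $0\le j\le 8$, where $j=(j_1 j_0)_3$; concretely,
\[
\binom{8}{3}=56\equiv 2\pmod 3.
\]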
From \eqref{eq:sumr_ind} we conclude that
\[
m\sum_r S_r\in{\mathbb Z}.
\]
For each kind of a part of a neighborhood there are exactly $n$ parts of neighborhoods of this kind, so we also know that
\[
n\sum_r S_r = p^\alpha \sum_r S_r\in{\mathbb Z}.
\]
Hence, since $\gcd(n,m)\sum_r S_r\in{\mathbb Z}$ by B\'ezout's identity, $n\sum _r S_r$ is divisible by $\frac{n}{\mathrm{gcd}(n, m)}$, which in turn is divisible by $p$, because $m$ is not divisible by $n=p^{\alpha}$. This establishes \eqref{eq:goal} and completes the proof.
\end{proof}
\subsection{Counterexamples when $n$ is not a prime power}
\label{section:counterexamples}
As shown above, in order to build a counterexample, in which the segment partition problem with the possibility of choosing nothing (and where no part can be dropped) has no solution, it is sufficient to build a pseudo-equivariant map $f : \Delta^{n-1}\to\Delta^{n-1}$ missing the center $c\in\Delta^{n-1}$ and put
\[
A_{ij} = \left\{t\in \Delta^{n-1} \mathop{\ |\ }\nolimits \forall i'\ f_i(t)\ge f_{i'}(t) \right\}
\]
independently of the player index $j$.
The first observation is that it is sufficient to have a pseudo-equivariant map $f$ such that the image of the boundary $f(\partial \Delta^{n-1})$ is not linked with the center $c\in\Delta^{n-1}$. Since the homotopy group $\pi_{n-2}\left(\Delta^{n-1}\setminus \{c\}\right)$ is $\mathbb Z$, the possibility of (re)extending $f$ continuously to the interior of the simplex $\Delta^{n-1}$ is fully governed by the linking number, and any such continuous extension does not violate the pseudo-equivariance relations \eqref{equation:pseudo-eq}, because the relations are only applicable on the boundary of the simplex.
The second observation is that it is sufficient to find a continuous map $f : \partial \Delta^{n-1}\to \Delta^{n-1}$ having zero linking number of the image with the center of the simplex and equivariant with respect to the action of the full permutation group $\mathfrak S_n$. The full equivariance on the boundary implies the pseudo-equivariance we need, and a continuous extension of $f$ to the interior of the simplex is possible provided the linking number is zero.
In what follows we will switch between the two points of view: To find $f : \partial \Delta^{n-1}\to \Delta^{n-1}$ with zero linking number with the center is the same as to find $f : \partial \Delta^{n-1}\to \partial \Delta^{n-1}$ with zero mapping degree. To see that these are the same, just compose $f$ with the central projection from the center of the simplex so that its image is contained in the boundary of the simplex, and note that such a projection preserves both equivariance and pseudo-equivariance.
One counterexample is in fact a counterexample to Theorem \ref{theorem:equivariant-gale}.
\begin{theorem}
\label{theorem:n-odd}
If $n$ is odd and not a prime power then there exists an $\mathfrak S_n$-equivariant continuous $f : \partial \Delta^{n-1}\to\Delta^{n-1}$ of zero linking number with the center of $\Delta^{n-1}$.
\end{theorem}
\begin{proof}
We fix $n$ and omit $n$ from the notation where appropriate. We will start with the identity $f_0 : \partial \Delta \to \partial \Delta$, considered also as the inclusion $\partial \Delta \to \Delta$. It definitely has degree $1$ and we are going to modify it equivariantly so that its mapping degree will become $0$.
A modification consists in taking a dimension $k$ and the centers $c_1,\ldots,c_N$, $N=\binom{n}{k+1}$, of all the $k$-dimensional faces, and pulling the images $f(c_i)$ to the center of $\Delta$ (along with pulling their neighborhoods continuously and equivariantly). When the images $f(c_i)$ cross the center, the linking number of $f(\partial \Delta)$ changes by either $+1$ or $-1$ at every point, and by $\pm\binom{n}{k+1}$ in total.
Of course, in such a modification the sign $+$ or $-$ is, at first glance, fixed. But we may not only pull a point $c_1$ towards the origin, but also flip the image under the mapping derivative of the tangent space $T_{c_1}F$ of the $k$-face $F$ containing $c_1$ on the way. Such a flip commutes with the stabilizer of $c_1$ in the permutation group and can therefore be extended equivariantly to the neighborhood of the orbit $\{c_i\}$. Moreover, when $k$ is odd, this flip changes the sign of the crossing, and therefore we are able to choose the sign of the modification by applying or not applying the flip before the crossing. See the details of these pulling and flipping moves, for $n=3$, in Figures \ref{figure:pulling} and \ref{figure:pulling-all}.
When $k$ is even, the flip does not change the sign of the crossing, hence we are only able to make one crossing, and when we pull the point $c_1$ (and equivariantly its orbit) back through the center of $\Delta$, we just make the opposite crossing and return to where we started from in terms of the linking number. When $k$ is odd, we have much more freedom. We may pull the images $f(c_i)$ and their neighborhoods to the center $c\in\Delta$ once again and once again choose the sign of the crossing using or not using the equivariant flip before the crossing. In total, for odd $k$, this allows us to change the linking number by any multiple of $\binom{n}{k+1}$, positive or negative. Figure \ref{figure:pulling-twice} shows how to make two successive changes of the linking number in the same direction.
\begin{figure}[ht]
\center
\includegraphics[width=100mm]{fig_pulling.png}
\caption{Pulling one point towards the center with/without a flip of signs.}
\label{figure:pulling}
\end{figure}
\begin{figure}[ht]
\center
\includegraphics[width=100mm]{fig_pulling_all.png}
\caption{Pulling an orbit of points towards the center.}
\label{figure:pulling-all}
\end{figure}
\begin{figure}[ht]
\center
\includegraphics[width=60mm]{fig_pulling_twice.png}
\caption{Pulling a point towards the center and then pulling it back with a flip. The other points in the orbit are not shown.}
\label{figure:pulling-twice}
\end{figure}
Recall Ram's theorem \cite{ram1909} (or Lucas' theorem \cite{lucas1878}, which we have already used), asserting that there exist integers $x_1,\ldots,x_{n-1}$ such that
\[
x_1 \binom{n}{1} + x_2 \binom{n}{2} + \dots + x_{n-1}\binom{n}{n-1} = -1,
\]
provided $n$ is not a prime power, which holds in our case.
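For instance, for $n=6$ one such combination is
\[
\binom{6}{3}-\binom{6}{2}-\binom{6}{1} = 20-15-6 = -1,
\]
while for a prime power $n=p^{\alpha}$ no such combination exists, since $p$ divides every $\binom{n}{j}$ with $1\le j\le n-1$.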
Moreover, $n$ is odd and therefore, in view of the symmetry $\binom{n}{k+1} = \binom{n}{n-k-1}$, every binomial coefficient $\binom{n}{j}$ with $1\le j\le n-1$ equals a binomial coefficient with an even lower index, that is, one with even $k+1$. Hence, if we repeatedly use our moves for odd $k$ with possible flips, then by Ram's theorem we are able to modify the linking number of $f(\partial \Delta)$ with $c$ from $1$ to zero.
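To illustrate this for the smallest odd $n$ that is not a prime power, $n=15$: already the coefficients with even lower index satisfy
\[
\gcd\left(\binom{15}{14},\binom{15}{12},\binom{15}{10}\right)=\gcd(15,455,3003)=1,
\]
since $\gcd(15,455)=5$ and $\gcd(5,3003)=1$; hence an integer combination of these coefficients equal to $-1$ exists and can be realized by moves with odd $k$ alone.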
\end{proof}
It remains to handle the case of even $n$, but this is harder. In the above argument we cannot change the crossing sign for even $k$ (and then $n-k-2$ is also even, since $n$ is even); in particular, we can add or subtract $\binom{n}{k+1}$ from the linking number, but cannot repeat this operation, since when we move the orbit back to the center of $\Delta$, we just change the linking number back. A flip was really needed in order to have a chance to repeat the change by $\pm\binom{n}{k+1}$ several times in the same direction. In particular, for $n=6$ we failed to produce a $\mathfrak S_6$-equivariant map $\Delta^5\to\Delta^5$ of zero degree by hand.
What we are able to do now is to achieve this in the setting of pseudo-equivariance instead of full equivariance. The following result shows that the segment partition problem with the possibility of choosing nothing has no solution if $n$ is not a prime power.
\begin{theorem}
\label{theorem:n-even}
If $n$ is not a prime power then there exists a pseudo-equivariant, in terms of relations \eqref{equation:pseudo-eq}, continuous $f : \partial \Delta^{n-1}\to\Delta^{n-1}$ of zero linking number with the center of $\Delta^{n-1}$.
\end{theorem}
\begin{proof}
We do the same modifications as in the previous proof, but we need to handle the case of even $k$. In view of the relations $\binom{n}{k+1} = \binom{n}{n-k-1}$ we may also assume that $k\ge n/2-1\ge 2$.
Note that, for a $k$-face $F$, any composition of the pseudo-equivariance symmetries $\sigma_{F'G'Z}$ with $F'\supseteq F$ cannot take the face $F$ to itself and induce a non-identity map on it, because all such symmetries preserve the order of the nonzero coordinates. Hence we can choose a direction $v_1\in T_{c_1}F$ (this is possible because we only consider faces of positive dimension) at any point $c_1$ in the relative interior of $F$, and we obtain a well-defined pseudo-orbit $\{c_i\}$ of this point together with directions $v_i\in T_{c_i}F_i$, so that the pseudo-equivariance symmetries permute these points and these directions whenever they are defined on them.
Now we modify the original identity map $f_0$: we pull the images of the pseudo-orbit $f(c_i)$ towards the center $c$ of $\Delta$, and on the way to the center we flip the tangent space $f_* \left( T_{c_1}F_1 \right)$ along the chosen direction $f_* v_1$ if we need to switch the sign of the crossing. The corresponding flips around every point of the pseudo-orbit $\{f(c_i)\}$ are made in a pseudo-equivariant fashion, in total allowing us to modify the linking number by $\pm\binom{n}{k+1}$ with a sign of our choosing.
It is possible to iterate such steps; moreover, in the absence of true equivariance we are allowed to choose $c_1\in F$ different from the center of $F$, making every step independent of the others. Having the possibility to choose the sign and iterate, in view of Ram's theorem for non-prime-power $n$, we can obtain zero linking number.
\end{proof}
\section{A negative result for the equivariant fair partition technique}
\label{section:borsuk-ulam}
\subsection{Failure of a Borsuk--Ulam-type result from the mapping degree}
One general approach to envy-free segment partition problems (or fair partition problems, as in \cite{ahk2014,aak2018}) is to introduce a configuration space $X$ with an action of $\mathfrak S_n$ and a test map $f : X\to \mathbb R^n$, equivariant with respect to the action of $\mathfrak S_n$ on $X$ and its action on $\mathbb R^n$ by permuting the coordinates, so that a solution to the problem is a situation in which, for some $x\in X$, the image $f(x)$ hits the diagonal
\[
D_n = \{(u,u,\dots,u) \in \mathbb R^n\mathop{\ |\ }\nolimits u\in\mathbb R\}.
\]
Sometimes, a Borsuk--Ulam-type theorem guarantees such a diagonal hit. We now show that Theorem \ref{theorem:n-odd} guarantees that there is no such Borsuk--Ulam-type theorem for certain values of $n$:
\begin{theorem}
\label{theorem:odd-non-pp}
Assume $n$ is odd and not a prime power. Then for any Hausdorff compactum $X$ with a free action of $\mathfrak S_n$ there exists a continuous $\mathfrak S_n$-equivariant map $X\to \mathbb R^n$ not touching the diagonal $D_n\subset\mathbb R^n$.
\end{theorem}
\begin{proof}
Consider the orthogonal decomposition $\mathbb R^n = D_n\oplus W_n$ and the unit sphere $S(W_n)$ in the $(n-1)$-dimensional space $W_n$. It is possible to map the simplex $\Delta^{n-1}$ to $W_n$ equivariantly, by subtracting $1/n$ from every barycentric coordinate; then the radial projection from the origin identifies $\partial \Delta^{n-1}$ with $S(W_n)$ equivariantly.
Theorem \ref{theorem:n-odd} in these terms says that there exists an equivariant map $S(W_n)\to S(W_n)$ of mapping degree $0$. It remains to use Lemma \ref{lemma:zero-degree} below. This lemma gives a $\mathfrak S_n$-equivariant map $X\to S(W_n)$. Composing it with the inclusion $S(W_n)\subset\mathbb R^n$ we obtain an equivariant map from $X$ to $\mathbb R^n$ not touching the diagonal of $\mathbb R^n$.
\end{proof}
Now we present the lemma that (together with its proof) was communicated to us by Alexey Volovikov. It is a particular case of \cite[Lemma~3.9]{bartsch1993}\footnote{$S$ is a $G$-CW complex since it is a sphere of a linear representation of $G$ and hence can be $G$-equivariantly triangulated.}, but we present a short proof of the particular case we need here for completeness.
\begin{lemma}
\label{lemma:zero-degree}
Let $G$ be a finite group and $S$ be a sphere with an action of $G$. If there exists an equivariant map $f : S\to S$ of zero degree then any Hausdorff compactum $X$ with a free action of $G$ has an equivariant map $X\to S$.
\end{lemma}
\begin{proof}
A zero degree map of spheres $S\to S$ is null-homotopic and can be continuously extended to a cone over the sphere $S$. Consider the join $G * S$ as a union of $|G|$ such cones glued together along their bases and extend the map from one cone to all other cones by equivariance with respect to the diagonal action of $G$ on the join, obtaining an equivariant map $g : G*S \to S$. Then take joins of $g$ with identity maps of $G$ and compose them to extend the chain of equivariant maps
\[
\cdots \to G * G * G * S \to G * G * S \to G * S \to S.
\]
Since every component of the join embeds into the join, we may drop $S$ in the domain and eventually have an equivariant map as a composition:
\[
\underbrace{G * G * \dots * G}_N \to \underbrace{G * G * \dots * G}_N * S\to S
\]
for any $N$.
The join in the domain of the last map is the $(N-2)$-connected $(N-1)$-dimensional approximation $E_N G$ to the classifying space $EG$ of the group $G$. By standard properties of the classifying spaces it follows that, given a Hausdorff compactum $X$ with a free action of $G$, there exists an equivariant map $X\to E_N G$ for sufficiently large $N$, hence there exists an equivariant map $X\to S$ as a composition of $X\to E_N G \to S$.
\end{proof}
\begin{remark}
The assumption on compactness of $X$ in Theorem \ref{theorem:odd-non-pp} is not very restrictive in practical situations, since in most cases the non-compact configuration spaces for fair partition problems are $\mathfrak S_n$-equivariantly homotopy equivalent to their compact models, as happened in \cite[Theorem~3.13]{bz2014}, for example.
\end{remark}
\subsection{Some remarks and consequences}
We first address some doubts expressed by an anonymous referee of \cite{aks2019} about the novelty of Theorem \ref{theorem:odd-non-pp}. Our Theorem \ref{theorem:odd-non-pp} is similar to, but is not a particular case of, \cite[Theorem 3.6]{bartsch1993}. Indeed, \cite[Theorem 3.6]{bartsch1993} takes a group $G$ from a certain class and proves that there exists \emph{some} representation $W$ of $G$, for which there exists a $G$-equivariant map $X \to S(W)$ from any fixed point free $G$-space $X$. In Theorem \ref{theorem:odd-non-pp}, by contrast, we prove that for the specific group $G=\mathfrak S_n$ and a \emph{specific} representation sphere $S(W)$ of $G$, there exists a $G$-equivariant map $X\to S(W)$ from any free $G$-space $X$. The group $G=\mathfrak S_n$ does not satisfy the hypothesis of \cite[Theorem 3.6]{bartsch1993}, because it contains a subgroup (the alternating group) of prime index. The discussion in \cite[the paragraph after Theorem 3.6]{bartsch1993} also hints that our specific $W$ cannot be the one constructed in the proof of \cite[Theorem 3.6]{bartsch1993}, since our $W$ has the property $W^H=0$ whenever a subgroup $H\subset \mathfrak S_n$ acts transitively on the indices $1,\ldots, n$.
Now we briefly outline the open problems and the work that appeared after this paper was published as an arXiv preprint. Theorems \ref{theorem:equivariant-gale} and \ref{theorem:odd-non-pp} leave open the question ``For which $n$ is it possible to have a $\mathfrak S_n$-equivariant map $S(W_n)\to S(W_n)$ of zero degree?'' in the case when $n$ is even and not a prime power. The final resolution of this question requires more technicalities and is carried out in the separate paper \cite{avku2019}.
In terms of the works \cite{ahk2014,bz2014}, the theorems of this section show that the direct approach to fair partition problems fails not only in terms of the primary cohomology obstruction, but also in terms of higher obstructions, when $n$ is odd and not a prime power. This approach (and its appropriate generalizations) has some particular consequences for the topological Tverberg problem (or, more generally, for van Kampen--Flores-type problems), which are given in \cite{aks2019}.
Theorem \ref{theorem:odd-non-pp} also provides counterexamples to a class of envy-free division problems, where labeled partitions into $n$ nonempty parts ($n$ odd and not a prime power) are parametrized by a compact polyhedron $X$, on which $\mathfrak S_n$ acts by permutations of the labels, and the preferences of the $n$ players do not depend on the labels. A counterexample is obtained by taking an equivariant $f : X\to S(W_n)$ and letting every player prefer the parts whose corresponding coordinate $f_i$ of the map is maximal; the situation when every part is preferred by some player is then impossible. For a more detailed exposition of this idea, see \cite[Section~2]{avku2019}.
\usepackage{mathtools}
\usepackage{booktabs}
\usepackage{tikz}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{hyperref}
\usepackage{url}
\usepackage{amsfonts}
\usepackage{nicefrac}
\usepackage{microtype}
\usepackage{lipsum}
\usepackage{graphicx}
\usepackage{setspace}
\usepackage{fullpage,graphicx,psfrag,amsmath,amsfonts,verbatim,tabularx,multirow,amssymb}
\usepackage{color,soul}
\usepackage{placeins}
\usepackage{algorithm}
\usepackage[noend]{algpseudocode}
\usetikzlibrary{bayesnet}
\usetikzlibrary{arrows}
\usepackage[justification=centering]{caption}
\usepackage{subcaption}
\newcommand{\multilinecomment}[1]{}
\usetikzlibrary{backgrounds}
\usepackage{amsthm}
\usepackage{natbib}
\newcommand{\swap}[3][-]{#3#1#2}
\newcount\Comments
\Comments=0
\definecolor{darkgreen}{rgb}{0,0.5,0}
\newcommand{\kibitz}[2]{\ifnum\Comments=1\textcolor{#1}{#2}\fi}
\newcommand{\ambuj}[1]{\kibitz{darkgreen}{[AT: #1]}}
\newcommand{\adigi}[1]{\kibitz{blue}{[AD: #1]}}
\newcommand{{H}}{{H}}
\newcommand{\eta_{m}}{\eta_{m}}
\newcommand{\eta_{e}}{\eta_{e}}
\newcommand{K}{K}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{fact}[theorem]{Fact}
\newtheorem{definition}{Definition}
\newtheorem{assumption}{Assumption}
\newtheorem{exmp}{Example}[section]
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\makeatletter
\newtheorem*{rep@theorem}{\rep@title}
\newcommand{\newreptheorem}[2]{%
\newenvironment{rep#1}[1]{%
\def\rep@title{#2 \ref{##1}}%
\begin{rep@theorem}}%
{\end{rep@theorem}}}
\makeatother
\newreptheorem{theorem}{Theorem}
\newreptheorem{lemma}{Lemma}
\newenvironment{sketch}{\paragraph{\normalfont \textit{Proof Sketch.}}}{\hfill$\square$}
\newcommand{\centered}[1]{\begin{tabular}{l} #1 \end{tabular}}
\DeclarePairedDelimiter{\ceil}{\lceil}{\rceil}
\DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor}
\newtheorem{remark}[theorem]{Remark}
\usepackage{xr-hyper}
\makeatletter
\newcommand*{\addFileDependency}[1]{%
\typeout{(#1)}
\@addtofilelist{#1}
\IfFileExists{#1}{}{\typeout{No file #1.}}
}
\makeatother
\newcommand*{\myexternaldocument}[1]{%
\externaldocument{#1}%
\addFileDependency{#1.tex}%
\addFileDependency{#1.aux}%
}
\title{Balancing Adaptability and Non-exploitability in Repeated Games}
\author[1]{\href{mailto:<adigi@umich.edu>?Subject=Your UAI 2022 paper}{Anthony~DiGiovanni}{}}
\author[1]{Ambuj~Tewari}
\affil[1]{%
Department of Statistics\\
University of Michigan\\
Ann Arbor, MI, USA
}
\begin{document}
\maketitle
\begin{abstract}
We study the problem of adaptability in repeated games: simultaneously guaranteeing low regret for several classes of opponents.
We add the constraint that our algorithm is non-exploitable, in that the opponent lacks an incentive to use an algorithm against which we cannot achieve rewards exceeding some ``fair'' value.
Our solution is an expert algorithm (LAFF),
which searches within a set of sub-algorithms that are optimal for each opponent class,
and
punishes evidence of exploitation by switching to a
policy that enforces a fair solution.
With benchmarks that depend on the opponent class, we first show that LAFF has sublinear regret uniformly over
these classes.
Second, we show that LAFF discourages exploitation,
because exploitative opponents have linear regret.
To our knowledge, this work is the first to provide guarantees for both regret and non-exploitability in multi-agent learning.
\end{abstract}
\section{Introduction}\label{sec:intro}
General-sum repeated games
represent interactions between agents aiming to maximize their respective reward functions, with the possibility of compromise over conflicting goals. Despite their simplicity, achieving high rewards in such games is a challenging learning problem due to the complex space of
possible opponents.
Both the behavior of a given opponent
throughout
a game, and that opponent's choice of learning algorithm, may depend on one's own algorithm.
\citet{C20}
argues,
based on empirical studies of repeated game tournaments, that a successful agent must achieve two goals. First, it must optimize its actions with respect to its beliefs about the opponent. Second, it should act such that
the opponent forms beliefs
motivating a response that is beneficial to the agent.
In particular, multi-agent reinforcement learning (MARL) features the following tradeoff: how to adapt to a variety of
potential opponents,
while also actively shaping other agents' models of
oneself
such that they respond with cooperation, rather than exploitation.
If
an agent
commits to a
fixed policy
to ``lead'' the other player's best response \citep{LS01}, it may perform arbitrarily poorly against players that do not converge to such a response. This motivates the design of adaptive algorithms that try to lead,
but can
retreat
to a ``Follower'' (best response) approach if doing so gives greater rewards \citep{PS05, ICML10-chakraborty}.
An effective algorithm in this class is S++ \citep{C14}, which,
due to its
Follower sub-algorithm, has the drawback that it is exploitable\textemdash that is, it rewards agents insisting on unfair bargains (``bully'' strategies)
\citep{CO18, SLRC21}.
A simple motivating example of Follower exploitability is the game of Chicken (Figure \ref{fig:chicken}),
between players Row and Column.
Suppose Column knows
Row
will take
the apparently optimal action 1
if Column
repeats action 2.
Column
will then want to use the Leader strategy of committing to action 2 to gain the highest reward. Row thus only gets reward 0.25, and if Column has truly committed, an attempt by Row to dissuade this strategy by taking action 2 would give both players reward 0.
A cooperative outcome, e.g., alternating between the off-diagonal cells, could be achieved if Row's learning algorithm were designed to \textit{publicly disincentivize} commitments
to the exploitative Leader strategy.
\begin{figure}[ht]
\centering
\begin{tabular}{|c|c|}
\hline
0.5, 0.5 & 0.25, 1 \\
\hline
1, 0.25 & 0, 0\\
\hline
\end{tabular}
\caption{Reward bimatrix for Chicken.}
\label{fig:chicken}
\end{figure}
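For instance, alternating between the two off-diagonal cells in Figure \ref{fig:chicken} yields each player an average reward of
\[
\frac{1 + 0.25}{2} = 0.625
\]
per step, exceeding the $0.5$ from both players repeating action 1 and, of course, the $0.25$ that Row receives when bullied.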
MARL research has largely neglected the latter half of the adaptability vs. non-exploitability tradeoff.
Existing algorithms are either evaluated solely by
their
rewards \textit{conditional} on given opponents \citep{PS05, C14}, or, when the evaluation criterion does account for the incentives of algorithm selection,
the pool of competitor algorithms typically excludes bully strategies \citep{CG10}.
Previous MARL algorithms addressing the adaptability half of the tradeoff lack finite-time guarantees on rewards.
We aim to provide a theoretically grounded algorithm for repeated games that is both adaptable, by using Leader and Follower sub-algorithms, and non-exploitable.
More broadly,
this paper addresses a challenge of interest in several
areas of machine learning:
designing algorithms that account for how the distribution of data the algorithms are applied to may change based on the choice of the algorithms themselves.
\paragraph{\textbf{Related work}}
Previous algorithms for repeated games have
combined Leader and Follower modules,
aiming for
the following guarantees: worst-case safety, best response to players with bounded memory, and convergence in self-play to Pareto efficiency, i.e., an outcome in which no player can do better without the other doing worse \citep{PS04}.
Like ours,
these algorithms aim for adaptability,
but they do not have regret guarantees --- the desired
properties are only
shown to hold asymptotically.
Manipulator \citep{PS05} achieves these properties by starting with a fixed strategy
that maximizes the user's rewards conditional on the opponent using a best response, and switching to
reinforcement learning (RL) with a safety override if
that
strategy does not yield its target rewards.
Related to the self-play guarantee,
we prove a more general property of Pareto efficiency against effective RL algorithms (see Section \ref{sec:sub:rgclass}).
Like Manipulator, our approach tests
sub-algorithms sequentially.
S++ \citep{C14}
has empirically strong performance
on
the guarantees above.
However,
neither of these algorithms guarantees non-exploitability.
Although to our knowledge
no previous works have proven non-exploitability in our sense,
several algorithms are designed to
achieve ``fair'' Pareto efficiency
in self-play without using
Follower approaches that would be exploitable.
\citet{LS05}'s algorithm for
computation of
Nash equilibria, like our Leader sub-algorithms, enforces a Pareto efficient outcome
by punishing deviations.
If an agent played this equilibrium, which satisfies properties of symmetry similar to
the outcome our Egalitarian Leader sub-algorithm aims for, it would be non-exploitable.
However, committing to this equilibrium
precludes
learning a best response to fixed strategies that offer higher rewards than the cooperative solution, or exploiting adaptive players, which our Conditional Follower and Bully Leader sub-algorithms achieve, respectively.
In two-player bandit problems where the reward bimatrix must be learned, UCRG \citep{TD20} has
near-optimal
regret in self-play with respect to the egalitarian bargaining solution
(Section \ref{bargtheory}).
However, it cannot provably cooperate with
agents other than
itself, learn best responses, or exploit adaptive players.
Our objectives of adaptability and non-exploitability are inspired by work on learning equilibrium \citep{BT04, fcl, CR21}, a solution concept in which players' \textit{learning algorithms} are in a Nash equilibrium, beyond merely the equilibrium of an individual game itself.
This objective accounts for the dependence of the problems faced by multi-agent learning algorithms on the design of such algorithms.
\paragraph{\textbf{Contributions}} We propose an algorithm (LAFF) that, to our knowledge, is the first proven to have both strong performance against different classes of players in repeated games and a guarantee of non-exploitability, formalized in Section \ref{sec:sub:regretdef}. Specifically, these classes consist of stationary
algorithms (``Bounded Memory''), unpredictable adversaries (``Adversarial''), and adaptive RL agents (``Follower'').
LAFF's modular design
allows for extensions to a broader variety of opponent classes in future work. We propose regret metrics appropriate for games against Followers, based on the goal of Pareto efficiency. Our method of proof of adaptability and non-exploitability is novel, applying ``optimistic'' principles at two levels. First, LAFF starts with the sub-algorithm (or \textit{expert}) that would give the highest expected rewards
if the opponent were
in that expert's target class (``potential''), then proceeds through experts in descending order of
potential.
Second, LAFF chooses whether to switch experts by comparing the potential
of the active expert with its empirical average reward plus a slack term, which decreases with the time for which the expert is used.
For non-exploitability and regret against Followers, we use the properties of an enforceable bargaining solution (see Section \ref{bargtheory}) to upper-bound the other player's rewards.
\section{Preliminaries}\label{sec:prelim}
We study a special class of Markov games: repeated games with a bounded memory state representation \citep{PS05} and public randomization.
\subsection{Setup and Opponent Classification}\label{sec:sub:rgclass}
\noindent Consider a repeated game over $T$ time steps, defined for players $i=1,2$ by action spaces $\mathcal{A}^{(i)}$,
reward matrices $\mathbf{R}^{(i)}$,
and a fixed player memory length $K \in \mathbb{N}$. Here, all $\mathbf{R}^{(i)}(a^{(1)}, a^{(2)}) \in [0,1]$ are known by both players.
At time~$t$ the following random variables are drawn: $S_t$ for state, $A_t^{(i)}$ for actions, and $R_t^{(i)} = \mathbf{R}^{(i)}(A_t^{(1)},A_t^{(2)})$ for rewards.
A state space $\mathcal{S} := (\mathcal{A}^{(1)})^K \times (\mathcal{A}^{(2)})^K \times \{0, 1\}^{2K+2}$, and transition probabilities $\mathcal{P}(s'|s,a^{(1)},a^{(2)})$ between states, are induced by two features:
(1) the tuple of both players' last $K$ actions, and (2) the tuple of the last $K$ and current outcomes of a randomization signal for each player. (See Section 2.1.2 of \citet{MS06}.)
Thus, players condition their
actions
on their memory of the last $K$ time steps,
and a signal that permits
correlated action choices.
Formally, let $(w^{(1)}_t, w^{(2)}_t) \in [0, 1]^2$ be weights chosen by the respective players at time $t$,\footnote{We restrict to cases where players commit to a fixed weight, so the effective action space is finite. See the Appendix for details.} and draw $X_t \sim \text{Unif}[0,1]$ independent of all other random variables in the game. Then, letting $y_t^{(i)}$ be the realized value of $Y_t^{(i)} := \mathbb{I}[X_t < w^{(i)}_t]$, the second feature at time $t$ is
$(y_{t-K}^{(1)},...,y_t^{(1)},y_{t-K}^{(2)},...,y_t^{(2)})$.
This allows the players to correlate actions through the public signal $X_t$, even if one player unilaterally generates the signal.
For instance, in Chicken (Figure \ref{fig:chicken}),
players could flip a fair coin ($w^{(1)}_t = w^{(2)}_t = 0.5$)
at
each time step
and play the pair of actions
leading to the top-right cell
when it comes up heads, otherwise
play the bottom-left cell.
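As an illustrative sketch (not part of the formal development), the public randomization device above can be written in a few lines; the mapping from the shared bit to Chicken's action pairs is our own labeling for illustration, not taken from Figure \ref{fig:chicken}.

```python
import random

def draw_signal(w1, w2, rng):
    """One round of the public randomization device: a single
    uniform draw X_t is thresholded by each player's weight."""
    x = rng.random()                    # X_t ~ Unif[0, 1]
    return int(x < w1), int(x < w2)     # (Y_t^(1), Y_t^(2))

def chicken_actions(y):
    """Illustrative mapping of the shared bit to a joint action:
    heads -> top-right cell, tails -> bottom-left cell (row, col)."""
    return (0, 1) if y == 1 else (1, 0)

# With equal weights the realized bits always coincide, so the
# players effectively share one coin flip per step.
rng = random.Random(0)
assert all(y1 == y2 for y1, y2 in
           (draw_signal(0.5, 0.5, rng) for _ in range(1000)))
```

Because both bits are thresholds of the same draw $X_t$, the correlation holds even if only one player physically generates the signal.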
In this framework, at each time step each player has a choice of both a weight $w_t^{(i)}$ and policy $\pi^{(i)}_t: \mathcal{S} \to \Delta^{|\mathcal{A}^{(i)}|}$, a mapping from states to distributions over actions.
Given a fixed policy of player 2, a repeated game is a
Markov decision process (MDP) given by
$(\mathcal{S}, \mathcal{A}^{(1)}, r, p)$
as follows.
Let $a^{(i)}(s)$ be the last action of player $i$
that defines state $s$.
Here, $r: \mathcal{S} \times \mathcal{A}^{(1)} \to [0,1]$ is
$r(s, a) = \mathbf{R}^{(1)}(a^{(1)}(s), a^{(2)}(s))$,
and $p:\mathcal{S} \times \mathcal{A}^{(1)} \times \mathcal{S} \to [0,1]$ is
$p(s'|s,a) = \sum_{a^{(2)}} \mathcal{P}(s'|s,a,a^{(2)}) \pi^{(2)}(a^{(2)}|s)$.
A policy is called Markov if it is conditioned only on the current state.
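As a concrete sketch of this construction for $K = 1$ with the randomization signal omitted for brevity (so a state is simply the last joint action), the induced reward and transition functions can be assembled as follows; the dictionary-based representation is an assumption of this sketch, not the paper's notation.

```python
from itertools import product

def induced_mdp(R1, pi2, actions1, actions2):
    """MDP (r, p) faced by player 1 against a fixed Markov policy
    pi2, for memory length K = 1 and no signals: a state is the
    last joint action (a1, a2).  R1 maps (a1, a2) to player 1's
    reward; pi2 maps a state to a dict {a2: probability}.  This
    is a simplified sketch of the construction in the text."""
    states = list(product(actions1, actions2))
    # The reward depends only on the actions defining the state.
    r = {(s, a): R1[s] for s in states for a in actions1}
    # The next state pairs player 1's chosen action with a draw
    # from pi2; we record the marginal transition probabilities.
    p = {(s, a, (a, a2)): pi2[s].get(a2, 0.0)
         for s in states for a in actions1 for a2 in actions2}
    return r, p
```

For instance, against an opponent that copies player 1's last action, the transition out of any state is deterministic in player 1's chosen action.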
The problem faced by our learner, player 1, depends on which of the following classes player 2's algorithm is in:
\begin{enumerate}
\item \textit{Bounded Memory}: (i) Player 2 uses a constant $w^{(2)}$, reported at the start of the game; (ii) $\pi^{(2)}$ is Markov
and does not depend on time or player 1's signals $w^{(1)}_t$ or $y_t^{(1)}$; and (iii) for all $s, a^{(2)}$ we have $\pi^{(2)}(a^{(2)}|s) > 0$.\footnote{This relatively strong condition is needed for a concentration result in our analysis, ruling out cases where players remain in a transient state for an unknown time. We need to know the exit time from the transient states to compute the quantity $\overline{r}_{i,\tau}^{(2)}$ used by one of our experts. Section \ref{sec:experiments} shows strong results against a Bounded Memory player (FTFT) for which this condition does not hold.}
\item \textit{Adversarial}: Player 2 selects actions according to any arbitrary distribution, which may depend on the history of play and on player 1's policy at each time step.
\item \textit{Follower}: A Follower learns a best response when player 1 is ``eventually stationary'' (formalizing the follower concept in \citet{LS01}), and when the value of that best response meets player 2's standard of fairness. For some fairness threshold $V^{(2)} \geq 0$ (depending on the game), player 2's algorithm has the following properties.
Suppose that after time $T_0$, player 1 always plays a Bounded Memory algorithm (without condition (iii)), which induces an MDP of finite diameter $D$ where player 2's optimal average reward is at least $V^{(2)}$.
Then with probability at least $1-\delta$, player 2's regret up to time $T$ (see Section \ref{sec:sub:regretdef}) is bounded by $C_1T_0 + C_2D(SAT\log(T/\delta))^{1/2}$ for constants $C_1, C_2$.
\end{enumerate}
A repeated game against a Bounded Memory player is equivalent to a communicating MDP \citep{puterman}.
A Follower formalizes an agent that models \textit{our} agent as an MDP (Leader), and the regret bound in our definition is of a standard form for RL algorithms \citep{optQ}.
Many MARL algorithms take this approach at least partly \citep{PS05, ICML10-chakraborty, CG10}, hence this is a reasonable class to consider.
For example,
\citet{LS05}'s algorithm,
which plays a
certain
sequence of actions
and punishes deviations from that sequence,
is Bounded Memory ---
this algorithm does not change its policy
in response to the other player,
but its policy conditions on past actions.
A standard RL algorithm,
which would learn the sequence played by \citet{LS05}'s algorithm
and converge to
an optimal policy against it,
and which is a component of more complex repeated games
algorithms like Manipulator and S++,
is a case of a Follower.
As discussed in \citet{C20},
a large proportion of top-performing algorithms are Bounded Memory (Leaders) or Followers, or switch between the two.
These classes
illustrate fundamental
approaches to multi-agent learning
(thus, likely opponents
that our algorithm would face):
Either an agent behaves consistently, trying to shape the learning opponent’s behavior (Bounded Memory), or
the agent changes policies in a process of learning how the opponent behaves and computing an optimal response to that opponent, possibly subject to fairness standards as they try to avoid exploitation (Follower).
The Adversarial class accounts for opponent behavior between these two extremes, which is difficult to learn in generality, but a
worst-case guarantee
can still be achieved.
We thus restrict to guarantees against formalizations of these classes.
Bounds against a wider variety of opponents would be less theoretically tractable, insofar as finding the optimal strategy against one class interferes with performance against another.
(For example, \citet{PS05} note that in the repeated Prisoner's Dilemma,
it is
impossible for an algorithm to guarantee the best
response to an opponent
that may play either grim trigger
---
``defect if and only if either
player defected last round''
---
or ``always cooperate.'')
Extending to other opponent classes is an important direction for future work.
\subsection{Background on Bargaining Theory}\label{bargtheory}
\noindent To define appropriate
optimality criteria
for these opponent classes and construct corresponding experts, we use several concepts from bargaining theory.
We also illustrate these
concepts in the game of Chicken
from the introduction
(Example \ref{example:barg_concepts}).
Define the \textit{security values}
$\mu_{\textsc{S}}^{(i)} := \max_{\mathbf{v}_i} \min_{\mathbf{v}_{-i}} \mathbf{v}_1^\intercal \mathbf{R}^{(i)} \mathbf{v}_2$,
i.e., the rewards that each player can guarantee
regardless of their opponent's actions,
with player 1's maximin strategy as $\mathbf{v}^{(1)}_{\textsc{M}} = \argmax_{\mathbf{v}_1} \min_{\mathbf{v}_2} \mathbf{v}_1^\intercal \mathbf{R}^{(1)} \mathbf{v}_2$.
Let $\mathcal{G} := \{(\mathbf{R}^{(1)}(i,j),$ $\mathbf{R}^{(2)}(i,j)) \ | \ i \in \mathcal{A}^{(1)}, j \in \mathcal{A}^{(2)}\}$,
the set of reward pairs achievable
by pure actions in the game.
An important set of rewards in the computation of enforceable bargaining solutions is the convex polytope $\mathcal{U} := \text{Conv}(\mathcal{G}) \cap \{(u_1, u_2) \ | \ u_1 \geq \mu_{\textsc{S}}^{(1)}, u_2 \geq \mu_{\textsc{S}}^{(2)}\}$,
reward pairs that are achievable by randomizing over joint actions and give each player at least their security value.
One reward pair satisfying several desirable properties is the egalitarian bargaining solution (EBS) \citep{TD20}, given by $(\mu_{\textsc{E}}^{(1)}, \mu_{\textsc{E}}^{(2)}) := \argmax_{(u_1, u_2) \in \mathcal{U}} \min_{i=1,2}\{u_i - \mu_{\textsc{S}}^{(i)}\}$.
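These quantities can be computed numerically. The sketch below approximates the maximin value over a grid of two-action mixtures and searches segments between pure reward pairs for the EBS, which suffices for the small examples tested here; the Chicken-style payoffs in the test are illustrative assumptions, not the exact values of Figure \ref{fig:chicken}.

```python
def security_value(R, n_grid=1000):
    """Approximate maximin value of reward matrix R (rows: own
    actions) over a grid of two-action mixed strategies; adequate
    for the 2x2 examples here, an LP in general."""
    best = float("-inf")
    for k in range(n_grid + 1):
        p = k / n_grid
        # Expected reward against each opponent pure action.
        vals = [p * R[0][j] + (1 - p) * R[1][j] for j in range(len(R[0]))]
        best = max(best, min(vals))
    return best

def ebs(points, mu1, mu2, n_grid=1000):
    """Approximate EBS: maximize min_i (u_i - mu_i) over segments
    between achievable pure reward pairs, keeping u_i >= mu_i."""
    best, arg = float("-inf"), None
    for pa in points:
        for pb in points:
            for k in range(n_grid + 1):
                a = k / n_grid
                u1 = a * pa[0] + (1 - a) * pb[0]
                u2 = a * pa[1] + (1 - a) * pb[1]
                if u1 < mu1 or u2 < mu2:
                    continue
                score = min(u1 - mu1, u2 - mu2)
                if score > best:
                    best, arg = score, (u1, u2)
    return arg
```

With the assumed payoffs $\mathbf{R}^{(1)} = [[0.5, 0.25], [1, 0]]$ and the symmetric $\mathbf{R}^{(2)}$, this recovers a security value of $0.25$ and an EBS of $(0.625, 0.625)$, matching the numbers stated in Example \ref{example:barg_concepts}.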
The reward pairs over which we search for optimal benchmark values,
described in Section \ref{sec:sub:regretdef},
are subject to the following constraint of enforceability. To our knowledge, this definition, including the formalization of enforceability for finite punishment lengths, has not been provided in previous work on non-discounted games. However, see Definition 2.5.1 in \citet{MS06} for the discounted case.
\begin{definition}
\label{def:enf}
Let $(u_1, u_2) \in \mathcal{U}$ be a convex combination
of the reward pairs of joint actions in some set $\mathcal{X}$.
Let $r(\mathcal{X}) := \max_{(x_1,x_2) \in \mathcal{X}} \{\max_{j \neq x_2} \mathbf{R}^{(2)}(x_1,j) - \mathbf{R}^{(2)}(x_1,x_2)\}$
be player 2's deviation profit.
Then $(u_1, u_2)$ is \textbf{$\epsilon$-enforceable}, relative to a memory length $K$ and $\epsilon > 0$, if:
\begin{align*}
Ku_2 &\geq K\mu_{\textsc{S}}^{(2)} + r(\mathcal{X}) + \epsilon.
\end{align*}
\end{definition}
Intuitively, if player 2 does not deviate from player 1's desired action sequence, player 2 receives
$u_2$
on average
for each of $K$ steps. If player 2 deviates, gaining at most $r(\mathcal{X})$ profit, player 1 may punish with player 2's security value for $K$ steps. We call the total sequence reward ``enforceable'' if
it exceeds the total deviation reward
by at least $\epsilon$.
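Definition \ref{def:enf} and the deviation profit $r(\mathcal{X})$ reduce to a short computational check, sketched below; the Chicken-style numbers in the test are illustrative assumptions consistent with Example \ref{example:barg_concepts}, not values given in this paper's Figure \ref{fig:chicken}.

```python
def deviation_profit(X, R2):
    """r(X): player 2's best one-shot deviation gain over joint
    actions (x1, x2) in the support X; R2 is indexed [a1][a2]."""
    return max(
        max(R2[x1][j] for j in range(len(R2[x1])) if j != x2) - R2[x1][x2]
        for (x1, x2) in X
    )

def is_enforceable(u2, mu_s2, r_X, K, eps):
    """Definition 1: K steps at u2 must beat K steps at the
    security value plus the deviation profit, with slack eps."""
    return K * u2 >= K * mu_s2 + r_X + eps
```

Note that $r(\mathcal{X})$ may be negative, as when deviating from a coordinated action pair is strictly worse for player 2; enforceability then holds for larger $\epsilon$.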
Let $\mathcal{U}(\epsilon)$ be the set of $\epsilon$-enforceable rewards in $\mathcal{U}$. Then, the feasible region $\mathcal{U}(\epsilon)$,
used to compute an enforceable version of the EBS,
shrinks with increasing~$\epsilon$ and decreasing~$K$.
The $\epsilon$-enforceable EBS, which we will use to design one of the Leader experts, is found by solving the optimization problem from Section 3.2.4 of \citet{TD20} under the constraint in Definition \ref{def:enf}.
A similar procedure, applied to the objective of maximizing only player 1's reward, gives the Bully solution for the second
Leader expert.
We provide details on these solutions in the Appendix.
\multilinecomment{
While it has been shown that the EBS can be tractably computed absent enforceability constraints \citep{TD20}, it is nontrivial that this extends to the constrained case.
Lemma \ref{enforce}, proven in the Appendix, helps us construct the enforceability-constrained EBS.
\begin{lemma}
\label{enforce}
Consider any function $f$ that is monotone in $\mathcal{U}$, that is, if $u_1 \geq v_1$ and $u_2 \geq v_2$ then $f(u_1,u_2) \geq f(v_1,v_2)$. Then there always exists a maximizer of $f$ over $\mathcal{U}(\epsilon)$ that is a convex combination of no more than two points in $\mathcal{G}$.
\end{lemma}
The $\epsilon$-enforceable EBS, which we will use to design one of the Leader experts, is found as follows. Assign to each joint action pair $x_A := (i_1, j_1)$ and $x_B := (i_2, j_2)$ the score $\rho(x_A,x_B) := \max_{\alpha_{AB}} \min_{i=1,2}\{\alpha_{AB} \mathbf{R}^{(i)}(x_A) + (1-\alpha_{AB})\mathbf{R}^{(i)}(x_B) - \mu_{\textsc{S}}^{(i)}\}$, where $\mathbf{R}^{(i)}(x_A) := \mathbf{R}^{(i)}(i_1,j_1)$ and $\mathbf{R}^{(i)}(x_B) := \mathbf{R}^{(i)}(i_2,j_2)$, and choose the pair with the highest score \citep{TD20}.
Searching over pairs is sufficient by Lemma \ref{enforce}. We maximize $\rho$ over $\alpha_{AB}$ subject to enforceability.
For two points such that $\mathbf{R}^{(2)}(x_A) > \mathbf{R}^{(2)}(x_B)$ (order does not matter), $\epsilon$-enforceability requires:
\begin{align*}
& \alpha_{AB} \geq \frac{ r(\{x_A, x_B\}) + \epsilon + K[\mu_{\textsc{S}}^{(2)} - \mathbf{R}^{(2)}(x_B)]}{K [\mathbf{R}^{(2)}(x_A) - \mathbf{R}^{(2)}(x_B)]}.
\end{align*}
If $\mathbf{R}^{(2)}(x_A) = \mathbf{R}^{(2)}(x_B)$, then $\alpha_{AB}$ can be arbitrary as long as the first line above still holds; otherwise, this pair is not enforceable regardless of $\alpha_{AB}$.
Taking $\mathbf{R}^{(2)}(x_A) > \mathbf{R}^{(2)}(x_B)$ without loss of generality,
there are two cases to consider.
(1) If $\mathbf{R}^{(i)}(x_A) \geq \mathbf{R}^{(i)}(x_B)$ for both $i=1,2$,
both functions in the minimum have nonnegative slope, so $\rho$ is nondecreasing in $\alpha_{AB}$.
Otherwise, (2) $\rho$ has its maximum at $a = \frac{\mathbf{R}^{(2)}(x_B) - \mathbf{R}^{(1)}(x_B)}{\mathbf{R}^{(1)}(x_A) - \mathbf{R}^{(1)}(x_B) + \mathbf{R}^{(2)}(x_B) - \mathbf{R}^{(2)}(x_A)}$.
In case 1, since $\epsilon$-enforceability is a \textit{lower} bound $v(\epsilon, K)$ on $\alpha_{AB}$, the optimal $\alpha_{AB} = 1$ if that lower bound is at most 1, otherwise this pair is not enforceable.
In case 2, if enforceability does not exclude $a$, then $\alpha_{AB} = a$. Otherwise, the non-excluded region must decrease down from $v(\epsilon, K)$ or increase up to $v(\epsilon, K)$; either way, $\alpha_{AB} = v(\epsilon, K)$ is optimal.
Finally, we also construct the Bully solution for the second Leader expert by following the procedure above, except with a ``selfish'' score $\rho(x_A,x_B) := \max_{\alpha_{AB}} \alpha_{AB} \mathbf{R}^{(1)}(x_A) + (1-\alpha_{AB})\mathbf{R}^{(1)}(x_B)$.
This is, again, a monotone function over $\mathcal{U}(\epsilon)$, so searching over pairs of joint actions suffices. If $\mathbf{R}^{(1)}(x_A) \leq \mathbf{R}^{(1)}(x_B)$, $\rho$ is nonincreasing in $\alpha_{AB}$, so as before we set
$\alpha_{AB} = v(\epsilon, K)$.
If $\mathbf{R}^{(1)}(x_A) > \mathbf{R}^{(1)}(x_B)$, we set $\alpha_{AB} = 1$.
}
\begin{exmp}
\label{example:barg_concepts}
In Chicken (Figure \ref{fig:chicken}),
both players' security value is 0.25, guaranteed by playing action 1.
The EBS is given by 50\% weight on the top-right action pair, and 50\% on the bottom-left, giving both players $0.625$.
If player~1 plays its half of either action pair in the EBS, player 2 does worse by deviating
(by a margin of at least 0.25), so no punishment
is necessary to enforce
the EBS.
Thus the EBS is enforceable for any $K$ and $\epsilon < 0.375K + 0.25$.
\end{exmp}
\subsection{Objectives}\label{sec:sub:regretdef}
\noindent The metric of regret, which we aim to minimize, varies based on the class of player 2 our algorithm faces. For a player 2 algorithm $\mathfrak{B}$, regret with respect to a benchmark $\mu(\mathfrak{B})$ is $\mathcal{R}(T) := T\mu(\mathfrak{B}) - \sum_{t=1}^T R_t^{(1)}$.
\paragraph{\textbf{Bounded Memory}} By condition (iii) for Bounded Memory, player 2 induces a communicating MDP.
Let $\Pi$ be the set of time-independent deterministic Markov policies. Then the state-independent optimal average reward is $\mu_*^{(1)} := \max_{\pi^{(1)} \in \Pi} \lim_{t \to \infty} \frac{1}{t} \mathbb{E}_{ \pi^{(1)}}(\sum_{i=0}^t R_i^{(1)}|S_0)$. Here, $\mu(\mathfrak{B}) = \mu_*^{(1)}$.
\paragraph{\textbf{Adversarial}} Against an Adversarial player, an appropriate benchmark is the greatest expected value that player 1 can guarantee, no matter player 2's actions. This is player~1's security value: $\mu(\mathfrak{B}) = \mu_{\textsc{S}}^{(1)}$. Note the distinction from \textit{external regret} used in adversarial bandits and MDPs.
While the problem is trivial if player 2 is known to be Adversarial, since one can always play the maximin strategy, our challenge is to maintain low Adversarial regret without losing guarantees on other regret measures. This corresponds to \textit{safety} in multi-agent learning \citep{PS04}.
\paragraph{\textbf{Follower}} The concept of regret against a Follower is more complex.
Player 2's sequence of policies can vary significantly based on
player 1's actions.
Evaluating our algorithm
by
the maximum average reward in hindsight would have to account for this counterfactual dependence \citep{C14}.
However, by considering enforceability, we can define benchmarks by lower bounds on this maximum,
constrained
by the Follower's fairness value $V^{(2)}$.
We consider two cases depending on $V^{(2)}$, focusing for simplicity on the extremes where the Follower either accepts nothing less than the EBS or accepts any enforceable bargain. In principle, our framework could be extended for other $V^{(2)}$ values.
First, the EBS
is Pareto efficient, meaning
we cannot achieve greater than $\mu_{\textsc{E}}^{(1)}$ without player 2 receiving less than $\mu_{\textsc{E}}^{(2)}$.
When
the EBS can be enforced
with a fixed policy, $\mu_{\textsc{E}}^{(1)}$ is thus an appropriate
benchmark if the fairness threshold $V^{(2)}$ is player 2's part of the EBS pair.
The EBS is not always enforceable for finite $K$, however.
In this case,
the enforceable version of the
EBS is
the maximizer
$(\mu_{\textsc{E},\epsilon}^{(1)}, \mu_{\textsc{E},\epsilon}^{(2)})$ of the objective $f(u_1, u_2) = \min_{i=1,2}\{u_i - \mu_{\textsc{S}}^{(i)}\}$ in $\mathcal{U}(\epsilon)$ for some $\epsilon > 0$.
For this first case,
we therefore consider $V^{(2)} = \mu_{\textsc{E},\epsilon}^{(2)}$, where player 2 follows conditionally. If $\mathcal{U}(\epsilon)$ is empty, $(\mu_{\textsc{E},\epsilon}^{(1)}, \mu_{\textsc{E},\epsilon}^{(2)}) := (\mu_{\textsc{S}}^{(1)}, \mu_{\textsc{S}}^{(2)})$. We set $\mu(\mathfrak{B}) = \mu_{\textsc{E},\epsilon}^{(1)}$.
The second case is $V^{(2)} = 0$, i.e., player 2 follows unconditionally. Here, we compute the maximizer over $\mathcal{U}(\epsilon)$ of $f(u_1, u_2) = u_1$.
Let $(\mu_{\textsc{B},\epsilon}^{(1)}, \mu_{\textsc{B},\epsilon}^{(2)})$ be the solution to this optimization problem (the \textit{Bully values}), or $(\mu_{\textsc{B},\epsilon}^{(1)}, \mu_{\textsc{B},\epsilon}^{(2)}) := (\mu_{\textsc{S}}^{(1)}, \mu_{\textsc{S}}^{(2)})$ if no solution exists. We define $\mu(\mathfrak{B}) = \mu_{\textsc{B},\epsilon}^{(1)}$.
While these regret metrics
provide standards for
adaptability,
we must also formalize non-exploitability.
We seek a guarantee on an algorithm's performance against its best response.
It is unclear how to characterize the best response to an algorithm capable of adapting to several opponent classes. Given this, we focus on a tractable and practically relevant subproblem: guaranteeing that the best response to our algorithm is not a ``bully'' in the sense discussed in the introduction, which is the most common exploitative strategy in MARL literature \citep{PS05, LS01,Press10409, LS05}.
Even this weaker guarantee is absent from previous work, and we show numerically in Section \ref{sec:experiments} that this suffices for our algorithm to be in learning equilibrium with itself
(see Section \ref{sec:intro}) in a pool of top-performing algorithms.
\begin{definition}
Let player 2 be Bounded Memory,
and $\mu_{\textsc{M}}^{(1)}$ and $\mu_{\textsc{M}}^{(2)}$ be the expected rewards for players 1 and 2
when player 1 uses $\mathbf{v}^{(1)}_{\textsc{M}}$ and
player 2 uses $\pi^{(2)}$.
An algorithm $\mathfrak{A}$ is
\textbf{$(V^{(1)},\eta_{e})$-non-exploitable}
if, whenever
$\mu_*^{(1)} < V^{(1)} - \eta_{e}$ and $\mu_{\textsc{M}}^{(2)} > \mu_{\textsc{E},\epsilon}^{(2)}$, for all $c > 0$ player 2's regret with respect to $\mu_{\textsc{E},\epsilon}^{(2)} + c$ against $\mathfrak{A}$ is $\Omega(T)$.
\end{definition}
Our algorithm is exploitable if player 2 can profit
(do better than $\mu_{\textsc{E},\epsilon}^{(2)}$)
from
a policy against which we cannot achieve close to
some value corresponding to a standard of fairness.
The hyperparameter $V^{(1)}$ tunes the tradeoff
between exploitability and
flexibility to various opponents.
Player 2 does \textit{not} profit from
exploitation if they incur linear regret.
\begin{exmp}
In Chicken (Figure \ref{fig:chicken}), let $V^{(1)} = 0.625$ (i.e., the EBS), and consider the following strategies: a) always play action 2, b) always play the opponent's last action,
and c) play the best response to the empirical distribution of the opponent's past actions. Strategy (a) is exploitative Bounded Memory. Thus, we argue that an effective algorithm should avoid playing the ``best response'' of action 1, instead discouraging the use of this strategy by, e.g., consistently playing the EBS (see Egalitarian Leader in the next section). Strategy (b) is also Bounded Memory, but not exploitative since one can achieve at least $V^{(1)}$ against this player on average. Our algorithm should therefore learn the best response to (b). Strategy (c) is a Follower with $V^{(2)} = 0$, thus our algorithm should converge to consistently playing action 2 against (c), achieving the Bully value.
\end{exmp}
\section{Lead and Follow Fairly (LAFF)}\label{sec:ergalgo}
We apply an expert algorithm to a set of experts designed for our target classes. Expert algorithms
use an active expert to choose an action at a given time,
and switch active experts based on their relative performance \citep{C14}.
LAFF switches experts sequentially, going to the next expert in a predefined sequence only
if the rewards obtained by its active expert fall short of the current target value.
Some of the experts are also designed to guarantee non-exploitability.
\subsection{Description of Experts}
\noindent
LAFF uses an active expert for an epoch of length $H$ before checking whether to switch. Let $\tau$ be the time elapsed since LAFF started using the current instance of the active expert (at time $t_i + 1$), and define $\overline{r}^{(1)}_{i,\tau} := \frac{1}{\tau} \sum_{t=t_i + 1}^{t_i + \tau} R^{(1)}_t$ and $\overline{r}^{(2)}_{i,\tau} := \frac{1}{\tau-K} \sum_{t=t_i + K + 1}^{t_i + \tau} R^{(2)}_t$. See Figure \ref{flowchart} for a summary of algorithmic elements that these experts depend on.
\begin{figure}
\centering
\tikz{
\node[obs, xshift=-3.5cm] (f) {$\phi_F$}; %
\node[obs, xshift=-2cm] (e) {$\phi_E$}; %
\node[obs, xshift=-0.5cm] (m) {$\phi_M$}; %
\node[obs, xshift=1cm] (b) {$\phi_B$}; %
\node[latent, rectangle, above=of f, yshift=-0.5cm] (q) {Q-learning};
\node[latent, rectangle, above=of e, yshift=-0.5cm, xshift=0.75cm] (v) {$\mathbf{v}^{(1)}_{\textsc{M}}$};
\node[latent, rectangle, above=of e, yshift=-0.5cm, xshift=2.25cm] (p) {$\mathbf{v}^{(1)}_P$};
\edge {q} {f}
\edge {v} {e,m,b}
\edge {p} {e,b}
\edge {e} {f,m}
}
\caption{Algorithmic components (white) of LAFF's experts (gray). An arrow from one node to another means the former is used in computation of the output by the latter.}
\label{flowchart}
\end{figure}
\paragraph{\textbf{Conditional Follower $(\phi_F)$}}
Recall the
benchmarks $\mu_{\textsc{B},\epsilon}^{(1)}$,
$\mu_{\textsc{E},\epsilon}^{(1)}$, and
$\mu_{\textsc{S}}^{(1)}$ from
Section \ref{sec:sub:regretdef}.
To handle cases where
$\mu_*^{(1)}$
against a Bounded Memory player 2
lies
between these values,
LAFF uses $\phi_F$ multiple times in the sequence (called ``instances''). This expert starts off equivalent to Optimistic Q-learning \citep{optQ}, whose regret bound
(in an MDP with $S$ states and $A$ actions)
with probability at least $1-\delta$ is $\mathcal{R}_{Q}(\tau, \delta) = \mathcal{O}((SA\log(\frac{\tau}{\delta}))^{1/3}\tau^{2/3})$. After each \textit{subepoch} of length $H^{1/2}$, if $\overline{r}^{(1)}_{i,\tau} < V^{(1)} - \frac{\mathcal{R}_{Q}(\tau, \delta/T)}{\tau}$, this expert switches to the Egalitarian Leader $\phi_E$ (below) for as long as \textit{any} instance of $\phi_F$ is used. Otherwise, it uses Optimistic Q-learning for the next subepoch.
\paragraph{\textbf{Conditional Maximin ($\phi_M$)}}
Initially, $\phi_M$ uses the policy $\pi^{(1)}(\cdot|s) = \mathbf{v}^{(1)}_{\textsc{M}}$ for all $s$. Let $\eta_{m} > 0$ be a slack variable, chosen based on the class of Adversarial players considered in Theorem \ref{hedge}. After each subepoch, if $\overline{r}^{(2)}_{i,\tau} > \mu_{\textsc{E},\epsilon}^{(2)} - \eta_{m} + \sqrt{\frac{\log(T/\delta)}{2(\tau-K)}}$, this expert switches to $\phi_E$ for the rest of the game. Otherwise, it uses $\mathbf{v}^{(1)}_{\textsc{M}}$ for the next subepoch.
\paragraph{\textbf{Egalitarian Leader ($\phi_E$)}} If there is no enforceable EBS, let $\phi_E \equiv \mathbf{v}^{(1)}_{\textsc{M}}$.
Otherwise, let the EBS action pairs be denoted $(a_{\textsc{E}}^{(1)}(y), a_{\textsc{E}}^{(2)}(y))$ for $y=0,1$,
and the weight on the first action pair
be $\alpha_{\textsc{E}}$.
While $\epsilon$-enforceability requires that a punishment of length $K$ is sufficient to make a reward pair player 2's best response, this length may not be \textit{necessary}.
We therefore consider the least harsh punishment (if any) needed to enforce the EBS, that is, the value $K' \leq K$ satisfying $K' = \max\Big\{0, \Big \lceil \frac{r(\{(a_{\textsc{E}}^{(1)}(0), a_{\textsc{E}}^{(2)}(0)), (a_{\textsc{E}}^{(1)}(1), a_{\textsc{E}}^{(2)}(1))\}) + \epsilon}{\mu_{\textsc{E},\epsilon}^{(2)} - \mu_{\textsc{S}}^{(2)}} \Big \rceil \Big\}$.
Let $\mathbf{v}^{(1)}_P := \argmin_{\mathbf{v}_1} \max_{\mathbf{v}_2} \mathbf{v}_1^\intercal\mathbf{R}^{(2)}\mathbf{v}_2$, player 1's punishment strategy.
Recall that policies in our framework are conditioned on binary signals $Y_t^{(i)}$,
whose distributions are determined
by players' reported weights $w_t^{(i)}$.
Then, for the first ${K'}$ time steps, with the realized value $y_{t}^{(1)}$ of the signal given by $w_t^{(1)} = \alpha_{\textsc{E}}$ for all $t$, $\phi_E$ plays $a_{\textsc{E}}^{(1)}(y_{t}^{(1)})$. (This
ensures that, if LAFF switches to $\phi_E$ mid-game, player 2 is not punished for
having played actions other than the EBS
before LAFF started signaling enforcement of the EBS.) Afterwards, $\phi_E$ uses the following stationary policy. If, for any of the past $K'$ timesteps, player 2 has played $A^{(2)}_t \neq a_{\textsc{E}}^{(2)}(y_t^{(2)})$
--- i.e., deviated from the EBS ---
the distribution over actions for that state is $\mathbf{v}^{(1)}_P$. Otherwise, $a_{\textsc{E}}^{(1)}(y_t^{(1)})$ is played.
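As a minimal sketch of $\phi_E$'s stationary phase and of the computation of $K'$ (with a deterministic stand-in for $\mathbf{v}^{(1)}_P$ and caller-supplied EBS action maps; these simplifications are ours):

```python
import math

def punishment_length(r_X, eps, mu_E2, mu_S2):
    """Least sufficient punishment length K' from the formula in
    the text (clipped below at 0; callers should check K' <= K)."""
    return max(0, math.ceil((r_X + eps) / (mu_E2 - mu_S2)))

def phi_E_action(history, y1, a_E1, a_E2, K_prime, punish_action):
    """One step of phi_E's stationary policy.  history holds
    (y2, a2) pairs for past steps, most recent last; a_E1[y] and
    a_E2[y] are the EBS actions for signal bit y; punish_action
    is a deterministic stand-in for the punishment strategy v_P.
    Punish iff player 2 deviated from the EBS in the last K' steps."""
    window = history[-K_prime:] if K_prime > 0 else []
    if any(a2 != a_E2[y2] for (y2, a2) in window):
        return punish_action
    return a_E1[y1]
```

With the Chicken-style numbers of Example \ref{example:barg_concepts} (deviation profit $-0.25$, EBS gap $0.375$), `punishment_length` returns $0$: no punishment is needed to enforce the EBS, consistent with the example.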
\paragraph{\textbf{Bully Leader ($\phi_B$)}} This expert is defined like $\phi_E$, but using the Bully solution from Section \ref{bargtheory}
(maximizing the selfish objective).
If there is no enforceable Bully solution, let $\phi_B \equiv \mathbf{v}^{(1)}_{\textsc{M}}$. Otherwise, writing the solution as $(a_{\textsc{B}}^{(1)}(y), a_{\textsc{B}}^{(2)}(y))$ for $y=0,1$ with weight $\alpha_{\textsc{B}}$, define $\phi_B$ just as $\phi_E$ for this solution.
\subsection{Algorithm}
We design the selection of experts by LAFF (Algorithm \ref{followfirst}) such that, for any of our target classes, LAFF eventually
commits to the optimal expert against player 2 in a sequence $\{\phi_j\}_j$.
Over an epoch, the active expert is executed,
and we update this expert's average rewards
since it was made active (line \ref{record}). Afterwards, LAFF switches to the next expert in the schedule if and only if it rejects the hypothesis that the current expert's expected value exceeds its corresponding target $\mu_j$ (line \ref{baselinecheck}).
The false positive rate of this hypothesis test is controlled by a slack function $\mathcal{B}(\tau)$, which decreases as $\tau$ grows.
We define $\mathcal{B}$ in the proof of Lemma \ref{followregret} (see Appendix).
\multilinecomment{
The false positive rate of this hypothesis test is controlled by a function $\mathcal{B}$ of the time elapsed since the last switch (line \ref{tauup}), defined:
\begin{align*}
\xi(\epsilon, r) &:= \begin{cases}
\frac{\epsilon}{2K'},& \text{if } r \geq 0\\
\frac{\epsilon + r}{2K'},& \text{if } -\epsilon < r < 0\\
-r, & \text{otherwise},
\end{cases} \\
\mathcal{B}(\tau) &:= \frac{1}{\tau} \cdot \frac{K'\xi(\epsilon, r(\mathcal{X})) + C_1T_0 + K'+1}{\xi(\epsilon, r(\mathcal{X}))} \\
&+ \frac{1}{\tau} \cdot \frac{C_2\mathcal{R}_{Q}(\tau, \frac{\delta}{T}) + (3 + \xi(\epsilon, r(\mathcal{X})))\sqrt{\frac{\tau \log(\frac{T}{\delta})}{2}}}{\xi(\epsilon, r(\mathcal{X}))}.
\end{align*}
Where $\mathcal{X} = \mathcal{X}_{\textsc{B}} := \{(a_{\textsc{B}}^{(1)}(y), a_{\textsc{B}}^{(2)}(y))\}_{y=0,1}$ for expert index $j \leq 2$, $\mathcal{X} = \mathcal{X}_{\textsc{E}} := \{(a_{\textsc{E}}^{(1)}(y), a_{\textsc{E}}^{(2)}(y))\}_{y=0,1}$ for $j > 2$, and $\delta > 0$ is some confidence level.
}
Because $\mu_{\textsc{B},\epsilon}^{(1)} \geq \mu_{\textsc{E},\epsilon}^{(1)} \geq \mu_{\textsc{S}}^{(1)}$, and the optimal reward $\mu_*^{(1)}$ against a Bounded Memory player may be greater than $\mu_{\textsc{B},\epsilon}^{(1)}$ or in between these values, $\{\phi_j\}_{j}$ prioritizes the order of experts based on the optimal average reward they could achieve against the corresponding player 2 class (line \ref{initline}).
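The epoch loop and switching test of Algorithm \ref{followfirst} can be paraphrased as the following sketch; the experts, targets, and slack function $\mathcal{B}$ are caller-supplied stubs here, and we assume $H$ divides $T$ for brevity.

```python
def laff(T, H, experts, targets, B, step):
    """Skeleton of LAFF's epoch loop and switching test
    (mirroring Algorithm 1).  experts and targets are the
    schedules {phi_j}, {mu_j}; B(tau) is the slack term;
    step(expert, t) returns player 1's reward at time t under
    the active expert.  All inputs are caller-supplied."""
    j, tau, R_tau = 0, 0, 0.0
    for i in range(1, T // H + 1):
        for t in range((i - 1) * H + 1, i * H + 1):
            R_tau += step(experts[j], t)
        tau += H
        # Switch iff we reject the hypothesis that the active
        # expert's expected value reaches its target mu_j.
        if j < len(experts) - 1 and R_tau / tau < targets[j] - B(tau):
            j, tau, R_tau = j + 1, 0, 0.0
    return j
```

As in the algorithm, the last expert in the schedule is never abandoned, so the test is only applied while later experts remain.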
\section{Analysis}
We will now show that LAFF meets our key criteria of adaptability
and non-exploitability.
See Appendix for proofs of lemmas and the detailed proof of Theorem \ref{hedge}.
Lemma \ref{followregret} shows that
with high probability player 2's rewards against $\phi_E$ are not much greater than the EBS
(thus non-exploitability is feasible),
and player 1's rewards against a Follower are near the target when the correct Leader is used.
\begin{algorithm}
\caption{Lead and Follow Fairly (LAFF)}\label{followfirst}
\begin{algorithmic}[1]
\State \textbf{Init} target schedule $\{\mu_j\}_j = \{\mu_{\textsc{B},\epsilon}^{(1)}, \mu_{\textsc{B},\epsilon}^{(1)},\mu_{\textsc{E},\epsilon}^{(1)},\mu_{\textsc{E},\epsilon}^{(1)},$ $\mu_{\textsc{S}}^{(1)}\}$, expert schedule $\{\phi_j\}_j = \{\phi_F, \phi_B, \phi_F, \phi_E,$ $\phi_F, \phi_M\}$, expert index $j = 1$, $\tau = 0$, $R_\tau = 0$
\label{initline}
\For{$i=1,2,\dots,\ceil{T/H}$}
\For{$t=(i-1)H + 1,\dots,\min\{iH, T\}$}
\State Run expert $\phi_j$
\State $R_\tau \leftarrow R_\tau + \mathbf{R}^{(1)}(A^{(1)}_t, A^{(2)}_t)$ \label{record}
\EndFor
\State $\tau \leftarrow \tau + H$ \label{tauup}
\If{$j < |\{\phi_j\}_j|$ and $\frac{R_\tau}{\tau} < \mu_j - \mathcal{B}(\tau)$} \label{baselinecheck}
\State $j \leftarrow j +1$, $\tau \leftarrow 0$, $R_\tau \leftarrow 0$
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\begin{lemma}
\label{followregret}
\textbf{(Reward Bounds When LAFF Leads)}
If player 1 uses $\phi_E$ over a sequence of length $\tau+K'$ starting at time $t^*+1$, then
with probability at least $1- \frac{3\delta}{T}$:
\begin{align*}
&\sum_{t=t^*+K'+1}^{t^* + K' + \tau} R^{(2)}_t \leq K' + 1 + \tau\mu_{\textsc{E},\epsilon}^{(2)} + 3\sqrt{\textstyle{\frac{1}{2}}\tau\log(\frac{T}{\delta})}.
\end{align*}
If player 2 is a Follower with $V^{(2)} = 0$, and player 1 uses $\phi_B$, then with probability at least $1- \frac{5\delta}{T}$, we have $\overline{r}^{(1)}_{i,\tau} \geq \mu_{\textsc{B},\epsilon}^{(1)} - \mathcal{B}(\tau)$.
If $V^{(2)} = \mu_{\textsc{E},\epsilon}^{(2)}$, and player 1 uses $\phi_E$, then with probability at least $1- \frac{5\delta}{T}$, we have $\overline{r}^{(1)}_{i,\tau} \geq \mu_{\textsc{E},\epsilon}^{(1)} - \mathcal{B}(\tau)$.
\end{lemma}
Lemma \ref{conditional_experts} guarantees that with high probability, LAFF follows or uses the maximin strategy against non-exploitative players, and punishes exploitative players.
\begin{lemma}
\label{conditional_experts}
\textbf{(False Positive and Negative Control of Exploitation Test)} Consider a sequence of $k$ epochs each of length $H$.
Let $m^*_{F}$ or $m^*_{M}$ be, respectively, the index of the \textit{subepoch} within this sequence at the start of which $\phi_F$ or $\phi_M$ switches to punishing with $\phi_E$, if at all (if not, let $m^*_{F}$ or $m^*_{M} = \infty$). Let $\eta_{e} \geq \frac{2\mathcal{R}_{Q}(H/2, \delta/T)}{H} + \sqrt{\frac{2S^2A\log(c_0/\delta)}{c_1H}}$, where $c_0, c_1$ are defined as in Theorem 5.1 of \citet{MT05}, and $\eta_{m} \geq \sqrt{\frac{\log(T/\delta)}{2(H/2-K)}} + \sqrt{\frac{64e\log(N_q/\delta^2)}{(1-\lambda)(H/2-K)}}$, where $\lambda$ and $N_q$ are constants with respect to time defined in Lemma \ref{raolemma} (see Appendix).
Then, suppose player 2 is Bounded Memory, and $\phi_F$ is used. If $\mu_*^{(1)} < V^{(1)} - \eta_{e}$, then with probability at least $1-\delta$, $m^*_{F} \leq \ceil{\frac{H^{1/2}}{2}}$. If $\mu_*^{(1)} \geq V^{(1)}$, then with probability at most $\frac{kH^{1/2}\delta}{T}$, $m^*_{F} < \infty$. If $\phi_M$ is used, and $\mu_{\textsc{M}}^{(2)} > \mu_{\textsc{E},\epsilon}^{(2)}$, then with probability at least $1-\delta$, $m^*_{M} \leq \ceil{\frac{H^{1/2}}{2}}$.
Suppose player 2 is Adversarial, with a sequence of action distributions $\{\pi^{(2)}_t\}$ such that, for any $M \geq H^{1/2} - K$ and $i$, $\frac{1}{M} \sum_{t=i+1}^{i+M} {\mathbf{v}^{(1)}_{\textsc{M}}}^\intercal \mathbf{R}^{(2)} \pi^{(2)}_t \leq \mu_{\textsc{E},\epsilon}^{(2)} - \eta_{m}$. Then, if $\phi_M$ is used, with probability at most $\frac{kH^{1/2}\delta}{T}$, $m^*_{M} < \infty$.
\end{lemma}
Our main result, Theorem \ref{hedge}, claims that 1)
against each of our target classes,
LAFF achieves a regret bound of the same order
as Optimistic Q-learning
in single-agent MDPs \citep{optQ},
and 2) LAFF satisfies non-exploitability.
\begin{theorem}
\label{hedge}
Let $\mathcal{C}$ be the set of player 2 algorithms that are any of the following:
\begin{itemize}
\item Adversarial, with a sequence of action distributions $\{\pi^{(2)}_t\}$ such that $\frac{1}{M} \sum_{t=i+1}^{i+M} {\mathbf{v}^{(1)}_{\textsc{M}}}^\intercal \mathbf{R}^{(2)} \pi^{(2)}_t \leq \mu_{\textsc{E},\epsilon}^{(2)} - \eta_{m}$ for any $M \geq T^{1/4}$ and $i$,
\item Follower, with $V^{(2)} \in \{0, \mu_{\textsc{E},\epsilon}^{(2)}\}$, or
\item Bounded Memory, with
$\mu_*^{(1)} \geq V^{(1)}$.
\end{itemize}
Let $\eta_{m}$ and $\eta_{e}$ satisfy the conditions of Lemma \ref{conditional_experts}.
Then, with probability at least $1-5\delta$, LAFF satisfies:
\begin{align*}
\max_{\mathcal{C}} \mathcal{R}(T) &= \mathcal{O}(\mathcal{R}_{Q}(T, \delta/T)).
\end{align*}
Further, with probability at least $1-6\delta$, LAFF is
$(V^{(1)},\eta_{e})$-non-exploitable
when there exists an enforceable EBS.
\end{theorem}
If there is no enforceable EBS, $\mu_{\textsc{E},\epsilon}^{(2)} = \mu_{\textsc{S}}^{(2)}$ and so we cannot guarantee player 2 does worse than $\mu_{\textsc{E},\epsilon}^{(2)}$ in expectation.
The class of Adversarial players for which Theorem \ref{hedge} holds is technically restrictive. However, non-exploitability requires that for each strategy (expert) used by our algorithm that could be exploited, including Conditional Maximin, we exclude from our target class some subset of opponents. That is, we cannot guarantee low Adversarial regret against players who receive more than the EBS value against maximin, because such players may exploit us.
\begin{sketch}
For each opponent class, we need to show that with high probability LAFF does not lock in to
a suboptimal expert for that class. If LAFF locks in to an expert for which the corresponding target value $\mu_j$ is \textit{greater} than the opponent's benchmark $\mu(\mathfrak{B})$, this implies LAFF consistently receives rewards such that ``regret'' with respect to $\mu_j$ grows like $\mathcal{R}_Q$, by design of $\mathcal{B}(\tau)$. But since the benchmark is less than $\mu_j$, the true regret is also bounded as desired.
We therefore only need to consider the cases of $\mu_j \leq \mu(\mathfrak{B})$. First, we know that each expert achieves at most $\mathcal{R}_Q$ regret against its target opponent class, by, respectively: the definitions of $\mathcal{R}_Q$ (for non-exploitative Bounded Memory) and maximin (for Adversarial), and Lemma \ref{followregret} (for Followers).
Lemma \ref{conditional_experts} ensures with high probability that $\phi_F$ and $\phi_M$ do not switch to $\phi_E$ when not exploited, so they inherit the desired regret bounds.
Then, we need only show that once LAFF reaches the expert
whose target class matches the opponent
(thus guaranteeing low regret using that expert), with high probability LAFF does not switch.
But
if using the corresponding expert gives LAFF low regret with respect to $\mu(\mathfrak{B}) \geq \mu_j$, then its rewards are sufficiently high that the condition for switching experts (line \ref{baselinecheck} of Algorithm \ref{followfirst}) never holds. The first claim of the theorem follows.
\begin{figure*}[ht]
\centering
\begin{tabular}{ccc}
\ \ Unconditional Follower (Q-Learning) & \ \ \ \ \ \ \ \ Conditional Follower (LAFF) & \ \ \ \ \ \ \ Bounded Memory (FTFT) \\
\includegraphics[width=5.3cm]{figures/1.png} & \includegraphics[width=5.3cm]{figures/2.png} & \includegraphics[width=5.3cm]{figures/3.png} \\
\end{tabular}
\begin{tabular}{cc}
\ \ \ \ \ \ \ \ \ Adversarial (Manipulator) & \ \ \ \ \ \ \ \ Exploitative (Bully) \\
\includegraphics[width=5.3cm]{figures/4.png} & \includegraphics[width=5.3cm]{figures/5.png} \\
\end{tabular}
\caption{The first four plots show LAFF's average regret, in each of 11 games detailed in the Appendix, for the following opponents: Unconditional Follower (Q-Learning), Conditional Follower (LAFF), Bounded Memory (FTFT), Adversarial (Manipulator). The last plot shows the regret of an Exploitative (Bully) algorithm against LAFF.}
\label{fig:regrets}
\end{figure*}
To show non-exploitability, suppose LAFF locks in to the first instance of $\phi_F$. By Lemma \ref{conditional_experts}, $\phi_F$ detects evidence of exploitation sufficiently early that the time remaining in the game is linear in $T$. After detecting exploitation, $\phi_F$ plays the same policy as $\phi_E$. But by Lemma \ref{followregret}, against this policy player 2 cannot guarantee an average reward greater than $\mu_{\textsc{E},\epsilon}^{(2)}$ plus a term that vanishes at a rate of $T^{-1/2}$. The second claim of the theorem follows for the other possible locked-in experts as well by considering two facts. First, whenever $\phi_E$ or $\phi_B$ is used, Lemma \ref{followregret} again bounds player 2's rewards, since by Pareto efficiency of the EBS player 2's rewards from the Bully solution cannot exceed $\mu_{\textsc{E},\epsilon}^{(2)}$. Second, if LAFF reaches $\phi_M$, again Lemma \ref{conditional_experts} ensures sufficiently fast detection of exploitation with high probability.
\end{sketch}
\section{Numerical Experiments}\label{sec:experiments}
Code for the experiments in this section is available on
Github.\footnote{\url{https://github.com/digiovannia/ad_expl}}
We evaluate LAFF by three empirical metrics. First,
we find LAFF's empirical regret against one
algorithm from each target class.
Second, LAFF and a set of top-performing repeated games algorithms compete in a round-robin tournament. For each algorithm, we find its rewards against its best response algorithm in this set,
and check if it is in a learning equilibrium by applying a Nash equilibrium solver \citep{Knight2018} to the matrices of empirical rewards for algorithm pairs.
These criteria evaluate exploitability: more exploitable algorithms have lower rewards against algorithms that optimize against them,
and an exploitable algorithm cannot be in equilibrium with itself unless the fairness threshold $V^{(1)}$ is low. Finally, we perform a replicator dynamic simulation \citep{CO18}. Each generation, the algorithms' fitness values are computed as averages of the round-robin scores weighted by the distribution of the population of algorithms. Then, the population distribution is updated in proportion to fitness. This evaluates how well a given algorithm performs when the distribution of its opponents is determined by those algorithms' own performance.
Exploitability is thus implicitly penalized by accounting for opponents' incentives.
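The replicator update described above admits a short sketch, assuming a matrix of round-robin scores is given (the function and argument names are illustrative; the experiments in this section use the minimum-over-sides fitness described later):

```python
import numpy as np

def replicator(scores, pop, generations=100):
    """Discrete replicator dynamic. scores[i, j] is algorithm i's average
    round-robin reward against algorithm j; pop is the initial population
    distribution over algorithms."""
    scores = np.asarray(scores, dtype=float)
    pop = np.asarray(pop, dtype=float)
    for _ in range(generations):
        fitness = scores @ pop       # population-weighted average score
        pop = pop * fitness          # shares grow in proportion to fitness
        pop = pop / pop.sum()        # renormalize to a distribution
    return pop
```

An algorithm whose fitness exceeds the population average gains share each generation, so exploitable algorithms are penalized once the players who exploit them proliferate.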
Details on the implementation of these experiments are in the Appendix. We set $V^{(1)} = \mu_{\textsc{E},\epsilon}^{(1)}$.
Our set of competitors to LAFF consists of Bounded Memory (Bully, Forgiving Generalized Tit-for-Tat or FTFT), Follower (M-Qubed, Q-Learning, Fictitious Play), and expert (Manipulator, S++) algorithms. See Appendix for details and sources.
We chose these algorithms because, first, they performed
well
in a repeated games tournament \citep{CO18},
and second,
they cover our opponent classes.
S++ and Manipulator do not fall cleanly into any of those classes, but they are the closest comparisons in previous literature to LAFF, since they
adapt to a variety of opponents by switching between Leader and Follower experts.
To ensure sufficient diversity of test games, we choose games based on the taxonomy of Figure 1 in \citet{topology}. Six game families
are categorized by the structures of their Nash equilibria.
We use two games from each family, one with symmetric rewards and one with asymmetric, except Cyclic, which has no symmetric games (see Appendix).
\begin{table*}
\centering
\caption{Rewards of algorithm pairs, averaged over games and trials (pure learning equilibria are highlighted in bold text, and each algorithm's reward against its best response is in blue)}
\label{tab:emp_matrix}
\begin{tabular}{ccccccccc}
\toprule
& S++ & Manipulator & M-Qubed & Bully & Q-Learning & LAFF & FTFT & FP \\
\midrule
S++ & 0.75, 0.76 & 0.73, 0.80 & \textcolor{blue}{0.73}, 0.81 & 0.65, 0.77 & 0.82, 0.76 & 0.71, 0.80 & 0.70, 0.68 & 0.72, 0.55 \\
Manipulator & 0.87, 0.68 & 0.76, 0.71 & 0.77, 0.65 & \textcolor{blue}{0.65}, 0.77 & 0.89, 0.67 & 0.70, 0.65 & 0.71, 0.60 & 0.76, 0.55 \\
M-Qubed & 0.88, \textcolor{blue}{0.68} & 0.68, 0.68 & 0.80, 0.74 & \textcolor{blue}{0.65}, 0.80 & 0.79, 0.75 & 0.76, 0.73 & 0.78, 0.65 & 0.62, 0.56 \\
Bully & 0.86, 0.61 & 0.83, \textcolor{blue}{0.60} & 0.85, \textcolor{blue}{0.61} & 0.48, 0.44 & \textbf{\textcolor{blue}{0.91}, \textcolor{blue}{0.63}} & 0.61, 0.49 & 0.72, 0.55 & 0.76, \textcolor{blue}{0.56} \\
Q-Learning & 0.82, 0.77 & 0.73, 0.83 & 0.79, 0.67 & \textbf{\textcolor{blue}{0.68}, \textcolor{blue}{0.85}} & 0.83, 0.74 & 0.71, 0.84 & 0.81, \textcolor{blue}{0.67} & 0.64, 0.56 \\
LAFF & 0.87, 0.65 & 0.71, 0.66 & 0.74, 0.72 & 0.55, 0.61 & 0.90, 0.66 & \textbf{\textcolor{blue}{0.77}, \textcolor{blue}{0.74}} & 0.80, 0.70 & 0.75, 0.57 \\
FTFT & 0.64, 0.70 & 0.49, 0.71 & 0.59, 0.76& 0.60, 0.71 & 0.59, 0.78 & \textcolor{blue}{0.61}, 0.78 & 0.80, 0.75 & 0.46, 0.72 \\
FP & 0.70, 0.73 & \textcolor{blue}{0.66}, 0.74 & 0.66, 0.55 & 0.63, 0.73 & 0.69, 0.57 & 0.61, 0.71 & 0.71, 0.60 & 0.68, 0.55 \\
\bottomrule
\end{tabular}
\end{table*}
\paragraph{\textbf{Regret Bounds}} Figure \ref{fig:regrets} shows LAFF's regret, averaged over 50 trials, in games against an algorithm from each target class, and the regret of an exploitative Bounded Memory algorithm against LAFF.
We chose Manipulator as ``Adversarial'' because it does not play the EBS and is not a pure Leader or Follower.
However, in the symmetric Unfair game, the empirical rewards indicate that Manipulator attempts to exploit LAFF,
so LAFF punishes Manipulator at the expense of the Adversarial regret guarantee.
From the plot evaluating player 2's regret, we also exclude four games where player 2's Bully solution equals the EBS, since in these cases $\mu_*^{(1)} \geq V^{(1)}$
(player 1 is not exploited by playing the optimal policy).
In most games, LAFF's regret
eventually plateaus,
while the exploitative player has linear regret, showing that LAFF is non-exploitable.
In three games,
LAFF has linear regret against an Unconditional Follower and non-exploitative Bounded Memory player. This may be due to the practical difficulty of choosing hyperparameters for tests used to decide when to switch to the next expert; these tests depend on some unknown quantities, so for our experiments, we tuned $\mathcal{B}(\tau)$ on a training set of four games that are not included in the set of 11 games for these results
(see Appendix).
Longer time horizons may be required for
the conditions on $\eta_{e}$ in Lemma \ref{conditional_experts} to hold.
We used a horizon of $T=2 \cdot 10^5$ to be on
the same approximate scale as experiments in other works on repeated games \citep{CG10, LS05, C14}.
\begin{figure}[ht]
\centering
\includegraphics[width=7cm]{figures/rep.png}
\caption{Replicator dynamic results, where the bold curves are average population shares and shaded regions are plus and minus one standard deviation.}
\label{fig:rd_results}
\end{figure}
\paragraph{\textbf{Round Robin}} Table \ref{tab:emp_matrix} shows the average rewards of each algorithm pair across the 11 games and 50 trials,
which provide an empirical bimatrix for the \textit{learning game}, i.e., a meta-game in which users choose algorithms to deploy across different repeated games.
An algorithm's reward against its best response (highlighted in blue) measures how much it bullies when possible and avoids exploitation.
Both as player 1 and player 2, LAFF is second by this metric, behind Bully. We also highlight the pure strategy Nash equilibria of this learning game (in bold), noting that LAFF is in a learning equilibrium with itself. Unfortunately, the pairing in which Q-Learning follows Bully is also an equilibrium. Thus there is an equilibrium selection problem, e.g., both users might choose Bully and receive very low rewards. However, in practice it may be easier for users to coordinate on both using LAFF, because there is no conflict over choosing which side is the Leader (Bully) versus the Follower (Q-Learning).
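The pure-strategy learning equilibria highlighted in Table \ref{tab:emp_matrix} can be recovered from the empirical bimatrix by a brute-force best-response check; this is a sketch, not the solver of \citet{Knight2018}, which also handles mixed equilibria:

```python
import numpy as np

def pure_equilibria(R1, R2):
    """Pure-strategy Nash equilibria of a bimatrix learning game:
    cell (i, j) is an equilibrium if i is a best response to column j
    under R1 and j is a best response to row i under R2."""
    R1, R2 = np.asarray(R1), np.asarray(R2)
    eqs = []
    for i in range(R1.shape[0]):
        for j in range(R1.shape[1]):
            if R1[i, j] >= R1[:, j].max() and R2[i, j] >= R2[i, :].max():
                eqs.append((i, j))
    return eqs
```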
\paragraph{\textbf{Replicator Dynamic}}
On average over 1000 runs,
LAFF converges to 100\% of the population in the pool of algorithms (Figure \ref{fig:rd_results}), based on fitness computed as the \textit{minimum} of an algorithm's average reward over the set of games when playing as player 1 versus player 2. This metric matches the motivation
for the EBS; algorithm users will not know \textit{a priori} which of the two ``sides'' of the game they will be in. Thus, they may prefer their algorithm to cooperate with itself (maximize an egalitarian objective), instead of bullying its copy in hopes of being on the side of the bully.
\section{Discussion}
When choosing algorithms for multi-agent interactions, users must trade off robustness to the variety of algorithms they might face against avoiding giving other users an incentive to exploit them \citep{SLRC21}.
We have presented an algorithm for repeated games that balances these desiderata.
Both properties can facilitate cooperation between learning agents, while still allowing them to accept generous offers.
If LAFF faces an agent who ``follows'' fair, Pareto efficient bargaining proposals, the Egalitarian Leader leads them to a mutual benefit over their security values.
If the other agent's fairness standard is different, the Conditional Follower can follow this alternative proposal using RL if it is not exploitative;
otherwise, the exploitation penalty encourages the other player to be more cooperative.
Against exploitable agents, the Bully Leader can benefit from a more self-interested bargain.
Finally, if the other player is unwilling to cooperate at all but is not exploitative, Conditional Maximin ensures safety. In future work, more experts can be added
based on agent classes that we have neglected. For example, while LAFF includes Leader experts only for the extreme cases in which player 2 has a high or minimal fairness standard, one could add Leaders for other bargaining solutions.
The biggest limitations of our approach are restrictive assumptions required for our non-exploitability criterion, and the strictness of this criterion. The margin $\eta_{e}$ is small only for sufficiently large time horizons,
hence the linear regret in some of our experiments. Though LAFF successfully punishes players against whom it receives less than fair rewards, this is only strategically necessary when such players \textit{benefit} from playing this way (genuine ``exploitation'').
It may not be practically necessary
to modify the experts to not punish
when the opponent also does worse,
because an opponent would not have an incentive to lead with a Pareto inefficient policy.
Finally, we note that
our approach is not intended to provide the optimal balance of the adaptability-exploitability tradeoff;
in particular, keeping a fixed fairness threshold
may not be ideal if it
prevents an algorithm from
cooperating with algorithms
that follow other intuitively ``fair'' standards \citep{SLRC21}.
\begin{contributions}
Both authors conceived and carried out the research project jointly. A.D.~wrote the paper and code for
numerical experiments. A.T.~helped edit the paper.
\end{contributions}
\begin{acknowledgements}
A.D.~acknowledges the support of a grant
from the Center on Long-Term Risk Fund.
\end{acknowledgements}
\subsubsection*{References}}
\usepackage{mathtools}
\usepackage{booktabs}
\usepackage{tikz}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{hyperref}
\usepackage{url}
\usepackage{amsfonts}
\usepackage{nicefrac}
\usepackage{microtype}
\usepackage{lipsum}
\usepackage{graphicx}
\usepackage{setspace}
\usepackage{fullpage,graphicx,psfrag,amsmath,amsfonts,verbatim,tabularx,multirow,amssymb}
\usepackage{color,soul}
\usepackage{placeins}
\usepackage{algorithm}
\usepackage[noend]{algpseudocode}
\usetikzlibrary{bayesnet}
\usetikzlibrary{arrows}
\usepackage[justification=centering]{caption}
\usepackage{subcaption}
\newcommand{\multilinecomment}[1]{}
\usetikzlibrary{backgrounds}
\usepackage{amsthm}
\usepackage{natbib}
\newcommand{\swap}[3][-]{#3#1#2}
\newcount\Comments
\Comments=0
\definecolor{darkgreen}{rgb}{0,0.5,0}
\newcommand{\kibitz}[2]{\ifnum\Comments=1\textcolor{#1}{#2}\fi}
\newcommand{\ambuj}[1]{\kibitz{darkgreen}{[AT: #1]}}
\newcommand{\adigi}[1]{\kibitz{blue}{[AD: #1]}}
\newtheorem{theorem}{Theorem}
\newtheorem{lemma}{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{fact}[theorem]{Fact}
\newtheorem{definition}{Definition}
\newtheorem{assumption}{Assumption}
\newtheorem{exmp}{Example}[section]
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\makeatletter
\newtheorem*{rep@theorem}{\rep@title}
\newcommand{\newreptheorem}[2]{%
\newenvironment{rep#1}[1]{%
\def\rep@title{#2 \ref{##1}}%
\begin{rep@theorem}}%
{\end{rep@theorem}}}
\makeatother
\newreptheorem{theorem}{Theorem}
\newreptheorem{lemma}{Lemma}
\newenvironment{sketch}{\paragraph{\normalfont \textit{Proof Sketch.}}}{\hfill$\square$}
\newcommand{\centered}[1]{\begin{tabular}{l} #1 \end{tabular}}
\DeclarePairedDelimiter{\ceil}{\lceil}{\rceil}
\DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor}
\newtheorem{remark}[theorem]{Remark}
\usepackage{xr-hyper}
\makeatletter
\newcommand*{\addFileDependency}[1]{%
\typeout{(#1)}
\@addtofilelist{#1}
\IfFileExists{#1}{}{\typeout{No file #1.}}
}
\makeatother
\newcommand*{\myexternaldocument}[1]{%
\externaldocument{#1}%
\addFileDependency{#1.tex}%
\addFileDependency{#1.aux}%
}
\title{Balancing Adaptability and Non-exploitability in Repeated Games}
\author[1]{\href{mailto:<adigi@umich.edu>?Subject=Your UAI 2022 paper}{Anthony~DiGiovanni}{}}
\author[1]{Ambuj~Tewari}
\affil[1]{%
Department of Statistics\\
University of Michigan\\
Ann Arbor, MI, USA
}
\begin{document}
\maketitle
\begin{abstract}
We study the problem of adaptability in repeated games: simultaneously guaranteeing low regret for several classes of opponents.
We add the constraint that our algorithm is non-exploitable, in that the opponent lacks an incentive to use an algorithm against which we cannot achieve rewards exceeding some ``fair'' value.
Our solution is an expert algorithm (LAFF),
which searches within a set of sub-algorithms that are optimal for each opponent class,
and
punishes evidence of exploitation by switching to a
policy that enforces a fair solution.
With benchmarks that depend on the opponent class, we first show that LAFF has sublinear regret uniformly over
these classes.
Second, we show that LAFF discourages exploitation,
because exploitative opponents have linear regret.
To our knowledge, this work is the first to provide guarantees for both regret and non-exploitability in multi-agent learning.
\end{abstract}
\section{Introduction}\label{sec:intro}
General-sum repeated games
represent interactions between agents aiming to maximize their respective reward functions, with the possibility of compromise over conflicting goals. Despite their simplicity, achieving high rewards in such games is a challenging learning problem due to the complex space of
possible opponents.
Both the behavior of a given opponent
throughout
a game, and that opponent's choice of learning algorithm, may depend on one's own algorithm.
\citet{C20}
argues,
based on empirical studies of repeated game tournaments, that a successful agent must achieve two goals. First, it must optimize its actions with respect to its beliefs about the opponent. Second, it should act such that
the opponent forms beliefs
motivating a response that is beneficial to the agent.
In particular, multi-agent reinforcement learning (MARL) features the following tradeoff: how to adapt to a variety of
potential opponents,
while also actively shaping other agents' models of
oneself
such that they respond with cooperation, rather than exploitation.
If
an agent
commits to a
fixed policy
to ``lead'' the other player's best response \citep{LS01}, it may perform arbitrarily poorly against players that do not converge to such a response. This motivates the design of adaptive algorithms that try to lead,
but can
retreat
to a ``Follower'' (best response) approach if doing so gives greater rewards \citep{PS05, ICML10-chakraborty}.
An effective algorithm in this class is S++ \citep{C14}, which,
due to its
Follower sub-algorithm, has the drawback that it is exploitable\textemdash that is, it rewards agents insisting on unfair bargains (``bully'' strategies)
\citep{CO18, SLRC21}.
A simple motivating example of Follower exploitability is the game of Chicken (Figure \ref{fig:chicken}),
between players Row and Column.
Suppose Column knows
Row
will take
the apparently optimal action 1
if Column
repeats action 2.
Column
will then want to use the Leader strategy of committing to action 2 to gain the highest reward. Row thus only gets reward 0.25, and if Column has truly committed, an attempt by Row to dissuade this strategy by taking action 2 would give both players reward 0.
A cooperative outcome, e.g., alternating between the off-diagonal cells, could be achieved if Row's learning algorithm were designed to \textit{publicly disincentivize} commitments
to the exploitative Leader strategy.
\begin{figure}[ht]
\centering
\begin{tabular}{|c|c|}
\hline
0.5, 0.5 & 0.25, 1 \\
\hline
1, 0.25 & 0, 0\\
\hline
\end{tabular}
\caption{Reward bimatrix for Chicken.}
\label{fig:chicken}
\end{figure}
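The commitment argument above can be checked directly on the bimatrix of Figure \ref{fig:chicken}; the code below uses 0-indexed actions, so the figure's ``action 2'' is index 1:

```python
import numpy as np

# Chicken bimatrix from Figure 1: Row's rewards R1, Column's rewards R2.
R1 = np.array([[0.5, 0.25], [1.0, 0.0]])
R2 = np.array([[0.5, 1.0], [0.25, 0.0]])

def best_response_row(col_action):
    """Row's best response when Column commits to col_action."""
    return int(R1[:, col_action].argmax())
```

Against a Column commitment to index 1, Row's best response is index 0, giving Row only 0.25 while Column receives 1; if Row instead retaliates with index 1, both players receive 0.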
MARL research has largely neglected the latter half of the adaptability vs. non-exploitability tradeoff.
Existing algorithms are either evaluated solely by
their
rewards \textit{conditional} on given opponents \citep{PS05, C14}, or, when the evaluation criterion does account for the incentives of algorithm selection,
the pool of competitor algorithms typically excludes bully strategies \citep{CG10}.
Previous MARL algorithms addressing the adaptability half of the tradeoff lack finite-time guarantees on rewards.
We aim to provide a theoretically grounded algorithm for repeated games that is both adaptable, by using Leader and Follower sub-algorithms, and non-exploitable.
More broadly,
this paper addresses a challenge of interest in several
areas of machine learning:
designing algorithms that account for how the distribution of data the algorithms are applied to may change based on the choice of the algorithms themselves.
\paragraph{\textbf{Related work}}
Previous algorithms for repeated games have
combined Leader and Follower modules,
aiming for
the following guarantees: worst-case safety, best response to players with bounded memory, and convergence in self-play to Pareto efficiency, i.e., an outcome in which no player can do better without the other doing worse \citep{PS04}.
Like ours,
these algorithms aim for adaptability,
but they do not have regret guarantees --- the desired
properties are only
shown to hold asymptotically.
Manipulator \citep{PS05} achieves these properties by starting with a fixed strategy
that maximizes the user's rewards conditional on the opponent using a best response, and switching to
reinforcement learning (RL) with a safety override if
that
strategy does not yield its target rewards.
Related to the self-play guarantee,
we prove a more general property of Pareto efficiency against effective RL algorithms (see Section \ref{sec:sub:rgclass}).
Like Manipulator, our approach tests
sub-algorithms sequentially.
S++ \citep{C14}
has empirically strong performance
on
the guarantees above.
However,
neither of these algorithms guarantees non-exploitability.
Although to our knowledge
no previous works have proven non-exploitability in our sense,
several algorithms are designed to
achieve ``fair'' Pareto efficiency
in self-play without using
Follower approaches that would be exploitable.
\citet{LS05}'s algorithm for
computation of
Nash equilibria, like our Leader sub-algorithms, enforces a Pareto efficient outcome
by punishing deviations.
If an agent played this equilibrium, which satisfies properties of symmetry similar to
the outcome our Egalitarian Leader sub-algorithm aims for, it would be non-exploitable.
However, committing to this equilibrium
precludes
learning a best response to fixed strategies that offer higher rewards than the cooperative solution, or exploiting adaptive players, which our Conditional Follower and Bully Leader sub-algorithms achieve, respectively.
In two-player bandit problems where the reward bimatrix must be learned, UCRG \citep{TD20} has
near-optimal
regret in self-play with respect to the egalitarian bargaining solution
(Section \ref{bargtheory}).
However, it cannot provably cooperate with
agents other than
itself, learn best responses, or exploit adaptive players.
Our objectives of adaptability and non-exploitability are inspired by work on learning equilibrium \citep{BT04, fcl, CR21}, a solution concept in which players' \textit{learning algorithms} are in a Nash equilibrium, beyond merely the equilibrium of an individual game itself.
This objective accounts for the dependence of the problems faced by multi-agent learning algorithms on the design of such algorithms.
\paragraph{\textbf{Contributions}} We propose an algorithm (LAFF) that, to our knowledge, is the first proven to have both strong performance against different classes of players in repeated games and a guarantee of non-exploitability, formalized in Section \ref{sec:sub:regretdef}. Specifically, these classes consist of stationary
algorithms (``Bounded Memory''), unpredictable adversaries (``Adversarial''), and adaptive RL agents (``Follower'').
LAFF's modular design
allows for extensions to a broader variety of opponent classes in future work. We propose regret metrics appropriate for games against Followers, based on the goal of Pareto efficiency. Our method of proof of adaptability and non-exploitability is novel, applying ``optimistic'' principles at two levels. First, LAFF starts with the sub-algorithm (or \textit{expert}) that would give the highest expected rewards
if the opponent were
in that expert's target class (``potential''), then proceeds through experts in descending order of
potential.
Second, LAFF chooses whether to switch experts by comparing the potential
of the active expert with its empirical average reward plus a slack term, which decreases with the time for which the expert is used.
For non-exploitability and regret against Followers, we use the properties of an enforceable bargaining solution (see Section \ref{bargtheory}) to upper-bound the other player's rewards.
\section{Preliminaries}\label{sec:prelim}
We study a special class of Markov games: repeated games with a bounded memory state representation \citep{PS05} and public randomization.
\subsection{Setup and Opponent Classification}\label{sec:sub:rgclass}
\noindent Consider a repeated game over $T$ time steps, defined for players $i=1,2$ by action spaces $\mathcal{A}^{(i)}$,
reward matrices $\mathbf{R}^{(i)}$,
and a fixed player memory length $K \in \mathbb{N}$. Here, all $\mathbf{R}^{(i)}(a^{(1)}, a^{(2)}) \in [0,1]$ are known by both players.
At time~$t$ the following random variables are drawn: $S_t$ for state, $A_t^{(i)}$ for actions, and $R_t^{(i)} = \mathbf{R}^{(i)}(A_t^{(1)},A_t^{(2)})$ for rewards.
A state space $\mathcal{S} := (\mathcal{A}^{(1)})^K \times (\mathcal{A}^{(2)})^K \times \{0, 1\}^{2K+2}$, and transition probabilities $\mathcal{P}(s'|s,a^{(1)},a^{(2)})$ between states, are induced by two features:
(1) the tuple of both players' last $K$ actions, and (2) the tuple of the last $K$ and current outcome of a randomization signal, for each player. (See Section 2.1.2 of \citet{MS06}.)
Thus, players condition their
actions
on their memory of the last $K$ time steps,
and a signal that permits
correlated action choices.
Formally, let $(w^{(1)}_t, w^{(2)}_t) \in [0, 1]^2$ be weights chosen by the respective players at time $t$,\footnote{We restrict to cases where players commit to a fixed weight, so the effective action space is finite. See the Appendix for details.} and draw $X_t \sim \text{Unif}[0,1]$ independent of all other random variables in the game. Then, letting $y_t^{(i)}$ be the realized value of $Y_t^{(i)} := \mathbb{I}[X_t < w^{(i)}_t]$, the second feature at time $t$ is
$(y_{t-K}^{(1)},...,y_t^{(1)},y_{t-K}^{(2)},...,y_t^{(2)})$.
This allows the players to correlate actions through the public signal $X_t$, even if one player unilaterally generates the signal.
For instance, in Chicken (Figure \ref{fig:chicken}),
players could flip a fair coin ($w^{(1)}_t = w^{(2)}_t = 0.5$)
at
each time step
and play the pair of actions
leading to the top-right cell
when it comes up heads, otherwise
play the bottom-left cell.
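This coin-flip scheme can be sketched as a short simulation (names are illustrative), with both players conditioning on the same public signal; the long-run average reward for each player is $0.5 \cdot 0.25 + 0.5 \cdot 1 = 0.625$, above the 0.5 of the top-left cell:

```python
import random

# Chicken bimatrix (Figure 1), 0-indexed actions.
R1 = [[0.5, 0.25], [1.0, 0.0]]
R2 = [[0.5, 1.0], [0.25, 0.0]]

def correlated_play(steps, w=0.5, seed=0):
    """Both players observe Y_t = 1[X_t < w] for a public X_t ~ Unif[0,1]:
    heads -> top-right cell (0, 1), tails -> bottom-left cell (1, 0)."""
    rng = random.Random(seed)
    total1 = total2 = 0.0
    for _ in range(steps):
        x = rng.random()                       # public signal X_t
        a1, a2 = (0, 1) if x < w else (1, 0)   # correlated action pair
        total1 += R1[a1][a2]
        total2 += R2[a1][a2]
    return total1 / steps, total2 / steps
```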
In this framework, at each time step each player has a choice of both a weight $w_t^{(i)}$ and policy $\pi^{(i)}_t: \mathcal{S} \to \Delta^{|\mathcal{A}^{(i)}|}$, a mapping from states to distributions over actions.
Given a fixed policy of player 2, a repeated game is a
Markov decision process (MDP) given by
$(\mathcal{S}, \mathcal{A}^{(1)}, r, p)$
as follows.
Let $a^{(i)}(s)$ be the last action of player $i$
that defines state $s$.
Here, $r: \mathcal{S} \times \mathcal{A}^{(1)} \to [0,1]$ is
$r(s, a) = \mathbf{R}^{(1)}(a^{(1)}(s), a^{(2)}(s))$,
and $p:\mathcal{S} \times \mathcal{A}^{(1)} \times \mathcal{S} \to [0,1]$ is
$p(s'|s,a) = \sum_{a^{(2)}} \mathcal{P}(s'|s,a,a^{(2)}) \pi^{(2)}(a^{(2)}|s)$.
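The marginalization defining $p$ can be sketched directly, assuming the joint kernel $\mathcal{P}$ and the fixed policy $\pi^{(2)}$ are available as arrays (a hypothetical helper, not part of the paper's code):

```python
import numpy as np

def induce_mdp(P, pi2):
    """Marginalize player 2's fixed Markov policy out of the joint kernel.

    P:   shape (S, A1, A2, S), entries P[s, a1, a2, s'].
    pi2: shape (S, A2), rows pi2(. | s).
    Returns p of shape (S, A1, S) with
    p[s, a1, s'] = sum_a2 P[s, a1, a2, s'] * pi2[s, a2].
    """
    P = np.asarray(P, dtype=float)
    pi2 = np.asarray(pi2, dtype=float)
    return np.einsum('sabt,sb->sat', P, pi2)
```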
A policy is called Markov if it is conditioned only on the current state.
The problem faced by our learner, player 1, depends on which of the following classes player 2's algorithm is in:
\begin{enumerate}
\item \textit{Bounded Memory}: (i) Player 2 uses a constant $w^{(2)}$, reported at the start of the game; (ii) $\pi^{(2)}$ is Markov
and does not depend on time or player 1's signals $w^{(1)}_t$ or $y_t^{(1)}$; and (iii) for all $s, a^{(2)}$ we have $\pi^{(2)}(a^{(2)}|s) > 0$.\footnote{This relatively strong condition is needed for a concentration result in our analysis, ruling out cases where players remain in a transient state for an unknown time. We need to know the exit time from the transient states to compute the quantity $\overline{r}_{i,
\tau}^{(2)}$ used by one of our experts. Section \ref{sec:experiments} shows strong results against a Bounded Memory player (FTFT) for which this condition does not hold.}
\item \textit{Adversarial}: Player 2 selects actions according to any arbitrary distribution, which may depend on the history of play and on player 1's policy at each time step.
\item \textit{Follower}: A Follower learns a best response when player 1 is ``eventually stationary'' (formalizing the follower concept in \citet{LS01}), and when the value of that best response meets player 2's standard of fairness. For some fairness threshold $V^{(2)} \geq 0$ (depending on the game), player 2's algorithm has the following properties.
Suppose that after time $T_0$, player 1 always plays a Bounded Memory algorithm (without condition 3), which induces an MDP of finite diameter $D$ where player 2's optimal average reward is at least $V^{(2)}$.
Then with probability at least $1-\delta$, player 2's regret up to time $T$ (see Section \ref{sec:sub:regretdef}) is bounded by $C_1T_0 + C_2D(SAT\log(T/\delta))^{1/2}$ for constants $C_1, C_2$.
\end{enumerate}
A repeated game against a Bounded Memory player is equivalent to a communicating MDP \citep{puterman}.
A Follower formalizes an agent that models \textit{our} agent as an MDP (Leader), and the regret bound in our definition is of a standard form for RL algorithms \citep{optQ}.
Many MARL algorithms take this approach at least partly \citep{PS05, ICML10-chakraborty, CG10}, hence this is a reasonable class to consider.
For example,
\citet{LS05}'s algorithm,
which plays a
certain
sequence of actions
and punishes deviations from that sequence,
is Bounded Memory ---
this algorithm does not change its policy
in response to the other player,
but its policy conditions on past actions.
A standard RL algorithm,
which would learn the sequence played by \citet{LS05}'s algorithm
and converge to
an optimal policy against it,
and which is a component of more complex repeated games
algorithms like Manipulator and S++,
is a case of a Follower.
As discussed in \citet{C20},
a large proportion of top-performing algorithms are Bounded Memory (Leaders) or Followers, or switch between the two.
These classes
illustrate fundamental
approaches to multi-agent learning
(thus, likely opponents
that our algorithm would face):
Either an agent behaves consistently, trying to shape the learning opponent’s behavior (Bounded Memory), or
the agent changes policies in a process of learning how the opponent behaves and computing an optimal response to that opponent, possibly subject to fairness standards as they try to avoid exploitation (Follower).
The Adversarial class accounts for opponent behavior between these two extremes, which is difficult to learn in generality, but a
worst-case guarantee
can still be achieved.
We thus restrict to guarantees against formalizations of these classes.
Bounds against a wider variety of opponents would be less theoretically tractable, insofar as finding the optimal strategy against one class interferes with performance against another.
(For example, \citet{PS05} note that in the repeated Prisoner's Dilemma,
it is
impossible for an algorithm to guarantee the best
response to an opponent
that may play either grim trigger
---
``defect if and only if either
player defected last round''
---
or ``always cooperate.'')
Extending to other opponent classes is an important direction for future work.
\subsection{Background on Bargaining Theory}\label{bargtheory}
\noindent To define appropriate
optimality criteria
for these opponent classes and construct corresponding experts, we use several concepts from bargaining theory.
We also illustrate these
concepts in the game of Chicken
from the introduction
(Example \ref{example:barg_concepts}).
Define the \textit{security values}
$\mu_{\textsc{S}}^{(i)} := \max_{\mathbf{v}_i} \min_{\mathbf{v}_{-i}} \mathbf{v}_1^\intercal \mathbf{R}^{(i)} \mathbf{v}_2$,
i.e., the rewards that each player can guarantee
regardless of their opponent's actions,
with player 1's maximin strategy as $\mathbf{v}^{(1)}_{\textsc{M}} = \argmax_{\mathbf{v}_1} \min_{\mathbf{v}_2} \mathbf{v}_1^\intercal \mathbf{R}^{(1)} \mathbf{v}_2$.
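For intuition, the security value can be computed numerically. A minimal sketch for a two-action row player using grid search over mixtures (in general one would solve the standard maximin linear program; the inner minimum over pure columns suffices because the worst case is attained at a vertex):

```python
import numpy as np

def security_value(R, grid=10001):
    """Approximate mu_S = max_x min_y x^T R y for a 2-action row player
    by searching mixtures x = (p, 1-p) on a fine grid."""
    best_v, best_p = -np.inf, None
    for p in np.linspace(0.0, 1.0, grid):
        v = (np.array([p, 1.0 - p]) @ R).min()  # worst case over pure columns
        if v > best_v:
            best_v, best_p = v, p
    return best_v, best_p

# Matching pennies (rewards in [0,1]): the maximin strategy is uniform
# and the security value is 0.5.
v, p = security_value(np.array([[1.0, 0.0], [0.0, 1.0]]))
```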
Let $\mathcal{G} := \{(\mathbf{R}^{(1)}(i,j),$ $\mathbf{R}^{(2)}(i,j)) \ | \ i \in \mathcal{A}^{(1)}, j \in \mathcal{A}^{(2)}\}$,
the set of reward pairs achievable
by pure actions in the game.
An important set of rewards in the computation of enforceable bargaining solutions is the convex polytope $\mathcal{U} := \text{Conv}(\mathcal{G}) \cap \{(u_1, u_2) \ | \ u_1 \geq \mu_{\textsc{S}}^{(1)}, u_2 \geq \mu_{\textsc{S}}^{(2)}\}$,
reward pairs that are achievable by randomizing over joint actions and give each player at least their security value.
One reward pair satisfying several desirable properties is the egalitarian bargaining solution (EBS) \citep{TD20}, given by $(\mu_{\textsc{E}}^{(1)}, \mu_{\textsc{E}}^{(2)}) := \argmax_{(u_1, u_2) \in \mathcal{U}} \min_{i=1,2}\{u_i - \mu_{\textsc{S}}^{(i)}\}$.
The reward pairs over which we search for optimal benchmark values,
described in Section \ref{sec:sub:regretdef},
are subject to the following constraint of enforceability. To our knowledge, this definition, including the formalization of enforceability for finite punishment lengths, has not been provided in previous work on non-discounted games. However, see Definition 2.5.1 in \citet{MS06} for the discounted case.
\begin{definition}
\label{def:enf}
Let $(u_1, u_2) \in \mathcal{U}$ be a convex combination
of points in some set of joint actions $\mathcal{X}$.
Let $r(\mathcal{X}) := \max_{(x_1,x_2) \in \mathcal{X}} \{\max_{j \neq x_2} \mathbf{R}^{(2)}(x_1,j) - \mathbf{R}^{(2)}(x_1,x_2)\}$
be player 2's deviation profit.
Then $(u_1, u_2)$ is \textbf{$\epsilon$-enforceable}, relative to a memory length $K$ and $\epsilon > 0$, if:
\begin{align*}
Ku_2 &\geq K\mu_{\textsc{S}}^{(2)} + r(\mathcal{X}) + \epsilon.
\end{align*}
\end{definition}
Intuitively, if player 2 does not deviate from player 1's desired action sequence, player 2 receives
$u_2$
on average
for each of $K$ steps. If player 2 deviates, gaining at most $r(\mathcal{X})$ profit, player 1 may punish with player 2's security value for $K$ steps. We call the total sequence reward ``enforceable'' if
it exceeds the total deviation reward
by at least $\epsilon$.
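Definition \ref{def:enf} is directly checkable. A small sketch (hypothetical reward entries; the numbers match the Chicken discussion in Example \ref{example:barg_concepts}):

```python
def deviation_profit(X, R2):
    """r(X): player 2's largest one-shot gain from deviating from a
    prescribed joint action (x1, x2) in X (may be negative)."""
    return max(
        max(R2[x1][j] for j in range(len(R2[x1])) if j != x2) - R2[x1][x2]
        for (x1, x2) in X
    )

def is_enforceable(u2, mu_s2, r_X, K, eps):
    """Definition of eps-enforceability: K*u2 >= K*mu_S^(2) + r(X) + eps."""
    return K * u2 >= K * mu_s2 + r_X + eps

# With u2 = 0.625, mu_S^(2) = 0.25 and r(X) = -0.25 (deviating strictly
# loses, as in Chicken), the reward pair is enforceable exactly when
# eps <= 0.375*K + 0.25.
ok  = is_enforceable(0.625, 0.25, -0.25, K=3, eps=1.0)
bad = is_enforceable(0.625, 0.25, -0.25, K=3, eps=1.5)
```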
Let $\mathcal{U}(\epsilon)$ be the set of $\epsilon$-enforceable rewards in $\mathcal{U}$. Then, the feasible region $\mathcal{U}(\epsilon)$,
used to compute an enforceable version of the EBS,
shrinks with increasing~$\epsilon$ and decreasing~$K$.
The $\epsilon$-enforceable EBS, which we will use to design one of the Leader experts, is found by solving the optimization problem from Section 3.2.4 of \citet{TD20} under the constraint in Definition \ref{def:enf}.
A similar procedure, applied to the objective of maximizing only player 1's reward, gives the Bully solution for the second
Leader expert.
We provide details on these solutions in the Appendix.
\multilinecomment{
While it has been shown that the EBS can be tractably computed absent enforceability constraints \citep{TD20}, it is nontrivial that this extends to the constrained case.
Lemma \ref{enforce}, proven in the Appendix, helps us construct the enforceability-constrained EBS.
\begin{lemma}
\label{enforce}
Consider any function $f$ that is monotone in $\mathcal{U}$, that is, if $u_1 \geq v_1$ and $u_2 \geq v_2$ then $f(u_1,u_2) \geq f(v_1,v_2)$. Then there always exists a maximizer of $f$ over $\mathcal{U}(\epsilon)$ that is a convex combination of no more than two points in $\mathcal{G}$.
\end{lemma}
The $\epsilon$-enforceable EBS, which we will use to design one of the Leader experts, is found as follows. Assign to each joint action pair $x_A := (i_1, j_1)$ and $x_B := (i_2, j_2)$ the score $\rho(x_A,x_B) := \max_{\alpha_{AB}} \min_{i=1,2}\{\alpha_{AB} \mathbf{R}^{(i)}(x_A) + (1-\alpha_{AB})\mathbf{R}^{(i)}(x_B) - \mu_{\textsc{S}}^{(i)}\}$, where $\mathbf{R}^{(i)}(x_A) := \mathbf{R}^{(i)}(i_1,j_1)$ and $\mathbf{R}^{(i)}(x_B) := \mathbf{R}^{(i)}(i_2,j_2)$, and choose the pair with the highest score \citep{TD20}.
Searching over pairs is sufficient by Lemma \ref{enforce}. We maximize $\rho$ over $\alpha_{AB}$ subject to enforceability.
For two points such that $\mathbf{R}^{(2)}(x_A) > \mathbf{R}^{(2)}(x_B)$ (order does not matter), $\epsilon$-enforceability requires:
\begin{align*}
& \alpha_{AB} \geq \frac{ r(\{x_A, x_B\}) + \epsilon + K[\mu_{\textsc{S}}^{(2)} - \mathbf{R}^{(2)}(x_B)]}{K [\mathbf{R}^{(2)}(x_A) - \mathbf{R}^{(2)}(x_B)]}.
\end{align*}
If $\mathbf{R}^{(2)}(x_A) = \mathbf{R}^{(2)}(x_B)$, then $\alpha_{AB}$ can be arbitrary as long as the first line above still holds; otherwise, this pair is not enforceable regardless of $\alpha_{AB}$.
Taking $\mathbf{R}^{(2)}(x_A) > \mathbf{R}^{(2)}(x_B)$ without loss of generality,
there are two cases to consider.
(1) If $\mathbf{R}^{(i)}(x_A) \geq \mathbf{R}^{(i)}(x_B)$ for both $i=1,2$,
both functions in the minimum have nonnegative slope, so $\rho$ is nondecreasing in $\alpha_{AB}$.
Otherwise, (2) $\rho$ has its maximum at $a = \frac{\mathbf{R}^{(2)}(x_B) - \mathbf{R}^{(1)}(x_B)}{\mathbf{R}^{(1)}(x_A) - \mathbf{R}^{(1)}(x_B) + \mathbf{R}^{(2)}(x_B) - \mathbf{R}^{(2)}(x_A)}$.
In case 1, since $\epsilon$-enforceability is a \textit{lower} bound $v(\epsilon, K)$ on $\alpha_{AB}$, the optimal $\alpha_{AB} = 1$ if that lower bound is at most 1; otherwise, this pair is not enforceable.
In case 2, if enforceability does not exclude $a$, then $\alpha_{AB} = a$. Otherwise, the non-excluded region must decrease down from $v(\epsilon, K)$ or increase up to $v(\epsilon, K)$; either way, $\alpha_{AB} = v(\epsilon, K)$ is optimal.
Finally, we also construct the Bully solution for the second Leader expert by following the procedure above, except with a ``selfish'' score $\rho(x_A,x_B) := \max_{\alpha_{AB}} \alpha_{AB} \mathbf{R}^{(1)}(x_A) + (1-\alpha_{AB})\mathbf{R}^{(1)}(x_B)$.
This is, again, a monotone function over $\mathcal{U}(\epsilon)$, so searching over pairs of joint actions suffices. If $\mathbf{R}^{(1)}(x_A) \leq \mathbf{R}^{(1)}(x_B)$, $\rho$ is nonincreasing in $\alpha_{AB}$, so as before we set
$\alpha_{AB} = v(\epsilon, K)$.
If $\mathbf{R}^{(1)}(x_A) > \mathbf{R}^{(1)}(x_B)$, we set $\alpha_{AB} = 1$.
}
\begin{exmp}
\label{example:barg_concepts}
In Chicken (Figure \ref{fig:chicken}),
both players' security value is 0.25, guaranteed by playing action 1.
The EBS is given by 50\% weight on the top-right action pair, and 50\% on the bottom-left, giving both players $0.625$.
If player~1 plays its half of either action pair in the EBS, player 2 does worse by deviating
(by a margin of at least 0.25), so no punishment
is necessary to enforce
the EBS.
Thus the EBS is enforceable for any $K$ and $\epsilon < 0.375K + 0.25$.
\end{exmp}
\subsection{Objectives}\label{sec:sub:regretdef}
\noindent The metric of regret, which we aim to minimize, varies based on the class of player 2 our algorithm faces. For a player 2 algorithm $\mathfrak{B}$, regret with respect to a benchmark $\mu(\mathfrak{B})$ is $\mathcal{R}(T) := T\mu(\mathfrak{B}) - \sum_{t=1}^T R_t^{(1)}$.
\paragraph{\textbf{Bounded Memory}} By condition 3 for Bounded Memory, player 2 induces a communicating MDP.
Let $\Pi$ be the set of time-independent deterministic Markov policies. Then the state-independent optimal average reward is $\mu_*^{(1)} := \max_{\pi^{(1)} \in \Pi} \lim_{t \to \infty} \frac{1}{t} \mathbb{E}_{ \pi^{(1)}}(\sum_{i=0}^t R_i^{(1)}|S_0)$. Here, $\mu(\mathfrak{B}) = \mu_*^{(1)}$.
\paragraph{\textbf{Adversarial}} Against an Adversarial player, an appropriate benchmark is the greatest expected value that player 1 can guarantee, no matter player 2's actions. This is player~1's security value: $\mu(\mathfrak{B}) = \mu_{\textsc{S}}^{(1)}$. Note the distinction from \textit{external regret} used in adversarial bandits and MDPs.
While the problem is trivial if player 2 is known to be Adversarial, since one can always play the maximin strategy, our challenge is to maintain low Adversarial regret without losing guarantees on other regret measures. This corresponds to \textit{safety} in multi-agent learning \citep{PS04}.
\paragraph{\textbf{Follower}} The concept of regret against a Follower is more complex.
Player 2's sequence of policies can vary significantly based on
player 1's actions.
Evaluating our algorithm
by
the maximum average reward in hindsight would have to account for this counterfactual dependence \citep{C14}.
However, by considering enforceability, we can define benchmarks by lower bounds on this maximum,
constrained
by the Follower's fairness value $V^{(2)}$.
We consider two cases depending on $V^{(2)}$, focusing for simplicity on the extremes where the Follower either accepts nothing less than the EBS or accepts any enforceable bargain. In principle, our framework could be extended for other $V^{(2)}$ values.
First, the EBS
is Pareto efficient, meaning
we cannot achieve greater than $\mu_{\textsc{E}}^{(1)}$ without player 2 receiving less than $\mu_{\textsc{E}}^{(2)}$.
When
the EBS can be enforced
with a fixed policy, $\mu_{\textsc{E}}^{(1)}$ is thus an appropriate
benchmark if the fairness threshold $V^{(2)}$ is player 2's part of the EBS pair.
The EBS is not always enforceable for finite $K$, however.
In this case,
the enforceable version of the
EBS is
the maximizer
$(\mu_{\textsc{E},\epsilon}^{(1)}, \mu_{\textsc{E},\epsilon}^{(2)})$ of the objective $f(u_1, u_2) = \min_{i=1,2}\{u_i - \mu_{\textsc{S}}^{(i)}\}$ in $\mathcal{U}(\epsilon)$ for some $\epsilon > 0$.
For this first case,
we therefore consider $V^{(2)} = \mu_{\textsc{E},\epsilon}^{(2)}$, where player 2 follows conditionally. If $\mathcal{U}(\epsilon)$ is empty, $(\mu_{\textsc{E},\epsilon}^{(1)}, \mu_{\textsc{E},\epsilon}^{(2)}) := (\mu_{\textsc{S}}^{(1)}, \mu_{\textsc{S}}^{(2)})$. We set $\mu(\mathfrak{B}) = \mu_{\textsc{E},\epsilon}^{(1)}$.
The second case is $V^{(2)} = 0$, i.e., player 2 follows unconditionally. Here, we compute the maximizer over $\mathcal{U}(\epsilon)$ of $f(u_1, u_2) = u_1$.
Let $(\mu_{\textsc{B},\epsilon}^{(1)}, \mu_{\textsc{B},\epsilon}^{(2)})$ be the solution to this optimization problem (the \textit{Bully values}), or $(\mu_{\textsc{B},\epsilon}^{(1)}, \mu_{\textsc{B},\epsilon}^{(2)}) := (\mu_{\textsc{S}}^{(1)}, \mu_{\textsc{S}}^{(2)})$ if no solution exists. We define $\mu(\mathfrak{B}) = \mu_{\textsc{B},\epsilon}^{(1)}$.
While these regret metrics
provide standards for
adaptability,
we must also formalize non-exploitability.
We seek a guarantee on an algorithm's performance against its best response.
It is unclear how to characterize the best response to an algorithm capable of adapting to several opponent classes. Given this, we focus on a tractable and practically relevant subproblem: guaranteeing that the best response to our algorithm is not a ``bully'' in the sense discussed in the introduction, which is the most common exploitative strategy in MARL literature \citep{PS05, LS01,Press10409, LS05}.
Even this weaker guarantee is absent from previous work, and we show numerically in Section \ref{sec:experiments} that this suffices for our algorithm to be in learning equilibrium with itself
(see Section \ref{sec:intro}) in a pool of top-performing algorithms.
\begin{definition}
Let player 2 be Bounded Memory,
and $\mu_{\textsc{M}}^{(1)}$ and $\mu_{\textsc{M}}^{(2)}$ be the expected rewards for players 1 and 2
when player 1 uses $\mathbf{v}^{(1)}_{\textsc{M}}$ and
player 2 uses $\pi^{(2)}$.
An algorithm $\mathfrak{A}$ is
\textbf{$(V^{(1)},\eta_{e})$-non-exploitable}
if, whenever
$\mu_*^{(1)} < V^{(1)} - \eta_{e}$ and $\mu_{\textsc{M}}^{(2)} > \mu_{\textsc{E},\epsilon}^{(2)}$, for all $c > 0$ player 2's regret with respect to $\mu_{\textsc{E},\epsilon}^{(2)} + c$ against $\mathfrak{A}$ is $\Omega(T)$.
\end{definition}
Our algorithm is exploitable if player 2 can profit
(do better than $\mu_{\textsc{E},\epsilon}^{(2)}$)
from
a policy against which we cannot achieve close to
some value corresponding to a standard of fairness.
The hyperparameter $V^{(1)}$ tunes the tradeoff
between exploitability and
flexibility to various opponents.
Player 2 does \textit{not} profit from
exploitation if they incur linear regret.
\begin{exmp}
In Chicken (Figure \ref{fig:chicken}), let $V^{(1)} = 0.625$ (i.e., the EBS), and consider the following strategies: a) always play action 2, b) always play the opponent's last action,
and c) play the best response to the empirical distribution of the opponent's past actions. Strategy (a) is exploitative Bounded Memory. Thus, we argue that an effective algorithm should avoid playing the ``best response'' of action 1, instead discouraging the use of this strategy by, e.g., consistently playing the EBS (see Egalitarian Leader in the next section). Strategy (b) is also Bounded Memory, but not exploitative since one can achieve at least $V^{(1)}$ against this player on average. Our algorithm should therefore learn the best response to (b). Strategy (c) is a Follower with $V^{(2)} = 0$, thus our algorithm should converge to consistently playing action 2 against (c), achieving the Bully value.
\end{exmp}
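The three opponents in this example are each only a few lines of code. A sketch with 0-indexed actions and the history given as a list of joint-action pairs $(a^{(1)}, a^{(2)})$ (illustrative names, not from the paper):

```python
from collections import Counter

def bully(history):
    """(a) Exploitative Bounded Memory: always play action 2 (index 1)."""
    return 1

def copy_last(history):
    """(b) Bounded Memory, not exploitative: mirror the opponent's last action."""
    return history[-1][0] if history else 0

def fictitious_play(history, R2):
    """(c) Follower with V^(2) = 0: best respond to the empirical
    distribution of the opponent's past actions. R2[i][j] is player 2's
    reward when player 1 plays i and player 2 plays j."""
    if not history:
        return 0
    counts = Counter(a1 for a1, _ in history)
    n = len(history)
    freq = [counts[i] / n for i in range(len(R2))]
    cols = range(len(R2[0]))
    return max(cols, key=lambda j: sum(freq[i] * R2[i][j] for i in range(len(R2))))
```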
\section{Lead and Follow Fairly (LAFF)}\label{sec:ergalgo}
We apply an expert algorithm to a set of experts designed for our target classes. Expert algorithms
use an active expert to choose an action at a given time,
and switch active experts based on their relative performance \citep{C14}.
LAFF switches experts sequentially, going to the next expert in a predefined sequence only
if the rewards obtained by its active expert fall short of the current target value.
Some of the experts are also designed to guarantee non-exploitability.
\subsection{Description of Experts}
\noindent
LAFF uses an active expert for an epoch of length $H$ before checking whether to switch. Let $\tau$ be the time elapsed since LAFF started using the current instance of the active expert (at time $t_i + 1$), and define $\overline{r}^{(1)}_{i,\tau} := \frac{1}{\tau} \sum_{t=t_i + 1}^{t_i + \tau} R^{(1)}_t$ and $\overline{r}^{(2)}_{i,\tau} := \frac{1}{\tau-K} \sum_{t=t_i + K + 1}^{t_i + \tau} R^{(2)}_t$. See Figure \ref{flowchart} for a summary of algorithmic elements that these experts depend on.
\begin{figure}
\centering
\tikz{
\node[obs, xshift=-3.5cm] (f) {$\phi_F$}; %
\node[obs, xshift=-2cm] (e) {$\phi_E$}; %
\node[obs, xshift=-0.5cm] (m) {$\phi_M$}; %
\node[obs, xshift=1cm] (b) {$\phi_B$}; %
\node[latent, rectangle, above=of f, yshift=-0.5cm] (q) {Q-learning};
\node[latent, rectangle, above=of e, yshift=-0.5cm, xshift=0.75cm] (v) {$\mathbf{v}^{(1)}_{\textsc{M}}$};
\node[latent, rectangle, above=of e, yshift=-0.5cm, xshift=2.25cm] (p) {$\mathbf{v}^{(1)}_P$};
\edge {q} {f}
\edge {v} {e,m,b}
\edge {p} {e,b}
\edge {e} {f,m}
}
\caption{Algorithmic components (white) of LAFF's experts (gray). An arrow from one node to another means the former is used in computation of the output by the latter.}
\label{flowchart}
\end{figure}
\paragraph{\textbf{Conditional Follower $(\phi_F)$}}
Recall the
benchmarks $\mu_{\textsc{B},\epsilon}^{(1)}$,
$\mu_{\textsc{E},\epsilon}^{(1)}$, and
$\mu_{\textsc{S}}^{(1)}$ from
Section \ref{sec:sub:regretdef}.
To handle cases where
$\mu_*^{(1)}$
against a Bounded Memory player 2
lies
between these values,
LAFF uses $\phi_F$ multiple times in the sequence (called ``instances''). This expert starts off equivalent to Optimistic Q-learning \citep{optQ}, whose regret bound
(in an MDP with $S$ states and $A$ actions)
with probability at least $1-\delta$ is $\mathcal{R}_{Q}(\tau, \delta) = \mathcal{O}((SA\log(\frac{\tau}{\delta}))^{1/3}\tau^{2/3})$. After each \textit{subepoch} of length $H^{1/2}$, if $\overline{r}^{(1)}_{i,\tau} < V^{(1)} - \frac{\mathcal{R}_{Q}(\tau, \delta/T)}{\tau}$, this expert switches to the Egalitarian Leader $\phi_E$ (below) for as long as \textit{any} instance of $\phi_F$ is used. Otherwise, it uses Optimistic Q-learning for the next subepoch.
\paragraph{\textbf{Conditional Maximin ($\phi_M$)}}
Initially, $\phi_M$ uses the policy $\pi^{(1)}(\cdot|s) = \mathbf{v}^{(1)}_{\textsc{M}}$ for all $s$. Let $\eta_{m} > 0$ be a slack variable, chosen based on the class of Adversarial players considered in Theorem \ref{hedge}. After each subepoch, if $\overline{r}^{(2)}_{i,\tau} > \mu_{\textsc{E},\epsilon}^{(2)} - \eta_{m} + \sqrt{\frac{\log(T/\delta)}{2(\tau-K)}}$, this expert switches to $\phi_E$ for the rest of the game. Otherwise, it uses $\mathbf{v}^{(1)}_{\textsc{M}}$ for the next subepoch.
\paragraph{\textbf{Egalitarian Leader ($\phi_E$)}} If there is no enforceable EBS, let $\phi_E \equiv \mathbf{v}^{(1)}_{\textsc{M}}$.
Otherwise, let the EBS action pairs be denoted $(a_{\textsc{E}}^{(1)}(y), a_{\textsc{E}}^{(2)}(y))$ for $y=0,1$,
and the weight on the first action pair
be $\alpha_{\textsc{E}}$.
While $\epsilon$-enforceability requires that a punishment of length $K$ is sufficient to make a reward pair player 2's best response, this length may not be \textit{necessary}.
We therefore consider the least harsh punishment (if any) needed to enforce the EBS, that is, the value $K' \leq K$ satisfying $K' = \max\Big\{0, \Big \lceil \frac{r(\{(a_{\textsc{E}}^{(1)}(0), a_{\textsc{E}}^{(2)}(0)), (a_{\textsc{E}}^{(1)}(1), a_{\textsc{E}}^{(2)}(1))\}) + \epsilon}{\mu_{\textsc{E},\epsilon}^{(2)} - \mu_{\textsc{S}}^{(2)}} \Big \rceil \Big\}$.
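The computation of $K'$ is a one-liner; a sketch with illustrative names (the $\min$ with $K$ is a safety cap, since enforceability already implies the ceiling is at most $K$):

```python
import math

def min_punishment_length(r_X, eps, mu_e2, mu_s2, K):
    """Least punishment length K' <= K enforcing the EBS:
    K' = max(0, ceil((r(X) + eps) / (mu_E^(2) - mu_S^(2))))."""
    k_prime = max(0, math.ceil((r_X + eps) / (mu_e2 - mu_s2)))
    return min(k_prime, K)

# Chicken-style numbers: a negative deviation profit means little or no
# punishment is needed.
k1 = min_punishment_length(-0.25, 0.5, 0.625, 0.25, K=3)
k0 = min_punishment_length(-0.25, 0.1, 0.625, 0.25, K=3)
```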
Let $\mathbf{v}^{(1)}_P := \argmin_{\mathbf{v}_1} \max_{\mathbf{v}_2} \mathbf{v}_1^\intercal\mathbf{R}^{(2)}\mathbf{v}_2$, player 1's punishment strategy.
Recall that policies in our framework are conditioned on binary signals $Y_t^{(i)}$,
whose distributions are determined
by players' reported weights $w_t^{(i)}$.
Then, for the first ${K'}$ time steps, with the realized value $y_{t}^{(1)}$ of the signal given by $w_t^{(1)} = \alpha_{\textsc{E}}$ for all $t$, $\phi_E$ plays $a_{\textsc{E}}^{(1)}(y_{t}^{(1)})$. (This
ensures that, if LAFF switches to $\phi_E$ mid-game, player 2 is not punished for
having played actions other than the EBS
before LAFF started signaling enforcement of the EBS.) Afterwards, $\phi_E$ uses the following stationary policy. If, for any of the past $K'$ timesteps, player 2 has played $A^{(2)}_t \neq a_{\textsc{E}}^{(2)}(y_t^{(2)})$
--- i.e., deviated from the EBS ---
the distribution over actions for that state is $\mathbf{v}^{(1)}_P$. Otherwise, $a_{\textsc{E}}^{(1)}(y_t^{(1)})$ is played.
\paragraph{\textbf{Bully Leader ($\phi_B$)}} This expert is defined like $\phi_E$, but using the Bully solution from Section \ref{bargtheory}
(maximizing the selfish objective).
If there is no enforceable solution, let $\phi_B \equiv \mathbf{v}^{(1)}_{\textsc{M}}$. Otherwise, with the solution given by $(a_{\textsc{B}}^{(1)}(y), a_{\textsc{B}}^{(2)}(y))$ for $y=0,1$ and weight $\alpha_{\textsc{B}}$, define $\phi_B$ just as $\phi_E$ for this solution.
\subsection{Algorithm}
We design the selection of experts by LAFF (Algorithm \ref{followfirst}) such that, for any of our target classes, LAFF eventually
commits to the optimal expert against player 2 in a sequence $\{\phi_j\}_j$.
Over an epoch, the active expert is executed,
and we update this expert's average rewards
since it was made active (line \ref{record}). Afterwards, LAFF switches to the next expert in the schedule if and only if it rejects the hypothesis that the current expert's expected value exceeds its corresponding target $\mu_j$ (line \ref{baselinecheck}).
The false positive rate of this hypothesis test is controlled by a function $\mathcal{B}$, which decreases as $\tau$ grows.
We define $\mathcal{B}$ in the proof of Lemma \ref{followregret} (see Appendix).
\multilinecomment{
The false positive rate of this hypothesis test is controlled by a function $\mathcal{B}$ of the time elapsed since the last switch (line \ref{tauup}), defined:
\begin{align*}
\xi(\epsilon, r) &:= \begin{cases}
\frac{\epsilon}{2K'},& \text{if } r \geq 0\\
\frac{\epsilon + r}{2K'},& \text{if } -\epsilon < r < 0\\
-r, & \text{otherwise},
\end{cases} \\
\mathcal{B}(\tau) &:= \frac{1}{\tau} \cdot \frac{K'\xi(\epsilon, r(\mathcal{X})) + C_1T_0 + K'+1}{\xi(\epsilon, r(\mathcal{X}))} \\
&+ \frac{1}{\tau} \cdot \frac{C_2\mathcal{R}_{Q}(\tau, \frac{\delta}{T}) + (3 + \xi(\epsilon, r(\mathcal{X})))\sqrt{\frac{\tau \log(\frac{T}{\delta})}{2}}}{\xi(\epsilon, r(\mathcal{X}))}.
\end{align*}
Where $\mathcal{X} = \mathcal{X}_{\textsc{B}} := \{(a_{\textsc{B}}^{(1)}(y), a_{\textsc{B}}^{(2)}(y))\}_{y=0,1}$ for expert index $j \leq 2$, $\mathcal{X} = \mathcal{X}_{\textsc{E}} := \{(a_{\textsc{E}}^{(1)}(y), a_{\textsc{E}}^{(2)}(y))\}_{y=0,1}$ for $j > 2$, and $\delta > 0$ is some confidence level.
}
Because $\mu_{\textsc{B},\epsilon}^{(1)} \geq \mu_{\textsc{E},\epsilon}^{(1)} \geq \mu_{\textsc{S}}^{(1)}$, and the optimal reward $\mu_*^{(1)}$ against a Bounded Memory player may be greater than $\mu_{\textsc{B},\epsilon}^{(1)}$ or in between these values, $\{\phi_j\}_{j}$ prioritizes the order of experts based on the optimal average reward they could achieve against the corresponding player 2 class (line \ref{initline}).
\section{Analysis}
We will now show that LAFF meets our key criteria of adaptability
and non-exploitability.
See Appendix for proofs of lemmas and the detailed proof of Theorem \ref{hedge}.
Lemma \ref{followregret} shows that
with high probability player 2's rewards against $\phi_E$ are not much greater than the EBS
(thus non-exploitability is feasible),
and player 1's rewards against a Follower are near the target when the correct Leader is used.
\begin{algorithm}
\caption{Lead and Follow Fairly (LAFF)}\label{followfirst}
\begin{algorithmic}[1]
\State \textbf{Init} target schedule $\{\mu_j\}_j = \{\mu_{\textsc{B},\epsilon}^{(1)}, \mu_{\textsc{B},\epsilon}^{(1)},\mu_{\textsc{E},\epsilon}^{(1)},\mu_{\textsc{E},\epsilon}^{(1)},$ $\mu_{\textsc{S}}^{(1)}\}$, expert schedule $\{\phi_j\}_j = \{\phi_F, \phi_B, \phi_F, \phi_E,$ $\phi_F, \phi_M\}$, expert index $j = 1$, $\tau = 0$, $R_\tau = 0$
\label{initline}
\For{$i=1,2,\dots,\ceil{T/H}$}
\For{$t=(i-1)H + 1,\dots,\min\{iH, T\}$}
\State Run expert $\phi_j$
\State $R_\tau \leftarrow R_\tau + \mathbf{R}^{(1)}(A^{(1)}_t, A^{(2)}_t)$ \label{record}
\EndFor
\State $\tau \leftarrow \tau + H$ \label{tauup}
\If{$j < |\{\phi_j\}_j|$ and $\frac{R_\tau}{\tau} < \mu_j - \mathcal{B}(\tau)$} \label{baselinecheck}
\State $j \leftarrow j +1$, $\tau \leftarrow 0$, $R_\tau \leftarrow 0$
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
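The outer loop of Algorithm \ref{followfirst} can be sketched in a few lines (the experts and the slack $\mathcal{B}$ are passed in as callables; this is a schematic, not the full implementation):

```python
def laff(T, H, experts, targets, step_reward, slack):
    """Sequential expert switching: run the active expert for an epoch,
    then advance in the schedule iff the running average since the last
    switch falls below the current target minus the slack B(tau)."""
    j, tau, total = 0, 0, 0.0
    for _ in range(-(-T // H)):               # ceil(T / H) epochs
        for _ in range(H):
            total += step_reward(experts[j])  # one step of the active expert
        tau += H
        if j < len(experts) - 1 and total / tau < targets[j] - slack(tau):
            j, tau, total = j + 1, 0, 0.0     # switch and reset statistics
    return j

# If rewards always fall short of the targets, LAFF walks down the whole
# schedule and settles on the final (maximin) expert.
final = laff(T=30, H=10, experts=['F', 'B', 'M'], targets=[1.0, 1.0],
             step_reward=lambda e: 0.0, slack=lambda t: 0.0)
```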
\begin{lemma}
\label{followregret}
\textbf{(Reward Bounds When LAFF Leads)}
If player 1 uses $\phi_E$ over a sequence of length $\tau+K'$ starting at time $t^*+1$, then
with probability at least $1- \frac{3\delta}{T}$:
\begin{align*}
&\sum_{t=t^*+K'+1}^{t^* + K' + \tau} R^{(2)}_t \leq K' + 1 + \tau\mu_{\textsc{E},\epsilon}^{(2)} + 3\sqrt{\textstyle{\frac{1}{2}}\tau\log(\frac{T}{\delta})}.
\end{align*}
If player 2 is a Follower with $V^{(2)} = 0$, and player 1 uses $\phi_B$, then with probability at least $1- \frac{5\delta}{T}$, we have $\overline{r}^{(1)}_{i,\tau} \geq \mu_{\textsc{B},\epsilon}^{(1)} - \mathcal{B}(\tau)$.
If $V^{(2)} = \mu_{\textsc{E},\epsilon}^{(2)}$, and player 1 uses $\phi_E$, then with probability at least $1- \frac{5\delta}{T}$, we have $\overline{r}^{(1)}_{i,\tau} \geq \mu_{\textsc{E},\epsilon}^{(1)} - \mathcal{B}(\tau)$.
\end{lemma}
Lemma \ref{conditional_experts} guarantees that with high probability, LAFF follows or uses the maximin strategy against non-exploitative players, and punishes exploitative players.
\begin{lemma}
\label{conditional_experts}
\textbf{(False Positive and Negative Control of Exploitation Test)} Consider a sequence of $k$ epochs each of length $H$.
Let $m^*_{F}$ or $m^*_{M}$ be, respectively, the index of the \textit{subepoch} within this sequence at the start of which $\phi_F$ or $\phi_M$ switches to punishing with $\phi_E$, if at all (if not, let $m^*_{F}$ or $m^*_{M} = \infty$). Let $\eta_{e} \geq \frac{2\mathcal{R}_{Q}(H/2, \delta/T)}{H} + \sqrt{\frac{2S^2A\log(c_0/\delta)}{c_1H}}$, where $c_0, c_1$ are defined as in Theorem 5.1 of \citet{MT05}, and $\eta_{m} \geq \sqrt{\frac{\log(T/\delta)}{2(H/2-K)}} + \sqrt{\frac{64e\log(N_q/\delta^2)}{(1-\lambda)(H/2-K)}}$, where $\lambda$ and $N_q$ are constants with respect to time defined in Lemma \ref{raolemma} (see Appendix).
Then, suppose player 2 is Bounded Memory, and $\phi_F$ is used. If $\mu_*^{(1)} < V^{(1)} - \eta_{e}$, then with probability at least $1-\delta$, $m^*_{F} \leq \ceil{\frac{H^{1/2}}{2}}$. If $\mu_*^{(1)} \geq V^{(1)}$, then with probability at most $\frac{kH^{1/2}\delta}{T}$, $m^*_{F} < \infty$. If $\phi_M$ is used, and $\mu_{\textsc{M}}^{(2)} > \mu_{\textsc{E},\epsilon}^{(2)}$, then with probability at least $1-\delta$, $m^*_{M} \leq \ceil{\frac{H^{1/2}}{2}}$.
Suppose player 2 is Adversarial, with a sequence of action distributions $\{\pi^{(2)}_t\}$ such that, for any $M \geq H^{1/2} - K$ and $i$, $\frac{1}{M} \sum_{t=i+1}^{i+M} {\mathbf{v}^{(1)}_{\textsc{M}}}^\intercal \mathbf{R}^{(2)} \pi^{(2)}_t \leq \mu_{\textsc{E},\epsilon}^{(2)} - \eta_{m}$. Then, if $\phi_M$ is used, with probability at most $\frac{kH^{1/2}\delta}{T}$, $m^*_{M} < \infty$.
\end{lemma}
Our main result, Theorem \ref{hedge}, claims that 1)
against each of our target classes,
LAFF achieves a regret bound of the same order
as Optimistic Q-learning
in single-agent MDPs \citep{optQ},
and 2) LAFF satisfies non-exploitability.
\begin{theorem}
\label{hedge}
Let $\mathcal{C}$ be the set of player 2 algorithms that are any of the following:
\begin{itemize}
\item Adversarial, with a sequence of action distributions $\{\pi^{(2)}_t\}$ such that $\frac{1}{M} \sum_{t=i+1}^{i+M} {\mathbf{v}^{(1)}_{\textsc{M}}}^\intercal \mathbf{R}^{(2)} \pi^{(2)}_t \leq \mu_{\textsc{E},\epsilon}^{(2)} - \eta_{m}$ for any $M \geq T^{1/4}$ and $i$,
\item Follower, with $V^{(2)} \in \{0, \mu_{\textsc{E},\epsilon}^{(2)}\}$, or
\item Bounded Memory, with
$\mu_*^{(1)} \geq V^{(1)}$.
\end{itemize}
Let $\eta_{m}$ and $\eta_{e}$ satisfy the conditions of Lemma \ref{conditional_experts}.
Then, with probability at least $1-5\delta$, LAFF satisfies:
\begin{align*}
\max_{\mathcal{C}} \mathcal{R}(T) &= \mathcal{O}(\mathcal{R}_{Q}(T, \delta/T)).
\end{align*}
Further, with probability at least $1-6\delta$, LAFF is
$(V^{(1)},\eta_{e})$-non-exploitable
when there exists an enforceable EBS.
\end{theorem}
If there is no enforceable EBS, $\mu_{\textsc{E},\epsilon}^{(2)} = \mu_{\textsc{S}}^{(2)}$ and so we cannot guarantee player 2 does worse than $\mu_{\textsc{E},\epsilon}^{(2)}$ in expectation.
The class of Adversarial players for which Theorem \ref{hedge} holds is technically restrictive. However, non-exploitability requires that for each strategy (expert) used by our algorithm that could be exploited, including Conditional Maximin, we exclude from our target class some subset of opponents. That is, we cannot guarantee low Adversarial regret against players who receive more than the EBS value against maximin, because such players may exploit us.
\begin{sketch}
For each opponent class, we need to show that with high probability LAFF does not lock in to
a suboptimal expert for that class. If LAFF locks in to an expert for which the corresponding target value $\mu_j$ is \textit{greater} than the opponent's benchmark $\mu(\mathfrak{B})$, this implies LAFF consistently receives rewards such that ``regret'' with respect to $\mu_j$ grows like $\mathcal{R}_Q$, by design of $\mathcal{B}(\tau)$. But since the benchmark is less than $\mu_j$, the true regret is also bounded as desired.
We therefore only need to consider the cases of $\mu_j \leq \mu(\mathfrak{B})$. First, we know that each expert achieves at most $\mathcal{R}_Q$ regret against its target opponent class, by, respectively: the definitions of $\mathcal{R}_Q$ (for non-exploitative Bounded Memory) and maximin (for Adversarial), and Lemma \ref{followregret} (for Followers).
Lemma \ref{conditional_experts} ensures with high probability that $\phi_F$ and $\phi_M$ do not switch to $\phi_E$ when not exploited, so they inherit the desired regret bounds.
Then, we need only show that once LAFF reaches the expert
whose target class matches the opponent
(thus guaranteeing low regret using that expert), with high probability LAFF does not switch.
But
if using the corresponding expert gives LAFF low regret with respect to $\mu(\mathfrak{B}) \geq \mu_j$, then its rewards are sufficiently high that the condition for switching experts (line \ref{baselinecheck} of Algorithm \ref{followfirst}) never holds. The first claim of the theorem follows.
\begin{figure*}[ht]
\centering
\begin{tabular}{ccc}
\ \ Unconditional Follower (Q-Learning) & \ \ \ \ \ \ \ \ Conditional Follower (LAFF) & \ \ \ \ \ \ \ Bounded Memory (FTFT) \\
\includegraphics[width=5.3cm]{figures/1.png} & \includegraphics[width=5.3cm]{figures/2.png} & \includegraphics[width=5.3cm]{figures/3.png} \\
\end{tabular}
\begin{tabular}{cc}
\ \ \ \ \ \ \ \ \ Adversarial (Manipulator) & \ \ \ \ \ \ \ \ Exploitative (Bully) \\
\includegraphics[width=5.3cm]{figures/4.png} & \includegraphics[width=5.3cm]{figures/5.png} \\
\end{tabular}
\caption{The first four plots show LAFF's average regret, in each of 11 games detailed in the Appendix, for the following opponents: Unconditional Follower (Q-Learning), Conditional Follower (LAFF), Bounded Memory (FTFT), Adversarial (Manipulator). The last plot shows the regret of an Exploitative (Bully) algorithm against LAFF.}
\label{fig:regrets}
\end{figure*}
To show non-exploitability, suppose LAFF locks in to the first instance of $\phi_F$. By Lemma \ref{conditional_experts}, $\phi_F$ detects evidence of exploitation sufficiently early that the time remaining in the game is linear in $T$. After detecting exploitation, $\phi_F$ plays the same policy as $\phi_E$. But by Lemma \ref{followregret}, against this policy player 2 cannot guarantee an average reward greater than $\mu_{\textsc{E},\epsilon}^{(2)}$ plus a term that vanishes at a rate of $T^{-1/2}$. The second claim of the theorem follows for the other possible locked-in experts as well by considering two facts. First, whenever $\phi_E$ or $\phi_B$ is used, Lemma \ref{followregret} again bounds player 2's rewards, since by Pareto efficiency of the EBS player 2's rewards from the Bully solution cannot exceed $\mu_{\textsc{E},\epsilon}^{(2)}$. Second, if LAFF reaches $\phi_M$, again Lemma \ref{conditional_experts} ensures sufficiently fast detection of exploitation with high probability.
\end{sketch}
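The expert-switching rule at the center of this argument can be made concrete. The following is a minimal sketch under hypothetical names and parameters, not the authors' implementation: the reward stream, target values $\mu_j$, and slack schedule $\mathcal{B}(\tau)$ shown here are illustrative. It demonstrates that an expert is abandoned only when cumulative rewards fall short of its target by more than the slack.

```python
# Hedged sketch (hypothetical names, not the authors' code) of the
# expert-switching rule: LAFF abandons the current expert j only when its
# cumulative reward falls below the target mu_j by more than a slack B(tau).
def laff_switching(rewards, targets, slack):
    j, total, tau = 0, 0.0, 0
    history = []
    for r in rewards:
        total += r
        tau += 1
        # Switching condition (cf. line `baselinecheck` of the algorithm):
        if j < len(targets) - 1 and total < targets[j] * tau - slack(tau):
            j, total, tau = j + 1, 0.0, 0
        history.append(j)
    return history

slack = lambda tau: 2.0 * tau ** 0.5
# Rewards meeting the first target: the first expert is never abandoned.
stays = laff_switching([0.8] * 100, targets=[0.7, 0.5], slack=slack)
# Rewards far below the target: LAFF eventually moves to the next expert.
moves = laff_switching([0.0] * 100, targets=[0.7, 0.5], slack=slack)
```

Because the slack grows sublinearly, an opponent against whom the matched expert meets its target can delay a switch only for a vanishing fraction of the horizon.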
\section{Numerical Experiments}\label{sec:experiments}
Code for the experiments in this section is available on
Github.\footnote{\url{https://github.com/digiovannia/ad_expl}}
We evaluate LAFF by three empirical metrics. First,
we find LAFF's empirical regret against one
algorithm from each target class.
Second, LAFF and a set of top-performing repeated games algorithms compete in a round-robin tournament. For each algorithm, we find its rewards against its best response algorithm in this set,
and check if it is in a learning equilibrium by applying a Nash equilibrium solver \citep{Knight2018} to the matrices of empirical rewards for algorithm pairs.
These criteria evaluate exploitability: more exploitable algorithms have lower rewards against algorithms that optimize against them,
and an exploitable algorithm cannot be in equilibrium with itself unless the fairness threshold $V^{(1)}$ is low. Finally, we perform a replicator dynamic simulation \citep{CO18}. Each generation, the algorithms' fitness values are computed as averages of the round-robin scores weighted by the distribution of the population of algorithms. Then, the population distribution is updated in proportion to fitness. This evaluates how well a given algorithm performs when the distribution of its opponents is determined by those algorithms' own performance.
Exploitability is thus implicitly penalized by accounting for opponents' incentives.
Details on the implementation of these experiments are in the Appendix. We set $V^{(1)} = \mu_{\textsc{E},\epsilon}^{(1)}$.
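The replicator dynamic used in the third metric can be sketched as follows. This is a simplified, hypothetical version with a single illustrative payoff matrix, rather than the per-role minimum fitness and full algorithm pool used in our experiments.

```python
import numpy as np

# Discrete-time replicator dynamic over a pool of algorithms: each
# generation, an algorithm's fitness is its round-robin reward averaged
# against the current population distribution, and population shares are
# then updated in proportion to fitness. Payoff entries are illustrative.
def replicator_step(shares, payoff):
    fitness = payoff @ shares          # population-weighted average reward
    new_shares = shares * fitness      # reproduce in proportion to fitness
    return new_shares / new_shares.sum()

payoff = np.array([[0.77, 0.61],       # algorithm 0 vs. {0, 1}
                   [0.55, 0.48]])      # algorithm 1 vs. {0, 1}
shares = np.array([0.5, 0.5])
for _ in range(200):
    shares = replicator_step(shares, payoff)
# Algorithm 0, whose payoffs dominate row-wise, takes over the population.
```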
Our set of competitors to LAFF consists of Bounded Memory (Bully, Forgiving Generalized Tit-for-Tat or FTFT), Follower (M-Qubed, Q-Learning, Fictitious Play), and expert (Manipulator, S++) algorithms. See Appendix for details and sources.
We chose these algorithms because, first, they performed
well
in a repeated games tournament \citep{CO18},
and second,
they cover our opponent classes.
S++ and Manipulator do not fall cleanly into any of those classes, but they are the closest comparisons in previous literature to LAFF, since they
adapt to a variety of opponents by switching between Leader and Follower experts.
To ensure sufficient diversity of test games, we choose games based on the taxonomy of Figure 1 in \citet{topology}. Six game families
are categorized by the structures of their Nash equilibria.
We use two games from each family, one with symmetric rewards and one with asymmetric, except Cyclic, which has no symmetric games (see Appendix).
\begin{table*}
\centering
\caption{Rewards of algorithm pairs, averaged over games and trials (pure learning equilibria are highlighted in bold text, and each algorithm's reward against its best response is in blue)}
\label{tab:emp_matrix}
\begin{tabular}{ccccccccc}
\toprule
& S++ & Manipulator & M-Qubed & Bully & Q-Learning & LAFF & FTFT & FP \\
\midrule
S++ & 0.75, 0.76 & 0.73, 0.80 & \textcolor{blue}{0.73}, 0.81 & 0.65, 0.77 & 0.82, 0.76 & 0.71, 0.80 & 0.70, 0.68 & 0.72, 0.55 \\
Manipulator & 0.87, 0.68 & 0.76, 0.71 & 0.77, 0.65 & \textcolor{blue}{0.65}, 0.77 & 0.89, 0.67 & 0.70, 0.65 & 0.71, 0.60 & 0.76, 0.55 \\
M-Qubed & 0.88, \textcolor{blue}{0.68} & 0.68, 0.68 & 0.80, 0.74 & \textcolor{blue}{0.65}, 0.80 & 0.79, 0.75 & 0.76, 0.73 & 0.78, 0.65 & 0.62, 0.56 \\
Bully & 0.86, 0.61 & 0.83, \textcolor{blue}{0.60} & 0.85, \textcolor{blue}{0.61} & 0.48, 0.44 & \textbf{\textcolor{blue}{0.91}, \textcolor{blue}{0.63}} & 0.61, 0.49 & 0.72, 0.55 & 0.76, \textcolor{blue}{0.56} \\
Q-Learning & 0.82, 0.77 & 0.73, 0.83 & 0.79, 0.67 & \textbf{\textcolor{blue}{0.68}, \textcolor{blue}{0.85}} & 0.83, 0.74 & 0.71, 0.84 & 0.81, \textcolor{blue}{0.67} & 0.64, 0.56 \\
LAFF & 0.87, 0.65 & 0.71, 0.66 & 0.74, 0.72 & 0.55, 0.61 & 0.90, 0.66 & \textbf{\textcolor{blue}{0.77}, \textcolor{blue}{0.74}} & 0.80, 0.70 & 0.75, 0.57 \\
FTFT & 0.64, 0.70 & 0.49, 0.71 & 0.59, 0.76 & 0.60, 0.71 & 0.59, 0.78 & \textcolor{blue}{0.61}, 0.78 & 0.80, 0.75 & 0.46, 0.72 \\
FP & 0.70, 0.73 & \textcolor{blue}{0.66}, 0.74 & 0.66, 0.55 & 0.63, 0.73 & 0.69, 0.57 & 0.61, 0.71 & 0.71, 0.60 & 0.68, 0.55 \\
\bottomrule
\end{tabular}
\end{table*}
\paragraph{\textbf{Regret Bounds}} Figure \ref{fig:regrets} shows LAFF's regret, averaged over 50 trials, in games against an algorithm from each target class, and the regret of an exploitative Bounded Memory algorithm against LAFF.
We chose Manipulator as ``Adversarial'' because it does not play the EBS and is not a pure Leader or Follower.
However, in the symmetric Unfair game, the empirical rewards indicate that Manipulator attempts to exploit LAFF,
so LAFF punishes Manipulator at the expense of the Adversarial regret guarantee.
From the plot evaluating player 2's regret, we also exclude four games where player 2's Bully solution equals the EBS, since in these cases $\mu_*^{(1)} \geq V^{(1)}$
(player 1 is not exploited by playing the optimal policy).
In most games, LAFF's regret
eventually plateaus,
while the exploitative player has linear regret, showing that LAFF is non-exploitable.
In three games,
LAFF has linear regret against an Unconditional Follower and non-exploitative Bounded Memory player. This may be due to the practical difficulty of choosing hyperparameters for tests used to decide when to switch to the next expert; these tests depend on some unknown quantities, so for our experiments, we tuned $\mathcal{B}(\tau)$ on a training set of four games that are not included in the set of 11 games for these results
(see Appendix).
Longer time horizons may be required for
the conditions on $\eta_{e}$ in Lemma \ref{conditional_experts} to hold.
We used a horizon of $T=2 \cdot 10^5$ to be on
the same approximate scale as experiments in other works on repeated games \citep{CG10, LS05, C14}.
\begin{figure}[ht]
\centering
\includegraphics[width=7cm]{figures/rep.png}
\caption{Replicator dynamic results, where the bold curves are average population shares and shaded regions are plus and minus one standard deviation.}
\label{fig:rd_results}
\end{figure}
\paragraph{\textbf{Round Robin}} Table \ref{tab:emp_matrix} shows the average rewards of each algorithm pair across the 11 games and 50 trials,
which provide an empirical bimatrix for the \textit{learning game}, i.e., a meta-game in which users choose algorithms to deploy across different repeated games.
An algorithm's reward against its best response (highlighted in blue) measures how much it bullies when possible and avoids exploitation.
Both as player 1 and player 2, LAFF is second by this metric, behind Bully. We also highlight the pure strategy Nash equilibria of this learning game (in bold), noting that LAFF is in a learning equilibrium with itself. Unfortunately, the pairing in which Q-Learning follows Bully is also an equilibrium. Thus there is an equilibrium selection problem, e.g., both users might choose Bully and receive very low rewards. However, in practice it may be easier for users to coordinate on both using LAFF, because there is no conflict over choosing which side is the Leader (Bully) versus the Follower (Q-Learning).
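The pure learning equilibria highlighted in Table \ref{tab:emp_matrix} can be recovered with a simple best-response check. The sketch below uses an illustrative $2\times 2$ submatrix for the (LAFF, Bully) pairings from the table; the actual analysis applies the same criterion (via the solver of \citealp{Knight2018}) to the full $8\times 8$ empirical bimatrix.

```python
import numpy as np

# Pure-strategy Nash equilibria of the empirical learning game. A[i, j] and
# B[i, j] are the row and column users' average rewards when they deploy
# algorithms i and j; cell (i, j) is an equilibrium when i is a best
# response to j and vice versa. Entries below are a 2x2 excerpt (LAFF,
# Bully) of the round-robin table; the full analysis uses all 8 algorithms.
def pure_nash(A, B):
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs

A = np.array([[0.77, 0.55],   # LAFF's and Bully's rewards as player 1
              [0.61, 0.48]])
B = np.array([[0.74, 0.61],   # opponent's rewards as player 2
              [0.49, 0.44]])
eqs = pure_nash(A, B)  # -> [(0, 0)]: (LAFF, LAFF) is an equilibrium
```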
\paragraph{\textbf{Replicator Dynamic}}
On average over 1000 runs,
LAFF converges to 100\% of the population in the pool of algorithms (Figure \ref{fig:rd_results}), based on fitness computed as the \textit{minimum} of an algorithm's average reward over the set of games when playing as player 1 versus player 2. This metric matches the motivation
for the EBS; algorithm users will not know \textit{a priori} which of the two ``sides'' of the game they will be in. Thus, they may prefer their algorithm to cooperate with itself (maximize an egalitarian objective), instead of bullying its copy in hopes of being on the side of the bully.
\section{Discussion}
When choosing algorithms for multi-agent interactions, users
will have to trade off robustness to the variety
of possible algorithms they might face, with avoiding providing other users incentives to exploit them \citep{SLRC21}.
We have presented an algorithm for repeated games that balances these desiderata.
Both properties can facilitate cooperation between learning agents, while still allowing them to accept generous offers.
If LAFF faces an agent who ``follows'' fair, Pareto efficient bargaining proposals, the Egalitarian Leader leads them to a mutual benefit over their security values.
If the other agent's fairness standard is different, the Conditional Follower can follow this alternative proposal using RL if it is not exploitative;
otherwise, the exploitation penalty encourages the other player to be more cooperative.
Against exploitable agents, the Bully Leader can benefit from a more self-interested bargain.
Finally, if the other player is unwilling to cooperate at all but is not exploitative, Conditional Maximin ensures safety. In future work, more experts can be added
based on agent classes that we have neglected. For example, while LAFF includes Leader experts only for the extreme cases in which player 2 has a high or minimal fairness standard, one could add Leaders for other bargaining solutions.
The biggest limitations of our approach are restrictive assumptions required for our non-exploitability criterion, and the strictness of this criterion. The margin $\eta_{e}$ is small only for sufficiently large time horizons,
hence the linear regret in some of our experiments. Though LAFF successfully punishes players against whom it receives less than fair rewards, this is only strategically necessary when such players \textit{benefit} from playing this way (genuine ``exploitation'').
In practice, it may not be necessary
to modify the experts so that they refrain from punishing
when the opponent also does worse,
because an opponent has no incentive to lead with a Pareto inefficient policy.
Finally, we note that
our approach is not intended to provide the optimal balance of the adaptability-exploitability tradeoff;
in particular, keeping a fixed fairness threshold
may not be ideal if it
prevents an algorithm from
cooperating with algorithms
that follow other intuitively ``fair'' standards \citep{SLRC21}.
\begin{contributions}
Both authors conceived and carried out the research project jointly. A.D.~wrote the paper and code for
numerical experiments. A.T.~helped edit the paper.
\end{contributions}
\begin{acknowledgements}
A.D.~acknowledges the support of a grant
from the Center on Long-Term Risk Fund.
\end{acknowledgements}
\section*{Acknowledgments}
We thank Albert De Roeck, Zhen Liu, David Milstead, Hideyuki Oide, and Brian Shuve for useful comments on early drafts. We thank Zach Marshall for useful discussions during the preparation of this review, and Timothy Gershon for information on LHCb results.
AS is supported by the Israel Science Foundation, the United States-Israel Binational Science Foundation, and the German-Israeli Foundation for Scientific Research and Development. CO is supported by the Swedish Research Council. LL is supported by the US Department of Energy.
\section{Conclusion}
\label{sec:conclusions}
The particle physics community is rapidly defining the lifetime frontier as an important part of its BSM search program. The lack of discovery of BSM physics at the LHC thus far has motivated physicists to explore theoretically motivated and sometimes overlooked lifetime ranges.
The number of searches targeting LLP signatures has greatly increased in recent years, as has that of LLP interpretations of standard analyses. These searches often employ new experimental techniques or address previously unexplored theoretical scenarios. In addition, new, dedicated LLP experiments have been proposed for both colliders and non-collider facilities.
Reflecting the growing interest in LLPs, this paper reviews their theoretical motivations, detectable signatures, and the common analysis techniques used in searching for them at modern colliders.
A comprehensive summary of experimental LLP results, particularly those from the LHC, is given.
Finally, promising new avenues and considerations regarding sensitivity to BSM via LLPs at future facilities are discussed.
In closing, we wish to emphasize the importance of this lifetime frontier in the development of the physics programs, accelerators, and detectors of the future. As LLP searches often have very low background, they are particularly promising as sensitive probes of BSM physics at future, high-luminosity colliders. For most proposed future facilities, the detector designs are in their infancy. With decades before the realization of these experiments, it is imperative that their detectors be built considering the needs of unconventional search signatures from the very beginning, ideally with enough flexibility to allow pursuing signatures that have not yet been possible to explore. Recent history has shown that multiple areas of interest and ideas regarding BSM physics can arise during the multi-decade lifetimes of modern collider facilities. When it comes to the design of individual sensors and their readout systems, the overall detector layout, trigger architectures, and reconstruction algorithms, future collaborations would do well to keep LLP searches in mind as a major driver of the detector design.
\section{Review of Published Searches}
\label{sec:searches}
In this section, we review experimental searches for the types of signatures described in Sec.~\ref{sec:signatures} at the LHC and, at times, other colliders. The program of direct-detection searches for LLPs is summarized first, followed by the program of indirect detection searches.
\subsection{Direct Detection: Signatures from LLPs Interacting with the Detector}
\label{sec:direct-detection}
A sufficiently long-lived particle created in a collider experiment could directly interact with the detector while traversing it. Depending on its lifetime, the LLP can deposit energy in only the innermost subdetectors, or even travel through all layers and leave signals throughout.
\subsubsection{Detector-Stable Charged LLPs}
\label{sec:detectorstablecllps}
A cornerstone of the LLP search programs at the current and most recent colliders is the suite of searches targeting heavy charged particles with decay lengths long enough that they interact directly with significant parts of the detector. Being heavy, these particles move at a speed significantly lower than that of a SM particle of the same momentum. This yields two unique experimental signatures: high ionization energy loss (Sec.~\ref{sec:dedx}) and delayed arrival in distant detector subsystems (Sec.~\ref{sec:ddcllps}). Since the particle's time of flight and specific ionization are measured independently, the tails of their distributions are uncorrelated for the background. Therefore, combining requirements on these two variables provides powerful background rejection.
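As a numerical illustration of the mass measurement these two observables enable (the numbers below are illustrative, not taken from any cited analysis): a time-of-flight measurement over a known path length gives $\beta$, which, combined with the track momentum, yields the candidate mass via $m = (p/\beta)\sqrt{1-\beta^2}$ in natural units.

```python
import math

C_M_PER_NS = 0.299792458  # speed of light in metres per nanosecond

def beta_from_tof(path_m, tof_ns):
    """Velocity (in units of c) from a time-of-flight measurement."""
    return path_m / (C_M_PER_NS * tof_ns)

def mass_gev(p_gev, beta):
    """Candidate mass from momentum and velocity: m = p*sqrt(1-beta^2)/beta."""
    return p_gev * math.sqrt(1.0 - beta ** 2) / beta

# Illustrative numbers: a 400 GeV track reaching a muon chamber 10 m from
# the interaction point after 44.5 ns is far slower than a SM particle
# (beta ~ 0.75 rather than ~1) and reconstructs to a mass of ~350 GeV.
beta = beta_from_tof(10.0, 44.5)
mass = mass_gev(400.0, beta)
```

A SM muon of the same momentum would cover the same path in about 33~ns, so the delayed arrival and the correspondingly large $dE/dx$ single out the heavy candidate.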
Searches for charged long-lived particles (CLLPs) are often optimized for SUSY models, where the CLLPs are sleptons and $R$-hadrons, described in Sec.~\ref{sec:susytheory}. A heavy, detector-stable slepton would appear similar to a high-momentum muon, in that it would leave a stiff track in the ID, pass through the calorimeter, and continue as a track in the MS. However, unlike a high-$\pT$ muon, the CLLP would be slow and highly ionizing. An $R$-hadron also interacts hadronically as it moves slowly through the detector. Its heavy parton is mostly a spectator in these processes, acting as a reservoir of kinetic energy for the bound light quarks and gluons which undergo low-energy scattering with nuclei in the detector material. Therefore, rather than a high-energy hadronic shower, the calorimeter signature is that of a penetrating particle with little energy loss compared to that of a SM hadron.\footnote{The case of very slow $R$-hadrons that stop in the detector is discussed in Sec.~\ref{sec:stoppedllps}.}
The hadronic scattering processes can change the composition of the light-quark system, so that the electric charge of the $R$-hadron can vary during its flight. When electrically charged, an $R$-hadron also loses energy via ionization.
While SUSY serves as a strong motivation for CLLP searches, these searches are sensitive to other theoretical frameworks. Model-independent limits are often provided by the experiments, along with enough information about the acceptance and the efficiency of the analysis to enable reinterpretation in other models.
\paragraph{Pre-LHC Searches.}
At LEP, the ALEPH, DELPHI, L3, and OPAL collaborations have all performed CLLP searches in $e^+e^-$ collisions in data sets with $\sqrt{s}$ ranging between 130 and 209~GeV. In the context of SUSY, the searches primarily targeted long-lived sleptons and charginos. GMSB scenarios were often used for interpreting the results.
The ALEPH collaboration performed a CLLP search using $dE/dx$ measurements in their time-projection chamber (TPC) in data sets with $\sqrt{s}$ up to 172~GeV. With 0.3 events expected from background and none observed, exclusion limits were calculated. Long-lived staus and smuons below 67~GeV and charginos below 86~GeV were excluded. More model-independent cross-section upper limits in the range 0.2-0.4~pb were set for CLLPs with masses up to 86~GeV~\cite{Barate:1997dr}. ALEPH also included constraints from CLLP searches in their summary statement on GMSB in Ref.~\cite{Heister:2002vh}. The DELPHI detector was equipped with a ring imaging Cherenkov detector and a TPC, which together provided powerful particle identification capabilities and were exploited for searching for gluino-based $R$-hadrons. In the LEP1 data set with $\sqrt{s} = m_Z$, a search for long-lived gluinos, assumed to be pair-produced through final-state radiation of a gluon in $Z\rightarrow q\bar{q}$ events, excluded the mass range $2< m_{\tilde{g}} < 18$~GeV. Subsequently, using 609~pb$^{-1}$ of LEP2 data with $\sqrt{s}$ of 189-209~GeV, a search for $R$-hadrons produced in decays of pair-produced squarks was performed and excluded long-lived gluinos up to around 90~GeV~\cite{Abdallah:2002qi}. The OPAL experiment searched for pair-produced CLLPs using its jet chamber, which provided up to 159 $dE/dx$ measurements for each CLLP candidate track. Several data sets with $\sqrt{s} = $130-209~GeV were used, and the lack of observed excess over the expected background was interpreted as exclusions of smuons and staus with $m < 98$~GeV, as well as charginos with $m < 102$~GeV in constrained MSSM models~\cite{Abbiendi:2003yd}. OPAL expanded this search to include event topologies with large track multiplicities in order to improve sensitivity also to signal scenarios with production of color-charged particles~\cite{Abbiendi:2005gc}. 
This paper summarized constraints on GMSB and used a data set of 693.1~pb$^{-1}$ to set similar mass limits to the ones of Ref.~\cite{Abbiendi:2003yd} in the GMSB framework. The L3 collaboration produced exclusion limits for heavy charged leptons using $dE/dx$ measurements in their tracking chamber. Using their full LEP2 data set of 450~pb$^{-1}$ at $\sqrt{s} = $192-208~GeV, they excluded masses below 102.6~GeV~\cite{Achard:2001qw}.
In connection with a measurement of anti-deuteron production, the H1 experiment~\cite{Abt:1996hi} at the HERA collider at the Deutsches Elektronen-Synchrotron laboratory in Hamburg, Germany, performed a search for CLLPs~\cite{Aktas:2004pq}. The search used $dE/dx$ measurements in the jet chamber and track $\pT$ measurements in the tracker. An upper limit of 0.19~nb was set on the cross section for photoproduction of positively (negatively) charged LLPs with mass larger than that of the triton (anti-deuteron) in a given kinematic range in collisions of positrons and protons with energies 27.6~GeV and 820~GeV, respectively.
In Run I of the Tevatron, several searches for CLLPs were performed by the CDF collaboration in $p\bar{p}$ collisions at $\sqrt{s} = 1.8$~TeV. In the first two iterations~\cite{Abe:1989es,Abe:1992vr}, time-of-flight measurements performed with the hadronic calorimeter were used. In the third analysis, conducted with 90~pb$^{-1}$ of data, $dE/dx$ measurements in the central tracking chamber and silicon vertex detector were used as well~\cite{Acosta:2002ju}. With yields compatible with background expectations, cross-section upper limits around 1~pb were obtained for strong production of fourth-generation quarks with mass up to 270~GeV and Drell-Yan production of sleptons with 80~GeV$ < m < $120~GeV.
With the increased energy of $\sqrt{s} = 1.96$~TeV in Run 2, the D0 collaboration published two searches for CLLPs, using 1.1~fb$^{-1}$~\cite{Abazov:2008qu} and 5.2~fb$^{-1}$~\cite{Abazov:2011pf} of data.
A follow-up paper combined these results and provided additional interpretations for them~\cite{Abazov:2012ina}.
Using time-of-flight measurements in the drift tubes of the muon detector and $dE/dx$ in the silicon microstrip tracker, D0 excluded long-lived gaugino-like and higgsino-like charginos below masses of 278~GeV and 244~GeV, respectively, and set cross-section upper limits for pair-production of staus with 100~GeV$ < m < $300~GeV in the $\mathcal{O}(10)$~fb range. Top squarks were excluded below a mass in the range 285-305~GeV, depending on the assumptions of the interactions between the $R$-hadron and the detector material.
The final statement on CLLPs from the CDF collaboration used 1.0~fb$^{-1}$ of 1.96~TeV $p\bar{p}$ data and included timing measurements from a new dedicated TOF detector~\cite{Aaltonen:2009kea}.
The result was interpreted as exclusions for pair-produced stop-based $R$-hadrons with $m < 249$~GeV, corresponding to upper cross-section limits in the vicinity of 50~fb.
\paragraph{Searches at LHC.}
The first CLLP searches at LHC were performed with the 7~TeV $pp$ data collected in 2010. The CMS collaboration used 3.1~pb$^{-1}$ to look for ID tracks with high $dE/dx$ measured in the silicon-strip tracking detector, with and without compatible tracks in the MS. Exclusion limits for $R$-hadrons were established for gluinos at $m < 398$~GeV (311~GeV for $R$-hadrons assumed to be neutral in the MS) and stops at $m < 202$~GeV~\cite{Khachatryan:2011ts}. Shortly afterwards, the ATLAS collaboration published a CLLP search using 34~pb$^{-1}$. They employed a combination of $dE/dx$ in the silicon pixel detector and time-of-flight measurements in the hadronic calorimeter, without requiring a track in the MS. This search raised the mass limits to up to 586~GeV for gluinos and 309~GeV for stops~\cite{Aad:2011yf}.
A separate ATLAS search, performed with the same data set, used a muon-like signature based on time-of-flight in both the MS and the calorimeter. This search yielded limits of up to 544~GeV for gluino-based $R$-hadrons and up to 120~GeV for direct pair-production of long-lived sleptons~\cite{Aad:2011hz}. The CMS measurement was repeated with 5.0~fb$^{-1}$ and additional use of time-of-flight information from the MS, extending the mass limits to up to 1098 (737)~GeV for gluinos (stops) and 223~GeV for staus~\cite{Chatrchyan:2012sp}. The following result from ATLAS used 4.7~fb$^{-1}$, and now combined all three discriminants from the ID, calorimeter, and MS. $R$-hadrons formed from gluinos, stops, and sbottoms were excluded up to a mass of 985, 683, and 612~GeV, respectively, and direct pair-production of long-lived sleptons with $m < 278$~GeV as well~\cite{Aad:2012pra}.
Using nearly 20~fb$^{-1}$ of $pp$ data at $\sqrt{s} = 8$~TeV, CMS extended the limit on the mass of gluino (stop) $R$-hadrons to up to 1322 (935)~GeV, and excluded directly pair-produced staus with mass below 339~GeV~\cite{Chatrchyan:2013oca}. These results were later reinterpreted by the collaboration in the frameworks of the phenomenological MSSM and AMSB in Ref.~\cite{Khachatryan:2015lla}. With approximately the same amount of 8~TeV data, ATLAS repeated the CLLP search and reached gluino, stop and sbottom mass limits of up to 1270, 900 and 845~GeV. Direct stau pair-production was excluded up to 290~GeV and interpretations were included for GMSB and LeptoSUSY models~\cite{ATLAS:2014fka}. With this data set, ATLAS also performed a search sensitive to lifetimes as low as $\tau=0.6$~ns, by requiring only that the CLLP leave an ID track with anomalous $dE/dx$~\cite{Aad:2015qfa}. For this signature, the lower-mass limits for gluino-based $R$-hadrons reached up to 750 (1250)~GeV for $\tau_{\tilde{g}} = 0.6~(10)$~ns, considering several different $R$-hadron decay possibilities and $\tilde{\chi}_1^0$ masses. Similarly, long-lived charginos nearly mass-degenerate with the $\tilde{\chi}_1^0$ LSP and with mass up to 239 (482)~GeV were excluded for $\tau_{\tilde{\chi}_1^\pm} = 1~(15)$~ns. The LHCb collaboration also performed a CLLP search with 3.0~fb$^{-1}$ of 7 and 8~TeV data, using their ring imaging Cherenkov detectors as a primary tool for identifying CLLPs. With zero events observed in the signal region, upper limits on the cross section were derived as a function of CLLP mass, assuming Drell-Yan-like pair-production kinematics in the pseudorapidity range $1.8 < |\eta| < 4.9$.
The resulting limits range from 3.4~fb at $m = 124$~GeV to 5.7~fb at $m = 309$~GeV~\cite{Aaij:2015ica}.
In Run-2 of the LHC (with $\sqrt{s} =13$~TeV), the full-detector CLLP search at ATLAS was performed with 3.2~fb$^{-1}$. With yields again matching those expected for the background-only hypothesis, the limits for gluino, stop, and sbottom $R$-hadrons were extended to 1580, 805 and 890 GeV, respectively~\cite{Aaboud:2016uth}.
CMS published a corresponding result using 2.5~fb$^{-1}$, reaching mass exclusions of up to 1610, 1040, and 240~GeV for long-lived gluinos, stops and directly
pair-produced sleptons~\cite{Khachatryan:2016sfv}. The ATLAS search using the ID-only signature was also performed with the 3.2~fb$^{-1}$ dataset~\cite{Aaboud:2016dgf}, and was recently repeated with 36~fb$^{-1}$ recorded in 2015-2016~\cite{Aaboud:2018hdl}. No significant excess was observed in the data, and gluino $R$-hadrons were excluded for a lifetime of 1 (10)~ns and mass of up to 1300 (2060)~GeV.
\subsubsection{Disappearing Tracks}
When a CLLP decays deep within the ID, and any charged particles produced in the decay are too soft to be reliably tracked, it can produce a disappearing-track signature. As discussed in Sec.~{\ref{sec:amsb}}, this signature is particularly motivated by AMSB SUSY models, where a heavy neutralino takes up most of the energy in the decay of the slightly heavier chargino.
Searches for disappearing tracks have been performed in Run-1 and Run-2 of the LHC by the ATLAS \cite{ATLAS:DT7TeV1,ATLAS:DT7TeV2,ATLAS:DT8TeV,ATLAS:DT13TeV} and CMS \cite{CMS:DT8TeV,CMS:DT13TeV} experiments. The first such search was performed by ATLAS~\cite{ATLAS:DT7TeV1} using $1.01~{\rm fb}^{-1}$ of $\sqrt{s}=7~\tev$ data. This and other Run-1 searches from the ATLAS collaboration looked for tracks with a limited number of hits in the outermost layers of the ID. This approach gives maximal sensitivity at a characteristic length scale of about $0.5$~m. The trigger strategy was to require at least one radiated jet giving rise to moderate missing transverse energy.
In Run-2 of the LHC, exploiting a newly inserted tracking layer at short radius, ATLAS was able to also pursue shorter lifetimes, with a characteristic scale of $0.3$~m~\cite{ATLAS:DT13TeV}. Along with the increase in energy to $13~\tev$ and the larger luminosity of $36.1~{\rm fb}^{-1}$, this resulted in significant improvement in the signal sensitivity over a range of model parameters.
CMS also pursued this signature in both Run-1~\cite{CMS:DT8TeV} and Run-2~\cite{CMS:DT13TeV}, the latter analysis using $38.4~{\rm fb}^{-1}$. Reflecting the fact that CMS has an all-silicon tracker, the characteristic length scale in this analysis was about $0.8$~m. A veto on calorimeter energy was used to help reduce SM backgrounds.
For these searches, the SM backgrounds tend to be hadrons scattering in material, charged leptons undergoing a large momentum change due to bremsstrahlung, and spurious tracks formed from uncorrelated detector hits. Control regions were used to obtain template momentum distributions for the background. These distributions were then fit to data in signal regions to test for the presence of signal.
The expected background yield in the Run-2 analyses was typically in the tens of events, largely dominated by the hadron background. The analyses observed no significant deviation from the SM expectation, and limits were set on the chargino lifetime and mass.
The LHC collaborations have not yet exploited a kinked-track signature. However, this has been done by the ALEPH experiment at LEP~\cite{Heister:2002vh}, and used to set limits on the stau mass and lifetime.
\subsubsection{Particles with Anomalous Electric Charge}
\label{sec:anomalous-charge}
\paragraph{Fractional Electric Charge.}
Since all free charges in the SM are integer multiples of the electron charge $Q_e$, identification of an LLP with a non-integer charge would be a clear sign of BSM physics. Various searches for such particles have been performed~\cite{Perl:2009zz}.
At LEP, limits were set on CLLPs with charges $\frac{2}{3}Q_e$, $\frac{4}{3}Q_e$, and $\frac{5}{3}Q_e$ at a range of collision energies from $91$~GeV to $209$~GeV~\cite{Abbiendi:2003yd,Abreu:1996py,Akers:1995az,Buskulic:1992mr}. At the Tevatron, the CDF experiment set limits on CLLPs with charges $\frac{2}{3}Q_e$ and $\frac{1}{3}Q_e$, excluding masses up to $250~\gev$~\cite{Acosta:2002ju}.
The CMS experiment performed a search for fractionally charged particles using $5.0~{\rm fb}^{-1}$ of $\sqrt{s}=7$~TeV data~\cite{CMS:fraccharge}. The search used tracks in the ID that appeared to have significantly lower ionization than minimum-ionizing particles. A background ionization template was obtained using a sample of $Z\rightarrow\mu^+\mu^-$ events to model the low-side tail of the SM ionization distribution. Since selected events were triggered by the presence of a muon, cosmic-ray muons were also considered as a potential source of background. Given the analysis criteria, a total SM background of $0.012\pm0.007$ events was expected. No events were observed, and limits were set on the mass of fractional-charge CLLPs. For charges $\frac{1}{3}Q_e$ and $\frac{2}{3}Q_e$, masses below $140$~GeV and $310$~GeV were excluded, respectively.
This search was augmented with $18.8~{\rm fb}^{-1}$ of $\sqrt{s}=8~\tev$ data, extending the mass bounds on $\frac{1}{3}Q_e$ and $\frac{2}{3}Q_e$ CLLPs to about $200$~GeV and $480$~GeV, respectively~\cite{Chatrchyan:2013oca}.
\paragraph{Multiple Electric Charge.}
In contrast to fractionally charged particles having unusually low ionization signatures, particles with large electric charge would leave strikingly large ionization signatures in particle detectors.
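Both regimes follow from the Bethe formula, in which the mean ionization energy loss at fixed velocity scales as the square of the particle's electric charge. A minimal numeric sketch of this scaling (the reference value of $\sim 2~{\rm MeV\,cm^2 g^{-1}}$ for a minimum-ionizing particle is a standard textbook figure, not taken from the searches discussed here):

```python
# Sketch: relative ionization energy loss scales as the square of the
# electric charge (Bethe formula, dE/dx ~ q^2 at fixed velocity).
MIP_DEDX = 2.0  # MeV cm^2/g, typical minimum-ionizing value (textbook figure)

def dedx_relative(q):
    """Mean dE/dx relative to a unit-charge MIP, for charge q in units of Q_e."""
    return q ** 2

# A q = 1/3 particle ionizes ~9x less than a MIP (hard to reconstruct),
# while a q = 6 particle ionizes ~36x more (strikingly large signature).
for q in (1 / 3, 2 / 3, 1.0, 2.0, 6.0):
    print(f"q = {q:4.2f} Q_e  ->  dE/dx ~ {dedx_relative(q) * MIP_DEDX:6.2f} MeV cm^2/g")
```

This quadratic suppression is why the fractional-charge searches above must model the extreme low-side tail of the SM ionization distribution.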
The ATLAS collaboration has performed multiple searches for such particles in Run-1 of the LHC with sensitivity to charges ranging from $2Q_e$ to $17Q_e$~\cite{atlas:multicharge1,atlas:multicharge2,atlas:multicharge3}. In Ref.~\cite{atlas:multicharge1}, the variables used to identify signal were ionization in the transition radiation tracking detector (TRT) and the electromagnetic shower shape in the ECAL. Multiply charged particles would be expected to leave an unusually large number of high-threshold hits in the TRT and very narrow showers in the ECAL. These properties were exploited in this search to construct a signal region with an expected background yield of $0.019\pm0.005$ events in $3.1~{\rm pb}^{-1}$ of $\sqrt{s}=7$~TeV collision data. No events were observed, and cross section limits were set between $1$ and $12$~pb for various charges and masses.
ATLAS also published a set of searches for multicharged CLLPs using ionization information from the MS~\cite{atlas:multicharge2,atlas:multicharge3}. In these searches, independent ionization measurements of a reconstructed muon candidate were performed in the ID and the MS. Two different signal-region selections were used, one for doubly charged particles and another for larger charges. These regions were expected to contain $0.013\pm0.002\textrm{(stat.)}\pm0.003\textrm{(syst.)}$ and $0.026\pm0.003\textrm{(stat.)}\pm0.007\textrm{(syst.)}$ background events, respectively, in $20.3~{\rm fb}^{-1}$ of $\sqrt{s}=8$~TeV data. No events were observed, and limits were placed on the allowed mass of such multicharged states. Lower mass limits range from $660$~GeV for a charge of $2Q_e$ to $760$~GeV for a charge of $6Q_e$.
A Run-1 search from ATLAS that targeted magnetic monopoles~\cite{Aad:2015kta} also excluded particles with charges in the range 10--60$Q_e$ and is described in Sec.~\ref{sec:monopole-searches}. The Run-1 CLLP search from CMS described in Sec.~\ref{sec:detectorstablecllps} also set limits on multiply charged particles, extending the mass bounds on $2Q_e$ and $6Q_e$ charges to about $690$~GeV and $780$~GeV, respectively~\cite{Chatrchyan:2013oca}.
\subsubsection{Magnetic Monopoles}
\label{sec:monopole-searches}
Magnetic monopoles have inspired searches at many colliders in the past decades, as well as in non-collider experiments~\cite{Tanabashi:2018oca,Eberhard:1971re,Ross:1973it,GRAF1991463}. In this section we review the energy-frontier searches performed at the Tevatron and at the LHC.
A general comment is in order about simulations used for interpreting the results of magnetic monopole searches. Pair production of monopoles is typically simulated as an electromagnetic process, most commonly Drell-Yan or photon fusion. This enables generation of events that can be used to study the detector response and determine the reconstruction and selection efficiencies. However, due to the large charge of magnetic monopoles, calculations and simulations that rely on perturbation theory are unreliable. Therefore, interpretation of searches in terms of model parameters is not trivial.
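For orientation, the minimal Dirac charge $Q_D$ used to express magnetic charges below follows from the Dirac quantization condition (a standard result quoted here in Gaussian units, not taken from the searches themselves):
\begin{equation}
e\,g = \frac{n \hbar c}{2}, \quad n ~\textrm{an integer}
\qquad\Rightarrow\qquad
Q_D = \frac{\hbar c}{2e} = \frac{e}{2\alpha} \simeq 68.5\, Q_e ,
\end{equation}
so that the effective magnetic coupling $\alpha_g = Q_D^2/(\hbar c) = 1/(4\alpha) \approx 34$ is far too large for a perturbative expansion, which underlies the caveat above.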
\paragraph{Searches with General-Purpose Detectors.}
The CDF experiment has searched for magnetic monopoles created in $p\bar p$ collisions at the Tevatron and traversing the ID tracker~\cite{Abulencia:2005hb}. Using the plastic scintillator time-of-flight counters surrounding the central outer tracker of the detector, a dedicated trigger was implemented based on the extreme ionization signature expected for a magnetic monopole. The events recorded by this trigger were scrutinized for tracks that do not bend in the azimuthal direction. No signal was found, and an upper limit on the production cross section for monopoles with mass in the range $200 < m < 800$~GeV was set at 0.2~pb.
At the LHC, the ATLAS experiment has published searches for magnetic monopoles in $pp$ collisions at $\sqrt{s} = 7$~TeV~\cite{Aad:2012qi} and 8~TeV~\cite{Aad:2015kta}. Similar to the searches for highly charged particles reported in Sec.~\ref{sec:anomalous-charge}, the monopole searches used the fraction of high-threshold hits in the TRT and the electromagnetic shower shape in the ECAL to identify magnetic monopoles through their high ionization. The observed signal yields were compatible with the expectations for the background-only hypothesis. Two types of upper limits on the production cross section were extracted. The first assumed monopole pair production with kinematics generated from a Drell-Yan process. The second was a set of model-independent limits for a single monopole produced in fiducial regions defined by transverse kinetic energy and pseudorapidity, in which the selection efficiency was uniform and high. For the fiducial-volume analysis, the most stringent upper cross-section limit was 0.5~fb for the production of a monopole with a magnetic charge between $0.5Q_{D}$ and $2Q_{D}$ and mass in the range 200~GeV~$< m <$~2500~GeV~\cite{Aad:2015kta}.
For the case of Drell-Yan kinematics, cross-section limits were quoted for both spin-0 and spin-1/2 monopole hypotheses, excluding Dirac-monopole masses below 430~GeV and 700~GeV, respectively.
\paragraph{Searches with Dedicated Detectors.}
Another method of searching for magnetic monopoles at colliders involves removing material exposed to collision particles and examining it offline. Such searches have used two main techniques. The first is to examine the material for the presence of magnetic monopoles that have stopped within it, using a superconducting quantum interference device (SQuID), in which a captured monopole would induce a permanent current. The second involves searching for the characteristic tracks caused by the passage of highly ionizing particles through the material.
Parts of the CDF (lead from the Forward EM calorimeter and an aluminum cylinder) and D0 (beryllium beam pipe and aluminum cylinders) detectors were examined with the dedicated E-882 SQuID experiment~\cite{Kalbfleisch:2003yt}. The analyzed parts had been exposed to approximately 175~pb$^{-1}$ of $p\bar{p}$ collisions at $\sqrt{s} = 1.8$~TeV. No signal was seen. Assuming monopole kinematics of Drell-Yan-like pair-production, upper limits on the cross-section in the range $0.07$ to $0.2$~pb were determined for magnetic charges of 1, 2, 3, and 6 times $Q_{D}$.
The track-based technique was used in a search performed at the E0 interaction region of the Tevatron~\cite{0295-5075-12-7-007}. This search used stacks of plastic sheets, in which a highly ionizing particle would damage the chemical bonds around its trajectory, leaving a permanent track. After exposure to particles created in $p\bar{p}$ collisions at $\sqrt{s} = 1.8$~TeV, the plastic sheets were etched with NaOH, creating a visible hole where a highly ionizing particle had passed through. No coinciding holes between layers were observed, and an upper cross section limit of $0.2$~nb was set for monopoles with magnetic charge of $0.5Q_D$ or higher and mass up to 850~GeV.
The \textit{Monopole and Exotics Detector At the LHC} (MoEDAL) is a current experiment specifically designed to search for monopoles and other highly ionizing particles produced at the LHC and entering material surrounding the IP~\cite{Acharya:2014nyr}. Exploiting the fact that the LHCb detector is a single-arm spectrometer, MoEDAL is situated on the other side of the IP. The experiment consists primarily of two passive detector subsystems.
The first subsystem is the Magnetic Monopole Trapper (MMT), consisting of blocks made of aluminum, chosen for its anomalously large nuclear magnetic moment and hence expected ability to capture a magnetic monopole. MMT elements are examined by a SQuID system with a sensitivity of $0.1Q_D$. Subsequently, they are to be stored in a deep underground detector to search for late annihilations or decays of any heavy trapped particles.
The second subsystem, the Nuclear Track Detector (NTD), consists of stacks of aluminum-housed plastic mounted around the LHCb Vertex Locator (VELO) detector. The NTDs can be removed and inspected by scanning microscopes to search for tracks created by highly ionizing particles.
MoEDAL has published three sets of results with increasing exposure to LHC collisions~\cite{MoEDAL:2016jlb, Acharya:2016ukt, Acharya:2017cio}. The results were obtained only from SQuID analysis of the MMT. As in the case of the ATLAS analysis described above, the MoEDAL results were interpreted under the assumption of a Drell-Yan-like production process, and also presented as model-independent upper limits on production cross sections in fiducial volumes.
The most recent paper~\cite{Acharya:2017cio}, based on the exposure of 222~kg of trapping material to $2.11~{\rm fb}^{-1}$ of data collected at $\sqrt{s}=13~\tev$, provides the most stringent results. No monopole candidates were found, and monopole-pair production cross-section upper limits between 40 and 105~fb were set for magnetic charges up to $5Q_D$ and masses up to 6~TeV. Monopole mass limits between 490 and 1790~GeV were obtained.
\section{Future Searches and Experiments}
\label{sec:future}
As seen from the review of results in Sec.~\ref{sec:searches}, interest in LLPs as a means for probing and discovering BSM physics has been growing rapidly. While LLP searches are being carried out, methods for exploiting new experimental signatures are constantly being developed, and the data samples available for analysis keep growing. Therefore, we conclude that the coming years will see significant expansion in LLP physics. In this section we go beyond the current searches and discuss the outlook for LLP searches at future facilities.
\subsection{Future Searches at Collider Experiments}
Approved and proposed collider experiments generally feature a large increase in the integrated luminosity relative to past or currently operating facilities of a similar nature. The large samples collected will lead to significant improvements in the sensitivity of BSM searches. In searches that are dominated by SM background, the signal and background yields grow together, so that the resulting sensitivity to the production rates of BSM particles is proportional to the square root of the integrated luminosity. As seen in Sec.~\ref{sec:searches}, many LLP searches have very low background levels, since they are often able to exploit experimental signatures (reviewed in Sec.~\ref{sec:split-susy}) that are not available to prompt BSM searches. In such zero-background cases, the sensitivity grows roughly linearly with the integrated luminosity. LLP searches, which frequently target spectacular signatures with tiny irreducible backgrounds, are therefore particularly interesting to pursue at future collider facilities. Details of these facilities and their LLP capabilities are discussed in this section.
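The two scaling regimes can be made concrete with a toy significance estimate, taking both signal and background yields to grow linearly with the integrated luminosity (the cross-section values and the three-event zero-background criterion below are illustrative placeholders, not numbers from any of the searches discussed):

```python
import math

# Toy illustration: how the smallest excludable signal cross section
# scales with integrated luminosity L in two regimes.

def min_xsec_bkg_dominated(lumi, bkg_xsec=1.0, n_sigma=2.0):
    """Background-dominated search: significance ~ S/sqrt(B), with S and B
    both growing linearly in lumi, so the reach improves like 1/sqrt(lumi).
    bkg_xsec and n_sigma are illustrative placeholders."""
    return n_sigma * math.sqrt(bkg_xsec / lumi)

def min_xsec_zero_bkg(lumi, n_events=3.0):
    """Zero-background search: exclusion requires roughly 3 expected signal
    events (95% CL Poisson), so the reach improves like 1/lumi."""
    return n_events / lumi

# A tenfold luminosity increase (e.g. 300 -> 3000 fb^-1) buys a factor of
# sqrt(10) ~ 3.2 in a background-dominated search, but a full factor of 10
# in a zero-background one.
gain_bkg = min_xsec_bkg_dominated(300.0) / min_xsec_bkg_dominated(3000.0)
gain_zero = min_xsec_zero_bkg(300.0) / min_xsec_zero_bkg(3000.0)
```

The factor-of-10 versus factor-of-3.2 gain is the quantitative reason low-background LLP searches profit disproportionately from high-luminosity running.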
\subsubsection{ATLAS and CMS}
As mentioned in Sec.~\ref{sec:intro}, only a small part of the roughly $150~{\rm fb}^{-1}$ data samples so far collected by each of ATLAS and CMS has been used for LLP searches. Full exploitation of these samples is expected by the start of Run-3 in 2021, which will be at $\sqrt{s} = 14$~TeV. After this running period, each experiment will have collected about $300~{\rm fb}^{-1}$ by 2024, approximately evenly split between 13 and 14~TeV. In addition to the slight energy increase and doubling of the integrated luminosity, analysis techniques can improve and mature significantly during this period, leading both to increased sensitivity and to significant expansion of the model-parameter space explored by LLP searches.
The subsequent High-Luminosity phase of the LHC (HL-LHC) is approved and funded. ATLAS and CMS are each to collect an integrated luminosity of about $3000~{\rm fb}^{-1}$ over a dozen years of operation, starting with Run-4 in 2026~\cite{hl-lhc}.
The increased luminosity of the HL-LHC comes at the price of dramatically increased pileup, and the existing LHC experiments will undergo comprehensive upgrades for Run-4, both to withstand the high particle rates and to improve the sensitivity of precision measurements and searches for new physics in this challenging experimental environment. Some of the upgrades will provide enhanced capabilities for detecting LLPs, and we briefly comment here on some important aspects of these upgrades.
Improved ID tracking detectors will be installed, with higher granularity and improved resolutions to enable effective measurements of charged-particle tracks at luminosities nearly an order of magnitude larger than that of the present-day LHC. The layouts of these detectors can allow for improved tracking efficiencies at large displacements and less strict disappearing-track conditions, likely resulting in significant gains in LLP sensitivity. For example, these advances may make it possible for ID DV searches to loosen vertex mass and track multiplicity requirements, thus expanding the model space covered by such searches.
Despite these improvements, some LLP signatures will be negatively affected by the ID upgrades. For example, both ATLAS and CMS are building their trackers with only a few bits of digital information allocated to the measured charge deposition. This will degrade the resolution of ionization measurements in these new detectors relative to their current incarnations~\cite{atlasitkpixels,atlasitkstrips,cmsphaseiitracker}.
The barrel calorimeter systems will be upgraded with new readout electronics aiming to provide improved timing resolution, down to 30~ps for photons with $\pT > 25$~GeV in the CMS ECAL barrel at the start of the HL-LHC~\cite{cmsphaseiibarrelcalo}. Sensitivity to anomalous calorimeter-deposit signatures can be greatly improved in the endcap of CMS with the High-Granularity Calorimeter proposed for the HL-LHC upgrade program~\cite{Collaboration:2293646}. With calorimeter cell areas of order 1~\si{\centi\metre\square} and an expected per-cell timing resolution on the order of tens of ps, measurements could identify showers that do not point back to the PV or are delayed. In ATLAS, the readout electronics for the electromagnetic and hadronic calorimeters will also be updated to improve performance at the HL-LHC~\cite{atlasphaseiilar,atlasphaseiitile}.
In addition, both ATLAS and CMS have recently added very precise MIP timing detectors to their Phase-II upgrade plans, and these can have a significant effect on LLP searches. They are primarily motivated by increased pileup and the need to exploit the time dimension of the beam spot to discern individual vertices and enable accurate track-to-vertex association despite the high vertex density. Aiming to provide timing measurements with 30~ps accuracy for all MIPs in their acceptance, they offer timing measurements of passing LLPs or their decay products with a resolution that is orders of magnitude better than what is currently achievable. As this track-to-vertex association is most challenging for tracks at shallow angle with the beam line, the High-Granularity Timing Detector will cover $2.4 < |\eta| < 4.0$ in ATLAS~\cite{atlashgtd}. Instead prioritizing coverage in the barrel region, the CMS MIP Timing Detector will cover $|\eta| < 3.0$~\cite{Collaboration:2296612}. These detectors will provide exciting new capabilities for LLP searches. The improvements may allow for reconstruction of the mass of a heavy LLP decaying with a displaced-vertices signature by using measurements of both momenta and times for charged tracks and photons, for example~\cite{Liu:2018wte}. For these applications, CMS should have an advantage given that LLPs are primarily produced centrally in many models.
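The mass reconstruction mentioned above combines a momentum measurement with a velocity obtained from precise timing. A minimal sketch of the underlying time-of-flight relation, $m = p\sqrt{1/\beta^2 - 1}$ (the kinematic values below are illustrative, not from the upgrade documents):

```python
import math

C = 0.299792458  # speed of light in m/ns

def tof_mass(p_gev, path_m, time_ns):
    """Particle mass (GeV) from momentum p (GeV), flight path (m), and
    measured flight time (ns), via m = p * sqrt(1/beta^2 - 1)."""
    beta = path_m / (C * time_ns)
    if beta >= 1.0:
        # Measured time consistent with (or faster than) light speed:
        # treat as massless / ultra-relativistic.
        return 0.0
    return p_gev * math.sqrt(1.0 / beta ** 2 - 1.0)
```

Because $\beta \to 1$ rapidly for light particles, only timing resolutions at the tens-of-ps level over meter-scale flight paths make this kind of mass determination possible for heavy LLPs or their decay products.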
A challenge faced by modern LLP searches is that the trigger systems of the detectors are largely designed for prompt particle production. As a result, current and past experiments may be blind to particular regions of model space. Some upgrades for the HL-LHC can directly address this challenge.
ATLAS and CMS plan major redesigns of the trigger and data-acquisition systems for the HL-LHC, introducing tracking capabilities in their hardware-level trigger systems. The upgraded CMS tracker design includes a series of double layers that will provide very fast reconstruction of short track stubs~\cite{cmsphaseiitrigger}. This will enable global-event track-stub tracking down to $\pt>2$~GeV at the HL-LHC collision rate of $40$~MHz, providing this information for the first time as input to the first-level trigger decision. The ATLAS upgrade program includes hardware tracking systems that will be usable in the trigger. In regions of interest, tracking may be performed with high efficiency for charged particles with $\pt>4$~GeV at an input rate of about $1$~MHz~\cite{atlasphaseiitdaq}. The ATLAS hardware-based tracking system will do fast track-finding through pattern matching, and though the memory banks will primarily be populated with patterns that optimize the performance for prompt high-$\pT$ tracks, a portion could be dedicated to tracks with large impact parameters, \emph{i.e.}, specifically targeting tracks from displaced decays. This could allow triggering directly on the decay products from a displaced vertex and could bring valuable sensitivity gains for event topologies that trigger limitations have so far prevented LLP searches from examining. The CMS hardware tracking can similarly relax the requirements placed on the track stubs to allow for additional displaced sensitivity at the trigger level.
\subsubsection{LHCb}
The LHCb detector will undergo extensive upgrades to enable it to collect $50~{\rm fb}^{-1}$ by 2028~\cite{Bediaga:2012uyd, Piucci:2017kih}. Of particular relevance to LLP searches, the current silicon microstrip sensors of the vertex locator (VELO) will be replaced with pixel sensors, to withstand higher track multiplicity, simplify reconstruction, and improve resolution. The distance of the VELO from the IP, which is currently $8.4~\mm$, will be reduced to $5.1~\mm$. The amount of material traversed by a particle before the first VELO hit will be reduced from 4.6\% to 1.7\% of a radiation length. These measures will lead to a 40\% improvement in the track impact-parameter resolution, leading to better prompt-background rejection and hence increased sensitivity for LLPs with small lifetimes. A new silicon-microstrip upstream tracker (UT) will improve reconstruction of LLP decays that occur after the VELO. Lastly, the hardware trigger system will be removed, and events will be selected by a software-only trigger system at the LHC collision rate.
The LHCb collaboration is interested in conducting further upgrades to allow the experiment to collect a data sample of $300~{\rm fb}^{-1}$ by the currently foreseen end of the LHC program in the 2030s. Maintaining or improving the performance of the detector given the high instantaneous luminosity will require tracking detectors with precise timing information. Similarly, increased use of FPGA and GPU technology is being explored for meeting the challenging trigger performance requirements.
\subsubsection{Belle~II}
The Belle~II experiment~\cite{Abe:2010gxa} will begin taking physics data with the full detector in 2019, and is planned to collect about $50,000~{\rm fb}^{-1}$ by the year 2025, with potential subsequent upgrades. In addition to the large luminosity increase over previous $B$~factories, Belle~II features a factor-of-2 improvement in the spatial resolution of the vertex detector. Coupled with a very small beamspot, this will increase the sensitivity of LLP searches at small distances.
Due to the relatively low particle multiplicity at a $B$~factory, the trigger requirements are much looser than at the LHC. In particular, Belle~II will even accept events with a single photon and no tracks. This increases the likelihood of retaining sensitivity to searches that have not yet been conceived. In addition, reconstruction of tracks with large $d_0$ values is much less difficult than at a high-multiplicity hadron collider. Similarly to LHCb, Belle~II is designed to cleanly identify charged particle types based on $dE/dx$ and Cherenkov-radiation measurements, a capability that can be used for CLLP detection.
Predicting the sensitivity of Belle~II to a range of LLP-predicting models is hampered by the fact that very few LLP searches have been conducted at a $B$~factory. Nonetheless, one can expect that Belle~II will have an advantage when it comes to light particles that would be difficult to trigger on at ATLAS and CMS, as well as final states involving hadrons plus missing particles or photons, which would be difficult to identify at LHCb. Studies involving $\tau$ leptons are also of interest, exploiting the clean environment and the roughly 1~nb cross section for $e^+e^-\to\tau^+\tau^-$ at $B$-factory energies.
\subsubsection{Proposed Colliders}
Beyond the timescale of the approved colliders and their experiments, proposals have been made for future colliders at the energy frontier.
In the LHC tunnel, the proposed \emph{High-Energy LHC} (HE-LHC) would operate with dipole magnetic fields of $20$~T~\cite{Assmann:1284326}. With $pp$ collisions taking place at a center-of-mass energy of $\sqrt{s}=33$~TeV, production cross sections would be much larger than at the LHC, particularly for multi-TeV BSM particles.
Further increase in heavy BSM sensitivity could come from the proposed \emph{Future Circular Collider} (FCC) at CERN~\cite{fcc-hh} or the \emph{Super proton-proton Collider} (SppC) at the Institute of High-Energy Physics (IHEP) in China~\cite{CEPC-SPPCStudyGroup:2015csa}. With a circumference of 80-100~km, these $pp$ colliders would reach energies of $\sqrt{s}=100~\tev$.
LLP searches can be expected to be an important part of the physics that would be performed at these facilities~\cite{Arkani-Hamed:2015vfh}.
New electron-positron colliders have been proposed as well.
The \emph{International Linear Collider} (ILC) is a linear $e^+e^-$ collider currently proposed for construction in Japan. Collisions would be detected in two experiments at center-of-mass energies of $\sqrt{s}=500$~GeV, with opportunities for upgrades up to $1$~TeV. A cost-saving $\sqrt{s}=250$~GeV~\cite{ilc} Higgs-factory configuration would enable precision studies of the Higgs boson, produced via $e^+e^-\to ZH$. While the cross section for this process is much smaller than that of Higgs production at LHC, the advantage of such a Higgs factory lies in the low-background environment and well understood cross sections of $e^+e^-$ collisions. However, since LLP searches typically have low backgrounds even at LHC, the case for LLP studies at a Higgs factory is largely limited to LLP decays that are difficult to reconstruct in hadron collider environments, such as low multiplicity decays or decays to weakly interacting particles.
Operating at the same energy scale as the ILC but with more than an order of magnitude higher luminosity, FCC-ee is a circular $e^+e^-$ collider proposed for the 100~km FCC tunnel~\cite{Gomez-Ceballos:2013zzn}. With center-of-mass energies between $90$ and $400$~GeV, this machine could be used for high-precision $Z$, Higgs, and top-quark physics. Unlike the ILC, the circular configuration would not enable increasing the $e^+e^-$ collision energies beyond about $400~\gev$. A similar initiative, known as the Circular Electron-Positron Collider (CEPC), is also being proposed by IHEP~\cite{CEPCStudyGroup:2018rmc}.
On a longer timescale, an even higher-energy $e^+e^-$ collider has been proposed for potential installation at CERN. The \emph{Compact Linear Collider} (CLIC) would use very high-field radio-frequency technology to reach collision energies from $380$~GeV to $3$~TeV~\cite{clic}.
\subsection{Proposed Dedicated LLP Experiments at the LHC}
Several experiments dedicated to the search for LLPs have been proposed for the LHC. Targeting longer lifetimes than those that can be accessed by the main detectors, these dedicated experiments tend to be located at a significant distance from the IP, cleverly taking advantage of existing open space. These proposals span a wide range of maturities, and some have already collected data with test stands to provide proof of concept and obtain background estimations as input to detector design. None of these projects are fully funded.
The MATHUSLA experiment~\cite{Chou:2016lxi,Evans:2017lvd,Curtin:2018mvb,Alpigiani:2631491} would be an enormous tracking detector, roughly $200\times200\times20~\mbox{m}^3$ in size, that would sit at the surface roughly $100$~m above either the CMS or ATLAS cavern. A modular array of trackers would fill this large volume, shielded by close to $100$~m of earth from almost all backgrounds produced in the HL-LHC collisions.
Neutral LLPs with very large lifetimes produced in the collisions may decay within the volume of MATHUSLA, where displaced vertices could be reconstructed. The sensitive volume extends downward into the earth for decays into penetrating, energetic muons. Timing and pointing resolutions would allow for vetoes of cosmic backgrounds, as well as identification of promptly produced energetic muons, which could penetrate the earth shield and would be used for calibration and alignment. Initial estimates indicate that with $3000~{\rm fb}^{-1}$ of data, MATHUSLA would be sensitive to LLPs with lifetimes up to the $\tau\lesssim10~\mu$s limits obtained from Big-Bang Nucleosynthesis for some models.
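The geometric acceptance behind such lifetime estimates follows from the exponential decay law: the probability that an LLP with lab-frame boost $\beta\gamma$ decays between distances $L_1$ and $L_2$ from its production point. A minimal sketch (the distances and boost below are illustrative, not taken from the MATHUSLA studies):

```python
import math

C_M_PER_S = 299792458.0  # speed of light, m/s

def decay_probability(l1_m, l2_m, tau_s, beta_gamma):
    """Probability that an LLP with proper lifetime tau (s) and boost
    beta*gamma decays at a distance between l1 and l2 (m) from its
    production point, via P = exp(-l1/d) - exp(-l2/d) with lab-frame
    decay length d = beta*gamma*c*tau."""
    d = beta_gamma * C_M_PER_S * tau_s
    return math.exp(-l1_m / d) - math.exp(-l2_m / d)

# Illustrative: a decay volume spanning 100-120 m from the IP, an LLP with
# beta*gamma = 3 and a proper lifetime of one microsecond.
p_decay = decay_probability(100.0, 120.0, 1e-6, 3.0)
```

The probability peaks when the lab-frame decay length is comparable to the distance to the detector, which is why a surface detector roughly $100$~m from the IP targets lifetimes far beyond the reach of the main experiments.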
The milliQan experiment~\cite{Ball:2016zrp} proposed for the HL-LHC would live in an unused underground tunnel near the CMS cavern with about $15$~m of rock shielding between the IP and the detector. The experiment would be sensitive to fractionally charged LLPs with electrical charge as low as $\mathcal{O}(10^{-3}-10^{-2})$. milliQan is estimated to be sensitive to LLP masses of up to $\mathcal{O}(1-10)$~GeV with $300~{\rm fb}^{-1}$ of collision data, significantly improving upon the reach of previous experiments.
The FASER experiment~\cite{Feng:2017uoz} would be situated hundreds of meters downstream from the IP of the ATLAS or CMS experiment, beyond the point at which the beams curve away.
Placed at 0~degrees (infinite pseudorapidity) relative to the collision axis, FASER would search for neutral LLPs with particular sensitivity to production of sub-GeV dark photons.
The CODEX-b~\cite{Gligorov:2017nwh} tracking detector would look for displaced vertices in a $10~\mbox{m}^3$ volume behind $3$~m of shielding about $25$~m from the LHCb IP. Placed at a large angle with respect to the beam line, CODEX-b would cover the uninstrumented low-pseudorapidity region of the LHCb IP. If the decommissioned DELPHI detector parked in that location could be removed, CODEX-b could potentially double in volume.
Another proposal, AL3X~\cite{Gligorov:2018vkc}, involves repurposing parts of the ALICE detector for Run-5 of the LHC. The ALICE magnet system, itself inherited from the L3 experiment, and the ALICE TPC would be used to measure the decays of LLPs produced in collisions that would take place at an IP shifted by about 11~m relative to the current ALICE IP. Moving the IP would allow insertion of shielding between the IP and the detector to remove SM backgrounds. The high-quality tracking provided by the ALICE TPC would provide active background rejection in addition to that provided by the passive shielding and roughly $100$~m of earth protecting the detector from most cosmic backgrounds. For detection of LLP signals from exotic Higgs decays, dark photons, and exotic $B$ decays, the estimated sensitivity of AL3X with only $100~{\rm fb}^{-1}$ is competitive with those of searches at ATLAS/CMS, CODEX-b, and MATHUSLA performed with $3000~{\rm fb}^{-1}$. This advantage of AL3X arises from its closer distance to the IP, the large solid-angle coverage provided by a large detector, and the low background provided by shielding and precise tracking. If the physics priorities of ALICE allow for this repurposing on the Run-5 timescale, the IP can be moved, and the LHC can deliver $100~{\rm fb}^{-1}$ to what is currently a low-luminosity IP, then the AL3X proposal is an attractive potential path forward in the search for LLPs.
We note another possibility for placing a shielded, large-scale tracker at a distance of order 20~m from a high-luminosity IP at the LHC. The ATLAS detector has a roughly 6-m-long gap between the farthest endcap muon trigger chambers and the last precision-tracking station of the MS, which is placed next to the cavern wall~\cite{1748-0221-3-08-S08003, ATLAS:1999uwa}. Used mainly for measuring the curvature of hard muons in the toroidal magnetic field, the precision-tracking station is equipped with azimuthally oriented drift tubes. Augmenting these with layers of radially oriented tubes of the same technology would make for a full tracker with precise three-dimensional vertexing capability. Protecting this tracker from hadronic background produced at the IP by partly filling the gap with shielding would make for an LLP detector at about 20~m from the ATLAS IP, in the approximate pseudorapidity range $1.3<\eta<2.7$ and with full azimuthal coverage. Searching for a DV signature, this setup would be particularly sensitive to LLPs that decay into final states containing muons inside the large volume of the shielding, but also to decays into hadrons in the air gap between the shielding and the detector. The full ATLAS endcap systems would provide a powerful veto against hard muons produced at the IP. Unless the tracker is equipped with a magnetic field, it could not directly measure the mass of the LLP. However, a lower limit on the mass could be obtained from the distance that the daughter tracks travel inside the shielding from their production point at the DV. Since the LHC is already designed to provide high luminosity at this IP, no change to the collider would be required, and the luminosity integrated by the new detector would be equal to that collected by ATLAS.
Negative impact on the prompt-physics program of ATLAS, if any, would be small. A similar configuration could also be constructed at CMS, given its modular design: some of the movable slices of the detector could be moved farther from the IP, and shielding could be inserted similarly. The feasibility of these options in terms of cost, mechanical engineering, etc., has yet to be evaluated.
\section{Glossary}
\label{sec:glossary}
\begin{description}
\item[ALEPH:]
an experiment at LEP.
\item[ALP:] axion-like particle.
\item[AMSB:] anomaly-mediated supersymmetry breaking.
\item[ATLAS:] an experiment at LHC.
\item[BABAR:] an experiment at the SLAC laboratory, USA, 1999-2008.
\item[BELLE:] an experiment at the KEK laboratory, Japan, 1999-2010.
\item[BELLE~II:] an experiment at KEK, Japan, 2018-2025.
\item[BSM:] beyond the Standard Model.
\item[CDF:] an experiment at the Tevatron.
\item[CERN:]
a particle-physics laboratory, Switzerland/France.
\item[CEPC:]
a proposed $\sqrt{s}\sim 250$~GeV circular $e^+e^-$ collider in China.
\item[CLLP:] charged, long-lived particle.
\item[CMS:] an experiment at LHC.
\item[DELPHI:] an experiment at LEP.
\item[DM:] dark matter.
\item[DV:] displaced vertex.
\item[D0:] an experiment at the Tevatron.
\item[ECAL:] electromagnetic calorimeter.
\item[FCC-ee:] a proposed $\sqrt{s}=90-360$~GeV $e^+e^-$ collider at CERN.
\item[FCC-hh:] a proposed $\sqrt{s}=100$~TeV hadron collider at CERN.
\item[FPGA:] field-programmable gate array.
\item[GMSB:] gauge-mediated supersymmetry breaking.
\item[GPU:] graphics processing unit.
\item[GUT:] grand unified theory.
\item[HCAL:] hadronic calorimeter.
\item[HERA:] a hadron-electron collider at the DESY laboratory, Germany.
\item[HL-LHC:] the high-luminosity phase of LHC, after 2026.
\item[HIP:] highly ionizing particle.
\item[ID:] inner detector (tracker).
\item[IP:] the average interaction point of colliding beams.
\item[L3:] an experiment at LEP.
\item[LEP:] Large Electron-Positron collider, a $\sqrt{s}=90-209$~GeV $e^+e^-$ collider at CERN, 1989-2000.
\item[LHC:] Large Hadron Collider,
a $\sqrt{s}=7-14$~TeV $pp$ collider at CERN.
\item[LHCb:] an experiment at LHC.
\item[LLP:] long-lived particle.
\item[LSP:] lightest supersymmetric particle.
\item[MIP:] minimally ionizing particle.
\item[MoEDAL:] a magnetic-monopole and HIP search experiment at LHC.
\item[MS:] muon system.
\item[MSSM:] minimal supersymmetric standard model.
\item[NLSP:] next-to-lightest supersymmetric particle.
\item[OPAL:] an experiment at LEP.
\item[pNGB:] pseudo-Nambu-Goldstone boson.
\item[PV:] primary vertex, point of beam-particle collision in a particular event.
\item[QCD:] quantum chromodynamics.
\item[RPV:] $R$-parity violation.
\item[SM:] Standard Model of particle physics.
\item[SppC:] a proposed $\sqrt{s}=100$~TeV $pp$ collider in China.
\item[SQuID:] superconducting quantum interference device.
\item[SUSY:] supersymmetry.
\item[Tevatron:] a $1.8-1.96$~TeV $p\bar p$ collider at Fermilab, USA, 1983-2011.
\item[vev:] vacuum expectation value.
\item[WIMP:] weakly interacting massive particle.
\end{description}
\subsection{Indirect Detection: Reconstruction of LLP Decays}
This section reviews existing indirect detection searches, categorized by the detector system in which the LLP decay takes place: the tracking system (Sec.~\ref{sec:ID-based}), the calorimeters (Sec.~\ref{sec:calo-based}), and the muon system (Sec.~\ref{sec:ms-based}). We refer the reader to Sec.~\ref{sec:detectors} for details on the use of these subsystems for LLP searches.
\subsubsection{Searches Based on Inner Detector Signatures}
\label{sec:ID-based}
The ID provides precise tracking of charged particles, allowing one to measure their displacement relative to the IP and to identify displaced vertices (DVs). As a result, the largest number of existing indirect searches rely on ID signatures.
\paragraph{Tevatron and LEP Searches.}
We begin with searches conducted before the turn-on of the LHC. These include analyses performed by the CDF and D0 collaborations at the Tevatron $p\bar p$ collider at a center-of-mass energy of $\sqrt{s}=1.96~\tev$, as well as by the ALEPH, DELPHI, L3, and OPAL collaborations at the LEP $e^+e^-$ collider.
CDF has searched for displaced $Z$ bosons in the $Z\to e^+e^-$ decay signature~\cite{Abe:1998ee}. The invariant mass of the $e^+e^-$ pair was required to be consistent with that of the $Z$. The signal yield was extracted by examining the distribution of the radial position $\rho_{\rm DV}$ of the displaced vertices. For $\rho_{\rm DV}>0.1~\cm$, the background yield expected from the $\rho_{\rm DV}$ uncertainty distribution was 1 event, and 4 events were observed. Upper limits on the cross section for production of a $Z$ boson were calculated as a function of $\lambda_{xy}\equiv \gamma\beta_T c\tau$, where $\gamma\beta_T$ is the transverse Lorentz boost of the LLP parent of the $Z$ and $\tau$ is its lifetime.
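For illustration, the transverse decay-length variable used in these limits can be evaluated directly. The sketch below is not CDF analysis code, and the kinematic values are hypothetical; it simply computes $\lambda_{xy}=\gamma\beta_T c\tau$ from an assumed parent transverse momentum, mass, and proper lifetime:

```python
# Illustrative calculation of lambda_xy = gamma * beta_T * c * tau,
# the transverse decay-length scale used to parameterize the limits.
C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def lambda_xy(pT_gev, mass_gev, tau_ns):
    """gamma*beta_T = pT/m, so lambda_xy = (pT/m) * c * tau, in mm."""
    return (pT_gev / mass_gev) * C_MM_PER_NS * tau_ns

# A hypothetical 100 GeV parent with pT = 50 GeV and tau = 0.1 ns
# has a transverse decay-length scale of about 15 mm:
print(round(lambda_xy(50.0, 100.0, 0.1), 2))
```

A decay length of this size would comfortably satisfy the $\rho_{\rm DV}>0.1~\cm$ requirement quoted above.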
The D0 collaboration has searched for a LLP that decays into a $\mu^+\mu^-$ pair and potentially an additional neutrino~\cite{Abazov:2006as}.
The analysis used a data sample of $0.38~{\rm fb}^{-1}$ collected at $\sqrt{s}=1.96~\tev$.
Muons were required to have an impact parameter of at least 0.1~mm. The di-muon DV was required to be at a radius of $5<\rho_{\rm DV}<20~\cm$. The expected background was determined to be $0.75\pm 1.1$ events by linear extrapolation of the numbers of events in sideband regions where signal contamination is small relative to the signal region. No events passing the final criteria were observed, and limits on the production cross section as a function of LLP lifetime were set in the context of RPV neutralino decays.
Using a data sample of $3.6~{\rm fb}^{-1}$ at $\sqrt{s}=1.96~\tev$, D0 has searched for pair production of LLPs, with each decaying into a $b\bar b$ quark pair~\cite{Abazov:2009ik}. The use of $b$ quarks was motivated by the presence of muons from bottom hadron decays, which were used for triggering. Events were required to contain two DVs, each with at least four tracks and a radius $\rho_{\rm DV}>1.6~\cm$. The signal yield was determined from the distributions of the minimal invariant mass of the tracks originating from each of the DVs, and the minimal collinearity. The expected background yield was about 5 events, consistent with the observed yield. Limits were set on the cross section for production of a standard-model Higgs boson that decays into two long-lived hidden-valley scalars, each of which decays into a $b\bar b$ pair.
The DELPHI collaboration has searched for a LLP in a sample containing $3.3\times 10^6$ $e^+ e^- \to Z$ events, with the $Z$ decaying hadronically~\cite{Abreu:1996pa}. The LLP signature was a DV formed from at least two tracks at a radius of $\rho_{\rm DV}>12~\cm$. The DV was required to be isolated, pass a loose collinearity cut, and have a momentum of at least 3~GeV. No events passed the full set of criteria, and limits on the branching fraction for $Z\to \nu N$, where $N$ is a heavy neutrino-like LLP, were obtained.
The experiments at LEP2 employed a series of techniques targeting long-lived sleptons as motivated in GMSB models~\cite{lep2:gmsbsleptons}. An array of prompt and long-lived techniques, including displaced lepton tracks, kinked tracks, and CLLP signatures, was used to exclude long-lived sleptons in a statistical combination of the four main LEP2 experiments. Over a wide range of lifetimes, $\mathcal{O}(10^{-3}-10^{3})$~ns, selectrons below about $66$~GeV were excluded, with limits strengthening up to about $90$~GeV at longer lifetimes. Smuons below about $96$~GeV were excluded across this same lifetime range. Long-lived staus were also excluded below about $87$~GeV for lifetimes around $\mathcal{O}(10^{-3})$~ns, with limits strengthening to approximately $97$~GeV at larger lifetimes. The short-lifetime region in particular remains relatively unexplored at the LHC.
\paragraph{Searches at LHC.}
The CMS collaboration at LHC has searched for resonances that decay into two long-lived particles, each decaying into a pair of leptons~\cite{Chatrchyan:2012jna}. The analysis was performed with $5.1~{\rm fb}^{-1}$ of $\sqrt{s}=7~\tev$ data. An event was required to contain two DVs, each reconstructed from two opposite-charge leptons. Each DV was required to satisfy a collinearity-angle requirement and to have invariant mass greater than $15~\gev$. The DV radial position had to satisfy $\rho_{\rm DV}/\sigma_{\rho_{\rm DV}}>8$ for electrons and $\rho_{\rm DV}/\sigma_{\rho_{\rm DV}}>5$ for muons. A smooth-function fit to the $\rho_{\rm DV}/\sigma_{\rho_{\rm DV}}$ distribution of simulated background events was used to estimate the background yield. The background estimates were $0.02^{+0.09}_{-0.02}$ in the muon channel and $1.4^{+1.8}_{-1.2}$ in the electron channel, consistent with the observed yields in data of 0 and 4 events, respectively. Limits were extracted on the production cross section of a heavy scalar times the branching fractions for its decay into two LLPs, each decaying into a lepton pair. The limits were calculated for several benchmark values of the scalar and LLP masses.
CMS refined this technique in a $\sqrt{s}=8~\tev$, $20.5~{\rm fb}^{-1}$ search for events that may contain only one LLP, which decays into a pair of electrons or muons~\cite{CMS:2014hka}. With higher background expected in the case of a single-DV search, the leptons were required to satisfy a tighter displacement requirement, $|d_0|/\sigma_{d_0}>12$. The sample of events with negative values of the transverse collinearity angle $\phi_{\rm col}$ was used as a signal-free control sample with which the $|d_0|/\sigma_{d_0}$ distribution of background events was modeled. Zero background events were expected, and no events were observed. The method was validated in simulated events and with data events having $|d_0|/\sigma_{d_0}<4.5$. Limits on the branching fractions of scalars to two LLPs were set.
In a separate study~\cite{Khachatryan:2014mea}, CMS used $19.7~{\rm fb}^{-1}$ of $\sqrt{s}=8~\tev$ data to search for DVs composed of an electron and a muon. Several lepton-$d_0$ requirements, ranging from $0.02~\cm$ to $0.1~\cm$, were used to define different signal regions. A data-driven method was used to estimate the background yield from heavy-flavor decays, while other background sources were determined from simulation. Background predictions ranged from 18 events to a fraction of an event, depending on the signal region, and were consistent with the observed numbers of events. Limits were extracted in the context of a supersymmetric model, with long-lived stop squarks with proper decay distances in the range $0.02<c\tau_{\tilde t} <100~\cm$.
Using $20.3~{\rm fb}^{-1}$ of $\sqrt{s}=8~\tev$ data, ATLAS searched for events with a DV composed of two leptons~\cite{Aad:2015rba}. Muon and electron candidates were required to satisfy $|d_0|>2~\mm$ and $|d_0|>2.5~\mm$, respectively. The DV position was required to be in the radial range $4<\rho_{\rm DV}<300~\mm$, and the DV invariant mass had to satisfy $m_{\rm DV}>10~\gev$. The dominant background was determined to arise from accidental crossing of prompt leptons, and was evaluated by vertex-fitting leptons from different events. The expected background yield was of order $10^{-3}$ events. No events were observed, and limits were calculated in the context of supersymmetry with gauge mediation or R-parity violation.
ATLAS and CMS have conducted a number of searches involving LLPs that decay hadronically or to a combination of hadrons and leptons, which we describe below.
Ref.~\cite{Aad:2015rba} reports an ATLAS search for events with a DV formed from at least 5 tracks, which may be hadrons or leptons. The analysis, performed with $20.3~{\rm fb}^{-1}$ of $\sqrt{s}=8~\tev$ data, is a refinement of earlier searches performed with smaller data samples~\cite{Aad:2011zb, Aad:2012zx}.
In particular, starting with Ref.~\cite{Aad:2012zx} ATLAS began reconstructing tracks with impact parameter as large as $300~\mm$ for its ID-based LLP searches~\cite{Lutz:2018gir,ATL-PHYS-PUB-2017-014}. This greatly increased the efficiency for highly displaced tracks relative to that of the standard track reconstruction, which required $|d_0|<10~\mm$.
Tracks were required to satisfy $|d_0|>2~\mm$, and DV candidates had to satisfy $4<\rho_{\rm DV}<300~\mm$ and $m_{\rm DV}>10~\gev$. Events were triggered by requiring a high-$p_T$ muon, electron, jets, or large MET, resulting in four different signatures. The dominant background was determined to arise when a high-$p_T$ track accidentally passes close to the position of a low-mass, low-multiplicity DV, typically originating from material interactions. The background level was obtained by combining DVs with tracks from other events. Background from accidental combination of nearby, low-mass DVs was determined to be subdominant. The total background estimate was between about $10^{-3}$ events for the muon-trigger signature and 0.4 events for the jet-trigger signature. No events were seen, and limits were placed on scenarios in the context of Split-SUSY, GMSB, and R-parity violation.
An improved version of the MET analysis was carried out with $32.8~{\rm fb}^{-1}$ of $\sqrt{s}=13~\tev$ data~\cite{Aaboud:2017iio}. The expected background yield was determined to be of order $10^{-2}$ events. No events were observed, and limits were calculated for a Split-SUSY scenario.
CMS has searched for a signature of two jets that originate from a DV in $18.5~{\rm fb}^{-1}$ of $\sqrt{s}=8~\tev$ data~\cite{CMS:2014wda}. A DV was formed from at least 4 tracks, with at least one track from each of the candidate jets. The DV radial distance $\rho_{\rm DV}$ was required to be at least 8 times greater than its uncertainty $\sigma_{\rho_{\rm DV}}$. The invariant mass of the DV tracks and their combined transverse momentum were required to satisfy $m_{\rm DV}>4~\gev$ and $p^T_{\rm DV}>8~\gev$. The final selection of events in two signal regions was based on the number of prompt tracks (defined as those with $|d_0|<500~\mum$) in the jets, the fractions of jet energies carried by these tracks, the number of displaced tracks in the jets, the number of tracks consistent with originating from a point along the direction defined by the dijet momentum vector, and the signed impact parameters of these tracks relative to this vector. Inverting some of the criteria resulted in 8 categories of events. Ratios between the numbers of events in the different categories were used for the final background prediction and validation. The observed data yield was 2 events in one of the signal regions and 1 event in the other, consistent with the background expectation. Limits were calculated for a long-lived scalar model and for supersymmetry with R-parity violation.
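Background predictions built from ratios between event categories are common in such searches. As a simplified illustration (a four-region ABCD-style estimate with made-up yields, not the eight CMS categories), if two selection criteria are uncorrelated for background, the signal-region yield can be predicted from the three control regions:

```python
# ABCD-style background estimate (illustrative, assumed setup):
# regions B, C, D are defined by inverting one or both selection
# criteria; for uncorrelated criteria, N_A = N_B * N_C / N_D.
def abcd_background(n_b, n_c, n_d):
    if n_d == 0:
        raise ValueError("control region D is empty")
    return n_b * n_c / n_d

# hypothetical control-region yields:
print(abcd_background(40, 30, 600))  # -> 2.0 expected background events
```

The same ratio logic extends to more categories, with the extra regions serving to validate the assumed factorization.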
This study was refined with a $2.6~{\rm fb}^{-1}$, $\sqrt{s}=13~\tev$ sample~\cite{Sirunyan:2017jdo}. Displaced jet candidates were selected based on the $|d_0|/\sigma_{d_0}$ of the tracks, the angle between each jet track's transverse-momentum vector and the transverse-plane line between the PV and the position of the track's lowest-radius detector hit, and the relative energy of jet tracks originating from the PV. Events were required to have two jets identified as displaced. The background was estimated from events with one displaced jet, by parameterizing the probability for misidentifying a jet as displaced as a function of the jet track multiplicity. The procedure was validated with simulated multi-jet events. The expected background yield was 1 event, and 1 event was observed in the data sample. Limits were calculated for a model in which a pair of long-lived scalars is produced by a new vector boson, and for R-parity-violating decays of a long-lived stop squark into a $b$ quark and a lepton.
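A misidentification-rate estimate of this kind can be sketched as follows. The binning and rate values below are hypothetical (the actual CMS parameterization is not reproduced here); the point is that a per-jet rate, measured as a function of track multiplicity, is applied as a weight to untagged jets in single-tag events:

```python
# Hypothetical per-jet probabilities for misidentifying a jet as
# displaced, binned in jet track multiplicity (assumed values):
rate = {2: 1e-4, 3: 5e-5, 4: 2e-5}

def predicted_extra_tags(untagged_jet_ntracks):
    """Expected number of additional (fake) displaced tags, obtained by
    weighting each untagged jet by its misidentification rate."""
    return sum(rate[n] for n in untagged_jet_ntracks)

# one single-tag event whose untagged jets have 2, 3 and 4 tracks:
print(predicted_extra_tags([2, 3, 4]))  # ~1.7e-4 expected fake tags
```

Summing this weight over all single-tag events yields the predicted two-tag background.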
CMS has searched for
long-lived particles giving rise to a DV signature in events containing at least four hadronic jets~\cite{Sirunyan:2018pwn}.
The analysis, which was an improvement over the search reported in Ref.~\cite{Khachatryan:2016unx}, used $38.5~{\rm fb}^{-1}$ of $\sqrt{s}=13~\tev$ data.
Events were required to have at least 2 DVs, each containing at least 5 tracks satisfying the impact-parameter requirement $|d_0|/\sigma_{d_0}>4$. The DV radial position was required to be in the range $0.1<r_{\rm DV}<20~\mm$. This small range resulted in limited sensitivity to long lifetimes relative to the sensitivity of other searches. However, it allowed the analysis to do away with the need to model the detector material. The background and potential signal yields were determined from a fit to the distribution of the distance $d_{\rm 2DV}$ between the two DVs that had the largest numbers of tracks or largest masses. In the fit, the $d_{\rm 2DV}$ distribution of the background was modeled from data using DVs taken from different single-DV events. One event was observed in the data, with a $d_{\rm 2DV}$ value consistent with background. Upper limits were extracted on the cross section for production of pairs of neutralinos, gluinos, or stop squarks that decay into multijet final states.
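The event-mixing idea behind such a background template can be illustrated with a short sketch (the vertex positions are made up; the actual CMS procedure is more involved): pairing DVs drawn from different single-DV events removes any same-event correlation, leaving a background-like $d_{\rm 2DV}$ distribution.

```python
# Event-mixing sketch: build a background template for the inter-vertex
# distance d_2DV by pairing DVs from *different* single-DV events.
from itertools import combinations
import math

# hypothetical DV positions (x, y, z in mm), one per single-DV event:
single_dv_events = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 2.0)]

# all cross-event pairings and their 3D separations:
template = [math.dist(a, b) for a, b in combinations(single_dv_events, 2)]
print([round(d, 3) for d in template])
```

The resulting list of distances, suitably binned, plays the role of the background $d_{\rm 2DV}$ shape in the fit.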
ATLAS has searched for pair-produced LLPs that decay in the ID or the muon spectrometer~\cite{Aad:2015uaa}. We describe the analysis in Sec.~\ref{sec:ms-based}.
\paragraph{High-energy Searches at LHCb.}
The first LHCb LLP search~\cite{Aaij:2014nma} involved a displaced dijet
signature, and used $0.62~{\rm fb}^{-1}$ at $\sqrt{s}=7~\tev$. We report on
the updated analysis~\cite{Aaij:2017mic}, performed with $2.0~{\rm fb}^{-1}$
at $\sqrt{s}=7$ and $8~\tev$. The search was sensitive to DVs with
$\rho_{\rm DV}<30~\mm$ and $z_{\rm DV}<200~\mm$. Events were required
to have two jets containing tracks associated with a DV and having
total momentum consistent with the direction of the DV relative to the
PV. The DV had to satisfy $\rho_{\rm DV}>0.4~\mm$, as well as
$\rho_{\rm DV}$-dependent requirements on the number of tracks and
invariant mass.
The final analysis step was a fit to the invariant-mass
spectrum. The spectrum of the dominant background, which arose from
heavy-flavor decays or material interactions, was modeled with an
analytic function. The spectrum of SM dijet background was modeled from
events with large angular separation between the jets. No significant signal contribution was found in the fit, and limits were calculated for SM
Higgs decays into two long-lived scalars, each decaying into a $q\bar
q$ pair.
An LHCb search based on two DVs in the same event and without an
associated jet requirement is reported in Ref.~\cite{Aaij:2016isa}. The analysis used $0.62~{\rm fb}^{-1}$ of data collected at $\sqrt{s}=7~\tev$. Each DV
was required to be composed of a minimum number of tracks, have a mass $m_{\rm DV}>6~\gev$, and have a radial displacement of $\rho_{\rm DV}>0.4~\mm$. A fit to the combined invariant mass of the two DVs was
used to test for the presence of signal. The background distribution was
obtained from control-region data events satisfying loose cuts. The
method was validated using simulated events and data validation
regions. No significant signal was detected, and limits were calculated
for benchmark models of a scalar that decays into two long-lived
fermions.
In Ref.~\cite{Aaij:2016xmb}, LHCb reports a search for a LLP that
decays to a muon and hadrons, using $3~{\rm fb}^{-1}$ of $\sqrt{s}=7$ and
$8~\tev$ data. The DV was required to have at least 4 tracks,
including the muon, and satisfy $\rho_{\rm DV}>0.55~\mm$ and $m_{\rm
DV}>4.5~\gev$. A multivariate discriminator, calculated from the
muon $p_T$, the number of tracks in the DV, $\rho_{\rm DV}$, and the
uncertainties on the DV position, was used to further suppress
background, which was dominated by $b\bar b$ production.
A fit to the $m_{\rm DV}$ distribution was used to search for signal. The $m_{\rm DV}$ distribution of background events was obtained from a sample of events in which only loose isolation criteria were applied to the muon, which was fit simultaneously with the distribution of the signal-region sample. The resulting signal yield was consistent with zero, and
limits on single- and pair-production of neutralino in RPV scenarios
were calculated.
\paragraph{GeV-scale Searches at LHCb and $\boldsymbol{e^+e^-}$ $\boldsymbol{B}$ factories.}
The final searches described in this section are aimed at the case of
a LLP with mass of up to 10~\gev, searched for by LHCb or in $e^+e^-$ colliders running at the $\Upsilon$ energy range.
LHCb has searched for a long-lived dark photon that decays via $A'\to \mu^+\mu^-$ in $1.6~{\rm fb}^{-1}$ of $\sqrt{s}=13~\tev$ data~\cite{Aaij:2017rft}. Each muon was required to be inconsistent with originating from the PV. Consistency with the PV was required, however, for the $A'$ candidate trajectory, obtained from the dimuon DV and momentum vector. Background from photon conversions in material was reduced to a negligible level by excluding material regions, which were mapped out with hadronic interactions. The dimuon invariant-mass spectrum was fit with a smooth background model plus a signal peak function, which was moved throughout the fit range to scan for signal. No significant signal was observed, and small regions of parameter space were excluded. The excluded regions covered values of the mixing parameter $\epsilon^2$ in the range $4\times 10^{-10}$ to $2\times 10^{-9}$ for several dark-photon mass values between about 220 and 320~MeV. The low mass values reflect the high boost needed for observation of a long-lived $A'$ with $\epsilon^2$ large enough for significant production cross section.
A scalar LLP that decays to a $\mu^+ \mu^-$ pair was searched for by
LHCb in the penguin decays $B^+\to K^+ \mu^+
\mu^-$~\cite{Aaij:2016qsm} and $B^0\to K^{*0} \mu^+
\mu^-$~\cite{Aaij:2015tna}, with $K^{*0}\to K^+\pi^-$. The analyses
used $3~{\rm fb}^{-1}$ of $\sqrt{s}=7$ and $8~\tev$ data. $B$-meson
candidates were identified based on their invariant mass and a
multivariate discriminator designed to suppress non-$B$
background. Specific peaking backgrounds, such as those involving
$B\to K^{(*)}V$, where $V=\omega,\, \phi$ and $\psi$ vector-mesons,
were removed with cuts on the $\mu^+ \mu^-$ invariant-mass
$m_{\mu^+\mu^-}$. The lifetime of the dimuon DV was required to be 3
times larger than its estimated uncertainty. A fit to the
$m_{\mu^+\mu^-}$ distribution was used to test for the presence of signal as
a function of the LLP mass. In the fit, the background was modeled
as an exponential, and a signal-peak component was moved throughout the
mass range in small steps to scan for a signal peak. Limits on the
branching fractions as a function of the scalar LLP mass were
extracted.
LHCb has also searched for a LLP as part of a search for the
lepton-number-violating decay $B^- \to \pi^+ \mu^-
\mu^-$~\cite{Aaij:2014aba}. The search was performed with $3~{\rm fb}^{-1}$ of data collected at $\sqrt{s}=7$ and $8~\tev$. A DV was reconstructed from the pion and one of
the muons, forming a heavy, neutrino-like LLP candidate. The distance from the
DV to the PV was required to be 10 times larger than its uncertainty.
The level of background from specific $B$ decays was obtained by fully
reconstructing these decays, and combinatorial background from random
track combinations was estimated by fitting the $B$-candidate
invariant-mass distribution outside the invariant-mass signal region.
The event yield was consistent with the expected background, which was at the level of a few tens of events. Limits on the branching fraction of the process were extracted as a
function of the LLP mass.
A neutrino-like LLP has also been searched for by the Belle
experiment, using $711~{\rm fb}^{-1}$ of $\sqrt{s}=10.59~\gev$ $e^+e^-$
collisions produced by the KEKB collider at KEK~\cite{Liventsev:2013zz}. The LLP was assumed
to be produced in semileptonic $B$-meson decays along with a lepton
(an electron or muon), as well as a hadronic state that was not
reconstructed, and to decay to a pion and a lepton. Thus, the
observed final state was two leptons, which were allowed to be of the
same charge, and a pion. A DV, formed from the pion and one of the
leptons, was required to satisfy different displacement and
collinearity requirements depending on its location and detector hits
associated with the daughter tracks. The observed yield was consistent
with the background expectation of a few events, obtained from simulation. Limits were extracted
on the electron and muon couplings of the neutrino-like LLP as a
function of its mass.
The BABAR experiment at the PEP-II $e^+e^-$ collider at SLAC searched
for a LLP that decays into any of the combinations $e^+e^-$,
$\mu^+\mu^-$, $e^\pm \mu^\mp$, $\pi^+\pi^-$, $K^+K^-$ or $K^\pm
\pi^\mp$~\cite{Lees:2015rxq}. The analysis used $448~{\rm fb}^{-1}$ of data
collected at and just below the $\Upsilon(4S)$ resonance, and
$42~{\rm fb}^{-1}$ collected at the $\Upsilon(2S)$ and $\Upsilon(3S)$
resonances. The two samples were analyzed separately, as they may
involve different LLP production mechanisms. However, no assumption
was made regarding the production mechanism. The DV was required to
satisfy a collinearity requirement and to be positioned within $1 <
\rho_{\rm DV} < 50~\cm$. The $m_{\rm DV}$ distribution was fit to a
spline representing the background component, and a signal mass-peak was
used to scan the $m_{\rm DV}$ range for a signal contribution. No
significant signal was seen, and limits were extracted in a
production-mechanism-independent way as well as for a scalar LLP produced
in penguin $B$ decays.
\subsubsection{Searches Based on Calorimeter Signatures}
\label{sec:calo-based}
The D0 collaboration has searched for a LLP that decays into two
photons or electrons observed in the electromagnetic
calorimeter~\cite{Abazov:2008zm}. The analysis used $1.1~{\rm fb}^{-1}$ of
$p\bar p$ collision data. The photon candidates were required to have
transverse energies of at least 20~\gev. The segmentation of the
calorimeter in the radial direction provided 5 measurements along the
electromagnetic shower, from which the direction of the photon
momentum was obtained. The directions of the two photons were used to
extract a common vertex for their origin. For signal events, the
direction from the PV to the vertex was expected to be consistent with
the momentum direction of the diphoton candidate. Events for which the vertex
position was in the opposite direction were used to model the
background distribution. The background expectation was a few tens of events, consistent with the observed yield. Limits were extracted on the cross section
for production of a LLP times the branching fraction of its decay into
two electrons as a function of its lifetime, as well as on the mass
vs. lifetime of a long-lived fourth generation quark.
ATLAS has searched for LLP decays into photons that originate away
from the PV~\cite{Aad:2014gfa}. The analysis, which used $20.3~{\rm fb}^{-1}$
of $\sqrt{s}=8~\tev$ data, was an improvement of an earlier,
$4.8~{\rm fb}^{-1}$ search at $\sqrt{s}=7~\tev$~\cite{Aad:2013oua}. Selected
events were required to have two photons with transverse energies of
25 and $35~\gev$, as well as total missing transverse energy of at
least 75~\gev. The longitudinal displacement $z_0$ of one of the
photons relative to the PV was measured by exploiting the segmentation of the calorimeter in the
radial and pseudorapidity directions in order to measure the direction of the
photon momentum. The arrival time $t$ of this
photon in the calorimeter, relative to that expected for a photon
originating from the PV, was also used to detect whether it originated
from the decay of a slow-moving LLP. The $t$ distributions of events
in several $z_0$ bins were simultaneously fit to obtain the background and possible signal yields in each bin. This approach exploited the relative
independence of the $t$ distribution on the background composition.
Background was studied using $Z\to e^+e^-$ events and events with low
missing transverse energy. The signal region contained 386 events. No signal was observed over the background expectation, and limits were calculated for a supersymmetry model with gauge mediation.
ATLAS has also searched for LLPs that decay within the calorimeter
system~\cite{Aad:2015asa}. The ``CalRatio'' technique used in this
search exploited the broad segmentation of the calorimeter into an
inner electromagnetic calorimeter and an outer hadronic calorimeter.
Events were required to have two energetic jets. The jets were
identified with the calorimeters and were required to be isolated from
charged tracks. Each jet was also required to satisfy $\log_{10}
(E_{\rm H}/E_{\rm EM})<1.2$, where $E_{\rm H}$ and $E_{\rm EM}$ are
the energies deposited in the hadronic calorimeter and electromagnetic
calorimeter, respectively. A data sample containing two back-to-back
jets was used to estimate the multijet background. Only one jet was
required to pass the $\log_{10} (E_{\rm H}/E_{\rm EM})$ cut, and was used
to evaluate the probabilities for passing the trigger and jet-energy
cuts. Fits to these probabilities as functions of jet energy were used
to estimate the multijet background. A smaller level of background
from cosmic rays was estimated with events triggered outside of
beam-crossing times. Background from beam-halo muons that underwent
hard bremsstrahlung in the calorimeter was suppressed with
timing cuts. Using events triggered when only one beam passed through
the detector, the level of this background was determined to be
negligible. The observed yield of 24 events was consistent with the expected background. Limits were extracted for a scalar boson that decays into two LLPs.
ATLAS has searched for ``lepton jets'' from a LLP decay in the calorimeter or
the MS~\cite{Aad:2012kw,Aad:2014yea}. We report on these searches in
Sec.~\ref{sec:ms-based}.
\subsubsection{Searches Based on Muon System Signatures}
\label{sec:ms-based}
DELPHI has searched for a LLP creating a narrow cluster of hits in the MS or the HCAL, using a sample of $3.3\times 10^6$ hadronic $Z$ decays~\cite{Abreu:1996pa}. The HCAL energy deposition was required to be consistent with that of a hadronic shower rather than a muon, while MS hits were required to point back to the IP to within 40~cm. Events were required to have no more than 3 tracks, all starting at a radius of at least 12~cm from the IP.
Background from diphoton and dilepton events was rejected by exploiting their back-to-back topology. No events survived the final selection, and limits were extracted on the branching fraction of the decay $Z\to \nu N$, where $N$ is a neutrino-like LLP.
Using $1.9~{\rm fb}^{-1}$ of $\sqrt{s}=7~\tev$ data, ATLAS has searched for
``lepton jets'' from the decay of a long-lived hidden-sector photon,
identified in the MS or the
calorimeter~\cite{Aad:2012kw,Aad:2014yea}. We describe only
Ref.~\cite{Aad:2014yea}, which used more final states.
A lepton jet was defined as two or more muons, two calorimeter
clusters consistent with electrons or hadronic jets, or two muons and
a calorimeter cluster, with the objects fitting within a narrow
cone. Lepton-jet candidates were required to be isolated from ID tracks. An event
was required to contain two lepton jets with an azimuthal separation
of $\Delta\phi>1$. Background from cosmic rays was estimated from ``empty''
bunch crossings, when no $pp$ collisions occur. Background from multijet events was estimated from
sidebands of the $\Delta\phi$ and $\sum p_T$ signal region. The
observed yield was consistent with the background expectation of several tens of events. Limits
were computed for a model in which the Higgs boson decays into two or
four hidden-sector photons and two stable hidden-sector particles.
ATLAS has searched for pair-produced LLPs, each giving rise to a DV in
either the ID or the MS~\cite{Aad:2015uaa}. The
analysis used $20.3~{\rm fb}^{-1}$ of $\sqrt{s}=8~\tev$ data. Events were
required to contain two DVs. DVs in the ID were formed from at
least 5 or 7 tracks (depending on the trigger), each satisfying
$10<|d_0|<500~\mm$. These DVs were further required to have a nearby
jet, potentially originating from the LLP decay products. DVs in the
MS were intended to find LLP decays that occur outside the
calorimeter, where background arises due to jets with some particles
that punch through the calorimeter. Therefore, such DVs were required
to have MS tracks with a large number of hits, and to be isolated from
tracks in the ID and from jets in the calorimeter.
The dominant background was determined to originate from multijet
events. Its level was estimated from the probability of a jet to form
a DV, obtained from data events selected with trigger criteria that
were different from those of the signal region. Two events were found
in the signal region, consistent with the background
expectation. Limits were extracted for LLPs produced in scenarios of
stealth supersymmetry, hidden-valley, and decays of the Higgs boson or
other scalars.
ATLAS has searched for displaced $\mu^+\mu^-$ pairs identified only in
the MS~\cite{Aaboud:2018jbr}, thus providing sensitivity
to LLP decays occurring as far as the outer edge of the
calorimeter. The analysis used $32.9~{\rm fb}^{-1}$ of $\sqrt{s}=13~\tev$
data. Muon candidates were required to be isolated from jets and from
tracks, and to not have a corresponding track in the ID. Each
muon was linearly extrapolated backwards, and the midpoint along the
shortest line between the two extrapolations was taken as the dimuon
DV. Angular cuts were used to remove cosmic-ray and beam-halo
background. The level of background was estimated by studying events
in which one or both muons did have a corresponding track, events with
muons that failed the isolation cuts, and events in which the two
muons had the same electric charge. The observed yield was consistent
with the background expectation of 14 events in one signal region and about 1 event in the other. Limits on GMSB and Higgs decays to two long-lived dark photons were
extracted.
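The vertexing step described above reduces to a standard geometry problem: finding the midpoint of the shortest segment connecting two straight lines. A minimal sketch of that computation (an illustration only, not the experiment's actual reconstruction code) could look like:

```python
import numpy as np

def dimuon_vertex(p1, d1, p2, d2):
    """Midpoint of the shortest segment between the lines
    r1(s) = p1 + s*d1 and r2(t) = p2 + t*d2, here standing in for
    two backward-extrapolated muon trajectories."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b  # vanishes only for parallel lines
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

# Two skew lines whose points of closest approach are (0,0,0) and (0,0,1):
vtx = dimuon_vertex((0, 0, 0), (1, 0, 0), (0, 1, 1), (0, 1, 0))
# vtx -> [0, 0, 0.5]
```

In a real analysis, the angular and timing cuts against cosmic rays and beam halo would then be applied to the resulting vertex and the muon directions.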
\subsubsection{Out-of-Time Decays of Particles Stopped in Detectors}
\label{sec:stoppedllps}
In the case that a LLP has a long lifetime ($\tau\gtrsim10$~ns) and interacts with SM particles via strong or electromagnetic interactions, such a particle can lose momentum to interactions with dense detector material. If it loses enough momentum via nuclear or electromagnetic interactions, it can come to rest within the detector volume. If the particle is not stable, it may decay well outside of the detector trigger and readout timing windows for the collision in which it was produced. Such a decay can give rise to significant detector activity, especially in the calorimeter system, in a pair of RF buckets that are not filled in the collider. Several searches for stopped particles have been performed by the D0 \cite{PhysRevLett.99.131801}, CMS \cite{Chatrchyan:1458954,Khachatryan:2015jha,Sirunyan:2017sbs}, and ATLAS \cite{Aad:2013gva,Aad:2012zn} collaborations, looking for calorimeter and MS activity in empty bunch crossings, where no collision backgrounds are expected.
Such searches require a careful understanding of the bunch train structure of the collider (in these cases, the Tevatron and the LHC) as well as non-collision backgrounds such as beam halo, cosmic rays, and detector noise. Due to the unique nature of the search, these analyses are able to set limits on LLPs with lifetimes ranging from $100$~ns to years and beyond. In most cases, these searches for out-of-time decays of stopped particles have significant overlap in sensitivity with direct detection searches, as the particle is expected to have passed through some portion of the detector before coming to a stop.
For Run-1 of the LHC, ATLAS searched for out-of-time calorimeter activity with dedicated triggers \cite{Aad:2013gva}. For gluino and squark $R$-hadron models, between about $5\%$ and $12\%$ of $R$-hadrons will come to rest within the ATLAS detector if sufficiently long-lived. This range represents the spread across $R$-hadron species and interaction models. This particular search uses two signal regions with two different requirements on the momentum scale of jets reconstructed in the calorimeters, at $100$~GeV and $300$~GeV. The different regions are sensitive to different portions of signal parameter space where the latter is more sensitive provided the decay products have large enough momentum. With these requirements, as well as additional vetoes on cosmic muons, detector noise, and beam background, no significant deviation is observed from the expected signal region yields. The $100$~GeV and $300$~GeV signal regions expect $6.4\pm2.9$ and $2.9\pm2.4$ events and observe $5$ and $0$ events, respectively. In Run-1, CMS performed similar searches~\cite{Khachatryan:2015jha,Chatrchyan:1458954} utilizing a dedicated background sample from data-taking runs performed before the first LHC collisions. Both experiments have also analyzed post-beam-dump data for additional sensitivity for decays that may happen minutes after there are no more beam backgrounds.
Building upon their own Run-1 LHC results, CMS also performed a similar search using Run-2 data recorded in 2015 and 2016, looking for out-of-time decays with the calorimeter and MS \cite{Sirunyan:2017sbs}. For the calorimeter-based search, signal regions were optimized for different lifetime ranges, with background expectations ranging from about $0$ to $11$ events; no significant deviation from these expectations was observed in data. For the search channel using the MS, the same procedure was performed, with expected background levels of $0$ to about $0.5$ events; no events were observed in any region, consistent with SM expectations.
\section{Introduction}
\label{sec:intro}
The Standard Model of particle physics (SM) is a mathematically elegant theory
that describes fundamental physics and provides high-precision predictions consistent with decades of experimental studies.
Nonetheless, it has several important shortcomings that are of primary interest for current research in the field. Of particular relevance to the research reported here is the fact that the SM offers no explanation for the gauge hierarchy problem, the existence of dark matter, the baryon asymmetry of the universe, and the origin of neutrino masses~\cite{Tanabashi:2018oca}.
To address these inadequacies, many theories and models beyond the Standard Model (BSM) have been proposed. These generically predict new particles, in addition to those of the SM, that are in many cases observable at particle colliders. However, despite decades of searches, direct evidence for BSM particles has not been seen. This situation has resulted in the development of new ideas and methods, both theoretical and experimental, that push the search for BSM physics beyond previously studied regimes. One such frontier involves new particles with long lifetimes. This review summarizes developments and results from searches with collider experiments for long-lived particles (LLPs) that can be detected through
\begin{itemize}
\vspace{-0.2cm}
\item their direct interactions with the detector, or\vspace{-0.2cm}
\item their decay, occurring at a discernible distance from their production point. \vspace{-0.2cm}
\end{itemize}
Collider searches for BSM phenomena motivated by the problems of the SM have largely assumed that decays of new particles occur quickly enough that they appear prompt. This expectation has impacted the design of the detectors, as well as the reconstruction and identification techniques and algorithms. However, there are several mechanisms by which particles may be \emph{metastable} or even stable, with decay lengths that are significantly larger than the spatial resolution of a detector at a modern collider, or larger than even the scale of the entire detector. The impact of such mechanisms can be seen in the wide range of lifetimes of the particles of the SM, some of which are highlighted in Fig.~\ref{fig:smllpsummary}. Decay-suppression mechanisms are also at play in a variety of BSM scenarios. Thus, it is possible that BSM particles directly accessible to experimental study are long-lived, and that exploiting such signatures could lead to their discovery in collider data.
\begin{figure}[b!]
\centering
\includegraphics[width=4.5in]{figures/SMLLPSummary}
\caption{The SM contains a large number of metastable particles. A selection of the SM particle spectrum is shown as a function of mass and proper lifetime. Shaded regions roughly represent the detector-prompt and detector-stable regions of lifetime space, for a particle moving at close to the speed of light.}
\label{fig:smllpsummary}
\end{figure}
The realization that LLPs are a crucial part of the BSM collider search program has led to development of theoretical models that give rise to LLPs, reconstruction techniques exploiting their signatures, and experimental searches aiming to discover LLPs at previous accelerator facilities~\cite{Fairbairn:2006gg}.
More recently, many searches for such LLPs have been conducted by experiments at the \emph{Large Hadron Collider} (LHC)~\cite{1748-0221-3-08-S08001} at CERN. Since 2010, the LHC has been collecting data from proton-proton collisions at four primary experiments: ATLAS~\cite{1748-0221-3-08-S08003}, CMS~\cite{1748-0221-3-08-S08004}, LHCb~\cite{1748-0221-3-08-S08005}, and ALICE~\cite{1748-0221-3-08-S08002}.
So far, ATLAS and CMS have each collected a data sample of about $150~{\rm fb}^{-1}$ at a center-of-mass energy of $\sqrt{s}=13~\tev$. Only part of this sample has already been used for LLP searches, reflecting the time required to complete such an analysis. LLP searches have also been performed at $\sqrt{s}=7$ and~$8~\tev$, with the full data samples of about $5$ and $20~{\rm fb}^{-1}$, respectively.
LHCb, which is more sensitive to low-mass particles, has searched for LLPs with $3~{\rm fb}^{-1}$ of $\sqrt{s}=7$ and $8~\tev$ data, and has yet to use the $5.7~{\rm fb}^{-1}$ of data collected at $\sqrt{s}=13~\tev$ for this purpose.
Experiments at other colliders have also searched for LLPs.
Until 2011, the CDF~\cite{cdf1,cdf2} and D0~\cite{dzero} experiments at the Tevatron at Fermi National Accelerator Laboratory~\cite{tevatron} collected a total of about $10~{\rm fb}^{-1}$ at $\sqrt{s}=1.96~\tev$~\cite{Bandurin:2014bhr}, with smaller samples at lower center-of-mass energies. Samples of up to $3.6~{\rm fb}^{-1}$ have been used for LLP searches.
The \emph{Large Electron-Positron Collider} (LEP)~\cite{lep,lep2} at CERN operated from 1989 to 2000 with four main experiments, ALEPH~\cite{aleph}, DELPHI~\cite{delphi}, OPAL~\cite{opal}, and L3~\cite{l3}. A sample of $208~{\rm pb}^{-1}$ was delivered at the $Z$ resonance mass, and a total of $785~{\rm pb}^{-1}$ were recorded at other energies, up to $\sqrt{s}=209~\gev$~\cite{Assmann:2002th}.
The $B$-factory experiments BABAR~\cite{Aubert:2001tu} and Belle~\cite{Abashian:2000cg} collected data from $e^+e^-$ collisions at the $\Upsilon(4S)$ resonance mass of $10.59~\gev$, at other $\Upsilon$ resonances, and in the continuum regions off the resonances. Operating between 1999 and 2010, the two experiments collected data samples totaling about $1600~{\rm fb}^{-1}$. The largest sample used for LLP searches was $711~{\rm fb}^{-1}$.
In many LLP search analyses performed to date, the SM backgrounds have been extremely small, sometimes much less than one event. In such cases, the search sensitivity grows roughly linearly with the integrated luminosity of the data sample. This is in contrast to background-dominated BSM searches, where sensitivity is proportional to the square root of the integrated luminosity. Therefore, LLP searches are especially attractive for high-luminosity colliders. In particular, this includes the future runs of the LHC~\cite{hl-lhc}, but also those of Belle~II~\cite{Abe:2010gxa} and proposed high-energy $e^+e^-$ facilities such as FCC-ee~\cite{Gomez-Ceballos:2013zzn}.
As the focus of this review is BSM LLP searches at particle colliders, we aim to cover the broad range of theoretical models, their experimental signatures at such facilities, and published searches pursuing them. Thus, other than an occasional mention when relevant, we do not discuss experiments at non-collider facilities or results from astrophysical observations\footnote{For a review of implications of collider-accessible LLPs on cosmology and astroparticle physics, see Ref.~\cite{Fairbairn:2006gg}.}.
Furthermore, following the definition of LLP signatures stated above, we do not include signatures without detectable features of the LLP or its decay.
Basic distance-scale definitions used throughout the review are indicated in Fig.~\ref{fig:smllpsummary}. A particle decay is considered \emph{prompt} if the distance between the particle's production and decay points is smaller than or comparable to the spatial resolution of the detector. By contrast, a distance significantly larger than the spatial resolution characterizes a \emph{displaced} decay. Depending on the relevant detector subsystem, the typical resolution scale is between tens of micrometers and tens of millimeters. The second distance scale of relevance is the typical size of the detector or relevant subsystem, ranging from about $10~\cm$ to 10~m. A particle is \emph{detector stable} if its decay typically occurs at larger distances.
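These distance scales map onto proper lifetimes through the boost of the particle. As a rough illustration (the numbers below are examples, not taken from any specific search), the mean lab-frame decay length is $d = \beta\gamma c\tau$:

```latex
% Mean lab-frame decay length for proper lifetime tau and boost beta*gamma.
% Illustrative example: beta*gamma = 3 and tau = 1 ns give d of about 0.9 m,
% i.e. a displaced decay well inside a typical collider detector.
\begin{equation*}
  d = \beta\gamma\,c\tau, \qquad
  \text{e.g. } \beta\gamma = 3,\ \tau = 1~\mathrm{ns}
  \;\Rightarrow\;
  d \approx 3 \times \left(3\times10^{8}~\mathrm{m/s}\right)
    \times \left(10^{-9}~\mathrm{s}\right)
  \approx 0.9~\mathrm{m}.
\end{equation*}
```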
In Sec.~\ref{sec:theory} we review the theoretical motivation and a variety of BSM scenarios that give rise to LLPs. The experimental methods used for identifying LLPs, which frequently give rise to non-standard signatures, are described in Sec.~\ref{sec:signatures}. The existing experimental results are summarized in Sec.~\ref{sec:searches}. In Sec.~\ref{sec:constraints} we summarize a selection of experimental constraints on theoretical scenarios. A discussion of the future outlook given planned and proposed experiments appears in Sec.~\ref{sec:future}. We end with concluding remarks in Sec.~\ref{sec:conclusions}, and a glossary of acronyms in Sec.~\ref{sec:glossary}.
\section{Summary of Model Constraints}
\label{sec:constraints}
The result space spanned by the detector signatures reported in Sec.~\ref{sec:searches} has a non-trivial mapping onto the models described in Sec.~\ref{sec:theory}. Even a particular BSM final state may give rise to a mixture of detector signatures that depend on the LLP lifetime and boost. Conversely, a given experimental study often implies limits on the parameters of a variety of models.
In this section and the figures contained therein, we summarize the current limits for a selection of LLP scenarios as a function of lifetime. When possible, we show not only the observed limits, which can be subject to statistical fluctuations, but also the limits expected for an average measurement given the sensitivity of the analysis.
Given the multi-parameter nature of the models, these limits include assumptions made by the analysts regarding the values of parameters that are not shown in the figures. %
We also note that this summary reflects a current snapshot. In particular, when comparing the sensitivities of different search methods, one should account for the different integrated luminosity and center-of-mass energy of the data used to obtain each result.
\subsection{Long-Lived \texorpdfstring{$\tilde{g}$}{g}}
In Split-SUSY (see Sec.~\ref{sec:split-susy}), long-lived gluinos hadronize to form color-singlet $R$-hadrons which, if metastable, can decay to hadronic jets and the lightest neutralino via a virtual intermediate squark. Various detector signatures are sensitive to this signal for different gluino lifetimes, as shown in Fig.~\ref{fig:gluinosummary}.
\begin{figure}[bht]
\centering
\includegraphics[width=6in]{figures/GluinoRHadronSummary}
\caption{A broad range of limits on the mass vs. lifetime of the gluino is obtained from a number of searches~\cite{ATLAS-CONF-2018-003,Sirunyan:2018vjp,Aaboud:2017iio,Aaboud:2018hdl,Sirunyan:2017sbs,Aad:2013gva,Aaboud:2016uth,Khachatryan:2016sfv}. When available, dashed lines and open circles denote the expected limits given the experimental sensitivity, while solid lines and filled circles represent the limits that were actually observed in the experiment. Circles at lifetime values labeled as ``prompt'' denote a search based on a prompt signature, rather than a long-lived one.
}
\label{fig:gluinosummary}
\end{figure}
In the small lifetime region, searches for prompt decays of gluinos set the tightest limits. Their sensitivity decreases at moderate lifetimes, as hadronic jet reconstruction breaks down due to jet-quality requirements that are optimized for prompt jets. Both the ATLAS and CMS collaborations have produced results to this effect~\cite{ATLAS-CONF-2018-003,Sirunyan:2018vjp}. If the decays predominantly occur within the ID, a striking DV signature together with significant MET allows for a very sensitive search, excluding gluino masses up to $2.4$~TeV for lifetimes around $100$~ps~\cite{Aaboud:2017iio}. At longer lifetimes, sensitivity is provided by searches for anomalous ionization, stopped particles decaying out of time, and slow-moving CLLPs~\cite{Aaboud:2018hdl,Sirunyan:2017sbs,Aad:2013gva,Aaboud:2016uth,Khachatryan:2016sfv}.
\subsection{Long-Lived \texorpdfstring{$\tilde{t}$}{t}}
Various models allow for a stop squark LSP that may decay via RPV couplings, and many searches have been performed by ATLAS and CMS for different RPV couplings. A summary of relevant limits is shown in Fig.~\ref{fig:rpvstopsummary}.
\begin{figure}[tb]
\centering
\includegraphics[width=6in]{figures/RPVStopRHadronSummary}
\caption{Limits on the mass vs. lifetime of a long-lived stop squark decaying via RPV couplings, obtained from LLP searches at LHC~\cite{Aaboud:2017opj,Sirunyan:2018rlj,Aaboud:2017nmi,CMS-PAS-EXO-17-009,Sirunyan:2018ryt,Sirunyan:2018pwn,Sirunyan:2017jdo,Khachatryan:2014mea,Sirunyan:2017sbs,Khachatryan:2015jha,Khachatryan:2016sfv,Aaboud:2016uth}. When available, dashed lines and open circles denote the expected limits while solid lines and closed circles represent the observed limits. If no LLP signature is labeled, the contours show the sensitivity from a search for prompt decays.}
\label{fig:rpvstopsummary}
\end{figure}
Prompt searches and short-lifetime reinterpretations of prompt searches have coverage up to lifetimes of roughly $100$~ps, especially for leptonic decays of the stop~\cite{Aaboud:2017opj,Sirunyan:2018rlj,Aaboud:2017nmi,CMS-PAS-EXO-17-009,Sirunyan:2018ryt}. Dedicated LLP searches provide significantly stronger limits for a range of lifetimes from about $10$~ps to $1$~ns~\cite{Sirunyan:2018pwn,Sirunyan:2017jdo,Khachatryan:2014mea,Khachatryan:2016sfv,Aaboud:2016uth}.
Searches for out-of-time decays of stopped particles are sensitive to long-lived stop squarks provided the decay products deposit sufficient energy in the calorimeter. Existing stopped particle searches have set limits in a relatively unmotivated model of long-lived stop squarks decaying via a gauge coupling to $t\tilde{\chi}$. These limits should, however, apply to other decay signatures given enough calorimeter energy deposition.
In the limit that the stop is detector-stable, CLLP searches~\cite{Sirunyan:2017sbs,Khachatryan:2015jha} have significant sensitivity excluding stop masses below about $1200$~GeV.
\subsection{AMSB SUSY}
As described in Sec.~\ref{sec:amsb}, AMSB SUSY can give rise to a small mass splitting between the lightest chargino and lightest neutralino. A summary of relevant searches is shown in Fig.~\ref{fig:amsb}.
\begin{figure}[tb]
\centering
\includegraphics[width=6in]{figures/AMSBSummary.pdf}
\caption{Limits on the chargino mass as a function of its lifetime in AMSB SUSY scenarios, obtained from LHC and LEP2 searches~\cite{ATLAS:DT8TeV,ATLAS:DT13TeV,CMS:DT13TeV,Aaboud:2018hdl,lep2:charginolowdM}.
The chargino is assumed to be largely wino-like. When available, dashed lines denote the expected limits while solid lines represent the observed limits. The limits from LEP2 use a combination of prompt analyses, CLLP searches, and radiation-based searches~\cite{lep2:charginolowdM}.}
\label{fig:amsb}
\end{figure}
The experiments at LEP2 set combined limits using multiple techniques, and exclude chargino masses up to around $100$~GeV across lifetime space~\cite{lep2:charginolowdM}.
Relying on the fact that the charged chargino daughter is too soft to be tracked, dedicated disappearing-track searches from the LHC set tighter limits, up to around $700$~GeV, for lifetimes between about $20$~ps and several hundred ns~\cite{ATLAS:DT7TeV1,ATLAS:DT7TeV2,ATLAS:DT8TeV,ATLAS:DT13TeV,CMS:DT8TeV,CMS:DT13TeV}. Searches for anomalous ionization are sensitive to longer tracks, corresponding to longer lifetimes. The Run-1 iteration of this search from ATLAS sets limits on long-lived charginos with lifetimes from $1$~ns up to the stable case. Charginos are excluded up to about $480$~GeV for the entirety of the lifetime range of $0.2$~ns to stable.
While not motivated by AMSB, models with a pure-Higgsino LSP also obtain small mass splittings between the lightest chargino and the neutralino LSP. Such models predict lifetimes of order $10$~ps. This low-lifetime region is particularly challenging to search in, as evidenced by the weak exclusion in the left side of Fig.~\ref{fig:amsb}. Charginos with lifetimes below $20$~ps with masses above about $100$~GeV remain unexcluded.
\subsection{Scalar Portal LLP Production}
Production of LLPs via a portal mechanism has been
studied in several sensitive searches. These are summarized in Fig.~\ref{fig:higgsportalsummary}, which shows the limits on the branching ratio for di-LLP production in $125$~GeV Higgs decays as a function of LLP lifetime for multiple LLP masses and decay modes.
\begin{figure}[tb]
\centering
\includegraphics[width=6in]{figures/HiggsSummary}
\caption{LLP-lifetime-dependent limits on the branching fraction for the decay $H\to XX$ of the Higgs boson into two LLPs. The LLP mass and probed decay mode, assumed to have a branching fraction of 100\%,
are indicated by $X(m_X/\mbox{GeV})\to YY$. All limits are obtained from LHC searches~\cite{Aaboud:2018iil,CMS:2014hka,Aaboud:2018jbr,Aad:2015asa,Aad:2015uaa}. When available, dashed lines denote the expected limits, while solid lines represent the observed limits. The region where the $H\rightarrow XX$ branching ratio is larger than $1$ is also shown.
The contours labeled ``$X(60)\rightarrow bb$'' show the sensitivity from a search for prompt decays.
}
\label{fig:higgsportalsummary}
\end{figure}
Different decay modes of the LLP lead to significantly different signatures, with limits having been extracted for LLP decays to light-flavor jets, $b$-quark jets, and light leptons. In the roughly $1$~ps regime, a reinterpretation of a prompt search for Higgs decays to four $b$-quarks excludes branching ratios of order $10\%$~\cite{Aaboud:2018iil}. For leptonic decays of the LLP, displaced track techniques have been used to set limits on branching ratios below $0.1\%$ for lifetimes between about $1$~ps and $1$~ns~\cite{CMS:2014hka,Aaboud:2018jbr}. Larger lifetimes have been probed by dedicated searches in the ATLAS calorimeters and MS, with unique sensitivity to hadronic branching fractions at the $1\%$ level~\cite{Aad:2015asa,Aad:2015uaa}. For LLP lifetimes below order $1$~ns with hadronic decays, branching ratios below $30\%$ remain unprobed.
\subsection{Magnetic Monopoles}
\begin{figure}[tb]
\centering
\includegraphics[width=6in]{figures/MonopoleSummary.pdf}
\caption{Magnetic monopole mass limits from ATLAS and MoEDAL searches~\cite{Aad:2015kta,Acharya:2017cio} are shown as a function of magnetic charge for various spins, under the assumption of a Drell-Yan-like pair-production mechanism. These interpretations are primarily useful for comparing experimental results, but are otherwise unreliable, as the large coupling makes perturbative calculations diverge.
}
\label{fig:monopolesummary}
\end{figure}
A summary of searches for magnetic monopoles can be found in Fig.~\ref{fig:monopolesummary} for various spin and magnetic charge assumptions. Despite the non-perturbative nature of monopole production, limits are obtained assuming Drell-Yan-like production. Results shown were obtained by searches at ATLAS~\cite{Aad:2015kta} and MoEDAL~\cite{Acharya:2017cio} with mass limits as large as $1790$~GeV for a magnetic charge of $3Q_D$ and spin $1$.
The two experiments used very different data set sizes and techniques. Nonetheless, with ATLAS limits dominating at small values of $q_m$ and the higher-charge regime being covered by MoEDAL, this current snapshot shows complementarity between the searches performed at a general-purpose detector and those at a dedicated experiment.
\section{Detector Signatures in Collider Experiments}
\label{sec:signatures}
This section discusses the methods by which collider-based particle detectors are used to search for LLPs. We begin in Sec.~\ref{sec:dir-indir} with the definition of the two types of LLP detection, direct and indirect. Typical detector subsystems and their use for SM particle detection are described in Sec.~\ref{sec:detectors}. In Sec.~\ref{sec:llp-det} we review the various detector signatures produced by LLPs of different types. In Sec.~\ref{sec:acceptance-comparison} we compare the sensitivities of the different detector subsystems for detection of a LLP that decays within them using a simple acceptance-based approach. In Sec.~\ref{sec:recoconsiderations} we discuss complications arising from differences between reconstruction of prompt and displaced particles.
\subsection{Direct and Indirect LLP Detection}
\label{sec:dir-indir}
Collider searches for LLPs can be roughly categorized into two classes based on the LLP detection method: \emph{direct} detection and \emph{indirect} detection. The direct category uses experimental signatures arising from direct interaction of the LLP with the detector. By contrast, indirect searches reconstruct the decay of the LLP to SM particles. This classification is closely related to the one used to categorize experimental searches for dark matter based on the same principle.
\subsection{Typical Detector Subsystems and Particle Detection}
\label{sec:detectors}
A typical collider experiment comprises several main detector subsystems that are used jointly to detect and measure the properties of particles produced in the collision. A schematic representation of such a generic detector is shown in Fig.~\ref{fig:genericdetector}. We note that this figure and all other schematic detector representations in this review are intended only for illustration. In particular, they do not accurately represent the relative spatial dimensions of detector subsystems or the magnetic field configurations in any specific experiment. Therefore, illustrations of charged-particle trajectories and their passage through detector subsystems are not to be understood literally.
\begin{figure}[hb]
\centering
\includegraphics[width=0.8\textwidth]{figures/GenericDetector.pdf}
\caption{A cross-sectional view of a schematic collider experiment is shown in the plane transverse to the beam direction. From the center outwards, this figure shows a generic inner tracking detector (ID), an electromagnetic calorimeter (ECAL), a hadronic calorimeter (HCAL), and a muon system (MS).}
\label{fig:genericdetector}
\end{figure}
The innermost subsystem, called the inner detector (ID), is designed to detect electrically charged particles that are long-lived enough to traverse the ID. The most common such particles from the SM are two charged leptons (the electron $e$ and the muon $\mu$) and three hadrons (the pion $\pi$, kaon $K$, and proton $p$). Regions of ionization produced by such a particle in solid-state or gaseous detector sensors are detected as spatial \emph{hits} that are fit into a trajectory, referred to as a \emph{track}. The direction and curvature of the track in a magnetic field yield the particle's momentum vector and electric charge. In some detectors, the ID is enclosed in a Cherenkov-light detector used to measure the velocity of the tracked particles. Combined with the momentum measurement in the ID, this yields the particle mass with sufficient resolution to differentiate between pions, kaons, and protons in a relevant momentum range.
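The mass determination mentioned above follows from inverting $p = \beta\gamma m$; a small numerical sketch (the values are illustrative, not a description of any detector's calibration) is:

```python
import math

def mass_from_p_beta(p, beta):
    """Invert p = beta*gamma*m to recover a particle's mass from a
    momentum measurement (tracking) combined with a velocity
    measurement (e.g. a Cherenkov detector).  p and the returned
    mass are in GeV."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    return p / (beta * gamma)

# A 1 GeV/c track measured with beta ~ 0.897 reconstructs to a mass
# close to that of a charged kaon (about 0.494 GeV):
m = mass_from_p_beta(1.0, 0.897)
```

The achievable separation between pions, kaons, and protons then depends on the momentum and velocity resolutions, which degrade at high momentum as $\beta \to 1$.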
After passing through the tracker, particles produced in the collisions typically enter an electromagnetic calorimeter (ECAL), designed to measure the energies of photons, electrons and positrons. The energy measurement exploits the properties of electromagnetic shower production via photon radiation and $e^+e^-$ pair production, resulting from the interaction of energetic particles with the ECAL material.
Hadrons deposit energy via hadronic interactions with the detector material. Since this process involves large fluctuations and a variety of energy-deposition mechanisms, precise hadron-energy measurement is achievable only at high-energy colliders, where fluctuations are effectively averaged out. In particular, high-energy quarks and gluons hadronize into a collimated spray of hadrons known as a \emph{jet}. Containing the jet requires use of a deep hadronic calorimeter (HCAL) beyond the ECAL. While a jet can be identified solely in the calorimeters, its energy is nowadays measured from a combination of the momenta of tracks in the ID and the signals integrated in the ECAL and HCAL.
Muons do not undergo hadronic interactions, and are heavy enough that they lose energy due to ionization at a low rate. Therefore, they lose only a few GeV while traversing a typical LHC-detector calorimeter. Using this property to identify them, a muon system (MS) is built outside the calorimeter. In high-energy collider detectors, the MS is usually immersed in a magnetic field in order to measure the momenta of muons. Tracks reconstructed in the MS are often combined with tracks in the ID to obtain a high-quality momentum measurement.
When studying final states that include long-lived, weakly interacting particles, such as neutrinos in the SM, an important reconstructed quantity is missing momentum. Using three-momentum conservation and the approximate hermeticity of the detector, it is possible to measure the momentum imbalance in the event and to infer the combined momentum of the invisible set of particles. Since the interacting partons in proton collisions generally carry different fractions of the momenta of the incoming hadrons and many of the particles produced fall outside of the acceptance of the sensitive detector, the summed momenta of measured final-state particles along the beam axis $z$ are not expected to cancel. Therefore, experiments at the LHC and Tevatron measure the \emph{missing transverse momentum}, denoted $E_{\mathrm{T}}^{\mathrm{miss}}$ or MET, where momentum balance is assumed only in the $x$-$y$ plane transverse to the beam direction~\cite{1748-0221-3-08-S08003,1748-0221-3-08-S08004}\footnote{Missing momentum is the primary signature for a neutral LLP that traverses the detector without decaying or interacting, which is outside the scope of this review.}.
Collider detectors are mostly designed and constructed for optimal detection of SM particles produced in the collision. However, LLPs or their decay products would also interact with and deposit energy in the detector, with characteristics that are impacted by the long lifetimes and often high masses of the LLPs. Generally, LLP detection is less efficient and measurement of LLP properties is less precise than those of SM particles, with performance degrading as particle displacement increases. Nonetheless, collider detectors have proven to be powerful instruments for LLP searches, once experimenters take these differences into account. We return to this subtlety in Sec.~\ref{sec:recoconsiderations} after describing in Sec.~\ref{sec:llp-det} the ways in which LLPs can be studied with collider detectors.
\subsection{LLP Detector Signatures}
\label{sec:llp-det}
With the detectors at hand, there are several categories of signatures that can be used for discovering LLPs and measuring their properties. Typical signatures used for direct detection are reviewed in Secs.~\ref{sec:dedx} through~\ref{sec:anomalous-tracking}, while Secs.~\ref{sec:displaced-tracks} through~\ref{sec:displaced-cal} describe the building blocks of indirect-detection searches. The use of combinations of signatures is discussed in Sec.~\ref{sec:ddcllps}.
\subsubsection{Anomalous Ionization}
\label{sec:dedx}
A detector-stable, charged LLP (CLLP) is directly detectable via the track that it forms in the ID. If the CLLP is much heavier than the proton, its speed $\beta$ will be markedly lower than that of any track-forming SM particle of the same momentum. One way to detect this is via specific ionization. The average ionization energy loss per unit distance traveled by a charged particle in material of a particular density has a $\beta$ dependence given by the Bethe-Bloch formula~\cite{Tanabashi:2018oca},
\begin{equation}
\left\langle\frac{dE}{dx}\right\rangle \sim -\frac{z^2}{\beta^2} \cdot \left[\ln \left(\frac{\beta^2}{(1-\beta^2)}\right) - \beta^2 + C \right] ,
\label{eq:bethebloch}
\end{equation}
where $C$ is a near-constant that depends on the properties of the material traversed and $z$ is the electric charge of the traversing particle.
Thus, a CLLP that is slow-moving or has charge greater than 1 can be identified via anomalously large $\left\langle\frac{dE}{dx}\right\rangle$.
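A rough numerical sketch of Eq.~(\ref{eq:bethebloch}) illustrates the effect (the value $C=10$ and the particle masses and momentum are hypothetical choices for illustration only; the overall normalization is arbitrary):

```python
import math

def rel_dedx(beta, z=1.0, C=10.0):
    """Relative <dE/dx> from the simplified Bethe-Bloch form in the text.
    C stands in for the material-dependent near-constant."""
    beta2 = beta * beta
    return (z * z / beta2) * (math.log(beta2 / (1.0 - beta2)) - beta2 + C)

def beta_from_p(p, m):
    """Speed (in units of c) of a particle of momentum p and mass m."""
    return p / math.hypot(p, m)

# A hypothetical 1 TeV CLLP with p = 400 GeV vs a muon of the same momentum
b_cllp = beta_from_p(400.0, 1000.0)  # ~0.37
b_muon = beta_from_p(400.0, 0.106)   # ~1
print(rel_dedx(b_cllp) / rel_dedx(b_muon))  # a ratio of a few: anomalous
```

The $1/\beta^2$ factor dominates for the slow CLLP, giving the anomalously large ionization used as a discriminant.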
Silicon-based and gaseous tracking detectors are routinely used to measure the charge deposition associated with a hit. Gas-based detectors used in the MS also have this capability. Calorimeters may also be used for identification of anomalous ionization relative to that of muons, although this has not yet been utilized in any collider CLLP search.
Magnetic monopoles are another example of LLPs that give rise to high specific energy loss through ionization.
As discussed in Sec.~\ref{sec:anomalous-tracking}, they follow anomalous trajectories in the magnetic field of the ID and are thus difficult to track. Nonetheless, their high $\left\langle dE/dx\right\rangle$ signature can be identified in ID tracking detectors as well as in calorimeters segmented to measure the shower development. For a more detailed review of the detector signatures expected for magnetic monopoles in the LHC experiments, and projected sensitivities for searches there, see Ref.~\cite{DeRoeck:2011aa}.
\subsubsection{Delayed Detector Signals}
\label{sec:delayed-signals}
A heavy LLP traveling at low speed relative to a SM particle of the same momentum takes more time to cover the distance from its production vertex to a distant detector subsystem, particularly the calorimeter or MS. This ``late'' arrival constitutes a unique LLP signature. Measurement of the time of flight provides a measurement of the speed of the LLP candidate and, in conjunction with its momentum measurement, gives the LLP mass.
Since the bunch spacing at newer colliders is only of order a few meters, the detectors' subsystems are designed with high timing resolution in order to associate detector signals with the correct bunch crossings. At the LHC, where the bunch crossings are typically separated by 25~ns, many detector subsystems have timing resolutions of $\mathcal{O}(1)$~ns. This resolution, combined with the sheer physical dimensions of the detectors, enables identification of slow-moving particles with high precision. This is particularly the case for the muon spectrometer systems, which have excellent timing resolution and are located at the outermost radii. In addition, the calorimeters are finely segmented and are sensitive enough to pick up modest energy deposits along the path of a charged LLP with $\mathcal{O}(1)$~ns precision.
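The time-of-flight mass measurement described above can be sketched numerically (the path length, momentum, and timing values below are hypothetical and chosen only for illustration):

```python
import math

C_M_PER_NS = 0.299792458  # speed of light in m/ns

def beta_from_tof(path_m, tof_ns):
    """Speed (in units of c) from flight path and time of flight."""
    return path_m / (tof_ns * C_M_PER_NS)

def mass_from_p_beta(p, beta):
    """Mass in the same units as p: m = p * sqrt(1 - beta^2) / beta."""
    return p * math.sqrt(1.0 - beta * beta) / beta

# Hypothetical LLP candidate with p = 500 GeV that reaches a MS station
# 10 m from the IP after ~74.6 ns; a muon would take only ~33.4 ns
beta = beta_from_tof(10.0, 74.6)       # ~0.45
print(mass_from_p_beta(500.0, beta))   # ~1000 GeV
```

Combining the ID momentum measurement with the measured speed thus yields the candidate's mass, the key quantity for distinguishing a heavy LLP from a relativistic SM particle.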
As the LHC luminosity increases, precise timing measurement of these relatively weak calorimeter signals in the presence of growing background from SM particles becomes increasingly challenging.
If a LLP is very slow, or if it is completely stopped in the detector and decays long afterwards, it generally gives rise to a detector signal that occurs after the triggering and readout time windows associated with the collision that produced the LLP. As a result, the LLP signal will not be associated to the correct trigger, and may be lost or misidentified. For such cases, a dedicated triggering technique has been developed for searches at the LHC, utilizing the possibility that the detector signal occurs during gaps in the proton bunch train. Bunch crossings at the LHC occur every $25$~ns, but some bunches are intentionally not filled and thus contain no protons. Since no collisions take place during such \emph{empty bunch crossings}, detector signals occurring at these times due to the late arrival or decay of a LLP are not masked by the presence of high collision background. Therefore, searches looking for such a LLP trigger on these \emph{out-of-time} signals, and generally have very low background levels.
\begin{figure}[tb]
\centering
\includegraphics[width=5in]{figures/DisappearingTrack.pdf}
\caption{A standard track is shown traversing the entirety of the ID. A high-momentum, disappearing track is shown at the bottom of the diagram with a missing hit in the outer region of the ID indicated with an X. In the example shown here, the charged particle is decaying to a low-momentum charged particle and a weakly interacting particle.}
\label{fig:disappearingtrack}
\end{figure}
\subsubsection{Disappearing Tracks}
\label{sec:disappearing-track}
If a CLLP lives long enough to enter deep into the ID yet decays at some point within it, the track that it forms seems to disappear midway through the ID. The identification of such \emph{disappearing tracks} or \emph{tracklets} often involves a veto on ID hits at radii that are larger than the apparent point of disappearance. This signature is particularly important when any electrically charged products of the decay are too soft to be reconstructed, so that there is no displaced-vertex signature (see Sec.~\ref{sec:DVs}).
Since disappearing tracks are necessarily short and have few hits, their momentum-measurement resolution is poorer than that of standard tracks. They are also more susceptible to combinatoric backgrounds, particularly at high-luminosity hadron colliders where the track multiplicity is high. This necessitates dedicated optimization and a trade-off between background levels and signal efficiencies, particularly at low lifetimes.
When the CLLP decay gives rise to one charged particle that is hard enough to be tracked, the combined signature is two connected tracks at an angle, known as a \emph{kinked track}. A kinked-track search uses more information than a disappearing-track search, and is thus more effective in suppressing background. However, this comes at a cost of lower efficiency and sensitivity to a more restricted parameter space.
\subsubsection{Anomalous-Trajectory Tracks}
\label{sec:anomalous-tracking}
Depending on their properties, some track-forming LLPs may bend in the $z$-oriented magnetic field of an ID differently from a charged particle of the SM.
In particular, a magnetic monopole feels a force along the direction of the magnetic field, rather than perpendicularly to it. This leads to a track that bends parabolically in the $z$ direction for most trackers in solenoidal magnetic fields. Identifying such a track requires a dedicated tracking algorithm in three dimensions, with a resulting signature that is strikingly different from that of any SM particle.
Alternatively, one can also track a magnetic monopole only in the $(x,y)$ plane perpendicular to the magnetic field. In this plane, the trajectory of a magnetic monopole that has no electric charge appears as a straight line, corresponding to an electrically charged particle of infinite momentum. Utilizing less information than 3-dimensional tracking, this approach suffers from higher background, yet is simpler to execute.
Quirks represent another example of anomalous trajectories, as discussed in Sec.~\ref{sec:neutralnaturalness}. In addition to each electrically charged quirk feeling a standard Lorentz force through the magnetic field of the ID, an additional force arises from the dark-gluon flux tube between the pair of quirks. This spring-like coupling gives rise to very complex trajectories through the detector. This again requires dedicated tracking algorithms that have not yet been used for LLP searches.
\subsubsection{Displaced Tracks}
\label{sec:displaced-tracks}
Tracks of charged particles emitted in the decay of a LLP are often measurably inconsistent with originating from the \emph{beam spot}, the spatial region where beam-particle collisions take place. Such a track is illustrated in Fig.~\ref{fig:displacedtrack}. The degree of consistency is typically determined from the track's \emph{transverse impact parameter} $d_0$. This is the shortest distance, measured in the $(x,y)$ plane transverse to the beams, between the track and the hypothesized position of the collision. This position is taken to be either the \emph{interaction point} (IP) at the center of the beam spot or the \emph{primary vertex} (PV), which is the point from which reconstructed tracks originating from the collision appear to emanate in a particular event. For the sake of simplicity, our discussion does not make a distinction between these two methods of $d_0$ calculation.
\begin{figure}[tb]
\centering
\includegraphics[width=5in]{figures/DisplacedTrack.pdf}
\caption{In addition to a standard prompt track, a displaced track with a large transverse impact parameter $d_{0}$ is shown.
}
\label{fig:displacedtrack}
\end{figure}
At hadron colliders, a PV generally exists due to the composite nature of the colliding particles. In particular, the LHC typically produces tens of PVs per proton-proton bunch crossing. In this case, the most energetic PV, measured using its tracks, is usually used for the calculation of $d_0$. Whether or not a PV exists in a particular $e^+e^-$-collider analysis depends on the LLP production mode under study.
Accounting for detector resolution and beam-spot size, a large value of the ratio between $d_0$ and its uncertainty $\sigma_{d_0}$ defines a \emph{displaced track}, \emph{i.e.} one produced far from the beam spot. Use of $d_0/\sigma_{d_0}$ takes advantage of the small transverse beam spot size, which is of order a few microns to a fraction of a millimeter in recent and current colliders, as well as the tens-of-micron resolution of the PV position measurement at modern detectors.
The corresponding impact parameter in the $z$ direction (along the beams) measured with respect to the PV, denoted $z_0$, is also sometimes used for determining whether a track is displaced. However, at colliders with long interaction regions and multiple interactions possible per beam crossing, large $z_0$ may denote that a track originated from another beam interaction. Thus, it is often less useful than $d_0$ for identifying the decay of a LLP.
Current collider detectors and their reconstruction software were designed to study prompt objects. Tracks with large values of $d_0$ or $z_0$ are often not reconstructed by default, in order to limit data-acquisition and computing resources. This leads to a loss of efficiency in the reconstruction of LLP decays far from the PV, particularly when the decaying LLP is not highly boosted. Recent searches overcome this limitation by performing dedicated reconstruction of highly displaced tracks for a small part of the data set, selected based on signatures related to the search analysis.
\subsubsection{Displaced Vertices}
\label{sec:DVs}
When several LLP-daughter tracks are detected, their common point of origin constitutes a \emph{displaced vertex} (DV), with a position $\vec r_{\rm DV}$ and corresponding covariance matrix that can be determined by a vertex-fitting algorithm. Such a DV is illustrated in Fig.~\ref{fig:displacedvertex}. Since the vertex involves several tracks, the distance $|\vec r_{\rm DV}|$ of the vertex from the IP or from the PV is determined more precisely than $d_0$ and directly represents the relevant decay length. The length of the transverse-plane projection of $\vec r_{\rm DV}$, denoted $\rho_{\rm DV}$, is also used to separate signal from prompt background, as is the ratio between $\rho_{\rm DV}$ and its uncertainty $\sigma_{\rho_{\rm DV}}$.
\begin{figure}[tb]
\centering
\includegraphics[width=5in]{figures/DisplacedVertex.pdf}
\caption{Primary vertices formed from primary tracks are shown in a generic detector. Displaced vertices in both the ID and the MS are shown.}
\label{fig:displacedvertex}
\end{figure}
Another often-used variable is the collinearity angle $\alpha_{\rm col}\equiv \cos^{-1}(\hat r_{\rm DV} \cdot \hat p_{\rm DV})$ between $\vec r_{\rm DV}$ and the sum $\vec p_{\rm DV}$ of the momentum vectors of the tracks composing the DV. The transverse collinearity angle $\phi_{\rm col}\equiv \cos^{-1}(\hat \rho_{\rm DV} \cdot \hat p^T_{\rm DV})$ is defined analogously, with the transverse-plane projections of $\vec r_{\rm DV}$ and $\vec p_{\rm DV}$.
In addition, kinematic variables of the DV can be used to suppress background. Typical variables include the invariant mass $m_{\rm DV}$ of the tracks forming the DV (often assuming the tracks have the mass of a charged pion, by convention), track multiplicity, and their combined momentum $p_{\rm DV}$ or transverse momentum $p^{\rm DV}_{\rm T}$.
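The collinearity angle and DV invariant mass defined above can be sketched as follows (a standalone illustration; the track momenta and vertex position are hypothetical values in GeV and meters, not taken from any real event):

```python
import math

PION_MASS = 0.13957  # GeV, assigned to each track by convention

def unit(v):
    """Unit vector along v."""
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def collinearity_angle(r_dv, p_dv):
    """Angle between the DV position vector and the summed track momentum."""
    cosang = sum(a * b for a, b in zip(unit(r_dv), unit(p_dv)))
    return math.acos(max(-1.0, min(1.0, cosang)))

def dv_mass(track_momenta, m=PION_MASS):
    """Invariant mass of the tracks forming the DV, each assumed a pion."""
    E = sum(math.sqrt(m * m + sum(p * p for p in t)) for t in track_momenta)
    px, py, pz = (sum(t[i] for t in track_momenta) for i in range(3))
    return math.sqrt(max(0.0, E * E - px * px - py * py - pz * pz))

# Hypothetical two-track vertex whose summed momentum points roughly
# back along the vertex position vector, as expected for a real LLP decay
tracks = [(10.0, 2.0, 1.0), (8.0, -1.0, 0.5)]
p_sum = tuple(sum(t[i] for t in tracks) for i in range(3))
print(collinearity_angle((0.2, 0.01, 0.02), p_sum), dv_mass(tracks))
```

A genuine LLP decay yields a small $\alpha_{\rm col}$, while backgrounds from random track crossings populate larger angles, which is why the variable is a powerful discriminant.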
Background from SM particles undergoing hadronic interactions with the detector material or support structures is often suppressed by rejecting DV candidates that are found in geometric volumes known to be dominated by dense material. Due to the effort required for accurate mapping of the detector material, the level of detail with which this is done varies, and generally leads to improved sensitivity as the analyses mature with time.
\begin{figure}[tb]
\centering
\includegraphics[width=5in]{figures/DisplacedCaloDeposits.pdf}
\caption{Prompt jet activity is shown along with two types of displaced calorimeter deposits, shown in yellow. One depicts the use of calorimeter segmentation to determine the pointing direction of the incoming particles, while the other shows the production of particles within the calorimeter system, leaving a relatively small amount of energy in the ECAL.}
\label{fig:displacedcalodeposits}
\end{figure}
\subsubsection{Displaced Calorimeter Deposits}
\label{sec:displaced-cal}
The precise spatial measurements obtained from the ID are usually limited to relatively small displacements of up to tens of
cm and can be performed only for charged-particle tracks. However,
detectors at high energy colliders have deep calorimeters, and these
have been used to search for LLP decays at larger distances. Calorimeters with 3-dimensional segmentation can measure the direction of an incoming particle, yielding its $d_0$ and $z_0$, which are used as handles for identifying LLPs.
These measurements are used to identify \emph{non-pointing photons} that do not originate from the IP, and to find the vertex of a multi-photon decay.
If a LLP decay occurs within the calorimeter volume itself, the longitudinal shower shapes can provide a handle on SM backgrounds. A simple shower-shape observable is the \emph{EM ratio} between the energy deposited in the ECAL and that deposited in the HCAL in an angular region that defines a jet. For a LLP decay that takes place inside the calorimeter, this ratio is anomalously low relative to that of a SM jet produced near the IP.
These signatures are illustrated in Fig.~\ref{fig:displacedcalodeposits}.
\subsubsection{Aggregate Signatures}
\label{sec:ddcllps}
While some LLPs give rise to the individual detector signatures described above, others produce multiple signatures that can be used simultaneously. In many such cases, using multiple detector subsystems allows for independent, uncorrelated handles on a single particle property, allowing for powerful rejection of SM backgrounds.
When a CLLP passes through multiple detector subsystems, the signatures described above can be used in concert. One may simultaneously look for anomalous ionization in the ID, ECAL, HCAL, and MS. If the CLLP is also slow-moving, it will give rise to a delayed time of arrival at the ECAL, HCAL, and MS. These signatures are illustrated in Fig.~\ref{fig:cllp}.
Similarly, the signature of a magnetic monopole has multi-subsystem characteristics. As mentioned above, a magnetic monopole produces an anomalously shaped track and is usually also highly ionizing, properties that can be measured in the ID and the calorimeter. In addition, if heavy enough, it is slow-moving and leads to delayed signatures.
\begin{figure}[tb]
\centering
\includegraphics[width=5in]{figures/CLLP.pdf}
\caption{A heavy, charged LLP is shown at the bottom-right of the figure traversing an example detector. Its signature would include anomalously high levels of ionization in the various detector subsystems. In addition, if the LLP is sufficiently slow, detectors with sufficient timing resolution can be used to measure its speed. By contrast, a muon with the same momentum, shown at the top-right of the diagram, is minimum-ionizing and highly relativistic.}
\label{fig:cllp}
\end{figure}
A CLLP that decays after passing through (part of) the inner tracker gives rise to a track with anomalously high $dE/dx$ plus an additional signature that depends on the decay position. When the decay occurs inside the ID, the track can be a disappearing track, possibly with a displaced vertex at its endpoint. Decays inside the calorimeter system have a high-EM-fraction energy-deposition pattern, and decays outside the calorimeters produce a DV inside the MS. Depending on the speed of the CLLP, all these detected signals may in addition be delayed.
Multiple signatures may also be used when searching for the decay products of the LLP; assuming the decay products are detectable allows a search to be relatively agnostic to the charge of the LLP itself. When a decay occurs inside a particular subsystem, signals from other subsystems can help provide further background rejection. For example, for a decay inside the ID, a DV signature may be augmented by simultaneously searching for displaced calorimeter deposits, delayed signals in both the calorimeter and the MS, and other such signatures.
As the lifetime approaches very small values, reconstruction efficiencies of standard searches for prompt BSM decays increase. As a result, prompt searches can retain a tail of sensitivity in the small-lifetime regime. However, the use of standard reconstruction on displaced objects can lead to additional systematic uncertainties, as biases are introduced in the reconstruction, identification, and calibration techniques.
\subsection{Comparison of Detector Subsystem Acceptances}
\label{sec:acceptance-comparison}
It is useful to gain a basic understanding of the relative sensitivities of search analyses that rely on different detector subsystems for LLP detection with a volume-based acceptance study. For this purpose, we ignore the effects of background, which are usually small for all displacements. Furthermore, we assume that the \emph{efficiency}, defined as the probability to trigger on the event and identify the LLP if indeed it decayed within the relevant detector subsystem, is 100\%. With these simplifications, the sensitivity for each detector subsystem can be estimated based on the subsystem \emph{acceptance}, which we define as the probability for the LLP decay to occur within that subsystem, given the LLP lifetime and boost distribution.
This narrow definition of acceptance is useful for estimating the sensitivity in a range of search analysis methods, particularly those aimed at hadronic LLP decays. However, relating it to sensitivity fails for other analysis techniques, such as those aimed at reconstructing LLP decays into muons, which penetrate the calorimeter.
With this caveat in mind, we proceed to calculate the acceptances of typical LHC detector subsystems. We define in Fig.~\ref{fig:toystudy} an example detector with three subsystems, defined by radial and longitudinal barrel-region extents: an ID ($0<\rho<1~\mbox{m}$, $|z|<1~\mbox{m}$), a calorimeter system ($1.5<\rho<4~\mbox{m}$, $|z|<4~\mbox{m}$), and a MS ($4<\rho<10~\mbox{m}$, $|z|<10~\mbox{m}$). These volumes are roughly representative of the detectors at the LHC.
\begin{figure}[tb]
\centering
\includegraphics[width=0.48\textwidth]{figures/DetAcceptanceDetector.pdf}
\includegraphics[width=0.48\textwidth]{figures/DetAcceptance.pdf}
\caption{An example detector is shown (left) containing an ID, a calorimeter system, and a MS represented in a $z$ vs. $\rho$ space. For pair-produced particles with kinematics described in the text, the volume acceptance for LLPs is shown as a function of lifetime (right). The fraction of events containing one LLP decay in each system is shown as solid lines. The fraction of events with \emph{both} LLP decays contained in a single system is shown in dashed lines.}
\label{fig:toystudy}
\end{figure}
For this detector, we determine the acceptance for pair-produced LLPs, with kinematics taken from a simulated sample of gluino pairs produced with \textsc{MadGraph5\_aMC@NLO}~\cite{madgraph} for $13$~TeV proton-proton collisions, with a gluino mass of $m_{\tilde{g}}=2$~TeV. For a given value of the gluino lifetime, the proper decay time for each gluino is sampled from an exponential distribution, and the decay position in the detector is calculated given the gluino velocity. Since our purpose is only to calculate the acceptance, it is assumed that the gluino does not undergo significant interaction with the detector material. The fractions of events that contain at least one LLP decay in each of the ID, calorimeter, or MS are shown as solid lines in Fig.~\ref{fig:toystudy}. Requiring two LLPs to decay in a particular detector subsystem results in reduced acceptance, as shown by the dashed curves in Fig.~\ref{fig:toystudy}. These curves do not represent the associated reduction in reconstruction efficiency.
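The core of such an acceptance calculation can be sketched in a few lines. The sketch below uses the subsystem volumes defined in the text but replaces the MadGraph gluino kinematics with toy assumptions (a fixed boost and an isotropic decay direction), so the resulting fractions are illustrative only:

```python
import math
import random

# Barrel-region extents (rho_min, rho_max, z_max) in meters, as in the text
SUBSYSTEMS = {"ID": (0.0, 1.0, 1.0), "CALO": (1.5, 4.0, 4.0), "MS": (4.0, 10.0, 10.0)}

def decay_subsystem(ctau_m, beta_gamma, cos_theta, rng):
    """Return the subsystem (or None) in which a single LLP decays.
    The lab-frame decay length is exponential with mean beta*gamma*c*tau."""
    lab_length = rng.expovariate(1.0 / (beta_gamma * ctau_m))
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    rho, z = lab_length * sin_theta, abs(lab_length * cos_theta)
    for name, (rmin, rmax, zmax) in SUBSYSTEMS.items():
        if rmin < rho < rmax and z < zmax:
            return name
    return None

rng = random.Random(1)
n = 100000
counts = {k: 0 for k in SUBSYSTEMS}
for _ in range(n):
    sub = decay_subsystem(ctau_m=1.0, beta_gamma=1.0,
                          cos_theta=rng.uniform(-1, 1), rng=rng)
    if sub:
        counts[sub] += 1
print({k: v / n for k, v in counts.items()})
```

For a $c\tau$ of order the ID radius, most decays land in the ID, with exponentially fewer reaching the calorimeter and MS, in qualitative agreement with the curves in Fig.~\ref{fig:toystudy}.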
The actual search sensitivities depend on the trigger and reconstruction efficiencies, as well as on background levels, while the exercise presented here simply evaluates the spatial acceptance. However, since many searches in the ID have very low background even when requiring just a single LLP, Fig.~\ref{fig:toystudy} can be interpreted to demonstrate that the ID provides the best sensitivity for a wide range of lifetimes in these scenarios. In the case of a very large MS, such as that of the ATLAS detector, the MS acceptance overtakes that of the ID in the very long lifetime ($\tau>100$~ns) regime only when requiring at least one LLP per event. Therefore, except when an ID analysis suffers from low reconstruction efficiency or particularly high background, it is likely most advantageous to search for a single decay in the ID, even in the case of pair-produced LLPs.
We note that for analyses with high background levels, requiring two LLPs is usually a necessary measure for effective background reduction, despite the loss of acceptance.
\subsection{Considerations When Using Standard-Object Reconstruction}
\label{sec:recoconsiderations}
LLP searches often utilize standard reconstruction algorithms designed for prompt objects, such as jets, photons, and leptons. Depending on the LLP parameters, standard algorithms can be sensitive to the detector signatures of LLPs, often with significant efficiency. Nonetheless, considerable effort must be spent by analysts on understanding systematic effects that arise from the displacement and/or delay of the detector signals of a LLP. As an example, jets produced in a significantly displaced decay of a heavy LLP include hadrons that impinge on the calorimeter face at a significant grazing angle. This results in calorimeter energy deposits that are both delayed and have different cluster shapes from those of promptly produced jets. Accounting for these systematic effects on the jet energy scale and resolution requires dedicated studies.
As displaced and delayed signatures are generally reconstructed with degraded efficiency compared to prompt standard objects, the interplay between LLP detection and the determination of MET can be non-trivial. For example, an electrically charged LLP could cross the calorimeter while depositing relatively little energy. As it enters the MS, it could be reconstructed as a muon with the correct momentum. However, if it is sufficiently slow, it might arrive in the MS too late for the MS signature to be associated with the correct bunch crossing. The muon signature would thus be missed, leading to a measurement of significant MET.
Thus, on the one hand, LLP searches can sometimes utilize MET as a selection criterion. However, this requires careful study of the experimental effects that lead to the MET measurement. A further complication arises from the fact that MET calculation algorithms at the trigger level are often different from those used offline.
These aspects are important to consider when reinterpreting (or \emph{recasting}) the results of a particular search for application to a theoretical model that was not considered in the original experimental analysis. In particular, when considering how sensitive a search targeting prompt signals would be to a model with displaced decays, these additional uncertainties should not be neglected.
\section{Theoretical Motivation for Long-Lived Particles}
\label{sec:theory}
The proper lifetime of a particle, $\tau$, is given by
\begin{equation}
\tau^{-1}=\Gamma=\frac{1}{2m_X}\int d\Pi_f|{\mathcal M}(m_X\to \{p_f\})|^2
\end{equation}
where $m_X$ is the mass of the particle, ${\mathcal M}$ is the matrix element for the particle's decay into the decay products $\{p_f\}$, and $d\Pi_f$ is the Lorentz-invariant phase space for the decay. We use $\hbar=c=1$.
For a particle to be long-lived, there must be a small matrix element and/or limited phase space for the decay.
There are several mechanisms that typically lead to a small matrix element. One is an approximate symmetry which would, if exact, forbid the operator that mediates the decay. Small breaking of the symmetry results in a small coupling constant for this operator. Another mechanism arises from an effective higher-dimension operator. In this case, the coupling constant is suppressed by powers of the scale $\Lambda \gg m_X$ at which the decay is mediated. This, in fact, is the mechanism for long lifetimes in the case of weakly decaying particles in the SM. To summarize, for a model to predict LLPs, it must satisfy at least one of the following:
\begin{itemize}
\item (nearly) mass-degenerate spectra
\item small couplings
\item highly virtual intermediate states.
\end{itemize}
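The scale suppression in the last item can be made quantitative with a toy estimate: for a decay mediated by a dimension-6 operator, $\Gamma \sim m_X^5/\Lambda^4$ up to model-dependent $\mathcal{O}(1)$ and phase-space factors, which are assumptions in the sketch below.

```python
import math

HBAR_GEV_NS = 6.582e-16  # hbar in GeV*ns, converting widths to lifetimes

def tau_dim6_ns(m_x, Lam, c=1.0):
    """Toy lifetime for a decay via a dimension-6 operator:
    Gamma ~ c^2 * m_x^5 / (8*pi*Lam^4), with an assumed O(1) prefactor."""
    gamma = c * c * m_x**5 / (8.0 * math.pi * Lam**4)
    return HBAR_GEV_NS / gamma

# A 100 GeV particle decaying through an operator suppressed by 100 TeV
print(tau_dim6_ns(100.0, 1.0e5))  # ~1.7e-4 ns, i.e. a displaced decay
```

Because the lifetime scales as $\Lambda^4/m_X^5$, even a modest hierarchy between the mediation scale and the particle mass yields macroscopic decay lengths.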
These conditions, and the LLPs that result from them, are generic features of many BSM models developed to address the big open questions of particle physics mentioned in Sec.~\ref{sec:intro}.
In what follows, we categorize the discussion of LLP mechanisms into models of supersymmetry (SUSY), models of Neutral Naturalness, mechanisms of producing dark matter (DM), and portal interactions between a hidden sector and the SM. We also briefly discuss magnetic monopoles.
This section is meant to provide theoretical context for the experimental searches described later in the report, and not as an exhaustive summary of theoretical models. A more detailed description of theoretical models can be found in Ref.~\cite{Curtin:2018mvb}.
Therefore, we give the most attention to those models in which there are existing searches, in particular models of SUSY.
Note that the different mechanism categories are not mutually exclusive. For example, models of SUSY can also give rise to DM, produced via the mechanisms described in Sec.~\ref{sec:DM}. We summarize the dominant feature that gives rise to long lifetimes in each scenario in Table~\ref{tab:models}.
\begin{table}
\centering
\begin{tabular}{c l||c|c|c}
& & Small coupling & Small phase space & Scale suppression\\ \hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{\large{SUSY}}}&GMSB & & & \Checkmark\\
&AMSB & &\Checkmark & \\
&Split-SUSY & & & \Checkmark\\
&RPV &\Checkmark & & \\ \hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{\large{NN}}}&Twin Higgs & \Checkmark & & \\
&Quirky Little Higgs & \Checkmark && \\
&Folded SUSY & &\Checkmark & \\ \hline
\multirow{3}{*}{\rotatebox[origin=c]{90}{\large{DM}}}&Freeze-in &\Checkmark & & \\
&Asymmetric & & & \Checkmark\\
&Co-annihilation & & \Checkmark& \\
\hline
\multirow{4}{*}{\rotatebox[origin=c]{90}{\large{Portals}}}
&Singlet Scalars & \Checkmark & & \\
&ALPs & & & \Checkmark\\
& Dark Photons & \Checkmark & &\\
&Heavy Neutrinos& & &\Checkmark
\end{tabular}
\caption{Dominant feature that gives rise to long-lived particles in the theoretical models and mechanisms discussed in the text.}
\label{tab:models}
\end{table}
\subsection{Supersymmetry}
\label{sec:susytheory}
The mass of the SM Higgs boson is required to be around the electroweak scale ($M_{EW}\sim 100$ GeV) due to arguments of perturbative unitarity~\cite{Tanabashi:2018oca}. Since the Higgs boson is a scalar particle, it is sensitive to quantum corrections that are proportional to the cutoff energy scale below which the SM is a good effective field theory. In particular, the Higgs boson mass squared diverges quadratically with the cutoff scale.
For a cutoff scale far above the weak scale, maintaining the Higgs mass at its physical value requires fine tuning of the corresponding SM parameter. This is known as the \emph{Hierarchy Problem}. Of the solutions to the Hierarchy Problem, supersymmetry is the most well-known and well-studied~\cite{Martin:1997ns}. The dominant contribution to the quadratic divergence of the Higgs mass comes from the top-quark loop. SUSY protects the weak-scale value of the Higgs mass by introducing a colored scalar partner of the top, $\tilde t$, which cancels the quadratic divergence. Many models of SUSY give rise to naturally long-lived particles, and have thus served as standard benchmarks in many of the LHC LLP searches. The simplest variation of SUSY is the Minimal Supersymmetric Standard Model (MSSM). If SUSY were an exact symmetry, the spectrum of superpartners would be mass degenerate with the SM particles. Since we have not observed these particles, we know that SUSY must be a broken symmetry. Within the MSSM, one has a variety of options for breaking SUSY, which in turn determine the phenomenology.
\subsubsection{Gauge-Mediated SUSY Breaking}
The simplest SUSY model that gives rise to LLP signatures is \emph{Gauge-Mediated SUSY Breaking} (GMSB)~\cite{Giudice:1998bp}.
In GMSB, SUSY is broken via the gauge interactions of the chiral messenger superfields, $\Phi$, which interact with the goldstino superfield $X$ through the superpotential
\begin{equation}
W=\lambda_{ij}\bar\Phi_i X \Phi_j.
\end{equation}
SUSY is broken when $X$ acquires a \emph{vacuum expectation value} (vev) along the scalar and auxiliary components,
\begin{equation}
\langle X \rangle = M+\theta^2 F,
\end{equation}
where $M$ is the messenger mass scale and $\sqrt{F}$ is proportional to the mass splitting inside the supermultiplet.
One feature of GMSB is that the gravitino, $\widetilde G$, is typically the lightest supersymmetric partner (LSP), and that the attributes that give rise to LLP signatures depend only on $F$. In particular, the next-to-lightest superpartner (NLSP) decays to the gravitino and a SM particle via higher-dimensional operators that are suppressed by $1/F$. The mass of the gravitino is given by
\begin{equation}
m_{\widetilde G}=\frac{F}{k\sqrt{3} M_{Pl}},
\label{eq:mgravitino}
\end{equation}
where $M_{Pl}=(8\pi G_N)^{-1/2}\simeq 2.4\times 10^{18}$ GeV is the reduced Planck mass and $G_N$ is the gravitational constant. The constant $k\equiv F/F_0 < 1$, where $F_0$ is the fundamental scale of SUSY breaking, depends on how SUSY breaking is communicated to the messengers. The suppression by $M_{Pl}$ results in the gravitino being very light.
If the neutralino $\widetilde\chi_1^0$ is the NLSP, its inverse decay width is given by
\begin{equation}
\Gamma^{-1}(\widetilde\chi_1^0\to\widetilde G +{\rm SM})=\frac{16\pi F^2}{k^2\kappa_i m_\chi^5}\sim \frac{1}{\kappa_i}\left(\frac{\sqrt{F/k}}{10^6~{\rm GeV}}\right)^4\left(\frac{300~{\rm GeV}}{m_\chi}\right)^5\times10^{-2} ~{\rm ns} ,
\label{eq:tau-gmsb}
\end{equation}
where $m_\chi$ is the mass of the neutralino and $\kappa_i$ is a parameter that depends on the neutralino mixing matrix. For example, if $\tilde\chi_1^0$ is a pure Bino, the superpartner of the SM $U(1)$ gauge boson, then the decay is dominantly into a photon with $\kappa_i=\kappa_\gamma\equiv |N_{11}\cos\theta_W+N_{12}\sin\theta_W|^2$, where $\theta_W$ is the weak-mixing angle and $N_{1i}$ are the components of $\tilde\chi_1^0$ in standard notation~\cite{HABER198575}.
We see that $\sqrt{F/k}\sim 10^6$~GeV gives rise to a long-lived neutralino that decays to a displaced photon or $Z$ via the diagrams shown in Fig.~\ref{fig:GMSB}.
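The scaling in Eq.~(\ref{eq:tau-gmsb}) is easy to evaluate numerically; the sketch below (our own parametrization, with $\kappa_i=1$ by default) reproduces the quoted benchmark:

```python
def tau_neutralino_ns(sqrt_F_over_k, m_chi, kappa=1.0):
    """Parametric NLSP lifetime of the GMSB formula: inputs in GeV, result in ns."""
    return (1.0 / kappa) * (sqrt_F_over_k / 1e6)**4 * (300.0 / m_chi)**5 * 1e-2

# benchmark: sqrt(F/k) = 1e6 GeV, m_chi = 300 GeV -> 0.01 ns
tau = tau_neutralino_ns(1e6, 300.0)
```

Note the steep $F^2$ dependence: raising $\sqrt{F/k}$ by one order of magnitude lengthens the lifetime by four.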
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{figures/GMSB-schannel.pdf}
\includegraphics[width=0.35\textwidth]{figures/GMSB-tchannel.pdf}
\caption{Example of long-lived neutralino NLSP ($\tilde\chi_0$)
production at a hadron collider through either an $s$-channel $Z$ (left) or $t$-channel squark exchange (right). The neutralino decays predominantly into a gravitino $\tilde G$ and a $\gamma$ or $Z$ for a Bino-like or Higgsino-like neutralino, respectively. The blue circle denotes the vertex that makes $\tilde\chi_0$ long-lived.
}
\label{fig:GMSB}
\end{figure}
In general, the long-lived neutralino NLSP can be a mixture of the Bino, Wino, and Higgsino gauge eigenstates, leading to a wider variety of final states than described in this section~\cite{Ruderman:2011vv}.
Although the Higgsino and Wino are not the NLSP in the most minimal version of GMSB, they can be in General Gauge Mediation~\cite{Meade:2008wd,Buican:2008ws,Cheung:2007es}, with potentially interesting long-lived signatures~\cite{Meade:2010ji,ATL-PHYS-PUB-2017-019}.
\subsubsection{Anomaly-Mediated SUSY Breaking}
\label{sec:amsb}
One can also break SUSY through a combination of anomaly and gravity effects. This is known as \emph{Anomaly-mediated SUSY breaking} (AMSB) and gives rise to a different pattern of masses and signatures from those of GMSB. In general, the superconformal anomaly will give rise to soft mass parameters that break SUSY~\cite{Randall:1998uk,Giudice:1998xp}.
In fact, this effect is present in any model with SUSY breaking, but is subdominant if there are other mechanisms for SUSY breaking, such as GMSB.
In pure anomaly mediation, the gaugino masses are given by
\begin{equation}
M_i=\frac{\beta(g_i^2)}{2g_i^2}m_{\tilde G},
\end{equation}
where $g_i$ is the gauge coupling constant for gauge groups $i=1,2,3$, corresponding to $U(1), SU(2)$, and $SU(3)$, respectively,
$\beta(g_i^2)$ is the corresponding renormalization group beta-function~\cite{Gherghetta:1999sw}, and $m_{\tilde G}$ is the gravitino mass.
AMSB predicts mass ratios of $M_1:M_2:M_3\simeq 3:1:7$, so that the Wino is the LSP.
One consequence of this mass hierarchy, and a defining feature of AMSB, is that the lightest chargino is nearly mass degenerate with the lightest neutralino due to an approximate custodial symmetry~\cite{SIKIVIE1980189}. The mass difference is given by~\cite{Giudice:1998xp}
\begin{equation}
m_{\widetilde\chi_\pm}-m_{\widetilde \chi_0}=\frac{M_W^4}{\mu^3}\sin 2\beta+\frac{M_W^4\tan^2\theta_W}{(M_1-M_2)\mu^2}\sin^2 2\beta,
\end{equation}
where $M_W$ is the mass of the $W$ boson, $\mu$ is the supersymmetric Higgs mass, and $\tan\beta$ is the ratio of up and down-type Higgs vevs.
$\widetilde\chi_\pm$ decays with inverse decay width~\cite{Asai:2008sk}
\begin{equation}
\Gamma^{-1}(\widetilde \chi^\pm\to\widetilde \chi_0+X^\pm)\sim \left(\frac{800~{\rm MeV}}{m_{\widetilde \chi_\pm}-m_{\widetilde \chi_0}}\right)^3\times10^{-3}~{\rm ns},
\end{equation}
where $X$ is a SM particle, \emph{e.g.} $\widetilde \chi^\pm\to\widetilde \chi_0+\pi^\pm$.
In contrast to the GMSB scenario above, where the NLSP was a neutralino, the NLSP here is the chargino. Being long-lived and charged, it directly interacts with the detector, leaving a unique track signature. Several production modes for the chargino are shown in Fig.~\ref{fig:AMSB}.
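The chargino lifetime formula above can be evaluated directly (a sketch; the 160~MeV point is an illustrative smaller splitting, not a value from the text):

```python
def tau_chargino_ns(delta_m_MeV):
    """Parametric chargino lifetime: (800 MeV / delta_m)^3 x 1e-3 ns."""
    return (800.0 / delta_m_MeV)**3 * 1e-3

tau_ref   = tau_chargino_ns(800.0)  # 1e-3 ns at the reference splitting
tau_small = tau_chargino_ns(160.0)  # 0.125 ns for a five-times-smaller splitting
```

The cubic dependence on the mass splitting means a modestly compressed spectrum quickly pushes the decay length into the tracker-scale regime relevant for disappearing-track searches.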
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{figures/AMSB-chargedN-N0W.pdf}
\includegraphics[width=0.3\textwidth]{figures/AMSB-gogo-N0Npm-4q2N0W.pdf}
\includegraphics[width=0.3\textwidth]{figures/AMSB-N0Npmj.pdf}
\caption{Various production modes for the long-lived chargino ($\tilde\chi^\pm$) at a hadron collider in models of AMSB. The chargino then decays to a neutralino $\tilde\chi_0$ and a soft SM particle $X^\pm$.
}
\label{fig:AMSB}
\end{figure}
\subsubsection{Split-SUSY}
\label{sec:split-susy}
Models of \emph{split-SUSY}~\cite{ArkaniHamed:2004fb,Giudice:2004tc} give rise to long-lived gluinos, which can have interesting signatures at the LHC~\cite{Hewett:2004nw,Kilian:2004uj}. In these models, SUSY is no longer the solution to the hierarchy problem. Instead, SUSY breaking occurs at a scale of $m_S\gg 1000~\tev$,
and all the scalars are ultra-heavy, except for one, which serves as the Higgs boson. By contrast, the fermions, particularly the gluino,
can have weak-scale masses due to chiral symmetries. This setup addresses several shortcomings of other SUSY models: it accommodates the absence of experimental evidence for superpartners, avoids proton decay, and solves the SUSY flavor and CP problems as well as the cosmological gravitino and moduli problems, though at the expense of fine-tuning~\cite{Martin:1997ns}.
The long lifetime of the gluino arises due to the fact that it can only decay through a virtual squark, as shown in Fig.~\ref{fig:splitSUSY}.
Since the squarks are ultra-heavy by construction, this decay is highly suppressed.
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{figures/Split-gogo-qvsq-2qN2qN.pdf}
\caption{Production of long-lived gluinos ($\tilde g$) in a hadron collider, which subsequently decay through an off-shell squark in models of split-SUSY.
}
\label{fig:splitSUSY}
\end{figure}
The effective operators
that give rise to gluino decay are higher-dimensional and suppressed by the squark mass scale, $m_S$:
\begin{equation}
{\cal O}^{(6)}\sim \frac{g_s^2}{m_S^2}\bar q\tilde g\bar{\tilde \chi} q,~~~{\cal O}^{(5)}\sim \frac{g_s^2}{16\pi^2m_S}\tilde g\sigma_{\mu\nu}\tilde \chi G^{\mu\nu},
\end{equation}
where $g_s$ is the strong-coupling constant. These operators lead to the gluino decays $\tilde g\to\chi^0 g, \tilde g\to\chi^0 q\bar q$, and $\tilde g\to\chi^\pm q\bar q^\prime$. An example of the second process is given in Fig.~\ref{fig:splitSUSY}.
Parametrically, the inverse of the gluino decay width is~\cite{Gambino:2005eh}
\begin{equation}
\Gamma^{-1}\sim\frac{4}{N}\left(\frac{m_S}{10^3~{\rm TeV}}\right)^4\left(\frac{1~{\rm TeV}}{m_{\tilde g}}\right)^5\times10^{-4}~\rm{ns},
\end{equation}
where $N$ is an ${\cal{O}}(1)$ normalization factor that depends on the exact parameters of the theory, as well as on the particular decay channel.
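The parametric gluino lifetime can be sketched as follows (function name and inputs are our own; $N=1$ is assumed by default):

```python
def tau_gluino_ns(m_S_TeV, m_gluino_TeV, N=1.0):
    """Parametric gluino lifetime: (4/N)(m_S/10^3 TeV)^4 (1 TeV/m_gluino)^5 x 1e-4 ns."""
    return (4.0 / N) * (m_S_TeV / 1e3)**4 * (1.0 / m_gluino_TeV)**5 * 1e-4

tau = tau_gluino_ns(1e3, 1.0)  # 4e-4 ns at the reference point
```

Because of the $m_S^4$ scaling, varying the SUSY-breaking scale over a few orders of magnitude sweeps the gluino lifetime across all the qualitatively distinct regimes discussed below.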
There are several mass scales to note. For $m_S>10^3~\gev$, the gluino is long-lived enough to hadronize into a color-singlet state known as an $R$-hadron before decaying~\cite{Farrar:1978xj}. The bosonic gluino $R$-baryon is composed of a gluino and $qqq$ states, while the fermionic gluino $R$-meson and $R$-glueball are formed through a gluino binding to $q\bar q$ and gluon states, respectively. The $R$-hadron flavor structure is analogous to that of ordinary baryons, mesons, and glueballs~\cite{Tanabashi:2018oca}. For $m_S>10^6~\gev$, the gluino travels macroscopic distances before decaying, and for $m_S>10^7~\gev$ it typically decays outside the detector or is stopped in the detector material. At $m_S>10^9~\gev$, the $R$-hadrons may begin to affect nucleosynthesis in the early universe, and at $m_S>10^{13}~\gev$ it is effectively stable, since it has a lifetime longer than the age of the universe.
The mass spectrum of allowed $R$-hadron states has been studied in a variety of ways. Simple models based on constituent-quark and gluon masses give predictions for mass splittings between various states~\cite{PhysRevD.12.147,Kraan:2004tz,Farrar:2010ps,BUCCELLA1985311}. Limited calculations from lattice QCD also exist for certain simplified states~\cite{Marsh:2013xsa}. The phenomenology of $R$-hadron detection can depend greatly on the mass spectrum, especially for the identity of the lightest of these states. Heavier states will tend to cascade to the lightest state in interactions with material, and the charge of this lightest state impacts the character of allowed signatures. Neutral $R$-hadrons lose energy through hadronic interactions, while charged ones also lose energy via ionization. Due to hadronic scattering, $R$-hadrons can change electric charge as they pass through detector material, and can also become doubly charged, giving rise to unique signatures~\cite{Kraan:2004tz,deBoer:2007ii,Mackeprang:2006gx,Mackeprang:2009ad}. $R$-hadrons that decay inside the detector can be detected via displaced or delayed decays, as well as ``disappearing'' tracks. These signatures are discussed in Sec.~\ref{sec:signatures}.
\subsubsection{SUSY Models with \texorpdfstring{$R$}{R}-Parity Violation}
In all the models discussed in the previous sections, there is an implicit global $\mathbb{Z}_2$ symmetry known as $R$-parity, with quantum number $R_p=(-1)^{3(B-L)+2S}$, where $B$, $L$, and $S$ are baryon number, lepton number, and spin, respectively. All SM particles have $R_p=+1$ and their superpartners have $R_p=-1$. This forbids dangerous tree-level renormalizable operators that violate baryon and lepton number, which can lead to proton decay and flavor violation. However, the $\mathbb{Z}_2$ symmetry is not theoretically required for supersymmetry. Therefore, one can remove it and allow for more general \emph{$R$-parity-violating} (RPV) interactions, as long as the experimental constraints are satisfied.
Models of RPV SUSY have been studied extensively~\cite{Barbier:2004ez,Kon:1994xe,Dreiner:1997uz,Barry:2013nva}.
The most general renormalizable superpotential with
RPV operators is,
in terms of the left-handed chiral superfields,
\begin{equation}
W_{\rm RPV}=\mu_i L_i H_u + \frac{1}{2}\lambda_{ijk} L_i L_j\bar e_k +\lambda'_{ijk}L_iQ_j\bar d_k +\frac{1}{2}\lambda''_{ijk}\bar u_i\bar d_j\bar d_k,
\end{equation}
\end{equation}
where $\mu_i, \lambda_{ijk}, \lambda'_{ijk}, \lambda''_{ijk}$ are the coefficients for
the RPV interactions. For example, non-zero values for both $\lambda'$ and $\lambda''$
lead to proton decay. In models of dynamical RPV (dRPV)~\cite{Csaki:2013jza}, $R$-parity is conserved at some high scale, and its breaking is communicated to the visible sector at a mediating scale $M$. As a result, additional non-holomorphic
operators are generated in the K\"ahler potential.
These take the form
\begin{equation}
W_{nh{\rm RPV}}=\kappa_i\bar e_i H_d H_u^\dagger+\kappa_i^\prime L_i^\dagger H_d+\eta_{ijk}\bar u_i\bar e_j\bar d_k^\dagger+\eta^\prime_{ijk}Q_i\bar u_j L^\dagger_k+\frac{1}{2}\eta^{\prime\prime}_{ijk} Q_i Q_j\bar d_k^\dagger,
\end{equation}
and couple to the SUSY-breaking field $X=M+\theta^2 F_X$.
Since RPV operators
are highly constrained by flavor measurements and by the non-observation of proton decay~\cite{Chemtob:2004xr}, the RPV coefficients must be small. In the case of the non-holomorphic operators, the coefficients are small since they are suppressed by $\epsilon_X\equiv F_X/M^2$, which can be as small as ${\mathcal O}(10^{-16})$, depending on the SUSY-breaking mediation scheme.
As a result, particles with long lifetimes are a generic feature of RPV theories.
One experimental signature of RPV can be displaced decays of the LSP~\cite{Aad:2012zx,Khachatryan:2014mea}.
For example, a neutralino can decay into a lepton and two quarks via an off-shell slepton (shown in Fig.~\ref{fig:RPVlambda}), with a mean inverse decay width of~\cite{Dreiner:1991pe}
\begin{equation}
\Gamma^{-1}(\widetilde\chi_0\to \ell_i q_j q_k)\sim \left(\frac{m_{\tilde \ell_i}}{750{~\rm GeV}}\right)^4\left(\frac{100{~\rm GeV}}{m_{\widetilde \chi_0}}\right)^5\left(\frac{10\times 10^{-5}}{\lambda_{ijk}^\prime}\right)^2\times0.1{~\rm ns},
\end{equation}
where the indices $i, j, k$ denote the lepton and quark generations.
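A numerical sketch of this neutralino lifetime (our own parametrization of the scaling above):

```python
def tau_rpv_neutralino_ns(m_slepton, m_chi, lam_p):
    """(m_slep/750 GeV)^4 (100 GeV/m_chi)^5 (1e-4/lambda')^2 x 0.1 ns; masses in GeV."""
    return (m_slepton / 750.0)**4 * (100.0 / m_chi)**5 * (1e-4 / lam_p)**2 * 0.1

tau = tau_rpv_neutralino_ns(750.0, 100.0, 1e-4)  # 0.1 ns at the reference point
```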
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{figures/RPV-ff-N0N0-nullnull.pdf}~~
\includegraphics[width=0.35\textwidth]{figures/RPV-gg-stopstop-lqdd.pdf}
\caption{(Left) Decays of the long-lived neutralino $\tilde{\chi}^0_1$ into 3 leptons via the RPV coupling $\lambda_{ijk}$ (circles). (Right) Decay of a long-lived $\tilde t$ via either the RPV $\lambda^\prime$ or $\eta^{\prime\prime}$ couplings (circles).}
\label{fig:RPVlambda}
\end{figure}
The lightest stop can decay directly into a lepton and quark with an inverse decay width of
\begin{equation}
\Gamma^{-1}(\widetilde t_1\to \ell^+_i q_k)\sim \left(\frac{500{~\rm GeV}}{m_{\widetilde t_1}}\right)\left(\frac{10^{-7}}{\lambda'_{ijk}}\right)^2\left(\frac{0.12}{\cos^2\theta_t}\right)\times 10^{-3}{~\rm ns},
\end{equation}
where $\theta_t$ is the mixing angle between the left- and right-handed stops.
Recently, there have been several studies of the LHC signatures of LLPs with hadronic RPV decays~\cite{Csaki:2015uza,Liu:2015bma}. For example, the $\eta_{333}^{\prime\prime}$ coefficient
induces a $\tilde t\to\bar b\bar b$ decay with inverse decay width
\begin{equation}
\Gamma^{-1}(\tilde t\to\bar b\bar b)\sim \left(\frac{300{~\rm GeV}}{m_{\tilde t}}\right)\left(\frac{M}{10^9{~\rm GeV}}\right)^2\left|\frac{1}{\eta_{333}^{\prime\prime}}\right|^2\times 1.5{~\rm ns}.
\end{equation}
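The dRPV stop lifetime above can be sketched numerically (function name and argument conventions are our own choices):

```python
def tau_stop_bb_ns(m_stop, M_mediation, eta_333):
    """(300 GeV/m_stop)(M/1e9 GeV)^2 |1/eta''_333|^2 x 1.5 ns; masses in GeV."""
    return (300.0 / m_stop) * (M_mediation / 1e9)**2 * abs(1.0 / eta_333)**2 * 1.5

tau = tau_stop_bb_ns(300.0, 1e9, 1.0)  # 1.5 ns at the reference point
```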
We also note that if the LSP is a long-lived stop, it forms an $R$-hadron.
\subsection{Neutral Naturalness}
\label{sec:neutralnaturalness}
An alternative class of solutions to the Hierarchy Problem is provided by models of \emph{Neutral Naturalness}. These models rely on discrete symmetries that result in colorless top partners that protect the weak scale, in contrast to the colored top partners in traditional SUSY models.
Neutral naturalness includes the Twin Higgs~\cite{Chacko:2005pe,Chacko:2005un}, Folded SUSY~\cite{Burdman:2006tz}, and Quirky Little Higgs~\cite{Cai:2008au} models. These models lead naturally to \emph{Hidden Valley} scenarios~\cite{Strassler:2006im,Strassler:2006ri,Han:2007ae}, in which there is a confining hidden sector that is neutral under the SM and only interacts with the SM through so-called portal-type interactions,
which we discuss in more detail in Sec.~\ref{sec:portals}. Such models can lead to various signatures at colliders, and have been studied in the context of emerging jets~\cite{Schwaller:2015gea}, \emph{Soft Unclustered Energy Patterns} (SUEPs, also known as \emph{soft bombs})~\cite{Knapen:2016hky},
and semi-visible jets~\cite{Cohen:2015toa,Cohen:2017pzm} to name a few.
\emph{Twin Higgs} is a class of pseudo-Nambu-Goldstone Boson (pNGB) models with two exact copies of the SM
related by a discrete $\mathbb{Z}_2$ symmetry, and a scalar potential
\begin{equation}
V = \frac{\lambda}{2} \left( |H|^2 + |H^T|^2 \right)^2 ~~,
\label{eq:portal}
\end{equation}
where $H$ and $H^T$ are the Higgs doublets of the SM and the Twin sector, respectively.
The key feature is that this scalar potential respects an $\rm{SU}(4)$ symmetry. In the vacuum, this symmetry is spontaneously broken,
giving rise to a vev $\langle H^T \rangle = f/\sqrt{2}$, and the SM Higgs emerges as a light pNGB.
Since the quadratic corrections to both doublets are equal and respect the $\rm{SU}(4)$ symmetry, they do not contribute to the mass of the SM Higgs. As a result, the weak scale is protected from quadratic corrections. The most minimal version of the Twin Higgs is one in which the Twin sector contains only the third generation fermions, and is known as the \emph{Fraternal Twin Higgs}~\cite{Craig:2015pha}. In this scenario, the Twin gluons are the lightest objects charged under the Twin color. These Twin gluons can hadronize into long-lived glueball states, which then decay back to SM particles through the Higgs portal~\cite{Juknevich:2009gg,Juknevich:2009ji}. Models of \emph{Folded SUSY} are similar in spirit to those of the Twin Higgs, but instead of having a twin copy of the SM gauge groups, they only have a twin $SU(3)$.
In the \emph{Quirky Little Higgs} model, the top partner is an uncolored ``top quirk'' charged under a hidden $SU(3)$ gauge group. Quirks and anti-quirks are stable, heavy
particles that are connected by a flux tube of the dark gluons of the hidden $SU(3)$~\cite{Kang:2008ea}. In QCD, the quark mass is much smaller than the confining scale, $m_q\ll \Lambda_{\rm QCD}$, and so the gluon flux tube easily breaks into multiple bound states.
By contrast, quirky models have the opposite hierarchy between the quirk mass and the confining scale, $m_Q\gg \Lambda$. As a result, the dark gluon flux tube does not break easily and instead causes the quirks to have macroscopic oscillations before they eventually annihilate. This leads to exotic signatures~\cite{Knapen:2017kly}, particularly when the quirk is electrically charged.
The coupling between the SM Higgs $h$ and the top partners in models of neutral naturalness induces a loop-level coupling of the Higgs to the hidden gluons. The resulting effective coupling is of the form
\begin{equation}
{\cal L}\supset\theta^2\frac{\widehat \alpha_3}{12\pi}\frac{h}{v}\widehat G^a_{\mu\nu}\widehat G^{\mu\nu}_a,
\label{eq:NNoperator}
\end{equation}
where $\widehat\alpha_3$ is the Twin $SU(3)$ coupling, $\widehat G_a^{\mu\nu}$ is the Twin gluon field strength,
$\theta$ is a model-dependent mixing angle, and $v$ is the vev of $H$. For the Twin or Quirky Little Higgs models, $\theta^2\simeq v^2/f^2$, where $f$ is the scale of spontaneous global symmetry breaking. For Folded SUSY, $\theta^2\simeq m_t^2/2 m_{\tilde t}^2$, where $\tilde t$ is the scalar top partner.
The hidden $SU(3)$ sector contains a spectrum of glueball states. The lightest of these is typically the scalar $G_{0^{++}}$ (where $0^{++}$ indicates its $J^{PC}$ quantum numbers), which can decay back into SM states through the Higgs-portal interaction of Eq.~\ref{eq:NNoperator}. This can result in exotic Higgs decays
at the LHC~\cite{Curtin:2015fna} or displaced decays of the glueball with a mean inverse decay width of~\cite{Craig:2015pha}
\begin{equation}
\Gamma^{-1}(G_{0^{++}}\to h^*\to XX)\sim \left(\frac{10{~\rm GeV}}{m_0}\right)^7\left(\frac{f}{5{~\rm TeV}}\right)^4\times 10^3{~\rm ns},
\end{equation}
where $m_0$ is the mass of $G_{0^{++}}$,
$X$ is a SM state, and $h^*$ is an off-shell Higgs.
The next lightest glueball, $G_{2^{++}}$, has a mass of $m_2 \sim 1.4\, m_0$, and is metastable. It predominantly decays through radiative Higgs production, $G_{2^{++}} \to G_{0^{++}} h$.
Depending on the parameters,
the $G_{2^{++}}$ can be long-lived and give rise to displaced vertices at the LHC.
Diagrams for production and decay of these glueballs are shown in Fig.~\ref{fig:TH}.
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{figures/TH-0pp.pdf} ~~~~~~~ \includegraphics[width=0.35\textwidth]{figures/TH-2pp.pdf}
\caption{Production of the long-lived glueballs $G_{0^{++}}$ (left) and $G_{2^{++}}$ (right) at a hadron collider through Higgs decays, and their subsequent decays via the Higgs portal into SM particles $X$, or into $G_{0^{++}}$ plus SM particles.
}
\label{fig:TH}
\end{figure}
In addition, there is a spectrum of twin-quarkonia states. In particular, the phenomenology is enriched if there is also a sufficiently light twin bottom quark. This may lead to an overall enhancement in the twin hadron production rate, giving rise to some combination of twin glueballs and twin bottomonium~\cite{Juknevich:2009ji,Craig:2015pha}.
\subsection{Dark Matter}
\label{sec:DM}
In this section, we move from a discussion on models to one on mechanisms related to dark matter that give rise to LLPs. We present different ways to populate the observed cosmological DM relic abundance, which also give rise to LLPs at colliders. Specifically, we discuss Freeze-in DM, Asymmetric DM, and Co-annihilating DM, and give explicit examples of how these mechanisms are manifested in models of SUSY.
\subsubsection{Freeze-in DM}
Models of \emph{Freeze-in} DM~\cite{Hall:2009bx} generically give rise to LLPs in colliders, and have been studied in much detail (see {\emph{e.g.}}~\cite{Hall:2010jx,Cheung:2010gk,Cheung:2011nn,Co:2015pka,Garny:2018ali,Heeba:2018wtf}).
The freeze-in mechanism is effectively the inverse of the well-known thermal ``freeze-out'' mechanism~\cite{Kolb:1990vq},
and works by populating the DM abundance through $\chi_2\to\chi_1 + X$ decays, where $\chi_2$ is in thermal equilibrium in the early universe, $\chi_1$ is the DM particle, and $X$ represents one or more SM particles. For example, in one specific realization of ``freeze-in'' in the context of SuperWIMP theories~\cite{Feng:2003uy,Feng:2004zu}, $\chi_2$ is a charged slepton that decays into a lepton and the gravitino, $\tilde\ell^\pm\to\ell^\pm\tilde G$, as discussed in~\cite{Cheung:2010gk}.
The key feature of freeze-in models is that the interaction between $\chi_2$ and $\chi_1$ is given by a very feeble coupling $g_{12}$, such that $\chi_1$ is thermally decoupled from the plasma. The feebleness of $g_{12}$ results in a long lifetime for $\chi_2$, which can be seen via displaced signatures at colliders.
The relic abundance of $\chi_1$ is related to the $\chi_2$ decay width $\Gamma_{\chi_2}$ through
\begin{equation}
\Omega_{\chi_1} h^2 = \frac{10^{27}}{g_\star^{3/2}} \frac{ m_1 \Gamma_{\chi_2}}{m_2^2}\,,
\end{equation}
where $\Omega_{\chi_1} h^2$ is the cosmological density of $\chi_1$,
and $g_\star$ is the number of
relativistic degrees of freedom at the temperature $T \approx m_2$ set by the $\chi_2$ mass. In the SM, $g_\star(100{~\rm GeV})\simeq 100$ while
$g_\star(100{~\rm MeV})\simeq 10$~\cite{Tanabashi:2018oca}. Taking $\chi_1$ to constitute all of the DM today, \emph{i.e.} $\Omega_{\chi_1} h^2=0.11$~\cite{Tanabashi:2018oca}, one obtains a prediction for the inverse decay width of $\chi_2$,
\begin{equation}
\Gamma^{-1}(\chi_2\to\chi_1+X)\sim\left(\frac{m_1}{100~{\rm{GeV}}}\right)\left(\frac{200~{\rm{GeV}}}{m_2}\right)^2\left(\frac{100}{g_*(m_2)}\right)^{3/2}\times10^6{~\rm{ns}}.
\end{equation}
Thus, the $\chi_2$ is practically stable on detector scales, and can be detected directly if it is electrically charged.
This direct correlation between the cosmological abundance of dark matter and the lifetime of the NLSP allows for precision collider tests of the freeze-in origin of dark matter. The production of $\chi_2$ at colliders depends on the specific implementation of the freeze-in mechanism.
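The predicted $\chi_2$ lifetime for a freeze-in origin of DM can be sketched numerically (our own parametrization of the formula above, with $g_\star=100$ assumed by default):

```python
def tau_freezein_ns(m1, m2, g_star=100.0):
    """Predicted chi_2 lifetime if chi_1 makes up all of the DM:
    (m1/100 GeV)(200 GeV/m2)^2 (100/g*)^(3/2) x 1e6 ns."""
    return (m1 / 100.0) * (200.0 / m2)**2 * (100.0 / g_star)**1.5 * 1e6

tau = tau_freezein_ns(100.0, 200.0)  # 1e6 ns ~ 1 ms at the reference point
```

Millisecond-scale lifetimes of this kind motivate searches for heavy charged tracks or for decays in detectors far from the interaction point.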
\subsubsection{Asymmetric DM}
Models of \emph{Asymmetric DM} (ADM)~\cite{Kaplan:2009ag,Zurek:2013wia,Petraki:2013wwa} connect the observed DM abundance to the baryon abundance, and thus explain the relatively similar abundances in the dark and visible sectors. The asymmetry is transferred between the visible and dark sectors through higher dimensional operators of the form
\begin{equation}
{\cal O}_{ADM}=\frac{{\cal O}_{B-L}{\cal O}_{X}}{\Lambda^{n+m-4}},
\end{equation}
where ${\cal O}_{B-L}$ is a SM operator that contains baryon number minus lepton number but no gauge quantum numbers, ${\cal O}_{X}$ is an operator that contains DM number, and $n,m$ are the dimensions of ${\cal O}_{B-L}$ and ${\cal O}_{X}$, respectively. ADM can be realized in a variety of different ways. For example, in SUSY, the simplest operators giving rise to ADM are given by
\begin{equation}
W_{ADM}=XLH\, , \; \frac{XU_i^c D_j^c D_k^c}{\Lambda_{ijk}}\, , \; \frac{XQ_i^c L_j D_k^c}{\Lambda_{ijk}}\, ,\; \mbox{and}~\frac{X L_i L_j E_k^c}{\Lambda_{ijk}}\, ,
\end{equation}
where $X$ is the supermultiplet containing the DM candidate, $U^c, D^c,E^c$ are the right-handed anti-quarks and charged anti-leptons, $Q,L$ are the left-handed quark and lepton doublets, and $H$ is the Higgs doublet. $i,j,k$ are the flavor indices.
These interactions allow the LSP to decay into the $X$-sector
plus SM particles. Depending on the size of $\Lambda$, this decay can be long-lived.
As an example, the fermionic operator ${\cal O}_{B-L}=qld^c$ leads to a 3-body decay of the squark LSP, with inverse decay width
\begin{equation}
\Gamma^{-1} (\tilde q\to q' \ell \tilde x)\sim \left(\frac{(F^{(3-\rm{body})})^{-1}}{10^{-5}~\rm{mm}}\right)\left(\frac{100~\rm{GeV}}{m_{\tilde q}}\right)^3\left(\frac{\Lambda_{ijk}}{100~\rm{TeV}}\right)^2 \times10^{-3}{~\rm ns},
\end{equation}
where we have ignored the final-state particle masses. If the neutralino is the LSP, then its 4-body decay proceeds through an off-shell squark, and has inverse decay width
\begin{eqnarray}
\Gamma^{-1} (\widetilde\chi_0\to q' \ell \tilde x)\sim \left(\frac{(F^{(4-\rm{body})})^{-1}}{100~\rm{mm}}
\right)\left(\frac{100~\rm{GeV}}{m_{\widetilde\chi_0}}\right)^7\left(
\frac{m_{\tilde q} }{500~\rm{GeV}}\right)^4\left(\frac{\Lambda_{ijk}}{100~\rm{TeV}}\right)^2&&\\
\times x^5[(10x^3-120 x^2-120x)+60(1-x)(2-x)\log(1-x)]^{-1}\times10^{-3}{~\rm ns},&&\nonumber
\end{eqnarray}
where $x=(m_{\tilde\chi_0}/m_{\tilde q})^2$, and $F^{(3-\rm{body})}$, $F^{(4-\rm{body})}$ are the 3-body and 4-body coefficients found in~\cite{Kim:2013ivd}.
Diagrams for production and decay of these LSPs are shown in Fig.~\ref{fig:ADM}.
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{figures/ADM-gg-sqsq-DMlqDMlq.pdf}~~~~~~~~
\includegraphics[width=0.35\textwidth]{figures/ADM-ff-N0N0-qqlDMqqlDM.pdf}
\caption{(Left) Decay of the long-lived LSP squark $\tilde{q}$, and
(Right) Decay of a long-lived LSP neutralino $\tilde{\chi}_1^0$
in asymmetric DM scenarios.
}
\label{fig:ADM}
\end{figure}
\subsubsection{Co-annihilating DM}
For
models with more than one particle species in the dark sector, the dark matter relic abundance can be set by annihilation between two different species. This is known as \emph{co-annihilation}. The effective annihilation cross-section between a DM particle $\chi_1$ and its co-annihilation partner $\chi_2$,
taking them to be in thermal and chemical equilibrium, is given by~\cite{Griest:1990kh}
\begin{eqnarray}
\sigma_{\rm eff}=\frac{g_{1}^2}{g_{\rm eff}^2}\left[\sigma_{11}+2\sigma_{12}\frac{g_2}{g_{1}}(1+\Delta)^{3/2}{\rm exp}(-x\Delta)\right.\nonumber\\
\left. +\sigma_{22}\frac{g_2^2}{g_{1}^2}(1+\Delta)^{3}{\rm exp}(-2x\Delta)\right],
\end{eqnarray}
where $\Delta=(m_2-m_1)/m_{1}$, $x=m_{1}/T$, $g_{1,2}$ is the number of degrees of freedom for $\chi_{1,2}$, and $g_{\rm eff}=\sum_{i=1}^N g_i(1+\Delta_i)^{3/2}{\rm exp}(-x\Delta_i)$ is the number of effective degrees of freedom of the dark sector with $\Delta_2=\Delta$ and $\Delta_1=0$.
This co-annihilation process, which determines the current dark matter relic abundance, also plays a crucial role in the phenomenology of dark matter production at colliders~\cite{Izaguirre:2015zva,Baker:2015qna,Buschmann:2016hkc}. If the mass-splitting $\Delta$ is small compared to the masses of the decay products, $\chi_2$ can be long-lived.
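The effective cross-section formula above is straightforward to implement; the sketch below (our own function names) makes the Boltzmann suppression of the heavier state explicit, and in the degenerate limit $\Delta\to 0$ with equal degrees of freedom and equal cross-sections it recovers $\sigma_{\rm eff}=\sigma$:

```python
import math

def g_eff(g1, g2, delta, x):
    """Effective dark-sector dof, with Delta_1 = 0 and Delta_2 = delta."""
    return g1 + g2 * (1.0 + delta)**1.5 * math.exp(-x * delta)

def sigma_eff(s11, s12, s22, g1, g2, delta, x):
    """Effective co-annihilation cross-section (Griest-Seckel form quoted above)."""
    b = (1.0 + delta)**1.5 * math.exp(-x * delta)  # Boltzmann suppression of chi_2
    ge = g_eff(g1, g2, delta, x)
    return (g1**2 / ge**2) * (s11 + 2.0*s12*(g2/g1)*b + s22*(g2/g1)**2 * b**2)

# degenerate limit: delta = 0, equal dof and cross-sections -> sigma_eff = sigma
sigma = sigma_eff(1.0, 1.0, 1.0, 2.0, 2.0, 0.0, 20.0)
```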
As an explicit example, let us consider the low-energy Lagrangian
\begin{equation}
{\cal L}\supset \bar\chi(i\partial-m_\chi)\chi+\bar\psi(i\partial-m_\psi)\psi+(yh\bar\chi\psi+h.c.),
\end{equation}
where $\chi$ is the DM particle, $\psi$ is its co-annihilation partner, and $m_\psi>m_\chi$. This scenario can occur in models of supersymmetry in which both the LSP and NLSP are predominantly Bino-Higgsino mixtures (see, \emph{e.g.}~\cite{ArkaniHamed:2006mb,Cheung:2012qy}). In this scenario, the $\psi$ can be long-lived with inverse decay width
\begin{equation}
\Gamma^{-1}(\psi\to\chi \bar f f)\sim \frac{74}{N_c y^2}\left(\frac{10^{-3}}{y_f}\right)^2\left(\frac{10^{-2}}{\Delta}\right)^5\left(\frac{100{~\rm GeV}}{m_\chi}\right)^5\times 10{~\rm ns},
\end{equation}
where $f$ is a SM fermion, $y_f$ is its Yukawa coupling to the Higgs, and $N_c=3$ for quark final states and 1 for leptons. This decay is depicted in Fig.~\ref{fig:coann}.
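A numerical sketch of this lifetime (our own parametrization; the leptonic benchmark with $N_c=1$ is an illustrative choice):

```python
def tau_psi_ns(y, y_f, delta, m_chi, N_c=3):
    """(74/(N_c y^2)) (1e-3/y_f)^2 (1e-2/Delta)^5 (100 GeV/m_chi)^5 x 10 ns."""
    return (74.0 / (N_c * y**2)) * (1e-3 / y_f)**2 * (1e-2 / delta)**5 \
           * (100.0 / m_chi)**5 * 10.0

tau = tau_psi_ns(1.0, 1e-3, 1e-2, 100.0, N_c=1)  # 740 ns for a leptonic final state
```

The steep $\Delta^{-5}$ dependence shows how a compressed spectrum drives the co-annihilation partner into the long-lived regime.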
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{figures/coann-ff-N0N0-N0ffN0ff.pdf}
\caption{Production of the long-lived $\psi$ which decays into the DM $\chi$ in a model of co-annihilation.
}
\label{fig:coann}
\end{figure}
\subsection{Effective Portals to a Hidden Sector}
\label{sec:portals}
The concept of a hidden sector appears in some of the models reviewed above. LLPs also arise when details of the hidden sector are not known, and one posits only the existence of a feeble interaction between the SM and the hidden sector, mediated by a single new field.
Interactions of this type between new physics and the SM are greatly restricted by the gauge and Lorentz symmetries of the SM, and can be categorized into various ``portals'' defined by the mediating particle. The dominant interaction terms that give rise to these portals are
\begin{equation}
{\cal L}\supset
\begin{cases}
(\mu S+\lambda S^2)H^\dagger H & {\rm scalar} \\[13pt]
\displaystyle{\frac{a}{f}}\,\widetilde F_{\mu\nu}F^{\mu\nu} & {\rm pseudoscalar}\\[13pt]
-\displaystyle{\frac{\epsilon}{2\cos\theta_W}}\,F^{\prime}_{\mu\nu}F^{\mu\nu} & {\rm vector}\\[13pt]
y_n LHN & {\rm neutrino}
\end{cases}~~.
\end{equation}
In this section, we introduce each of these portals, as well as the various channels in which the LLP manifests itself.
\subsubsection{Scalar Portal}
\label{sec:scalar-portal}
The simplest extension to the SM is to add a real singlet scalar, $S$, which interacts with the SM Higgs doublet $H$
through the renormalizable Lagrangian
\begin{equation}
{\cal L}\supset -\frac{\epsilon}{2} S^2 |H|^2+\frac{\mu_S}{2} S^2-\frac{\lambda_S}{4!}S^4+\mu_H^2 |H|^2-\lambda_H|H|^4,
\end{equation}
where we have imposed a discrete $\mathbb{Z}_2$ symmetry $S\to -S$ that removes the terms linear and cubic in $S$~\footnote{See \emph{e.g.}~\cite{Curtin:2013fra,Evans:2017lvd} for a discussion of this model in the context of exotic Higgs decays.}.
If both $S$ and $H$ have nonzero vevs, $S=s+v_s$ and $H=(h+v_h)/\sqrt{2}$, then the two physical scalar particles,
$h$ and $s$, can mix with a mixing angle $\sin\theta=\epsilon v_h v_s/(m_h^2-m_s^2)+{\cal O}(\epsilon^3)$. As a result, $s$ couples to SM fermions $f$ through the term
\begin{equation}
{\cal L}\supset \sin\theta\frac{m_f}{v_h}sf\bar f.
\end{equation}
For sufficiently small $\sin\theta$, the singlet scalar $s$ can be long-lived,
as long as its rapid decay to hidden-sector states is forbidden, \emph{e.g.} due to kinematics.
Its lifetime is then given by
\begin{equation}
\Gamma^{-1}(s\to f\bar f)\sim \left(\frac{0.2}{\sin\theta}\right)^2\left(\frac{100{~\rm MeV}}{m_s}\right)\left(\frac{0.511{~\rm MeV}}{m_f}\right)^2\left(1-\frac{4m_f^2}{m_s^2}\right)^{-3/2}\times 3.8~{\rm ns}.
\end{equation}
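The singlet lifetime above can be evaluated numerically; the sketch below (our own parametrization) reproduces the electron-channel benchmark:

```python
def tau_scalar_ns(sin_theta, m_s_MeV, m_f_MeV):
    """(0.2/sin)^2 (100 MeV/m_s)(0.511 MeV/m_f)^2 (1 - 4 m_f^2/m_s^2)^(-3/2) x 3.8 ns."""
    phase = (1.0 - 4.0 * m_f_MeV**2 / m_s_MeV**2)**(-1.5)
    return (0.2 / sin_theta)**2 * (100.0 / m_s_MeV) * (0.511 / m_f_MeV)**2 * phase * 3.8

tau = tau_scalar_ns(0.2, 100.0, 0.511)  # ~3.8 ns at the benchmark point
```

Since the decay width inherits the SM Yukawa hierarchy, a light $s$ below the muon threshold is forced into the electron channel and is parametrically much longer-lived.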
If the scalar mass is less than half the Higgs mass, it can be pair-produced at the LHC through exotic Higgs decays with partial width
\begin{equation}
\Gamma(h\to ss)=\frac{\lambda_S\sin^2\theta m_h^3}{48\pi m_s^2}\left(1+2\frac{m_s^2}{m_h^2}\right)^2\sqrt{1-4\frac{m_s^2}{m_h^2}}.
\end{equation}
If $s$ is light enough, it can also be produced through rare meson decays, in particular $\Upsilon\to s\gamma$ and the penguin decays such as $B\to sK$~\cite{GRINSTEIN1988363}.
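The exotic-decay partial width quoted above can be sketched as follows (a minimal implementation of the stated formula; the sample couplings are illustrative, not from the text):

```python
import math

def gamma_h_to_ss(lam_S, sin_theta, m_s, m_h=125.0):
    """Partial width for h -> ss as quoted above; masses in GeV, returns GeV.
    Returns 0 at or above the kinematic threshold m_s = m_h/2."""
    if 2.0 * m_s >= m_h:
        return 0.0
    r = (m_s / m_h)**2
    return (lam_S * sin_theta**2 * m_h**3 / (48.0 * math.pi * m_s**2)
            * (1.0 + 2.0*r)**2 * math.sqrt(1.0 - 4.0*r))

width = gamma_h_to_ss(0.01, 0.01, 20.0)  # sample point, well below threshold
```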
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{figures/Scalar-ss.pdf}
\caption{Di-scalar production from an exotic Higgs decay.}
\label{fig:scalar-schannel}
\end{figure}
\subsubsection{Pseudoscalar Portal}
One can also consider a pseudoscalar particle, $a$, which arises as a pNGB in theories with a spontaneously broken global symmetry. One famous example of such a particle is the \emph{axion}, which was introduced to solve the strong CP problem of QCD and arises from the breaking of the $U(1)$ Peccei-Quinn symmetry~\cite{Peccei:1977ur,Wilczek:1977pj}. The QCD axion has a fixed relationship between its mass $m_a$ and decay constant $f$. More general models allow for these two parameters to be independent, in which case $a$ is commonly known as an \emph{axion-like particle} (ALP). A key feature of ALPs is that they have derivative couplings to fermions, so their masses are protected from radiative corrections by a shift symmetry. As a result, one can have naturally light ALPs.
The ALP Lagrangian, including interaction with the SM fields up to dimension~5, is given by
\begin{eqnarray}
{\cal L}&\supset& \frac{(\partial_\mu a)^2}{2}- \frac{a}{f}\left[c_G\frac{g_S^2}{16\pi^2}\widetilde{G}_{\mu\nu}^AG^{A\mu\nu}+c_W\frac{g^2}{16\pi^2}\widetilde{W}_{\mu\nu}^AW^{A\mu\nu}+c_Y\frac{{g_Y}^2}{16\pi^2}\widetilde{B}_{\mu\nu}B^{\mu\nu}\right]\nonumber\\
&+&\frac{\partial_\mu a}{f}\sum_i\frac{c_i}{2}\bar\psi_i\gamma^\mu\gamma_5\psi_i,
\label{eq:alpL}
\end{eqnarray}
where the notation $\widetilde X^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}X_{\alpha\beta}$ denotes a dual field-strength tensor.
The $c_j$ are model-dependent coupling constants and $f$ is the scale of the UV completion. Generically, the fermion couplings of the ALP are subdominant to the gauge couplings.
The $a$ can be produced in $Z\to a\gamma$, through inclusive $\gamma^*\to\gamma a$ at $e^+e^-$ colliders,
and via flavor-changing neutral current meson decays, such as $B \to aK$ and $K \to a\pi$~\cite{Izaguirre:2016dfi}. At beam-dump experiments, the ALP can be produced via Primakoff production, $\gamma\gamma\to a$~\cite{Dobrich:2015jyk},
or via emission from a fermion (see~\cite{Essig:2013lka} for a review). One can also search for ALPs in diphoton initial states in ultra-peripheral heavy-ion collisions~\cite{Knapen:2016moh}. The ALP subsequently decays to either fermions or photons, depending on the exact model parameters, with inverse decay widths
\begin{eqnarray}
\Gamma^{-1}(a\to \gamma\gamma)&\sim&\left(\frac{f}{15{~\rm TeV}}\right)^2\left(\frac{300{~\rm MeV}}{m_a}\right)^3\times 10^{-3}{~\rm ns},\\
\Gamma^{-1}(a\to f_i\bar f_i)&\sim&4\left(\frac{300{~\rm MeV}}{m_a}\right)\left(\frac{f}{15{~\rm TeV}}\right)^2\left(\frac{100{~\rm MeV}}{m_{f_i}}\right)^2\times 10^{-4}{~\rm ns},
\end{eqnarray}
where we have taken $c_j=1$.
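For orientation, the two scaling relations above can be evaluated numerically. This is a minimal sketch assuming the same benchmark normalizations ($c_j=1$); the function names are ours.

```python
def alp_tau_gamma_ns(f_tev, m_a_mev):
    """1/Gamma(a -> gamma gamma), normalized to 1e-3 ns at
    f = 15 TeV, m_a = 300 MeV."""
    return 1e-3 * (f_tev / 15.0) ** 2 * (300.0 / m_a_mev) ** 3

def alp_tau_ff_ns(f_tev, m_a_mev, m_f_mev):
    """1/Gamma(a -> f fbar), normalized to 4e-4 ns at
    f = 15 TeV, m_a = 300 MeV, m_f = 100 MeV."""
    return 4e-4 * (300.0 / m_a_mev) * (f_tev / 15.0) ** 2 * (100.0 / m_f_mev) ** 2

tau_gg = alp_tau_gamma_ns(15.0, 300.0)        # reproduces the 1e-3 ns normalization
tau_ff = alp_tau_ff_ns(15.0, 300.0, 100.0)    # reproduces the 4e-4 ns normalization
```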
At dimension-6 and higher, there are additional couplings between the ALP and the Higgs,
\begin{equation}
{\cal L}\supset \frac{c_{ah}}{f^2}(\partial^\mu a)(\partial_\mu a)H^\dagger H+\frac{c_{Zh}}{f^3}(\partial^\mu a)\left(H^\dagger iD_\mu H +{\rm h.c.}\right)H^\dagger H+\ldots,
\end{equation}
which lead to the exotic Higgs decays $h\to aa$ and $h\to Za$~\cite{Bauer:2017ris}. These couplings can also contribute to the meson decays mentioned above.
\subsubsection{Vector Portal}
Adding a dark-sector Abelian gauge group, $U(1)_D$, to the SM leads to a vector portal interaction between the ``dark photon" $A^\prime$ and the SM hypercharge gauge boson $A$ through kinetic mixing,
\begin{equation}
{\cal L}\supset-\frac{\epsilon}{2\cos\theta_W}F^\prime_{\mu\nu} F^{\mu\nu},
\end{equation}
where $F^{\mu\nu}$ and $F^{\prime\mu\nu}$ are the field strength tensors for the SM hypercharge $U(1)_Y$ and for the $U(1)_D$ gauge groups, respectively.
The coefficient $\epsilon$ parameterizes the strength of the mixing between the two gauge fields and can in principle take an arbitrary value. However, values in the range $\epsilon^2\sim10^{-8}-10^{-4}$ are favored if the interaction is generated by heavy particles charged under both $U(1)_D$ and $U(1)_Y$~\cite{Holdom:1985ag}.
The dark photon can obtain mass through either a dark Higgs~\cite{Batell:2009yf,Curtin:2014cca} or the Stueckelberg mechanism~\cite{Stueckelberg:1900zz}. In the former scenario, either the dark photon or the dark Higgs can be long-lived.
A result of the kinetic mixing between $A^\prime$ and the SM photon is that the $A^\prime$ can be produced, when kinematically allowed, in any scenario in which a photon is produced. Therefore, one can search for the $A^\prime$ at $B$-factories~\cite{Batell:2009yf,Essig:2013vha}, electron beam dump experiments~\cite{Essig:2010xa}, and at both lepton~\cite{Essig:2009nc} and hadron~\cite{Curtin:2014cca} colliders. For $m_{A^\prime}<m_\pi$, the $A^\prime$ is produced in the decay $\pi^0\to A^\prime\gamma$, and can be searched for at proton fixed-target experiments, where pions are copiously produced~\cite{Batell:2009di}.
Once produced, the $A^\prime$ will decay into any charged SM particle pair through its kinetic mixing with the SM photon. The width for this decay is
\begin{equation}
\Gamma(A^\prime\to \bar f f)=\frac{1}{3}\epsilon^2\alpha\left(1+\frac{2m_f^2}{m_{A^\prime}^2}\right)\sqrt{m_{A^\prime}^2-4m_f^2}.
\end{equation}
In the case $m_{A^\prime}\gg m_f$, the lifetime of the dark photon decaying into fermions is given by
\begin{equation}
\Gamma^{-1}(A^\prime\to \bar f f)\sim 2.6 \left(\frac{10^{-5}}{\epsilon}\right)^2\left(\frac{100{~\rm MeV}}{m_{A^\prime}}\right)^2\times 10^{-2}{~\rm ns}.
\end{equation}
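The exact width formula and the quoted lifetime scaling can be cross-checked against each other. A minimal sketch in natural units, assuming $\hbar\simeq 6.582\times10^{-25}$ GeV$\,$s and $\alpha\simeq1/137$ (variable names are ours):

```python
import math

HBAR_GEV_S = 6.582e-25       # hbar in GeV*s
ALPHA = 1.0 / 137.036        # fine-structure constant

def dark_photon_width_gev(eps, m_ap_gev, m_f_gev):
    """Gamma(A' -> f fbar) from the kinetic-mixing width formula above."""
    if m_ap_gev <= 2.0 * m_f_gev:
        return 0.0  # channel kinematically closed
    return (eps**2 * ALPHA / 3.0
            * (1.0 + 2.0 * m_f_gev**2 / m_ap_gev**2)
            * math.sqrt(m_ap_gev**2 - 4.0 * m_f_gev**2))

def dark_photon_tau_ns(eps, m_ap_gev, m_f_gev):
    return HBAR_GEV_S / dark_photon_width_gev(eps, m_ap_gev, m_f_gev) * 1e9

# Benchmark: eps = 1e-5, m_A' = 100 MeV, f = electron.
# The exact width reproduces the quoted ~2.6e-2 ns scaling.
tau = dark_photon_tau_ns(1e-5, 0.1, 0.511e-3)
```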
If $m_{A^\prime}<2m_e$, then its only possible decay channel is $A^\prime\to 3\gamma$ with inverse decay width~\cite{Pospelov:2008jk,McDermott:2017qcg}
\begin{equation}
\Gamma^{-1}(A^\prime\to 3\gamma)\sim \left(\frac{0.003}{\epsilon}\right)^2\left(\frac{m_e}{m_{A^\prime}}\right)^9~{\rm s}.
\end{equation}
In this case the dark photon flies hundreds of meters before decaying, even for $\epsilon\sim 1$, and is seen only as missing energy in colliders.
\subsubsection{Heavy-Neutrino Portal}
The SM predicts that neutrinos are massless, but the observation of neutrino oscillations provides evidence that neutrinos do have small, non-zero masses~\cite{Tanabashi:2018oca}. A simple
way to generate neutrino masses is to add a set of three right-handed neutrinos, $n_i$,
with Majorana masses $M_{i}$ and no SM-gauge quantum numbers. They can couple to the SM via neutrino-Higgs Yukawa interactions in the Lagrangian,
\begin{equation}
{\cal L}_{\rm Type-I}\supset y_{\alpha i} L^\alpha n_i^c H-\frac{M_{ij}}{2}n_i^cn_j^c+{\rm h.c.},
\label{eq:seesaw-lag}
\end{equation}
where $i=1,2,3$,
$L^\alpha \equiv \genfrac(){0pt}{2}{\nu^\alpha}{\ell^\alpha}$
is the left-handed lepton doublet with generation index $\alpha=e,\mu,\tau$, and $y_{\alpha i}$ are the Yukawa couplings. $M_{ij}=M_{ji}$ are the elements of a $3\times 3$ right-handed neutrino Majorana mass matrix. After electroweak symmetry breaking, the Yukawa terms generate a
Dirac mass matrix
for the neutrinos, $m_D^{\alpha i}=y^{\alpha i}v$, where $v$ is the SM Higgs vev. This results in 6 potentially massive Majorana fermions, which are linear combinations of $\nu_\alpha$ and $n_i$. In this basis, the $6\times 6$
neutrino Majorana mass matrix is
\begin{eqnarray}
m_\nu^{\alpha i}=\begin{pmatrix}
0&m_D^{\alpha i}\\
m_D^{i\alpha} & M^{ij}
\end{pmatrix}.
\label{eq:neutrino-mass-matrix}
\end{eqnarray}
Diagonalizing this matrix and taking the limit $m_D\ll M$, we end up with 3 heavy neutrinos, $N$, which are predominantly the right-handed $n_i$ states and have masses of order $M$, and 3 light neutrinos, $\nu$,
which are predominantly the $\nu_\alpha$
states and have masses of order $m_{\nu}\sim m_D^2/M$. These light states are the ones which are observed experimentally.
This is the simplest example of the seesaw mechanism~\cite{Minkowski:1977sc,Yanagida:1979as,Mohapatra:1979ia,GellMann:1980vs} and is known as the Type-I seesaw\footnote{See~\cite{King:2003jb,dGneutrino} for a review of additional neutrino mass mechanisms.}. A result of the mixing between the mass and flavor eigenstates is that the heavy neutrino states $N$ acquire a small coupling under the weak interactions.
The small mixing angle $\theta^2\simeq m_\nu/M$ characterizes the strength of the interaction of $N$ with the SM.
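A back-of-the-envelope sketch of these seesaw relations follows. The numerical inputs ($m_\nu=0.05$ eV, a GeV-scale $N$, a top-like Dirac mass) are illustrative assumptions, not values from the text:

```python
def seesaw_mixing_sq(m_nu_ev, M_gev):
    """theta^2 ~ m_nu / M for the Type-I seesaw (naive one-flavor estimate)."""
    return m_nu_ev * 1e-9 / M_gev  # convert eV -> GeV

def heavy_mass_for_mnu(m_nu_ev, m_D_gev):
    """M ~ m_D^2 / m_nu: Majorana scale needed for a given Dirac mass."""
    return m_D_gev**2 / (m_nu_ev * 1e-9)

theta2 = seesaw_mixing_sq(0.05, 1.0)       # ~5e-11 for a GeV-scale N
M_high = heavy_mass_for_mnu(0.05, 174.0)   # ~6e14 GeV for m_D ~ v
```

The tiny mixing angle for light $N$ is what makes macroscopic flight distances generic in this portal.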
Since $N$ couples to the SM through weak interactions, it can be
produced
in rare decays of
ground-state mesons that are heavier than the $N$. Heavier $N$ states can be produced in the vector-boson decays $W^\pm\to\ell^\pm N$ and $Z\to \nu N$. Likewise, all decays of $N$ are mediated by either neutral- or charged-current interactions~\cite{Johnson:1997cj,Gorbunov:2007ak,Helo:2010cw,Bondarenko:2018ptm}. For sufficiently small $\theta$, the $N$ flight distance is macroscopic. For example, if $M_N\ll m_W$, then its inverse decay width into leptons is given by
\begin{equation}
\Gamma^{-1}(N\to\ell_\alpha^-\ell_\beta^+\nu_\beta) \sim \left(\frac{12{~\rm GeV}}{M_N}\right)^5\left(\frac{10^{-4}}{|\theta|^2}\right)\times10^{-3}{~\rm ns}.
\end{equation}
As with all weak processes, the semileptonic decays
$N\to\ell q_\alpha \bar q_\beta$
and
$N\to\nu q_\alpha \bar q_\alpha$
occur as well. If $N$ is heavy enough, its decay final states include an on-shell boson, particularly $N\to \ell^\pm W^\mp $,
with smaller branching fractions for $N\to Z\nu$ and $N\to h\nu$~\cite{Basso:2008iv}. Fig.~\ref{fig:RHN} shows diagrams for production and decay of the $N$.
\begin{figure}
\centering
\includegraphics{figures/RHN-Wpm-lpmlmpnu.pdf} ~~~~~~~
\includegraphics{figures/RHN-N-ZWnul.pdf}
\caption{Example processes for the production and decay of a light, right-handed neutrino (left) and decay of a heavy $N$ into an on-shell gauge boson and a lepton (right).
}
\label{fig:RHN}
\end{figure}
\subsection{Magnetic Monopoles}
A strong theoretical motivation for the existence of monopoles was proposed by Dirac as a way to explain charge quantization in quantum electrodynamics (QED)~\cite{Dirac:1931kp,Dirac:1948um}\footnote{More comprehensive reviews on magnetic monopole solutions can be found in~\cite{Milton:2006cp,Balestra:2011ks,Rajantie:2012xh,Acharya:2014nyr}. A summary of the recent status of searches can be found in~\cite{Patrizii:2015uea}}. Dirac demonstrated that adding a magnetic monopole, now commonly referred to as the Dirac monopole, to the theory and quantizing angular momentum leads to the following relationship between electric charge $q_e$ and magnetic charge $q_m$,
\begin{equation}
q_m q_e=\frac{n}{2},
\label{eq:mag-charge}
\end{equation}
where $n$ is an integer. This
results in a magnetic charge $q_m = n Q_D$, where $Q_D\equiv Q_e/(2\alpha)$ is the Dirac charge, $Q_e$ is the electron's electric charge, and $\alpha\simeq1/137$ is the fine-structure constant. We can then define an analogous magnetic fine-structure constant $\alpha_m\equiv Q_D^2/(4\pi)\simeq 34.25$. An experimental consequence of this large magnetic coupling is that the monopole is a highly ionizing particle (HIP), which experiences large electromagnetic energy losses as it traverses matter. A theoretical consequence is that calculations of monopole processes are pushed into the non-perturbative regime.
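The quoted magnetic coupling follows directly from the quantization condition, since $\alpha_m=Q_D^2/(4\pi)=1/(4\alpha)$. A minimal numerical check (variable names are ours):

```python
ALPHA = 1.0 / 137.036  # fine-structure constant

# Dirac charge in units of the electron charge: Q_D = Q_e / (2 alpha)
Q_D_over_e = 1.0 / (2.0 * ALPHA)   # ~68.5

# Magnetic fine-structure constant: alpha_m = Q_D^2/(4 pi) = 1/(4 alpha)
alpha_m = 1.0 / (4.0 * ALPHA)      # ~34.26, matching the value in the text
```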
Monopoles arise naturally in Grand Unified Theories (GUTs) as topological defects of space-time whenever a gauge group is spontaneously broken into an exact $U(1)$ subgroup~\cite{tHooft:1974kcl,Polyakov:1974ek}. An example of this is~\cite{Dokos:1979vu}
\begin{equation}
SU(5)\to SU(3)\otimes SU(2)\otimes U(1),
\end{equation}
which results in a monopole with mass
\begin{equation}
M_{\rm mon}\sim\frac{\Lambda_{\rm GUT}}{\alpha}.
\end{equation}
For a GUT unification scale of $10^{16}$ GeV, this yields $M_{\rm mon}\sim 10^{17}-10^{18}$ GeV. One can produce intermediate-mass monopoles with $M_{\rm mon}\sim 10^7-10^{14}$ GeV through additional symmetry-breaking schemes~\cite{Huguet:1999bu,Wick:2000yc}. However, these are still far above the reach of current collider probes. Lower-mass monopoles, known as Cho-Maison monopoles, can be produced through electroweak symmetry breaking and can be interpreted as a hybrid between the Dirac monopole and the 't~Hooft-Polyakov GUT monopole~\cite{Cho:1996qd,Bae:2002bm}. Assuming that the Cho-Maison monopole is a topological soliton, one can estimate its mass to be in the 1-10~TeV range~\cite{Cho:2012bq,Cho:2013vba}. The non-perturbative nature of the monopole makes a more accurate estimate of the mass difficult. A priori, such monopoles could be pair-produced electromagnetically at colliders. However, their large couplings might cause them to annihilate immediately or form bound monopole-antimonopole states known as monopolonium~\cite{HILL1983469,Dubrovich:2002gp,Epele:2007ic}. These states can be produced through photon-fusion at the LHC~\cite{Epele:2008un}.
Another class of defect solutions is known as electroweak strings. These were proposed in the context of the SM by Nambu, who suggested that they have a monopole and antimonopole at either end~\cite{Nambu:1977ag}. The mass of the monopole and the tension of the string are roughly in the TeV range\footnote{See~\cite{Achucarro:1999it} for further discussion of electroweak strings.}.
The estimated mass of the Nambu monopole is given by~\cite{Nambu:1977ag}
\begin{equation}
M_N\sim\frac{4\pi}{3e}\sin^{5/2}\theta_W\sqrt{\frac{m_h}{M_W}}\mu\simeq 689~{\rm GeV},
\end{equation}
where $M_W$ is the $W$ boson mass, $m_h$ is the Higgs boson mass, $\sin\theta_W$ is the weak mixing angle, $\mu=M_W/g$, and $g$ is the $SU(2)$ gauge coupling.
The dumbbell configuration can rotate and emit electromagnetic radiation, and can possibly have a lifetime long enough to be observed at the LHC~\cite{James:1992wb,James:1992zp}.
\section{Introduction}
The ternary intermetallic system Al--Pd--Mn has attracted great interest in recent
years because it forms a large number of complex metallic alloy compounds (CMAs). In
this paper we focus on the $\Xi$ phases, which are approximants of a decagonal
quasicrystal with a lattice constant of $1.6$ nm in the periodic direction. Under
plastic deformation, these phases show a novel type of dislocations, so-called
metadislocations, which were first described by Klein \textit{et al.}\cite{Klein1999}
\emph{Ab initio} studies of these metadislocations, even with fast density
functional theory codes like VASP,\cite{Kresse1993,Kresse1996} are currently
infeasible: their spatial extent is about $200\text{ \AA{}}$ and they involve more
than $10\,000$ atoms, far beyond the reach of state of the art \emph{ab initio}
programs.
With classical molecular dynamics (MD) it is easily possible to simulate structures
with millions of atoms in reasonable time. The treatment of atoms as point masses
interacting with an effective potential allows for microscopic insight into
many processes on the atomic scale. The ability to control almost any aspect of the
simulation can be used for optimizing the structure, determining physical properties
or explaining physical phenomena in detail.
However, obtaining an effective potential for classical molecular dynamics is not
straightforward. In order to extract reliable results, a potential has to be
adjusted to the specific physical conditions considered. These can be for example
high pressures, strain, surfaces or phase boundaries. A common way is to fit
a potential such that it reproduces experimental data like lattice constants,
cohesive and surface energies \cite{Foiles1986,Mei1991} or simply combining pure
element potentials into an alloy potential.
For ternary systems, like Al-Pd-Mn, establishing a potential with these approaches
is very challenging. The small number of available experimental data is not enough
to fit reliable effective potentials. Hence, to obtain a potential that can be used
for structure analysis and optimization, we apply the force-matching method
\cite{Ercolessi1994} using the \potfit package.\cite{Brommer2006,Brommer2007} In the
force-matching method, results from \emph{ab initio} simulations are used as
reference data to adjust the parameters of a potential. This not only dramatically
increases the amount of information available for fitting (the total number of
datapoints can easily reach several thousands). Also, if the reference data is found
to be insufficient, more pertinent reference data can be generated at relatively low
cost. This makes it possible to create realistic potentials for binary or ternary
systems. In our case we used forces on individual atoms, the cohesive energy and
stresses on the unit cells to fit a reliable potential.
In Sec.~\ref{sec:eam} we describe the interaction model used in this research.
The fitting procedure using the force-matching method is presented in
Sec.~\ref{sec:fitting}, the reference data used is given in Sec.~\ref{sec:ref}. The
results will be discussed in detail in Sec.~\ref{sec:results}.
\section{EAM Potentials}
\label{sec:eam}
A common way to describe atomic interactions in metals is the \emph{embedded atom
method} (EAM).\cite{Daw1983} It implicitly includes many-body interactions by a
term which depends on the environment of every atom. The potential energy of a
system described with the EAM method can be written as
\begin{eqnarray}
E_{\text{pot}}=\frac{1}{2}\sum_{\substack{i,j\\j\neq i}}
\Phi_{ij}(r_{ij})+\sum_iF_i(n_i)\label{eqn:eam1},\\
\text{with}\qquad n_i=\sum_{j\neq i}\rho_j(r_{ij}).
\label{eqn:eam2}
\end{eqnarray}
The first term in \eqref{eqn:eam1} represents the pair interactions between atoms
$i$ and $j$ at a distance $r_{ij}=|\bm{r}_j-\bm{r}_i|$. The function $F_i(n_i)$
is the embedding energy of atom $i$ in the host density $n_i$. This density $n_i$
\eqref{eqn:eam2} is calculated as the sum over contributions from the neighboring
atoms, with $\rho_j$ being the transfer function of atom $j$. It does not represent
an actual physical density; $n_i$ is a purely empirical quantity.
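Eqs.~\eqref{eqn:eam1}--\eqref{eqn:eam2} can be sketched directly for a toy single-species system. The pair, transfer, and embedding functions below are illustrative placeholders, not the fitted Al--Pd--Mn potential:

```python
import numpy as np

def eam_energy(positions, phi, rho, F, r_cut):
    """E_pot = 1/2 sum_{i!=j} Phi(r_ij) + sum_i F(n_i), with
    n_i = sum_{j!=i} rho(r_ij); single-species toy version."""
    n_atoms = len(positions)
    e_pair, densities = 0.0, np.zeros(n_atoms)
    for i in range(n_atoms):
        for j in range(n_atoms):
            if i == j:
                continue
            r = np.linalg.norm(positions[j] - positions[i])
            if r < r_cut:
                e_pair += 0.5 * phi(r)   # factor 1/2: each pair counted twice
                densities[i] += rho(r)
    return e_pair + sum(F(n) for n in densities)

# Illustrative model functions (placeholders only):
phi = lambda r: (1.0 / r) ** 12 - 2.0 * (1.0 / r) ** 6  # pair term
rho = lambda r: np.exp(-2.0 * r)                         # transfer function
F   = lambda n: -np.sqrt(n)                              # embedding term

dimer = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
E = eam_energy(dimer, phi, rho, F, r_cut=7.0)
```

For the dimer at unit distance this gives $E=\phi(1)+2F(\rho(1))=-1-2e^{-1}$, which is easy to verify by hand.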
For the pair and transfer part, we have tested three different combinations of
analytic functions as model potentials. Potential I has oscillations in the pair
potential but not in the transfer function. In contrast, potential II has
oscillations only in the transfer function. Finally a third potential has
oscillations in both functions.
For the simple pair potential without oscillations we chose a Morse potential.
It has a single minimum and is used in model II only:
\begin{equation}
\Phi(r)=\Psi\left(\frac{r-r_c}{h}\right)D_e\left[(1-e^{-a(r-r_e)})^2-1\right].
\label{eqn:morse}
\end{equation}
$\Psi$ is a cutoff function, where the free parameters $r_c$ and $h$ describe the
cutoff radius and the smoothing of the potential. The remaining parameters are
$D_e,a,$ and $r_e$; $D_e$ is the depth of the potential minimum, $r_e$ the
equilibrium distance and $a$ the width of the potential minimum. The pair potential
function with oscillations is adopted from Mihalkovi\v{c}
\textit{et al.}:\cite{Mihalkovic2008}
\begin{equation}
\Phi(r)=\Psi\left(\frac{r-r_c}{h}\right)\left[
\frac{C_1}{r^{\eta_1}}+\frac{C_2}{r^{\eta_2}}\cos(kr+\varphi)\right].
\label{eqn:eopp}
\end{equation}
This ``empirical oscillating pair potential'' (EOPP) has been used in various works
on complex metallic alloys and quasicrystals,
\cite{Mihalkovic1996, Mihalkovic2002, Krajci1992} as it provides great flexibility.
The first term of \eqref{eqn:eopp} with the parameters $C_1$ and $\eta_1$ controls
the short-range repulsion. The second term is responsible for the damping
($C_2,\eta_2$) of the oscillations with the frequency $k$.
The cutoff function $\Psi(x)$ is defined by
\begin{equation}
\Psi(x)=\frac{x^4}{1+x^4}
\label{eqn:cutoff}
\end{equation}
for $x<0$ and $\Psi(x)\equiv0$ for $x\geq 0$. This function guarantees that the
potential functions as well as their derivatives up to the second order approach
zero smoothly at the cutoff distance $r_c$.
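A minimal sketch of the cutoff function $\Psi$ together with the Morse pair potential of Eq.~\eqref{eqn:morse}; the parameter values are illustrative, not fitted ones:

```python
import math

def cutoff(x):
    """Psi(x) = x^4/(1+x^4) for x < 0, else 0 (Eq. 4)."""
    return x**4 / (1.0 + x**4) if x < 0.0 else 0.0

def morse_pair(r, D_e, a, r_e, r_c, h):
    """Eq. (3): cutoff-smoothed Morse pair potential."""
    return cutoff((r - r_c) / h) * D_e * ((1.0 - math.exp(-a * (r - r_e)))**2 - 1.0)

# Illustrative parameters (arbitrary, for demonstration only):
p = dict(D_e=0.3, a=1.5, r_e=2.8, r_c=7.0, h=1.0)
v_min  = morse_pair(2.8, **p)  # near -D_e: bare Morse minimum, barely damped
v_edge = morse_pair(7.0, **p)  # vanishes exactly at the cutoff radius
```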
Two different analytic forms were used as transfer functions; one allows for
oscillations, the other one does not. The latter one is a simple exponential decay
frequently used in established EAM potentials:\cite{Johnson1989, Pasianot1992,
Mei1991}
\begin{equation}
\rho(r)=\alpha\exp(-\beta r),
\label{eqn:exp_decay}
\end{equation}
where $\alpha$ is the amplitude and $\beta$ is the decay constant. This function
is used in model I. For models II and III, we used an oscillating transfer function,
which is taken from Ref.~\onlinecite{Chantasiriwan1996}:
\begin{equation}
\rho(r)=\Psi\left(\frac{r-r_c}{h}\right)\frac{1+a_1\cos(\alpha r)+a_2\sin(\alpha
r)}{r^\beta}.
\label{eqn:csw}
\end{equation}
The four free parameters are $a_1, a_2, \alpha$ and $\beta$, where $a_1$ and $a_2$
determine the amplitude of the oscillations, $\alpha$ is the wave vector
and $\beta$ controls the decay.
The embedding function $F(n)$ was adopted from Ref.~\onlinecite{Johnson1989}. It is
based on the general equation of state from Rose \textit{et al.}\cite{Rose1984}
The original form is given as
\begin{equation}
F(n)=F_0\left[\frac{q}{q-p}\left(\frac{n}{n_e}\right)^p-
\frac{p}{q-p}\left(\frac{n}{n_e}\right)^q\right]+F_1\frac{n}{n_e}.
\label{eqn:johnson}
\end{equation}
The parameters in this function are $F_0,F_1,p,q$ and $n_e$. $p$ and $q$ are real
values and $n_e$ is the equilibrium density. In this paper we use this function in
the limit $p\rightarrow q$ and choose $n_e=1$:
\begin{equation}
F(n)=F_0\left[1-q\log n\right]n^q+F_1n,
\label{eqn:pohlong}
\end{equation}
because the original form is numerically unstable with our optimization algorithms.
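The stated limit can be verified numerically: for $p$ close to $q$, Eq.~\eqref{eqn:johnson} approaches Eq.~\eqref{eqn:pohlong}, while the explicit division by $q-p$ hints at the numerical instability mentioned above. A short sketch with arbitrary parameter values:

```python
import math

def F_original(n, F0, F1, p, q):
    """Eq. (8) with n_e = 1."""
    return F0 * (q / (q - p) * n**p - p / (q - p) * n**q) + F1 * n

def F_limit(n, F0, F1, q):
    """Eq. (9): the p -> q limit used in this work."""
    return F0 * (1.0 - q * math.log(n)) * n**q + F1 * n

# The limit form agrees with Eq. (8) as p approaches q:
F0, F1, q, n = -2.0, 0.5, 0.7, 1.3
diff = abs(F_original(n, F0, F1, q - 1e-6, q) - F_limit(n, F0, F1, q))
```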
The number of free parameters of our three potential models is comparatively large.
The non-oscillating (oscillating) pair potential has 3~(6) parameters, and the
non-oscillating (oscillating) transfer function requires 2~(4) values. All models
share the embedding function with 3 free parameters. Every pair and transfer function
has one additional parameter $h$ for the cutoff function $\Psi$. The cutoff radius
$r_c$ is kept fixed at $7\text{ \AA{}}$. In a ternary system like Al--Pd--Mn with 12
potential functions, this adds up to a total number of 60, 48 and 66 parameters for
the models I, II and III, respectively.
\section{Fitting Procedure}
\label{sec:fitting}
All force-matching was performed with the \potfit package of Brommer and Gähler,
\cite{Brommer2006,Brommer2007} which has previously been used to optimize
tabulated pair and EAM potentials. For this work, its capabilities were extended to
analytic potential models.
All free parameters of the analytic functions were fitted to an \emph{ab initio}
reference database containing relaxed ($T=0$) structures, snapshots from
\emph{ab initio} MD simulations at higher temperatures and a few strained samples
(see Tables \ref{tab:structs1} and \ref{tab:structs2}). All \emph{ab initio}
calculations were performed with the Vienna Ab initio Simulation Package (VASP)
\cite{Kresse1993,Kresse1996} using the generalized gradient approximation (GGA) and
the Projector Augmented Wave (PAW) method.\cite{Kresse1999}
Two different optimization algorithms were used to fit the potentials. They both
minimize the sum of squares defined by
\begin{equation}
Z=\sum\omega_E|\Delta E|^2+\sum|\Delta F|^2+\sum\omega_S|\Delta S|^2,
\end{equation}
where $\Delta E$, $\Delta F$ and $\Delta S$ are the energy, force and stress
residuals. These deviations are calculated as the difference of the \emph{ab initio}
and the EAM value, e.g. $$\Delta E=E_{\text{EAM}}-E_{\text{\emph{ab initio}}}.$$
$\omega_E$ and $\omega_S$ are global weights for the energies and stresses.
$\omega_E=22\,500$ was chosen to obtain potentials that yield very precise energies,
but also reasonable forces. For configurations with about 150 atoms, this effectively
weights the energies with a factor of approximately 50. The stress weight $\omega_S$
was set to 750, so that the total weight of the six stress tensor components per
configuration is approximately equal to ten times the weight of all forces in one
configuration.
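The weighted least-squares target and the weighting scheme described above can be sketched as follows. This is a schematic stand-in for the actual \potfit implementation, with our own function and argument names:

```python
import numpy as np

def potfit_objective(E_eam, E_ref, F_eam, F_ref, S_eam, S_ref,
                     w_E=22_500.0, w_S=750.0):
    """Z = w_E sum|dE|^2 + sum|dF|^2 + w_S sum|dS|^2, mirroring
    the global energy and stress weights quoted in the text."""
    dE = np.asarray(E_eam) - np.asarray(E_ref)  # energies per configuration
    dF = np.asarray(F_eam) - np.asarray(F_ref)  # force components
    dS = np.asarray(S_eam) - np.asarray(S_ref)  # stress tensor components
    return w_E * np.sum(dE**2) + np.sum(dF**2) + w_S * np.sum(dS**2)

# Example: a 10 meV/atom energy residual alone contributes
# 22_500 * 0.01**2 = 2.25 to Z.
```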
The first optimization algorithm used is simulated annealing.\cite{Kirkpatrick1983}
It is based on the Metropolis criterion, where a decrease in the target function $Z$
is always accepted and an increase only with a probability $P=e^{-\Delta Z/T}$. This
allows the algorithm to escape local minima. The artificial temperature $T$ is
steadily decreased during the optimization. To ensure that the fit converged to the
global minimum, the optimization was restarted with a high temperature several times.
Subsequently a conjugate gradient based method \cite{Powell1965} was applied to
converge to the final optimum. During the fitting procedure, all parameters were
confined to a predefined range by use of numerical punishments.
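The Metropolis-based annealing loop described above can be sketched as follows. This is a schematic toy version (objective, step size, and cooling schedule are our choices), not the \potfit optimizer itself:

```python
import math
import random

def simulated_annealing(Z, x0, step, T0=1.0, cooling=0.99,
                        n_steps=20_000, seed=0):
    """Minimize Z by Metropolis sampling with a steadily
    decreasing artificial temperature T."""
    rng = random.Random(seed)
    x, z = list(x0), Z(x0)
    best_x, best_z, T = list(x), z, T0
    for _ in range(n_steps):
        trial = [xi + rng.uniform(-step, step) for xi in x]
        z_trial = Z(trial)
        dZ = z_trial - z
        # Decreases always accepted; increases with probability exp(-dZ/T):
        if dZ < 0.0 or rng.random() < math.exp(-dZ / T):
            x, z = trial, z_trial
            if z < best_z:
                best_x, best_z = list(x), z
        T *= cooling  # steadily lower the artificial temperature
    return best_x, best_z

# Toy objective with an oscillatory landscape:
Z = lambda p: sum(pi**2 + 0.5 * math.cos(5.0 * pi) for pi in p)
x_best, z_best = simulated_annealing(Z, [2.0, -3.0], step=0.2)
```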
\section{Reference data}
\label{sec:ref}
The structures used as reference data are shown in Tables \ref{tab:structs1} and
\ref{tab:structs2}. There are 119 configurations with a total of $16\,103$ atoms.
The number of reference datapoints is $49\,340$. They consist of $48\,309$ forces,
119 energies and 714 stresses.
\begin{table}[htp]
\begin{ruledtabular}
\begin{tabular}{cccc}
& Al--Mn structures & Al--Pd structures & \\ \hline
& Al$_{10}$Mn$_3$.\textit{hP}26 & AlPd.\textit{cP}8 & \\
& Al$_{11}$Mn$_4$.\textit{aP}15 & Al$_{21}$Pd$_8$.\textit{tI}116 & \\
& Al$_{12}$Mn.\textit{cI}26 & Al$_3$Pd$_2$.\textit{hP}5 & \\
& Al$_6$Mn.\textit{oC}28 & & \\
& AlMn.\textit{tP}4 & &
\end{tabular}
\end{ruledtabular}
\caption{Binary structures ($T=0$) used to fit the potentials, with their
corresponding Pearson symbol.}
\label{tab:structs1}
\end{table}
In addition to the binary and ternary structures, one reference configuration
for each of the pure elements was also included. These were, in detail,
Al.\textit{cF}4, Pd.\textit{cF}4 and Mn.\textit{cI}58. This was done to get
reliable reference points for the calculation of the enthalpy of formation.
\begin{table}[htp]
\begin{ruledtabular}
\begin{tabular}{lcc}
& Number of atoms & $\Delta H$ (eV/atom) \\ \hline
$T=0$ & Al$_{92}$Pd$_{28}$Mn$_{10}$\footnotemark[1] & $-0.512$ \\
& Al$_{92}$Pd$_{28}$Mn$_8$\footnotemark[1] & $-0.485$ \\
& Al$_{112}$Pd$_{36}$Mn$_6$\footnotemark[2] & $-0.526$ \\
& Al$_{114}$Pd$_{34}$Mn$_6$\footnotemark[2] & $-0.503$ \\
& Al$_{112}$Pd$_{34}$Mn$_6$\footnotemark[2] & $-0.512$ \\
& Al$_{110+x}$Pd$_{32}$Mn$_{8}$\footnotemark[2] & see Sec. \ref{subsec:refinement}
\\
& Al$_{124}$Pd$_{8}$Mn$_{24}$\footnotemark[2]\footnotemark[3] & $-0.297$ \\
& Al$_{147}$Pd$_{43}$Mn$_{18}$\footnotemark[2] & $-0.485$ \\
& Al$_{294}$Pd$_{88}$Mn$_{16}$\footnotemark[2] & $-0.491$ \\
$T>0$ & Al$_{92}$Pd$_{28}$Mn$_8$\footnotemark[1]\footnotemark[4] & -- \\
& Al$_{92}$Pd$_{28}$Mn$_{10}$\footnotemark[1] ($1500$ K) & -- \\
\end{tabular}
\end{ruledtabular}
\footnotetext[1]{Structure generated from canonical cell tiling \cite{Henley1991}}
\footnotetext[2]{From structure optimization}
\footnotetext[3]{T-Al-Pd-Mn, see Sec.~\ref{sec:ref}}
\footnotetext[4]{From several MD runs at $600$, $1100$ and $1800$ K with small
strains}
\caption{Ternary structures used to fit the potentials and their \emph{ab initio}
formation enthalpy $\Delta H$.}
\label{tab:structs2}
\end{table}
All atomic configurations from binary systems (Table~\ref{tab:structs1}) were
taken from the alloy database of Widom \textit{et al.}\cite{alloydb} and have
been fully relaxed with \emph{ab initio} methods. They were chosen to provide more
data for the Pd--Pd and Mn--Mn interactions. Magnetism was not included in
our \emph{ab initio} calculation; it was shown that the manganese atoms in the
$\Xi$-phases are nonmagnetic.\cite{Hippert1999} Because the structures we
want to investigate are on the aluminum-rich side of the phase diagram,
there is only limited data for the Mn--Pd interaction.
The reference configurations for the $\Xi$-phases are from different sources.
The structures in Table~\ref{tab:structs2} denoted with superscript a were taken
from the alloy database.\cite{alloydb} They were generated with the canonical cell
tiling,\cite{Henley1991} which creates hypothetical models by decorating a tiling
with clusters. To compensate for the low amount of manganese in these samples
and the hence resulting lack of data, five of the aluminum atoms were replaced
by manganese in some configurations. \emph{Ab initio} molecular dynamics simulations
with VASP \cite{Kresse1993,Kresse1996} were run with these samples at $600$, $1100$
and $1800$ K to obtain different local atomic configurations. These calculations
were done in the generalized gradient approximation (GGA) with PAW
potentials.\cite{Kresse1999}
At the same time, \emph{ab initio} structure optimization was carried out for two of
the $\Xi$-phases: the one with the smallest unit cell,
which contains about 152 atoms and is called $\xi$, and the next larger one,
containing about 304 atoms, which is called $\xi'$. All structures generated in the
course of this optimization are denoted in Table \ref{tab:structs2} by superscript b.
To judge the stability of these structures, their energy is compared to a mixture
of competing phases, the convex hull. This hull, defined over a ternary phase
diagram, contains the cohesive energies of all stable compounds as vertices.
If the energy of a structure is above this convex hull, it could decompose
into the neighboring structures and thus lower its energy. If the energy
of a new structure is below the convex hull, it is considered to be
thermodynamically stable. The structures which define the convex hull for the
$\Xi$-phases, are T-AlPdMn, Al$_{12}$Mn, Al$_{21}$Pd$_8$ and Al$_3$Pd$_2$. They
have also been included in the reference database. A detailed description of these
phases and the convex hull is given in Ref.~\onlinecite{Frigan2011}.
\section{Results}
\label{sec:results}
We determined parameters for all three potential models from the reference data
described above. The root mean square (RMS) errors for forces, energies and
stresses after the optimization are of the same order of magnitude for all
models (see Table \ref{tab:rms_opt}). While model III has the smallest errors for
forces and energies, model I has the largest errors for all three quantities.
Model II has the smallest stress deviations. The force error for model I is
about 20\% larger than that for model III, while the energy error is significantly
larger, with a difference of about 50\%.
\begin{table}[htp]
\begin{ruledtabular}
\begin{tabular}{lrrr}
RMS errors for & \multicolumn{1}{c}{Model I} & \multicolumn{1}{c}{Model II} &
\multicolumn{1}{c}{Model III} \\ \hline
forces & $265.63$ & $221.40$ & $220.07$ \\
energies & $19.36$ & $14.49$ & $12.53$ \\
stresses & $99.99$ & $76.83$ & $98.30$
\end{tabular}
\end{ruledtabular}
\caption{Root mean square errors after the optimization for forces (in meV/\AA),
energies (in meV/atom) and stresses (in kPa). This data is calculated
with the reference configurations used for fitting the potentials.}
\label{tab:rms_opt}
\end{table}
A graphical representation of these errors can be seen in Fig.~\ref{fig:scatter}.
The scatterplots in the upper row display the energies of the reference data.
Forces are shown in the lower row. The large range of the force plots is due to the many
high-temperature MD simulations included in the reference data. The forces
therein can become very large because of the short interatomic distances that may
occur at these temperatures.
\begin{figure}[htp]
\includegraphics{scatter.pdf}
\caption{Scatterplot for energies and forces with the EAM values on the vertical
axis and the \emph{ab initio} reference data on the horizontal axis. The insets are
magnified by a factor 4.5.}
\label{fig:scatter}
\end{figure}
These errors cannot solely be used to judge the quality and transferability of the
potentials. For that purpose another set of \emph{ab initio} data has been extracted
from
the structure optimization. It has not been included in the reference data and
can be used to determine the transferability of the different potentials.
The same errors as before have been calculated and can be seen in Table
\ref{tab:rms_test}. As with the reference data, model III has the lowest force
and energy errors. The relative error is about 0.2\% for the energies, about 5\%
for the stresses, and about 550\% for the forces. The large relative force error
arises because all configurations in this test set are ground-state structures and
therefore only contain very small forces.
\begin{table}[htp]
\begin{ruledtabular}
\begin{tabular}{lrrr}
RMS errors for & \multicolumn{1}{c}{Model I} & \multicolumn{1}{c}{Model II} &
\multicolumn{1}{c}{Model III} \\ \hline
forces & $141.90$ & $131.90$ & $130.46$ \\
energies & $10.42$ & $10.47$ & $10.28$ \\
stresses & $32.39$ & $23.76$ & $36.89$ \\
\end{tabular}
\end{ruledtabular}
\caption{Root mean square errors for forces (in meV/\AA), energies (in meV/atom) and
stresses (in kPa). This data is calculated with test data, containing
only structures that were not included in the optimization process.}
\label{tab:rms_test}
\end{table}
The errors for the test data in Table \ref{tab:rms_test} are smaller than those of
the reference configurations in Table \ref{tab:rms_opt}, because there are only
ground states included and no high temperature MD runs.
Based upon these simple energy and force considerations, all the potential models
appear to be of similar quality. Model III, however, should be slightly superior to
the other two potentials. Further tests are necessary to determine the
performance of the potentials in different situations. They will be presented in
Subsection~\ref{subsec:tests}.
\subsection{Structure Refinement}
\label{subsec:refinement}
In Ref.~\onlinecite{Frigan2011}, the structure of the $\Xi$-phases of
Al--Pd--Mn has been optimized by energy minimization in \emph{ab initio} and
molecular dynamics simulations. We use several of the structures tested there to
judge the quality of the optimized potentials. The $\Xi$-phases consist of
columns of pseudo-Mackay icosahedral clusters (PMIs),\cite{Sun1996} a slight
variation of the famous Mackay icosahedron.\cite{Mackay1962}
\begin{figure}[htp]
\begin{center}
\includegraphics{cluster.pdf}
\caption{(color online) Detailed structure of the pseudo-Mackay icosahedral cluster.
A few atoms of the second and third shell are omitted to see the central atom and
the aluminum atoms in the first shell. The icosahedron and icosidodecahedron are
indicated by planes and bars, respectively. The central atom is manganese. Aluminum
atoms are depicted in yellow (light gray), and palladium atoms in magenta (dark
gray).}
\label{fig:pmi}
\end{center}
\end{figure}
Every PMI cluster consists of a single atom at the center with a first shell of an
experimentally poorly determined number of aluminum atoms. The second shell is an
icosahedron of 12 transition metal atoms and the outer shell an icosidodecahedron of
30 aluminum atoms, see Fig.~\ref{fig:pmi}. Almost all atoms of the $\Xi$-phases
belong to these clusters. It is difficult to measure the exact number of atoms in the
first shell because aluminum atoms are hard to observe in diffraction experiments.
For the lowest quasicrystal approximant, the $\xi$-phase, there are four PMI clusters
in one unit cell. Several different occupancies of aluminum atoms in the first
shell were tested in Ref.~\onlinecite{Frigan2011}. Each configuration was
denoted by a single number, giving the average number of aluminum atoms per
cluster. Structures from eight up to eleven atoms per PMI were generated and tested.
The results with the different potential models can be seen in
Table~\ref{tab:engdiff}. All structures were completely relaxed with \emph{ab initio}
methods, the corresponding \emph{ab initio} energy is given in column 2. The energies
of these configurations with the generated EAM potentials have been calculated after
subsequent relaxation with the respective potentials. This relaxation causes small
displacements of the atoms from their \emph{ab initio} determined positions. For
model I these average displacements are 0.10~\AA{}/atom, for model II
0.08~\AA{}/atom, and for model III 0.11~\AA{}/atom. This clearly shows that all potential
models can stabilize the ground states of all structures that were generated.
\begin{table}[htp]
\begin{ruledtabular}
\begin{tabular}{D{.}{.}{3}rrrr}
\multicolumn{1}{c}{Number of atoms} & \multicolumn{1}{c}{$E_\text{\emph{ab
initio}}$} &
\multicolumn{3}{c}{$\Delta E$ (meV/atom)}\\
\multicolumn{1}{c}{per PMI} & (eV/atom) & Model I & Model II & Model III \\ \hline
8 & $-4.753$ & $-13$ & $-12$ & $-20$\\
8.25 & $-4.755$ & $-6$ & $-7$ & $-13$\\
8.5 & $-4.756$ & $-1$ & $-3$ & $-5$\\
8.75 & $-4.757$ & $+3$ & $+2$ & $+2$ \\
9 & $-4.755$ & $+4$ & $+4$ & $+3$\\
9.25 & $-4.747$ & $+1$ & $+1$ & $+1$ \\
9.5 & $-4.741$ & $+1$ & $+0$ & $+2$\\
9.75 & $-4.731$ & $-6$ & $-5$ & $-2$\\
10 & $-4.731$ & $0$ & $+2$ & $+4$\\
10.25 & $-4.714$ & $-12$ & $-12$ & $-5$\\
10.5 & $-4.704$ & $-15$ & $-17$ & $-7$\\
10.75 & $-4.692$ & $-19$ & $-21$ & $-13$\\
11 & $-4.683$ & $-22$ & $-24$ & $-17$\\
\end{tabular}
\end{ruledtabular}
\caption{Cohesive energies (in eV/atom) of different optimized
configurations for the $\xi$-phase. The energy differences $\Delta E$ between the
\emph{ab initio} calculations and the respective model are given in meV/atom.}
\label{tab:engdiff}
\end{table}
All models have difficulties with the energies of structures that contain fewer
than 9 or more than 10 atoms in the inner shells of the PMI clusters. This may be an
indication of the mechanical instability found during the structure
optimization.\cite{Frigan2011} The energy of these structures is highly unfavorable;
at elevated temperatures some atoms drifted from the outer shell to the inner shell
or vice versa to achieve an inner shell with 9 or 10 aluminum atoms.
All energy differences between the \emph{ab initio} and EAM calculations are smaller
than 10 meV/atom for configurations ranging from 8.5 to 10 atoms per
PMI cluster. This energy is considered a critical threshold for the accuracy of the
potentials. Regarding the energy differences between the different structures, which
are on the order of 1 meV/atom, all potentials can evidently distinguish between
these different configurations.
The structure optimization in Ref.~\onlinecite{Frigan2011} yielded four almost
stable structures, which are different from the ones shown in
Table~\ref{tab:engdiff}. There, not only the atoms in the inner shell are varied, but
also atoms not belonging to the PMI clusters. These alterations were not done in a
systematic manner, so the structures are listed in tabular form. The number of
atoms for the $\xi$- and $\xi'$-phases is the same; only the arrangement of the PMI
cluster columns is different. These structures were tested with the three different
potentials. The results can be seen in Table \ref{tab:engdiff_stable}. The two upper
structures in this table belong to the $\xi$-phase, the lower two to the $\xi'$-phase.
\begin{table}[htp]
\begin{ruledtabular}
\begin{tabular}{rrrrr}
& \multicolumn{1}{c}{$E_\text{\emph{ab initio}}$} &
\multicolumn{3}{c}{$\Delta E$ (meV/atom)}\\
composition & (eV/atom) & Model I & Model II & Model III \\ \hline
$\xi$-228--64--12 & $-4.702$ & $-5$ & $+4$ & $0$ \\
$\xi$-224--68--12 & $-4.748$ & $+1$ & $+7$ & $+4$ \\
$\xi'$-228--64--12 & $-4.703$ & $-5$ & $+3$ & $+1$ \\
$\xi'$-224--68--12 & $-4.748$ & $+1$ & $+5$ & $+5$
\end{tabular}
\end{ruledtabular}
\caption{Cohesive energies (in eV/atom) of the four almost stable phases after
relaxation. The energy differences $\Delta E$ are given in meV/atom. The composition
is given in numbers of aluminum, palladium and manganese atoms, in this order. All
configurations have 9 aluminum atoms in the inner shell of the PMI clusters.}
\label{tab:engdiff_stable}
\end{table}
After the relaxation with the effective potentials, all models show a very good
agreement with the \emph{ab initio} calculated energies. The mean displacements after
the relaxation are again of the same order of magnitude as before: 0.11~\AA{}/atom
for model I, 0.08~\AA{}/atom for model II and 0.15~\AA{}/atom for model III.
Based on these pure energy comparisons, all three potential models seem to be of
equal quality, with slight advantages for model III.
\subsection{Tests}
\label{subsec:tests}
A force-matched potential is only useful if it can reproduce key quantities
that were not directly included in the reference data. Here, we subjected the
three potentials to a series of tests. The first test is whether the potential
can stabilize the $\xi$-phase even at elevated temperatures. As there was a
large number of high temperature \emph{ab initio} MD simulations included in the
optimization, the potentials should be able to preserve the structure of the
$\xi$-phase under these conditions. We carried out an \emph{ab initio} MD simulation
at 1200 K for 50 ps,\cite{Frigan2011} where the phase is still mechanically stable.
In a time-averaged picture of the density, the atoms in the two outer shells of the
PMI clusters did not move, but the atoms in the first shell showed some rotational
degree of freedom.
All three models were able to stabilize the structure at this temperature. While
models II and III give the same results as the \emph{ab initio} calculation (cf.\
Ref.~\onlinecite{Frigan2011}), model I shows additional degrees of freedom. In the
time-averaged picture the atoms forming the outer shell of the PMIs are not as
steady as in the \emph{ab initio} simulation. The atoms that do not belong to
these clusters also exhibit a density distribution that is twice as large as expected.
This means that model I may have difficulties stabilizing the structure at even
higher temperatures or against fluctuations in the local atomic arrangement.
For molecular dynamics simulations, the stabilization of different phases can be a
problem. We checked some well known phases for all three potential models
with respect to cohesive energy and phase stability. The results can be seen in
Table \ref{tab:suspects}.
All three potentials can stabilize the different phases. The deviation of the atomic
positions after relaxation compared to the \emph{ab initio} reference values is
very small. The energies are reproduced with errors of under 200 meV/atom.
\begin{table*}[htp]
\begin{ruledtabular}
\begin{tabular}{ccrrrrrrrrr}
& \multicolumn{1}{c}{$E_\text{\emph{ab initio}}$} &
\multicolumn{3}{c}{Model I} &
\multicolumn{3}{c}{Model II} & \multicolumn{3}{c}{Model III} \\
System & $E$ (eV/atom) & $E_{\text{EAM}}$ & $\Delta E$ & $\Delta x$ &
$E_{\text{EAM}}$ & $\Delta E$ & $\Delta x$ & $E_{\text{EAM}}$ & $\Delta E$ &
$\Delta x$ \\ \hline
AlPd.\textit{cP}2 (B2) & -5.330 & -5.430 & -0.100 & 0 & -5.503 & -0.173 & 0 & -5.445
& -0.115
& 0 \\
AlPd$_3$.\textit{cF}16 (D$0_{3}$) & -5.236 & -5.426 & -0.190 & 0 & -5.424 & -0.188 &
0 &
-5.430 & -0.194 & 0 \\
Al$_3$Pd.\textit{tI}8 (D$0_{22}$) & -4.421 & -4.546 & -0.125 & 0 & -4.540 & -0.119 &
0 &
-4.560 & -0.139 & 0\\
Al$_3$Pd.\textit{cP}4 (L1$_{2}$) & -4.609 & -4.647 & -0.038 & 0.13 & -4.651 & -0.042
& 0.17
& -4.650 & -0.041 & 0.17 \\ \hline
Al$_3$Mn.\textit{tI}8 (D$0_{22}$) & -5.132 & -5.053 & 0.079 & 0.04 & -5.129 & 0.003
& 0.01
& -5.175 & -0.043 & 0\\
Al$_3$Mn.\textit{cP}4 (L1$_{2}$) & -5.032 & -5.187 & -0.155 & 0 & -5.180 & -0.148 & 0
& -5.197
& -0.165 & 0
\end{tabular}
\end{ruledtabular}
\caption{Cohesive energies for different phases in the Al-Pd-Mn system. All energies
and energy differences are given in eV/atom. The mean square displacements ($\Delta
x$) after relaxation are given in \AA{}/atom. A displacement of 0 means the value is
smaller than 10$^{-4}$ \AA{}/atom.}
\label{tab:suspects}
\end{table*}
Another important test is the calculation of formation enthalpies $\Delta H$ with
the potentials.
$\Delta H$ is defined as the energy difference between a structure and the tie plane
of the pure element energies. This has been calculated for all configurations in
Tables \ref{tab:engdiff} and \ref{tab:engdiff_stable}. The reference energies are
given in Table \ref{tab:pureelements}. For the structures with different amounts of
aluminum atoms in the inner shell of the PMI clusters, the results can be seen in
Figure~\ref{fig:enthalpy}. The deviations from the \emph{ab initio} enthalpies are
very similar to those from Table~\ref{tab:engdiff}. For less than 8.5 and more than
10 atoms in the inner shell of the PMI clusters the enthalpies differ by more than 10
meV/atom.
\begin{table}[htp]
\begin{ruledtabular}
\begin{tabular}{crrrr}
& $E_\text{\emph{ab initio}}$ &
\multicolumn{3}{c}{$\Delta E$ (meV/atom)} \\
& (eV/atom) & Model I & Model II & Model III \\ \hline
Al.\textit{cF}4 & $-3.688$ & $-5$ & $-4$ & $-3$ \\
Pd.\textit{cF}4 & $-5.199$ & $0$ & $0$ & $0$ \\
Mn.\textit{cI}58 & $-8.964$ & $+68$ & $-7$ & $+3$ \\
\end{tabular}
\end{ruledtabular}
\caption{\emph{Ab initio} energies (in eV/atom) and the differences for the
effective potentials (in meV/atom) for the pure elements. These energies were
used for the calculation of the formation enthalpies $\Delta H$.}
\label{tab:pureelements}
\end{table}
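The tie-plane construction can be checked directly against the tabulated numbers. A minimal sketch (the dictionary and helper function are our own, with the \emph{ab initio} energies of Table~\ref{tab:pureelements} and the $\xi$-228--64--12 composition from above):

```python
# Formation enthalpy relative to the tie plane of the pure-element
# energies (ab initio values from Table tab:pureelements, in eV/atom).
e_pure = {"Al": -3.688, "Pd": -5.199, "Mn": -8.964}

def formation_enthalpy(e_structure, counts):
    """e_structure in eV/atom; counts maps element -> number of atoms."""
    n_total = sum(counts.values())
    e_tie = sum(n * e_pure[el] for el, n in counts.items()) / n_total
    return e_structure - e_tie

# xi-228-64-12: 228 Al, 64 Pd, 12 Mn atoms at -4.702 eV/atom
dh = formation_enthalpy(-4.702, {"Al": 228, "Pd": 64, "Mn": 12})
print(round(dh, 3))  # -0.488 eV/atom
```

The result reproduces the \emph{ab initio} value $\Delta H = -0.488$ eV/atom quoted for this composition.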
\begin{figure}[htp]
\begin{center}
\includegraphics{enthalpy.pdf}
\caption{Comparison of the \emph{ab initio} formation enthalpy $\Delta H$ (in
eV/atom) with the three potential models. The lines between the \emph{ab initio}
datapoints are added as a guide to the eye.}
\label{fig:enthalpy}
\end{center}
\end{figure}
The enthalpies for the four almost stable structures are shown in Table
\ref{tab:formationenthalpy_stable}. All three models give very accurate enthalpies
with deviations all smaller than 10 meV/atom.
\begin{table}[htp]
\begin{ruledtabular}
\begin{tabular}{rrrrr}
& \multicolumn{1}{c}{$\Delta H_{\text{\emph{ab initio}}}$} &
\multicolumn{3}{c}{$\Delta H_{\text{EAM}}-\Delta H_{\text{\emph{ab initio}}}$}\\
composition & (eV/atom) & Model I & Model II & Model III \\ \hline
$\xi$-228--64--12 & $-0.488$ & $-6$ & $+1$ & $-1$ \\
$\xi$-224--68--12 & $-0.513$ & $0$ & $+4$ & $+3$ \\
$\xi'$-228--64--12 & $-0.488$ & $-6$ & $0$ & $0$ \\
$\xi'$-224--68--12 & $-0.514$ & $0$ & $+2$ & $+3$ \\
\end{tabular}
\end{ruledtabular}
\caption{\emph{Ab initio} formation enthalpies $\Delta H$ (in eV/atom) of the four
almost stable phases and the differences for the effective potentials (in meV/atom)
after relaxation. The composition is given in numbers of aluminum, palladium and
manganese atoms, in this order. All configurations have 9 aluminum atoms inside the
PMI clusters.}
\label{tab:formationenthalpy_stable}
\end{table}
During the structure optimization a very long \emph{ab initio} MD run with
$50\,000$ steps at 1200 K was performed. Snapshots were taken from this
simulation at different timesteps and quenched very rapidly. This has also
been done with the EAM potentials. The results show a very good agreement
for different snapshots. The structures only differ very slightly in atomic
positions. While there is a steady offset of about 100 meV/atom in the
energy for higher temperatures, the overall trend can clearly be followed.
For lower temperatures and $T=0$ the energies were of the same order as for
the structures in Table \ref{tab:engdiff}. There were no major
differences between the three potential models.
To determine if a structure is thermodynamically stable, the energy difference of
this structure to the convex hull is calculated. If this difference is negative, the
structure is stable; otherwise it is unstable. For more details on the convex hull
see Ref.~\onlinecite{Frigan2011}. This energy difference has been calculated for all
structures in Table~\ref{tab:engdiff} and is shown in Fig.~\ref{fig:convexhull}.
\begin{figure}[htp]
\begin{center}
\includegraphics{convexhull.pdf}
\caption{Difference of the cohesive energy to the convex hull for different amounts
of aluminum atoms in the inner shell of the PMI clusters. The datapoints for model
I are shifted by +10 meV/atom and by +5 meV/atom for model II.}
\label{fig:convexhull}
\end{center}
\end{figure}
For the sake of clarity, the datapoints of models I and II are shifted by +10
and +5 meV/atom, respectively. While these models show a clear decrease of the energy difference
with increasing number of atoms inside the PMI cluster, model III has minima for 9
and 10 atoms, like the \emph{ab initio} reference calculation. As this is the main
criterion for performing a structure optimization, models I and II cannot be used
for this purpose. Only model III is able to reproduce the shape of
the \emph{ab initio} calculation.
The melting point for the $\xi$-phase has been determined with all three potential
models. In MD simulations the volume per atom has been calculated while the sample
was heated from 950 K to 1400 K. The melting transition manifests itself as a
jump in the atomic volume. For model I this was found at
1130 K, for model II at 1370 K and for model III at 1300 K. With this method, the
melting point is generally overestimated, due to the high heating rates. For the
simulations we chose a heating rate of $5\times 10^{-5}$ K per timestep, which
corresponds to $5\times 10^9$ K/s. Compared with the experimental value of
1118 K, the value for potential model I seems to be too low, while models II and III
are in the expected temperature range.
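The jump criterion used here can be sketched as follows, locating the transition at the largest discontinuity of the volume-per-atom curve (the data below are synthetic, not the actual simulation output):

```python
import numpy as np

# Locate the melting transition as the largest jump in the
# volume-per-atom curve V(T); the data here are synthetic.
T = np.arange(950.0, 1401.0, 10.0)      # heating ramp, K
V = 16.0 + 1e-3 * (T - 950.0)           # smooth thermal expansion
V[T >= 1300.0] += 0.8                   # abrupt volume jump on melting

i = np.argmax(np.diff(V))               # index of the largest jump
T_melt = T[i + 1]
print(T_melt)  # 1300.0
```

In practice the heating rate limits the resolution of this estimate, which is why the method overestimates the true melting point.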
Another test we performed is the calculation of the elastic constants.
All $\Xi$-phases have an orthorhombic unit cell. The corresponding nine
elastic constants were determined by examining the cohesive energy during
homogeneous deformations of the sample.
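The curvature-based extraction can be sketched as follows; the synthetic energy data, the atomic volume, and the unit conversion are illustrative and not the values used in the paper:

```python
import numpy as np

# Extract an elastic constant from the energy under homogeneous strain,
# E(eps) ≈ E0 + (V/2) C eps^2 per atom; synthetic data with C = 200 GPa.
EV_PER_A3_TO_GPA = 160.2176634          # 1 eV/Å^3 in GPa

C_true = 200.0                          # GPa (illustrative)
V = 16.0                                # Å^3 per atom (illustrative)
eps = np.linspace(-0.01, 0.01, 9)
E = 0.5 * (C_true / EV_PER_A3_TO_GPA) * V * eps**2   # eV/atom

a = np.polyfit(eps, E, 2)[0]            # quadratic coefficient of the fit
C_fit = 2.0 * a / V * EV_PER_A3_TO_GPA  # back to GPa
print(round(C_fit, 6))  # ≈ 200.0
```

For the off-diagonal constants $C_{12}$, $C_{13}$, $C_{23}$ the same fit is applied to suitable combined deformations.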
\begin{table}[htp]
\begin{ruledtabular}
\begin{tabular}{crrrr}
-- & \emph{ab initio} & Model I & Model II & Model III \\ \hline
$C_{11}$ & $175.79$ & $255.25$ & $244.66$ & $200.98$ \\
$C_{22}$ & $192.75$ & $269.79$ & $246.74$ & $193.61$ \\
$C_{33}$ & $227.46$ & $243.53$ & $246.64$ & $160.57$ \\
$C_{12}$ & $58.76$ & $158.83$ & $145.57$ & $102.76$ \\
$C_{13}$ & $67.85$ & $146.75$ & $146.78$ & $92.95$ \\
$C_{23}$ & $56.34$ & $151.19$ & $146.51$ & $107.04$ \\
$C_{44}$ & $72.54$ & $42.57$ & $42.42$ & $42.77$ \\
$C_{55}$ & $67.77$ & $41.46$ & $47.19$ & $46.66$ \\
$C_{66}$ & $71.25$ & $48.51$ & $48.21$ & $43.76$ \\
\end{tabular}
\end{ruledtabular}
\caption{Elastic constants of $\xi$-Al--Pd--Mn in GPa.}
\label{tab:elconst}
\end{table}
The results with all three models (see Table \ref{tab:elconst}) show only very little
agreement with the \emph{ab initio} values. Only model III can reproduce $C_{11}$ and
$C_{22}$. All other elastic constants differ by up to a factor of 3. The potentials
are apparently not able to reproduce the shear response. However, this behavior is to
be expected, if one takes into account that these potentials were generated for
energy minimization purposes. For other applications, like calculating mechanical
properties, an extended database, containing enough data on shears, should be used.
The only samples used for these potentials that included deformations were
high-temperature \emph{ab initio} MD snapshots. These were strained along either of the
Cartesian axes, which are perpendicular to the periodic stacking axis of the
quasicrystal. The corresponding elastic constants are $C_{11}$ and $C_{22}$, which
are the only ones correctly reproduced by model III.
This clearly shows that force matched potentials are limited in their applications.
They give very accurate results regarding the energy and forces because they are
tuned to these quantities. For other physical properties, like elastic
constants, the potentials are less accurate.
\section{Summary}
The Al--Pd--Mn potentials presented are very well suited to model the
energetics of the $\Xi$-phases. They were obtained with the force-matching
method, which fits the parameters to a large database of \emph{ab initio}
determined reference data. All three analytic potential models tested were
able to reproduce the \emph{ab initio} values of the energies with very high
accuracy. The error sum of the fitting process for all three potentials is very
similar, yet they show very different properties when used in MD simulations.
\begin{table*}[htp]
\begin{ruledtabular}
\begin{tabular}{crrrrrrr}
\multicolumn{8}{c}{EOPP pair function} \\
pair & \multicolumn{1}{c}{$C_1$} & \multicolumn{1}{c}{$\eta_1$}
& \multicolumn{1}{c}{$C_2$} & \multicolumn{1}{c}{$\eta_2$}
& \multicolumn{1}{c}{$k$} & \multicolumn{1}{c}{$\varphi$}
& \multicolumn{1}{c}{$h$}\\ \hline
Al-Al & $586.4805$ & $7.6769$ & $-0.0333$ & $1.0012$ & $3.7658$ & $3.8484$
& $1.3897$ \\
Al-Mn & $338.7250$ & $7.5484$ & $-0.4212$ & $1.9271$ & $2.7530$ & $0.0033$
& $0.5000$ \\
Al-Pd & $981.8107$ & $9.1908$ & $-89.9193$ & $4.7322$ & $0.2491$ & $1.3235$
& $0.6211$ \\
Mn-Mn & $3.8460$ & $19.9995$ & $-44.5953$ & $4.1469$ & $1.2084$ & $1.0115$
& $1.5938$ \\
Mn-Pd & $12.8931$ & $3.4348$ & $-90.3824$ & $4.4851$ & $1.6212$ & $0.0005$
& $0.5007$ \\
Pd-Pd & $6625.3081$ & $9.5962$ & $99.8792$ & $6.1164$ & $3.8088$ & $2.5086$
& $0.5235$ \\
\hline \multicolumn{8}{c}{transfer function} \\
element & \multicolumn{1}{c}{$a_1$} & \multicolumn{1}{c}{$a_2$}
& \multicolumn{1}{c}{$\alpha$} & \multicolumn{1}{c}{$\beta$}
& \multicolumn{1}{c}{$h$} \\ \hline
Al & $0.1317$ & $0.0399$ & $2.7507$ & $2.3142$ & $1.9995$ \\
Mn & $-1.5432$ & $1.0321$ & $1.6018$ & $2.4154$ & $1.9996$ \\
Pd & $0.4962$ & $0.7317$ & $2.9972$ & $3.4308$ & $0.5001$ \\
\hline \multicolumn{8}{c}{embedding function} \\
element & \multicolumn{1}{c}{$F_0$} & \multicolumn{1}{c}{$F_1$}
& \multicolumn{1}{c}{$q$} & & & \\ \hline
Al & $-2.9403$ & $0.5639$ & $-1.3026$ \\
Mn & $-1.5862$ & $1.3917$ & $-5.3935$ \\
Pd & $-4.0016$ & $0.9432$ & $-5.7749$ \\
\hline \multicolumn{8}{c}{cutoff radius $r_c = 7$ \AA{}}
\end{tabular}
\end{ruledtabular}
\caption{Parameters for the model III EAM potential with $r$ in units of \AA{}
and $V(r)$ in eV.}
\end{table*}
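The tabulated pair parameters suggest the oscillating pair-potential (EOPP) form $V(r)=C_1/r^{\eta_1}+(C_2/r^{\eta_2})\cos(kr+\varphi)$; both this reading and the interpretation of $h$ as a cutoff-smoothing length are our assumptions, since the functional form is not restated here. A minimal evaluation sketch:

```python
import math

# Assumed EOPP form: V(r) = C1/r**eta1 + (C2/r**eta2) * cos(k*r + phi).
# The tabulated parameter h (taken here to be a cutoff-smoothing length
# near r_c = 7 Å) is not included in this minimal evaluation.
def eopp(r, C1, eta1, C2, eta2, k, phi):
    return C1 / r**eta1 + (C2 / r**eta2) * math.cos(k * r + phi)

# Al-Al parameters of model III at a near-neighbor distance (Å)
v_alal = eopp(2.8, 586.4805, 7.6769, -0.0333, 1.0012, 3.7658, 3.8484)
print(v_alal)  # pair energy in eV
```

The first term is a steep short-range repulsion; the second provides the long-range Friedel-like oscillations responsible for the two length scales mentioned above.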
\begin{figure*}
\includegraphics{potential.pdf}
\caption{(color online) Plots of the 12 functions of the EAM potential (model III)
for Al--Pd--Mn.}
\label{fig:potential}
\end{figure*}
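The twelve functions plotted in Fig.~\ref{fig:potential} enter the total energy through the usual EAM functional; a generic, self-contained sketch (with placeholder functions, not the fitted ones) is:

```python
import math

# Generic EAM total energy: E = sum_i F_ti(n_i) + (1/2) sum_{i!=j} V(r_ij)
# with densities n_i = sum_{j!=i} rho_tj(r_ij). The pair table V, transfer
# functions rho and embedding functions F below are placeholders.
def eam_energy(positions, types, V, rho, F):
    e_pair = 0.0
    n = [0.0] * len(positions)
    for i in range(len(positions)):
        for j in range(len(positions)):
            if i == j:
                continue
            r = math.dist(positions[i], positions[j])
            e_pair += 0.5 * V[(types[i], types[j])](r)
            n[i] += rho[types[j]](r)
    return e_pair + sum(F[t](ni) for t, ni in zip(types, n))

# Toy two-atom example with exponential placeholders
V = {("Al", "Al"): lambda r: math.exp(-r)}
rho = {"Al": lambda r: math.exp(-2.0 * r)}
F = {"Al": lambda n: -math.sqrt(n)}
e = eam_energy([(0.0, 0.0, 0.0), (2.5, 0.0, 0.0)], ["Al", "Al"], V, rho, F)
print(e)
```

For the ternary Al--Pd--Mn system this bookkeeping involves the six pair functions, three transfer functions and three embedding functions of the tables above.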
The differences of the models become visible when calculating energy differences
like formation enthalpies or the convex hull. There, model III shows the smallest
deviations and can reproduce the \emph{ab initio} values with very high accuracy. The
models I and II also give very good energy differences but cannot be used to
predict the stability of a structure via the convex hull construction. This
indicates that oscillations on two length scales, like in model III, are necessary.
However, the reasons for this are unclear. For further structure determination and
analysis of the metadislocations in the $\Xi$-phases, the model potential III will
be used.
\begin{acknowledgments}
We would like to thank Alejandro Santana Bonilla and Marek Mihalkovi\v{c}
for intensive discussions and providing some test and reference data. This project
has been funded by the European Network of Excellence ``Complex Metallic Alloys''
(NMP3-CT-2005-500140) and by Deutsche Forschungsgemeinschaft, Paketantrag
``Physical Properties of Complex Metallic Alloys, (PAK 36)'', TR 154/24-2.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction}
Static vacuum spacetimes satisfying the Einstein field equations
play a central role in general relativity,
since such spacetimes are expected to represent a final state of the evolution of matter under self-gravitating forces.
Several classical results show that, under certain physical conditions, a very limited number of such spacetimes exists.
We are interested here in the class of spatially compact spacetimes
with positive cosmological constant, which
is not covered by the mathematical techniques available in the literature
and, therefore, we establish here a new black hole uniqueness theorem. Our proof
overcomes several conceptual and technical
difficulties, as explained below.
By definition, a {\em static spacetime with maximal compact spacelike slices} (of class $W^{2,2}(\Mcalb)$)
is a time-oriented, $(3+1)$-dimensional Lorentzian manifold $\Nbf$
with global topology $\Nbf \simeq \RR \times \Mcal$ and Lorentzian metric
$\gbf = - f^2 \, dt^2 + g$,
where
$t$ is a coordinate on $\RR$ increasing toward the future,
$\Mcal$ is a connected, orientable, smooth topological $3$-manifold with smooth boundary $\del \Mcal$
such that $\Mcalb := \Mcal \cup \del \Mcal$ is compact and is
endowed with a $t$-independent Riemannian metric $g$ of class $W^{2,2}(\Mcal)$,
and $f: \Mcal \to (0, +\infty)$ belongs to the Sobolev space $W^{2,2}(\Mcal)$ and vanishes at the boundary.
The assumed regularity means that, in an atlas of local coordinates,
the metric coefficients admit derivatives up to second-order that are squared-integrable.
In this context, $f$ is referred to as the {\sl lapse function}, and
the vector field $\Tbf := \del/\del t$ is a future-oriented, timelike Killing field:
$$
\Lcal_\Tbf \gbf = 0, \qquad
\gbf(\Tbf,\Tbf) < 0.
$$
By definition, the hypersurfaces $t=$const. are orthogonal to $\Tbf$,
and
the spacetime $\Nbf$ is foliated by compact spacelike slices with boundary. The lapse function
is positive in $\Mcal$ and vanishes on $\del \Mcal$, so that
the zero-level set of $f$
$$
\Hcal := \big\{ f=0 \big\},
$$
referred to as the {\sl horizon}, coincides with the boundary of the slices
$
\Hcal = \del \Mcal
$
(which need not be connected).
In addition, we impose that $\Nbf$ satisfies Einstein's vacuum equations with positive cosmological constant $\Lambda>0$, that is, $\Gbf_{\mu\nu} + \Lambda \, \gbf_{\mu\nu} = 0$, where
$\Gbf_{\mu\nu} := \Rbf_{\mu\nu} - (\Rbf/2) \gbf_{\mu\nu}$ denotes Einstein's curvature tensor
(in dimensions $3+1$), $\Rbf_{\mu\nu}$ the Ricci curvature,
and $\Rbf$ the scalar curvature, respectively.
In other words, we impose
$$
\Rbf_{\mu\nu} = \Lambda \, \gbf_{\mu\nu}.
$$
Such a spacetime was discovered
by Kottler \cite{Kottler}, and its most relevant part for us is the ``interior domain of communication'',
defined as follows.
Given $m, \Lambda>0$ satisfying $(3m)^2 \Lambda \in (0, 1)$,
the {\em interior domain of the Kottler spacetime,} denoted by $\Nbf_{\Kot,m,\Lambda}$
with metric $\gbf_{\Kot,m,\Lambda}$,
is
the static spacetime with maximal compact spacelike slices, whose
lapse function $f_{\Kot, m,\Lambda}$ and
Riemannian metric $g_{\Kot, m,\Lambda}$
on the compact spacelike slices
$$
\Mcal_{\Kot, m,\Lambda} \simeq (r_\Kot^-,r_\Kot^+) \times S^2
$$
are defined by
$$
(f_{\Kot,m,\Lambda}(r))^2 := 1 - {2m \over r} - {\Lambda \over 3} r^2,
\quad
g_{\Kot,m,\Lambda} := {dr^2 \over (f_{\Kot,m,\Lambda}(r))^2} + r^2 \, g_{S^2}, \qquad r \in [r_\Kot^-,r_\Kot^+],
$$
where $g_{S^2}$ denotes the canonical metric on the unit sphere $S^2$,
$m$ is interpreted as the mass of the spacetime, and $r_\Kot^\pm = r^\pm_{\Kot,m,\Lambda}$
are the two positive roots of
the cubic polynomial $r \mapsto r (f_{\Kot,m,\Lambda}(r))^2$.
These manifolds are also called {\em Schwarzschild-de~Sitter spacetimes}
and
provide us with a two-parameter
family of static spacetimes with compact spacelike slices, which
are locally (but not globally) conformally flat.
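As an illustrative numerical check (the parameter values below are our own choice, not from the text), the two horizon radii $r_\Kot^\pm$ arise as the positive roots of the cubic $r\,(f_{\Kot,m,\Lambda}(r))^2 = -(\Lambda/3)\,r^3 + r - 2m$:

```python
import numpy as np

# Horizon radii of the Kottler spacetime: positive roots of
# -(Lambda/3) r^3 + r - 2m = 0. The example values m = 1, Lambda = 0.05
# satisfy the admissibility condition (3m)^2 * Lambda = 0.45 in (0, 1).
m, lam = 1.0, 0.05
roots = np.roots([-lam / 3.0, 0.0, 1.0, -2.0 * m])
r_pm = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
print(r_pm)  # [r_minus, r_plus]
```

For $(3m)^2\Lambda \in (0,1)$ the cubic has exactly two positive roots (and one negative root), consistent with the two-component horizon described above.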
Note that the horizon of a Kottler spacetime, denoted here by $\Hcal_{\Kot,m,\Lambda}$, consists of
the two connected components
$$
\Hcal^\pm_{\Kot,m,\Lambda} := \big\{ r = r^\pm_{\Kot,m,\Lambda} \big\}.
$$
We point out that the spacetimes $\Nbf_{\Kot,m,\Lambda}$
may be extended beyond their horizon: one component of $\Hcal_{\Kot,m,\Lambda}$
is an ``inner horizon'' connecting to an interior black hole region
while the other component is a cosmological horizon connecting to a non-compact exterior domain of communication
(asymptotic to de~Sitter). The interior domain is, both mathematically and physically,
the region of interest
and for instance, as $\Lambda \to 0$, converges to the outer communication domain of the Schwarzschild spacetime
dealt with in the classical black hole theorems.
Finally, one more family of spacetimes is relevant in the present work, namely the
{\em de~Sitter spacetimes,}
parametrized by their cosmological constant $\Lambda>0$.
We denote by $\Nbf_{dS,\Lambda}$ one domain of communication of the de Sitter spacetime,
whose spacelike slices
have the topology of a half-sphere of $S^3$ and whose horizon $\Hcal_{dS,\Lambda}$
admits a single component diffeomorphic to the $2$-sphere $S^2$.
\section{Main results}
We are now in a position to state our rigidity results, under
the regularity condition that the level set achieving the maximum of
the lapse function is a regular surface.
\vskip.25cm
\begin{theorem}[Uniqueness theorem for Kottler spacetime]
\label{main}
The interior domain of the Kottler spa\-cetimes $\Nbf_{\Kot,m, \Lambda}$
parameterized by their mass $m>0$ and cosmological constant $\Lambda>0$
together with the domain of communication $\Nbf_{dS,\Lambda}$ of the de~Sitter spacetimes
are, up to global isometries, the unique static spacetimes with maximal compact spacelike slices
and regular maximal level set,
satisfying Einstein's field equations with positive cosmological constant.
\end{theorem}
\vskip.25cm
We emphasize that no restriction is assumed a~priori on
the topology of the spacelike slices, and
this topology is finally identified as part of
the conclusion of the theorem, which also provides us with the metric.
Hence, the above theorem is of interest in both general relativity and topology.
A large literature is available on black hole uniqueness theorems, and
we will not try to review it here but will only quote works that are most related to the
present discussion.
Classical works deal with the case ${\Lambda=0}$, and go back to Israel \cite{Israel},
Hawking \cite{Hawking},
and many others. For more recent works, see Lindblom \cite{Lindblom} and
Beig and Simon \cite{BeigSimon}.
The class of (vacuum) spacetimes with negative cosmological constant ${\Lambda<0}$
was tackled only recently. (See \cite{LR1} for references.)
In contrast with the above results
and despite active research on the subject in the past twenty years, the class
of spacetimes with positive cosmological constant is not amenable
to the mathematical techniques developed
in the existing literature.
Our purpose in the present paper is to introduce a new approach
which overcomes these (technical and conceptual) difficulties and
to establish a
uniqueness theorem for the case ${\Lambda>0}$. As we will show,
we have to combine
arguments from partial differential equations and differential geometry,
and, most importantly, to work within a class of possibly singular foliations.
Our method of proof also applies in the Riemannian setting and
allows us to establish the validity
of the Besse conjecture \cite{Besse}.
(See also the earlier works \cite{Kobayashi,Lafontaine} for special cases.)
\vskip.25cm
\begin{theorem}[Besse conjecture in Riemannian geometry]
All compact three-manifolds $(M,g)$, on which there exists a non-trivial solution $f$ to the dual linearized curvature equation $L^*(f)=0$ with regular maximal level set,
are given by the following list (up to isometries):
\begin{itemize}
\item The sphere $S^3$ endowed with the canonical metric. In this case, one has
$f=\cos(d(.,x_0))$ where $d$ is the Riemannian distance to a point $x_0$,
and the kernel of $L^*$ has dimension $\dime \Ker(L^*)=4$.
\item A finite quotient of the product $S^1\times S^2$ endowed with the canonical product metric. In this case one has
$\dime \Ker(L^*)=2$.
\item A finite quotient of the twisted product $S^1\times S^2$ endowed with the metric $g=dx^2 + h^2(x) \, g_{S^2}$.
These twisted products depend upon two real parameters and an integer parameter, and
$\Ker(L^*)= h' \, \RR$.
\end{itemize}
\end{theorem}
\vskip.25cm
\section{Elements of proof}
Let us indicate several key elements of our proof of Theorem~\ref{main}.
We consider a
static spacetime $\Nbf$ with maximal compact spacelike slices $\Mcal$
(and $W^{2,2}$ regularity) satisfying Einstein's field equations with positive cosmological constant $\Lambda>0$.
Using the $(3+1)$-splitting,
the Einstein equations on the $4$-dimensional spacetime are equivalent
to a problem posed on the $3$-manifold $\Mcal$ with boundary, i.e.,
to the partial differential equations (for the lapse function $f$ and metric $g$)
$$
\nabla df - (\Delta f) \, g - f \, \Rc = 0,
$$
with the additional constraint that the scalar curvature $R$ of $(\Mcal,g)$ coincides (up to a factor) with the
cosmological constant and, therefore,
is a constant; specifically, one has
$
R = 2 \Lambda>0.
$
In the Einstein equations, the field of $1$-forms $df$ is the differential of $f$, while
$\nabla$ denotes the covariant derivative in $(\Mcal,g)$,
$\nabla df$ the Hessian of $f$,
$\Delta$ the Laplacian operator (normalized to have negative eigenvalues),
and $\Rc$ the $3$-dimensional Ricci curvature, respectively.
By taking the trace of the Einstein equations, we deduce that
$$
\Delta f = - {R \over 2} f.
$$
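Term by term, with $\mathrm{tr}_g(\nabla df)=\Delta f$, $\mathrm{tr}_g(g)=3$, and $\mathrm{tr}_g(f\,\Rc)=fR$, the trace computation reads

```latex
\mathrm{tr}_g\big( \nabla df - (\Delta f)\, g - f \, \Rc \big)
= \Delta f - 3\,\Delta f - f R = -2\,\Delta f - f R = 0 ,
```

which recovers the displayed identity.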
In other words, $f$ is an eigenfunction of the Laplace operator defined on the (unknown)
Riemannian manifold $(\Mcal,g)$.
Our objective is to determine {\sl all triplets of solutions} $(\Mcal, g, f)$ satisfying
the Einstein equations
and, in particular, to determine the topology of $\Mcal$.
From the lapse function associated with the natural $(3+1)$--foliation of the spacetimes
under consideration, we define a (possibly) degenerate $(2+1)$--foliation
and investigate the topology and geometry of its leaves.
It is convenient to introduce certain {\sl normalized} geometric invariants of this foliation,
which make sense globally on the manifold $\Mcal$, even at points where the gradient $\nabla f$ vanishes
(and the foliation possibly becomes degenerate).
We also introduce the {\sl Hawking mass density,}
defined from the Gauss curvature and mean-curvature of the $2$-slices,
which again makes sense globally on the manifold, even at critical points.
The Hawking mass density used here
appears classically as the integrand in Hawking's original definition.
Using the notion of Hawking mass density, we establish a pointwise version of the Penrose inequality
on the horizon, which allows us to identify a topological $2$-sphere within the connected components of the horizon.
An ``optimal'' Kottler model with well-chosen ADM mass is introduced, which covers the region
bounded by certain level sets of the lapse function.
Finally, several maximum principle arguments are developed for Einstein's field equations of static spacetimes,
which apply to the possibly degenerate $(2+1)$-foliation under consideration.
For further details we refer to \cite{LR1,LR2}.
\section{Introduction}
\subsection{Overview and motivation}
An important class of field theories in physics is represented by \textit{gauge theories}. These are theories containing a redundant number of degrees of freedom, which causes physical quantities to be invariant under certain local transformations, called \textit{gauge symmetries}. The presence of gauge symmetries leads to challenging problems, from the definition of the path integral to the general problem of understanding the perturbative quantization of a gauge theory. Since the physical information about a classical field theory is encoded in the set of solutions of the Euler--Lagrange equations (the \textit{critical locus}), a possible way to deal with such problems is to consider the critical locus modulo the gauge symmetries. The fields are then constructed as functions on this quotient. However, this is not feasible in general, since these quotients turn out to be singular. Batalin and Vilkovisky introduced a method, known today as the \textit{BV formalism} \cite{BV77,BV81,BV83}, that employs symplectic (co)homological tools \cite{KT79} to treat these field theories; in particular, it overcomes the difficulties connected to the singularity of the quotient by taking a homological resolution of the critical locus. A crucial observation in the BV formalism is also that gauge fixing then corresponds to the choice of a Lagrangian submanifold.
Another method developed around the same time is the \textit{BFV formalism} by Batalin, Fradkin and Vilkovisky \cite{FV77,BF83,BF86}, which deals with gauge theories in the Hamiltonian setting, while the BV construction is formulated in the Lagrangian approach.
Recently, the study of gauge theories on spacetime manifolds with boundary led Cattaneo, Mnev and Reshetikhin \cite{CMR11,CMR14} to relate these two formulations in order to develop the \textit{BV-BFV formalism}. Their idea was that, under certain conditions, BV theories in the bulk can induce a BFV theory on the boundary. This approach was successfully applied to a large number of physical theories, such as electrodynamics, Yang--Mills theory, scalar field theory and $BF$-theories \cite{CMR14}. In particular, the \textit{AKSZ construction}, developed in \cite{AKSZ97}, naturally produces a large variety of theories which automatically satisfy the BV-BFV axioms, as was shown in \cite{CMR14}. This is quite remarkable since many theories of interest are actually of AKSZ-type, such as Chern--Simons (CS) theory, $BF$-theory and the Poisson sigma model (PSM) \cite{CMR14}.
In \cite{CMR17}, a perturbative quantization scheme for gauge theories in the BV-BFV framework was introduced, called the \textit{quantum BV-BFV formalism}. The importance of this method lies in its compatibility with cutting and gluing in the sense of \textit{topological quantum field theories} (TQFTs). The quantum BV-BFV formalism has been applied successfully to various physically relevant theories, such as $BF$-theory and the PSM \cite{CMR17}, split CS theory \cite{CMnW17} and CS theory \cite{CMnW21}, the relational symplectic groupoid\footnote{The relational symplectic groupoid was first defined in \cite{CC15}.} \cite{CMoW17} and 2D Yang--Mills theory on manifolds with corners \cite{Ir18,IM19}.
A significant effort has been devoted to studying TQFTs within the quantum BV-BFV framework. Indeed, the method was introduced with the goal of constructing perturbative topological invariants of manifolds with boundary, compatible with cutting and gluing, for topological field theories. Over the years, two prominent TQFTs have been studied in detail: CS theory \cite{AS91,AS94} in \cite{CMnW17,We18} and the PSM \cite{SS94,Ik94} in \cite{CMoW20}.
In \cite{CMoW19}, a globalized version of the (quantum) BV-BFV formalism was developed in the context of nonlinear split AKSZ sigma models on manifolds with and without boundary, using methods of formal geometry à la Bott \cite{Bo11}, Gelfand and Kazhdan \cite{GK71} (see also \cite{BCM12} for an application of the globalization procedure to the PSM in the context of a closed source manifold). Their construction is able to detect changes of the quantum state when one modifies the constant map around which the perturbation is developed. This required them to formulate ``differential'' versions of the \textit{(modified) Classical Master Equation} and the \textit{(modified) Quantum Master Equation}, which are the two key equations in the BV(-BFV) formalism. As an example, this procedure was applied to the PSM on manifolds with boundary and extended to the case of corners in \cite{CMoW20}.
In this paper, we continue the effort in analyzing TQFTs within the quantum BV-BFV formalism by studying the \textit{Rozansky--Witten (RW) theory}.
The RW model is a topological sigma model with a 3-dimensional source manifold $\Sigma_3$, introduced by Rozansky and Witten in \cite{RW96} through a \textit{topological twist} of a 6-dimensional supersymmetric sigma model with target a hyperK{\"a}hler manifold $M$. Of particular interest is the perturbative expansion of the RW partition function. Rozansky and Witten obtained this expansion as a combinatorial sum in terms of Feynman diagrams $\Gamma$, which are shown to be trivalent graphs:
\begin{equation}
Z_M(\Sigma_3)=\sum_\Gamma b_\Gamma(M)I_\Gamma(\Sigma_3),
\end{equation}
where the $b_\Gamma (M)$ are complex-valued functions on trivalent graphs constructed from the target manifold, while $I_\Gamma(\Sigma_3)$ contains the integral over the propagators of the theory and depends on the source manifold. There is evidence suggesting that the $I_\Gamma(\Sigma_3)$ are the \textit{LMO invariants} of Le, Murakami and Ohtsuki \cite{LMO98}. On the other hand, Rozansky and Witten showed that the $b_\Gamma(M)$ satisfy the famous AS (reflected in the absence of tadpole diagrams) and IHX relations. As a result, the $b_\Gamma(M)$ constitute the \textit{Rozansky--Witten weight system} for the \textit{graph homology}, the space of linear combinations of equivalence classes of trivalent graphs (modulo the AS and IHX relations). This means that the RW weights can be used to construct new finite type topological invariants for 3-dimensional manifolds \cite{B95}.
The RW theory opened up a new branch of research which was undertaken by many mathematicians and physicists (e.g. \cite{HT99,Th00}). Shortly after the original paper, Kontsevich understood that the RW invariants could be obtained from the characteristic classes of foliations and Gelfand--Fuks cohomology \cite{Ko99}. Inspired by the work of Kontsevich, Kapranov reformulated the weight system in cohomological terms (instead of using differential forms) in \cite{KA99}. This idea relies on the fact that one can replace the Riemann curvature tensor by the Atiyah class \cite{At57}, which is the obstruction to the existence of a global holomorphic connection. As a consequence of Kontsevich's and Kapranov's approaches, the RW weights were understood to be independent of the hyperK{\"a}hler metric on $M$: in fact, the model can be constructed more generally with target a holomorphic symplectic manifold. For this reason, the RW weights are also called \textit{RW invariants}\footnote{The terminology is unfortunate as in reality the proper invariants should be the products of the weights with $I_\Gamma(\Sigma_3)$.} of $M$ (see \cite{Sa04} for a detailed exposition). On the other hand, the possibility of taking a holomorphic symplectic manifold as target was later interpreted in the context of topological sigma models by Rozansky and Witten in the appendix of \cite{RW96}.
In the last 20 years, the RW model has been the focus of intense research aimed at formulating it as an extended TQFT (see \cite{Sa00,RS02}), at investigating its boundary conditions and defects \cite{KRS09, KR09}, and at constructing its globalized formulation \cite{QZ10,KQZ13,CLL17}.
\subsection{Our contribution}
The main contribution of this paper is to add the RW theory to the list of TQFTs which have been studied successfully within the globalized version of the quantum BV-BFV framework \cite{CMoW19}. This will be a step towards the higher codimension quantization of RW theory, which will possibly lead to new insights towards the 3-dimensional correspondence between CS theory \cite{Wit89} and the Reshetikhin--Turaev construction \cite{Reshetikhin1991} from the point of view of (perturbative) extended field theories described by Baez--Dolan \cite{BD95} and Lurie \cite{Lu09}. Moreover, this could also help in understanding (generalizations of a globalized version of the) Berezin--Toeplitz quantization (star product) \cite{Schlichenmaier2010} through field-theoretic methods using cutting and gluing, similarly to what was done for Kontsevich's star product \cite{Ko03} in the case of the PSM in \cite{CMoW20}.
We construct the BV-BFV extension of an AKSZ model having a 3-dimensional manifold $\Sigma_3$ (possibly with boundary) as source and a holomorphic symplectic manifold $M$, with holomorphic symplectic form $\Omega$, as target. Following \cite{KA99}, we define a formal holomorphic exponential map $\varphi$. This is used to linearize the space of fields of our model, obtaining
\begin{equation}
\Tilde{\mathcal{F}}_{\Sigma_3, x}=\Omega^{\bullet}(\Sigma_3)\otimes T^{1,0}_{x}M,
\end{equation}
where $\Omega^{\bullet}(\Sigma_3)$ denotes the complex of de Rham forms on the source manifold and $T^{1,0}_{x}M$ is the holomorphic tangent space on the target. In order to vary the constant solution around which we perturb, we define a classical Grothendieck connection which can be seen as a complex extension of the Grothendieck connection used in \cite{CMoW19,CMoW20}. In this way, we construct a \textit{formal global action} for our model, i.e.
\begin{equation}
\Tilde{\Sc}\surg\coloneqq\varint_{\Sigma_3}\bigg(\frac{1}{2}\Omega_{ij}\hat{\mathbf{X}}^id\hat{\mathbf{X}}^j+\Big(\hat{R}^i\sur\Big)_j(x; \hat{\mathbf{X}})\Omega_{il}\hat{\mathbf{X}}^l dx^j+\Big(\hat{R}^i\sur\Big)_{\Bar{j}}(x; \hat{\mathbf{X}})\Omega_{il}\hat{\mathbf{X}}^l dx^{\Bar{j}}\bigg)
\end{equation}
where $\hat{\mathbf{X}}^i$ are the coordinates of the space of fields $\Tilde{\mathcal{F}}_{\Sigma_3, x}$ organized as superfields, $x$ is the constant map around which we expand, and $\Big(\hat{R}^i\sur\Big)_j$ and $\Big(\hat{R}^i\sur\Big)_{\Bar{j}}$ are the components of the Grothendieck connection, given by
\begin{equation}
\begin{split}
&R^i_j(x;y)dx^j:=-\bigg[\bigg(\frac{\partial \varphi}{\partial y}\bigg)^{-1}\bigg]^i_p\frac{\partial \varphi^p}{\partial x^j}dx^j,\\
&R^i_{\Bar{j}}(x;y)dx^{\Bar{j}}:=-\bigg[\bigg(\frac{\partial \varphi}{\partial y}\bigg)^{-1}\bigg]^i_p\frac{\partial \varphi^p}{\partial x^{\Bar{j}}}dx^{\Bar{j}},
\end{split}
\end{equation}
where $\{y^i\}$ are the generators of the fiber of $\reallywidehat{\Sym}^\bullet(T^{\vee1,0}M)$. The formal action is such that the \textit{differential Classical Master Equation} (dCME) is satisfied, namely
\begin{equation}
d_M\Tilde{\mathcal{S}}\surg+\frac{1}{2}(\Tilde{\mathcal{S}}\surg,\Tilde{\mathcal{S}}\surg)=0,
\end{equation}
with $d_M=d_x+d_{\Bar{x}}$ the sum of holomorphic and antiholomorphic Dolbeault differentials on $M$. The dCME presented here is different from the one presented in e.g. \cite{BCM12,CMoW19,CMoW20} since there $d_M$ was the de Rham differential on the body of the target manifold.
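As a simple consistency check on the formulas for the connection components, consider the flat situation in which one may take $\varphi^i(x;y)=x^i+y^i$: then $\big(\partial\varphi/\partial y\big)^i_p=\delta^i_p$, $\partial\varphi^p/\partial x^j=\delta^p_j$ and $\partial\varphi^p/\partial x^{\Bar{j}}=0$, so that
\begin{equation*}
R^i_j(x;y)=-\delta^i_j, \qquad R^i_{\Bar{j}}(x;y)=0,
\end{equation*}
and the associated connection reduces to $dx^j\big(\partial_{x^j}-\partial_{y^j}\big)$, whose horizontal sections are precisely the functions of $x+y$, as expected for the flat Grothendieck connection.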
The globalized model is then shown to be a globalization of the RW model \cite{RW96}, which reduces to the RW model itself in the appropriate limits. Our globalization of the RW model is compared with other globalization constructions, such as the one developed in \cite{CLL17} for a closed source manifold using Costello's approach \cite{Co11a,Co11b} to \textit{derived geometry} \cite{To06, To14, PTTV13}, the procedure in \cite{Ste17} which extends the work of \cite{CLL17} to manifolds with boundary, and the procedure in \cite{QZ10,KQZ13}. In general, our model is compatible with all of these apparently different viewpoints. In particular, we give a detailed account of the similarities between our method and the one in \cite{CLL17}, thus confirming the claim in Remark 3.6 of \cite{CMoW19} about the equivalence between Costello's approach and ours.
In order to quantize the theory according to the quantum BV-BFV formalism, we formulate a split version of our globalized RW model. Since the globalization is controlled by an $\Linf$-algebra, following \cite{Ste17} and inspired by the work of Cattaneo, Mnev and Wernli on CS theory \cite{CMnW17}, we assume that we can split the $\Linf$-algebra into two isotropic subspaces. The action of the \textit{globalized split RW model} is then
\begin{equation}
\Tilde{\Sc}\surgS\coloneqq
\braket{\hat{\mathbf{B}}, D\hat{\mathbf{A}}}+\Big\langle\Big(\hat{R}\sur\Big)_j(x; \hat{\mathbf{A}}+\hat{\mathbf{B}})dx^j,\hat{\mathbf{A}}+\hat{\mathbf{B}}\Big\rangle+\Big\langle\Big(\hat{R}\sur\Big)_{\bar{j}}(x; \hat{\mathbf{A}}+\hat{\mathbf{B}})dx^{\bar{j}},\hat{\mathbf{A}}+\hat{\mathbf{B}}\Big\rangle,
\end{equation}
where $\braket{-,-}$ denotes the BV symplectic form on the space of fields $\Tilde{\mathcal{F}}\surgS$ with values in the Dolbeault complex of $M$, $\hat{\mathbf{A}}^i$ and $\hat{\mathbf{B}}_i$ are the fields arising from the splitting of the field $\hat{\mathbf{X}}^i$, and $D$ denotes the superdifferential. Note that the $d$ appearing in $dx^j$ and $dx^{\bar{j}}$ is the de Rham differential on the target, not on the source.
Finally, we quantize the globalized split RW model within the quantum BV-BFV formalism framework. Here, we obtain the following two theorems.
\begin{theoremn}[Flatness of the qGBFV operator (Theorem \ref{thm:flatness})]
The \emph{quantum Grothendieck BFV (qGBFV) operator} $\nabla_{\textup{G}}$ for the anomaly-free globalized split RW model squares to zero, i.e.
\begin{equation}
\label{flatness_GBFV}
(\nabla_{\textup{G}})^2\equiv0,
\end{equation}
where
\begin{equation}
\nabla_{\textup{G}}=d_{M}-i\hbar \Delta_{\mathcal{V}_{\Sigma_3, x}}+\frac{i}{\hbar}\boldsymbol{\Omega}_{\partial \Sigma_3}=d_x+d_{\Bar{x}}-i\hbar \Delta_{\mathcal{V}_{\Sigma_3, x}}+\frac{i}{\hbar}\boldsymbol{\Omega}_{\partial \Sigma_3},
\end{equation}
with $d_M$ the sum of the holomorphic and antiholomorphic Dolbeault differentials on the target $M$, $\Delta_{\mathcal{V}_{\Sigma_3, x}}$ the BV Laplacian and $\boldsymbol{\Omega}_{\partial \Sigma_3}$ the full BFV boundary operator.
\end{theoremn}
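Schematically, since $d_M$, $\Delta_{\mathcal{V}_{\Sigma_3, x}}$ and $\boldsymbol{\Omega}_{\partial \Sigma_3}$ are all odd operators, the square expands into graded commutators (here, anticommutators); using $d_M^2=0$, $\Delta_{\mathcal{V}_{\Sigma_3, x}}^2=0$, and the fact that $d_M$ (formally) commutes with the BV Laplacian, Equation \eqref{flatness_GBFV} is equivalent to the operator identity
\begin{equation*}
\boldsymbol{\Omega}_{\partial \Sigma_3}^2=i\hbar\,[d_M,\boldsymbol{\Omega}_{\partial \Sigma_3}]+\hbar^2\,[\Delta_{\mathcal{V}_{\Sigma_3, x}},\boldsymbol{\Omega}_{\partial \Sigma_3}],
\end{equation*}
a form that makes explicit the interplay between the boundary operator, the globalization differential and the BV Laplacian.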
\begin{theoremn}[mdQME for anomaly-free globalized split RW model (Theorem \ref{thm:mdQME})]
Consider the full covariant perturbative state $\hat{\psi}_{\Sigma_3,x}$ as a quantization of the anomaly-free globalized split RW model. Then
\begin{equation}
\label{mdqme_thm}
\bigg(d_M-i\hbar \Delta_{\mathcal{V}_{\Sigma_3, x}}+\frac{i}{\hbar}\boldsymbol{\Omega}_{\partial \Sigma_3}\bigg)\boldsymbol{\hat{\psi}}\surgR=0.
\end{equation}
\end{theoremn}
The proofs of both theorems are very similar to the ones exhibited in \cite{CMoW19} for nonlinear split AKSZ sigma models. Hence, we refer to \cite{CMoW19} when the procedure is the same, whereas we remark on the differences (which are related to the presence of the sum of the holomorphic and antiholomorphic Dolbeault differentials in the quantum Grothendieck BFV operator, instead of the de Rham differential as in \cite{CMoW19}).
We provide an explicit expression for the BFV boundary operator with up to one bulk vertex in the $\mathbb{B}$-representation by adapting the degree counting techniques of \cite{CMoW19} to our case. Unfortunately, due to complications related to the number of Feynman rules, we are not able to provide an explicit expression of the BFV boundary operator in the $\mathbb{B}$-representation in the case of a higher number of bulk vertices. See \cite{Sac21} for a limited example of the graphs that appear when there are three bulk vertices.
This paper is structured as follows:
\begin{itemize}
\item In Section \ref{sec:BV-BFV} we introduce the most important notions of the classical and quantum BV-BFV formalism. Moreover, we give an overview of \textit{AKSZ theories}.
\item In Section \ref{sec:RW_model} we introduce the necessary preliminaries to understand the RW model.
\item In Section \ref{sec:Classical_Theory} we define an AKSZ model which upon globalization can be reduced to the RW model.
\item In Section \ref{sec:comp_orig_RW} we compare our construction to the original construction by Rozansky and Witten.
\item In Section \ref{sec:comp_other} we compare our globalization construction with other globalization constructions of the RW model.
\item In Section \ref{sec:BF-like_formulation} we give a $BF$-like formulation by a splitting of the fields of the RW model in order to be able to give a suitable description of its quantization.
\item In Section \ref{sec:pert_quant_RW} we quantize the globalized split RW model according to the quantum BV-BFV formalism introduced in Section \ref{sec:BV-BFV}.
\item In Section \ref{sec:mdQME} we introduce the \emph{quantum Grothendieck BFV operator} for the globalized split RW model, we prove that it is flat and, in the end, we use it to prove the modified differential Quantum Master Equation.
\item Finally, in Section \ref{sec:outlook} we present some possible future directions.
\end{itemize}
\textbf{Notation}
Throughout the whole paper, we will keep the following conventions:
\begin{itemize}
\item we will drop the wedge product wherever its presence would make the expressions too cumbersome;
\item we will employ the Einstein summation convention, meaning that expressions of the form $A^iB_i$ should be interpreted as $\sum_i A^iB_i$;
\item we will denote the dual of a vector space $V$ as $V^\vee$.
\end{itemize}
\textbf{Acknowledgements}
We thank A. S. Cattaneo, I. Contreras, P. Mnev and T. Willwacher for comments and remarks.
D. S. wants to thank P. Steffens for sending him his master thesis and for valuable comments.
This research was (partially) supported by the NCCR SwissMAP, funded by the Swiss National Science Foundation, and by the SNF grant No. 200020\_192080.
This paper is based on the master thesis of D. S. \cite{Sac21}.
\section{The BV-BFV formalism}
\label{sec:BV-BFV}
\subsection{Classical BV-BFV formalism}
In this section, we will recall the classical BV-BFV formalism as in \cite{CMR14}. We also refer to \cite{Mnev2019} for an excellent introduction to the BV formalism and to \cite{CattMosh1} for a more detailed exposition on the BV-BFV formalism.
\begin{defn}[BV manifold]
\label{BVmfld}
A \textit{BV manifold} is a triple $(\mathcal{F}, \omega, \mathcal{S})$ where $\mathcal{F}$ is a supermanifold\footnote{See \cite{BerLei75} for an original reference and \cite{CattaneoSchaetz2011,CattMosh1} for a concise introduction.} with $\mathbb{Z}$-grading, $\omega$ an odd symplectic form of degree $-1$ and $\mathcal{S}$ an even function of degree zero on $\mathcal{F}$, satisfying the \textit{Classical Master
Equation} (CME):
\begin{equation}
\label{cms}
(\mathcal{S},\mathcal{S})=0.
\end{equation}
We denote the odd Poisson bracket associated to $\omega$ with round brackets $(-,-)$. Usually we refer to $(-,-)$ as the \textit{BV bracket} (or \emph{anti bracket}).
\end{defn}
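\begin{rmk}
In local Darboux coordinates $(x^i,x^+_i)$ for $\omega$, where in field-theoretic language the $x^i$ are the \emph{fields} and the $x^+_i$ the \emph{antifields} (with $\mathrm{gh}(x^+_i)=-1-\mathrm{gh}(x^i)$), the BV bracket takes the familiar form
\begin{equation*}
(F,G)=\frac{\partial_r F}{\partial x^i}\frac{\partial_l G}{\partial x^+_i}-\frac{\partial_r F}{\partial x^+_i}\frac{\partial_l G}{\partial x^i},
\end{equation*}
with $\partial_r$ and $\partial_l$ denoting right and left derivatives, respectively.
\end{rmk}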
\begin{rmk}
Note that we have two different gradings, the $\mathbb{Z}_2$-grading from
the supermanifold structure and an additional $\mathbb{Z}$-grading. In a physics context, the $\mathbb{Z}$-grading corresponds to the ghost number $gh$ and the $\mathbb{Z}_2$-grading corresponds to the ``parity", which distinguishes bosonic and fermionic particles.
\end{rmk}
Equivalently, one may introduce the \textit{Hamiltonian vector field} $Q$ of $\mathcal{S}$, with ghost degree 1, defined by
\begin{equation}
\label{cms2}
\iota_{Q}\omega=\delta \mathcal{S}
\end{equation}
Furthermore, we require $Q$ to be a \textit{cohomological vector field}, i.e.
\begin{equation}
\label{Q}
[Q,Q]=0,
\end{equation}
where $[-,-]$ denotes the Lie bracket of vector fields.
\begin{rmk}
Using symplectic geometry tools, one can show that Eq. \eqref{Q} and Eq. (\ref{cms2}) together imply $(\mathcal{S},\mathcal{S})=\text{constant}$, which for degree reasons reduces to Eq. (\ref{cms}).
This allows us then to write an equivalent definition of a BV manifold as a quadruple collection of data $(\mathcal{F}, \omega, \mathcal{S}, Q)$ satisfying Eq. \eqref{cms2} and Eq. (\ref{Q}).
\end{rmk}
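For the reader's convenience, here is a sketch of the standard argument (signs follow the graded Cartan calculus). Since $\omega$ is closed and $\iota_Q\omega=\delta\mathcal{S}$, one has $\mathcal{L}_Q\omega=\delta\iota_Q\omega=\delta^2\mathcal{S}=0$, and therefore
\begin{equation*}
\delta(\mathcal{S},\mathcal{S})=\delta\iota_Q\iota_Q\omega=\mathcal{L}_Q\iota_Q\omega-\iota_Q\delta\iota_Q\omega=\iota_{[Q,Q]}\omega+\iota_Q\mathcal{L}_Q\omega-\iota_Q\delta^2\mathcal{S}=0,
\end{equation*}
using $[Q,Q]=0$. Hence $(\mathcal{S},\mathcal{S})$ is constant, and since it has ghost number $1$, it must vanish.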
\begin{defn}[BV theory] A $d$-dimensional \emph{BV theory} is the association of a BV manifold to every closed $d$-dimensional manifold $M$:
\begin{equation}
M \mapsto (\mathcal{F}_M,\omega_M,\mathcal{S}_M,Q_M)
\end{equation}
\end{defn}
The natural question one might ask is how these definitions extend to the case of a manifold with boundary, which will also be the relevant case for our work.
\begin{defn}[BV extension] A BV theory $(\mathcal{F}_M,\omega_M,\mathcal{S}_M,Q_M)$ is a \textit{BV extension} of a local field theory $M \mapsto (F_M, S_M)$ if for all $d$-manifolds $M$, the degree zero part $(\mathcal{F}_M)_0$ of $\mathcal{F}_M$ satisfies $(\mathcal{F}_M)_0=F_M$ and $\mathcal{S}_M\vert_{(\mathcal{F}_M)_0}=S_M$. In addition we want $\omega_M, \mathcal{S}_M$ and $Q_M$ to be local.
\end{defn}
Extending the BV formalism to manifolds with boundary amounts to considering its Hamiltonian counterpart, namely the BFV formalism.
\begin{defn}[BFV manifold] A \emph{BFV manifold} is a triple $(\mathcal{F}^\partial, \omega^\partial,Q^\partial)$, where similarly as in Definition \ref{BVmfld}, $\mathcal{F}^\partial$ is a graded manifold, $\omega^\partial$ an even symplectic form of degree zero, and $Q^\partial$ a degree 1 cohomological vector field on $\mathcal{F}^\partial$. Moreover, if $\omega^\partial=\delta \alpha^\partial$ is exact, the BFV manifold is called \textit{exact}.
\end{defn}
The result of merging BV and BFV formalism is encapsulated in the following definition:
\begin{defn}[BV-BFV manifold, \cite{CMR14}]
A \emph{BV-BFV manifold} over a given exact BFV manifold $\mathcal{F}^\partial=(\mathcal{F}^\partial,\omega^\partial=\delta\alpha^\partial,Q^\partial)$
is a quintuple
\begin{equation}
\mathcal{F}=(\mathcal{F},\omega,\mathcal{S},Q,\pi)
\end{equation}
with $\pi:\mathcal{F} \rightarrow \mathcal{F}^\partial$ a surjective submersion obeying
\begin{equation}
\label{cme_boundary}
\iota_Q\omega=\delta \mathcal{S}+\pi^*\alpha^\partial
\end{equation}
and $Q^\partial=\delta \pi Q$, where $\delta \pi$ is the differential of $\pi$.
\end{defn}
\begin{rmk}
Note that if $\mathcal{F}^\partial$ is a point then $(\mathcal{F},\omega,\mathcal{S})$ is a BV manifold.
\end{rmk}
We will adopt the short notation $\pi:\mathcal{F} \rightarrow \mathcal{F}^\partial$ for a BV-BFV manifold.
We can now formulate a generalization of a BV theory:
\begin{defn}[BV-BFV theory]
A $d$-dimensional \textit{BV-BFV theory} associates to every closed
$(d-1)$-dimensional manifold $\Sigma$ a BFV manifold $\mathcal{F}^\partial_\Sigma$, and to a $d$-dimensional manifold $M$ with
boundary $\partial M$ a BV-BFV manifold $\pi_M:\mathcal{F}_M \rightarrow \mathcal{F}^\partial_{\partial M}$.
\end{defn}
\begin{rmk}
For $Q$ a Hamiltonian vector field of $\mathcal{S}$, one can formally write
\[(\mathcal{S},\mathcal{S})=\iota_Q\iota_Q \omega = Q(\mathcal{S}).\] In the case of a BV-BFV theory for a manifold $M$ with boundary $\partial M$, we have
\begin{equation}
Q(\mathcal{S})=\pi^*(2\mathcal{S}^\partial-\iota_{Q^\partial}\alpha^\partial).
\end{equation}
This can be phrased equivalently as
\begin{equation}
\iota_Q\iota_Q\omega=2\pi^*\mathcal{S}^\partial,
\end{equation}
which we will refer to as the \textit{modified Classical Master Equation} (mCME).
\end{rmk}
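Explicitly, the equivalence of the two formulations follows by contracting \eqref{cme_boundary} once more with $Q$:
\begin{equation*}
\iota_Q\iota_Q\omega=Q(\mathcal{S})+\iota_Q\pi^*\alpha^\partial=\pi^*\big(2\mathcal{S}^\partial-\iota_{Q^\partial}\alpha^\partial\big)+\pi^*\iota_{Q^\partial}\alpha^\partial=2\pi^*\mathcal{S}^\partial,
\end{equation*}
where we used that $Q$ is $\pi$-related to $Q^\partial$, so that $\iota_Q\pi^*\alpha^\partial=\pi^*\iota_{Q^\partial}\alpha^\partial$.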
An important as well as classical example of a BV-BFV theory are the \textit{$BF$-like theories}.
\begin{defn}[$BF$-like theory]A BV-BFV theory is called \emph{$BF$-like} if
\begin{equation}
\begin{split}
\mathcal{F}_M&=\big(\Omega^\bullet(M) \otimes V[1]\big) \oplus \big(\Omega^\bullet (M)\otimes V^\vee[d-2]\big),\\
\mathcal{S}_M&=\varint_M\big(\braket{B,dA} + \mathcal{V}(A,B)\big),
\end{split}
\end{equation}
with $V$ a graded vector space, $\langle -, - \rangle$ a pairing between $V^\vee$ and $V$, and $\mathcal{V}$ a density-valued function of the fields $A$ and $B$, such that $\mathcal{S}_M$ satisfies the CME for $M$ without boundary.
\end{defn}
\begin{rmk}
\label{bv-bfv:rmk_bf_like}
Equivalently, by picking up a graded basis $e^i$ for $V$ and $e_i$ for $V^\vee$, we may define a $BF$-like theory as a BV-BFV theory with
\begin{equation}
\begin{split}
\mathcal{F}_M&=\big(\Omega^\bullet(M) \otimes V[k_i]\big) \oplus \big(\Omega^\bullet (M)\otimes V^\vee[d-k_i-1]\big),\\
\mathcal{S}_M&=\varint_M\big(B_idA^i + \mathcal{V}(A,B)\big).
\end{split}
\end{equation}
To pass from one definition to the other, it is sufficient to set $k_i=1-|e_i|$, where $|e_i|$ is the degree of $e_i$.
\end{rmk}
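The terminology is motivated by the prototypical example: taking $V=\mathfrak{g}[1]$ for a Lie algebra $\mathfrak{g}$, with $\langle-,-\rangle$ the canonical pairing between $\mathfrak{g}^\vee$ and $\mathfrak{g}$, and $\mathcal{V}(A,B)=\frac12\braket{B,[A,A]}$, one recovers non-abelian $BF$ theory,
\begin{equation*}
\mathcal{S}_M=\varint_M\braket{B,dA+\tfrac12[A,A]}=\varint_M\braket{B,F_A},
\end{equation*}
whose degree-zero part is the classical $BF$ action, with $F_A$ the curvature of the connection one-form contained in the superfield $A$.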
\subsection{Quantum BV-BFV formalism}
\label{sec_qbvbfv}
In this section, we introduce a perturbative quantization method for BV-BFV theories compatible with cutting and gluing. Originally, this procedure was proposed in \cite{CMR17} under the name of \textit{quantum BV-BFV formalism}. We start by defining what a quantum BV-BFV theory is, and then we explain how to produce such a theory by perturbatively quantizing a classical BV-BFV theory.
\begin{defn}[Quantum BV-BFV theory]
\label{def_Quantum BV-BFV theory}
Given a BV-BFV theory\footnote{The perturbative quantization scheme goes through if certain conditions are satisfied. In the following, we will be interested in $BF$-like theories, for which this method works smoothly.}, a $d$-dimensional \textit{quantum BV-BFV theory} associates
\begin{itemize}
\item To every closed $(d-1)$-dimensional manifold $\Sigma$ a graded $\mathbb{C}[\![\hbar]\!]$-module $\mathcal{H}_{\Sigma}$, called the \textit{space of states}.
\item To every $d$-dimensional manifold (possibly with boundary) $M$:
\begin{itemize}
\item a degree 1 coboundary operator $\Omega\surM$ on $\mathcal{H}\surM$, called the \textit{quantum BFV operator}. We call $\mathcal{H}\surM$ the \textit{space of boundary states}.
\item a finite-dimensional BV manifold $\mathcal{V}_M$, called \textit{space of residual fields}.
\item a homogeneous element\footnote{Usually, the quantum state $\hat{\psi}_M$ will have degree 0. This is always the case when the gauge-fixing Lagrangian has degree 0, which is true for all the examples considered in this paper.}, the \textit{quantum state} $ \hat{\psi}_M\in \hat{\mathcal{H}}_M$. By denoting the space of half-densities on $\mathcal{V}_M$ as $\Dens^{\frac12}(\mathcal{V}_M)$, we define $\hat{\mathcal{H}}_M$ as $\hat{\mathcal{H}}_M\coloneqq\Dens^{\frac12}(\mathcal{V}_M)\otimes \mathcal{H}\surM$. It is a graded vector space endowed with two commuting boundary operators
\begin{equation}
\hat\Omega_{\partial M}\coloneqq\Id \otimes \Omega\surM\quad \text{and}\quad \hat\Delta_{\mathcal{V}_M}\coloneqq\Delta_{\mathcal{V}_M}\otimes\Id,
\end{equation}
where $\Delta_{\mathcal{V}_M}$ is the canonical BV Laplacian on half-densities on residual fields. However, by abuse of notation, we will still write $\Omega_{\partial M}$ whenever we actually mean $\hat{\Omega}_{\partial M}$. The same is done for the BV Laplacian.\\
We require the state to satisfy the \textit{modified Quantum Master Equation (mQME)}:
\begin{equation}
\label{ov:bv-bfv_mqme}
(\hbar^2\Delta_{\mathcal{V}_M}+\Omega\surM)\hat{\psi}_M=0
\end{equation}
\end{itemize}
\end{itemize}
\end{defn}
In the following, we will refer to a quantum BV-BFV theory with the shorthand notation
\[M\mapsto (\hat{\mathcal{H}}_M, \hat{\psi}_M,\Delta_{\mathcal{V}_M}, \Omega\surM).\]
\begin{rmk}
Since $\Delta_{\mathcal{V}_M}^2=0$ and $(\Omega\surM)^2=0$, the operators $\Omega\surM$ and $\Delta_{\mathcal{V}_M}$ endow $\hat{\mathcal{H}}_M$ with the structure of a bicomplex.
\end{rmk}
\begin{rmk}
Here we would like to make the terminology used in Definition \ref{def_Quantum BV-BFV theory} more precise by relating it to the literature. First of all, we call $\mathcal{H}_\Sigma$ the space of states because it is constructed by quantizing the symplectic manifold of boundary fields (as we will see below); an element of this space is thus called a state. It is produced by integrating over bulk fields. However, following Wilson's ideas, it is useful to split the contribution of bulk fields into ``low energy'' (or ``slow'') fields, which we refer to as residual fields, and a complement (usually called ``high energy'' or ``fluctuation'' fields) over which we integrate. Hence, our state will depend on both residual fields and boundary contributions. We have the following cases:
\begin{enumerate}
\item in the absence of residual fields, $\hat{\psi}_M$ is referred to as a state in \cite{Wit89},
\item when $M$ is a cylinder, $\hat{\psi}_M$ is an evolution operator,
\item in the absence of boundaries and residual fields, $\hat{\psi}_M$ is referred to as the partition function (see \eqref{ov.tft:part_funct_1}),
\item in the presence of both boundaries and residual fields, $\hat{\psi}_M$ will be a proper state only after we have integrated out the residual fields. We note that this is actually not always possible (see e.g. \cite{Mo20b} and references therein).
\end{enumerate}
Keeping in mind these possibilities, we still prefer to refer to $\hat{\psi}_M$ as a state.
\end{rmk}
\begin{defn}[Equivalence]
\label{bv-bfv:def_equiv}
Two quantum BV-BFV theories $(\hat{\mathcal{H}}_M, \hat{\psi}_M,\Delta_{\mathcal{V}_M}, \Omega\surM)$\\ and $(\hat{\mathcal{H'}}_M, \hat{\psi'}_M,\Delta_{\mathcal{V'}_M}, \Omega'\surM)$ are \textit{equivalent} if for every manifold $M$ with boundary $\partial M$ there is a quasi-isomorphism of bicomplexes
\begin{equation}
I_M: (\hat{\mathcal{H}}_M,\Delta_{\mathcal{V}_M}, \Omega\surM)\rightarrow (\hat{\mathcal{H'}}_M,\Delta_{\mathcal{V'}_M}, \Omega'\surM)
\end{equation}
such that $I_M(\hat{\psi}_M)=\hat{\psi'}_M$.
\end{defn}
\begin{defn}[Change of data]
\label{bv-bfv:def_change_data}
Two quantum BV-BFV theories $(\hat{\mathcal{H}}_M, \hat{\psi}_M,\Delta_{\mathcal{V}_M}, \Omega\surM)$ and $(\hat{\mathcal{H'}}_M, \hat\psi'_M,\Delta_{\mathcal{V'}_M}, \Omega'\surM)$ are related by change of data if there is an operator $\tau$ of degree $0$ on $\mathcal{H}\surM$ and an element $\chi\in\hat{\mathcal{H}}_M$ with $\deg(\chi)=\deg(\psi)-1$ such that
\begin{equation}
\begin{split}
\Omega'\surM&=[\Omega\surM, \tau],\\
\hat\psi'_M&=(\hbar^2\Delta_{\mathcal{V}_M}+\Omega\surM)\chi-\hat{\tau}\hat{\psi}_M,
\end{split}
\end{equation}
where $\hat{\tau}=\Id\otimes \tau$ is the extension of $\tau$ to $\hat{\mathcal{H}}_M$.
\end{defn}
\subsubsection{BV pushforward}
Let $(\mathcal{M}_1,\omega_1)$ and $(\mathcal{M}_2,\omega_2)$ be two graded manifolds with odd symplectic forms $\omega_1$ and $\omega_2$ and canonical Laplacians $\Delta_1$ and $\Delta_2$, respectively. Consider $\mathcal{M}=\mathcal{M}_1 \times \mathcal{M}_2$ with symplectic form $\omega=\omega_1+\omega_2$ and canonical Laplacian $\Delta$. The space of half-densities on $\mathcal{M}$ factorizes as
\begin{equation}
\Dens^{\frac12}(\mathcal{M})=\Dens^{\frac12}(\mathcal{M}_1)\hat{\otimes}\Dens^{\frac12}(\mathcal{M}_2).
\end{equation}
Performing a BV integration in the second factor, over a Lagrangian submanifold $\mathcal{L} \subset \mathcal{M}_2$, we can define a BV pushforward map on half-densities
\begin{equation}
\varint_{\mathcal{L}}: \Dens^{\frac12}(\mathcal{M})\xrightarrow{\Id\otimes \varint_{\mathcal{L}}}\Dens^{\frac12}(\mathcal{M}_1).
\end{equation}
This map is also called the \textit{fiber BV integral}; its properties are described by the following theorem.
\begin{thm}[Batalin--Vilkovisky--Schwarz]
\label{bv_push}
Let $(\mathcal{M}_1,\omega_1)$ and $(\mathcal{M}_2,\omega_2)$ be two graded manifolds with odd symplectic forms $\omega_1$ and $\omega_2$ and canonical Laplacians $\Delta_1$ and $\Delta_2$, respectively. Consider $\mathcal{M}=\mathcal{M}_1 \times \mathcal{M}_2$ with product symplectic form $\omega$ and canonical Laplacian $\Delta$, and let $\mathcal{L}, \mathcal{L}' \subset \mathcal{M}_2$ be any two Lagrangian submanifolds which can be deformed into each other. For any half-density $f \in \mathrm{Dens}^{\frac12}(\mathcal{M})$ one has:
\begin{enumerate}
\item $\varint_{\mathcal{L}}\Delta f=\Delta_1 \varint_{\mathcal{L}}f$
\item $\varint_{\mathcal{L}}f-\varint_{\mathcal{L}'}f=\Delta_1 \xi$ for some $\xi \in \mathrm{Dens}^{\frac12}(\mathcal{M}_1)$, provided $\Delta f=0$.
\end{enumerate}
\end{thm}
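\begin{rmk}
As a minimal illustration of Theorem \ref{bv_push} (a sketch in coordinates, ignoring half-density factors and convergence issues), take $\mathcal{M}_2=\mathbb{R}^{1|1}$ with an even coordinate $x$, an odd coordinate $\xi$, the odd symplectic form pairing $x$ with $\xi$, and canonical Laplacian $\Delta_2=\frac{\partial}{\partial x}\frac{\partial}{\partial \xi}$. A function decomposes as $f(x,\xi)=f_0(x)+\xi f_1(x)$, and $\Delta_2 f=0$ forces $f_1$ to be constant. The Lagrangian $\mathcal{L}=\{\xi=0\}$ can be deformed to $\mathcal{L}_\psi=\{\xi=\psi'(x)\}$ for a compactly supported gauge-fixing fermion $\psi$, and then
\begin{equation}
\varint_{\mathcal{L}_\psi}f=\varint_{\mathbb{R}}\big(f_0(x)+\psi'(x)f_1\big)\,dx=\varint_{\mathcal{L}}f,
\end{equation}
since $\varint_{\mathbb{R}}\psi'(x)\,dx=0$, in agreement with point (2) of the theorem.
\end{rmk}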
\subsubsection{Summary}
Let us explain here how to construct a quantum BV-BFV theory. Consider a classical BV-BFV theory $\pi: \mathcal{F}_M\rightarrow\mathcal{F}^{\partial}_{\partial M}$. Note that from now on we will assume $\mathcal{F}_M$ and $\mathcal{F}^{\partial}_{\partial M}$ to be vector spaces. This will be the case when we quantize the \textit{globalized split RW theory}.
The main steps can be summarized as follows:
\begin{enumerate}
\item[(i)]\textbf{(Geometric Quantization)} Given a $(d-1)$-manifold $\Sigma$, the BV-BFV theory associates to it a symplectic manifold $(\mathcal{F}^{\partial}_\Sigma, \omega^{\partial}_\Sigma, Q^{\partial}_\Sigma)$. The idea here is to construct the space of states $\mathcal{H}_\Sigma$ and the quantum BFV operator $\Omega_\Sigma$ as a \textit{geometric quantization}\footnote{For an introduction to geometric quantization see e.g. \cite{BW97}.} of this symplectic vector space\footnote{In fact, for the case when $\Sigma$ is given by the boundary of another manifold, the BFV operator is constructed by the methods of \emph{deformation quantization}, as was also pointed out in \cite{Moshayedi2021}.}.
In order to accomplish such a task, we require the data of a polarization $\mathcal{P}$ on this symplectic vector space; in particular, we consider real fibrating polarizations. It is then sufficient to split $\mathcal{F}^{\partial}_\Sigma$ into Lagrangian subspaces as
\begin{equation}
\mathcal{F}^{\partial}_\Sigma\cong \mathcal{B}^{\mathcal{P}}_{\Sigma}\times \mathcal{K}^{\mathcal{P}}_\Sigma,
\end{equation}
with $\mathcal{K}^{\mathcal{P}}_\Sigma$ thought of as a Lagrangian distribution on $\mathcal{F}^{\partial}_\Sigma$ and $\mathcal{B}^{\mathcal{P}}_{\Sigma}$ identified with the leaf space of the polarization, i.e. $\mathcal{B}^{\mathcal{P}}_{\Sigma}=\mathcal{F}^{\partial}_\Sigma/\mathcal{P}$. If we assume the 1-form $\alpha^{\partial}_\Sigma$ to vanish along $\mathcal{P}$, then, in the case of a real polarization, the space of states $\mathcal{H}_{\Sigma}$ is modeled as a space of complex-valued functionals on $\mathcal{B}^{\mathcal{P}}_{\Sigma}$ (or, more generally, $\mathcal{H}_{\Sigma}$ is the space of polarized sections of the trivial ``prequantum" line bundle over $\mathcal{F}^{\partial}_{\Sigma}$). This means that the space of states is obtained as a geometric quantization of the space of boundary fields, as announced above.
On the other hand, when $\alpha^{\partial}_\Sigma \Big|_{\mathcal{P}}\neq 0$, we can use a gauge transformation and modify $\alpha^{\partial}_\Sigma$ by an exact term $\delta f^{\mathcal{P}}_\Sigma$, with $f^{\mathcal{P}}_\Sigma$ a local functional. Consequently, assuming from now on $\Sigma=\partial M$, in order to preserve Eq. (\ref{cme_boundary}) we change $\Sc$ by a boundary term, obtaining $\Sc^{\mathcal{P}}$. In this case, with $\Sc^{\mathcal{P}}$ and $\alpha^{\mathcal{P}}_{\partial M}$, we obtain a new BV-BFV manifold.
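To fix intuition, consider the following finite-dimensional analogy (not part of the construction above): take $\mathcal{F}^{\partial}_\Sigma=T^*\mathbb{R}^n$ with Darboux coordinates $(q,p)$, symplectic form $\omega=\sum_i dp_i\wedge dq^i$, and the vertical polarization $\mathcal{P}=\mathrm{span}\big(\frac{\partial}{\partial p_1},\dots,\frac{\partial}{\partial p_n}\big)$. The leaves are the fibers $\{q=\mathrm{const}\}$, playing the role of $\mathcal{K}^{\mathcal{P}}_\Sigma$; the leaf space is $\mathcal{B}^{\mathcal{P}}_{\Sigma}=\mathbb{R}^n$ with coordinates $q$; and the tautological 1-form $\alpha=\sum_i p_i\,dq^i$ vanishes along $\mathcal{P}$. The space of states then consists of complex-valued functions $\psi(q)$, i.e. one recovers the Schr\"odinger representation of quantum mechanics.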
\item[(ii)] \textbf{(Extraction of boundary fields)} The aim is to separate bulk and boundary field contributions in the space of fields $\mathcal{F}_M$. We proceed as follows: composing $\pi$ with the projection $p^{\mathcal{P}}_{\partial M}:\mathcal{F}^{\partial}_{\partial M}\rightarrow \mathcal{B}^{\mathcal{P}}_{\partial M}$, we obtain a surjective submersion
\begin{equation}
p^{\mathcal{P}}_{\partial M}\circ \pi:\mathcal{F}_M\rightarrow \mathcal{B}^{\mathcal{P}}_{\partial M}.
\end{equation}
Assume we can choose a section $\sigma$ of $\mathcal{F}_M\rightarrow \mathcal{B}^{\mathcal{P}}_{\partial M}$ such that we can split
\begin{equation}
\label{ov:bv-bfv_splitting}
\mathcal{F}_M\cong \sigma(\mathcal{B}^{\mathcal{P}}_{\partial M}) \times \mathcal{Y}.
\end{equation}
The space $\sigma(\mathcal{B}^{\mathcal{P}}_{\partial M})$ is a bulk extension of $\mathcal{B}^{\mathcal{P}}_{\partial M}$, which we denote by $\tilde{B}^{\mathcal{P}}_{\partial M}$.
This splitting is subject to the following assumption\footnote{In order to satisfy this assumption, we are forced to choose singular extensions of the boundary fields, which are thus extended by $0$ to the bulk.}:
\begin{assump}
\label{ov:bv-bfv_ass1}
There is a weakly symplectic form $\omega_{\mathcal{Y}}$ on $\mathcal{Y}$ such that $\omega_M$ is the extension of $\omega_{\mathcal{Y}}$ to $\mathcal{F}_M$.
\end{assump}
In the splitting (\ref{ov:bv-bfv_splitting}), the space $\mathcal{Y}$ is a complement of $\Tilde{B}^{\mathcal{P}}_{\partial M}$; it is interpreted as the space of \textit{bulk fields}, while $\Tilde{B}^{\mathcal{P}}_{\partial M}$ is thought of as the space of \textit{boundary fields} extended to the bulk.
\item[(iii)] \textbf{(Construction of $\Omega_{\partial M}$)} As a result of the geometric quantization procedure, $\mathcal{H}_{\partial M}$ is a cochain complex. Following the same line of thought, we construct the coboundary operator $\Omega_{\partial M}$ as a quantization of the boundary action $\Sc^{\partial}_{\partial M}$. We can proceed as follows. Assume we have Darboux coordinates $(q,p)$ on $\mathcal{F}^{\partial}_{\partial M}$; in particular, $q$ are coordinates on $\mathcal{B}^{\mathcal{P}}_{\partial M}$ and $p$ are coordinates on the fibers of $p^{\mathcal{P}}_{\partial M}:\mathcal{F}^{\partial}_{\partial M}\rightarrow \mathcal{B}^{\mathcal{P}}_{\partial M}$, which are still part of $\mathcal{Y}$. We define $\Omega_{\partial M}$ as the standard-ordering quantization of $\Sc^{\partial}_{\partial M}$:
\begin{equation}
\label{omega_std_ordering}
\Omega_{\partial M}\coloneqq \Sc^{\partial}_{\partial M}\bigg(q, -i\hbar \frac{\partial}{\partial q}\bigg),
\end{equation}
where all the derivatives are positioned on the right.
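As a simple illustration of \eqref{omega_std_ordering} (a sketch; the overall sign depends on conventions and on the degrees involved), consider abelian $BF$ theory with boundary action $\Sc^{\partial}_{\partial M}=\varint_{\partial M}\mathbb{B}_i\, d\mathbb{A}^i$ in the $\frac{\delta}{\delta \mathbf{B}}$-polarization, so that $q=\mathbb{A}$ and $p=\mathbb{B}$. The standard-ordering prescription then yields
\begin{equation}
\Omega_{\partial M}=-i\hbar\varint_{\partial M} d\mathbb{A}^i\,\frac{\delta}{\delta \mathbb{A}^i},
\end{equation}
which acts on functionals of $\mathbb{A}$ as a lift of the de Rham differential and squares to zero since $d^2=0$; interaction terms in the boundary action would produce higher-order corrections to $\Omega_{\partial M}$.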
\item[(iv)] \textbf{(Choice of residual fields)}
We further split the bulk contributions in $\mathcal{Y}$ into residual fields and a complement $\mathcal{Y}'$, which represents the space of \textit{fluctuation fields} (also called ``high-energy" or ``fast" fields). That is, we choose a splitting
\begin{equation}
\mathcal{Y}\cong \mathcal{V}^{\mathcal{P}}_M\times \mathcal{Y}'
\end{equation}
which depends on the boundary polarization and satisfies the following assumption:
\begin{assump}
\label{ov:bv-bfv_ass2}
The following holds:
\begin{enumerate}
\item[(1)] $\mathcal{V}^{\mathcal{P}}_M$ and $\mathcal{Y}'$ are BV manifolds,
\item[(2)] $\mathcal{V}^{\mathcal{P}}_M$ is finite-dimensional,
\item[(3)] The symplectic form splits as $\omega_{\mathcal{Y}}=\omega_{\mathcal{V}^{\mathcal{P}}_M}+\omega_{\mathcal{Y}'}$.
\end{enumerate}
\end{assump}
Usually, the space $\mathcal{V}^{\mathcal{P}}_M$ is chosen as the space of solutions of $\delta \Sc^0_M=0$ modulo gauge transformations, where $\Sc^0_M$ is the quadratic part of the action $\Sc_M$. This is called the minimal choice, and we refer to this space as the space of \textit{zero modes}. Other choices are possible, and they are all related by the equivalence relations defined above (see Definitions \ref{bv-bfv:def_equiv} and \ref{bv-bfv:def_change_data}). Finally, we sum up the last two steps with the following definition:
\begin{defn}[Good splitting, \cite{CMoW19}]
\label{bv-bfv_good_split}
A splitting
\begin{equation}
\mathcal{F}_M\cong \mathcal{B}^{\mathcal{P}}_{\partial M}\times \mathcal{V}^{\mathcal{P}}_M\times \mathcal{Y}'
\end{equation}
is called \textit{good} if it satisfies Assumptions \ref{ov:bv-bfv_ass1} and \ref{ov:bv-bfv_ass2}.
\end{defn}
Given a good splitting, an element $\mathbf{X}$ of $\mathcal{F}_M$ is written accordingly as $\mathbf{X}=\mathbb{X}+\mathsf{x}+\xi$.
\item[(v)] \textbf{(The state)} Once we have a good splitting, the gauge-fixing consists of choosing a Lagrangian $\mathcal{L}\subset \mathcal{Y}'$. Set $\mathcal{Z}_M=\mathcal{B}^{\mathcal{P}}_{\partial M}\times \mathcal{V}^{\mathcal{P}}_M$ (the \textit{bundle of residual fields} over $\mathcal{B}^{\mathcal{P}}_{\partial M}$). Then, we define $\hat{\mathcal{H}}^{\mathcal{P}}_M\coloneqq\Dens^{\frac12}(\mathcal{Z}_M)= \Dens^{\frac12}(\mathcal{V}^{\mathcal{P}}_M)\hat{\otimes} \Dens^{\frac12}(\mathcal{B}^{\mathcal{P}}_{\partial M})$ and the BV Laplacian $\hat{\Delta}_{\mathcal{V}^{\mathcal{P}}_M}\coloneqq\Id\otimes \Delta_{\mathcal{V}^{\mathcal{P}}_M}$ (as before, the hat will be omitted in the following).
The state is then defined as a BV pushforward of the exponential of the bulk action
\begin{equation}
\label{ov.bv-bfv_state}
\hat{\psi}_M(\mathbb{X},\mathsf{x})\coloneqq\varint_{\xi\in \mathcal{L}}e^{\frac{i}{\hbar}\Sc_M(\mathbb{X}+\mathsf{x}+\xi)}.
\end{equation}
Moreover, if $\Delta_{\mathcal{Y}}\Sc^{\mathcal{P}}_M=0$, as a consequence of Theorem \ref{bv_push}:
\begin{itemize}
\item $\hat{\psi}_M$ is closed under the coboundary operator $\hbar^2\Delta_{\mathcal{V}^{\mathcal{P}}_M}+\Omega\surM$, i.e. Eq. (\ref{ov:bv-bfv_mqme}) holds,
\item the state does not change under smooth deformation of the gauge-fixing Lagrangian $\mathcal{L}$ used in the BV pushforward up to $(\hbar^2\Delta_{\mathcal{V}^{\mathcal{P}}_M}+\Omega\surM)$-exact terms.
\end{itemize}
\item[(vi)] \textbf{(Perturbative expansion)} The procedure detailed so far is valid in finite-dimensional situations. However, the space of fields $\mathcal{F}_M$ is usually infinite-dimensional, since, for example, it can contain the de Rham complex of differential forms on $M$. As a result, the integral in Eq. (\ref{ov.bv-bfv_state}) is ill-defined. To fix this problem, we \textit{define} the integral perturbatively, i.e. as a formal power series in $\hbar$ whose coefficients are given by sums of Feynman diagrams. For the perturbative expansion to be well-defined, we need the following assumption to be satisfied:
\begin{assump}
\label{ov.bv-bfv_ass3}
The restriction of the action $\Sc^{\mathcal{P}}_M$ to $\mathcal{L}$ has isolated critical points.
\end{assump}
We note that this does not hold for every Lagrangian.
\begin{rmk}
It is important to highlight that, for Assumption \ref{ov.bv-bfv_ass3} to be satisfied, we need to choose the residual fields carefully. The problem here is represented by the \textit{zero modes} $\mathcal{V}^0_M$, which can be present in the quadratic part of the bulk action. The zero modes are bulk field configurations that are annihilated by the kinetic operator and correspond to the tangent directions to the Euler--Lagrange moduli space (solutions of $\delta \Sc^0_M=0$ modulo gauge transformations). Hence, their presence implies non-isolated critical points of the action, and the perturbative expansion is obstructed. To remedy this, we need the space of residual fields to at least contain the space of zero modes, i.e. $\mathcal{V}^0_M\subseteq \mathcal{V}_M$. In this way, we can obtain a good gauge-fixing Lagrangian which satisfies Assumption \ref{ov.bv-bfv_ass3}. When $\mathcal{V}_M\cong\mathcal{V}^0_M$, we call this the \textit{minimal choice} (or \textit{minimal realization} of the state)\footnote{We have a non-minimal realization when $\mathcal{V}^0_M\subset \mathcal{V}_M$. In that case, we can pass from a non-minimal realization to a smaller one by a BV pushforward, which can be interpreted as a sort of \textit{renormalization group flow} \cite{Ir18}.}.
\end{rmk}
\noindent When we pass to the infinite-dimensional case, another problem arises: the BV Laplacian is ill-defined. Therefore, every equation containing it is only \textit{formal}. In particular, Theorem \ref{bv_push} has only been proven in the finite-dimensional setting. Hence, we cannot conclude that the mQME is satisfied even if the action is formally annihilated by the Laplacian. The mQME has to be verified for each theory at the level of Feynman diagrams.
In this paper, we add the globalized RW theory to the class of $BF$-like theories for which the mQME has been proven in the infinite-dimensional perturbative setting. The proof relies on Stokes' Theorem for integrals over compactified configuration spaces.
\end{enumerate}
\subsection{Quantum states in $BF$-like theories}
\label{bv-bfv_sub_qs_bf_like}
In $BF$-like theories one can define the quantum state perturbatively, using Feynman graphs via integrals defined on the configuration spaces of these graphs. Two convenient choices of polarization in $BF$-like theories are the $\frac{\delta}{\delta \mathbf{A}}$- and the $\frac{\delta}{\delta \mathbf{B}}$-polarization. Concretely, we fix a polarization by splitting the boundary $\partial M$ into two parts $\partial_1 M$ and $\partial_2M$, where we choose the $\frac{\delta}{\delta \mathbf{B}}$-polarization on $\partial_1M$ and the $\frac{\delta}{\delta \mathbf{A}}$-polarization on $\partial_2M$. Elements of the associated leaf spaces are denoted by $\mathbb{A} \in \mathcal{B}_{\partial M}^{\frac{\delta}{\delta \mathbf{B}}}$ and $\mathbb{B} \in \mathcal{B}_{\partial M}^{\frac{\delta}{\delta \mathbf{A}}}$, respectively.
For $BF$-like theories, the first splitting determined by the polarization is
\begin{equation}
\begin{split}
\mathcal{B}^{\mathcal{P}}_{\partial M}&=\big(\Omega^\bullet (\partial_1M)\otimes V[1]\big) \oplus \big(\Omega^\bullet(\partial_2M)\otimes V^{\vee}[d-2]\big), \\
\mathcal{Y}&=\big(\Omega^\bullet(M,\partial_1M) \otimes V[1]\big) \oplus \big(\Omega^\bullet(M,\partial_2M)\otimes V^{\vee}[d-2]\big).
\end{split}
\end{equation}
The minimal space of residual fields is
\begin{equation}
\mathcal{V}^{\mathcal{P}}_M \cong \big(H^\bullet (M,\partial_1M) \otimes V[1]\big) \oplus \big(H^\bullet (M,\partial_2M)\otimes V^{\vee}[d-2]\big)
\end{equation}
for $V$ some graded vector space. One way to get a good splitting is then to split the complex of de Rham forms with relative boundary conditions into a subspace $\mathcal{V}^{\mathcal{P}}_M$ isomorphic to cohomology and a complementary space $\mathcal{Y}'$, in a way compatible with the symplectic structure. This can be done by using a Riemannian metric and embedding the cohomology as harmonic forms.
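\begin{rmk}
Concretely, such a splitting can be obtained from Hodge theory (a standard construction, see e.g. \cite{CMR17}; here we assume a Riemannian metric $g$ on $M$ adapted to the boundary conditions). One has the decomposition
\begin{equation}
\Omega^\bullet(M,\partial_i M)\cong \mathcal{H}^\bullet_g(M,\partial_i M)\oplus d\,\Omega^{\bullet-1}(M,\partial_i M)\oplus d^*\Omega^{\bullet+1}(M,\partial_i M),
\end{equation}
where $\mathcal{H}^\bullet_g(M,\partial_i M)$ denotes the harmonic forms satisfying the relevant boundary conditions, which represent the cohomology $H^\bullet(M,\partial_i M)$. One then embeds $\mathcal{V}^{\mathcal{P}}_M$ as the harmonic forms and takes $\mathcal{Y}'$ to be the sum of the exact and coexact parts; a natural gauge-fixing Lagrangian is the space of coexact forms $\mathcal{L}=d^*\Omega^{\bullet+1}$, for which the propagator is, roughly, the integral kernel of $d^*$ composed with the inverse of the Hodge Laplacian.
\end{rmk}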
As a result, the space of fields $\mathcal{F}_M$ splits as $\mathcal{F}_M=\mathcal{B}^{\mathcal{P}}_{\partial M} \times \mathcal{V}^{\mathcal{P}}_M \times \mathcal{Y}'$, where an element $(\mathbf{A}, \mathbf{B}) \in \mathcal{F}_M$ is given by
\begin{equation}
\begin{split}
\mathbf{A}&=\mathbb{A} + \underline{\mathbf{A}}=\mathbb{A} +\mathsf{a} + \alpha,\\
\mathbf{B}&=\mathbb{B}+\underline{\mathbf{B}}=\mathbb{B} + \mathsf{b} + \beta.
\end{split}
\end{equation}
There is one last ingredient that we need to introduce before defining the quantum state, namely the \textit{composite fields}. We denote them by square brackets $[ \hspace{2mm}]$, i.e. for a boundary field $\mathbb{A}$ we have $[\mathbb{A}^{i_1} \dots \mathbb{A}^{i_k}]$. One can think of them as a \textit{regularization} of higher functional derivatives, in the sense
that a higher functional derivative $\frac{\delta^k}{\delta \mathbb{A}^{i_1}\dots \delta\mathbb{A}^{i_k}}$ is replaced by the first-order functional derivative $\frac{\delta}{\delta[\mathbb{A}^{i_1}\dots \mathbb{A}^{i_k}]}$. For further details see \cite{CMR17}.
\begin{defn}[Regular functional]
\label{regularfct}
A \textit{regular functional} on the space of base boundary fields is a linear combination of expressions of the form
\begin{equation}
\label{ov:bv-bfv.reg_func}
\begin{split}
\varint_{\mathrm{C}_{m_1}(\partial_1M)\times\mathrm{C}_{m_2}(\partial_2M)} L^{J^1_1\dots J^{l_1}_1\dots J^1_2\dots J^{l_2}_2\dots}_{I^1_1\dots I^{r_1}_1\dots I^1_2\dots I^{r_2}_2\dots}\wedge \pi^*_1\prod^{r_1}_{j=1}\bigg[\mathbb{A}^{I^j_1}\bigg]\wedge \dots&\wedge \pi^*_{m_1}\prod^{r_{m_1}}_{j=1}\bigg[\mathbb{A}^{I^j_{m_1}}\bigg]\wedge \dots \\
&\wedge \pi^*_1\prod^{l_1}_{j=1}\bigg[\mathbb{B}_{J^j_1}\bigg]\wedge \dots \wedge \pi^*_{m_2}\prod^{l_{m_2}}_{j=1}\bigg[\mathbb{B}_{J^j_{m_2}}\bigg],
\end{split}
\end{equation}
where $I^j_i$ and $J^j_i$ are (target) multi-indices and $L^{J^1_1\dots J^{l_1}_1\dots J^1_2\dots J^{l_2}_2\dots}_{I^1_1\dots I^{r_1}_1\dots I^1_2\dots I^{r_2}_2\dots}$ is a smooth differential form on the direct product of compactified configuration spaces $\mathrm{C}_{m_1}(\partial_1M)\times \mathrm{C}_{m_2}(\partial_2M)$, which depends on the residual fields. A regular functional is called \textit{principal} if all multi-indices have length 1. For more details on configuration spaces and configuration space integrals we refer to \cite{Kontsevich1994,BC,CamposIdrissiLambrechtsWillwacher2018} (see also Remark \ref{rem:conf}).
\end{defn}
\begin{defn}[Full space of boundary states]
The \textit{full space of boundary states} $\mathcal{H}^{\mathcal{P}}_{\partial M}$ consists of linear combinations of regular functionals of the form (\ref{ov:bv-bfv.reg_func}).
\end{defn}
\begin{defn}[Principal space of boundary states] The \textit{principal space of boundary states} $\mathcal{H}_{\partial M}^{\mathcal{P}, \text{princ}}$ is defined as the subspace of $\mathcal{H}^{\mathcal{P}}_{\partial M}$ where we only consider principal regular functionals.
\end{defn}
We use Feynman rules and graphs to define the state. Let us spell them out in the BV-BFV setting (for perturbations of abelian $BF$-theory).
\begin{defn}[($BF$) Feynman graph]
A \textit{($BF$) Feynman graph} is an oriented graph with three types of vertices, $V(\Gamma)=V_{\text{bulk}}(\Gamma)\sqcup V_{\partial_1}(\Gamma)\sqcup V_{\partial_2}(\Gamma)$, called bulk vertices and type 1 and type 2 boundary vertices, such that
\begin{itemize}
\item bulk vertices can have any valence,
\item type 1 boundary vertices carry any number of incoming half-edges (and no outgoing half-edges),
\item type 2 boundary vertices carry any number of outgoing half-edges (and no incoming half-edges),
\item multiple edges and loose half-edges (leaves) are allowed.
\end{itemize}
\end{defn}
A labeling of a Feynman graph is a function from the set of half-edges to $\{1,\dots,\dim V\}$.
\begin{defn}[Principal graph] A Feynman graph is called \textit{principal} if all boundary vertices (of type 1 and type 2) are univalent or zero-valent.
\end{defn}
Let $\Gamma$ be a Feynman graph and $M$ a manifold with boundary $\partial M=\partial_1 M \sqcup \partial_2M$ and define
\begin{equation}
\mathrm{Conf}_\Gamma(M)\coloneqq \mathrm{Conf}_{V_{\text{bulk}}}(M) \times \mathrm{Conf}_{V_{\partial_1}}(\partial_1 M) \times \mathrm{Conf}_{V_{\partial_2}}(\partial_2 M).
\end{equation}
The Feynman rules are given by a map associating to a Feynman graph $\Gamma$ a differential form $\omega_\Gamma \in \Omega^\bullet(\mathrm{Conf}_\Gamma(M))$.
\begin{defn}[($BF$) Feynman rules] Let $\Gamma$ be a labeled Feynman graph. We choose a configuration $\iota:V(\Gamma) \rightarrow M$, i.e. a point of $\mathrm{Conf}_\Gamma(M)$, such that the decomposition of the vertices is respected. Then, we \textit{decorate} the graph according to the following rules, the \textit{Feynman rules}:
\begin{itemize}
\item Bulk vertices in $M$ are decorated by ``vertex tensors"
\begin{equation}
\mathcal{V}^{i_1\dots i_s}_{j_1\dots j_t} \coloneqq \frac{\partial^{s+t}}{\partial \underline{\mathbf{A}}^{i_1}\dots\partial \underline{\mathbf{A}}^{i_s} \partial \underline{\mathbf{B}}_{j_1}\dots \partial \underline{\mathbf{B}}_{j_t}} \bigg|_{\underline{\mathbf{A}}=\underline{\mathbf{B}}=0} \mathcal{V}(\underline{\mathbf{A}},\underline{\mathbf{B}}),
\end{equation}
where $s, t$ are the out- and in-valences of the vertex, $i_1, \dots, i_s$ and $j_1, \dots, j_t$ are the labels of the outgoing and incoming half-edges, and $\mathcal{V}(\underline{\mathbf{A}},\underline{\mathbf{B}})$ is the interaction term of the $BF$-like theory.
\item Boundary vertices $v \in V_{\partial_1}(\Gamma)$ with incoming half-edges labeled $i_1, \dots, i_k$ and no outgoing half-edges are decorated by a composite field $[\mathbb{A}^{i_1} \dots \mathbb{A}^{i_k}]$ evaluated at the point (vertex location) $\iota(v)$ on $\partial_1 M$.
\item Boundary vertices $v \in V_{\partial_2}(\Gamma)$ on $\partial_2 M$ with outgoing half-edges labeled $j_1, \dots, j_l$ and no incoming half-edges are decorated by $[\mathbb{B}_{j_1} \dots \mathbb{B}_{j_l}]$ evaluated at the point $\iota(v)$ on $\partial_2 M$.
\item Edges between vertices $v_1, v_2$ are decorated with the propagator $\eta (\iota(v_1),\iota(v_2))\cdot \delta^i_j$, with $\eta$ the propagator induced by $\mathcal{L} \subset \mathcal{Y}'$, the gauge-fixing Lagrangian.
\item Loose half-edges (leaves) attached to a vertex $v$ and labeled $i$ are decorated with the residual fields $\mathsf{a}^i$ (for out-orientation) or $\mathsf{b}^i$ (for in-orientation), evaluated at the point $\iota(v)$.
\end{itemize}
\begin{figure}[hbt!]
\centering
\begin{subfigure}[b]{0.25\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=1pt}]
\vertex[blob] (m) at (0,-2) {$\mathsf{a}$};
\vertex (a) at (0,0) {$x$} ;
\diagram*{
(a) -- [fermion, edge label' = $i$] (m)
};
\vertex [right=3em of m] {\(=\mathsf{a}^i(x)\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex[blob] (m) at (0,-2) {$\mathsf{b}$};
\vertex (a) at (0,0) {$x$} ;
\diagram*{
(m) -- [fermion, edge label = $i$] (a)
};
\vertex [right=3em of m] {\(=\mathsf{b}^i(x)\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (0,0) {$x$};
\vertex (b) at (2,0) {$y$} ;
\diagram*{
(a) -- [fermion, edge label = $i\hspace{5mm} j$] (b)
};
\vertex [right=3em of b] {\(=\delta^i_j\eta(x,y)\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}
\caption{Feynman rules for residual fields and propagator.}
\label{bv-bfv:fig_feyn_rules1}
\end{figure}
\begin{figure}[hbt!]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (-1,0);
\vertex (b) at (1,0);
\vertex (m1) at (0, 1);
\diagram*{
(a) -- m [dot] -- (b),
(m1) -- [fermion] m
};
\vertex [below=0.75em of m] {\(\mathbb{B}\)};
\vertex [left=0.25em of a] {\(\partial_2 \Sigma_3\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\quad
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (-1,0);
\vertex (b) at (1,0);
\vertex (m1) at (0, -1);
\diagram*{
(a) -- m [dot] -- (b),
m -- [fermion] (m1)
};
\vertex [above=0.75em of m] {\(\mathbb{A}\)};
\vertex [left=0.25em of a] {\(\partial_1 \Sigma_3\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (-1.2, 0.35);
\vertex (b) at (-0.3, 0.79);
\vertex (c) at (+1.2, 0.4);
\vertex (e) at (-1.2, -0.35);
\vertex (f) at (-0.3, -0.79);
\vertex (g) at (+1.2, -0.4);
\diagram*{
(a) -- [fermion] m [dot],
(b) -- [fermion] m [dot],
(c) -- [fermion] m [dot],
(e) -- [anti fermion] m [dot],
(f) -- [anti fermion] m [dot],
(g) -- [anti fermion] m [dot],
};
\vertex [right=7em of m] {\(=\mathcal{V}^{i_1\dots i_s}_{j_1\dots j_t}\)};
\vertex [above=0.2em of m, label=80:\(\dots\)] {};
\vertex [] at (-1.35,0.4) {\(j_1\)};
\vertex [] at (-0.45,1) {\(j_2\)};
\vertex [] at (1.4, 0.4) {\(j_t\)};
\vertex [] at (-1.35,-0.4) {\(i_1\)};
\vertex [] at (-0.45,-1) {\(i_2\)};
\vertex [] at (1.4, -0.4) {\(i_s\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\caption{Feynman rules for boundary fields and interaction vertices.}
\label{bv-bfv:fig_feyn_rules2}
\end{figure}
\begin{figure}[hbt!]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (-2,0);
\vertex (b) at (2,0);
\vertex (d) at (0.4, 0.75);
\vertex (e) at (1, 0.75);
\vertex (f) at (-1, 0.75);
\diagram*{
(a) -- m [dot] -- (b),
(d) -- [anti fermion] m,
(e) -- [anti fermion] m,
(f) -- [anti fermion] m
};
\vertex [below=0.75em of m] {\([\mathbb{A}^{i_1}\dots \mathbb{A}^{i_k}]\)};
\vertex [left=0.25em of a] {\(\partial_1 \Sigma_3\)};
\node at (-0.2, 0.5) {$\dots$};
\node at (-1.1, 0.6) {$i_k$};
\node at (0.5, 1) {$i_2$};
\node at (1.2, 0.9) {$i_1$};
\end{feynman}%
\end{tikzpicture}
\caption{}
\label{fig:comp_fields}
\end{subfigure}%
\qquad \begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (-2,0);
\vertex (b) at (2,0);
\vertex (d) at (0.4, 0.75);
\vertex (e) at (1, 0.75);
\vertex (f) at (-1, 0.75);
\diagram*{
(a) -- m [dot] -- (b),
(d) -- [fermion] m,
(e) -- [fermion] m,
(f) -- [fermion] m
};
\vertex [below=0.75em of m] {\([\mathbb{B}_{j_1}\dots \mathbb{B}_{j_l}]\)};
\vertex [left=0.25em of a] {\(\partial_2 \Sigma_3\)};
\node at (-0.2, 0.5) {$\dots$};
\node at (-1.1, 0.6) {$j_l$};
\node at (0.5, 1) {$j_2$};
\node at (1.2, 0.9) {$j_1$};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\caption{Feynman rules for the composite fields.}
\label{fig:comp_fields_old}
\end{figure}
The differential forms given by the decorations are denoted collectively by $\omega_d$. The differential form $\omega_\Gamma$ at the configuration $\iota$ is then defined as the product of all decorations, summed over all labellings:
\begin{equation}
\omega_\Gamma=\sum_{\text{labellings of $\Gamma$}}\;\,\, \prod_{{\text{decorations $d$ of $\Gamma$}}} \omega_d.
\end{equation}
The Feynman rules are represented in Figs. \ref{bv-bfv:fig_feyn_rules1}, \ref{bv-bfv:fig_feyn_rules2} and \ref{fig:comp_fields_old}.
\end{defn}
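\begin{rmk}
As a simple consistency check of the rules above, consider the principal graph (call it $\Gamma_0$) with a single type 2 boundary vertex on $\partial_2M$, a single type 1 boundary vertex on $\partial_1M$, and one edge connecting them. The rules decorate the vertices with $\mathbb{B}_i$ and $\mathbb{A}^j$ and the edge with $\eta\,\delta^i_{\; j}$, so that, summing over labels,
\begin{equation}
\omega_{\Gamma_0}=\pi^*_1\mathbb{B}_i\,\eta^i_{\; j}\,\pi^*_2\mathbb{A}^j\in \Omega^\bullet(\partial_2M\times \partial_1M),
\end{equation}
with $\eta^i_{\; j}=\eta\,\delta^i_{\; j}$. Since $\loops(\Gamma_0)=0$, this graph contributes precisely the propagator term of the effective action $\Sc^{\, \text{eff}}$ appearing below.
\end{rmk}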
\begin{rmk}[Configuration spaces]\label{rem:conf}
We will exploit the Fulton--MacPherson/Axelrod--Singer compactification of configuration spaces on manifolds with boundary (FMAS compactification \cite{FM94,AS94}). Axelrod and Singer proved, via non-trivial analytic tools, that the propagator, a priori defined only on the open configuration space $\mathrm{Conf}_2 (M)$, extends to the compactification $\mathrm{C}_2(M)$. This implies that $\omega_\Gamma$, for every Feynman graph $\Gamma$, also extends to the compactification $\mathrm{C}_\Gamma (M)$ of $\mathrm{Conf}_\Gamma(M)$. Since the compactification only adds strata of positive codimension, the integrals are unchanged, and hence the integrals appearing in Eq. \eqref{principalqs} below are finite. In addition, one can exploit the combinatorics of the stratification for various computations using Stokes' theorem.
\end{rmk}
\begin{defn}[Principal quantum state]
Let $M$ be a manifold (with boundary). Given a $BF$-like BV-BFV theory $\pi_M:\mathcal{F}_M \rightarrow \mathcal{F}^\partial_{\partial M}$, a polarization $\mathcal{P}$ on $\mathcal{F}^\partial_{\partial M}$, a good splitting $\mathcal{F}_M=\mathcal{B}^{\mathcal{P}}_{\partial M} \times \mathcal{V}^{\mathcal{P}}_M \times \mathcal{Y}'$ and a gauge-fixing Lagrangian $\mathcal{L} \subset \mathcal{Y}'$, we define the \textit{principal part of the quantum state} by the formal power series
\begin{equation}
\label{principalqs}
\hat{\psi}_{M}(\mathbb{A},\mathbb{B}; \mathsf{a},\mathsf{b}) \coloneqq T_{M}\exp\bigg(\frac{i}{\hbar}\sum_{\Gamma}\frac{(-i\hbar)^{\loops(\Gamma)}}{|\Aut(\Gamma)|}\varint_{\text{C}_\Gamma(M)}\omega_{\Gamma}(\mathbb{A}, \mathbb{B}; \mathsf{a}, \mathsf{b})
\bigg),
\end{equation}
where for an element $(\mathbf{A}, \mathbf{B}) \in \mathcal{F}_M$, we denote the split by
\begin{equation}
\begin{split}
\mathbf{A}&=\mathbb{A} + \mathsf{a} + \alpha,\\
\mathbf{B}&=\mathbb{B} + \mathsf{b} + \beta.
\end{split}
\end{equation}
The sum runs over all connected, oriented, principal $BF$ Feynman graphs $\Gamma$, $\Aut(\Gamma)$ denotes the set of all automorphisms of $\Gamma$, and $\loops(\Gamma)$ denotes the number of loops of $\Gamma$. The coefficient $T_M$ is related to the Reidemeister torsion of $M$; its exact expression is not needed in our context.
\end{defn}
\begin{rmk}
The formal power series in (\ref{principalqs}) is the definition of the formal perturbative expansion
of the BV integral
\begin{equation}
\hat{\psi}_M(\mathbb{A},\mathbb{B}; \mathsf{a},\mathsf{b})=\varint_{\mathcal{L}\subset \mathcal{Y}'}e^{\frac{i}{\hbar}\mathcal{S}_M(\mathbf{A},\mathbf{B})} \in \hat{\mathcal{H}}_M^{\mathcal{P}} \coloneqq \hat{\mathcal{H}}_{\partial M}^{\mathcal{P}} \otimes \Dens^{\frac12}(\mathcal{V}_M^{\mathcal{P}}).
\end{equation}
\end{rmk}
Given a good splitting (see Definition \ref{bv-bfv_good_split}), the action can be decomposed as (\cite{CMR17})
\begin{equation}
\Sc_M=\hat{\Sc}_M+\hat{\Sc}^{\, \text{pert}}_M+\Sc^{\, \text{res}}+\Sc^{\, \text{source}},
\end{equation}
where
\begin{equation}
\begin{split}
\hat{\Sc}_M&=\varint_{\Sigma_3}\beta_id\alpha^i,\\
\hat{\Sc}^{\, \text{pert}}_M&=\varint_{\Sigma_3} \mathcal{V}(\mathsf{a}+\alpha, \mathsf{b}+\beta),\\
\Sc^{\, \text{res}}&= - \bigg(\varint_{\partial_2\Sigma_3}\mathbb{B}_i\mathsf{a}^i-\varint_{\partial_1\Sigma_3}\mathsf{b}_i\mathbb{A}^i\bigg),\\
\Sc^{\, \text{source}}&= - \bigg(\varint_{\partial_2\Sigma_3}\mathbb{B}_i\alpha^i-\varint_{\partial_1\Sigma_3}\beta_i\mathbb{A}^i\bigg).\\
\end{split}
\end{equation}
\begin{rmk}
We can rewrite
\begin{equation}
\hat{\psi}_M(\mathbb{A},\mathbb{B}; \mathsf{a},\mathsf{b})=T_{M}\big\langle e^{\frac{i}{\hbar}(\Sc^{\text{res}}+\Sc^{\text{source}})}\big\rangle=T_{M}e^{\frac{i}{\hbar}\Sc^{\text{eff}}},
\end{equation}
where $\langle\,\cdot\,\rangle$ denotes the expectation value with respect to the bulk theory $\hat{\Sc}_M+\hat{\Sc}^{\, \text{pert}}_M$ and
\begin{equation}
\Sc^{\, \text{eff}}=- \bigg(\varint_{\partial_2M}\mathbb{B}_i\mathsf{a}^i-\varint_{\partial_1M}\mathsf{b}_i\mathbb{A}^i\bigg)+\varint_{\partial_2M\times\partial_1M}\pi^*_1\mathbb{B}_i\eta^i_{\; j} \pi^*_2\mathbb{A}^j.
\end{equation}
Note that the \textit{effective action} appears because we sum over connected graphs.
\end{rmk}
We are now interested in constructing a product on the full state space using composite fields. We define the \textit{bullet product}:
\begin{equation}
\label{bulletproof}
\begin{split}
&\varint_{\partial_1M} u_i \wedge \mathbb{A}^i \bullet \varint_{\partial_1M} v_j \wedge \mathbb{A}^j \coloneqq \\
&(-1)^{\mid \mathbb{A}^i \mid (d-1+\mid v_j\mid)+\mid u_i\mid(d-1)} \bigg( \varint_{\mathrm{C}_2(\partial_1M)} \pi_1^*u_i \wedge \pi_2^*v_j \wedge \pi_1^*\mathbb{A}^i \wedge \pi_2^*\mathbb{A}^j+\varint_{\partial_1M}u_i \wedge v_j \wedge [\mathbb{A}^i \mathbb{A}^j] \bigg ),
\end{split}
\end{equation}
with $u,v$ smooth differential forms depending on the bulk and residual fields.
\begin{rmk}
Consider the operator $\varint_{\partial_1M} F^{ij} \frac{\delta^2}{\delta \mathbb{A}^i \delta \mathbb{A}^j}$. It can be interpreted as $\varint_{\partial_1M} F^{ij} \frac{\delta}{\delta[\mathbb{A}^i\mathbb{A}^j]}$, and therefore, we have
\begin{equation}
\varint_{\partial_1M}F^{ij}\frac{\delta^2}{\delta \mathbb{A}^i \delta \mathbb{A}^j}\bigg (\varint_{\partial_1M} u_i \wedge \mathbb{A}^i \bullet \varint_{\partial_1M}v_j \wedge \mathbb{A}^j \bigg )= \varint_{\partial_1M} u_iv_jF^{ij},
\end{equation}
which matches our prediction.
\end{rmk}
\begin{defn}[Full quantum state]
\label{full_quantum_state}
Let $M$ be a manifold (with boundary). Given a $BF$-like BV-BFV theory $\pi_M:\mathcal{F}_M \rightarrow \mathcal{F}^\partial_{\partial M}$, a polarization $\mathcal{P}$ on $\mathcal{F}^\partial_{\partial M}$, a good splitting $\mathcal{F}_M=\mathcal{B}_{\partial M}^{\mathcal{P}} \times \mathcal{V}_M^{\mathcal{P}} \times \mathcal{Y}'$ and $\mathcal{L} \subset \mathcal{Y}'$, the gauge-fixing Lagrangian, we can define the \textit{full quantum state} by the formal power series
\begin{equation}
\boldsymbol{\hat{\psi}}_M(\mathbb{A},\mathbb{B};\mathsf{a},\mathsf{b})=T_{M}\exp\bigg(\frac{i}{\hbar}\sum_{\Gamma}\frac{(-i\hbar)^{\text{loops}(\Gamma)}}{|\Aut(\Gamma)|}\varint_{\text{C}_\Gamma(M)}\omega_{\Gamma}(\mathbb{A}, \mathbb{B}; \mathsf{a}, \mathsf{b})
\bigg).
\end{equation}
\end{defn}
\begin{rmk}Exploiting the bullet product in \eqref{bulletproof}, we can write the full quantum state as the expectation value
\begin{equation}
\boldsymbol{\hat{\psi}}_M(\mathbb{A},\mathbb{B};\mathsf{a},\mathsf{b})=T_{M}\big\langle e^{\frac{i}{\hbar}(\Sc^{\text{res}}+\Sc^{\text{source}})}_{\bullet}\big\rangle=T_{M}e^{\frac{i}{\hbar}\Sc^{\text{eff}}}_{\bullet},
\end{equation}
with $e_\bullet$ the exponential with respect to the bullet product.
\end{rmk}
\subsubsection{The BFV boundary operator}
Our next ingredient is the quantum BFV boundary operator for $BF$-like theories \cite{CMR17}. We will follow the same procedure as with the state, first writing its principal part and then extending it to a regularization using composite fields. One obtains the quantum BFV boundary operator via the quantization of the BFV action such that Theorem \ref{bv-bfv_thm_mqme} is satisfied.
\begin{defn}[Principal part of the BFV boundary operator] The \textit{principal part} of the BFV boundary operator is given by
\begin{equation}
\Omega^{\text{princ}}=\underbrace{\Omega_0^{\mathbb{A}}+\Omega_0^{\mathbb{B}}}_{\coloneqq \Omega_0}+\underbrace{\Omega_{\text{pert}}^{\mathbb{A}}+\Omega_{\text{pert}}^{\mathbb{B}}}_{\coloneqq \Omega_{\text{pert}}^{\text{princ}}},
\end{equation}
where
\begin{equation}
\begin{split}
\Omega_0^{\mathbb{A}} &\coloneqq (-1)^d i\hbar \varint_{\partial_1M} \bigg(d\mathbb{A} \frac{\delta}{\delta \mathbb{A}} \bigg),\\
\Omega_0^{\mathbb{B}} &\coloneqq (-1)^d i\hbar \varint_{\partial_2M} \bigg(d\mathbb{B} \frac{\delta}{\delta \mathbb{B}} \bigg),
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\Omega_{\text{pert}}^{\mathbb{A}}&\coloneqq \sum_{n,k \geq 0}\sum_{\Gamma_1'}\frac{(i \hbar)^{\text{loops($\Gamma_1')$}}}{\mid \Aut(\Gamma_1')\mid} \varint_{\partial_1M}\bigg(\sigma_{\Gamma_1'}\bigg)_{i_1\dots i_n}^{j_1\dots j_k}\wedge \mathbb{A}^{i_1}\wedge \dots \wedge \mathbb{A}^{i_n}\bigg((-1)^d i\hbar\frac{\delta}{\delta \mathbb{A}^{j_1}}\bigg)\dots \bigg((-1)^d i\hbar \frac{\delta}{\delta \mathbb{A}^{j_k}} \bigg),\\
\Omega_{\text{pert}}^{\mathbb{B}}&\coloneqq \sum_{n,k \geq 0}\sum_{\Gamma_2'}\frac{(i \hbar)^{\text{loops($\Gamma_2')$}}}{\mid \Aut(\Gamma_2')\mid} \varint_{\partial_2M}\bigg(\sigma_{\Gamma_2'}\bigg)_{i_1\dots i_n}^{j_1\dots j_k}\wedge \mathbb{B}_{j_1}\wedge \dots \wedge \mathbb{B}_{j_k}\bigg((-1)^d i\hbar\frac{\delta}{\delta \mathbb{B}_{i_1}}\bigg)\dots \bigg((-1)^d i\hbar \frac{\delta}{\delta \mathbb{B}_{i_n}} \bigg),
\end{split}
\end{equation}
where, for $\mathbb{F}_1=\mathbb{A}$, $\mathbb{F}_2=\mathbb{B}$ and $l \in \{1,2\}$, $\Gamma_l'$ runs over graphs with
\begin{itemize}
\item $n$ vertices on $\partial_lM$ of valence 1 with adjacent half-edges oriented inwards and decorated with boundary fields $\mathbb{F}_l^{i_1},\dots,\mathbb{F}_l^{i_n}$ all evaluated at the point of collapse $p\in \partial_lM$,
\item $k$ outward leaves if $l = 1$ and $k$ inward leaves if $l = 2$, decorated with variational derivatives
in boundary fields
\begin{equation}
(-1)^d i\hbar \frac{\delta}{\delta \mathbb{F}_l^{j_1}},\dots,(-1)^di\hbar\frac{\delta}{\delta\mathbb{F}_l^{j_k}}
\end{equation}
at the point of collapse,
\item no outward leaves if $l = 2$ and no inward leaves if $l = 1$ (graphs with them do not contribute).
\end{itemize}
\end{defn}
The form $\sigma_{\Gamma_l'}$ is obtained as the integral over the compactification $\tilde{\mathrm{C}}_{\Gamma_l'}(\mathbb{H}^d)$ of the open configuration space modulo scaling and translation, with $\mathbb{H}^d$ the $d$-dimensional upper half-space:
\begin{equation}
\sigma_{\Gamma_l'}=\varint_{\tilde{\mathrm{C}}_{\Gamma_l'}(\mathbb{H}^d)} \omega_{\Gamma_l'},
\end{equation}
where $\omega_{\Gamma_l'}$ is the product of limiting propagators at the point $p$ of collapse and vertex tensors.
Our goal now is to describe the BFV boundary operator with composite fields. For this, we introduce the following auxiliary concept.
Consider the action of $\Omega_0$ on the regular functional in (\ref{regularfct}): the term $L$ is replaced by $dL$, plus all the terms corresponding to the boundary of the configuration space. Since $L$ is smooth, its restriction to the boundary is smooth as well, and it can be integrated along the fibers, giving rise to a smooth form on the base configuration space. For example,
\begin{equation}
\Omega_0 \varint_{\partial_1M}L_{IJ} \wedge [\mathbb{A}^I] \wedge [\mathbb{A}^{J}]= \pm i \hbar \varint_{\partial_1M}dL_{IJ}\wedge [\mathbb{A}^I]\wedge [\mathbb{A}^J],
\end{equation}
\begin{equation}
\Omega_0\varint_{\mathrm{C}_2(\partial_1M)}L_{IJK}\wedge \pi_1^*([\mathbb{A}^I]\wedge[\mathbb{A}^J]) \wedge \pi_2^*[\mathbb{A}^K]=\pm i\hbar\varint_{\mathrm{C}_2(\partial_1M)}dL_{IJK}\wedge \pi_1^*([\mathbb{A}^I]\wedge[\mathbb{A}^J]) \wedge \pi_2^*[\mathbb{A}^K]\pm i\hbar\varint_{\partial_1M}\underline{L_{IJK}}\wedge [\mathbb{A}^I]\wedge [\mathbb{A}^J]\wedge [\mathbb{A}^K],
\end{equation}
with $\underline{L_{IJK}}=\pi^{\partial}_*L_{IJK}$, where $\pi^\partial : \partial \mathrm{C}_2(\partial_1M)\rightarrow\partial_1M$ is the canonical projection. \newline
For any two regular functionals $S_1$ and $S_2$ we can write
\begin{equation}
\Omega_0(S_1 \bullet S_2)=\Omega_0(S_1) \bullet S_2 \pm S_1 \bullet \Omega_0(S_2).
\end{equation}
The rest of the allowed generators are products of expressions of the following shape:
\begin{equation}
\varint_{\partial_1M}L^J_{I_1 \dots I_r}[\mathbb{A}^{I_1}]\wedge \dots \wedge [\mathbb{A}^{I_r}] \frac{\delta^{\mid J \mid}}{\delta [\mathbb{A}^J]},
\end{equation}
\begin{equation}
\varint_{\partial_2M}L^{J_1\dots J_l}_{I}[\mathbb{B}_{J_1}]\wedge \dots \wedge [\mathbb{B}_{J_l}] \frac{\delta^{\mid I \mid}}{\delta [\mathbb{B}_I]}.
\end{equation}
\begin{defn}[Full BFV boundary operator]
\label{full_bfv_def}
The \textit{full BFV boundary operator} is
\begin{equation}
\boldsymbol{\Omega}_{\partial M}\coloneqq\Omega_0 + \underbrace{\boldsymbol{\Omega}_{\text{pert}}^{\mathbb{A}} + \boldsymbol{\Omega}_{\text{pert}}^{\mathbb{B}}}_{\boldsymbol{\Omega}_{\text{pert}}},
\end{equation}
with
\begin{equation}
\begin{split}
\boldsymbol{\Omega}_{\text{pert}}^{\mathbb{A}}&\coloneqq \sum_{n,k \geq 0}\sum_{\Gamma_1'}\frac{(i \hbar)^{\text{loops($\Gamma_1')$}}}{\mid \Aut(\Gamma_1')\mid} \varint_{\partial_1M}\bigg(\sigma_{\Gamma_1'}\bigg)_{I_1\dots I_n}^{J_1\dots J_k}\wedge \mathbb{A}^{I_1}\wedge \dots \wedge \mathbb{A}^{I_n}\bigg((-1)^{kd} (i\hbar)^{k}\frac{\delta^{\mid J_1\mid + \dots +\mid J_k\mid}}{\delta [\mathbb{A}^{J_1} \dots \mathbb{A}^{J_k}]}\bigg),\\
\boldsymbol{\Omega}_{\text{pert}}^{\mathbb{B}}&\coloneqq \sum_{n,k \geq 0}\sum_{\Gamma_2'}\frac{(i \hbar)^{\text{loops($\Gamma_2')$}}}{\mid \Aut(\Gamma_2')\mid} \varint_{\partial_2M}\bigg(\sigma_{\Gamma_2'}\bigg)^{I_1\dots I_n}_{J_1\dots J_k}\wedge \mathbb{B}_{I_1}\wedge \dots \wedge \mathbb{B}_{I_n}\bigg((-1)^{kd} (i\hbar)^{k}\frac{\delta^{\mid J_1\mid + \dots +\mid J_k\mid}}{\delta [\mathbb{B}_{J_1} \dots \mathbb{B}_{J_k}]}\bigg),
\end{split}
\end{equation}
where, for $\mathbb{F}_1=\mathbb{A}$, $\mathbb{F}_2=\mathbb{B}$ and $l \in \{1,2\}$, $\Gamma_l'$ runs over graphs with
\begin{itemize}
\item $n$ vertices on $\partial_lM$, where vertex $s$ has valence $| I_s| \geq 1$, with adjacent half-edges oriented inwards and decorated with boundary fields $[\mathbb{F}_l^{I_1}],\dots,[\mathbb{F}_l^{I_n}]$ all evaluated at the point of collapse $p\in \partial_lM$,
\item $| J_1 |+ \dots+| J_k|$ outward leaves if $l = 1$ and $| J_1|+ \dots+| J_k |$ inward leaves if $l = 2$, decorated with variational derivatives
in boundary fields
\begin{equation}
(-1)^d i\hbar \frac{\delta}{\delta [\mathbb{F}_l^{J_1}]},\dots,(-1)^di\hbar\frac{\delta}{\delta[\mathbb{F}_l^{J_k}]}
\end{equation}
at the point of collapse,
\item no outward leaves if $l = 2$ and no inward leaves if $l = 1$ (graphs with them do not contribute).
\end{itemize}
\end{defn}
As before, the form $\sigma_{\Gamma_l'}$ can be obtained as the integral over the compactified configuration space $\tilde{\mathrm{C}}_{\Gamma_l'}(\mathbb{H}^d)$, given by
\begin{equation}
\sigma_{\Gamma_l'}=\varint_{\tilde{\mathrm{C}}_{\Gamma_l'}(\mathbb{H}^d)} \omega_{\Gamma_l'},
\end{equation}
where $\omega_{\Gamma_l'}$ is the product of limiting propagators at the point $p$ of collapse and vertex tensors.
\begin{thm}[\cite{CMR17}]
\label{bv-bfv_thm_mqme}
Let $M$ be a smooth manifold (possibly with boundary). Then the following statements are satisfied:
\begin{enumerate}
\item The full covariant state $\boldsymbol{\hat{\psi}}_M$ satisfies the \textit{modified Quantum Master Equation} (mQME):
\begin{equation}
(\hbar^2 \Delta_{\mathcal{V}_M} + \boldsymbol{\Omega}_{\partial M}) \boldsymbol{\hat{\psi}}_M=0.
\end{equation}
\item The full BFV boundary operator $\boldsymbol{\Omega}_{\partial M}$ squares to zero:
\begin{equation}
(\boldsymbol{\Omega}_{\partial M})^2=0.
\end{equation}
\item A change of propagator or residual fields leads to a theory related by change of data as in Definition \ref{bv-bfv:def_change_data}.
\end{enumerate}
\end{thm}
\subsection{AKSZ theories}
\label{subsec:AKSZ}
In \cite{AKSZ97}, Alexandrov, Kontsevich, Schwarz, and Zaboronsky presented a class of local field theories that are compatible with the BV construction, called \textit{AKSZ theories}. Compatibility here means that the constructed local actions are solutions to the CME. These theories thus form a subclass of BV theories. We describe here the essential concepts needed for the following sections\footnote{A short warning about the notation is appropriate here. In the previous chapters, we have mostly denoted by $M$ the source, whereas now we will denote the target by $M$. Moreover, before, the letter $\Sigma$ was reserved for a manifold with one dimension less than the target. From now on it will mostly be used for the source.}.
\begin{defn}[Differential graded symplectic manifold] A \textit{differential graded
symplectic manifold} of degree $k$ is a triple
\begin{equation}
(M,\Theta_M,\omega_M=d_M\alpha_M)
\end{equation}
with $M$ a $\mathbb{Z}$-graded manifold, $\Theta_M \in \mathcal{C}^{\infty}(M)$ a function on $M$ of degree $k+1$, $d_M$ the de Rham differential on $M$ and $\omega_M \in \Omega^2(M)$, an exact symplectic form of degree $k$ with primitive 1-form $\alpha_M \in \Omega^1(M)$, such that
\begin{equation}
(\Theta_M, \Theta_M)_{\omega_M}=0,
\end{equation}
where $(-,-)_{\omega_M}$ is the odd Poisson bracket induced by the symplectic form $\omega_M$.
\end{defn}
\begin{rmk}
We denote by $Q_M$ the Hamiltonian vector field of $\Theta_M$. Then the quadruple $(M,Q_M,\Theta_M,\omega_M=d_M\alpha_M)$ is also called a \textit{Hamiltonian $Q$-manifold}.
\end{rmk}
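A standard example from the AKSZ literature (for illustration only; it is not introduced in the text above) is the target underlying $d$-dimensional $BF$ theory. For a Lie algebra $\mathfrak{g}$ with an invariant pairing $\langle-,-\rangle$, one may take

```latex
% Hedged example: the Hamiltonian Q-manifold of d-dimensional BF theory.
% Coordinates: A of degree 1 on g[1], B of degree d-2 on g*[d-2].
\begin{equation}
M=\mathfrak{g}[1]\oplus\mathfrak{g}^*[d-2],\qquad
\omega_M=\langle d_M B, d_M A\rangle,\qquad
\Theta_M=\tfrac{1}{2}\langle B,[A,A]\rangle.
\end{equation}
```

Here $\omega_M$ has degree $(d-2)+1=d-1$ and $\Theta_M$ has degree $(d-2)+2=d=k+1$, so the triple is a differential graded symplectic manifold of degree $k=d-1$; the condition $(\Theta_M,\Theta_M)_{\omega_M}=0$ follows from the Jacobi identity of $\mathfrak{g}$.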
\subsubsection{AKSZ sigma models}
Let $\Sigma_d$ be a $d$-dimensional compact, oriented manifold and let $T[1]\Sigma_d$ be its shifted tangent bundle. We fix a Hamiltonian $Q$-manifold
\begin{equation}
(M,Q_M,\Theta_M,\omega_M=d_M \alpha_M)
\end{equation}
of degree $d-1$ for $d \geq 0$. The space of fields is defined as the mapping space of graded manifolds:
\begin{equation}
\mathcal{F}_{\Sigma_d} \coloneqq \Maps(T[1]\Sigma_d,M)
\end{equation}
where $\Maps$ denotes the mapping space. Our goal is to endow $\mathcal{F}_{\Sigma_d}$ with a $Q$-manifold structure, and to do this we consider the lifts of the de Rham differential $d_{\Sigma_d}$ on $\Sigma_d$ and of the cohomological vector field $Q_M$ on the target $M$ to the mapping space. Therefore, we get the following cohomological vector field
\begin{equation}
Q_{\Sigma_d} \coloneqq \hat{d}_{\Sigma_d}+\hat{Q}_M,
\end{equation}
with $\hat{d}_{\Sigma_d}$ and $\hat{Q}_M$ the corresponding lifts to the mapping space. We remark that we can see $d_{\Sigma_d}$ as the cohomological vector field on $T[1]\Sigma_d$. Consider the push-pull diagram
\begin{equation}
\mathcal{F}_{\Sigma_d} \xleftarrow{\text{p}} \mathcal{F}_{\Sigma_d} \times T[1]\Sigma_d \xrightarrow {\text{ev}} M,
\end{equation}
with $\mathrm{p}$ and $\mathrm{ev}$ the projection and evaluation map respectively. One can construct a \textit{transgression} map
\begin{equation}
\mathcal{T}_{\Sigma_d} \coloneqq \mathrm{p}_*\mathrm{ev}^*:\Omega^\bullet(M) \rightarrow \Omega^\bullet (\mathcal{F}_{\Sigma_d}).
\end{equation}
Note that the map $\mathrm{p}_*$ is given by fiber integration on $T[1]\Sigma_d$. As a next step, we endow the space of fields with a symplectic structure $\omega_{\Sigma_d}$, defined as:
\begin{equation}
\omega_{\Sigma_d} \coloneqq (-1)^d \mathcal{T}_{\Sigma_d}(\omega_M) \in \Omega^2(\mathcal{F}_{\Sigma_d}).
\end{equation}
Remarkably, we get a solution $\mathcal{S}_{\Sigma_d}$ of the CME, namely the BV action functional
\begin{equation}
\mathcal{S}_{\Sigma_d} \coloneqq \underbrace{ \iota_{{\hat{d}_{\Sigma_d}}}\mathcal{T}_{\Sigma_d}(\alpha_M)}_{\coloneqq \mathcal{S}_{\Sigma_d}^{\text{kin}}} \underbrace{ +\mathcal{T}_{\Sigma_d}(\Theta_M)}_{\coloneqq \mathcal{S}_{\Sigma_d}^{\text{target}}}\in \mathcal{C}^\infty(\mathcal{F}_{\Sigma_d}).
\end{equation}
We can indeed check that
\begin{equation}
(\mathcal{S}_{\Sigma_d},\mathcal{S}_{\Sigma_d})_{\omega_{\Sigma_d}}=0.
\end{equation}
Note that the symplectic form $\omega_{\Sigma_d}$ has degree $(d-1)-d=-1$ as predicted and the action $\mathcal{S}_{\Sigma_d}$ has degree 0. Hence, this setting induces a BV manifold $(\mathcal{F}_{\Sigma_d}, \mathcal{S}_{\Sigma_d}, \omega_{\Sigma_d})$. Let $\{x^\mu\}$ and $\{u^i\}$ ($1 \leq i \leq d$) be local coordinates on $M$ and $\Sigma_d$, respectively. We denote the odd fiber coordinates of degree $+1$ on $T[1]\Sigma_d$ by $\theta^i=d_{\Sigma_d}u^i$. For a field $\mathbf{X} \in \mathcal{F}_{\Sigma_d}$ we then have the following local expression
\begin{equation}
\begin{split}
\mathbf{X}^\mu(u,\theta)=\sum_{l=0}^d\,\, \underbrace{\sum_{1 \leq i_1<\dots<i_l\leq d}\mathbf{X}^\mu_{i_1\dots i_l}(u) \theta^{i_1} \wedge \dots\wedge \theta^{i_l}}_{\mathbf{X}^\mu_{(l)}(u,\theta)} \in \bigoplus_{l=0}^d\mathcal{C}^\infty(\Sigma_d) \otimes \bigwedge\nolimits^lT^\vee \Sigma_d.
\end{split}
\end{equation}
The functions $\mathbf{X}^\mu_{i_1\dots i_l} \in \mathcal{C}^\infty(\Sigma_d)$ have degree $\deg(x^\mu)-l$ on $\mathcal{F}_{\Sigma_d}$. The symplectic form $\omega_M$ and its primitive 1-form $\alpha_M$ on $M$ are given by
\begin{equation}
\begin{split}
\alpha_M&=\alpha_\mu(x)d_Mx^\mu \in \Omega^1(M),\\
\omega_M&=\frac{1}{2}\omega_{\mu_1 \mu_2}(x)d_Mx^{\mu_1} d_Mx^{\mu_2} \in \Omega^2(M).
\end{split}
\end{equation}
Using the above equations, we locally get the following expressions for the BV symplectic form, its primitive 1-form and the BV action functional:
\begin{equation}
\begin{split}
\alpha_{\Sigma_d}&=\varint_{\Sigma_d}\alpha_\mu(\mathbf{X})\delta\mathbf{X}^\mu \in \Omega^1(\mathcal{F}_{\Sigma_d}),\\
\omega_{\Sigma_d}&=(-1)^d\frac{1}{2}\varint_{\Sigma_d}\omega_{\mu_1\mu_2}(\mathbf{X})\delta \mathbf{X}^{\mu_1} \delta \mathbf{X}^{\mu_2} \in \Omega^2(\mathcal{F}_{\Sigma_d}),\\
\mathcal{S}_{\Sigma_d}&=\varint_{\Sigma_d} \alpha_\mu(\mathbf{X})d_{\Sigma_d}\mathbf{X}^\mu + \varint_{\Sigma_d} \Theta_M(\mathbf{X}) \in \mathcal{C}^\infty(\mathcal{F}_{\Sigma_d}).
\end{split}
\end{equation}
We have denoted by $\delta$ the de Rham differential on $\mathcal{F}_{\Sigma_d}$. Using Darboux coordinates on $M$, we can write
\begin{equation}
\omega_M=\frac{1}{2}\omega_{\mu_1\mu_2}d_Mx^{\mu_1} d_Mx^{\mu_2},
\end{equation}
with $\omega_{\mu_1 \mu_2}$ constant, implying that $\alpha_M=\frac{1}{2}x^{\mu_1}\omega_{\mu_1\mu_2}d_Mx^{\mu_2}$. We get the BV symplectic form
\begin{equation}
\begin{split}
\omega_{\Sigma_d}&=\frac{1}{2} \varint_{T[1]\Sigma_d}\mu_{\Sigma_d}(\omega_{\mu_1\mu_2} \delta \mathbf{X}^{\mu_1} \delta\mathbf{X}^{\mu_2}) \\
&=\frac{1}{2}\varint_{\Sigma_d}(\omega_{\mu_1 \mu_2}\delta \mathbf{X}^{\mu_1} \delta \mathbf{X}^{\mu_2})^{\text{top}}.
\end{split}
\end{equation}
The (master) action is
\begin{equation}
\mathcal{S}_{\Sigma_d}=\varint_{T[1]\Sigma_d}\mu_{\Sigma_d}\bigg(\frac{1}{2} \mathbf{X}^{\mu_1} \omega_{\mu_1 \mu_2} D\mathbf{X}^{\mu_2}\bigg)+(-1)^d \varint_{T[1]\Sigma_d}\mu_{\Sigma_d}\mathbf{X}^*\Theta_M,
\end{equation}
with $\mu_{\Sigma_d}$ a canonical measure on $T[1]\Sigma_d$ and $D=\theta^j\frac{\partial}{\partial u^j}$ the superdifferential on $T[1]\Sigma_d$.
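For illustration (a standard specialization, not derived in the text above, and stated up to sign conventions): taking as target $M=\mathfrak{g}[1]\oplus\mathfrak{g}^*[d-2]$ for a Lie algebra $\mathfrak{g}$ with invariant pairing, with $\alpha_M=\langle B,d_MA\rangle$ and $\Theta_M=\frac12\langle B,[A,A]\rangle$, and writing $\mathbf{A}$, $\mathbf{B}$ for the superfield components of $\mathbf{X}$ along $\mathfrak{g}[1]$ and $\mathfrak{g}^*[d-2]$, the AKSZ action reproduces the $BF$ action:

```latex
% Sketch: AKSZ action for the BF target, keeping only the top-degree
% part of the integrand (fiber-integration convention as above).
\begin{equation}
\mathcal{S}_{\Sigma_d}=\varint_{\Sigma_d}\bigg(\langle \mathbf{B}, d_{\Sigma_d}\mathbf{A}\rangle+\frac12\langle \mathbf{B},[\mathbf{A},\mathbf{A}]\rangle\bigg)^{\text{top}}.
\end{equation}
```

This makes explicit how the $BF$-like theories discussed earlier fit into the AKSZ framework.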
\section{The Rozansky--Witten model}
\label{sec:RW_model}
The RW model is a 3-dimensional topological sigma model. It was originally discovered, with target a hyperk{\"a}hler manifold, in \cite{RW96} as a result of a topological twist of 3-dimensional $N=4$ super Yang--Mills theory. However, shortly after, Kapranov \cite{KA99} and Kontsevich \cite{Ko99} showed that the model requires less structure than originally thought: the target manifold does not have to be hyperk{\"a}hler but, more generally, can carry a holomorphic symplectic structure. Since we will focus on this latter case, we present here how this generalization proposed by Kapranov and Kontsevich was understood in the context of topological sigma models by Rozansky and Witten. After the work of Kapranov and Kontsevich, Rozansky and Witten added an appendix to \cite{RW96}, where they explained how to extend their formulation of the model to the case of a holomorphic symplectic target manifold.
\begin{notat}
Except for the name of the manifolds, which we adapt to the notation we will use in Section \ref{sec:Classical_Theory}, the notation will be the same as in \cite{RW96}.
\end{notat}
\subsection{First definitions}
Let $\Sigma_3$ be the source 3-dimensional manifold and $(M, \omega)$ the target holomorphic symplectic manifold. The fields are the following:
\begin{itemize}
\item \textit{bosonic fields}, described by smooth maps $\phi:\Sigma_3\rightarrow M$; in local coordinates we have $\phi^I(x^\mu)$ and $\Bar{\phi}^{\Bar{I}}(x^\mu)$,
\item \textit{fermionic (or Grassmann) fields} $\eta\in\Gamma(\Sigma_3,\phi^*T^{0,1}M)$ and $\chi\in\Gamma(\Sigma_3,\Omega^1(\Sigma_3)\otimes\phi^*T^{1,0}M)$, with $T^{1,0}M$ and $T^{0,1}M$ the holomorphic and anti-holomorphic tangent bundles, respectively. In local coordinates we can write them as $\eta^{\Bar{I}}(x^\mu)$ and $\chi^I_\mu(x^\mu)$.
\end{itemize}
Consider a single fermionic symmetry on these fields, which we will denote by $\Bar{Q}$. Its action is:
\begin{alignat}{2}
&\delta\phi^I=0,\qquad &&\delta\Bar{\phi}^{\Bar{I}}=\eta^{\Bar{I}},\\
&\delta\eta^{\bar{I}}=0,\qquad &&\delta\chi_\mu^I=-\partial_\mu\phi^I.
\end{alignat}
To introduce the Lagrangian density of the theory, we add some extra structure to the target manifold. Let $\Gamma^I_{JK}$ be a symmetric connection in the holomorphic tangent bundle of $M$, i.e. $\Gamma^I_{JK}=\Gamma^I_{KJ}$. The $(1,1)$-part of the curvature of $\Gamma_{JK}^I$ represents the \textit{Atiyah class}\footnote{The Atiyah class is defined to be the obstruction to the existence of a global holomorphic connection.} of $M$ \cite{At57}:
\begin{equation}
\Riem{I}{J}{K}{\Bar{L}}=\frac{\partial \Chr{I}{J}{K}}{\partial \Bar{\phi}^{\Bar{L}}}.
\end{equation}
In \cite{RW96}, it is noted that the connection does not have to be compatible with the holomorphic symplectic form\footnote{This compatibility condition will be assumed for the RW model when we compare it with the model we will develop in Section \ref{sec:Classical_Theory} (see Section \ref{sec:comp_orig_RW}).} $\omega_{IJ}$. We require $\omega_{IJ}$ to be non-degenerate and closed, i.e.
\begin{equation}
\frac{\partial \omega_{IJ}}{\partial \Bar{\phi}^{\Bar{K}}}=0,\qquad \frac{\partial \omega_{IJ}}{\partial \phi^K}+\frac{\partial \omega_{KI}}{\partial \phi^J}+\frac{\partial \omega_{JK}}{\partial \phi^I}=0.
\end{equation}
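A minimal example satisfying both conditions (standard, and stated here only for illustration): on $M=\mathbb{C}^{2n}$ with holomorphic coordinates $(q^1,\dots,q^n,p_1,\dots,p_n)$, one can take the constant holomorphic symplectic form

```latex
% Sketch: the flat holomorphic symplectic structure on C^{2n}.
\begin{equation}
\omega=\sum_{I=1}^{n}dp_I\wedge dq^I,
\end{equation}
```

whose coefficients $\omega_{IJ}$ are constant, so that both the holomorphy condition $\partial\omega_{IJ}/\partial\Bar{\phi}^{\Bar{K}}=0$ and the cyclic closedness condition hold trivially.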
Rozansky and Witten define a $\Bar{Q}$-invariant Lagrangian density $\mathscr{L}$ to be $\mathscr{L}\coloneqq \mathscr{L}_2+\mathscr{L}_1$. The Lagrangian $\mathscr{L}_2$ is given by
\begin{equation}
\mathscr{L}_2=\frac12\frac{1}{\sqrt{h}}\epsilon^{\mu\nu\rho}\bigg(\omega_{IJ}\chi_\mu^I\nabla_\nu\chi_\rho^J-\frac{1}{3}\omega_{IJ}R^J_{KL\Bar{M}}\chi_\mu^I\chi_\nu^K\chi_\rho^L\eta^{\Bar{M}}+\frac{1}{3}(\nabla_L\Omega_{IK})(\partial_\mu\phi^I)\chi_\nu^K\chi_\rho^L \bigg),
\end{equation}
where $\nabla_\mu$ is a covariant derivative with respect to the pullback of the connection $\Chr{I}{J}{K}$:
\begin{equation}
\nabla_\mu\chi^I_\nu=\partial_\mu\chi^I_\nu+(\partial_\mu\phi^J)\Chr{I}{J}{K}\chi^K_\nu.
\end{equation}
In order to construct the $\Bar{Q}$-exact Lagrangian $\mathscr{L}_1$, we need to choose a Hermitian metric $g_{I\Bar{J}}$ on $M$. We define by
\begin{equation}
\Tilde{\Gamma}^{\Bar{I}}_{\Bar{J}\Bar{K}}\coloneqq\frac12g^{\Bar{I}L}\bigg(\frac{\partial g_{L\Bar{J}}}{\partial \Bar{\phi}^{\Bar{K}}}+\frac{\partial g_{L\Bar{K}}}{\partial \Bar{\phi}^{\Bar{J}}}\bigg),\quad \Tilde{T}^{\Bar{I}}_{\Bar{J}\Bar{K}}\coloneqq\frac12g^{\Bar{I}L}\bigg(\frac{\partial g_{L\Bar{J}}}{\partial \Bar{\phi}^{\Bar{K}}}-\frac{\partial g_{L\Bar{K}}}{\partial \Bar{\phi}^{\Bar{J}}}\bigg),
\end{equation}
the \textit{symmetric connection} and the \textit{torsion} associated with $g_{I\Bar{J}}$. Then $\mathscr{L}_1$ is defined by
\begin{equation}
\mathscr{L}_1\coloneqq\Bar{Q}\bigg(g_{I\Bar{J}}\chi_\mu^I(\partial_\mu\Bar{\phi}^{\Bar{J}})\bigg)=g_{I\Bar{J}}\partial_\mu\phi^I\partial_\mu\Bar{\phi}^{\Bar{J}}+g_{I\Bar{J}}\chi^I_\mu\Tilde{\nabla}_\mu\eta^{\Bar{J}},
\end{equation}
where $\Tilde{\nabla}_\mu$ is a covariant derivative with respect to the connection $\Tilde{\Gamma}^{\Bar{I}}_{\Bar{J}\Bar{K}}+\Tilde{T}^{\Bar{I}}_{\Bar{J}\Bar{K}}$, i.e. we have
\begin{equation}
\Tilde{\nabla}_\mu\eta^{\Bar{I}}=\partial_\mu \eta^{\Bar{I}}+(\partial_\mu\Bar{\phi}^{\Bar{J}})(\Tilde{\Gamma}^{\Bar{I}}_{\Bar{J}\Bar{K}}+\Tilde{T}^{\Bar{I}}_{\Bar{J}\Bar{K}})\eta^{\Bar{K}}.
\end{equation}
Moreover, if $g_{I\Bar{J}}$ is a K{\"a}hler metric, then $\Tilde{T}^{\Bar{I}}_{\Bar{J}\Bar{K}}=0$.
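A quick way to see this (a standard check, assuming the metric comes locally from a K{\"a}hler potential $K$):

```latex
% Sketch: for a Kaehler metric, g is a second derivative of a potential,
% so its antiholomorphic derivatives are symmetric and the torsion vanishes.
\begin{equation}
g_{L\Bar{J}}=\frac{\partial^2 K}{\partial \phi^{L}\,\partial\Bar{\phi}^{\Bar{J}}}
\quad\Longrightarrow\quad
\frac{\partial g_{L\Bar{J}}}{\partial \Bar{\phi}^{\Bar{K}}}
=\frac{\partial g_{L\Bar{K}}}{\partial \Bar{\phi}^{\Bar{J}}}
\quad\Longrightarrow\quad
\Tilde{T}^{\Bar{I}}_{\Bar{J}\Bar{K}}=0.
\end{equation}
```

Indeed, $\Tilde{T}$ is precisely the part of $\partial g_{L\Bar{J}}/\partial\Bar{\phi}^{\Bar{K}}$ that is antisymmetric in $\Bar{J}$, $\Bar{K}$.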
\subsection{Perturbative quantization}
\label{sec:pert_exp}
The partition function of the RW model is
\begin{equation}
Z_{M}(\Sigma_3)\coloneqq\varint e^{\frac{i}{\hbar}S} \mathscr{D}[\phi^i]\mathscr{D}[\eta^I]\mathscr{D}[\chi^I_\mu],
\end{equation}
where $S\coloneqq\varint_{\Sigma_3} \mathscr{L}$ and $\mathscr{D}$ is a formal measure.
As mentioned in \cite{RW96} (see also \cite{Th99,HT99}), in order to do a perturbative expansion around critical points of the action (which are constant maps from $\Sigma_3$ to $M$), we need to deal with the \textit{zero modes}:
\begin{itemize}
\item \textit{bosonic zero modes}: they are constant modes of $\phi$;
\item \textit{fermionic zero modes}: here we should distinguish two cases
\begin{itemize}
\item if $\Sigma_3$ is a rational homology sphere (i.e. its first Betti number $b_1=0$), the fermionic zero modes are the constant modes of $\eta$. There are $2n$ zero modes if $\dim M=4n$;
\item if $\Sigma_3$ is not a rational homology sphere, i.e. $b_1>0$, then there are additionally $2nb_1$ zero modes of $\chi_\mu$.
\end{itemize}
\end{itemize}
Taking into account the zero modes, one can decompose $\phi^i=\phi^i_0+\phi^i_\perp$, where $\phi^i_0$ are the constant maps and $\phi^i_\perp$ are required to be orthogonal to $\phi^i_0$. Similarly, the $\eta^I$ are decomposed as $\eta^I=\eta^I_0+\eta^I_\perp$, where $\eta^I_0$ are harmonic 0-forms with coefficients in the fiber $V_{\phi_0}$ of the $\Sp(n)$-bundle $V\rightarrow M$ and $\eta^I_\perp$ are orthogonal to the harmonic part.
For our purposes, we will only consider the Lagrangian $\mathscr{L}_2$; in light of these decompositions, we can rewrite it as
\begin{equation}
\label{RW:L_2}
\mathscr{L}_2=\frac12\bigg(\omega_{IJ}(\phi_0)\chi^Id\chi^J+\frac13\Riem{}{IJ}{K}{\Bar{L}}(\phi_0)\chi^I\chi^J\chi^K\eta^{\Bar{L}}_0+\frac{1}{3}(\nabla\Omega_{IK})(\partial\phi^I_\perp)\chi^K\chi^L\bigg).
\end{equation}
As a result of an analysis of the absorption of fermionic zero modes by the Feynman diagrams, Rozansky and Witten concluded that only diagrams with trivalent vertices contribute. Moreover, these trivalent vertices have to be exactly $2n$ in number in order to saturate the $2n$ zero modes of $\eta$. They call these diagrams ``minimal''. The Lagrangian $\mathscr{L}_2$ contains the following vertex with the needed properties:
\begin{equation}
V=\frac16\Riem{}{IJ}{K}{\Bar{L}}(\phi_0)\chi^I\chi^J\chi^K\eta^{\Bar{L}}_0.
\end{equation}
Here we should think of $\eta_0$ as a ``coupling constant''; in fact, we should focus on the order of the $\eta^I_0$ during the perturbative expansion.
Since all the fields $\eta$ are used to absorb zero modes, we only need the propagators for the fields $\phi^i$ and $\chi^I_\mu$. According to \cite{RW96}, these are
\begin{equation}
\label{propagators_RW}
\begin{split}
&\braket{\chi^I_\mu(x_1),\chi^J_\nu(x_2)}=\hbar\omega^{IJ}G^{\chi}_{\mu\nu}(x_1,x_2),\\
&\braket{\phi^i(x_1),\phi^j(x_2)}=-\hbar g^{ij}G^{\phi}(x_1,x_2),
\end{split}
\end{equation}
with $G^{\chi}_{\mu\nu}(x_1,x_2)$ and $G^{\phi}(x_1,x_2)$ Green's functions. We refer to \cite{RW96} for a detailed description of the Green's functions.
The Feynman diagrams participating in the calculation of the partition function depend only on the dimension of the target manifold $M$ and on the first Betti number $b_1$ of the source 3-manifold $\Sigma_3$. The former fixes the number of vertices of the graphs to be equal to $2n$. The latter constrains the valence of the vertices. We have the following cases:
\begin{itemize}
\item($b_1=0$) There are no $\chi$ zero modes to absorb. Hence, all the Feynman diagrams are closed graphs with $2n$ trivalent vertices. This is the case when $\Sigma_3$ is a rational homology sphere.
\item($b_1=1$) There are $2n$ $\chi$ zero modes coming from a harmonic 1-form. As a consequence, each vertex absorbs exactly one zero mode of $\chi$, and thus all the Feynman diagrams are closed graphs with $2n$ bivalent vertices.
\item($b_1=2$) There are $4n$ $\chi$ zero modes coming from two harmonic 1-forms on $\Sigma_3$. As a consequence, each vertex absorbs exactly two zero modes of $\chi$, one for each harmonic 1-form, and thus all the Feynman diagrams are closed graphs with $2n$ univalent vertices.
\item($b_1=3$) There are $6n$ $\chi$ zero modes coming from three harmonic 1-forms on $\Sigma_3$. As a consequence, each vertex absorbs exactly three zero modes of $\chi$, one for each harmonic 1-form, and thus all the Feynman diagrams are a collection of $2n$ totally disconnected vertices with no edges.
\item($b_1\geq 4$) There are too many $\chi$ zero modes: they cannot all be absorbed by the $\chi$'s present in the vertices (at most three per vertex), so the RW partition function vanishes.
\end{itemize}
Let us denote by $\Gamma_{n,m}$ the set of all closed graphs with $2n$ $m$-valent vertices and by $Z_{M, \Gamma}(\Sigma_3; \phi^i_0)$ the sum of all the contributions of the minimal Feynman diagrams corresponding to a given graph $\Gamma$. The total contribution of Feynman diagrams is
\begin{equation}
Z_{M}(\Sigma_3; \phi^i_0)=\sum_{\Gamma\in\Gamma_{n,3-b_1(\Sigma_3)}}Z_{M, \Gamma}(\Sigma_3; \phi^i_0),
\end{equation}
where each $Z_{M, \Gamma}(\Sigma_3; \phi^i_0)$ can actually be written as a product of two factors
\begin{equation}
Z_{M, \Gamma}(\Sigma_3; \phi^i_0)=\text{W}_\Gamma(M;\phi_0^i)\sum_a I_{\Gamma,a}(\Sigma_3).
\end{equation}
We will explain each factor one by one. First, $I_{\Gamma,a}(\Sigma_3)$ includes the integral over $\Sigma_3$ of the propagators $G^{\chi}_{\mu\nu}(x_1,x_2)$ and $G^{\phi}(x_1,x_2)$ as well as of the $\chi$ zero modes. The sum is over all the possible ways to contract the fields of $\Gamma$ with the propagators (\ref{propagators_RW}). On the other hand, the factor $\text{W}_{\Gamma}(M;\phi^i_0)$ is a product of tensors $\Riem{}{IJ}{K}{\Bar{L}}$ coming from the vertices, which are contracted by the $\omega^{IJ}$ contained in the propagators. After antisymmetrizing over the anti-holomorphic indices (coming from the zero modes' contributions), we obtain a $\Bar{\partial}$-closed $(0,2n)$-form on $M$. In other words, we have a map
\begin{equation}
\Gamma_{n,3}\rightarrow H^{0,2n}(M),
\end{equation}
where $H^{0,2n}(M)$ is the Dolbeault cohomology. This corresponds to a weight system, the \textit{Rozansky--Witten weight system}. By definition, a function on $\Gamma_{n,3}$ is called a weight system if it satisfies the AS and IHX relations (see also \cite{B95}).
The AS relation means that a weight is antisymmetric under the permutation of the legs at a vertex. For RW, this is not valid on the nose, since the curvature tensor is completely symmetric. However, we can prove the vanishing of tadpole diagrams (i.e. diagrams with an edge starting and ending at the same vertex), which is consistent with the AS relation. The proof follows simply because the loop is constructed by contracting two indices of the symmetric tensor $\Riem{}{IJ}{K}{\Bar{L}}\eta^{\Bar{L}}_0$ with $\omega^{IJ}$, which is antisymmetric.
On the other hand, the IHX relation means that the sum over all possible (three) ways of collapsing a propagator such that we obtain a graph with a four-valent vertex, while the other vertices are trivalent, vanishes (see Fig. \ref{fig:IHX}).
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\diagram*{
a -- m [dot] -- b,
c -- e [dot] -- d,
m -- e
};
\end{feynman}
\draw [dashed] (1.75,-0.5) arc[x radius=0.75, y radius=0.75, start angle=0, end angle=360];
\node[] at (3, -0.45) {$+$};
\end{tikzpicture}
\end{subfigure}\hspace{-1.5cm}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}
\vertex (c) at (-0.6,0.25);
\node[dot] (m) at (-0.6, -0.5);
\node[dot] (e) at (0.6, -0.5);
\vertex (b) at (-0.6, -1.25);
\vertex (a) at (0.6, -1.25);
\vertex (f) at (0.6, 0.25);
\diagram*{
(c) -- (m) -- (b),
(f) -- (e) -- (a),
(m) -- (e)
};
\end{feynman}
\draw [dashed] (0.85,-0.5) arc[x radius=0.85, y radius=0.85, start angle=0, end angle=360];
\node at (1.75,-0.5) {$+$};
\end{tikzpicture}
\end{subfigure}\hspace{-1.75cm}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}
\vertex (c) at (-0.75,-1);
\node[dot] (m) at (-0.375, -0.75);
\node[dot] (e) at (0.375, -0.75);
\vertex (b) at (0.75, -1);
\vertex (a) at (-0.75, 0);
\vertex (f) at (0.75, 0);
\diagram*{
(c) -- (m) -- (f),
(b) -- (e) -- (a),
(m) -- (e)
};
\end{feynman}
\draw [dashed] (0.75,-0.5) arc[x radius=0.75, y radius=0.75, start angle=0, end angle=360];
\node at (1.75,-0.5) {$=0$};
\end{tikzpicture}
\end{subfigure}
\caption{The IHX relation. It holds whenever the graphs are identical outside the dashed circle.} \label{fig:IHX}
\end{figure}
Explicitly, the sum of the three contributions is equal to the expression
\begin{equation}
\eta^{\Bar{L}}_0\eta^{\Bar{L'}}_0\omega^{IJ}\bigg(\Riem{}{IJ}{K}{\Bar{L}}\Riem{}{I'J'}{K'}{\Bar{L'}}+\Riem{}{IJ}{K'}{\Bar{L}}\Riem{}{I'J'}{K}{\Bar{L'}}+\Riem{}{IJ}{J'}{\Bar{L}}\Riem{}{I'K}{K'}{\Bar{L'}}\bigg)- \Bar{L}\leftrightarrow \Bar{L'},
\end{equation}
where the notation $\Bar{L}\leftrightarrow \Bar{L'}$ means that we subtract the same quantity with the indices $\Bar{L}$ and $\Bar{L}'$ interchanged; in this way the expression vanishes. In other words, the IHX relation follows from the Bianchi identity for the curvature tensor $R$. The validity of the IHX relation ensures that the perturbative expansion of the partition function yields topological invariants of 3-manifolds \cite{Sa04}.
At this point, we can take the product of the $(0,2n)$-form associated with the graph $\Gamma$ with the $(2n,0)$-form $\omega^n\in H^{2n,0}(M)$ and integrate the resulting $(2n,2n)$-form over $M$. In this way we obtain the weights $b_\Gamma(M)$, numbers called \textit{Rozansky--Witten invariants}, studied by Sawon in \cite{Sa04}. More explicitly, we have
\begin{equation}
b_\Gamma(M)= \frac{1}{(2\pi)^{2n}} \varint_M \text{W}_\Gamma(M;\phi^i_0)\omega^n.
\end{equation}
Finally, the RW partition function is shown to be \cite{RW96}
\begin{equation}
Z_{M}(\Sigma_3)=\Big|H_1(\Sigma_3, \mathbb{Z})\Big|'\sum_{\Gamma\in\Gamma_{n,3-b_1(\Sigma_3)}}b_\Gamma(M)\sum_aI_{\Gamma,a},
\end{equation}
where $\Big|H_1(\Sigma_3, \mathbb{Z})\Big|'$ is the number of torsion elements in $H_1(\Sigma_3, \mathbb{Z})$ (see \cite{FG91}).
\subsection{Comparison with Chern--Simons theory}
In this section, we briefly explore the similarities between CS theory and RW theory as exhibited in \cite{RW96}. The main message is that RW theory is a kind of ``Grassmann-odd version'' of CS theory. Let us make this more precise.
Recall the CS Lagrangian
\begin{equation}
\mathscr{L}_{\text{CS}}=\mathrm{Tr}\bigg(AdA+\frac{2}{3}A^3\bigg).
\end{equation}
Let us compare it with the RW Lagrangian in \eqref{RW:L_2}. As we can see from Table \ref{Tab:RW_CS} (where we denote by $T_a$ the generators of the Lie algebra and by $f_{abc}$ the structure constants), there is almost a direct match. We say ``almost'' because the symmetry properties of the various objects in the table are reversed: $\mathrm{Tr}\, T_aT_b$ is symmetric in its arguments while the holomorphic symplectic form $\omega_{IJ}$ is antisymmetric, and $f_{abc}$ is totally antisymmetric whereas $\Riem{}{IJ}{K}{L}\eta^L_0$ is totally symmetric. However, this should not come as a surprise, since by definition $A^a$ is an anti-commuting object, while $\chi^I$ is commuting.
\begin{table}[h]
\centering
\begin{tabular}{ |cc| }
\toprule
CS & RW \\
\midrule
$A^a$ & $\chi^I$ \\
$\mathrm{Tr} T_aT_b$ & $\omega_{IJ}$ \\
$f_{abc}$ & $\Riem{}{IJ}{K}{L}\eta^L_0$ \\
\bottomrule
\end{tabular}
\caption{Comparison between CS theory and RW theory.}
\label{Tab:RW_CS}
\end{table}
By doing the associations in the table, the vertex in CS is the same as the vertex in RW. It follows that the diagrams of the two theories coincide. Consequently, the partition function differs only for the weight factors since for RW they are proportional to the curvature tensor of $M$ rather than the structure constants of a Lie group.
Other similarities come at the level of gauge-fixing. We refer the interested reader to \cite{RW96,HT99} for a detailed discussion.
\begin{rmk}
\label{rmk_diff_rw_cs}
There is an important difference between the CS and RW theories. In the RW model, the vertex carries a Grassmann-odd harmonic mode $\eta^I_0$, which can therefore appear at most $2n$ times in any diagram. This provides a natural \textit{cut-off} of the perturbative expansion of the RW model.
\end{rmk}
\section{Classical formal globalization}
\label{sec:Classical_Theory}
The idea is to construct a 3-dimensional topological sigma model, which, when globalized, reduces to the original RW model. In particular, we are interested in the formulation of the RW model with target a holomorphic symplectic manifold\footnote{This construction reduces to the RW model considered in the bulk of \cite{RW96} when we take as target a hyperK{\"a}hler manifold.} (i.e. a complex symplectic manifold with a holomorphic symplectic form; see also the appendix in \cite{RW96}). Hence, let $M$ be a holomorphic symplectic manifold endowed with coordinates $X^i$ and $X^{\Bar{i}}$, and with a holomorphic symplectic form $\Omega=\Omega_{ij}\delta X^i\delta X^j$, i.e. a closed, non-degenerate $(2,0)$-form. Moreover, consider a 3-dimensional manifold $\Sigma_3$ and construct an AKSZ sigma model\footnote{See Section \ref{subsec:AKSZ} for an introduction.} with source $T[1]\Sigma_3$ and target $M$. In this case, the space of maps is
\begin{equation}
\mathcal{F}\sur\coloneqq \Maps\bigg(T[1]\Sigma_3, M\bigg).
\end{equation}
On the source manifold, we choose bosonic coordinates $\{u\}$ (ghost degree 0) on $\Sigma_3$ and odd coordinates $\{\theta\}$ (ghost degree 1) on the fibers of $T[1]\Sigma_3$. Moreover, by picking local coordinates $X^i$ on $M$, maps in $\mathcal{F}\sur$ can be described by a superfield $\mathbf{X}$, whose components are chosen as:
\begin{equation}
\mathbf{X}^i=X^i(u)+\theta^\mu X^i_\mu(u)+\frac{1}{2!}\theta^\mu\theta^\nu X^i_{\mu\nu}(u)+\frac{1}{3!}\theta^\mu\theta^\nu\theta^\chi X^i_{\mu\nu\chi}(u),
\end{equation}
where $X^i$ is a 0-form, $X^i_\mu$ is a 1-form, etc.
To the fields $X^i, X^i_\mu, \dots$ we assign ghost degrees such that the total degree of $\mathbf{X}$ equals that of $X^i$ (namely 0); for example, $X^i_{\mu\nu}$ has form degree 2 and ghost degree $-2$.
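Concretely, since each $\theta^\mu$ carries ghost degree 1, the components must carry compensating ghost degrees:
\begin{equation}
\operatorname{gh}(X^i)=0,\qquad \operatorname{gh}(X^i_\mu)=-1,\qquad \operatorname{gh}(X^i_{\mu\nu})=-2,\qquad \operatorname{gh}(X^i_{\mu\nu\chi})=-3,
\end{equation}
so that every term in the expansion of $\mathbf{X}^i$ has total degree 0.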
Now, we can define a symplectic form for the space of fields. Since it should have ghost degree $-1$, we assign ghost degree\footnote{More precisely, we add a formal parameter $q$ of ghost degree 2 in front of $\Omega_{ij}$. The parameter is immediately suppressed from the notation.} 2 to $\Omega_{ij}$ (in this way the target manifold has degree 2 and the AKSZ construction can be done without problems) and define
\begin{equation}
\omega\sur=\varint_{T[1]\Sigma_3}\mu\sur\bigg(\frac12\Omega_{ij}\delta \mathbf{X}^i\delta \mathbf{X}^j\bigg),
\end{equation}
where by $\delta$ we denote the de Rham differential on the space of fields. Since we have a canonical Berezinian $\mu_{\Sigma_3}$ on $T[1]\Sigma_3$ of degree $-3$, the symplectic form has degree $-1$, as desired. Hence, the space of fields is equipped with an odd Poisson bracket $(-,-)$.
We have an associated AKSZ action given by
\begin{equation}
\Sc\sur=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\bigg(\frac{1}{2}\Omega_{ij}\mathbf{X}^iD\mathbf{X}^j\bigg),
\end{equation}
where $D=\theta^\mu\frac{\partial}{\partial u^\mu}$ is the differential on $T[1]\Sigma_3$. When $\Sigma_3$ is a closed manifold, the action $\Sc\sur$ satisfies the CME
\begin{equation}
(\Sc\sur,\Sc\sur)=0.
\end{equation}
Equivalently, we can introduce a cohomological Hamiltonian vector field $Q\sur$ on $\mathcal{F}\sur$ defined by
\begin{equation}
\iota_{Q\sur}\omega\sur=\delta \Sc\sur.
\end{equation}
This vector field has the following form
\begin{equation}
Q\sur=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\bigg( D\mathbf{X}^i\frac{\delta}{\delta \mathbf{X}^i}\bigg).
\end{equation}
The above can be restated by saying that $(\mathcal{F}\sur, \omega\sur, \Sc\sur)$ is a BV manifold.
In the presence of boundaries, the model can be extended to a BV-BFV theory by placing the BV manifold $(\mathcal{F}_{\Sigma_3}, \omega_{\Sigma_3}, Q_{\Sigma_3})$ over the exact BFV manifold
\begin{equation}
(\mathcal{F}\bon, \omega\bon=\delta \alpha\bon, \Sc\bon, Q\bon)
\end{equation}
with the following set of boundary data
\begin{equation}
\begin{split}
&\mathcal{F}\bon=\Maps (T[1]\partial \Sigma_3, M), \\
&\Sc\bon=\varint_{T[1]\partial\Sigma_3}\mu_{\partial \Sigma_3}\bigg(\frac{1}{2}\Omega_{ij}\mathbf{X}^iD\mathbf{X}^j\bigg),\\
&\alpha\bon=\varint_{T[1]\partial\Sigma_3}\mu_{\partial \Sigma_3}\bigg(\frac12\Omega_{ij}\mathbf{X}^i\delta \mathbf{X}^j\bigg),\\
&Q\bon=\varint_{T[1]\partial\Sigma_3}\mu_{\partial \Sigma_3}\bigg(D\mathbf{X}^i\frac{\delta}{\delta \mathbf{X}^i}\bigg),
\end{split}
\end{equation}
with $\mu_{\partial \Sigma_3}$ the Berezinian on the boundary $\partial \Sigma_3$ of degree $-2$. The data is such that
\begin{equation}
\iota_{Q\sur}\omega\sur=\delta \Sc\sur+\pi^*\alpha\bon.
\end{equation}
\begin{rmk}
A possible modification of the model consists in coupling the target manifold with $\mathfrak{g}^{\vee}[1]\otimes \mathfrak{g}[1]$ or $\mathfrak{g}[1]$, forming thus the ``$BF$-RW'' model and the ``CS-RW'' model \cite{KQZ13}, respectively. In this way, after globalization, one should get an extension of the results obtained by K{\"a}llén, Qiu and Zabzine \cite{KQZ13}.
\end{rmk}
\subsection{Globalization}
\label{sec:globalization}
In the last section, we introduced a very simple AKSZ sigma model. Here we globalize that construction using methods of formal geometry \cite{GK71,Bo11} (see Appendix \ref{app:formal_geometry} for an introduction) following \cite{CMoW19}. First, we expand around critical points of the kinetic part of the action. The Euler--Lagrange equations for our model are simply $d\mathbf{X}^i=0$, which means that the component of $\mathbf{X}^i$ of ghost degree 0 is a constant map: we denote it by $x^i$ and we think of it as a \textit{background field} \cite{Moshayedi2021}. Moreover, since we want to vary $x$ itself, we lift the fields as the pullback of a formal exponential map at $x$. We also note that the fields $\mathbf{X}^{\Bar{i}}$ are just \emph{spectators}, which means that they do not contribute to the action, hence we can think of taking constant maps also in the antiholomorphic direction.
The above allows us to linearize the space of fields $\mathcal{F}\sur$ by working in a formal neighbourhood of the constant map $x\in M$. We define the following \textit{holomorphic formal exponential map}
\begin{align}
\begin{split}
\varphi:\ T^{1,0}M &\rightarrow M\\
(x,y)&\mapsto \varphi^{i}(x,y)=x^i+y^i+\frac{1}{2}\varphi^i_{jk}(x^i,x^{\Bar{i}})y^jy^k+\dots
\end{split}
\end{align}
\begin{rmk}
We think about the holomorphic formal exponential map here defined as an extension to the complex case of the formal exponential map used in e.g. \cite{CF01}. This notion should correspond to the ``canonical coordinates'' introduced in \cite{BCOV94} and the holomorphic exponential map applied by Kapranov to the RW case in \cite{KA99}.
\end{rmk}
The formal exponential map lifts $\mathcal{F}\sur$ to
\begin{align}
\begin{split}
\Tilde{\varphi}_{x}:\ \Tilde{\mathcal{F}}_{\Sigma_3, x}:=\Maps (T[1]\Sigma_3, T^{1,0}_{x}M)&\rightarrow \Maps (T[1]\Sigma_3, M)\\
\hat{\mathbf{X}}&\mapsto \mathbf{X}
\end{split}
\end{align}
which is given by composition with $\varphi^{-1}_{x}$, i.e. $\Tilde{\mathcal{F}}_{\Sigma_3, x}=\varphi^{-1}_{x}\circ\mathcal{F}\sur$ and $\mathbf{X}=\varphi_{x}(\hat{\mathbf{X}})$. Now, since the target is linear, we can write the space of fields as
\begin{equation}
\Tilde{\mathcal{F}}_{\Sigma_3, x}=\Omega^{\bullet}(\Sigma_3)\otimes T^{1,0}_{x}M.
\end{equation}
Consequently, we lift the BV action, the BV 2-form and the primitive 1-form obtaining:
\begin{equation}
\begin{split}
&\Sc\surg:=\mathrm{T}\Tilde{\varphi}_{x}^*\Sc\sur=\varint_{T[1]\Sigma_3}\mu\sur\, \bigg(\frac12\Omega_{ij}\hat{\mathbf{X}}^iD\hat{\mathbf{X}}^j\bigg),\\
&\omega\surg\coloneqq\Tilde{\varphi}^*_{x}\omega\sur=\varint_{T[1]\Sigma_3}\mu\sur\,\bigg(\frac12\Omega_{ij}\delta \hat{\mathbf{X}}^i\delta \hat{\mathbf{X}}^j\bigg),\\
&\alpha^{\partial}_{\partial \Sigma_3, x}\coloneqq \Tilde{\varphi}^*_{x}\alpha\bon=\varint_{T[1]\partial\Sigma_3}\mu_{\partial\Sigma_3}\,\bigg(\frac12\Omega_{ij}\hat{\mathbf{X}}^i\delta \hat{\mathbf{X}}^j\bigg),
\end{split}
\end{equation}
where $\mathrm{T}$ denotes the Taylor expansion in the fiber coordinates $\{y\}$ around zero.
This set of data satisfies the mCME for any $x\in M$:
\begin{equation}
\iota_{Q\surg}\omega\surg=\delta \Sc\surg+\pi^*\alpha^{\partial}_{\partial \Sigma_3, x},
\end{equation}
with $Q\surg=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\, \Big(D\hat{\mathbf{X}}^i\frac{\delta}{\delta \hat{\mathbf{X}}^i}\Big)$. Hence, we have a BV-BFV manifold associated to the space of fields $\Tilde{\mathcal{F}}\surg$.
The next remark introduces an important ingredient to write down the globalized action.
\begin{rmk}
\label{rmk_R}
The constant map $x:T[1]\Sigma_3\rightarrow M$ in $\mathcal{F}\sur$ can be thought of as an element in $M$. Hence, we have a natural inclusion $M\hookrightarrow \mathcal{F}\sur$. We exploit this fact by defining, for a constant field $x$ and $\mathbf{X}\in \mathcal{F}\sur$, a 1-form:
\begin{equation}
R\sur=\Big(R\sur\Big)_j(x;\mathbf{X})dx^j+\Big(R\sur\Big)_{\Bar{j}}(x;\mathbf{X})dx^{\Bar{j}}\in \Omega^1\Big(M, \Der\Big(\reallywidehat{\Sym}^\bullet(T^{\vee1,0}M)\Big)\Big).
\end{equation}
As before, we lift this 1-form to $\Tilde{\mathcal{F}}\surg$. This lift, denoted by $\hat{R}\sur$, is locally written as:
\begin{equation}
\hat{R}\sur=\Big(\hat{R}\sur\Big)_j(x;\hat{\mathbf{X}})dx^j+\Big(\hat{R}\sur\Big)_{\Bar{j}}(x;\hat{\mathbf{X}})dx^{\Bar{j}}.
\end{equation}
\end{rmk}
\subsection{Variation of the classical background}
\label{class:sec:var_class_back}
So far, the classical background $x$ has been fixed. However, our aim is to vary $x$ and construct a global formulation of the action. Hence, we understand the collection $\{\Sc\surg\}_{x\in M}$ as a map $\hat{\Sc}\sur$ given by $\hat{\Sc}\sur: x\mapsto \Sc\surg$, and we compute how it changes over $M$. To accomplish this task, inspired by \cite{CMoW19,CMoW20, KQZ13, BCM12}, we choose a background field $x\in M$ and define
\begin{equation}
\label{class:glob_terms}
\Sc\surgR\coloneqq\varint_{\Sigma_3}\bigg(\Big(\hat{R}^i\sur\Big)_j(x; \hat{\mathbf{X}})\Omega_{il}\hat{\mathbf{X}}^l dx^j+\Big(\hat{R}^i\sur\Big)_{\Bar{j}}(x; \hat{\mathbf{X}})\Omega_{il}\hat{\mathbf{X}}^l dx^{\Bar{j}}\bigg)=\Sc_R+\Sc_{\bar{R}}.
\end{equation}
The integrand is a well-defined term of degree 3, since we assigned degree 2 to the symplectic form and $\hat{R}\sur$ is a 1-form on $M$. After integration, $\Sc\surgR$ is then of total degree 0.
The term $\hat{R}\sur$ was introduced in Remark \ref{rmk_R}. However, its connection with the globalization procedure is not yet clear. To explain it, we introduce the \textit{classical Grothendieck connection} adapted to our case (see \cite{CF01}).
\begin{defn}[Classical Grothendieck connection]
Given a holomorphic formal exponential map $\varphi$, we can define the associated \textit{classical Grothendieck connection} on $\reallywidehat{\Sym}^\bullet(T^{\vee1,0}M)$, given by $\Gr \coloneqq d_M+R$, where $d_M$ is the sum of the holomorphic and antiholomorphic Dolbeault differentials on $M$ and $R\in \Omega^1\Big(M, \Der\Big(\reallywidehat{\Sym}^\bullet(T^{\vee1,0}M)\Big)\Big)$. Using local coordinates $\{x\}$ on the base and $\{y\}$ on the fibers, we have $R=R_j(x;y)dx^j+R_{\bar{j}}(x;y)dx^{\bar{j}}$, where $R_j=R^i_j(x;y)\frac{\partial}{\partial y^i}$ and $R_{\bar{j}}=R^i_{\bar{j}}(x;y)\frac{\partial}{\partial y^i}$ with
\begin{equation}
\begin{split}
&R^i_j(x;y)dx^j:=-\bigg[\bigg(\frac{\partial \varphi}{\partial y}\bigg)^{-1}\bigg]^i_p\frac{\partial \varphi^p}{\partial x^j}dx^j,\\
&R^i_{\Bar{j}}(x;y)dx^{\Bar{j}}:=-\bigg[\bigg(\frac{\partial \varphi}{\partial y}\bigg)^{-1}\bigg]^i_p\frac{\partial \varphi^p}{\partial x^{\Bar{j}}}dx^{\Bar{j}}.
\end{split}
\end{equation}
\end{defn}
Note that $R^i_j(x;y)$ and $R^i_{\bar{j}}(x;y)$ are formal power series in the second argument, namely
\begin{equation}
\begin{split}
R^i_j(x;y)&=\sum^{\infty}_{k=0} R^i_{j;j_1,\dots,j_k}(x)y^{j_1}\dots y^{j_k},\\
R^i_{\bar{j}}(x;y)&=\sum^{\infty}_{k=0} R^i_{{\bar{j}};j_1,\dots,j_k}(x)y^{j_1}\dots y^{j_k}.
\end{split}
\end{equation}
\begin{rmk}
\label{class:RMK_Grothendieck_prop}
The classical Grothendieck connection has a couple of important properties:
\begin{itemize}
\item It is flat, which can be rephrased by saying that the following equation is satisfied
\begin{equation}
\label{class:flatnessR}
d_MR+\frac12[R,R]=0.
\end{equation}
\item A section $\sigma$ is closed under $\mathcal{D}_{\text{G}}$, i.e. $\mathcal{D}_{\text{G}}\sigma=0$, if and only if $\sigma=\mathrm{T}\varphi^*_xf$ for some $f\in \mathcal{C}^\infty(M)$.
\end{itemize}
In more down-to-earth terms, the second property says that the classical Grothendieck connection selects precisely those sections which are global.
\end{rmk}
Finally, we can clarify the relation between $\hat{R}\sur$ and the Grothendieck connection. The components $\Big(\hat{R}\sur^i\Big)_j(x;\hat{\mathbf{X}})$ and $\Big(\hat{R}\sur^i\Big)_{\bar{j}}(x;\hat{\mathbf{X}})$ are given by the components of the classical Grothendieck connection $R^i_j(x;y)$ and $R^i_{\bar{j}}(x;y)$ evaluated in the second argument at $\hat{\mathbf{X}}$.
Having set up all the necessary tools, we can compute how $\hat{\Sc}\sur$ varies when we change the background $x\in M$. On a closed manifold, we have
\begin{equation}
\label{dx_eq}
d_M \hat{\Sc}\sur=-(\Sc\surgR, \hat{\Sc}\sur),
\end{equation}
which follows from the properties of the Grothendieck connection together with the fact that $\Sc\surg=\mathrm{T}\varphi^*_x\Sc\sur$.
The above identities can be collected in a nicer way via the following definition.
\begin{defn}[Formal global action] The \textit{formal global action} for the model is defined by
\begin{equation}
\label{glob_action}
\begin{split}
\Tilde{\Sc}\surg&\coloneqq\varint_{\Sigma_3}\bigg(\frac{1}{2}\Omega_{ij}\hat{\mathbf{X}}^id\hat{\mathbf{X}}^j+\Big(\hat{R}^i\sur\Big)_j(x; \hat{\mathbf{X}})\Omega_{il}\hat{\mathbf{X}}^l dx^j+\Big(\hat{R}^i\sur\Big)_{\Bar{j}}(x; \hat{\mathbf{X}})\Omega_{il}\hat{\mathbf{X}}^l dx^{\Bar{j}}\bigg)\\
&=\hat{\Sc}\sur+\underbrace{\Sc_R+\Sc_{\bar{R}}}_{\coloneqq\Sc\surgR}.
\end{split}
\end{equation}
\end{defn}
The formal global action satisfies the \textit{differential Classical Master Equation} (dCME):
\begin{equation}
\label{class:dCME}
d_M\Tilde{\Sc}\surg+\frac{1}{2}(\Tilde{\Sc}\surg,\Tilde{\Sc}\surg)=0.
\end{equation}
\begin{rmk}
Note that $\Tilde{\Sc}\surg$ is an inhomogeneous form over $M$, where $\hat{\Sc}\sur$ is a 0-form and $\Sc\surgR$ is a 1-form. Therefore, Eq. \eqref{class:dCME} splits into a 0-form, a 1-form and a 2-form part. Specifically, the 0-form part
\begin{equation}
(\hat{\Sc}\sur,\hat{\Sc}\sur)=0,
\end{equation}
is the usual CME. The 1-form part:
\begin{equation}
d_M\hat{\Sc}\sur+(\Sc\surgR,\hat{\Sc}\sur)=0,
\end{equation}
means that $\hat{\Sc}\sur$ is a global object (see Remark \ref{class:RMK_Grothendieck_prop}). The 2-form part
\begin{equation}
d_M\Sc\surgR+\frac{1}{2}(\Sc\surgR,\Sc\surgR)=0,
\end{equation}
means that the operator $\mathcal{D}_G$ is a flat connection (see Eq. (\ref{class:flatnessR})). Explicitly, we have
\begin{align}
&d_x\Sc_R+\frac12(\Sc_R,\Sc_R)=0 \label{dcme_1},\\
&d_x\Sc_{\bar{R}}+\frac12(\Sc_R,\Sc_{\bar{R}})=0\label{dcme_2},\\
&d_{\bar{x}}\Sc_R+\frac12(\Sc_{\bar{R}},\Sc_R)=0 \label{dcme_3},\\
&d_{\bar{x}}\Sc_{\bar{R}}+\frac12(\Sc_{\bar{R}},\Sc_{\bar{R}})=0. \label{dcme_4}
\end{align}
\end{rmk}
Let $\Sigma_3$ be (again) a manifold with boundary. The BV-BFV theory on $\Tilde{\mathcal{F}}\surg$ furnishes the cohomological vector field $Q\surg$. Moreover, by using the lift of $\hat{R}\sur$, we can define
\begin{equation}
\Tilde{Q}\surg=Q\surg+\hat{R}\sur.
\end{equation}
Then, the \textit{modified differential Classical Master Equation} (mdCME) is satisfied:
\begin{equation}
\label{mdcme}
\iota_{\Tilde{Q}\surg}\omega\surg=\delta \Tilde{\Sc}\surg+\pi^*\alpha^{\partial}_{\partial \Sigma_3, x},
\end{equation}
where
\begin{equation}
\begin{split}
\Tilde{Q}\surg=\varint\sur\bigg(-d\hat{\mathbf{X}}^m\frac{\delta}{\delta \hat{\mathbf{X}}^m}&-\Omega^{pq}\frac{\delta \Big(\hat{R}^i\sur\Big)_j(x; \hat{\mathbf{X}})}{\delta \hat{\mathbf{X}}^p}\Omega_{il}\hat{\mathbf{X}}^l dx^j\frac{\delta}{\delta \hat{\mathbf{X}}^q}-
\Big(\hat{R}^i\sur\Big)_j(x; \hat{\mathbf{X}})dx^j\Omega_{ip}\frac{\delta}{\delta \hat{\mathbf{X}}^p}\\
&-\Omega^{pq}\frac{\delta \Big(\hat{R}^i\sur\Big)_{\bar{j}}(x; \hat{\mathbf{X}})}{\delta \hat{\mathbf{X}}^p}\Omega_{il}\hat{\mathbf{X}}^l dx^{\Bar{j}}\frac{\delta}{\delta \hat{\mathbf{X}}^q}-
\Big(\hat{R}^i\sur\Big)_{\bar{j}}(x; \hat{\mathbf{X}})dx^{\Bar{j}}\Omega_{ip}\frac{\delta}{\delta \hat{\mathbf{X}}^p}\bigg).
\end{split}
\end{equation}
In preparation for the comparisons we will draw in the following section, we redefine the components $\Big(\hat{R}^i\sur\Big)_j$ and $\Big(\hat{R}^i\sur\Big)_{\bar{j}}$ by rescaling their Taylor coefficients with a factor $1/(k+1)!$ as
\begin{equation}
\label{redef_R}
\begin{split}
\Big(\hat{R}^i\sur\Big)_j(x;\hat{\mathbf{X}})&=\sum^{\infty}_{k=0} \frac{1}{(k+1)!}\hat{R}^i_{j;j_1,\dots,j_k}(x)\hat{\mathbf{X}}^{j_1}\dots \hat{\mathbf{X}}^{j_k},\\
\Big(\hat{R}^i\sur\Big)_{\bar{j}}(x;\hat{\mathbf{X}})&=\sum^{\infty}_{k=0}\frac{1}{(k+1)!} \hat{R}^i_{{\bar{j}};j_1,\dots,j_k}(x)\hat{\mathbf{X}}^{j_1}\dots \hat{\mathbf{X}}^{j_k}.
\end{split}
\end{equation}
\section{Comparison with the original Rozansky--Witten model}
\label{sec:comp_orig_RW}
In this section, we show that the globalized model we have just constructed reduces to the RW model (Section \ref{sec:RW_model}) and, moreover, provides a globalization thereof.
In order to compare these models effectively, we need to be more explicit about the terms involved in the classical Grothendieck connection. First, we discuss the choice of holomorphic formal exponential map in more detail. Since our target is a symplectic manifold, we choose the formal exponential map preserving the symplectic form, considered in \cite{QZ15}, and adapt it to our case, i.e.
\begin{equation}
\label{comp:exp_map_zq}
\varphi^{i}=x^i+y^i-\frac{1}{2}\Chr{i}{j}{k}y^jy^k+\bigg\{-\frac{1}{6}\partial_c\Chr{i}{j}{k}+\frac{1}{3}\Chr{i}{m}{c}\Chr{m}{j}{k}-\frac{1}{24}\Riem{i}{c}{j}{k}\bigg\}y^cy^jy^k+O(y^4),
\end{equation}
where $\Riem{i}{c}{j}{k}=(\Omega^{-1})^{bi}R^{\hspace{2mm} a}_{bc\; k}\Omega_{aj}$.
The Grothendieck connection is then
\begin{equation}
\label{Grothendieck_con}
\Gr=dx^i\frac{\partial}{\partial x^i}+dx^{\bar{i}}\frac{\partial}{\partial x^{\bar{i}}}+
dx^j\Big(R\sur\Big)_j +dx^{\Bar{j}}\Big(R\sur\Big)_{\Bar{j}},
\end{equation}
where the third term on the right hand side was computed in \cite{QZ15},
\begin{equation}
\Big(\hat{R}^i\sur\Big)_{j} dx^j=-\bigg[\bigg(\frac{\partial \varphi}{\partial y}\bigg)^{-1}\bigg]^i_p\frac{\partial \varphi^p}{\partial x^j}dx^j=-\bigg[dx^j\bigg(\delta^i_j+\Chr{i}{k}{j}y^k-\bigg(\frac{1}{8}R^{\hspace{1.5mm} i}_{j\; ks}+\frac{1}{4}R^{\hspace{2.5mm} i}_{jk\; s}\bigg)y^ky^s\bigg)\frac{\partial}{\partial y^i}+\dots\bigg],
\end{equation}
whereas the fourth term is
\begin{equation}
\begin{split}
\Big(\hat{R}^i\sur\Big)_{\bar{j}}dx^{\Bar{j}}&=-\bigg[\bigg(\frac{\partial \varphi}{\partial y}\bigg)^{-1}\bigg]^i_p\frac{\partial \varphi^p}{\partial x^{\Bar{j}}}dx^{\Bar{j}} =\bigg[-\frac{1}{2}\Chr{p}{ab}{,\Bar{j}}y^ay^bdx^{\Bar{j}}\bigg][\delta^i_p+\dots] =-\bigg[-\frac{1}{2}\Chr{i}{ab}{,\Bar{j}}y^ay^bdx^{\Bar{j}}+\dots\bigg]\\
&=\Riem{i}{a}{b}{\bar{j}}y^ay^bdx^{\bar{j}}-\dots .
\end{split}
\end{equation}
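The structure of these expansions can be checked in a one-dimensional toy model, where all curvature contributions vanish; the quadratic coefficient of the holomorphic component, being built from the curvature, must then drop out, while the constant and linear terms reproduce $-\delta^i_j$ and $-\Chr{i}{k}{j}y^k$. A minimal sympy sketch (assuming a single coordinate $x$ and a hypothetical scalar Christoffel symbol $\Gamma(x)$, not the full multi-index computation):

```python
import sympy as sp

x, y = sp.symbols('x y')
G = sp.Function('Gamma')(x)  # hypothetical scalar Christoffel symbol in one dimension

# One-dimensional analogue of the QZ-type formal exponential map
# (curvature term absent), truncated at third order in the fiber coordinate y
phi = x + y - sp.Rational(1, 2)*G*y**2 \
    + (-sp.Rational(1, 6)*sp.diff(G, x) + sp.Rational(1, 3)*G**2)*y**3

# R(x; y) = -[(d phi/d y)^(-1)] d phi/d x, expanded as a power series in y
R = sp.expand(sp.series(-sp.diff(phi, x)/sp.diff(phi, y), y, 0, 3).removeO())

print(sp.simplify(R.coeff(y, 0)))  # the leading -delta term
print(sp.simplify(R.coeff(y, 1)))  # the -Gamma*y term
print(sp.simplify(R.coeff(y, 2)))  # curvature terms: vanish in one dimension
```

The constant coefficient comes out as $-1$, the linear one as $-\Gamma(x)$, and the quadratic one vanishes identically, consistent with the quadratic terms above being purely curvature contributions.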
Considering the terms coming from the classical Grothendieck connection and the redefinition \eqref{redef_R}, we can re-write the formal global action \eqref{glob_action} as
\begin{equation}
\label{class:expl_S_global}
\Tilde{\Sc}\surg=\varint\sur\bigg(\frac12 \Omega_{ij}\hat{\mathbf{X}}^id\hat{\mathbf{X}}^j-\frac12\Chr{i}{j}{k}\hat{\mathbf{X}}^k\Omega_{il}\hat{\mathbf{X}}^ldx^j-\delta^i_j\Omega_{il}\hat{\mathbf{X}}^ldx^j+\dots+\frac{1}{3!}\Riem{i}{k}{s}{\bar{j}}\hat{\mathbf{X}}^k\hat{\mathbf{X}}^s\Omega_{il}\hat{\mathbf{X}}^ldx^{\bar{j}}+\dots\bigg).
\end{equation}
For convenience, we recall the RW action \cite{RW96}
\begin{equation}
\label{class:S_RW_original}
S_{\text{RW}}=\varint\sur\frac12\frac{1}{\sqrt{h}}\epsilon^{\mu\nu\rho}\bigg(\Omega_{IJ}\chi_\mu^I\nabla_\nu\chi_\rho^J-\frac13\Omega_{IJ}R^J_{KL\Bar{M}}\chi_\mu^I\chi_\nu^K\chi_\rho^L\eta^{\Bar{M}}+\frac13(\nabla_L\Omega_{IK})(\partial_\mu\phi^I_\perp)\chi_\nu^K\chi_\rho^L \bigg).
\end{equation}
If we assume that the connection is compatible with the symplectic form, the third term in the RW action \eqref{class:S_RW_original} drops out, and we are left with the first two terms. By associating $\hat{\mathbf{X}}^i\leftrightarrow \chi^I$ and $dx^{\bar{j}}\leftrightarrow \eta^{\bar{M}}$, we can sum up the comparison in Table \ref{class:Tab.model_RW}.
\begin{table}[h!]
\centering
\begin{tabular}{|lcc|}
\toprule
& Kinetic term & Interaction term \\
\midrule
Original RW model & $\frac12\frac{1}{\sqrt{h}}\epsilon^{\mu\nu\rho}\Omega_{IJ}\chi_\mu^I\nabla_\nu\chi_\rho^J$ & $-\frac{1}{3!}\Omega_{IJ}R^J_{KL\Bar{M}}\chi_\mu^I\chi_\nu^K\chi_\rho^L\eta^{\Bar{M}}$ \\
Our model & $\frac12 \Omega_{ij}\hat{\mathbf{X}}^id\hat{\mathbf{X}}^j-\frac12\Chr{i}{j}{k}\hat{\mathbf{X}}^k\Omega_{il}\hat{\mathbf{X}}^ldx^j$ & $\frac{1}{3!}\Riem{i}{k}{s}{\bar{j}}\hat{\mathbf{X}}^k\hat{\mathbf{X}}^s\Omega_{il}\hat{\mathbf{X}}^ldx^{\bar{j}}$ \\
\bottomrule
\end{tabular}
\caption{Comparison between kinetic term and interaction term for the RW theory and our model.}
\label{class:Tab.model_RW}
\end{table}
The sign discrepancy comes from having defined the connection as $\nabla=d-\Gamma$, which gives a negative sign in front of $\Chr{i}{j}{k}$ (see Eq. \eqref{comp:exp_map_zq}).
Moreover, when the curvature has no $(2,0)$-part (which can happen when the metric is Hermitian), the remaining terms in our model are just the perturbative expansion of $\Riem{i}{k}{s}{\bar{j}}$ around $x$. If we cut off the expansion at first order, we recover the original RW model.
\section{Comparison with other globalization constructions}
\label{sec:comp_other}
In the next sections, we compare our globalization model with three other constructions: the first \cite{CLL17} uses tools of derived geometry to linearize the space of fields in the neighbourhood of a constant map, together with the Fedosov connection \cite{Fe49}; the second \cite{Ste17} extends the first procedure to manifolds with boundary; the third \cite{QZ09,QZ10,KQZ13} uses an approach similar to ours.
\subsection{Comparison with the CLL construction}
\label{comp:sec:summary}
We compare our model with the formulation of the RW model constructed in \cite{CLL17} in the setting of derived geometry (see Appendix \ref{app:derived_geometry}).
Let $\Sigma_3$ be a closed $3$-dimensional manifold and $M$ a holomorphic symplectic manifold with a non-degenerate holomorphic $2$-form $\omega$.
To describe the fields we use the language of $\Linf$-spaces (see \cite{Co11a,Co11b} for an introduction) and we define the space of fields as
\begin{equation}
\Maps(\MdR,M_{\Bar{\partial}}),
\end{equation}
where $\MdR$ is the elliptic ringed space equipped with a sheaf of differential forms over $\Sigma_3$, i.e.
$\Omega^{\bullet}(\Sigma_3)$ and $M_{\Bar{\partial}}=(M,\g_M)$ is a sheaf of $\Linf$-algebras, where $\g_M=\Omega^{\bullet,\bullet}(M)\otimes T^{1,0}M[-1]$ with $T^{1,0}M$ the holomorphic tangent bundle.
Since the critical points of the action functional are constant maps from $\Sigma_3$ to $M$, we are going to study $\Maps(\MdR,M_{\Bar{\partial}})$ in the neighbourhood of a constant map $x\in\Maps(\MdR,\Xd)$, namely
\begin{equation}
\mathcal{F}_{\text{CLL}}\coloneqq \reallywidehat{\Maps}(\MdR,\Xd)=\Omega^{\bullet}(\Sigma_3)\otimes \g_M[1],
\end{equation}
with $\reallywidehat{\Maps}(\MdR,\Xd)$ defined as in \cite{Co11a,Co11b}.
Having specified the space of fields, the shifted symplectic structure is given by
\begin{equation}
\label{class:sympl_cll}
\begin{split}
\braket{-,-}:\ &\FDg\otimes_{\Omega^{\bullet,\bullet}(M)}\FDg \rightarrow \Omega^{\bullet,\bullet}(M)[-1]\\
&\braket{\alpha\otimes g_1,\beta\otimes g_2}:=\underbrace{\omega(g_1,g_2)}_{\text{sympl. struct. on $M$}}\,\,\,\varint\sur\alpha\wedge \beta,
\end{split}
\end{equation}
where $\Omega^{\bullet,\bullet}(M)=\Gamma\left(\bigwedge^{\bullet}T^{\vee}M\right)$ denotes the space of sections of the exterior algebra of the cotangent bundle.
Since $C^{\bullet}(\g_M):=\reallywidehat{\Sym}^\bullet_{\Omega^{\bullet,\bullet}(M)}(\g^{\vee}_M[1])=\Omega^{\bullet,\bullet}(M)\otimes_{\mathcal{C}^{\infty}(M)}\reallywidehat{\Sym}^\bullet_{\mathcal{C}^{\infty}(M)}(T^{\vee1,0}M)$, to construct the action functional and to find our $L_{\infty}$-algebra we can use a procedure similar to Fedosov's construction of a connection on a symplectic manifold \cite{F94}.
Let us denote the sections of the \textit{holomorphic Weyl bundle} on $M$ by
\begin{equation}
\mathcal{W}=\Omega^{\bullet,\bullet}(M)\otimes_{\mathcal{C}^{\infty}(M)}\CS[\![\hbar ]\!]
\end{equation}
where $\CS[\![\hbar ]\!]$ is the completed symmetric algebra over $T^{\vee1,0}M$, the holomorphic cotangent bundle, which has local basis $\{y^i\}$ with respect to the local holomorphic coordinates $\{x^i\}$. We call the sub-bundle $\Omega^{p,q}(M)\otimes \Sym^r(T^{\vee1,0}M)$ of $\W$ its $(p,q,r)$ component; in particular, we refer to $r$ as the \textit{weight}. The parameter $\hbar$ is assigned weight 2.
\begin{prp}[\cite{CLL17}]
\label{prp_flatness}
There is a connection on the holomorphic Weyl bundle of the following form
\begin{equation}
\label{Fedosov_conn}
\mathcal{D}_\mathrm{F}=\nabla-\delta+\frac{1}{\hbar}[I,-]_{\W},
\end{equation}
which is flat modulo $\hbar$, with $[-,-]_{\mathcal{W}}$ defined as in \cite{CLL17}. Here $I$ is a $1$-form valued section of the Weyl bundle of weight $\geq 3$, i.e. $I\in \bigoplus_{r\geq 3}\Gamma(\Sym^r(T^{\vee 1,0}M)\otimes T^\vee M)$, $\nabla$ is the extension to $\mathcal{W}$ of a torsion-free connection on $T^{1,0}M$ compatible with both the complex structure and the holomorphic symplectic form, and $\delta=dx^i\wedge \frac{\partial}{\partial y^i}$ is an operator on $\mathcal{W}$.
\end{prp}
The connection $\Fe$ is called the \textit{Fedosov connection}, and it provides the $L_\infty$-structure on $\mathfrak{g}_M$. In these terms, the action can be written as
\begin{equation}
\label{class:dg_action_2}
\Sc_{\text{CLL}}=\frac{1}{2}\braket{d\sur\alpha,\alpha}+\sum^{\infty}_{k=0}\frac{1}{(k+1)!}\braket{\ell_k(\alpha^{\otimes k}),\alpha}
\end{equation}
with $\alpha \in \mathcal{F}_{\text{CLL}}$, $\braket{-,-}$ defined as in \eqref{class:sympl_cll}, $\ell_k$ the higher brackets of the $\Linf$-algebra, and $d_{\Sigma_3}$ the de Rham differential on the source $\Sigma_3$. We can read $\ell_0$ off the Fedosov connection in \eqref{Fedosov_conn}, i.e.
\begin{equation}
\ell_0=-dx^i\frac{\partial}{\partial y^i}.
\end{equation}
The $\Linf$-products $\ell_1$ and $\ell_2$ are computed in the next section, when we compare the Fedosov connection with the classical Grothendieck connection.
\begin{rmk}
The action in \eqref{class:dg_action_2} satisfies the CME $(\Sc_{\text{CLL}},\Sc_{\text{CLL}})=0$ (see \cite[Proposition 2.16]{CLL17}). Moreover, in \cite{CLL17} it was observed that this construction is the formal version of the original RW model in the case when the $(2,0)$-part of the curvature is zero (see \cite[Section 2.3]{CLL17}).
\end{rmk}
\subsubsection{Comparison between the Fedosov connection and the classical Grothendieck connection}
The sufficient condition for the flatness of $\Fe$ (see the proof of Proposition \ref{prp_flatness} in \cite{CLL17}) implies that $I$ satisfies
\begin{equation}
\label{eqI}
I=\delta^{-1}(R+\nabla I)+\frac{1}{\hbar}\delta^{-1}I^2,
\end{equation}
where $\delta^{-1}=y^i\cdot \iota_{\partial_{x^i}}$ (up to a normalization factor) is another operator on $\mathcal{W}$ and $R$ is the curvature tensor.
\begin{rmk}
Since $I$ is a $1$-form valued section of $\W$, we can decompose it into its holomorphic and antiholomorphic components. In particular, the antiholomorphic component is the Taylor expansion of the Atiyah class, as noted in \cite{CLL17}. In the case $R^{2,0}=0$, the $\Linf$-algebra is fully encoded by the Taylor expansion of the Atiyah class, as first noted by Kapranov in \cite{KA99}.
\end{rmk}
Since the operator $\delta^{-1}$ increases the weight by $1$, while $\nabla$ preserves the weight and $I$ has at least weight $3$, we can find a solution of the above equation with the following leading term (cubic term)\footnote{See also \cite{F00,GLL17}.}:
\begin{equation}
\label{deltaR}
\delta^{-1}R=\frac18\big[-\Chr{}{ij}{k,r}+\Chr{}{si}{r}\Chr{}{pj}{k}\Omega^{sp}\big]y^i y^j y^r dx^k+\frac16\Riem{}{\Bar{k}r}{i}{j} y^i y^j y^r dx^{\Bar{k}}=\delta^{-1}R_t+\delta^{-1}\bar{R}.
\end{equation}
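Concretely, since $\delta^{-1}$ raises the weight by one while $\nabla$ preserves it, Eq. \eqref{eqI} can be solved recursively in the weight by the fixed-point iteration
\begin{equation}
I^{(0)}=\delta^{-1}R,\qquad I^{(n+1)}=\delta^{-1}\big(R+\nabla I^{(n)}\big)+\frac{1}{\hbar}\delta^{-1}\big(I^{(n)}\big)^2,
\end{equation}
each step of which determines $I$ to one higher weight, with \eqref{deltaR} as its cubic seed.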
Since the Fedosov connection requires the computation of $\frac{1}{\hbar}[I,-]_{\W}$, we compute this commutator for the leading order term of $I$, which is the cubic term we have just found. For the first term on the right hand side of Eq. (\ref{deltaR}) we have
\begin{equation}
\begin{split}
\frac{1}{\hbar}[\delta^{-1}R_t,-]_{\mathcal{W}}&=\bigg[\frac{1}{8}\bigg(-\Chr{}{rj}{k,q}+\Chr{}{s}{rq}\Chr{}{p}{jk}\Omega^{sp}\bigg)+\frac{1}{4}\bigg(-\Chr{}{qj}{k,r}+\Chr{}{sq}{k}\Chr{}{pj}{r}\Omega^{sp}\bigg)\bigg]\Omega^{qi}y^j y^r dx^k\frac{\partial}{\partial y^i}\\
&=\bigg[\frac18\bigg(-\Omega_{mr}\Chr{m}{j}{k,q}+\Omega_{mr}\Chr{m}{s}{k}\Chr{}{p}{jk}\Omega^{sp}\bigg)+\frac14\bigg(-\Omega_{mq}\Chr{m}{j}{k,r}+\Omega_{mq}\Chr{m}{s}{k}\Chr{}{pj}{r}\Omega^{sp}\bigg)\bigg]\times\\
&\hspace{10cm} \times\Omega^{qi}y^j y^r dx^k\frac{\partial}{\partial y^i}\\
&=\bigg[\frac18\Riem{\hspace{1.75mm} i}{k\ }{r}{j}+\frac14\Riem{\hspace{2.9mm}i}{k}{r\, }{j}\bigg]y^j y^r dx^k\frac{\partial}{\partial y^i}.
\end{split}
\end{equation}
For the second term we have
\begin{equation}
\frac{1}{\hbar}\big[\delta^{-1}\bar{R},-\big]_{\W}=\frac12\Chr{i}{j}{k,\Bar{r}}y^j y^k dz^{\Bar{r}}\frac{\partial}{\partial y^i}.
\end{equation}
After renaming some indices, the Fedosov connection is then
\begin{equation}
\label{class:Fed_conn_expl}
\Fe=d_x+d_{\bar{x}}-dx^j\frac{\partial}{\partial y^j}-dx^j\Chr{i}{k}{j}y^k\frac{\partial}{\partial y^i}+dx^j\bigg(\frac18R^{\hspace{1.5mm} i}_{j\; ks}+\frac14R^{\hspace{2.5mm} i}_{jk\; s}\bigg)y^ky^s\frac{\partial}{\partial y^i}+\frac12dx^{\Bar{j}}\Chr{i}{ks}{,\Bar{j}}y^ky^s\frac{\partial}{\partial y^i}+\dots.
\end{equation}
More explicitly,
\begin{equation}
\label{class:linf_prod}
\begin{split}
\ell_1&=-dx^j\Chr{i}{k}{j}y^k\frac{\partial}{\partial y^i},\\
\ell_2&=dx^j\bigg(\frac18R^{\hspace{1.5mm} i}_{j\; ks}+\frac14R^{\hspace{2.5mm} i}_{jk\; s}\bigg)y^ky^s\frac{\partial}{\partial y^i}+\frac12dx^{\Bar{j}}\Chr{i}{ks}{,\Bar{j}}y^ky^s\frac{\partial}{\partial y^i}.
\end{split}
\end{equation}
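Note that $d_x$ and $\ell_1$ combine into the lift of the holomorphic part of the connection $\nabla$ to the formal variables $y$,
\begin{equation}
d_x+\ell_1=dx^j\bigg(\frac{\partial}{\partial x^j}-\Chr{i}{k}{j}y^k\frac{\partial}{\partial y^i}\bigg),
\end{equation}
so that $\ell_1$ encodes the Christoffel symbols of $\nabla$, while $\ell_2$ collects the curvature terms appearing in \eqref{class:Fed_conn_expl}.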
\begin{rmk}
The first terms of the Fedosov connection, written explicitly in (\ref{class:Fed_conn_expl}), coincide with the first terms of the classical Grothendieck connection (\ref{Grothendieck_con}). Furthermore, by substituting the explicit expressions of $\ell_1$ and $\ell_2$ into the action $\Sc_{\text{CLL}}$ (\ref{class:dg_action_2}), we see that it coincides with the action $\Tilde{\Sc}\surg$ (\ref{class:expl_S_global}).
\end{rmk}
\subsubsection{Comparison between the CLL space of fields and globalization space of fields}
By adapting the argument of \cite[Section 6.1]{Mo20} to our context, we can extend the classical Grothendieck connection $\Gr$ to the complex
\begin{equation}
\label{extensioncomplex}
\Gamma\bigg(\bigwedge\nolimits^{\bullet}T^\vee M\otimes \CS \bigg),
\end{equation}
which is the algebra of functions on the formal graded manifold
\begin{equation}
T[1]M\bigoplus T^{1,0}M.
\end{equation}
This graded manifold is turned into a differential graded manifold by the classical Grothendieck connection $\Gr$. Moreover, since $\Gr$ vanishes on the body of the graded manifold, we can linearize at $x\in M$ and we get
\begin{equation}
T_x[1]M\bigoplus T_x^{1,0}M.
\end{equation}
On this graded manifold, we have a curved $\Linf$-structure (which is the same as $\mathfrak{g}_M[1]$) and Eq. (\ref{extensioncomplex}) can be interpreted as the Chevalley--Eilenberg complex of the aforementioned $\Linf$-algebra. Then, the space of fields for the globalized theory can be rewritten as
\begin{equation}
\label{extendedsof}
\Tilde{\mathcal{F}}\surg=\Omega^{\bullet}(\Sigma_3)\otimes \Omega^{\bullet,\bullet}(M)\otimes T^{1,0}_xM,
\end{equation}
which coincides with $\FDg$ upon linearizing the holomorphic tangent bundle at $x\in M$, since $\mathcal{D}_\text{F}$ vanishes on $M$.
\begin{rmk}
The idea that the classical Grothendieck connection and the Fedosov connection coincide is not new, in particular see \cite[Remark 3.6]{CMoW19} and \cite[Section 2.3]{CLL17}.
\end{rmk}
\begin{rmk}
Finally, note that in \cite{CLL17} the source manifold $\Sigma_3$ was assumed to be closed. As explained above (see Section \ref{class:sec:var_class_back}), our construction remains valid when $\partial\Sigma_3\neq \emptyset$. In the next section, we tackle this last setting by comparing our approach with \cite{Ste17}, where the derived geometric framework was implemented for manifolds with boundary.
\end{rmk}
\subsection{Comparison with Steffens' construction}
In \cite{Ste17}, Steffens applied the derived geometry approach of the last section to what he calls \textit{AKSZ theories of Chern--Simons type}: CS theory and RW theories. In particular, his BV formulation of the RW model is completely analogous to the one in \cite{CLL17}: same space of fields, $\Linf$-algebra, action, etc.
However, he goes a step further. He proves a \textit{formal AKSZ theorem} \cite[Theorem 2.4.1]{Ste17} in the context of derived geometry. His RW model is then shown to be an AKSZ theory by attaching degree 2 to the holomorphic symplectic form (as we did ourselves in Section \ref{sec:Classical_Theory}). Consequently, he provides a BV-BFV formulation for the RW model. The BFV action found in \cite{Ste17} is analogous to the action in (\ref{class:dg_action_2}) in one dimension less (as is customary with AKSZ theories). Even though the $\Linf$-products are not explicit in his construction, by using the ones in (\ref{class:linf_prod}), his BV-BFV formulation of the RW model is visibly identical to ours.
\subsection{Comparison with the (K)QZ construction}
Let $\Sigma_3$ be a 3-dimensional manifold and $M$ a hyperK{\"a}hler manifold with holomorphic symplectic form $\Omega$. Consider the symplectic graded manifold $\mathcal{M}\coloneqq T^{\vee0,1}[2]T^{\vee0,1}[1]M$ constructed out of $M$. It has the following coordinates: $X^i, X^{\Bar{i}}$ of degree 0 parametrizing $M$, $V^{\Bar{i}}$ of degree 1 parametrizing the fiber $T^{0,1}M$ and dual coordinates $P_{\Bar{i}}, Q_{\Bar{i}}$ of degree 2 and 1, respectively. The symplectic form is
\begin{equation}
\label{class:QZ_sympl_form}
\omega_{\mathcal{M}}=dP_{\Bar{i}} \wedge dX^{\Bar{i}}+dQ_{\Bar{i}}\wedge dV^{\Bar{i}}+\frac{1}{2}\Omega_{ij}dX^i\wedge dX^j.
\end{equation}
In order to have a ghost degree 2 symplectic form, the authors assign degree 2 to $\Omega$. With this setup, in \cite{QZ10,QZ09, KQZ13}, K{\"a}llén, Qiu and Zabzine construct an AKSZ model
\begin{align}
\mathcal{F}_{\text{QZ}}&\coloneqq \Maps(T[1]\Sigma_3, T^{\vee0,1}[2]T^{\vee0,1}[1]M)\\
\label{comp:action_qz}
\Sc_{\text{QZ}}&=\varint_{T[1]\Sigma_3}d^3zd^3\theta\bigg(\mathbf{P}_{\Bar{i}}D\mathbf{X}^{\Bar{i}}+\mathbf{Q}_{\Bar{i}}D\mathbf{V}^{\Bar{i}}+\frac{1}{2}\Omega_{ij}\mathbf{X}^iD\mathbf{X}^j+\mathbf{P}_{\Bar{i}}\mathbf{V}^{\Bar{i}}\bigg)
\end{align}
endowed with a cohomological vector field
\begin{equation}
Q=\varint_{T[1]\Sigma_3}d^3zd^3\theta \bigg(D\mathbf{P}_{\Bar{i}}\frac{\partial}{\partial \mathbf{P}_{\Bar{i}}}+D\mathbf{Q}_{\Bar{i}}\frac{\partial}{\partial \mathbf{Q}_{\Bar{i}}}+D\mathbf{V}^{\Bar{i}}\frac{\partial}{\partial \mathbf{V}^{\Bar{i}}}+ D\mathbf{X}^i\frac{\partial}{\partial \mathbf{X}^i}+D\mathbf{X}^{\Bar{i}}\frac{\partial}{\partial \mathbf{X}^{\Bar{i}}}+\mathbf{P}_{\Bar{i}}\frac{\partial}{\partial \mathbf{Q}_{\Bar{i}}}+\mathbf{V}^{\Bar{i}}\frac{\partial}{\partial \mathbf{X}^{\Bar{i}}}\bigg),
\end{equation}
where we assign to the source manifold $T[1]\Sigma_3$ coordinates $\{z^i\}$ of ghost degree 0 and coordinates $\{\theta^i\}$ of ghost degree 1.
\begin{rmk}
\label{rmk_comp_zq_rw}
With a suitable gauge fixing, consisting of a particular choice of Lagrangian submanifolds, the action $\Sc_{\text{QZ}}$ reduces to the RW model up to a factor of $\hbar$ (see \cite[Section 4]{QZ09}):
\begin{equation}
\Sc_{\text{QZ}}\bigg|_{\text{GF}}=\frac{1}{2}\varint d^3z \bigg(\Omega_{ij} X^i_{(1)}\wedge d^{\nabla} X^{j}_{(1)}-\frac{1}{3}R_{k\Bar{k}j}^i X^k_{(1)}\wedge \Omega_{li}X^l_{(1)}\wedge X^j_{(1)}V^{\bar{k}}_{(0)}\bigg),
\end{equation}
with $d^{\nabla}X^i_{(1)}=dX^i_{(1)}+\Chr{i}{j}{k}dX^j_{(0)}X^k_{(1)}$. Note that the only fields left are the even scalar $X^i_{(0)}$, the odd 1-form $X^i_{(1)}$ and the odd scalar $V^{\bar{k}}_{(0)}$. A quick glance at our expression for the RW model in (\ref{class:expl_S_global}) (assuming again that the $(2,0)$-part of the curvature is zero and that we cut off the perturbative expansion of the $(1,1)$-part at $\Riem{i}{k}{s}{\bar{j}}$) suggests the association $V^{\bar{k}}_{(0)}\Leftrightarrow dx^{\bar{k}}$.
We will comment more on this later.
\end{rmk}
By expanding $\mathbf{X}^i$ through the geodesic exponential map and by pulling back $\omega_{\mathcal{M}}$ as well as $\Sc_{\text{QZ}}$ through it, the authors find
\begin{align}
\exp^*\omega_{\mathcal{M}}&=dP_{\Bar{i}} \wedge dX^{\Bar{i}}+dQ_{\Bar{i}}\wedge dV^{\Bar{i}}+\frac{1}{2}\Omega_{ij}(x)dy^i\wedge dy^j-\delta X^{\bar{i}}\delta \Theta_{\bar{i}}\\
\exp^*\Sc_{\text{QZ}}\bigg|_{\tilde{\mathbf{P}}}&=\varint_{T[1]\Sigma_3}d^3zd^3\theta \bigg(\tilde{{\mathbf{P}}}_{\Bar{i}}D \mathbf{X}^{\Bar{i}}+\mathbf{Q}_{\Bar{i}}D\mathbf{V}^{\Bar{i}}+\frac{1}{2}\Omega_{ij}\mathbf{y}^i D\mathbf{y}^j-\Tilde{\mathbf{P}}_{\Bar{i}}\mathbf{V}^{\Bar{i}}+\Theta_{\Bar{i}}(x;\mathbf{y})\mathbf{V}^{\Bar{i}}\bigg)
\end{align}
where $\Theta_{\Bar{i}}$ is of degree 2 and given by
\begin{equation}
\label{class:qz_theta}
\Theta_{\Bar{i}}(x;y)=\sum^{\infty}_{n=3}\frac{1}{n!}\nabla_{l_4}\dots \nabla_{l_n}R^{\hspace{3mm}k}_{\Bar{i}l_1\ l_3}\Omega_{kl_2}(x)y^{l_1}\dots y^{l_n},
\end{equation}
and $\Tilde{P}_{\Bar{i}}:=P_{\Bar{i}}+\Theta_{\Bar{i}}$.
After removing the \emph{spectator fields} (see \cite{QZ10,KQZ13}), the action becomes
\begin{equation}
\varint_{T[1]\Sigma_3}d^3zd^3\theta\bigg(\frac12\Omega_{ij}\mathbf{y}^iD\mathbf{y}^j+\Theta_{\bar{i}}(x;\mathbf{y})\mathbf{V}^{\bar{i}}\bigg),
\end{equation}
which further reduces to
\begin{equation}
\label{class:qz_action}
\varint_{T[1]\Sigma_3}d^3zd^3\theta\bigg(\frac12\Omega_{ij}\mathbf{y}^iD\mathbf{y}^j+\Theta_{\bar{i}}(x;\mathbf{y})V^{\bar{i}}_{(0)}\bigg)
\end{equation}
for degree reasons ($V^{\bar{i}}_{(0)}$ is an odd scalar). This action fails to satisfy the CME by a $\bar{\partial}$-exact term, due to $\Theta$ satisfying the Maurer--Cartan equation
\begin{equation}
\label{class:theta_MC}
\bar{\partial}_{[\bar{i}}\Theta_{\bar{j}]}=-(\Theta_{\bar{i}}, \Theta_{\bar{j}}),
\end{equation}
where $\bar{\partial}$ is the Dolbeault differential and $[\bar{i}\bar{j}]$ denotes antisymmetrization over the indices $\bar{i}$ and $\bar{j}$.
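Written out, with the convention that the antisymmetrization carries a factor of $\frac12$, Eq. \eqref{class:theta_MC} reads
\begin{equation}
\frac12\Big(\bar{\partial}_{\bar{i}}\Theta_{\bar{j}}-\bar{\partial}_{\bar{j}}\Theta_{\bar{i}}\Big)=-(\Theta_{\bar{i}},\Theta_{\bar{j}}).
\end{equation}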
The hyperK{\"a}hler structure is then relaxed. A new connection which still preserves $\Omega$ (crucial for the perturbative approach through the exponential map above) is found. However, since this connection is not Hermitian, the curvature of $\Gamma$ also exhibits a $(2,0)$-component. This complicates the exponential map, which can no longer be worked out at all orders as in (\ref{class:qz_theta}). In \cite{KQZ13}, the authors argue that a solution to this problem should originate from principles related to the globalization issues discussed in \cite{BLN02} and from the application of the Fedosov connection to perturbation theory on curved manifolds \cite{CF01}. In this paper, we furnish an affirmative answer to both of these ideas. In particular, as we have seen in Section \ref{comp:sec:summary}, the Fedosov connection allowed us to compute the terms in the $\Linf$-algebra and thus to work out the exponential map. In Section \ref{class:sec:var_class_back}, we have seen the Grothendieck connection accomplish the same in the context of formal geometry.
\begin{rmk}
We can compare the procedure above with our globalization construction by associating $V^{\bar{i}}_{(0)}$ with $dx^{\bar{i}}$. First, note that $\Big(R\sur\Big)_{\bar{i}}$ in Eq. (\ref{class:qz_theta}) matches the second term in Eq. \eqref{class:glob_terms}. Second, the action in \eqref{class:qz_action} coincides with our globalized action in \eqref{glob_action} if we ``forget" the $(2,0)$-part of the curvature. In particular, by associating $\bar{\partial}$ with $dx^{\bar{i}}\frac{\partial}{\partial x^{\bar{i}}}$, we can interpret the failure of \eqref{class:qz_action} to satisfy the CME, caused by the term in \eqref{class:theta_MC}, as a consequence of the action satisfying the $(1,1)$-part of the dCME (Eq. (\ref{class:dCME})).
\end{rmk}
We reserve the last remark of this section to making the association between $V^{\bar{i}}_{(0)}$ and $dx^{\bar{i}}$ precise, as well as clarifying their ``meaning", as promised in Remark \ref{rmk_comp_zq_rw}.
\begin{rmk}
\label{rmk_parameters}
As we have seen above, $V^{\bar{i}}_{(0)}$ and $dx^{\bar{i}}$ arise in two different contexts: the first is an odd scalar coordinate parametrizing the fibers of $T^{0,1}M$, while the second is introduced through the classical Grothendieck connection as well as the perturbative expansion.
Nevertheless, the association makes sense considering that $V^{\bar{i}}_{(0)}$ is interpreted as an odd harmonic zero mode in \cite{KQZ13}. In fact, recall from Section \ref{sec:globalization} that $x$ is the zero mode obtained from the Euler--Lagrange equation $D\mathbf{X}=0$. If we enlarge the complex (see Eq. \eqref{extensioncomplex}), the space of fields becomes (\ref{extendedsof}), meaning that $dx^{\bar{i}}\in T^{\vee 0,1}M$ is itself an odd zero mode. This association was first pointed out by Qiu and Zabzine in \cite{QZ12}.
The presence of these quantities has been known in the literature since the early days of the RW model and has deep consequences. Since they are odd, there can be at most as many of them as the dimension of $M$. As such, the perturbative expansion cannot be infinite but must stop at a certain order. This is a crucial difference between the CS and the RW theory, which was originally spotted in \cite{RW96} and attributed to the need for the RW theory to saturate the zero modes. According to Kontsevich \cite{Ko99}, the RW model can as a result be understood as an AKSZ model with ``parameters" (these parameters being $V^{\bar{i}}_{(0)}$ or $dx^{\bar{i}}$). In the same article, he presented a different perspective on this subject by pointing out that the RW invariants come from characteristic classes of holomorphic connections.
\end{rmk}
\section{$BF$-like formulation of the Rozansky--Witten model}
\label{sec:BF-like_formulation}
In order to quantize our globalized version of the RW model in the quantum BV-BFV framework \cite{CMR17}, we need to formulate the model as a $BF$-like theory. This can be done by exploiting the similarities between the RW theory and the CS theory. These similarities have also been crucial in the construction of \cite{Ste17}. There it was argued that the RW model could be split following an approach similar to the one of Cattaneo, Mnev and Wernli for the CS theory in \cite{CMnW17} (see also \cite{We18} for a more detailed exposition).
As shown in \cite{CLL17} (see Eq. (\ref{class:sympl_cll})), we have a pairing on $ \Tilde{\mathcal{F}}\surg$ given by the BV symplectic form which can be defined on homogeneous elements $\hat{\mathbf{Y}}\otimes g_1$ and $\hat{\mathbf{Z}}\otimes g_2$ as
\begin{equation}
\begin{split}
\braket{-,-}:\ & \Tilde{\mathcal{F}}\surg\otimes \Tilde{\mathcal{F}}\surg \rightarrow \Omega^{\bullet,\bullet}(M),\\
&\braket{\hat{\mathbf{Y}}\otimes g_1,\hat{\mathbf{Z}}\otimes g_2}:=\underbrace{\Omega(g_1,g_2)}_{\text{sympl. struct. on $M$}}\,\,\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\bigg(\hat{\mathbf{Y}}\wedge \hat{\mathbf{Z}}\bigg).
\end{split}
\end{equation}
By expanding $\hat{\mathbf{X}}\in \Tilde{\mathcal{F}}\surg$ as $\hat{\mathbf{X}}=\hat{\mathbf{X}}^ie_i$, we have
\begin{equation}
\braket{\hat{\mathbf{X}},\hat{\mathbf{X}}}=\Omega(e_i,e_j)\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\bigg(\hat{\mathbf{X}}^i\wedge \hat{\mathbf{X}}^j\bigg)=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\bigg(\Omega_{ij}\hat{\mathbf{X}}^i\wedge \hat{\mathbf{X}}^j\bigg).
\end{equation}
We can rewrite the globalized action (\ref{glob_action}) in the same way as in \cite{CLL17} (see the action in \eqref{class:dg_action_2}):
\begin{equation}
\Tilde{\Sc}\surg=\frac12\Big\langle\hat{\mathbf{X}}, D\hat{\mathbf{X}}\Big\rangle+\Big\langle\Big(\hat{R}\sur\Big)_j(x; \hat{\mathbf{X}})dx^j,\hat{\mathbf{X}}\Big\rangle+\Big\langle\Big(\hat{R}\sur\Big)_{\bar{j}}(x; \hat{\mathbf{X}})dx^{\bar{j}},\hat{\mathbf{X}}\Big\rangle,
\end{equation}
with
\begin{equation}
\label{class:coeff}
\begin{split}
\Big(\hat{R}\sur\Big)_j(x,\hat{\mathbf{X}})&=\sum^{\infty}_{k=0}\frac{1}{(k+1)!}\Big(\hat{R}_k\Big)_j(\hat{\mathbf{X}}^{\otimes k}),\\
\Big(\hat{R}\sur\Big)_{\bar{j}}(x,\hat{\mathbf{X}})&=\sum^{\infty}_{k=2}\frac{1}{(k+1)!}\Big(\hat{R}_k\Big)_{\bar{j}}(\hat{\mathbf{X}}^{\otimes k}).
\end{split}
\end{equation}
Now, similarly to the approach in \cite{CMnW17}, we assume that we can split the $\Linf$-algebra as
\begin{equation}
\mathfrak{g}[1]=\Omega^{\bullet,\bullet}(M)\otimes T^{\vee1,0}M=\Omega^{\bullet,\bullet}(M)\otimes V\oplus \Omega^{\bullet,\bullet}(M)\otimes W,
\end{equation}
with $V$ and $W$ two isotropic subspaces. We identify $W\cong V^\vee$ via the pairing (in particular, thanks to the holomorphic symplectic form). Consequently, the superfield splits as $\hat{\mathbf{X}}=\hat{\mathbf{A}}+\hat{\mathbf{B}}=\hat{\mathbf{A}}^i\xi_i+\xi^i\hat{\mathbf{B}}_i$ with $\xi_i\in V$ and $\xi^i\in W$. Concerning the assignment of degrees, we make the following choices. Since $\Omega$ has ghost degree $2$ (and as such $\Omega^{-1}$ has ghost degree $-2$), we assign total degree 0 to $\hat{\mathbf{A}}^i$ and $\xi_i$, total degree $2$ to $\hat{\mathbf{B}}_i$ and total degree $-2$ to $\xi^i$. We refer to Table \ref{class:Tab_degrees_split} for an explanation of the ghost degrees of the components of the superfields $\hat{\mathbf{A}}^i$ and $\hat{\mathbf{B}}_i$. Then $\hat{\mathbf{A}}^i\oplus \hat{\mathbf{B}}_i\in \Omega^{\bullet}(\Sigma_3)\oplus\Omega^{\bullet}(\Sigma_3)[2]$, which is a $BF$-like theory.
\begin{table}[hbt!]
\centering
\begin{tabular}{|lcc|}
\toprule
& Form degree & Ghost degree \\
\midrule
$A^i_{(0)}$ & 0 & 0 \\
$A^i_{(1)}$ & 1 & $-1$ \\
$A^i_{(2)}$ & 2 & $-2$\\
$A^i_{(3)}$ & 3 & $-3$\\
\midrule
$B_{(0)i}$ & 0 & 2 \\
$B_{(1)i}$ & 1 & 1 \\
$B_{(2)i}$ & 2 & $0$\\
$B_{(3)i}$ & 3 & $-1$\\
\bottomrule
\end{tabular}
\caption{Explanation for the form degree and ghost degree for the components of the superfields $\hat{\mathbf{A}}^i$ and $\hat{\mathbf{B}}_i$.}
\label{class:Tab_degrees_split}
\end{table}
\begin{rmk}
As explained in \cite[Remark 4.2.2]{Ste17}, the splitting of the target $T^{\vee1,0}_xM$ into two transversal holomorphic Lagrangian subbundles is not possible when $M$ is a K3 surface. It is, however, possible when $M=T^\vee Y$, with $Y$ any complex manifold. In this case, $M$ with the standard holomorphic symplectic form has both a vertical and a horizontal polarization.
\end{rmk}
To sum up, the space of fields is split as
\begin{equation}
\label{space_of_fields_gs}
\Tilde{\mathcal{F}}\surgS=\Omega^{\bullet}(\Sigma_3)\otimes\Omega^{\bullet,\bullet}(M)\otimes V\oplus \Omega^{\bullet}(\Sigma_3)[2]\otimes\Omega^{\bullet,\bullet}(M)\otimes W.
\end{equation}
\begin{defn}[Globalized split RW action]
The \textit{globalized split RW action} is defined as
\begin{equation}
\label{split_global_action}
\begin{split}
\Tilde{\Sc}\surgS&\coloneqq
\Big\langle\hat{\mathbf{B}}, D\hat{\mathbf{A}}\Big\rangle+\Big\langle\Big(\hat{R}\sur\Big)_j(x; \hat{\mathbf{A}}+\hat{\mathbf{B}})dx^j,\hat{\mathbf{A}}+\hat{\mathbf{B}}\Big\rangle+\Big\langle\Big(\hat{R}\sur\Big)_{\bar{j}}(x; \hat{\mathbf{A}}+\hat{\mathbf{B}})dx^{\bar{j}},\hat{\mathbf{A}}+\hat{\mathbf{B}}\Big\rangle\\
&=\hat{\mathcal{S}}\surgS+\mathcal{S}\surgSR+\mathcal{S}\surgSRbar,
\end{split}
\end{equation}
with
\begin{equation}
\label{class:R_1}
\begin{split}
\bigg\langle\Big(\hat{R}\sur\Big)_j(x,\hat{\mathbf{A}}+\hat{\mathbf{B}})dx^{j},\hat{\mathbf{A}}+\hat{\mathbf{B}}\bigg\rangle&=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\bigg(\sum^{\infty}_{k=0}\frac{1}{(k+1)!}\Big(\hat{R}_k\Big)^i_j((\Omega_{il}\hat{\mathbf{A}}^l+\hat{\mathbf{B}}_i)^{\otimes k})(\hat{\mathbf{A}}+\hat{\mathbf{B}})dx^{j}\bigg)\\
&=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\Bigg\{\bigg(\Big(\hat{R}_0\Big)^i_j\Omega_{il}\hat{\mathbf{A}}^l+\Big(\hat{R}_0\Big)^i_j\hat{\mathbf{B}}_i+\frac{1}{2}\Big(\hat{R}_1\Big)^i_j(\xi_s)\Omega_{il}\hat{\mathbf{A}}^s\hat{\mathbf{A}}^l\\
&\quad+\Big(\hat{R}_1\Big)^i_j(\xi^s)\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{B}}_s
+\frac{1}{2}\Big(\hat{R}_1\Big)^i_j(\xi^s)\hat{\mathbf{B}}_i\hat{\mathbf{B}}_s\\
&\quad+\frac{1}{6}\Big(\hat{R}_2\Big)^i_j(\xi_s\xi_m)\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{A}}^s\hat{\mathbf{A}}^m+\frac{1}{2}\Big(\hat{R}_2\Big)^i_j(\xi_s\xi^m)\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{A}}^s\hat{\mathbf{B}}_m\\
&\quad+\frac{1}{2}\Big(\hat{R}_2\Big)^i_j(\xi^s\xi^m)\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{B}}_s\hat{\mathbf{B}}_m+\frac{1}{6}\Big(\hat{R}_2\Big)^i_j(\xi^s\xi^m)\hat{\mathbf{B}}_i\hat{\mathbf{B}}_s\hat{\mathbf{B}}_m+\dots\bigg)dx^{j}\Bigg\}\\
&=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\Bigg\{\bigg(\Big(\hat{R}_0\Big)^i_{j}\Omega_{il}\hat{\mathbf{A}}^l+\Big(\hat{R}_0\Big)^i_j\hat{\mathbf{B}}_i+\frac{1}{2}\Big(\hat{R}_1\Big)^i_{j;s}\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{A}}^s\\
&\quad+\Big(\hat{R}_1\Big)^{is}_{j}\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{B}}_s+\frac{1}{2}\Big(\hat{R}_1\Big)^{is}_j\hat{\mathbf{B}}_i\hat{\mathbf{B}}_s\\
&\quad+\frac{1}{6}\Big(\hat{R}_2\Big)^i_{j;sm}\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{A}}^s\hat{\mathbf{A}}^m+\frac{1}{2}\Big(\hat{R}_2\Big)^{im}_{j;s}\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{A}}^s\hat{\mathbf{B}}_m\\
&\quad+\frac{1}{2}\Big(\hat{R}_2\Big)^{ism}_{j}\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{B}}_s\hat{\mathbf{B}}_m+\frac{1}{6}\Big(\hat{R}_2\Big)^{ism}_j\hat{\mathbf{B}}_i\hat{\mathbf{B}}_s\hat{\mathbf{B}}_m+\dots\bigg)dx^{j}\Bigg\}
\end{split}
\end{equation}
and
\begin{equation}
\label{class:R_2}
\begin{split}
\Big\langle\Big(\hat{R}\sur\Big)_{\bar{j}}(x,\hat{\mathbf{A}}+\hat{\mathbf{B}})dx^{\bar{j}},\hat{\mathbf{A}}+\hat{\mathbf{B}}\Big\rangle&=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\bigg(\sum^{\infty}_{k=2}\frac{1}{(k+1)!}\Big(\hat{R}_k\Big)^i_{\bar{j}}((\Omega_{il}\hat{\mathbf{A}}^l+\hat{\mathbf{B}}_i)^{\otimes k})(\hat{\mathbf{A}}+\hat{\mathbf{B}})dx^{\bar{j}}\bigg)\\
&=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\Bigg\{\bigg(\frac{1}{6}\Big(\hat{R}_2\Big)^i_{\bar{j}}\Omega_{il}(\xi_s\xi_m)\hat{\mathbf{A}}^l\hat{\mathbf{A}}^s\hat{\mathbf{A}}^m+\frac{1}{2}\Big(\hat{R}_2\Big)^i_{\bar{j}}\Omega_{il}(\xi_s\xi^m)\hat{\mathbf{A}}^l\hat{\mathbf{A}}^s\hat{\mathbf{B}}_m\\
&\quad+\frac{1}{2}\Big(\hat{R}_2\Big)^i_{\bar{j}}\Omega_{il}(\xi^s\xi^m)\hat{\mathbf{A}}^l\hat{\mathbf{B}}_s\hat{\mathbf{B}}_m+\frac{1}{6}\Big(\hat{R}_2\Big)^i_{\bar{j}}(\xi^s\xi^m)\hat{\mathbf{B}}_i\hat{\mathbf{B}}_s\hat{\mathbf{B}}_m+\dots\bigg)dx^{\bar{j}}\Bigg\}\\
&=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\Bigg\{\bigg(\frac{1}{6}\Big(\hat{R}_2\Big)^i_{\bar{j};sm}\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{A}}^s\hat{\mathbf{A}}^m+\frac{1}{2}\Big(\hat{R}_2\Big)^{im}_{\bar{j};s}\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{A}}^s\hat{\mathbf{B}}_m\\
&\quad+\frac{1}{2}\Big(\hat{R}_2\Big)^{ism}_{\bar{j}}\Omega_{il}\hat{\mathbf{A}}^l\hat{\mathbf{B}}_s\hat{\mathbf{B}}_m+\frac{1}{6}\Big(\hat{R}_2\Big)^{ism}_{\bar{j}}\hat{\mathbf{B}}_i\hat{\mathbf{B}}_s\hat{\mathbf{B}}_m+\dots\bigg)dx^{\bar{j}}\Bigg\}.
\end{split}
\end{equation}
We call the model associated with the action \eqref{split_global_action} the \textit{globalized split RW model}.
\end{defn}
We present in Table \ref{class:Tab_coeff_split} the explicit expressions as well as the total degrees of the components of $\Big(\hat{R}_k\Big)_j$ and $\Big(\hat{R}_k\Big)_{\bar{j}}$ in (\ref{class:R_1}) and (\ref{class:R_2}), respectively.
\begin{table}[h!]
\centering
\begin{tabular}{|lcc|}
\toprule
Operator & Explicit expression & Total degree \\
\midrule
$\big(\hat{R}_0\big)^i_{j}\Omega_{il}$ & $-\delta^{i}_j\Omega_{il}$ & 2 \\[2mm]
$\big(\hat{R}_0\big)^i_{j}$ & $-\delta^i_j$ & 0 \\[2mm]
$\Big(\hat{R}_1\Big)^i_{j;s}\Omega_{il}$ & $-\Chr{i}{s}{j}\Omega_{il}$ & 2\\[2mm]
$\Big(\hat{R}_1\Big)^{is}_{j}\Omega_{il}$ & $-\Chr{i}{q}{j}\Omega_{il}\Big(\Omega^{-1}\Big)^{qs}$ & 0\\[2mm]
$\Big(\hat{R}_1\Big)^{is}_{j}$ & $-\Chr{i}{j}{q}\Big(\Omega^{-1}\Big)^{qs}$ & $-2$ \\[2mm]
$\Big(\hat{R}_2\Big)^i_{j;sm}\Omega_{il}$ & $\bigg(\frac18R^{\hspace{1.5mm} i}_{j\; ms}+\frac14R^{\hspace{4mm} i}_{jm\; s}\bigg)\Omega_{il}$ & 2 \\[2mm]
$\Big(\hat{R}_2\Big)^{im}_{j;s}\Omega_{il}$ & $\bigg(\frac18R^{\hspace{1.5mm} i}_{j\; ps}+\frac14R^{\hspace{3mm} i}_{jp\; s}\bigg)\Omega_{il}\Big(\Omega^{-1}\Big)^{pm}$ & 0\\[2mm]
$\Big(\hat{R}_2\Big)^{ism}_{j}\Omega_{il}$ & $\bigg(\frac18R^{\hspace{1.5mm} i}_{j\; pn}+\frac14R^{\hspace{3mm} i}_{jp\; n}\bigg)\Omega_{il}\Big(\Omega^{-1}\Big)^{ns}\Big(\Omega^{-1}\Big)^{pm}$ & $-2$\\[2mm]
$\Big(\hat{R}_2\Big)^{ism}_{j}$ & $\bigg(\frac18R^{\hspace{1.5mm} m}_{j\hspace{2.5mm} kl}+\frac14R^{\hspace{2.5mm} m}_{jk\hspace{2.5mm} l}\bigg)\Big(\Omega^{-1} \Big)^{ls}\Big(\Omega^{-1}\Big)^{ki}$ & $-4$ \\
$\Big(\hat{R}_2\Big)^i_{\bar{j};sm}\Omega_{il}$ & $\bigg(\frac12\Riem{i}{\bar{j}}{m}{s}\bigg)\Omega_{il}$ & 2 \\[2mm]
$\Big(\hat{R}_2\Big)^{im}_{\bar{j};s}\Omega_{il}$ & $\bigg(\frac12\Riem{i}{\bar{j}}{n}{s}\bigg)\Omega_{il}\Big(\Omega^{-1}\Big)^{nm}$ & 0\\[2mm]
$\Big(\hat{R}_2\Big)^{ism}_{\bar{j}}\Omega_{il}$ & $\bigg(\frac12\Riem{i}{\bar{j}}{n}{p}\bigg)\Omega_{il}\Big(\Omega^{-1}\Big)^{ns}\Big(\Omega^{-1}\Big)^{pm}$ & $-2$\\[2mm]
$\Big(\hat{R}_2\Big)^{ism}_{\bar{j}}$ & $\bigg(\frac12\Riem{m}{\bar{j}}{k}{l}\bigg)\Big(\Omega^{-1} \Big)^{ls}\Big(\Omega^{-1}\Big)^{ki}$ & $ -4$ \\
\bottomrule
\end{tabular}
\caption{Explicit expression and total degree of the coefficients in \eqref{class:R_1} and \eqref{class:R_2}.}
\label{class:Tab_coeff_split}
\end{table}
If $\Sigma_3$ is a closed manifold, the globalized split RW action satisfies the dCME:
\[
d_M\tilde{\Sc}\surgS+\frac12(\tilde{\Sc}\surgS,\tilde{\Sc}\surgS)=0,
\]
with $d_M=d_x+d_{\bar{x}}$ the sum of the holomorphic and antiholomorphic Dolbeault differentials on the target manifold $M$.
In the presence of boundary, the globalized split action satisfies the mdCME:
\begin{equation}
\label{iotasplit}
\iota_{\Tilde{Q}\surgS}\omega\surgS=\delta \tilde{\Sc}\surgS+\pi^*\alpha\surgSB,
\end{equation}
with
\begin{align}
\begin{split}
\Tilde{Q}\surgS{}&=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\bigg( -D\hat{\mathbf{A}}^i\frac{\delta}{\delta \hat{\mathbf{A}}^i}-D\hat{\mathbf{B}}_i\frac{\delta}{\delta \hat{\mathbf{B}}_i}+\sum^{\infty}_{k=0}\frac{1}{k!}\Big(\hat{R}_k\Big)^i_j((\hat{\mathbf{A}}+\hat{\mathbf{B}})^{\otimes k})dx^{j}\frac{\delta}{\delta \hat{\mathbf{A}}^i}\\
&\quad-\sum^{\infty}_{k=0}\frac{1}{k!}\Big(\hat{R}_{k}\Big)^l_j((\hat{\mathbf{A}}+\hat{\mathbf{B}})^{\otimes k})dx^{j}\Omega_{li}\frac{\delta}{\delta \hat{\mathbf{B}}_i}+\sum^{\infty}_{k=0}\frac{1}{k!}\Big(\hat{R}_k\Big)^i_{\bar{j}}((\hat{\mathbf{A}}+\hat{\mathbf{B}})^{\otimes k})dx^{{\bar{j}}}\frac{\delta}{\delta \hat{\mathbf{A}}^i}\\
&\quad-\sum^{\infty}_{k=0}\frac{1}{k!}\Big(\hat{R}_{k}\Big)^l _{\bar{j}}((\hat{\mathbf{A}}+\hat{\mathbf{B}})^{\otimes k})dx^{{\bar{j}}}\Omega_{li}\frac{\delta}{\delta \hat{\mathbf{B}}_i}\bigg),
\end{split}
\end{align}
\begin{align}
\omega\surgS{}&=\varint_{T[1]\Sigma_3}\mu_{\Sigma_3}\bigg( \delta\hat{\mathbf{B}}_i\delta \hat{\mathbf{A}}^i\bigg),\\
\alpha\surgSB{}&=\varint_{T[1]\partial \Sigma_3}\mu_{\partial \Sigma_3}\bigg(\hat{\mathbf{B}}_i\delta \hat{\mathbf{A}}^i\bigg).
\end{align}
\section{Perturbative quantization of the globalized split Rozansky--Witten model}
\label{sec:pert_quant_RW}
In the last section, we formulated our globalized RW model as a $BF$-like theory. This allows us to perturbatively quantize the newly constructed globalized split RW model according to the quantum BV-BFV framework \cite{CMR17} (see Section \ref{sec_qbvbfv} for an introduction). The quantization of the kinetic part of the action is analogous to the example of Section 3 in \cite{CMR17}, since the theory reduces to abelian $BF$ theory. Hence, we will be rather quick in the exposition, referring to \cite{CMR17} for further details. We will focus our attention on the interacting part of the action (in our case, this is actually just the globalization term), which has a rich, as well as complicated, structure. In particular, we will draw some comparisons with the PSM, which has been considered in \cite{CMoW19}.
\subsection{Polarization}
The recipe to perturbatively quantize a $BF$-like theory according to the quantum BV-BFV formalism starts by requiring the data of a polarization.
Following the result of Section \ref{sec:BF-like_formulation}, in the globalized split RW theory, the space of boundary fields splits as
\begin{equation}
\label{space_boundary_fields_sg}
\Tilde{\mathcal{F}}\surgSB=\Omega^\bullet(\partial\Sigma_3)\otimes\Omega^{\bullet,\bullet}(M)\otimes V\oplus\Omega^\bullet(\partial\Sigma_3)[2]\otimes\Omega^{\bullet,\bullet}(M)\otimes W.
\end{equation}
Since we split $T^{1,0}M$ into two complementary isotropic subspaces, these subspaces are, in particular, Lagrangian. Therefore, either of them can be used as the base or the fiber of the polarization.
\begin{notat}
From now on, we will drop the hat from the notation of the ``globalized" superfields (e.g. $\hat{\mathbf{A}}^i$). Moreover, we will denote the coordinates on the base of the polarization by $\mathbb{A}^i$ or $\mathbb{B}_i$ and refer to this choice as the $\mathbb{A}$- or $\mathbb{B}$-representation.
\end{notat}
Let us choose a decomposition of the boundary $\partial \Sigma_3=\partial_1\Sigma_3 \sqcup\partial_2\Sigma_3$, where $\partial_1\Sigma_3$ and $\partial_2\Sigma_3$ are two compact manifolds. Here, we can define a polarization $\mathcal{P}$ by choosing the $\mathbb{A}$-representation on $\partial_1\Sigma_3$ and the $\mathbb{B}$-representation on $\partial_2\Sigma_3$.
The spaces of leaves of the associated foliations are $\mathcal{B}_1:=\Omega^{\bullet}(\partial_1\Sigma_3)$ and $\mathcal{B}_2:=\Omega^{\bullet}(\partial_2\Sigma_3)[2]$, respectively. The space of boundary fields is $\mathcal{B}\bonP=\mathcal{B}_1\times \mathcal{B}_2\ni (\mathbb{A}^i, \mathbb{B}_i)$.
The BFV 1-form is
\begin{equation}
\alpha\surgSBP=\varint_{\partial_1 \Sigma_3} \mathbf{B}_i\delta \mathbf{A}^i-\varint_{\partial_2 \Sigma_3} \delta \mathbf{B}_i \mathbf{A}^i
\end{equation}
and the quadratic part of the action \eqref{split_global_action} is
\begin{equation}
\hat{\Sc}\surgSP=\varint_{\Sigma_3}\mathbf{B}_id\mathbf{A}^i-\varint_{\partial_2\Sigma_3}\mathbf{B}_i\mathbf{A}^i.
\end{equation}
\subsection{Extraction of boundary fields}
We split the space of fields as
\begin{align}
\begin{split}
\tilde{\mathcal{F}}\surgS &\rightarrow \tilde{\mathcal{B}}\bonP\oplus \mathcal{Y}\\ \label{quantiz:split_fields_1}
(\mathbf{A}^i, \mathbf{B}_i)&\mapsto (\Tilde{\mathbb{A}}^i,\tilde{\mathbb{B}}_i)\oplus (\underline{\mathbf{A}}^i,\underline{\mathbf{B}}_i),
\end{split}
\end{align}
where $\tilde{\mathcal{B}}\bonP$ denotes the bulk extension of $\mathcal{B}\bonP$ to $\tilde{\mathcal{F}}\surgS$, with $\Tilde{{\mathbb{A}}}^i$ and $\Tilde{{\mathbb{B}}}_i$ the extensions of the boundary fields $\mathbb{A}^i$ and $\mathbb{B}_i$ to the bulk space of fields $\Tilde{\mathcal{F}}\surgS$; $\underline{\mathbf{A}}^i$ and $\underline{\mathbf{B}}_i$ are the bulk fields, which are required to restrict to zero on $\partial_1\Sigma_3$ and $\partial_2\Sigma_3$, respectively. Here, the extensions are chosen to be singular: $\Tilde{\mathbb{A}}^i$ and $\tilde{\mathbb{B}}_i$ are required to restrict to zero outside the boundary (a choice first pointed out in \cite{CMR17}). The action reduces to
\begin{equation}
\hat{\Sc}\surgSP=\varint\sur \underline{\mathbf{B}}_id\underline{\mathbf{A}}^i-\bigg(\varint_{\partial_2\Sigma_3} \mathbb{B}_i\underline{\mathbf{A}}^i-\varint_{\partial_1\Sigma_3}\underline{\mathbf{B}}_i\mathbb{A}^i\bigg).
\end{equation}
\subsection{Construction of \texorpdfstring{$\Omega_0$}{O}}
At this point, we can construct the coboundary operator $\Omega_0$ by canonical quantization: we consider the boundary action and replace any $\underline{\mathbf{B}}_i$ by $(-i\hbar \frac{\delta}{\delta \mathbb{A}^i})$ on $\partial_1\Sigma_3$ and any $\underline{\mathbf{A}}^i$ by $(-i\hbar \frac{\delta}{\delta \mathbb{B}_i})$ on $\partial_2\Sigma_3$. We obtain
\begin{equation}
\Omega_0=-i\hbar \bigg(\varint_{\partial_2\Sigma_3}d\mathbb{B}_i\frac{\delta}{\delta \mathbb{B}_i}+\varint_{\partial_1\Sigma_3}d\mathbb{A}^i\frac{\delta}{\delta \mathbb{A}^i}
\bigg).
\end{equation}
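As a finite-dimensional analogue of this canonical quantization step (a toy illustration, not part of the construction; all function names are ours), one can represent $\hat{x}$ as multiplication by $x$ and $\hat{p}=-i\hbar\,\partial/\partial x$ as a derivative on polynomials, and verify the commutation relation $[\hat{x},\hat{p}]=i\hbar$:

```python
# Toy model of canonical quantization on polynomials:
# a polynomial f(x) = sum_n c_n x^n is stored as its coefficient list,
# x-hat acts by multiplication, p-hat by -i*hbar d/dx.

HBAR = 1.0  # work in units where hbar = 1

def x_op(coeffs):
    """Multiplication by x: shifts every coefficient up one degree."""
    return [0.0] + list(coeffs)

def p_op(coeffs):
    """-i hbar d/dx acting on the coefficient list."""
    return [-1j * HBAR * n * c for n, c in enumerate(coeffs)][1:] or [0.0]

def commutator(f):
    """[x, p] f = x(p f) - p(x f), returned as a coefficient list."""
    a = x_op(p_op(f))
    b = p_op(x_op(f))
    length = max(len(a), len(b))
    a += [0.0] * (length - len(a))
    b += [0.0] * (length - len(b))
    return [u - v for u, v in zip(a, b)]

f = [2.0, 0.0, 3.0]   # f(x) = 2 + 3x^2
print(commutator(f))  # equals i*hbar*f, as [x, p] = i*hbar demands
```

The same substitution pattern, field by functional derivative, is what produces $\Omega_0$ above.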
\subsection{Choice of residual fields}
The bulk contribution in the space of fields $\mathcal{Y}$ is further split into the space of residual fields $\mathcal{V}_{\Sigma_3}$ and a complement, the space of fluctuation fields $\mathcal{Y}'$, namely
\begin{alignat}{2}
&\mathcal{Y} &&\rightarrow \mathcal{V}_{\Sigma_3}\oplus \mathcal{Y}'\\ \label{quantiz:split_fields_2}
(\underline{\mathbf{A}}^i, &\underline{\mathbf{B}}_i)&&\mapsto (\mathsf{a}^i,\mathsf{b}_i)\oplus (\alpha^i,\beta_i),
\end{alignat}
where $\mathsf{a}^i$ and $\mathsf{b}_i$ are the residual fields, whereas $\alpha^i$ and $\beta_i$ are the fluctuations. Note that the fluctuation $\alpha^i$ is required to restrict to zero on $\partial_1\Sigma_3$ while $\beta_i$ is required to restrict to zero on $\partial_2\Sigma_3$.
In our case, the minimal space of residual fields is
\begin{equation}
\mathcal{V}_{\Sigma_3}=H^{\bullet}(\Sigma_3, \partial_1\Sigma_3)[0]\oplus H^{\bullet}(\Sigma_3, \partial_2\Sigma_3)[2]\ni (\mathsf{a}^i,\mathsf{b}_i).
\end{equation}
Here we can also define the BV Laplacian. To do so, pick a basis $\{[\chi_i]\}$ of $H^{\bullet}(\Sigma_3, \partial_1\Sigma_3)$ and its dual basis $\{[\chi^i]\}$ of $H^{\bullet}(\Sigma_3, \partial_2\Sigma_3)$, with representatives $\chi_i$ in $\Omega^{\bullet}(\Sigma_3, \partial_1\Sigma_3)$ and $\chi^i$ in $\Omega^{\bullet}(\Sigma_3, \partial_2\Sigma_3)$ such that $\varint_{\Sigma_3}\chi_i\chi^j=\delta^j_i$. We can write the residual fields in this basis as
\begin{equation}
\begin{split}
\mathsf{a}^i&=\sum_k(z^{k}\chi_k)^i,\\
\mathsf{b}_i&=\sum_k (z^+_k\chi^{k})_i,
\end{split}
\end{equation}
where $\{z^k,z^+_k\}$ are canonical coordinates on $\mathcal{V}_{\Sigma_3}$ with BV symplectic form
\begin{equation}
\omega_{\mathcal{V}_{\Sigma_3}}=\sum_k(-1)^{\deg z^k}\delta z^+_k\delta z^k.
\end{equation}
Finally, the BV Laplacian on $\mathcal{V}_{\Sigma_3}$ is
\begin{equation}
\Delta_{\mathcal{V}_{\Sigma_3}}=\sum_k(-1)^{\deg z^k}\frac{\partial}{\partial z^k}\frac{\partial}{\partial z^+_k}.
\end{equation}
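For illustration (a toy model with a single pair $(z,z^+)$, taken of even degree so that the sign $(-1)^{\deg z^k}$ is $+1$; all names are ours), the BV Laplacian acts on polynomials in the canonical coordinates as a mixed second derivative:

```python
# Toy BV Laplacian for one pair of even canonical coordinates (z, z+).
# A polynomial is a dict mapping exponent pairs (a, b) -> coefficient,
# standing for sum_{a,b} c_{ab} z^a (z+)^b.

def d_dz(poly):
    """Partial derivative with respect to z."""
    return {(a - 1, b): a * c for (a, b), c in poly.items() if a > 0}

def d_dzplus(poly):
    """Partial derivative with respect to z+."""
    return {(a, b - 1): b * c for (a, b), c in poly.items() if b > 0}

def bv_laplacian(poly):
    """Delta = d/dz d/dz+ in the even-degree toy model (sign +1)."""
    return d_dz(d_dzplus(poly))

# Delta(z * z+) = 1 and Delta(z^2 * z+) = 2 z:
print(bv_laplacian({(1, 1): 1.0}))  # {(0, 0): 1.0}
print(bv_laplacian({(2, 1): 1.0}))  # {(1, 0): 2.0}
```

In the genuine graded setting the signs $(-1)^{\deg z^k}$ must of course be tracked; the toy model only shows the second-order structure of $\Delta_{\mathcal{V}_{\Sigma_3}}$.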
\subsection{Gauge-fixing and propagator}
We now have to fix a Lagrangian subspace $\mathcal{L}$ of $\mathcal{Y}'$. In the case of abelian BF theory, in \cite{CMR17}, the authors proved that such a Lagrangian can be obtained from a \textit{contracting triple} $(\iota,p,K)$
for the complex $\Omega^\bullet_{\underline{D}}(\Sigma_3)$.
In particular, the integral kernel of $K$ is the propagator, which we call $\eta$.
Since $K$ is actually the inverse of an elliptic operator (as shown in \cite{CMR17}), the propagator is singular on the diagonal of $\Sigma_3\times \Sigma_3$. Hence, we will define it as follows. Let
\begin{equation}
\mathrm{Conf}_2(\Sigma_3)=\{(x_1,x_2)\in \Sigma_3\times\Sigma_3\mid x_1\neq x_2\},
\end{equation}
and let $\iota_{\mathfrak{D}}$ be the inclusion of
\begin{equation}
\mathfrak{D}\coloneqq\{(x_1,x_2)\in (\partial_1\Sigma_3\times \Sigma_3)\cup(\Sigma_3\times \partial_2\Sigma_3)\mid x_1\neq x_2\}
\end{equation}
into $\mathrm{Conf}_2(\Sigma_3)$. Then the propagator is the 2-form $\eta\in \Omega^2(\mathrm{Conf}_2(\Sigma_3),\mathfrak{D})$, where
\begin{equation}
\Omega^\bullet(\mathrm{Conf}_2(\Sigma_3),\mathfrak{D})=\{\gamma\in \Omega^\bullet(\mathrm{Conf}_2(\Sigma_3))\mid \iota^{*}_{\mathfrak{D}}\gamma=0\}.
\end{equation}
Explicitly,
\begin{equation}
\label{quantiz:prop}
\eta(x_1,x_2)=\frac{1}{T_{\Sigma_{3}}}\frac{1}{i\hbar}\varint_{\mathcal{L}}e^{\frac{i}{\hbar}\hat{\Sc}\surgSP}\pi^*_1\alpha^i(x_1)\pi^*_2\beta_i(x_2),
\end{equation}
with $\pi_1,\pi_2$ the projections from $\Sigma_3\times \Sigma_3$ to its first and second factor.
The coefficient $T_{\Sigma_3}$ is related to the Reidemeister torsion of $\Sigma_3$, as shown in \cite{CMR17}. However, its precise nature is irrelevant for the purposes of the present paper.
\subsection{The quantum state}
We can sum up the splittings we have made so far as
\begin{align}
\begin{split}
\tilde{\mathcal{F}}\surgS &\rightarrow \mathcal{B}\bonP\times \mathcal{V}^{\mathcal{P}}_{\Sigma_3}\times \mathcal{Y}'\\
(\mathbf{A}^i, \mathbf{B}_i)&\mapsto (\mathbb{A}^i,\mathbb{B}_i)+ (\mathsf{a}^i,\mathsf{b}_i)+ (\alpha^i,\beta_i).
\end{split}
\end{align}
\begin{rmk}
Following the procedure detailed in \cite{CMR17}, this is referred to as a \textit{good splitting}.
\end{rmk}
According to the splitting of the space of fields, the action decomposes as
\begin{equation}
\Sc\surgSP=\hat{\Sc}\surgSP+\hat{\Sc}^{\, \text{pert}}+\Sc^{\, \text{res}}+\Sc^{\, \text{source}},
\end{equation}
where
\begin{align}
\hat{\Sc}\surgSP&=\varint_{\Sigma_3}\beta_id\alpha^i,\\
\hat{\Sc}^{\, \text{pert}}&=\varint\sur \mathcal{V}(\underline{\mathbf{A}}, \underline{\mathbf{B}}),\\
\Sc^{\, \text{res}}&= - \bigg(\varint_{\partial_2\Sigma_3}\mathbb{B}_i\mathsf{a}^i-\varint_{\partial_1\Sigma_3}\mathsf{b}_i\mathbb{A}^i\bigg),\\
\Sc^{\, \text{source}}&= - \bigg(\varint_{\partial_2\Sigma_3}\mathbb{B}_i\alpha^i-\varint_{\partial_1\Sigma_3}\beta_i\mathbb{A}^i\bigg),
\end{align}
where $\hat{\Sc}^{\, \text{pert}}$ is an interaction term built from a density-valued function $\mathcal{V}$ which depends on the fields but not on their derivatives (by assumption).
The state is given by:
\begin{equation}
\begin{split}
\hat{\psi}_{\Sigma_3}(\mathbb{A}, \mathbb{B}, \mathsf{a}, \mathsf{b})&=\varint_{(\alpha, \beta)\in \mathcal{L}}e^{\frac{i}{\hbar}\Sc\surgSP (\mathbb{A}+\mathsf{a}+\alpha,\mathbb{B}+\mathsf{b}+\beta)}\mathscr{D}[\alpha]\mathscr{D}[\beta]\\
&=e^{\frac{i}{\hbar}\Sc^{\, \text{res}}}\varint_{\mathcal{L}}e^{\frac{i}{\hbar}\hat{\Sc}\surgSP}e^{\frac{i}{\hbar}\hat{\Sc}^{\, \text{pert}}}e^{\frac{i}{\hbar}\Sc^{\text{source}}},
\end{split}
\end{equation}
where we denote by $\mathscr{D}$ a formal measure on $\mathcal{L}$.
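The role of $\Sc^{\,\text{source}}$ is that of a source coupling in a Gaussian integral. In a one-dimensional toy analogue (our illustration, not part of the construction), completing the square gives $\int_{\mathbb{R}} e^{-x^2/2+Jx}\,dx=\sqrt{2\pi}\,e^{J^2/2}$, which can be checked numerically:

```python
import math

def gaussian_with_source(J, n=120001, cutoff=15.0):
    """Trapezoidal-rule approximation of the integral of
    exp(-x^2/2 + J x) over [-cutoff, cutoff]; the integrand is
    negligible outside this window for moderate J."""
    h = 2 * cutoff / (n - 1)
    total = 0.0
    for k in range(n):
        x = -cutoff + k * h
        w = 0.5 if k in (0, n - 1) else 1.0
        total += w * math.exp(-0.5 * x * x + J * x)
    return total * h

J = 0.7
exact = math.sqrt(2 * math.pi) * math.exp(0.5 * J * J)
print(abs(gaussian_with_source(J) - exact) < 1e-7)  # True
```

Differentiating the closed form with respect to $J$ generates the moments, which is exactly how the source term produces boundary-field insertions in the perturbative expansion below.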
The idea here is to compute the integral through a perturbative expansion, hence let us expand the exponentials as
\begin{equation}
\begin{split}
\hat{\psi}_{\Sigma_3}(\mathbb{A}, \mathbb{B}, \mathsf{a}, \mathsf{b})&=\sum_{k,l,m}\frac{1}{k!l!m!}\left(\frac{i}{\hbar}\right)^{k+l+m}(-1)^{k+m}\bigg(\varint_{\partial_2\Sigma_3}\mathbb{B}_i\mathsf{a}^i-\varint_{\partial_1\Sigma_3}\mathsf{b}_i\mathbb{A}^i\bigg)^k\times\\
&\quad\times\varint_{\mathcal{L}}e^{\frac{i}{\hbar}\hat{\Sc}\surgSP}\bigg(\varint\sur \mathcal{V}(\underline{\mathbf{A}}, \underline{\mathbf{B}})\bigg)^l
\bigg(\varint_{\partial_2\Sigma_3}\mathbb{B}_i\alpha^i-\varint_{\partial_1\Sigma_3}\beta_i\mathbb{A}^i\bigg)^m.
\end{split}
\end{equation}
In the globalized split RW model, the interaction term is actually given by the globalization terms (the second and third terms in the action \eqref{split_global_action}).
After expanding the globalization terms in residual fields and fluctuations, the integration over $\mathcal{L}$ can be carried out using \textit{Wick's theorem}.
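Wick's theorem expresses the Gaussian integral of a product of $2n$ linear terms as a sum over perfect pairings (contractions), of which there are $(2n-1)!!$. A small stand-alone enumeration (function names are ours) makes the combinatorics concrete:

```python
def pairings(elems):
    """Enumerate all perfect pairings (Wick contractions) of a list."""
    if not elems:
        return [[]]
    first, rest = elems[0], elems[1:]
    result = []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            result.append([(first, partner)] + sub)
    return result

def double_factorial(n):
    """(2n-1)!! counts the perfect pairings of 2n elements."""
    return 1 if n <= 0 else n * double_factorial(n - 2)

for n in (1, 2, 3):
    count = len(pairings(list(range(2 * n))))
    print(n, count, double_factorial(2 * n - 1))  # the two counts agree
```

In the path-integral setting each contraction contributes a propagator $\eta$, which is how the Feynman-diagram expansion of the next section arises.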
\subsection{Feynman rules}
In this section, we are going to introduce the Feynman rules needed to define precisely the quantum state of our theory.
Since our aim is to prove the mdQME for the globalized split RW model, we will need to take care of the quantum Grothendieck BFV operator. This is a coboundary operator in which higher functional derivatives may appear (and, as we will see, they will indeed be present). As explained in \cite{CMR17}, higher functional derivatives require a sort of ``regularization''. This is provided by the composite fields, which we denote by square brackets $[\enspace]$ (e.g. for the boundary field $\mathbb{B}$, we will write $[\mathbb{B}_{i_1}\dots \mathbb{B}_{i_k}]$)\footnote{See Section \ref{qs_bflike} for a short introduction.}.
\begin{defn}[Globalized split RW Feynman graph]
A \textit{globalized split RW Feynman graph} is an oriented graph with three types of vertices $V(\Gamma)=V_{\text{bulk}}(\Gamma)\sqcup V_{\partial_1}\sqcup V_{\partial_2}$, called bulk vertices and type 1 and 2 boundary vertices, such that
\begin{itemize}
\item bulk vertices can have any valence,
\item type 1 boundary vertices carry any number of incoming half-edges (and no outgoing half-edges),
\item type 2 boundary vertices carry any number of outgoing half-edges (and no incoming half-edges),
\item multiple edges and loose half-edges (leaves) are allowed.
\end{itemize}
\end{defn}
A labeling of a Feynman graph is a function from the set of half-edges to $\{1,\dots,\dim V\}$.
In our case, the source manifold $\Sigma_3$ has boundary $\partial \Sigma_3=\partial_1 \Sigma_3 \sqcup \partial_2 \Sigma_3$. Let $\Gamma$ be a Feynman graph and define
\begin{equation}
\mathrm{Conf}_\Gamma(\Sigma_3)\coloneqq \mathrm{Conf}_{V_{\text{bulk}}}(\Sigma_3) \times \mathrm{Conf}_{V_{\partial_1}}(\partial_1 \Sigma_3) \times \mathrm{Conf}_{V_{\partial_2}}(\partial_2 \Sigma_3).
\end{equation}
The Feynman rules are given by a map associating to a Feynman graph $\Gamma$ a differential form $\omega_\Gamma \in \Omega^\bullet(\mathrm{Conf}_\Gamma(\Sigma_3))$.
\begin{defn}[Globalized split RW Feynman rules] Let $\Gamma$ be a labeled Feynman graph. We choose a configuration $\iota:V(\Gamma) \rightarrow \Sigma_3$, i.e. a point of $\mathrm{Conf}_\Gamma(\Sigma_3)$, such that the decompositions are respected. Then, we \textit{decorate} the graph according to the following rules, namely, the \textit{Feynman rules}:
\begin{itemize}
\item Bulk vertices in $\Sigma_3$ are decorated by ``globalized vertex tensors''
\begin{equation}
\begin{split}
\Big(\hat{R}_k\Big)^{i_1\dots i_s}_{j;j_1\dots j_t}dx^j &\coloneqq \frac{\partial^{s+t}}{\partial \underline{\mathbf{A}}^{i_1}\dots\partial \underline{\mathbf{A}}^{i_s} \partial \underline{\mathbf{B}}_{j_1}\dots \partial \underline{\mathbf{B}}_{j_t}} \bigg|_{\underline{\mathbf{A}}=\underline{\mathbf{B}}=0} \Big(\hat{R}_k\Big)^i_j((\underline{\mathbf{A}}+\underline{\mathbf{B}})^{\otimes k})(\Omega_{il}\underline{\mathbf{A}}^l+\underline{\mathbf{B}}_i)dx^{j}\\
\Big(\hat{R}_k\Big)^{i_1\dots i_s}_{{\Bar{j}};j_1\dots j_t} dx^{\Bar{j}}&\coloneqq \frac{\partial^{s+t}}{\partial \underline{\mathbf{A}}^{i_1}\dots\partial \underline{\mathbf{A}}^{i_s} \partial \underline{\mathbf{B}}_{j_1}\dots \partial \underline{\mathbf{B}}_{j_t}} \bigg|_{\underline{\mathbf{A}}=\underline{\mathbf{B}}=0} \Big(\hat{R}_k\Big)^i_{\Bar{j}}((\underline{\mathbf{A}}+\underline{\mathbf{B}})^{\otimes k})(\Omega_{il}\underline{\mathbf{A}}^l+\underline{\mathbf{B}}_i)dx^{\Bar{j}}
\end{split}
\end{equation}
where $s, t$ are the out- and in-valencies of the vertex and $i_1, \dots, i_s$ and $j_1, \dots, j_t$ are the labels of the out- (respectively in-)oriented half-edges.
\item Boundary vertices $v \in V_{\partial_1}(\Gamma)$ with incoming half-edges labeled $i_1, \dots, i_k$ and no out-going half-edges are decorated by a composite field $[\mathbb{A}^{i_1} \dots \mathbb{A}^{i_k}]$ evaluated at the point (vertex location) $\iota(v)$ on $\partial_1 \Sigma_3$.
\item Boundary vertices $v \in V_{\partial_2}(\Gamma)$ with outgoing half-edges labeled $j_1, \dots, j_l$ (and no incoming half-edges) are decorated by $[\mathbb{B}_{j_1} \dots \mathbb{B}_{j_l}]$ evaluated at the point $\iota(v)$ on $\partial_2 \Sigma_3$.
\item Edges between vertices $v_1, v_2$ are decorated with the propagator $\eta (\iota(v_1),\iota(v_2))\cdot \delta^i_j$, with $\eta$ the propagator induced by $\mathcal{L} \subset \mathcal{Y}'$, the gauge-fixing Lagrangian.
\item Loose half-edges (leaves) attached to a vertex $v$ and labeled $i$ are decorated with the residual
fields $\mathsf{a}^i$ (for out-orientation), $\mathsf{b}_i$
(for in-orientation) evaluated at the point $\iota(v)$.
\end{itemize}
\end{defn}
The Feynman rules are represented in Figs. \ref{fig:fr_glob_1}, \ref{fig:fr_glob_2} and \ref{fig:comp_fields_2}.
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.25\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=1pt}]
\vertex[blob] (m) at (0,-2) {$\mathsf{a}$};
\vertex (a) at (0,0) {$x$} ;
\diagram*{
(a) -- [fermion, edge label' = $i$] (m)
};
\vertex [right=3em of m] {\(=\mathsf{a}^i(x)\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.25\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex[blob] (m) at (0,-2) {$\mathsf{b}$};
\vertex (a) at (0,0) {$x$} ;
\diagram*{
(m) -- [fermion, edge label = $i$] (a)
};
\vertex [right=3em of m] {\(=\mathsf{b}_i(x)\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (0,0) {$x$};
\vertex (b) at (2,0) {$y$} ;
\diagram*{
(a) -- [fermion, edge label = $i\hspace{5mm} j$] (b)
};
\vertex [right=3em of b] {\(=\delta^i_j\eta(x,y)\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}
\caption{Feynman rules for residual fields and propagator}
\label{fig:fr_glob_1}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (-1,0);
\vertex (b) at (1,0);
\vertex (m1) at (0, 1);
\diagram*{
(a) -- m [dot] -- (b),
(m1) -- [fermion] m
};
\vertex [below=0.75em of m] {\(\mathbb{B}\)};
\vertex [left=0.25em of a] {\(\partial_2 \Sigma_3\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\quad
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (-1,0);
\vertex (b) at (1,0);
\vertex (m1) at (0, -1);
\diagram*{
(a) -- m [dot] -- (b),
m -- [fermion] (m1)
};
\vertex [above=0.75em of m] {\(\mathbb{A}\)};
\vertex [left=0.25em of a] {\(\partial_1 \Sigma_3\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.42\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (-1.2, 0.35);
\vertex (b) at (-0.3, 0.79);
\vertex (c) at (+1.2, 0.4);
\vertex (e) at (-1.2, -0.35);
\vertex (f) at (-0.3, -0.79);
\vertex (g) at (+1.2, -0.4);
\diagram*{
(a) -- [] m [dot],
(b) -- [] m [dot],
(c) -- [] m [dot],
(e) -- [] m [dot],
(f) -- [] m [dot],
(g) -- [] m [dot],
};
\vertex [right=7em of m] {\(\quad=\Big(\hat{R}_k\Big)^{i_1\dots i_s}_{j;j_1\dots j_t}dx^j\)};
\vertex [above=0.2em of m, label=80:\(\dots\)] {};
\vertex [] at (-1.35,0.4) {\(j_1\)};
\vertex [] at (-0.45,1) {\(j_2\)};
\vertex [] at (1.4, 0.4) {\(j_t\)};
\vertex [] at (-1.35,-0.4) {\(i_1\)};
\vertex [] at (-0.45,-1) {\(i_2\)};
\vertex [] at (1.4, -0.4) {\(i_s\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\quad
\begin{subfigure}[b]{0.42\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (-1.2, 0.35);
\vertex (b) at (-0.3, 0.79);
\vertex (c) at (+1.2, 0.4);
\vertex (e) at (-1.2, -0.35);
\vertex (f) at (-0.3, -0.79);
\vertex (g) at (+1.2, -0.4);
\diagram*{
(a) -- [] m [dot,red],
(b) -- [] m [dot,red],
(c) -- [] m [dot,red],
(e) -- [] m [dot,red],
(f) -- [] m [dot,red],
(g) -- [] m [dot,red],
};
\vertex [right=7em of m] {\(\quad=\Big(\hat{R}_k\Big)^{i_1\dots i_s}_{{\Bar{j}};j_1\dots j_t}dx^{\Bar{j}}\)};
\vertex [above=0.2em of m, label=80:\(\dots\)] {};
\vertex [] at (-1.35,0.4) {\(j_1\)};
\vertex [] at (-0.45,1) {\(j_2\)};
\vertex [] at (1.4, 0.4) {\(j_t\)};
\vertex [] at (-1.35,-0.4) {\(i_1\)};
\vertex [] at (-0.45,-1) {\(i_2\)};
\vertex [] at (1.4, -0.4) {\(i_s\)};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}
\caption{Feynman rules for boundary fields and interaction vertices: we denote with a black dot the vertices arising from the $(2,0)$ part of the curvature (i.e. the terms corresponding to $\Sc_R$ in the action) and with a red dot those coming from the $(1,1)$ part (i.e. the terms corresponding to $\Sc_{\Bar{R}}$ in the action). Informally, we will call the first type ``black'' vertices and the second ``red'' vertices.}
\label{fig:fr_glob_2}
\end{figure}
\begin{figure}[hbt!]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (-2,0);
\vertex (b) at (2,0);
\vertex (d) at (0.4, 0.75);
\vertex (e) at (1, 0.75);
\vertex (f) at (-1, 0.75);
\diagram*{
(a) -- m [dot] -- (b),
(d) -- [anti fermion] m,
(e) -- [anti fermion] m,
(f) -- [anti fermion] m
};
\vertex [below=0.75em of m] {\([\mathbb{A}^{i_1}\dots \mathbb{A}^{i_k}]\)};
\vertex [left=0.25em of a] {\(\partial_1 \Sigma_3\)};
\node at (-0.2, 0.5) {$\dots$};
\node at (-1.1, 0.6) {$i_k$};
\node at (0.5, 1) {$i_2$};
\node at (1.2, 0.9) {$i_1$};
\end{feynman}%
\end{tikzpicture}
\caption{}
\label{fig:comp_fields}
\end{subfigure}%
\qquad \begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (a) at (-2,0);
\vertex (b) at (2,0);
\vertex (d) at (0.4, 0.75);
\vertex (e) at (1, 0.75);
\vertex (f) at (-1, 0.75);
\diagram*{
(a) -- m [dot] -- (b),
(d) -- [fermion] m,
(e) -- [fermion] m,
(f) -- [fermion] m
};
\vertex [below=0.75em of m] {\([\mathbb{B}_{j_1}\dots \mathbb{B}_{j_l}]\)};
\vertex [left=0.25em of a] {\(\partial_2 \Sigma_3\)};
\node at (-0.2, 0.5) {$\dots$};
\node at (-1.1, 0.6) {$j_l$};
\node at (0.5, 1) {$j_2$};
\node at (1.2, 0.9) {$j_1$};
\end{feynman}
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\caption{Feynman rules for the composite fields.}
\label{fig:comp_fields_2}
\end{figure}
The full covariant quantum state for the globalized split RW theory is defined analogously to \cite{CMR17}.
\begin{defn}[Full quantum state for the globalized split RW theory]
\label{full_quantum_state_RW}
Let $\Sigma_3$ be a 3-dimensional manifold with boundary. Consider the data of a globalized split RW theory, consisting of the globalized split space of fields $\tilde{\mathcal{F}}\surgS$ as in \eqref{space_of_fields_gs}, the globalized split space of boundary fields $\tilde{\mathcal{F}}\surgSB$ as in \eqref{space_boundary_fields_sg}, a polarization $\mathcal{P}$ on $\tilde{\mathcal{F}}\surgSB$, a good splitting $\tilde{\mathcal{F}}\surgS=\mathcal{B}_{\partial \Sigma_3}^{\mathcal{P}} \times \mathcal{V}_{\Sigma_3}^{\mathcal{P}} \times \mathcal{Y}'$ and the gauge-fixing Lagrangian $\mathcal{L} \subset \mathcal{Y}'$. We define the \textit{full quantum state for the globalized split RW theory} by the formal power series
\begin{equation}
\boldsymbol{\hat{\psi}}\surgR(\mathbb{A},\mathbb{B};\mathsf{a},\mathsf{b})=T_{\Sigma_3}\exp\bigg(\frac{i}{\hbar}\sum_{\Gamma}\frac{(-i\hbar)^{\text{loops}(\Gamma)}}{|\text{Aut}(\Gamma)|}\varint_{\mathrm{Conf}_\Gamma(\Sigma_3)}\omega_{\Gamma}(\mathbb{A}, \mathbb{B}; \mathsf{a}, \mathsf{b})
\bigg).
\end{equation}
\end{defn}
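In the weight $(-i\hbar)^{\text{loops}(\Gamma)}/|\text{Aut}(\Gamma)|$, the exponent $\text{loops}(\Gamma)$ is the first Betti number of the graph, $|E(\Gamma)|-|V(\Gamma)|+\#\{\text{components}\}$. A minimal stand-alone computation of it (a sketch with our own names, using a small union-find over the vertex set):

```python
def loops(num_vertices, edges):
    """First Betti number |E| - |V| + (number of connected components).
    Vertices are 0..num_vertices-1; edges is a list of pairs, and
    multiple edges between the same vertices are allowed."""
    parent = list(range(num_vertices))

    def find(v):
        # Path-halving union-find lookup.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)

    components = len({find(v) for v in range(num_vertices)})
    return len(edges) - num_vertices + components

print(loops(3, [(0, 1), (1, 2), (2, 0)]))  # triangle: 1 loop
print(loops(4, [(0, 1), (1, 2), (1, 3)]))  # tree: 0 loops
```

This is the standard loop counting that organizes the expansion as a formal power series in $\hbar$.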
\section{Proof of the modified differential Quantum Master Equation}
\label{sec:mdQME}
In the BV-BFV formalism on manifolds with boundary we expect the mQME to hold. This is a condition which requires the quantum state to be closed under a certain coboundary operator (see \cite{CMR17}). However, in the context of a globalized AKSZ theory, this condition becomes more complicated. The new condition is called the \textit{modified differential Quantum Master Equation} (mdQME). We refer to \cite{BCM12,CMR14} for a discussion of the classical and quantum aspects of this condition. An extension of this discussion to manifolds with boundary was provided in \cite{CMoW17}. Finally, in \cite{CMoW19} the mdQME was proven for anomaly-free, unimodular split AKSZ theories, and later on in \cite{CMoW20} for the globalized PSM.
Our aim in this section is to prove the mdQME for the globalized split RW model, namely
\begin{equation}
\nabla_{\text{G}}\boldsymbol{\hat{\psi}}\surgR=0,
\end{equation}
where $\nabla_{\text{G}}$ is the \textit{quantum Grothendieck BFV (qGBFV) operator} and $\boldsymbol{\hat{\psi}}\surgR$ is the full covariant quantum state for the globalized split RW theory.
As we will see, the proof follows almost verbatim from the proof of the mdQME in \cite{CMoW19}. Before addressing the proof, we focus on the qGBFV operator and we discuss the construction of the full BFV boundary operator.
\subsection{The quantum Grothendieck BFV operator}
\begin{defn}[qGBFV operator for the globalized split RW model]
Inspired by \cite{CMoW19}, we define the \textit{qGBFV operator for the globalized split RW model} as
\begin{equation}
\nabla_\mathrm{G}\coloneqq\bigg(d_x+d_{\Bar{x}}-i\hbar \Delta_{\mathcal{V}_{\Sigma_3, x}}+\frac{i}{\hbar}\boldsymbol{\Omega}_{\partial \Sigma_3}\bigg),
\end{equation}
with $\boldsymbol{\Omega}_{\partial \Sigma_3}$ the full BFV boundary operator
\begin{equation}
\boldsymbol{\Omega}_{\partial \Sigma_3}=\boldsymbol{\Omega}^{\mathbb{A}}_{\partial\Sigma_3}+\boldsymbol{\Omega}^{\mathbb{B}}_{\partial\Sigma_3}=\Omega^{\mathbb{A}}_{0}+\boldsymbol{\Omega}^{\mathbb{A}}_{\text{pert}}+\Omega^{\mathbb{B}}_0+\boldsymbol{\Omega}^{\mathbb{B}}_{\text{pert}},
\end{equation}
where
\begin{equation}
\begin{split}
\Omega^{\mathbb{A}}_0&=-i\hbar \varint_{\partial_1\Sigma_3}d\mathbb{A}^i\frac{\delta}{\delta \mathbb{A}^i},\\
\Omega^{\mathbb{B}}_0&=-i\hbar \varint_{\partial_2\Sigma_3}d\mathbb{B}_i\frac{\delta}{\delta \mathbb{B}_i},
\end{split}
\end{equation}
and $\boldsymbol{\Omega}^{\mathbb{A}}_{\text{pert}}$ and $\boldsymbol{\Omega}^{\mathbb{B}}_{\text{pert}}$ are given by Feynman diagrams collapsing to the boundary in the $\mathbb{A}$-representation and $\mathbb{B}$-representation, respectively.
\end{defn}
\begin{rmk}
Note that $\nabla_{\text{G}}$ and $\boldsymbol{\Omega}_{\partial \Sigma_3}$ are inhomogeneous forms on the holomorphic symplectic manifold $M$, since the globalization term in the action is a 1-form on $M$. Explicitly, for example in the $\mathbb{B}$-representation, we can decompose $\boldsymbol{\Omega}^{\mathbb{B}}_{\mathrm{pert}}$ as
\begin{equation}
\boldsymbol{\Omega}^{\mathbb{B}}_{\mathrm{pert}}=\underbrace{\boldsymbol{\Omega}^{\mathbb{B}}_{1,0}+\boldsymbol{\Omega}^{\mathbb{B}}_{0,1}}_{\coloneqq \boldsymbol{\Omega}^{\mathbb{B}}_{(1)}}+\underbrace{\boldsymbol{\Omega}^{\mathbb{B}}_{2,0}+\boldsymbol{\Omega}^{\mathbb{B}}_{1,1}+\boldsymbol{\Omega}^{\mathbb{B}}_{0,2}}_{\coloneqq\boldsymbol{\Omega}^{\mathbb{B}}_{(2)}}+\dots,
\end{equation}
and similarly in the $\mathbb{A}$-representation.
\end{rmk}
In the next section, we proceed to give an explicit expression for the BFV boundary operator in the $\mathbb{B}$ and $\mathbb{A}$ representation. We start with the former.
\subsection{BFV boundary operator in the $\mathbb{B}$-representation}
Let us recall the general form of the BFV boundary operator in the $\mathbb{B}$-representation for a split AKSZ theory \cite{CMoW19}:
\begin{equation}
\boldsymbol{\Omega}_{\text{pert}}^{\mathbb{B}}\coloneqq \sum_{n,k \geq 0}\sum_{\Gamma}\frac{(i \hbar)^{\text{loops}(\Gamma)}}{\mid \Aut(\Gamma)\mid} \varint_{\partial_2M}\bigg(\sigma_{\Gamma}\bigg)^{I_1\dots I_n}_{J_1\dots J_k}\wedge \mathbb{B}_{I_1}\wedge \dots \wedge \mathbb{B}_{I_n}\bigg((-1)^{kd} (i\hbar)^{k}\frac{\delta^{\mid J_1\mid + \dots +\mid J_k\mid}}{\delta [\mathbb{B}_{J_1} \dots \mathbb{B}_{J_k}]}\bigg).
\end{equation}
In order to find an explicit expression for the BFV boundary operator, we adopt the strategy in \cite{CMoW20} to find the BFV boundary operator in the $\mathbb{E}$-representation for the PSM. Their idea was to use the \textit{degree counting}. Indeed, in general, the form $\sigma_{\Gamma}$ is obtained as the integral over the compactification $\tilde{\mathrm{C}}_{\Gamma}(\mathbb{H}^d)$ of the open configuration space modulo scaling and translation, with $\mathbb{H}^d$ the $d$-dimensional upper half-space:
\begin{equation}
\label{mdQME:integral}
\sigma_{\Gamma}=\varint_{\tilde{\mathrm{C}}_{\Gamma}(\mathbb{H}^d)} \omega_{\Gamma},
\end{equation}
where $\omega_{\Gamma}$ is the product of the limiting propagators at the point $p$ of collapse and the vertex tensors. Note that in order for the integral \eqref{mdQME:integral} not to vanish, the form degree of $\omega_\Gamma$ has to equal the dimension of $\tilde{\mathrm{C}}_{\Gamma}(\mathbb{H}^d)$. This constrains the admissible numbers of bulk and boundary points. We will apply this degree counting to our case, where, since $d=3$, the dimension of the compactified configuration space is $\dim\tilde{\mathrm{C}}_{\Gamma}(\mathbb{H}^3)=3n+2m-3$, with $n$ the number of bulk vertices and $m$ the number of boundary vertices of $\Gamma$.
By using this procedure, in \cite{CMoW20} it was possible to find an explicit expression for the BFV boundary operator in the $\mathbb{E}$-representation for the PSM. As we will see, this is not possible in our case. One could say that the cause lies in the nature of the RW model, reflected in a dramatic increase in the number of Feynman rules as the $k$-index of the globalization terms in the action grows (see Eq. \eqref{class:coeff}).
To see this in practice, let us show explicitly the Feynman rules for the terms in \eqref{class:R_1} and \eqref{class:R_2}, which we sum up in Table \ref{class:Tab_coeff_split_fr}.
\begin{table}[!htb]
\begin{minipage}{.5\linewidth}
\centering
\begin{tabular}[t]{|lccc|}
\toprule
Vertex & Feynman rule & Total degree & Name \\
\midrule
$(\hat{R}_0\big)^i_{j}$ & \raisebox{\dimexpr15 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, -0.75);
\diagram*{
(d) -- [fermion] m [dot]
};
\end{feynman}
\end{tikzpicture} } & 0 & \textrm{I} \\[5mm]
$(\hat{R}_0\big)^i_{j}$ & \raisebox{\dimexpr15 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, -0.75);
\diagram*{
(d) -- [anti fermion] m [dot]
};
\end{feynman}
\end{tikzpicture} } & 2 & \textrm{II} \\[5mm]
$\Big(\hat{R}_1\Big)^i_{j;s}$ & \raisebox{\dimexpr20 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, -0.75);
\vertex (c) at (0, 0.75);
\diagram*{
(d) -- [fermion] m [dot] -- [anti fermion] (c)
};
\end{feynman}
\end{tikzpicture} } & 0 & \textrm{III}\\[10mm]
$\Big(\hat{R}_1\Big)^{is}_{j}$ & \raisebox{\dimexpr20 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, -0.75);
\vertex (c) at (0, 0.75);
\diagram*{
(d) -- [anti fermion] m [dot] -- [anti fermion] (c)
};
\end{feynman}
\end{tikzpicture} } & 2 & \textrm{IV}\\[10mm]
$\Big(\hat{R}_1\Big)^{is}_{j}$ & \raisebox{\dimexpr20 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, -0.75);
\vertex (c) at (0, 0.75);
\diagram*{
(d) -- [anti fermion] m [dot] -- [fermion] (c)
};
\end{feynman}
\end{tikzpicture} } & 4 & \textrm{V}\\[10mm]
$\Big(\hat{R}_2\Big)^i_{j;sm}$ & \raisebox{\dimexpr20 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0.5, 0.5);
\vertex (c) at (-0.5, 0.5);
\vertex (a) at (0, -0.7);
\diagram*{
(d) -- [fermion] m [dot] -- [anti fermion] (c);
m -- [anti fermion] (a)
};
\end{feynman}
\end{tikzpicture} }
& 0 & \textrm{VI} \\[10mm]
$\Big(\hat{R}_2\Big)^{im}_{j;s}$ &
\raisebox{\dimexpr20 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0.5, 0.5);
\vertex (c) at (-0.5, 0.5);
\vertex (a) at (0, -0.7);
\diagram*{
(d) -- [anti fermion] m [dot] -- [anti fermion] (c);
m -- [anti fermion] (a)
};
\end{feynman}
\end{tikzpicture} }& 2 & \textrm{VII}\\
\bottomrule
\end{tabular}
\end{minipage}%
\begin{minipage}{.5\linewidth}
\centering
\begin{tabular}[t]{|lccc|}
\toprule
Vertex & Feynman rule & Total degree & Name \\
\midrule
$\Big(\hat{R}_2\Big)^{ism}_{j}$ & \raisebox{\dimexpr20 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0.5, 0.5);
\vertex (c) at (-0.5, 0.5);
\vertex (a) at (0, -0.7);
\diagram*{
(d) -- [anti fermion] m [dot] -- [fermion] (c);
m -- [anti fermion] (a)
};
\end{feynman}
\end{tikzpicture} } & 4 & \textrm{VIII}\\[10mm]
$\Big(\hat{R}_2\Big)^{ism}_{j}$ &
\raisebox{\dimexpr20 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0.5, 0.5);
\vertex (c) at (-0.5, 0.5);
\vertex (a) at (0, -0.7);
\diagram*{
(d) -- [anti fermion] m [dot] -- [fermion] (c);
m -- [fermion] (a)
};
\end{feynman}
\end{tikzpicture} }& 6 &\textrm{IX}\\[10mm]
$\Big(\hat{R}_2\Big)^i_{\bar{j};sm}$ & \raisebox{\dimexpr20 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0.5, 0.5);
\vertex (c) at (-0.5, 0.5);
\vertex (a) at (0, -0.7);
\diagram*{
(d) -- [fermion] m [dot, red] -- [anti fermion] (c);
m -- [anti fermion] (a)
};
\end{feynman}
\end{tikzpicture} } & 0 &\textrm{X} \\[10mm]
$\Big(\hat{R}_2\Big)^{im}_{\bar{j};s}$ & \raisebox{\dimexpr20 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0.5, 0.5);
\vertex (c) at (-0.5, 0.5);
\vertex (a) at (0, -0.7);
\diagram*{
(d) -- [anti fermion] m [dot, red] -- [anti fermion] (c);
m -- [anti fermion] (a)
};
\end{feynman}
\end{tikzpicture} } & 2 &\textrm{XI}\\[10mm]
$\Big(\hat{R}_2\Big)^{ism}_{\bar{j}}$ &\raisebox{\dimexpr20 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0.5, 0.5);
\vertex (c) at (-0.5, 0.5);
\vertex (a) at (0, -0.7);
\diagram*{
(d) -- [anti fermion] m [dot, red] -- [fermion] (c);
m -- [anti fermion] (a)
};
\end{feynman}
\end{tikzpicture} } & 4 &\textrm{XII}\\[10mm]
$\Big(\hat{R}_2\Big)^{ism}_{\bar{j}}$ &
\raisebox{\dimexpr20 pt-\totalheight\relax}{ \begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0.5, 0.5);
\vertex (c) at (-0.5, 0.5);
\vertex (a) at (0, -0.7);
\diagram*{
(d) -- [anti fermion] m [dot, red] -- [fermion] (c);
m -- [fermion] (a)
};
\end{feynman}
\end{tikzpicture} }& 6 &\textrm{XIII}\\[8.7mm]
\bottomrule
\end{tabular}
\end{minipage}
\caption{Feynman rules for the globalization terms in the action \eqref{split_global_action}.}
\label{class:Tab_coeff_split_fr}
\end{table}
Notice how the structure of the Feynman rules repeats similarly at each order (e.g. for $R_0$ we have 2 Feynman rules with degrees 0 and 2, respectively, while for $R_1$ we have 3 graphs with degrees 0, 2 and 4). Hence, it is easy to see how this extends to higher order terms.
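Assuming the pattern just described persists at every order (our extrapolation from Table \ref{class:Tab_coeff_split_fr}, not a statement proven here), the black-vertex rules at order $k$ carry total degrees $0,2,\dots,2(k+1)$, i.e. $k+2$ rules. A one-line generator reproduces the degrees in the table:

```python
def black_rule_degrees(k):
    """Total degrees of the black-vertex Feynman rules at order k,
    extrapolating the pattern of the table: 0, 2, ..., 2(k+1)."""
    return list(range(0, 2 * (k + 1) + 1, 2))

# Reproduce the table: R_0 -> [0, 2], R_1 -> [0, 2, 4], R_2 -> [0, 2, 4, 6]
for k in range(3):
    print(k, black_rule_degrees(k))
```

The linear growth in the number of rules per order is precisely what obstructs a closed-form expression for the BFV boundary operator here, in contrast with the PSM case of \cite{CMoW20}.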
From there, one can notice that we have two types of vertices:
\begin{itemize}
\item vertices which are 1-forms in $dx^{i}$: we will denote them by a black dot ($\bullet$) and refer to them as black vertices;
\item vertices which are 1-forms in $dx^{{\bar{i}}}$: we will denote them by a red dot (\textcolor{red}{$\bullet$}) and refer to them as red vertices.
\end{itemize}
In our computations, we will limit ourselves to the Feynman rules in Table \ref{class:Tab_coeff_split_fr}; these are already enough to get a feeling for what is going on and, where possible, to understand the behaviour of higher order terms. Using the names in the table, since $n=\textrm{I}+\textrm{II}+\textrm{III}+\textrm{IV}+\textrm{V}+\textrm{VI}+\textrm{VII}+\textrm{VIII}+\textrm{IX}+\textrm{X}+\textrm{XI}+\textrm{XII}+\textrm{XIII}$ is the total number of vertices, the degree counting produces the following equation
\begin{equation}
\begin{split}
\textrm{I}+\textrm{II}+\textrm{III}+\textrm{IV}+\textrm{V}+\textrm{VI}+\textrm{VII}+\textrm{VIII}&+\textrm{IX}+\textrm{X}+\textrm{XI}+\textrm{XII}+\textrm{XIII}+2m-3=\\
&2\textrm{II}+2\textrm{IV}+4\textrm{V}+2\textrm{VII}+4\textrm{VIII}+6\textrm{IX}+2\textrm{XI}+4\textrm{XII}+6\textrm{XIII},
\end{split}
\end{equation}
where on the right-hand side we take into account that, in the $\mathbb{B}$-representation, the arrows leaving the globalization vertex have to stay inside the collapsing subgraph; otherwise, by the boundary conditions on the propagator \cite{CMR17}, the result would vanish.
First, let us focus on the black vertices (i.e. vertices \textrm{I}--\textrm{IX}). The equation reduces to
\begin{equation}
\label{degree counting eq}
3\textrm{I}+\textrm{II}+3\textrm{III}+\textrm{IV}-\textrm{V}+3\textrm{VI}+\textrm{VII}-\textrm{VIII}-3\textrm{IX}+2m-3=0.
\end{equation}
The Feynman diagrams contributing to the BFV boundary operator are those whose vertices solve the equation \eqref{degree counting eq}. Hence, let us solve the equation case-by-case.
With at most one bulk vertex, the Feynman rules \textrm{I}--\textrm{IX} yield a single diagram (see Fig. \ref{fig:first_diagram}).
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, 0.75);
\vertex (a) at (-1.25, -0.75);
\vertex (b) at (1.25, -0.75);
\node[dot] (x) at (0, -0.75);
\diagram*{
(d) -- [fermion] m [dot],
(a) -- (b),
m --[fermion] (x)
};
\vertex [left=0.25em of a] {\(\partial_2 \Sigma_3\)};
\end{feynman}
\clip (-0.9,-0.75) rectangle (0.9,0.9);
\draw (0,-0.65) circle(0.9);
\end{tikzpicture}
\caption{First graph with a single black vertex contributing to the BFV boundary operator.}
\label{fig:first_diagram}
\end{figure}
From Fig. \ref{fig:first_diagram}, we notice that, in order to have a degree 1 operator satisfying the degree counting for higher-order terms, we need vertices with an even number of heads and tails. We show the first higher-order contributions in Fig. \ref{fig:higher_diag}, while a general diagram contributing to the BFV operator
is exhibited in Fig. \ref{fig:gen_diagram}.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (-0.75, 0.75);
\vertex (c) at (0.75, 0.75);
\vertex (a) at (-1.25, -0.75);
\vertex (b) at (1.25, -0.75);
\node[dot] (x) at (-0.5, -0.75);
\node[dot] (y) at (0.5, -0.75);
\diagram*{
(d) -- [fermion] m [dot],
(c) -- [fermion] m,
(a) -- (b),
m --[fermion] (x),
m --[fermion] (y)
};
\vertex [left=0.25em of a] {\(\partial_2 \Sigma_3\)};
\end{feynman}
\clip (-0.9,-0.75) rectangle (0.9,0.9);
\draw (0,-0.65) circle(0.9);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}\quad
\begin{subfigure}[b]{0.25\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (-0.75, 0.75);
\vertex (c) at (0.75, 0.75);
\vertex (e) at (0, 0.75);
\vertex (a) at (-1.25, -0.75);
\vertex (b) at (1.25, -0.75);
\node[dot] (x) at (-0.5, -0.75);
\node[dot] (y) at (+0.5, -0.75);
\node[dot] (z) at (0, -0.75);
\diagram*{
(d) -- [fermion] m [dot],
(c) -- [fermion] m,
(e) -- [fermion] m,
m -- [fermion] (x),
m -- [fermion] (y),
m -- [fermion] (z),
(a) -- (b)
};
\vertex [left=0.25em of a] {\(\partial_2 \Sigma_3\)};
\end{feynman}
\clip (-0.9,-0.75) rectangle (0.9,0.9);
\draw (0,-0.65) circle(0.9);
\end{tikzpicture}
\label{}
\caption{}
\end{subfigure}
\caption{Second and third graphs with a single black vertex contributing to the BFV boundary operator.}
\label{fig:higher_diag}
\end{figure}
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (-1.4, 0.7);
\vertex (c) at (0.9, 0.95);
\vertex (e) at (-0.8, 0.95);
\vertex (a) at (-1.65, -0.95);
\vertex (b) at (1.65, -0.95);
\node[dot] (x) at (-0.9, -0.95);
\node[dot] (y) at (+0.85, -0.95);
\node[dot] (z) at (-0.4, -0.95);
\diagram*{
(d) -- [fermion] m [dot],
(c) -- [fermion] m,
(e) -- [fermion] m,
m -- [fermion] (x),
m -- [fermion] (y),
m -- [fermion] (z),
(a) -- (b)
};
\vertex [left=0.25em of a] {\(\partial_2 \Sigma_3\)};
\end{feynman}
\draw (-1.2,-0.95) arc (180:0:1.2);
\node at (0,0.65) {\dots};
\node at (0.2,-0.75) {\dots};
\end{tikzpicture}
\caption{A general Feynman diagram contributing to the BFV operator in the $\mathbb{B}$-representation up to one black bulk vertex.}
\label{fig:gen_diagram}
\end{figure}
Concerning the red vertices, the graphs contributing to the BFV operator up to one bulk vertex start to appear from the vertices associated to the term $(R_3)_{\bar{j}}dx^{\bar{j}}$ (coming from the third term in the action \eqref{split_global_action}). Taking this into account, the general form of the diagrams with a red vertex is shown in Fig. \ref{fig:gen_diagram_red}.
\begin{figure}[H]
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (-1.4, 0.7);
\vertex (c) at (0.9, 0.95);
\vertex (e) at (-0.8, 0.95);
\vertex (a) at (-1.65, -0.95);
\vertex (b) at (1.65, -0.95);
\node[dot] (x) at (-0.9, -0.95);
\node[dot] (y) at (+0.85, -0.95);
\node[dot] (z) at (-0.4, -0.95);
\diagram*{
(d) -- [fermion] m [red, dot],
(c) -- [fermion] m,
(e) -- [fermion] m,
m -- [fermion] (x),
m -- [fermion] (y),
m -- [fermion] (z),
(a) -- (b)
};
\vertex [left=0.25em of a] {\(\partial_2 \Sigma_3\)};
\end{feynman}
\draw (-1.2,-0.95) arc (180:0:1.2);
\node at (0,0.65) {\dots};
\node at (0.2,-0.75) {\dots};
\end{tikzpicture}
\caption{A general Feynman diagram contributing to the BFV operator in the $\mathbb{B}$-representation up to one red bulk vertex. In particular, the graph with a total number of 4 arrows (2 entering and 2 leaving the red vertex) is the first non-zero contribution.}
\label{fig:gen_diagram_red}
\end{figure}
These considerations prove the following proposition.
\begin{prp}
\label{prp_op}
Consider the globalized split RW model in the $\mathbb{B}$-representation. The first contribution to $\boldsymbol{\Omega}^{\mathbb{B}}_{\mathrm{pert}}$ is given by $\boldsymbol{\Omega}^{\mathbb{B}}_{(1)}= \boldsymbol{\Omega}^{\mathbb{B}}_{1,0}+ \boldsymbol{\Omega}^{\mathbb{B}}_{0,1}$ with
\begin{equation}
\begin{split}
\boldsymbol{\Omega}^{\mathbb{B}}_{1,0}&=\sum_{\substack{k\geq1,S_1, \dots S_k\\ i_1,\dots, i_k, j_1, \dots, j_k}}\frac{(-i\hbar)^k}{(k+S_1+\dots+S_k)!}\varint_{\partial_2\Sigma_3}\Big(\hat{R}_{2k-1}\Big)^{i_1\dots i_k}_{j;j_1\dots j_k} dx^j[\mathbb{B}_{i_1}\mathbb{B}_{S_1}]\dots \\
&\hspace{8cm}\times[\mathbb{B}_{i_k}\mathbb{B}_{S_k}] \frac{\delta^{|j_1+\dots+j_k|+|S_1|+\dots+|S_k|}}{\delta [\mathbb{B}_{j_1}\dots \mathbb{B}_{j_k}][\delta \mathbb{B}_{S_1}]\dots [\delta \mathbb{B}_{S_k}]}\\
\boldsymbol{\Omega}^{\mathbb{B}}_{0,1}&=\sum_{\substack{k\geq 2,S_1, \dots S_k\\ i_1,\dots, i_k, j_1, \dots, j_k}}\frac{(-i\hbar)^k}{(k+S_1+\dots+S_k)!}\varint_{\partial_2\Sigma_3}\Big(\hat{R}_{2k-1}\Big)^{i_1\dots i_k}_{\bar{j};j_1\dots j_k} dx^{\bar{j}}[\mathbb{B}_{i_1}\mathbb{B}_{S_1}]\dots\\
&\hspace{8cm}\times[\mathbb{B}_{i_k}\mathbb{B}_{S_k}] \frac{\delta^{|j_1+\dots+j_k|+|S_1|+\dots+|S_k|}}{\delta [\mathbb{B}_{j_1}\dots \mathbb{B}_{j_k}][\delta \mathbb{B}_{S_1}]\dots [\delta \mathbb{B}_{S_k}]}.
\end{split}
\end{equation}
\end{prp}
For $n>1$, the situation gets more complicated, and we solve equation \eqref{degree counting eq} numerically. Empirically, we find no solutions for an even number of bulk vertices, which immediately implies $\boldsymbol{\Omega}^{\mathbb{B}}_{(2)}=0$.
In the case $n=3$, the number of Feynman diagrams for the vertices \textrm{I}--\textrm{IX} increases dramatically with respect to the $n=1$ case. This growth is tamed by the requirement of a degree 1 operator, which reduces their number. However, we are not able to provide an explicit and general form for the BFV operator along the same lines as in Proposition \ref{prp_op}; instead, we rely on the examples shown in Appendix \ref{app:feyn}.
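The brute-force search over Eq. \eqref{degree counting eq} can be sketched in a few lines. The script below is our own illustration, not part of the construction above: it treats the multiplicities of the vertex types \textrm{I}--\textrm{IX} as unknowns with total $n$, and $m$ as a free nonnegative parameter (an assumption of the sketch). It also exhibits the parity obstruction behind the absence of solutions for even totals: modulo 2 the left-hand side of the equation reduces to $n+1$.

```python
from itertools import product

# Coefficients of the vertex multiplicities (I, ..., IX) in the
# degree-counting equation 3I+II+3III+IV-V+3VI+VII-VIII-3IX+2m-3 = 0.
COEFFS = (3, 1, 3, 1, -1, 3, 1, -1, -3)

def solutions(n, m):
    """Nonnegative integer solutions with total multiplicity n (m treated as free)."""
    return [v for v in product(range(n + 1), repeat=9)
            if sum(v) == n
            and sum(c * x for c, x in zip(COEFFS, v)) + 2 * m - 3 == 0]

# Odd total multiplicity: integer solutions exist (the actual diagrams are
# further constrained by the selection rules discussed in the text).
assert solutions(1, 1)

# Even total multiplicity: no solutions, for parity reasons -- modulo 2
# every coefficient equals 1 and 2m - 3 is odd, so the equation reads
# n + 1 = 0 (mod 2), which fails for even n.
assert all(not solutions(2, m) for m in range(5))
```

Note that the bare integer solutions overcount the admissible diagrams, since further diagrammatic constraints apply; the enumeration only bounds which vertex contents can occur.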
\begin{rmk}
Here we are assuming that the dimension of our target manifold $M$ is at least 4; if this were not the case, we would not have the contribution with 3 bulk vertices to the BFV boundary operator. Hence, the number of bulk vertices allowed is bounded by the dimension of $M$. This was already noticed in \cite{CMoW20}. The difference here is that this reflects the ``odd Grassmannian nature'' of the RW model with respect to CS theory (see Remark \ref{rmk_parameters}).
\end{rmk}
\subsection{BFV boundary operator in the $\mathbb{A}$-representation}
In the $\mathbb{A}$-representation, the arrows coming from the globalized vertices are allowed to leave the collapsing subgraph. Therefore, our degree-counting arguments are not valid here. However, since the coboundary operator has total degree 1, while $\mathbb{A}^i$ has total degree 0, we can have at most 1 bulk vertex, i.e. $\boldsymbol{\Omega}^{\mathbb{A}}_{\mathrm{pert}}=\boldsymbol{\Omega}^{\mathbb{A}}_{1,0}+\boldsymbol{\Omega}^{\mathbb{A}}_{0,1}$ with
\begin{equation}
\begin{split}
\boldsymbol{\Omega}^{\mathbb{A}}_{1,0}&= \sum_{k\geq0}\,\,\varint_{\partial_1\Sigma_3}\sum_{J_1,\dots, J_r, I_1,\dots, I_s} \frac{(-i\hbar)^{|I_1|+\dots +|I_s|}}{(|I_1|+\dots +|I_s|)!}\Big(\hat{R}_k\Big)^{I_1\dots I_s}_{j;J_1\dots J_r}dx^j\prod^{r+s=k+1}_{r=1, s=1}[\mathbb{A}^{J_r}]\frac{\delta^{|I_s|}}{\delta [\mathbb{A}^{I_s}]},\\
\boldsymbol{\Omega}^{\mathbb{A}}_{0,1}&= \sum_{k\geq3}\,\,\varint_{\partial_1\Sigma_3}\sum_{J_1,\dots, J_r, I_1,\dots, I_s} \frac{(-i\hbar)^{|I_1|+\dots +|I_s|}}{(|I_1|+\dots +|I_s|)!}\Big(\hat{R}_k\Big)^{I_1\dots I_s}_{\bar{j};J_1\dots J_r}dx^{\bar{j}}\prod^{r+s=k+1}_{r=1, s=1}[\mathbb{A}^{J_r}]\frac{\delta^{|I_s|}}{\delta [\mathbb{A}^{I_s}]},
\end{split}
\end{equation}
where we label by the multiindex $J_r$ the arrows emanating from a boundary vertex towards the globalized vertex, and by the multiindex $I_s$ the leaves emanating from the bulk vertex. The sum of $r$ and $s$ has to be $k+1$, since this is the total number of arrows leaving and arriving at a globalized vertex $(R_k)_jdx^j$ (or $(R_k)_{\bar{j}}dx^{\bar{j}}$).
\subsection{Flatness of the qGBFV operator for the globalized split RW model}
In this section, we prove that the qGBFV operator for the globalized split RW model squares to zero. The proof follows along the same lines as in \cite{CMoW19}; we will remark where there are differences and refer to their work when the procedure is identical. Before entering into the details of the proof, we should mention that their proof (and the proof of the mdQME) depends on two assumptions: \textit{unimodularity} and the \textit{absence of hidden faces} (\textit{anomaly-free} condition). The first means that tadpoles are not allowed. In the case of the globalized split RW model, this assumption is not needed since tadpoles vanish \cite{RW96}.
\begin{assump}
\label{ass_hidden_faces}
We assume that the globalized split RW model is \emph{anomaly-free}, i.e. for every graph $\Gamma$, we have that
\begin{equation}
\varint_{F_{\geq 3}}\omega_\Gamma=0,
\end{equation}
where by $F_{\geq 3}$, we denote the union of the faces where at least three bulk vertices collapse in the bulk (also called \emph{hidden faces} \cite{BC}).
\end{assump}
\begin{rmk}
It is well known that Chern--Simons theory is \emph{not} an anomaly-free theory \cite{AS91,AS94}. The construction of the quantum theory there depends on the choice of gauge-fixing. The appearance of anomalies can be resolved by choosing a framing and framing-dependent counterterms for the gauge-fixing. A famous example of an anomaly-free theory is given by the Poisson sigma model \cite{CF00}, since by the result of Kontsevich \cite{Ko03} any 2-dimensional theory is actually anomaly-free.
A general method for dealing with theories that do have anomalies is to add counterterms to the action. If the differential form $\omega_\Gamma$, which is integrated over the hidden faces, is exact, one can use a primitive to cancel the anomalies via the additional vertices that appear.
\end{rmk}
Since the integrals we will consider are fiber integrals, we will apply Stokes' theorem for integration along a compact fiber with corners, i.e.
\begin{equation}
d\pi_*=\pi_*d-\pi^{\partial}_*,
\end{equation}
where $\pi_*$ denotes the fiber integration.
In particular, the application of Stokes' theorem to a fiber integral yields
\begin{equation}
\label{stokes}
(d_x+d_{\bar{x}})\varint_{\text{C}_{\Gamma}}\omega_\Gamma=\varint_{\text{C}_{\Gamma}}(d+d_{\bar{x}})\omega_\Gamma-\varint_{\partial \text{C}_{\Gamma}}\omega_\Gamma,
\end{equation}
where $d$ is the differential on $M\times C_\Gamma$.
\begin{thm}[Flatness of the qGBFV operator]\label{thm:flatness}
The qGBFV operator $\nabla_{\textup{G}}$ for the anomaly-free globalized split RW model squares to zero, i.e.
\begin{equation}
\label{flatness_GBFV}
(\nabla_{\textup{G}})^2\equiv0,
\end{equation}
where
\begin{equation}
\nabla_{\textup{G}}=d_{M}-i\hbar \Delta_{\mathcal{V}_{\Sigma_3, x}}+\frac{i}{\hbar}\boldsymbol{\Omega}_{\partial \Sigma_3}=d_x+d_{\Bar{x}}-i\hbar \Delta_{\mathcal{V}_{\Sigma_3, x}}+\frac{i}{\hbar}\boldsymbol{\Omega}_{\partial \Sigma_3}.
\end{equation}
\end{thm}
\begin{proof}
According to \cite{CMoW19}, the flatness of $\nabla_{\mathrm{G}}$ is equivalent to the equation
\begin{equation}
\label{flatness_GBFV_3}
i\hbar d_M\boldsymbol{\Omega}_{\partial \Sigma_3}-\frac12\bigg[\boldsymbol{\Omega}_{\partial \Sigma_3},\boldsymbol{\Omega}_{\partial \Sigma_3}\bigg]=0.
\end{equation}
This equation was proven for a globalized split AKSZ theory in \cite{CMoW19}, where $d_M$ is just the de Rham differential on the body of the target manifold. However, in our case, $d_M$ is the sum of the holomorphic and antiholomorphic Dolbeault differentials on $M$.
We prove Eq. \eqref{flatness_GBFV_3} for $\boldsymbol{\Omega}^{\mathbb{B}}$. For $\boldsymbol{\Omega}^{\mathbb{A}}$, the proof is analogous as discussed in \cite{CMoW19}. Suppose we apply $d_M$ to a term of the form
\begin{equation}
\boldsymbol{\Omega}^{\mathbb{B}}_\Gamma=\varint_{\partial_2\Sigma_3} \sigma_\Gamma \bigg(\Big(\hat{R}_k\Big)_jdx^j; \Big(\hat{R}_k\Big)_{\bar{j}}dx^{\bar{j}}\bigg)^I_{J_1\dots J_s}[\mathbb{B}^{J_1}]\dots [\mathbb{B}^{J_s}]\frac{\delta}{\delta [\mathbb{B}^{I}]},
\end{equation}
where $k$ could be any number greater than 0; we chose the simplest term in order to express more clearly what is going on. As in \cite{CMoW19}, we apply Stokes' theorem. However, this differs from the corresponding situation in \cite{CMoW19} since in our theory we also have red vertices\footnote{For the sake of clarity, we stress again that in \cite{CMoW19}, the vertices are only ``black'' since $d_M$ is the de Rham differential on the body of the target manifold.}, which is reflected by the fact that $\sigma_\Gamma$ depends also on $\Big(\hat{R}_k\Big)_{\bar{j}}dx^{\bar{j}}$. We obtain
\begin{equation}
\begin{split}
(d_x+d_{\bar{x}})\boldsymbol{\Omega}^{\mathbb{B}}_\Gamma=\varint_{\partial_2\Sigma_3}\bigg\{(d_x+d_{\bar{x}})\sigma_{\Gamma}\bigg(\Big(\hat{R}_k\Big)_jdx^j; \Big(\hat{R}_k\Big)_{\bar{j}}dx^{\bar{j}}\bigg)\bigg\}^I_{J_1\dots J_s}[\mathbb{B}^{J_1}]\dots [\mathbb{B}^{J_s}]\frac{\delta}{\delta [\mathbb{B}^{I}]} +[\Omega^\mathbb{B}_0,\boldsymbol{\Omega}^{\mathbb{B}}_\Gamma],
\end{split}
\end{equation}
where the second term is produced when $d_x$ acts on the $\mathbb{B}$ fields (there is no corresponding term for $d_{\bar{x}}$ since there are no $\mathbb{B}^{\bar{i}}$ fields to act on). By applying Stokes' theorem again, we have:
\begin{equation}
\begin{split}
(d_x+d_{\bar{x}})\sigma_{\Gamma}\bigg(\Big(\hat{R}_k\Big)_jdx^j; \Big(\hat{R}_k\Big)_{\bar{j}}dx^{\bar{j}}\bigg)&=(d_x+d_{\bar{x}})\varint_{\tilde{\text{C}}_{\Gamma}}\omega_{\Gamma}\bigg(\Big(\hat{R}_k\Big)_jdx^j; \Big(\hat{R}_k\Big)_{\bar{j}}dx^{\bar{j}}\bigg)\\
&=\varint_{\tilde{\text{C}}_{\Gamma}}(d+d_{\bar{x}})\omega_{\Gamma}\bigg(\Big(\hat{R}_k\Big)_jdx^j; \Big(\hat{R}_k\Big)_{\bar{j}}dx^{\bar{j}}\bigg)\\
&\quad\pm\varint_{\partial\tilde{\text{C}}_{\Gamma}}\omega_{\Gamma}\bigg(\Big(\hat{R}_k\Big)_jdx^j; \Big(\hat{R}_k\Big)_{\bar{j}}dx^{\bar{j}}\bigg).
\end{split}
\end{equation}
\begin{rmk}
\label{rmk7.4.3}
In principle, $d$ is the differential on $M\times \mathrm{C}_\Gamma$, hence it can be decomposed as $d=d_x+d_1+d_2$, where $d_1$ denotes the part of the differential acting on the propagator and $d_2$ the part acting on $\mathbb{B}$ fields (and, more generally, on $\mathbb{A}$ fields). We do not have a corresponding antiholomorphic differential on $M\times \mathrm{C}_\Gamma$ since the propagators and the fields are all holomorphic. This is different with respect to the case considered in \cite{CMoW19}.
\end{rmk}
As in \cite{CMoW19}, we have $d\omega_{\Gamma} =d_x\omega_{\Gamma}$, and in the boundary integral we have three classes of faces. The first two types of faces, where more than two bulk points collapse and where a subgraph $\Gamma$ collapses at the boundary, can be treated as in \cite{CMoW19}. In particular, the former vanishes by our assumption that the theory is anomaly-free (see Assumption \ref{ass_hidden_faces}), while the latter produces exactly the term $\frac12\bigg[\boldsymbol{\Omega}^{\mathbb{B}}_{\mathrm{pert}},\boldsymbol{\Omega}^{\mathbb{B}}_{\mathrm{pert}}\bigg]$ by \cite[Lemma 4.9]{CMoW19}. On the other hand, the third case, when two bulk vertices collapse, has some differences with respect to the analogous situation in \cite{CMoW19} due to the already mentioned presence of red vertices. Here we distinguish four cases:
\begin{itemize}
\item when a red vertex collapses with a black vertex, then these faces cancel out with\\ $d_x\omega_{\Gamma}\bigg(\Big(\hat{R}_k\Big)_{\bar{j}}dx^{\bar{j}}\bigg)$ by the dCME \eqref{dcme_2};
\item when a black vertex collapses with a red vertex, then these faces cancel out with\\ $d_{\bar{x}}\omega_{\Gamma}\bigg(\Big(\hat{R}_k\Big)_jdx^j\bigg)$ by the dCME \eqref{dcme_4};
\item when two black vertices collapse, then these faces cancel out with $d_x\omega_{\Gamma}\bigg(\Big(\hat{R}_k\Big)_{j}dx^{j}\bigg)$ by the dCME \eqref{dcme_1};
\item when two red vertices collapse, then these faces cancel out with $d_{\bar{x}}\omega_{\Gamma}\bigg(\Big(\hat{R}_k\Big)_{\bar{j}}dx^{\bar{j}}\bigg)$ by the dCME \eqref{dcme_3}.
\end{itemize}
By $\omega_{\Gamma}\bigg(\Big(\hat{R}_k\Big)_{\bar{j}}dx^{\bar{j}}\bigg)$ or $\omega_{\Gamma}\bigg(\Big(\hat{R}_k\Big)_{j}dx^{j}\bigg)$, we mean the part of the subgraph $\Gamma'$ which contains a red or a black vertex, respectively.
This proves \eqref{flatness_GBFV_3}, and thus $(\nabla_{\textup{G}})^2\equiv 0$.
\end{proof}
\subsection{Proof of the mdQME for the globalized split RW model}
In this section, we are going to prove the mdQME for the globalized split RW model. The proof proceeds similarly to \cite{CMoW19}. As before, we will refer to their work when the situation is identical and point out any differences.
\begin{thm}[mdQME for anomaly-free globalized split RW model]\label{thm:mdQME}
Consider the full covariant perturbative state $\hat{\psi}_{\Sigma_3,x}$ as a quantization of the anomaly-free globalized split RW model. Then
\begin{equation}
\label{mdqme_thm}
\bigg(d_M-i\hbar \Delta_{\mathcal{V}_{\Sigma_3, x}}+\frac{i}{\hbar}\boldsymbol{\Omega}_{\partial \Sigma_3}\bigg)\boldsymbol{\hat{\psi}}\surgR=0.
\end{equation}
\end{thm}
\begin{proof}
Let $\mathcal{G}$ denote the set of Feynman graphs of the theory. Then, we can write the full covariant quantum state for the globalized split RW model as
\begin{equation}
\label{proof_state}
\boldsymbol{\hat{\psi}}\surgR=T_{\Sigma_3}\sum_{\Gamma\in\mathcal{G}}\varint_{\text{C}_\Gamma}\omega_\Gamma \Big(\hat{R}_jdx^j; \hat{R}_{\bar{j}}dx^{\bar{j}}\Big),
\end{equation}
where the combinatorial prefactor $\frac{(-i\hbar)^{\loops(\Gamma)}}{\vert\Aut(\Gamma)\vert}$ is included in $\omega_\Gamma$ (by $\loops$ we denote the number of loops of a graph $\Gamma$) and we denote the configuration space $\text{C}_\Gamma(\Sigma_3)$ by $\text{C}_\Gamma$ for simplicity. We note that $\omega_\Gamma$ is a ($\mathcal{V}_{\Sigma_3,x}$-dependent) differential form on $\text{C}_\Gamma\times M$. Again, following \cite{CMoW19}, we can apply Stokes' theorem \eqref{stokes} and we get
\begin{equation}
d_M\varint_{\text{C}_\Gamma}\omega_\Gamma\Big(\hat{R}_jdx^j; \hat{R}_{\bar{j}}dx^{\bar{j}}\Big)=\varint_{\text{C}_\Gamma}(d+d_{\bar{x}})\omega_\Gamma\Big(\hat{R}_jdx^j; \hat{R}_{\bar{j}}dx^{\bar{j}}\Big)-\varint_{\partial\text{C}_\Gamma}\omega_\Gamma\Big(\hat{R}_jdx^j; \hat{R}_{\bar{j}}dx^{\bar{j}}\Big).
\end{equation}
As mentioned in Remark \ref{rmk7.4.3}, the $d$ inside the integral is the total differential on $\text{C}_\Gamma(\Sigma_3)\times M$, and thus we can split it as
\begin{equation}
d=d_x+d_1+d_2,
\end{equation}
where $d_1$ denotes the part of the differential acting on the propagators in $\omega_\Gamma$ and $d_2$ is the part acting on $\mathbb{B}$ and $\mathbb{A}$ fields.
With this setup, which is basically analogous to the one in \cite{CMoW19} except for the presence of the red vertices and of $d_{\bar{x}}$, already extensively discussed, Eq. \eqref{mdqme_thm} is verified by proving three relations:
\begin{itemize}
\item a relation between the application of $d_1$ and of $\Delta_{\mathcal{V}_{\Sigma_3, x}}$ to the quantum state,
\item a relation between the application of $d_2$ and of $\Omega_0$ to the quantum state,
\item a relation between the application of $d_M$ and of the boundary contributions to the quantum state.
\end{itemize}
The proofs of these relations can be carried over from \cite{CMoW19} to the globalized split RW model without any problem. The only difference is in the proof that the contributions in $\partial \text{C}_\Gamma$ consisting of diagrams with two collapsing bulk vertices vanish (which is needed for the third relation). In our case one should again consider three contributions: when two black bulk vertices collapse, when two red bulk vertices collapse, and when a red vertex and a black one collapse. The vanishing of these terms follows from Eqs. \eqref{dcme_1}, \eqref{dcme_2}, \eqref{dcme_3}, \eqref{dcme_4}. The rest of the procedure is identical to \cite{CMoW19}.
\end{proof}
\section{Outlook and future direction}
\label{sec:outlook}
Our globalization construction leads to an interesting extension of some aspects in the program presented in \cite{ChanLeungLi2020} for manifolds with boundary and cutting-gluing techniques. In particular, it would be of interest to understand some relations to the deformation quantization of K\"ahler manifolds in the guise of \cite{ReshetikhinTakhtajan1999}, especially using the constructions of \cite{CMoW20}, and Berezin--Toeplitz quantization as presented in \cite{Schlichenmaier2010} (possibly for the noncompact case).
It also leads to a more general globalization construction of an algebraic index theory formulation by using the BV formalism together with Fedosov's globalization approach as presented in \cite{GLL17}. Moreover, it might also be related to a case of twisted topological field theories, known as Chern--Simons--Rozansky--Witten TFTs, constructed by Kapustin and Saulina in \cite{KapustinSaulina2009}. In particular, they use the BRST formalism to produce interesting observables such as Wilson loops, and thus one might be able to combine it with ideas of \cite{AlekseevBarmazMnev2013,Mo20}. Another direction would be the study of the RW invariants through our construction for hyperK\"ahler manifolds. We expect that this would require studying observables of RW theory in the BV-BFV formulation, but the globalization procedure should tell us something about these 3-manifold invariants. We hope that this might also be compatible with some generalizations of RW invariants in the non-hyperK\"ahler case as discussed in \cite{RS02}.
\begin{appendix}
\section{Topological quantum field theories}
\label{app:TQFT}
This appendix gives a brief introduction to perturbative and functorial constructions of topological (quantum) field theories; in particular, we recall Atiyah's TQFT axioms.
\subsection{Brief introduction to perturbative quantum field theory}
On a spacetime manifold $\Sigma$, consider a space of fields\footnote{The space of fields is usually given by sections of some vector bundle over $\Sigma$.} $F_{\Sigma}$ and an action functional $S_\Sigma$ which is required to be \textit{local}. This means that the action is the integral of a density-valued Lagrangian $\mathscr{L}$, called \textit{Lagrangian density}, depending on the fields and on a finite number of higher derivatives. In particular, $S_\Sigma: F_{\Sigma}\rightarrow \mathbb{C}$, with
\begin{equation}
S_{\Sigma}(\phi)=\varint_{\Sigma}\mathscr{L}(\phi, \partial\phi, \dots),
\end{equation}
where $\phi\in F_{\Sigma}$ is a field. The set of data consisting of $(\Sigma, F_{\Sigma}, S_{\Sigma})$ defines a classical Lagrangian field theory.
Over the years, physicists have developed several approaches to quantum field theory. Roughly, we can split them into perturbative and non-perturbative methods. Here, we focus on the former. Note that by perturbative, we mean semiclassical: in physics jargon, perturbation theory is the idea of expanding in a formal power series in the coupling constant of the action. In the perturbative setting, the protagonist of the story is the \textit{partition function} $Z$, which encodes all the information about the quantum theory it describes. In general, we can express it through a path integral as
\begin{equation}
\label{ov.tft:part_funct_1}
Z_{\Sigma}=\varint_{F_{\Sigma}} e^{\frac{i}{\hbar}S_{\Sigma}(\phi)}\mathscr{D}[\phi],
\end{equation}
where $\hbar$ is the reduced Planck constant.
\begin{rmk}
\label{tft:rmk_measure}
In (\ref{ov.tft:part_funct_1}), $\mathscr{D}$ denotes a formal measure on $F_{\Sigma}$. Depending on the space of fields $F_\Sigma$, this measure is often mathematically ill-defined. Nevertheless, one can define \eqref{ov.tft:part_funct_1} by considering the methods of \emph{perturbative expansion} around critical points of $S_\Sigma$ in a formal power series in $\hbar$ with coefficients given by \emph{Feynman graphs} (see e.g. \cite{FeynmanHibbs1965,P}).
\end{rmk}
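As a toy illustration of the perturbative expansion mentioned in Remark \ref{tft:rmk_measure} (our own example, not part of the text's construction), consider the zero-dimensional ``path integral'' $Z(g)=\int_{\mathbb{R}}e^{-x^2/2-gx^4}\,dx$: here the formal measure becomes an honest Lebesgue measure, and the coefficients of the formal power series in the coupling $g$ are Gaussian moments, i.e. sums over (trivial) Feynman graphs.

```python
import math

g = 0.005  # small coupling constant (value chosen only for illustration)

# The "path integral" computed directly: an ordinary Lebesgue integral,
# here approximated by a Riemann sum over [-10, 10].
dx = 1e-3
Z_num = sum(math.exp(-x * x / 2 - g * x ** 4) * dx
            for x in (i * dx - 10.0 for i in range(20000)))

def double_factorial(n):
    r = 1
    while n > 1:
        r *= n
        n -= 2
    return r

# Formal power series in g: expanding e^{-g x^4} and using the Gaussian
# moments  int x^{4k} e^{-x^2/2} dx = (4k-1)!! sqrt(2 pi)  gives
#   Z(g) ~ sqrt(2 pi) * sum_k (-g)^k (4k-1)!! / k!   (an asymptotic series).
Z_pert = math.sqrt(2 * math.pi) * sum(
    (-g) ** k * double_factorial(4 * k - 1) / math.factorial(k)
    for k in range(4))

assert abs(Z_num - Z_pert) < 1e-3
```

The truncated series agrees with the integral only up to an error of the size of the first omitted term; this is the usual asymptotic (rather than convergent) nature of perturbation theory.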
Let us make the above discussion more precise. Consider $\Sigma$ to be a manifold with boundary $\partial \Sigma$ and $B_{\partial \Sigma}$ to be the space of boundary values of the fields on $\Sigma$. Restriction of fields to the boundary gives a map $F_{\Sigma}\xrightarrow[]{\pi}B_{\partial \Sigma}$. The partition function is thus a complex-valued function on $B_{\partial \Sigma}$ which can be written as
\begin{equation}
\label{ov.tft:part_funct_2}
Z_{\Sigma}(\phi_{\partial\Sigma}; \hbar)=\varint_{\{ \phi\in F_{\Sigma}\mid\ \phi\vert_{\partial \Sigma}:=\phi_{\partial \Sigma}\}} e^{\frac{i}{\hbar}S_{\Sigma}(\phi)}\mathscr{D}[\phi],
\end{equation}
where $\phi_{\partial \Sigma}$ is a point in $B_{\partial \Sigma}$.
The manifold $\Sigma$ may be complicated and, as a result, the computation of $Z_{\Sigma}$ can become difficult. Therefore, it would be desirable to cut $\Sigma$ into smaller and, hopefully, simpler pieces, compute the partition function on each piece and then glue the results together to get the overall state. Suppose $\Sigma$ is closed and cut it into two manifolds $\Sigma_1$ and $\Sigma_2$ along a common boundary, denoted by $\partial \Sigma$ with a slight abuse of notation, i.e. $\Sigma= \Sigma_1\sqcup_{\partial \Sigma}\Sigma_2$. If we paste them together, we expect the following condition to hold
\begin{equation}
Z_{\Sigma}=\varint_{\phi_{\partial\Sigma}\in B_{\partial \Sigma}}Z_{\Sigma_1}(\phi_{\partial \Sigma})Z_{\Sigma_2}(\phi_{\partial \Sigma})\mathscr{D}[\phi_{\partial \Sigma}].
\end{equation}
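A minimal finite-dimensional check of this gluing formula (our own toy model, not part of the text) is a one-dimensional lattice theory on a circle: cutting the circle into two arcs, the partition function of each arc with fixed boundary values is a power of the transfer matrix, and summing over the shared boundary values reproduces the partition function of the whole circle.

```python
import math

def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(A, n):
    R = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):
        R = matmul(R, A)
    return R

# Transfer matrix of a 1d Ising-type model (coupling value chosen arbitrarily).
bJ = 0.3
T = [[math.exp(bJ), math.exp(-bJ)],
     [math.exp(-bJ), math.exp(bJ)]]

N1, N2 = 3, 5  # lengths of the two arcs obtained by cutting the circle
Z1, Z2 = matpow(T, N1), matpow(T, N2)

# Glue: sum over the boundary values (a, b) shared by the two arcs.
Z_glued = sum(Z1[a][b] * Z2[b][a] for a in range(2) for b in range(2))
# Whole circle: trace of the full transfer-matrix power.
Z_whole = sum(matpow(T, N1 + N2)[a][a] for a in range(2))

assert abs(Z_glued - Z_whole) < 1e-9
```

Here the sum over boundary states plays the role of the integral over $B_{\partial\Sigma}$ in the formula above.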
\subsection{Brief introduction to functorial quantum field theory}
\label{func_qft}
The functorial approach to QFT was developed by Segal in the context of conformal field theory \cite{Se88} and by Atiyah for TQFT \cite{At88}. However, the description is general and allows us to describe any QFT.
According to Atiyah's axioms, an $n$-dimensional topological quantum field theory consists of the following set of data:
\begin{enumerate}
\item A Hilbert space $\mathcal{H}(\Sigma)$, called the \textit{space of states}, associated to a closed oriented\footnote{The orientation endows the manifolds with symbols $\{in, out\}$ which denote \textit{incoming} or \textit{outgoing} orientation.} $(n-1)$-manifold $\Sigma$,
\item A linear map of vector spaces $Z_M: \mathcal{H}_{\Sigma_\mathrm{in}}\rightarrow \mathcal{H}_{\Sigma_\mathrm{out}}$, called \textit{partition function}, associated to an oriented $n$-cobordism\footnote{See Example \ref{exm:cob} for a definition.} $M$ from $\Sigma_\mathrm{in}$ to $\Sigma_\mathrm{out}$ (i.e. the boundary of $M$ is assumed to be given as $\partial M=\Sigma_\mathrm{in}\sqcup \Sigma_\mathrm{out}$).
\item Orientation-preserving diffeomorphisms $\phi: \Sigma_1\rightarrow \Sigma_2$ which act on the spaces of states through unitary maps $\rho(\phi): \mathcal{H}_{\Sigma_1}\rightarrow \mathcal{H}_{\Sigma_2}$, with $\rho$ a representation.
\item Orientation-reversing identity diffeomorphisms $s_\Sigma: \Sigma\rightarrow \Bar{\Sigma}$, where we denote by $\Bar{\Sigma}$, the manifold with opposite orientation. These diffeomorphisms act by $\mathbb{C}$-anti-linear maps $\sigma_{\Sigma}\coloneqq\rho(s_\Sigma): \mathcal{H}_{\Sigma}\rightarrow \mathcal{H}_{\Bar{\Sigma}}$.
\end{enumerate}
This set of data is required to satisfy the following axioms:
\begin{enumerate}
\item[(i)](Multiplicativity) For two closed oriented $(n-1)$-manifolds $\Sigma$ and $\Sigma'$, the space of states is multiplicative, i.e.
\begin{equation}
\mathcal{H}_{\Sigma \sqcup \Sigma'}=\mathcal{H}_\Sigma \otimes \mathcal{H}_{\Sigma'}.
\end{equation}
For two $n$-cobordisms $M: \Sigma_\mathrm{in}\rightarrow\Sigma_\mathrm{out}$ and $M': \Sigma'_\mathrm{in}\rightarrow\Sigma'_\mathrm{out}$, the partition function is multiplicative
\begin{equation}
Z_{M\sqcup M'}=Z_M\otimes Z_{M'}:\quad \mathcal{H}_{\Sigma_\mathrm{in}}\otimes \mathcal{H}_{\Sigma'_\mathrm{in}}\rightarrow \mathcal{H}_{\Sigma_\mathrm{out}}\otimes \mathcal{H}_{\Sigma'_\mathrm{out}}.
\end{equation}
\item[(ii)](Gluing) Let $M_1: \Sigma_1\rightarrow \Sigma_2$ and $M_2: \Bar{\Sigma}_2\rightarrow \Sigma_3$ be two $n$-cobordisms; the glued cobordism is constructed by gluing along the common $\Sigma_2$-component as $M_1 \cup_{\Sigma_2}M_2: \Sigma_1 \rightarrow \Sigma_3$. The associated partition function is then obtained by composing the partition functions for $M_1$ and $M_2$ as linear maps:
\begin{equation}
Z_{M_1 \cup_{\Sigma_2}M_2}=Z_{M_2}\circ Z_{M_1}:\quad \mathcal{H}_{\Sigma_1}\rightarrow \mathcal{H}_{\Sigma_3}.
\end{equation}
\item[(iii)](Involutivity) $\mathcal{H}_{\Bar{\Sigma}}=\mathcal{H}_{\Sigma}^\vee$, where $\mathcal{H}_{\Sigma}^\vee$ is the dual vector space.
\item[(iv)] $\mathcal{H}_{\emptyset}=\mathbb{C}$ and $Z_{\Sigma \times [0,1]}=\Id:\mathcal{H}_{\Sigma}\rightarrow\mathcal{H}_{\Sigma}$.
\item[(v)] For $\phi: M\rightarrow M'$ a diffeomorphism, the following diagram commutes:
\begin{center}
\begin{tikzcd}[column sep=large, row sep=large]
\mathcal{H}_{\Sigma_\mathrm{in}}\arrow[r, "Z_M"]\arrow[d, "\rho(\phi\mid_{\Sigma_\mathrm{in}})"']& \mathcal{H}_{\Sigma_\mathrm{out}}\arrow[d, "\rho(\phi\mid_{\Sigma_\mathrm{out}})"] \\
\mathcal{H}_{\Sigma'_\mathrm{in}}\arrow[ r,"Z_{M'}"']& \mathcal{H}_{\Sigma'_\mathrm{out}}
\end{tikzcd}
\end{center}
It follows that $Z_M$ is invariant under diffeomorphisms of $M$ relative to its boundary components.
\item[(vi)](Symmetry) The natural diffeomorphism $\Sigma \sqcup \Sigma'\rightarrow \Sigma' \sqcup \Sigma$ is sent by $\rho$ to the natural isomorphism $\mathcal{H}_{\Sigma}\otimes \mathcal{H}_{\Sigma'}\rightarrow\mathcal{H}_{\Sigma'}\otimes \mathcal{H}_{\Sigma}$.
\item[(vii)] The partition function for the cylinder $\Sigma \times [0,1]$, viewed as a cobordism $\Sigma \sqcup \Bar{\Sigma}\rightarrow\emptyset$, composed with the anti-linear map $(\sigma_\Sigma)^{-1}:\mathcal{H}_{\Sigma}\rightarrow \mathcal{H}_\Sigma$ yields the Hermitian inner product $\braket{-,-}: \mathcal{H}_\Sigma \times \mathcal{H}_\Sigma \rightarrow\mathbb{C}$.
\end{enumerate}
Let $M$ be a closed $n$-manifold, which can be regarded as a cobordism $\emptyset \rightarrow \emptyset$. The associated partition function $Z_M\in \mathbb{C}$ is an invariant under orientation-preserving diffeomorphisms of $M$.
In general, for a mapping torus
$\frac{\Sigma \times [0,1]}{\Sigma \times \{0\}\stackrel{\phi}{\sim} \Sigma \times \{1\}}$ with $\phi:\Sigma\rightarrow\Sigma$ a gluing diffeomorphism, the axioms above imply that $Z=\mathrm{Tr}_{\mathcal{H}_\Sigma}\rho(\phi)$. In particular, for the product manifold $\Sigma \times S^1$ formed by identifying the opposite ends of the cylinder:
\begin{equation}
\label{ov:tft.partfunc_withS1}
Z_{\Sigma \times S^1}=\mathrm{Tr}_{\mathcal{H}_\Sigma}(\Id)=\dim \mathcal{H}_{\Sigma}\in \mathbb{Z}_{\geq 0}.
\end{equation}
This implies that the space of states is finite-dimensional.
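As a toy illustration (not part of the axioms themselves), the trace formula for mapping tori can be realized in a finite-dimensional model: take $\mathcal{H}_\Sigma=\mathbb{C}^4$ and let $\rho(\phi)$ act by a permutation matrix, so that the mapping-torus partition function counts the basis vectors fixed by $\phi$, while $\phi=\mathrm{id}$ recovers $\dim\mathcal{H}_\Sigma$. A minimal numerical sketch (all choices here are hypothetical):

```python
import numpy as np

# Toy model: a 4-dimensional space of states H_Sigma, with the mapping
# class of phi acting by a permutation matrix rho(phi).
dim_H = 4
rho_id = np.eye(dim_H)

# rho(phi) for a cyclic permutation of a chosen basis of H_Sigma
perm = [1, 2, 3, 0]
rho_phi = np.zeros((dim_H, dim_H))
for i, j in enumerate(perm):
    rho_phi[j, i] = 1.0

# Z(Sigma x S^1) = Tr(Id) = dim H_Sigma
Z_product = np.trace(rho_id)

# Z(mapping torus of phi) = Tr(rho(phi)); for a permutation matrix this
# counts the basis vectors fixed by phi (here: none).
Z_mapping_torus = np.trace(rho_phi)
```

The integrality of $Z_{\Sigma\times S^1}$ is manifest here, in line with Eq. \eqref{ov:tft.partfunc_withS1}.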
\subsubsection{Atiyah's axioms for TQFTs}
Atiyah's axioms can be reformulated in categorical language. We start with some prerequisites before arriving at the definition of a TQFT.
Let us consider a symmetric monoidal category $\mathbf{C}$: it is a category equipped with a bifunctor $\otimes: \mathbf{C}\times \mathbf{C}\rightarrow \mathbf{C}$, called the \textit{monoidal product}, which, roughly speaking, allows one to ``multiply'' objects. This product is well defined because it is associative up to natural isomorphisms (in jargon, it satisfies the \textit{pentagon equations} \cite{Ma71}). Moreover, a monoidal category is symmetric when for all objects $A, B\in \mathbf{C}$ there are natural isomorphisms
\begin{equation}
\beta_{A,B}: A\otimes B\rightarrow B\otimes A
\end{equation}
compatible with the associativity of the monoidal structure (they satisfy the \textit{hexagon equations} \cite{Ma71}).
\begin{exm}
\label{exm:cob}
We consider two examples, which we need for later:
\begin{enumerate}
\item The category $\mathbf{Vect}_{\mathbb{K}}$ whose objects are $\mathbb{K}$-vector spaces for some field $\mathbb{K}$ and morphisms are $\mathbb{K}$-linear maps. It is monoidal with the usual tensor product as monoidal product ($\otimes :=\otimes_{\mathbb{K}}$) and with unit $\boldsymbol{1}:=\mathbb{K}$. Moreover, one can show that it is symmetric.
\item The category of oriented \textit{cobordisms}, $\mathbf{Cob}^{\text{or}}_n$. The objects are oriented closed $(n-1)$-dimensional manifolds and the morphisms are diffeomorphism classes of bordisms. In more down-to-earth language, this means that a morphism is given by the bulk of an oriented compact $n$-dimensional manifold with boundary, whose boundary components are the objects. We can compose a morphism with another morphism simply by gluing along the common boundaries. It has a monoidal structure where the monoidal product is given by the disjoint union and the unit object is the empty set $\emptyset$ viewed as an $(n-1)$-dimensional manifold. The objects are endowed with orientations labeled by the symbols $\{\mathrm{in}, \mathrm{out}\}$.
\end{enumerate}
\end{exm}
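In $\mathbf{Vect}_{\mathbb{K}}$ the symmetry isomorphism $\beta_{A,B}$ is realized concretely by the commutation (swap) matrix acting on Kronecker products, and its naturality $\beta_{A,B}\circ(f\otimes g)=(g\otimes f)\circ\beta_{A,B}$ can be checked numerically. A small sketch (the dimensions and maps chosen here are arbitrary):

```python
import numpy as np

def swap_matrix(m, n):
    """Matrix of the braiding beta_{A,B}: A (x) B -> B (x) A for A = K^m,
    B = K^n, acting on Kronecker products by K (a (x) b) = b (x) a."""
    K = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # kron(a, b)[i*n + j] = a_i * b_j must land at kron(b, a)[j*m + i]
            K[j * m + i, i * n + j] = 1.0
    return K

rng = np.random.default_rng(0)
m, n = 2, 3
a, b = rng.normal(size=m), rng.normal(size=n)
K = swap_matrix(m, n)

# beta_{A,B}(a (x) b) = b (x) a
lhs = K @ np.kron(a, b)
rhs = np.kron(b, a)

# Naturality: for linear maps f: A -> A and g: B -> B,
# beta_{A,B} o (f (x) g) = (g (x) f) o beta_{A,B}
f, g = rng.normal(size=(m, m)), rng.normal(size=(n, n))
nat_lhs = K @ np.kron(f, g)
nat_rhs = np.kron(g, f) @ K
```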
Atiyah's axioms can then be summarized concisely as follows:
\begin{defn}[Topological field theory]
Let $(\mathbf{C},\otimes)$ be a symmetric monoidal category. An $n$-dimensional oriented closed topological field theory (TFT) is a symmetric monoidal functor
\begin{equation}
Z: \mathbf{Cob}^{\text{or}}_n\rightarrow \mathbf{C}.
\end{equation}
\end{defn}
\begin{defn}[Topological quantum field theory]
\label{ov:tqft}
An $n$-dimensional oriented topological quantum field theory (TQFT) is a symmetric monoidal functor
\begin{equation}
Z: \mathbf{Cob}^{\text{or}}_n\rightarrow \mathbf{Vect}_{\mathbb{C}}.
\end{equation}
\end{defn}
\begin{rmk}
Note that the target category also contains infinite-dimensional vector spaces. However, an analogue of Eq. \eqref{ov:tft.partfunc_withS1} implies that the state spaces are finite-dimensional.
\end{rmk}
\begin{rmk}
As seen in Definition \ref{ov:tqft}, the category of smooth oriented cobordisms is usually used to describe a TQFT. However, cobordisms may possess other geometric structures, such as a conformal structure, a spin structure, a framing, boundaries, etc. Consequently, the associated field theory will be a conformal QFT, a spin or framed TQFT, etc. For example, for Yang--Mills theories and sigma models, the source category is the category of smooth Riemannian manifolds with a collar at the boundary.
\end{rmk}
\begin{exm}
As a first example, let us consider a cobordism represented by a pair of pants with genus 1 (see Fig. \ref{tqft:fig:pants}). The TQFT $Z$ assigns to each boundary component a Hilbert space, i.e. $Z(\partial_k\Sigma)=\mathcal{H}_k$ for $k=1,2,3$. Since $Z$ is a symmetric monoidal functor, we have $Z(\partial_1\Sigma\sqcup \partial_2\Sigma\sqcup \partial_3\Sigma)=\mathcal{H}^{\vee}_1\otimes \mathcal{H}^{\vee}_2\otimes\mathcal{H}_3$. As said before, each cobordism comes with a certain orientation: $\partial_1\Sigma$ and $\partial_2\Sigma$ are incoming boundaries (denoted in the figure by incoming arrows), while $\partial_3\Sigma$ is an outgoing boundary (denoted in the figure by an outgoing arrow). Associated to $\partial_1\Sigma$ and $\partial_2\Sigma$, we have an incoming Hilbert space $\mathcal{H}_{\text{in}}\coloneqq \mathcal{H}^{\vee}_1\otimes \mathcal{H}^{\vee}_2\cong \mathcal{H}_1\otimes \mathcal{H}_2$, and associated to $\partial_3\Sigma$ an outgoing Hilbert space $\mathcal{H}_{\text{out}}\coloneqq\mathcal{H}_3$. The state $\psi$ corresponding to this cobordism and the given TQFT is the value of the morphism represented by the genus 1 pair of pants above (i.e. the bounding manifold $M$) under $Z$.
\begin{figure}[hbt!]
\centering
\begin{tikzpicture}[rotate=270,transform shape,tqft, view from=incoming,
every incoming boundary component/.style={fill=gray}
]
\pic[draw,
tqft/pair of pants,
every lower boundary component/.style={draw},
every incoming lower boundary component/.style={solid},
every outgoing lower boundary component/.style={dashed},
genus=1,
shade, shading angle=270,
name=A
];
\node[label={below:$\uparrow$}] at (A-outgoing boundary 1) {};
\node[label={below:$\uparrow$}] at (A-outgoing boundary 2) {};
\node[label={above:$\uparrow$}] at (A-incoming boundary 1) {};
\node[rotate=90] at (A-outgoing boundary 1) {\hspace{-2cm}$\partial_1\Sigma$};
\node[rotate=90] at (A-outgoing boundary 2) {\hspace{-2cm}$\partial_2\Sigma$};
\node[rotate=90] at (A-incoming boundary 1) {\hspace{2cm}$\partial_3\Sigma$};
\node[rotate=90] at (-1.2,-0.9) {$M$};
\end{tikzpicture}
\caption{Cobordism $M$ represented by pair of pants of genus 1 with boundary components $\partial_1\Sigma$, $\partial_2\Sigma$, $\partial_3\Sigma$.}
\label{tqft:fig:pants}
\end{figure}
\end{exm}
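To make the pair of pants concrete, recall that a 2d TQFT amounts to a commutative Frobenius algebra $A$, with the pair of pants giving the multiplication $\mu: A\otimes A\rightarrow A$ and the disk (cap) the unit $u:\mathbb{C}\rightarrow A$; gluing a cap into one incoming leg must produce the cylinder, i.e. the identity on $A$. A minimal sketch with the (hypothetical) choice $A=\mathbb{C}\times\mathbb{C}$:

```python
import numpy as np

# Toy 2d TQFT built from the commutative Frobenius algebra A = C x C
# (pointwise product, unit (1, 1)).
dim_A = 2

# Pair of pants (two in, one out) = multiplication mu: A (x) A -> A,
# as a (dim_A) x (dim_A^2) matrix in the standard basis e_0, e_1.
mu = np.zeros((dim_A, dim_A * dim_A))
for i in range(dim_A):
    mu[i, i * dim_A + i] = 1.0  # e_i * e_j = delta_ij e_i

# Cap (disk viewed as a cobordism from the empty set) = unit u: C -> A.
u = np.ones((dim_A, 1))

# Gluing the cap into the first leg of the pair of pants gives the
# cylinder, whose partition function is the identity on A:
glued = mu @ np.kron(u, np.eye(dim_A))
```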
\begin{exm}
As already mentioned in Section \ref{func_qft}, a closed manifold $\Sigma$ can be seen as a cobordism $\emptyset \rightarrow \emptyset$. We can cut it in two disjoint manifolds $\Sigma_1$ and $\Sigma_2$ along a common boundary $\partial\Sigma$, i.e. $\Sigma= \Sigma_1\sqcup_{\partial \Sigma}\Sigma_2$. Then we can assign an opposite orientation to $\partial_1\Sigma_1$ with respect to the orientation of $\partial_1\Sigma_2$. The same can be done to $\partial_2\Sigma_1$ with respect to the orientation of $\partial_2\Sigma_2$. The two manifolds with boundary $\Sigma_1$ and $\Sigma_2$ can be glued back together to recover the partition function of the closed manifold $\Sigma$, see Fig. \ref{tft:fig:gluing}.
\begin{figure}[hbt!]
\centering
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}[rotate=-90,transform shape,tqft, view from=incoming,
]
\pic[draw,
tqft/reverse pair of pants,
every lower boundary component/.style={draw},
every incoming boundary component/.style={solid, fill=white, draw=white},
every outgoing boundary component/.style={solid, draw=white},
every outgoing lower boundary component/.style={dashed},
genus=1,
shade, shading angle=90,
name=A
];
\pic[draw,
tqft/cup,
every lower boundary component/.style={draw},
every incoming lower boundary component/.style={solid},
every outgoing lower boundary component/.style={dashed},
fill=gray,
anchor=incoming boundary 1,name=B, at=(A-outgoing boundary 1)
];
\node[rotate=90] at (A-incoming boundary 1) {\hspace{1.3cm}$\partial_1\Sigma_1$};
\node[rotate=90] at (A-incoming boundary 2) {\hspace{1.3cm}$\partial_2\Sigma_1$};
\node[rotate=90] at (-0.6,-0.5) {$\Sigma_1$};
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\centering
\begin{tikzpicture}[rotate=270,transform shape,tqft, view from=incoming,
every incoming boundary component/.style={fill=gray},
every lower boundary component/.style={draw=gray}
]
\pic[draw,
tqft/pair of pants,
every incoming lower boundary component/.style={solid,draw=gray},
every outgoing lower boundary component/.style={dashed},
genus=1,
shade, shading angle=270,
name=A
];
\pic[draw,
tqft/cap,
every lower boundary component/.style={draw},
fill=gray, every outgoing boundary component/.style={draw=white},
anchor=outgoing boundary 1,name=B, at=(A-incoming boundary 1)
];
\node[rotate=90] at (A-outgoing boundary 1) {\hspace{-1.3cm}$\partial_1\Sigma_2$};
\node[rotate=90] at (A-outgoing boundary 2) {\hspace{-1.3cm}$\partial_2\Sigma_2$};
\node[rotate=90] at (-1.2,-0.9) {$\Sigma_2$};
\end{tikzpicture}
\end{subfigure}
\caption{Gluing of two manifolds $\Sigma_1$ and $\Sigma_2$ along a common boundary $\partial \Sigma$.}
\label{tft:fig:gluing}
\end{figure}
\end{exm}
\begin{rmk}
It is important to highlight that the functorial approach to TQFT is not based on any perturbative framework; therefore, its nature is intrinsically non-perturbative.
\end{rmk}
Subsequently, in \cite{BD95}, Baez and Dolan suggested enhancing Atiyah's notion of TQFT to a functor from the $(\infty, n)$-extension of the cobordism category. Their idea is to allow gluing as well as cutting with higher-codimension data. Moreover, they conjectured these TQFTs to be completely classifiable: this conjecture is known as the \textit{Cobordism hypothesis}. In \cite{Lu09}, Lurie provided a complete classification result for fully extended TQFTs formulated in the language of $(\infty,n)$-categories, a generalization of the notion of a category.
\section{Elements of formal geometry}
\label{app:formal_geometry}
In this appendix, we want to explain some of the important notions of formal geometry developed in \cite{GK71,Bo11} which are used for the globalization procedure.
\subsection{Formal power series on vector spaces}
Let $V$ be a finite dimensional vector space. The polynomial algebra on $V$ is given by
\begin{equation}
\Sym^\bullet (V^\vee) = \bigoplus_{k=0}^\infty \Sym^k(V^\vee).
\end{equation}
If we choose $\{e_1, \dots, e_n\}$ to be a basis of $V$, with dual basis $\{y_1, \dots , y_n\}$, then elements $f \in \Sym^\bullet (V^\vee)$ are given by
\begin{equation}
f(y)=\sum_{i_1,\dots,i_n\geq 0} f_{i_1,\dots, i_n}y_1^{i_1}\cdots y_n^{i_n}=\sum_I f_Iy^{I},
\end{equation}
with only finitely many non-vanishing $f_I$. We have denoted by $I$ a multi-index and $y^I=y_1^{i_1}\cdots y_n^{i_n}$, $y^{\emptyset} \coloneqq 1$.
This algebra can be completed to the algebra of formal power series $\reallywidehat{\Sym}^\bullet(V^\vee)$, in which infinitely many coefficients $f_I$ may be nonzero. Note that both $\Sym^\bullet(V^\vee)$ and $\reallywidehat{\Sym}^\bullet (V^\vee)$ are commutative algebras generated by $V^\vee$, with the multiplication of polynomials or formal power series, respectively. One can specify derivations of these algebras by their values on these generators; therefore, the map
\begin{alignat}{1}
\label{iso}
V \otimes \Sym^\bullet (V^\vee) &\rightarrow \Der(\Sym^\bullet (V^\vee))\\
v \otimes f &\mapsto \Big(V^\vee \ni \alpha \mapsto \alpha(v) \cdot f\Big)
\end{alignat}
is an isomorphism with inverse
\begin{alignat}{1}
\Der(\Sym^\bullet (V^\vee)) & \rightarrow V \otimes \Sym^\bullet (V^\vee) \\
D & \mapsto \sum_{i=1}^n e_i \otimes D(y^i).
\end{alignat}
In coordinates this corresponds to sending $e_i \mapsto \frac{\partial}{\partial y^i}$.
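The statement that a derivation of $\Sym^\bullet(V^\vee)$ is specified by its values on the generators can be checked symbolically: fixing $D(y^1)$ and $D(y^2)$ and extending by the Leibniz rule determines $D$ on all polynomials. A small sketch (the chosen values on the generators are arbitrary):

```python
import sympy as sp

# V = R^2 with dual coordinates y1, y2; a derivation D of Sym(V*) is fixed
# by its values on the generators: D = D(y1) d/dy1 + D(y2) d/dy2.
y1, y2 = sp.symbols('y1 y2')

D_y1, D_y2 = y2, y1 * y2  # arbitrarily chosen values on the generators

def D(f):
    """The unique derivation with D(y1) = D_y1 and D(y2) = D_y2."""
    return D_y1 * sp.diff(f, y1) + D_y2 * sp.diff(f, y2)

f, g = y1**2 + y2, y1 * y2

# Leibniz rule: D(fg) = D(f) g + f D(g)
leibniz_gap = sp.simplify(D(f * g) - (D(f) * g + f * D(g)))
```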
\subsection{Formal exponential maps}
\label{subsec:form_exp_map}
Let $M$ be a manifold and let $\varphi: U \rightarrow M$ be a smooth map, with $U \subset TM$ an open neighbourhood of the zero section. For $x \in M$, $y \in T_xM \cap U$, we write $\varphi(x,y)=\varphi_x(y)$.
\begin{defn}[Generalized exponential map]
We call $\varphi$ a \textit{generalized exponential map} if for all $x \in M$ we have that
\begin{enumerate}
\item $\varphi_x(0)=x$,
\item $d\varphi_x(0)=\Id_{T_xM}$.
\end{enumerate}
\end{defn}
In local coordinates
\begin{equation}
\varphi_x^i(y)=x^i+y^i+\frac{1}{2}\varphi_{x,jk}^iy^jy^k+\frac{1}{3!} \varphi_{x,jkl}^iy^jy^ky^l+\dots,
\end{equation}
where $x^i$ and $y^i$ are, respectively, the base and the fiber coordinates. Two generalized exponential maps are identified if their corresponding jets agree at all orders.
\begin{defn}[Formal exponential map]
A \textit{formal exponential map} is an equivalence class of generalized exponential maps. A formal exponential map is completely specified by the sequence of functions $\Big (\varphi_{x,i_1,\dots,i_k}^i \Big)_{k=0}^\infty$.
\end{defn}
From now on, we will abuse notation and denote equivalence classes and their representatives by $\varphi$. One can produce a section $\sigma \in \Gamma(\reallywidehat{\Sym}^\bullet(T^\vee M))$ from a formal exponential map $\varphi$ and a function $f \in \mathcal{C}^\infty(M)$ via $\sigma_x=\mathrm{T}\varphi_x^*f$, with $\mathrm{T}$ the Taylor expansion in the fiber coordinates around $y=0$ and the pullback defined by any representative of $\varphi$. We will denote this section by $\mathrm{T}\varphi^*f$; note that it is independent of the choice of representative since it only depends on the jets.
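As a concrete (hypothetical) example, take $M=\mathbb{R}$ with $\varphi_x(y)=x+y+xy^2$, which satisfies the two defining conditions $\varphi_x(0)=x$ and $d\varphi_x(0)=\Id$, and compute the first few terms of $\mathrm{T}\varphi^*f$ for $f=e^x$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# A hypothetical generalized exponential map on M = R:
# phi_x(y) = x + y + x*y**2 satisfies phi_x(0) = x and d(phi_x)(0) = 1.
phi = x + y + x * y**2

f = sp.exp(x)  # a function on M

# T phi^* f: Taylor expansion of f(phi_x(y)) in the fiber coordinate y
sigma = sp.series(f.subs(x, phi), y, 0, 4).removeO()
```

The expansion begins $e^x\big(1 + y + (x+\tfrac{1}{2})y^2 + \dots\big)$, so the zeroth-order term recovers $f$ itself.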
\subsection{Grothendieck connection}
\label{grotta}
\begin{defn}[Grothendieck connection]
On $\reallywidehat{\Sym}^\bullet(T^\vee M)$ we can define a flat connection $\Gr$, satisfying the property that
\begin{equation}
\Gr \sigma=0 \iff \sigma=\mathrm{T}\varphi^*f,
\end{equation}
for some $f\in \mathcal{C}^\infty(M)$. Namely, $\Gr=d+R$ with $R\in \Gamma\Big(T^\vee M \otimes TM \otimes \reallywidehat{\Sym}^\bullet(T^\vee M)\Big)$, a 1-form with values in derivations\footnote{We can use the isomorphism \eqref{iso} to identify derivations of $\reallywidehat{\Sym}^\bullet(T^\vee M)$ with $\Gamma\Big(TM \otimes \reallywidehat{\Sym}^\bullet(T^\vee M)\Big)$.} of $\reallywidehat{\Sym}^\bullet(T^\vee M)$. In local coordinates, $R$ is defined as $R_idx^i$ and
\begin{equation}
R_i(x;y)\coloneqq-\bigg[\bigg(\frac{\partial \varphi}{\partial y}\bigg)^{-1}\bigg]^k_j\frac{\partial \varphi^j}{\partial x^i}\frac{\partial}{\partial y^k}= Y_i^k(x;y)\frac{\partial}{\partial y^k}.
\end{equation}
Hence, we have
\begin{equation}
R(x;y)=R_i(x;y)dx^i=Y_i^k \frac{\partial}{\partial y^k}dx^i.
\end{equation}
The connection $\Gr$ is called the \textit{Grothendieck connection}\footnote{In the setting of field theory we have called this the \emph{classical} Grothendieck connection in order to distinguish it from its quantum counterpart.}.
\end{defn}
For $\sigma \in \Gamma(\reallywidehat{\Sym}^\bullet(T^\vee M))$, $R(\sigma)$ is expressed via the Taylor expansion (in the $y$ coordinates) of
\begin{equation}
-d_y\sigma \circ (d_y\varphi)^{-1} \circ d_x\varphi: \Gamma(TM) \rightarrow \Gamma(\reallywidehat{\Sym}^\bullet(T^\vee M)),
\end{equation}
and therefore, $R$ does not depend on the coordinate choice. For a vector field $\xi=\xi^i \frac{\partial}{\partial x^i}$, we have
\begin{equation}
\Gr^\xi=\xi + \hat{\xi},
\end{equation}
with
\begin{equation}
\hat{\xi}(x;y)=\iota_\xi R(x;y)=\xi^i(x)Y_i^k(x;y) \frac{\partial}{\partial y^k}.
\end{equation}
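One can verify symbolically, in a one-dimensional (hypothetical) example, that $\Gr$ indeed annihilates sections of the form $\mathrm{T}\varphi^*f$: for $\sigma=f(\varphi_x(y))$ one has $\partial_x\sigma+Y\partial_y\sigma=0$ identically, even before Taylor expanding. A sketch:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical generalized exponential map on M = R:
phi = x + y + x * y**2

# Y(x;y) = -[(d phi/d y)^(-1)] (d phi/d x), so that R = Y (d/dy) dx
Y = -sp.diff(phi, x) / sp.diff(phi, y)

# A section in the image of T phi^*, for the concrete test function f = sin:
sigma = sp.sin(phi)

# Grothendieck connection applied to sigma: (d_x + iota_{d/dx} R) sigma
Gr_sigma = sp.simplify(sp.diff(sigma, x) + Y * sp.diff(sigma, y))
```

The chain rule makes the cancellation manifest: both terms are proportional to $f'(\varphi)\,\partial_x\varphi$.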
\begin{rmk}
The classical Grothendieck connection is flat (i.e. $\Gr^2=0$). Moreover, the flatness condition translates into
\begin{equation}
\label{MC_R}
d_xR+\frac{1}{2}[R,R]=0,
\end{equation}
which is a \textit{Maurer--Cartan} (MC) equation for $R$.
\end{rmk}
\begin{rmk}
It can be proved that its cohomology is concentrated in degree 0 and is given by
\begin{equation}
H^0_{\Gr}\Big(\Gamma(\reallywidehat{\Sym}^\bullet(T^\vee M))\Big)=\mathrm{T}\varphi^* \mathcal{C}^\infty(M) \cong \mathcal{C}^\infty(M).
\end{equation}
\end{rmk}
\subsection{Formal vertical tensor fields}
Let $E \rightarrow M$ be any \textit{tensorial bundle}\footnote{A tensorial bundle is any bundle which is a tensor product or antisymmetric or symmetric product of the tangent or cotangent bundle, or a direct sum thereof.}, for example $E=\bigwedge^k TM$. Its sections are called \textit{tensor fields of type $E$}.
\begin{defn}[Formal vertical bundle]
The associated \textit{formal vertical bundle} to $E$ is then $\hat{E} \coloneqq E \otimes \reallywidehat{\Sym}^\bullet(T^\vee M)$. Its sections are called \textit{formal vertical tensors of type $E$}.
\end{defn}
\begin{rmk}
These bundles can be thought of as tensors of the same type on $TM$ where the dependence on fiber directions is formal.
\end{rmk}
The formal exponential map defines an injective map
\begin{equation}
\mathrm{T}\varphi^*:E \rightarrow \hat{E}
\end{equation}
via the Taylor expansion of a tensor field pulled back\footnote{Note that $\varphi$ is a local diffeomorphism and hence we can define the pullback of contravariant tensors as the pushforward of the inverse.} to $U$ by $\varphi$.
Furthermore, we can let $R$ act by formal derivatives and therefore, we get a Grothendieck connection $\Gr=d+R$ on any formal vertical tensor bundle. Similarly, as before, we have:
\begin{itemize}
\item $\Gr$ is flat;
\item flat sections of $\Gr$ are precisely the ones in the image of $\mathrm{T}\varphi^*$;
\item the cohomology of $\Gr$ is concentrated in degree 0 and given by the flat sections, i.e. $\hat{E}$-valued 0-forms.
\end{itemize}
\subsection{Changing the formal exponential map}
\label{app5}
We will denote by $\varphi$ a family of formal exponential maps depending on a parameter $t$ belonging to an open interval $I$. One can then associate to this family a formal exponential map $\psi$ for the manifold $M \times I$ by
\begin{equation}
\psi(x,t,y,\tau) \coloneqq (\varphi_x(y),t+\tau),
\end{equation}
with $\tau$ the tangent variable to $t$. The corresponding connection $\tilde{R}$ is defined as follows. Let $\tilde{\sigma}$ be a section of $\reallywidehat{\Sym}^\bullet\Big(T^\vee(M\times I)\Big)$; by definition, we have:
\begin{equation}
\tilde{R}(\tilde{\sigma})=-(d_y\tilde{\sigma},d_\tau \tilde{\sigma}) \circ \begin{pmatrix}
(d_y\varphi)^{-1} & 0 \\
0 & 1\end{pmatrix} \circ \begin{pmatrix}
d_x\varphi & \Dot{\varphi} \\
0 & 1
\end{pmatrix}.
\end{equation}
Hence, $\tilde{R}=R+C\,dt+T$, with $R$ defined as in Section \ref{grotta} (but now with a $t$-dependence), $C\,dt$ the term coming from $\Dot{\varphi}$, and $T=-dt\frac{\partial}{\partial \tau}$.
The MC equation \eqref{MC_R} can be reformulated for $\tilde{R}$ by observing that:
\begin{itemize}
\item $d_x T=d_tT=0$,
\item $T$ commutes with $R$ and $C$.
\end{itemize}
The $(2,0)$-form component of the MC equation over $M \times I$ yields again the MC equation for $R$, while the $(1,1)$-component reads
\begin{equation}
\Dot{R}=d_xC+[R,C].
\end{equation}
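The $(1,1)$-component equation can be verified symbolically in a one-dimensional (hypothetical) family $\varphi_x^t(y)=x+y+ty^2$, where $C=-[(\partial \varphi/\partial y)^{-1}]\Dot{\varphi}\,\partial_y$ and the bracket of the vertical vector fields $Y\partial_y$ and $C\partial_y$ is computed componentwise. A sketch:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# Hypothetical t-dependent family of exponential maps on M = R:
phi = x + y + t * y**2
dphi_dy = sp.diff(phi, y)

# R = Y(x;y) (d/dy) dx and the generator C = -[(d phi/dy)^(-1)] phi_dot (d/dy)
Y = -sp.diff(phi, x) / dphi_dy
C = -sp.diff(phi, t) / dphi_dy

# (1,1)-component of the MC equation: dY/dt = d_x C + [R, C], with the
# bracket of vertical vector fields given by Y dC/dy - C dY/dy.
gap = sp.simplify(
    sp.diff(Y, t) - (sp.diff(C, x) + Y * sp.diff(C, y) - C * sp.diff(Y, y))
)
```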
\begin{rmk}
Under a change of formal exponential map, $R$ changes by a gauge transformation having as generator the section $C$ of $\hat{\mathfrak{X}}(TM) \coloneqq TM \otimes \reallywidehat{\Sym}^\bullet(T^\vee M)$. Finally, if $\sigma$ is a section in the image of $\mathrm{T}\varphi^*$, a simple computation yields
\begin{equation}
\Dot{\sigma}=-L_C\sigma.
\end{equation}
One can think of it as the associated gauge transformation for sections.
\end{rmk}
\subsection{Extension to graded manifolds}
The previous results can be generalized to the category of graded manifolds exploiting the algebraic reformulation of formal exponential maps developed in \cite{LS17}.
More concretely, given a formal exponential map $\varphi$ on a smooth manifold $M$, one can construct a map
\begin{equation}
\pbw: \Gamma(\reallywidehat{\Sym}^\bullet(TM)) \rightarrow \mathcal{D}(M)
\end{equation}
from sections of the completed symmetric algebra of the tangent bundle to the algebra of differential operators $\mathcal{D}$ by defining
\begin{equation}
\pbw\Big(X_1 \odot \dots \odot X_n\Big)(f)=\frac{d}{dt_1}\Bigg|_{t_1=0} \dots \frac{d}{dt_n}\Bigg|_{t_n=0} f\Big(\varphi(t_1X_1+\dots+t_nX_n)\Big),
\end{equation}
where we denote by $\odot$ the symmetric product.
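For a concrete (hypothetical) one-dimensional example, $\varphi_x(y)=x+y+ay^2$ and $X=\partial_x$, the definition gives $\pbw(X\odot X)=\partial_x^2+2a\,\partial_x$, a genuine second-order differential operator; this can be checked symbolically on the test function $f=e^x$, for which $(f''+2af')(x)=(1+2a)e^x$:

```python
import sympy as sp

x, t1, t2, a = sp.symbols('x t1 t2 a')

# Hypothetical exponential map phi_x(y) = x + y + a*y**2 on M = R,
# evaluated on the fiber vector t1*X + t2*X with X the unit vector field.
def phi(y):
    return x + y + a * y**2

# pbw(X . X)(f) = d/dt1 d/dt2 |_0  f(phi(t1 + t2)), for f = exp:
pbw_XX = sp.diff(sp.exp(phi(t1 + t2)), t1, t2).subs({t1: 0, t2: 0})

# Expected value: (f'' + 2a f')(x) = (1 + 2a) e^x, i.e. pbw turns the
# symmetric tensor X . X into a second-order differential operator.
gap = sp.simplify(pbw_XX - (1 + 2 * a) * sp.exp(x))
```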
One can also define this map in the category of graded manifolds by choosing a torsion-free connection $\nabla$ on the tangent bundle of a graded manifold $M$ with Christoffel symbols $\Gamma^k_{ij}$. In particular, there still exists an element
$R^{\nabla} \in \Omega^1\Big(M,TM \otimes \reallywidehat{\Sym}^\bullet(T^\vee M)\Big)$ with the property that $\Gr=d_M+R^{\nabla}$ is a flat connection on $\reallywidehat{\Sym}^\bullet(T^\vee M)$, i.e.
\begin{equation}
R^{\nabla}=-\delta+\Gamma+A^{\nabla}.
\end{equation}
In local coordinates $\{x^i\}$ on $M$ and $\{y^i\}$ on $TM$, we have
\begin{equation}
\begin{split}
\delta &=dx^i\frac{\partial}{\partial y^i},\\
\Gamma &=-dx^i\Gamma^k_{ij}(x)y^j\frac{\partial}{\partial y^k}, \\
A^{\nabla}&=dx^i \sum_{|J| \geq 2}A^k_{i,J}(x)y^J \frac{\partial}{\partial y^k}.
\end{split}
\end{equation}
We define $R_i \in \Gamma(M, \reallywidehat{\Sym}^\bullet(T^\vee M) \otimes TM)$ and $Y_i^k \in \Gamma(M, \reallywidehat{\Sym}^\bullet(T^\vee M))$ via
\begin{equation}
R^{\nabla}=R_i(x;y)dx^i=Y_i^k(x;y)dx^i\frac{\partial}{\partial y^k}.
\end{equation}
In particular, note that $\Gr$ extends to a differential on $\Omega^{\bullet}(M,\reallywidehat{\Sym}^\bullet(T^\vee M))$.
The Taylor expansion of a function $f \in \mathcal{C}^\infty(M)$ can be defined as \cite{LS17}
\begin{equation}
\label{1}
\mathrm{T}\varphi^* f \coloneqq \sum_I \frac{1}{I!}y^I\pbw \Big(\underset{\leftarrow}{\partial_x^I}\Big)(f),
\end{equation}
where
\begin{equation}
\Big(\underset{\leftarrow}{\partial_x^I}\Big)=\underbrace{\partial_{x_1} \odot \dots \odot \partial_{x_1}}_{i_1} \odot \dots \odot \underbrace{\partial_{x_n} \odot \dots \odot \partial_{x_n}}_{i_n}.
\end{equation}
One can prove that \eqref{1} still has the same properties, i.e. the image of $\mathrm{T}\varphi^*$ consists precisely of the $\Gr$-closed sections of $\reallywidehat{\Sym}^\bullet(T^\vee M)$.
We can describe how the exponential map varies under the choice of a connection, mimicking the construction for the smooth case described in Section \ref{app5}. More concretely, assume we have a smooth family $\nabla^t$ of connections on $TM$; then we can associate to this family a connection $\tilde{\nabla}$ on $M \times I$. The associated $R^{\tilde{\nabla}}$ can be split as in Section \ref{app5}:
\begin{equation}
R^{\tilde{\nabla}}=R^{\nabla^t}+C^{\nabla^t}dt+T,
\end{equation}
where $C^{\nabla^t} \in \Gamma(M, TM \otimes \reallywidehat{\Sym}^\bullet(T^\vee M))$. As previously, $\Gr^2=0$ means
\begin{equation}
\Dot{R}^{\nabla^t}=d_MC^{\nabla^t}+[R^{\nabla^t},C^{\nabla^t}],
\end{equation}
and for any section $\sigma$ in the image of $\mathrm{T}\varphi^*$ we have
\begin{equation}
\Dot{\sigma}=-L_{C^{\nabla^t}} \sigma.
\end{equation}
\section{Elements of derived geometry}
\label{app:derived_geometry}
In Section \ref{sec:BV-BFV}, we have introduced the BV formalism as a way to deal with non-isolated critical points for the action of a gauge theory. In other words, this means that the critical locus of the action functional (i.e. the set of points such that $\delta S=0$) is singular. The BV formalism instructs us to resolve the singularities homologically by taking the \textit{derived critical locus} of the action functional, which is a smooth object in the category of derived spaces: this is done by the Koszul resolution of the critical locus. More generally, this procedure can be understood globally in the setting of \textit{derived algebraic geometry} (DAG) \cite{To14,PTTV13}. However, for the present work, we do not require the whole DAG language. For us it is sufficient to work with a ``tamed'' version of DAG, namely the framework developed by Costello in \cite{Co11a,Co11b} to deal with formal mapping stacks which capture the geometry of derived critical loci in nonlinear sigma models.
\subsection{Category of derived manifolds}
Here, we want to define the category of derived manifolds. Let us start with the objects.
Denote by $\Omega^\bullet(M)$ the de Rham algebra of a manifold $M$, which, in other words, is a sheaf of commutative differentially graded algebras.
\begin{defn}[Derived manifold, \cite{Co11a}]
A derived manifold (over $\mathbb{R}$) is a pair $(M, \mathcal{A})$, where $M$ is a smooth manifold and $\mathcal{A}$ is a sheaf of unital differentially graded $\Omega^\bullet(M)$-algebras, satisfying the conditions
\begin{enumerate}
\item As a sheaf of $\mathcal{C}^\infty(M)$-algebras, $\mathcal{A}$ is locally free and of finite rank.
\item There is a morphism $\A\rightarrow\mathcal{C}^\infty(M)$ of sheaves of $\Omega^\bullet(M)$-algebras and the kernel of this map is a sheaf of nilpotent ideals.
\item The topology of $M$ has a basis such that the cohomology of $\A(U)$ is concentrated in nonpositive degrees for each basis set $U$.
\end{enumerate}
\end{defn}
\begin{exm}
Trivially, any manifold $M$ with $\A=\cinfty$ is a derived manifold.
\end{exm}
\begin{exm}
Let $M$ be a manifold and take $\A=\derham$ equipped with the de Rham differential. The pair $(M,\derham)$ is a derived manifold, which we denote by $M_{\,\text{dR}}$.
\end{exm}
\begin{exm}
Let $M$ be a complex manifold and take $\A=\Omega^{0,\bullet}(M)$ equipped with the Dolbeault differential $\Bar{\partial}$. The pair $(M,\Omega^{0,\bullet}(M))$ is a complex derived manifold.
\end{exm}
\begin{defn}[Morphisms of derived manifolds]
A morphism of derived manifolds $(M,\A)\rightarrow(N,\mathcal{B})$ is a smooth map $f:M\rightarrow N$ together with a morphism $\phi:f^{-1}\mathcal{B}\rightarrow\A$ of $f^{-1}\Omega^\bullet(N)$-algebras such that the diagram
\begin{center}
\begin{tikzcd}[column sep=large, row sep=large]
f^{-1}\mathcal{B}\arrow[r, "\phi"]\arrow[d]& \A\arrow[d] \\
f^{-1}\mathcal{C}^\infty(N)\arrow[ r]& \mathcal{C}^\infty(M)
\end{tikzcd}
\end{center}
commutes.
\end{defn}
\begin{notat}
We denote by $\mathbf{DMan}$ the category with objects given by derived manifolds and morphisms given by the ones we have just defined.
\end{notat}
The notion of morphisms between derived manifolds is further enriched by introducing \textit{weak equivalences} between derived manifolds. For this purpose, we will use the nilpotent differential graded (dg) ideal $I$ of $(M,\A)$, defined as the kernel of the map $\A\rightarrow\mathcal{C}^\infty(M)$. It yields a filtration $F^k\A\coloneqq I^k$ by powers of the nilpotent ideal. Let $\Grad\A$ denote the associated graded algebra with degree $k$ part $\Grad^k\A\coloneqq F^k\A/F^{k+1}\A$ and the induced differential.
\begin{defn}[Weak equivalence]
A morphism $(f,\phi):(M,\A)\rightarrow(N,\mathcal{B})$ of derived manifolds is a \textit{weak equivalence} if $f$ is a diffeomorphism and the induced map
\begin{equation}
\Grad\phi:f^{-1}\Grad\mathcal{B}\rightarrow\Grad\A
\end{equation}
is a quasi-isomorphism.
\end{defn}
The filtration also serves another aim: it should mirror the role of the tower of quotients of a local Artinian algebra in formal deformation theory. In that context, it is often useful to proceed by \textit{Artinian induction}: for $(A, \mathfrak{m})$ a local Artinian algebra over $\mathbb{R}$, there is a tower
\begin{equation}
A=A/\mathfrak{m}^{n+1}\rightarrow A/\mathfrak{m}^{n}\rightarrow \dots \rightarrow A/\mathfrak{m}\cong \mathbb{R}.
\end{equation}
This tower is then used to prove properties of $A$. Following these ideas, derived manifolds can be used to study derived deformation theory in the same way as Artinian algebras are used to study formal deformation theory. Let us now define Artinian dg algebras and make these ideas more precise. We will be concise, so we refer to \cite{Co11a,CG16} for a more detailed exposition.
\begin{defn}[Artinian dg algebra]
An \textit{Artinian dg algebra} $R$ over a field $\mathbb{K}$ is a finite-dimensional dg algebra over $\mathbb{K}$, concentrated in nonpositive degrees, with a unique nilpotent dg ideal $\mathfrak{m}$ such that $R/\mathfrak{m}\cong\mathbb{K}$.
\end{defn}
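As a toy illustration of Artinian induction, take $A=\mathbb{R}[x]/(x^3)$ with $\mathfrak{m}=(x)$: the maximal ideal is nilpotent, and the quotients $A/\mathfrak{m}^k$ simply truncate polynomials at lower and lower order. A sketch computed with truncated polynomial arithmetic:

```python
import sympy as sp

x = sp.symbols('x')
n = 3  # A = R[x]/(x^3), a local Artinian algebra with maximal ideal m = (x)

def quotient(p, k):
    """Image of a polynomial p in A/m^k = R[x]/(x^k): truncate below degree k."""
    return sp.rem(sp.expand(p), x**k, x)

# m is nilpotent in A: x * x * x = 0
cube = quotient(x * x * x, n)

# The tower A = A/m^3 -> A/m^2 -> A/m ~ R truncates at lower and lower order
p = 1 + 2 * x + 5 * x**2
tower = [quotient(p, k) for k in (3, 2, 1)]
```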
The relation between Artinian dg algebras and derived manifolds is explained by the following proposition.
\begin{prp}[\cite{GG14}]
\label{prp:art_dman}
There is a fully faithful embedding
\begin{align}
\begin{split}
\Spec: \mathbf{dgArt}^{op}_{\mathbb{K}}&\rightarrow\mathbf{DMan},\\
R&\mapsto \Spec R\coloneqq(\pt, R).
\end{split}
\end{align}
\end{prp}
The importance of Artinian dg algebras comes from their being a sort of ``test object'' in formal derived deformation theory.
\begin{defn}[Formal derived moduli problem, \cite{Lu11}]
\label{formal_mod_prob}
A formal derived moduli problem over $\mathbb{K}$ is a functor
\begin{equation}
X:\mathbf{dgArt}_{\mathbb{K}}\rightarrow\mathbf{sSets},
\end{equation}
where $\mathbf{sSets}$ is the category of simplicial sets and $X$ is such that $X(\mathbb{K})$ is contractible and $X$ preserves certain homotopy limits.
\end{defn}
\begin{rmk}
Loosely speaking, Artinian dg algebras are points with nilpotent directions in derived manifolds. Hence, studying formal moduli problems corresponds to studying the formal neighbourhoods of such points.
\end{rmk}
\begin{rmk}
We can now combine Definition \ref{formal_mod_prob} and Proposition \ref{prp:art_dman}. We generalize formal derived moduli problems by extending the functor $X$ to a functor $\mathbf{DMan}^{op}\rightarrow \mathbf{sSets}$. In this way, we can study formal moduli problems parametrized by a smooth manifold $M$ (before, they were parametrized by an Artinian algebra).
\end{rmk}
\subsection{Derived stacks}
In this section, we are going to briefly introduce \textit{derived stacks}. These are the spaces studied in derived algebraic geometry.
Recall the functor of points approach in algebraic geometry: a scheme can be defined as a functor from the category of commutative $\mathbb{K}$-algebras, i.e. $\mathbf{CAlg}_{\mathbb{K}}$, to the category of sets. Motivated by the study of moduli problems, where the focus is to classify objects together with their isomorphisms, the target category was extended to the category of groupoids (small categories whose morphisms are all invertible). These new functors were called stacks. A further generalization is given by \textit{higher stacks}, where the interest is to classify objects up to a higher notion of equivalence rather than isomorphism (e.g. quasi-isomorphism). The target category in this case is extended to the category of simplicial sets. Finally, derived stacks (or derived higher stacks) arise when we enlarge the source category to $\mathbf{DCAlg}_{\mathbb{K}}$, i.e. the category of simplicial commutative $\mathbb{K}$-algebras. This category has a natural model category structure, which allows one to do homotopy theory. Hence, derived stacks are defined as functors $\mathbf{DCAlg}_{\mathbb{K}}\rightarrow \mathbf{sSets}$ which send equivalences in the source category to weak homotopy equivalences in the target and satisfy a \textit{descent} condition \cite{To06}.
The related definition in Costello's approach \cite{Co11a,Co11b} is similar; the only difference is the source category, which is the category of derived manifolds $\mathbf{DMan}^{op}$.
\begin{defn}[Derived stack]
A \textit{derived stack} or \textit{derived space} is a functor
\begin{equation}
X:\mathbf{DMan}^{op}\rightarrow\mathbf{sSets}
\end{equation}
such that:
\begin{itemize}
\item $X$ takes weak equivalences of derived manifolds to weak equivalences of simplicial sets.
\item $X$ satisfies \v{C}ech descent.
\end{itemize}
\end{defn}
The notion of \v{C}ech descent is outside the scope of the present work; we refer to \cite{GG14,Ste17} for a definition.
In the following, we will study a particular type of derived stack with a geometric interpretation, namely the derived stacks represented by $\Linf$-spaces.
\subsection{\texorpdfstring{$L_{\infty}$}{L}-spaces}
The heart of the philosophy of deformation theory is the following statement: ``every formal derived moduli problem is represented by an $\Linf$-algebra". For the precise statement see \cite{Lu10}. We will see how this works in our setting, but first we need some definitions.
\begin{defn}[Curved $\Linf$-algebra over $A$]
Let $A$ be a commutative differential graded algebra (cdga) with a nilpotent dg ideal $I$ and $A^{\#}$ be the underlying graded algebra, with zero differential. A \textit{curved $\Linf$-algebra over \textup{$A$}} is a finitely generated projective $A^{\#}$-module $V$ together with a derivation of cohomological degree 1:
\begin{equation}
d:\reallywidehat{\Sym}^\bullet(V[1]^\vee)\rightarrow\reallywidehat{\Sym}^\bullet(V[1]^\vee)
\end{equation}
such that:
\begin{itemize}
\item $(\reallywidehat{\Sym}^\bullet(V[1]^\vee),d)$ is a cdga over $A$.
\item $d$ preserves the ideal $\Sym^{>0}(V[1]^\vee)$ modulo the nilpotent ideal $I$.
\end{itemize}
\end{defn}
\begin{rmk}
If we take the Taylor components of $d$, we obtain the maps
\begin{equation}
d_n: V^\vee\rightarrow\bigwedge\nolimits^n(V[n-2]^\vee),\quad n\geq 0
\end{equation}
which, upon dualization, become the $\Linf$-brackets
\begin{equation}
\ell_n: \bigwedge\nolimits^n(V[n-2])\rightarrow V
\end{equation}
\end{rmk}
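\begin{rmk}
For orientation, here is a standard unpacking of the low-arity brackets: $\ell_0$ is an element of $V$ of degree 2, the \textit{curvature}; $\ell_1$ plays the role of a differential; and $\ell_2$ is a binary bracket. In the uncurved case $\ell_0=0$, the first consequences of $d^2=0$ read
\begin{align}
\ell_1(\ell_1(v))&=0,\\
\ell_1(\ell_2(v,w))&=\ell_2(\ell_1(v),w)\pm\ell_2(v,\ell_1(w)),
\end{align}
so $\ell_1$ is a differential which is a derivation of $\ell_2$, while the Jacobi identity for $\ell_2$ holds only up to a homotopy given by $\ell_3$.
\end{rmk}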
\begin{notat}
The completed symmetric algebra over $A^{\#}$ of the 1-shifted dual of $V$, i.e. $\reallywidehat{\Sym}^\bullet(V[1]^\vee)$, is known as the \textit{Chevalley--Eilenberg complex} of $V$, and denoted in the following by $C^{\bullet}(V)$.
\end{notat}
\begin{defn}[Morphism of curved $\Linf$-algebras]
A \textit{morphism of curved $\Linf$-algebras} $\phi:V\rightarrow W$ over $A$ is a map $\phi^*:C^\bullet(W)\rightarrow C^\bullet(V)$ of cdga's over $A$ which respects the filtration by the ideal $I$.
\end{defn}
\begin{defn}[Maurer--Cartan element, Maurer--Cartan equation]
Let $\mathfrak{g}$ be a curved $\Linf$-algebra and $\alpha$ a degree 1 element of $\mathfrak{g}$. The \textit{Maurer--Cartan element} associated to $\alpha$ is
\begin{equation}
\label{MC}
MC(\alpha)=\sum^{\infty}_{n=0}\frac{1}{n!}\ell_n(\alpha^{\otimes n}).
\end{equation}
The \textit{Maurer--Cartan equation} for $\alpha$ is $MC(\alpha)=0$.
\end{defn}
\begin{rmk}
Since \eqref{MC} involves an infinite sum, to render it well-defined we will only consider Maurer--Cartan elements in \textit{nilpotent} $\Linf$-algebras, where the sum is finite.
\end{rmk}
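\begin{exm}
In the special case of a (curved) dg Lie algebra, i.e. $\ell_n=0$ for $n\geq 3$, the sum \eqref{MC} truncates and the Maurer--Cartan equation reads
\begin{equation}
\ell_0+\ell_1(\alpha)+\frac{1}{2}\ell_2(\alpha,\alpha)=0.
\end{equation}
Writing $\ell_1=d$ and $\ell_2=[\cdot,\cdot]$ and setting the curvature $\ell_0$ to zero, this is the classical Maurer--Cartan equation $d\alpha+\frac{1}{2}[\alpha,\alpha]=0$.
\end{exm}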
As announced above, formal moduli problems are described by $\Linf$-algebras. Explicitly, let $\mathfrak{g}$ be an $\Linf$-algebra and $R$ be an Artinian dg algebra over $\mathbb{R}$, with maximal ideal $\mathfrak{m}$. Then, we can associate a formal derived moduli problem to $\mathfrak{g}$ by sending $R$ to the simplicial set
\begin{equation}
MC(\mathfrak{g}\otimes_{\mathbb{R}}\mathfrak{m})
\end{equation}
of solutions of the Maurer--Cartan equation of the nilpotent $\Linf$-algebra $\mathfrak{g}\otimes_{\mathbb{R}}\mathfrak{m}$. More precisely, the Maurer--Cartan functor $MC_{\mathfrak{g}}$ is a formal derived moduli problem (see Definition \ref{formal_mod_prob}). Moreover, sending an $\Linf$-algebra to its associated Maurer--Cartan functor yields an equivalence of categories \cite{Lu10}.
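\begin{rmk}
Concretely, one standard model for this simplicial set (due to Hinich and Getzler) has as its $n$-simplices the Maurer--Cartan elements obtained after tensoring with polynomial differential forms on the $n$-simplex,
\begin{equation}
MC_{\mathfrak{g}}(R)_n = MC\big(\mathfrak{g}\otimes_{\mathbb{R}}\mathfrak{m}\otimes_{\mathbb{R}}\Omega^\bullet(\Delta^n)\big),
\end{equation}
with the simplicial structure induced by pullback of forms along the face and degeneracy maps. The same factor $\Omega^\bullet(\Delta^n)$ reappears in the functor of points of an $\Linf$-space below.
\end{rmk}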
However, we are interested in $\Linf$-algebras parametrized by smooth manifolds and in derived stacks, the global counterparts of formal moduli problems, which are local in nature.
\begin{defn}[curved $\Linf$-algebra over $\Omega^\bullet(X)$, $\Linf$-space]
Let $X$ be a smooth manifold.
\begin{enumerate}
\item A \textit{curved $\Linf$-algebra over $\Omega^\bullet(X)$} consists of a $\mathbb{Z}$-graded topological\footnote{By topological, we mean that the fibers are topological vector spaces and the transition maps are continuous.} vector bundle $\pi: V\rightarrow X$ and the structure of an $\Linf$-algebra on its sheaf of smooth sections, denoted by $\mathfrak{g}$, where the base algebra is over $\Omega^\bullet(X)$ with nilpotent ideal $I=\Omega^{\geq 1}(X)$.
\item An $\Linf$-space is a pair $(X,\mathfrak{g})$, where $\mathfrak{g}$ is a curved $\Linf$-algebra over $\Omega^\bullet(X)$.
\end{enumerate}
\end{defn}
Now, we will explain how every $\Linf$-space defines a derived stack; it works in the same manner as $\Linf$-algebras determine formal moduli problems. Let $B\mathfrak{g}\coloneqq(X,\mathfrak{g})$ denote an $\Linf$-space. For a smooth map $f:Y\rightarrow X$, one forms a curved $\Linf$-algebra over $\Omega^\bullet(Y)$ by pullback:
\begin{equation}
f^*\mathfrak{g}\coloneqq f^{-1}\mathfrak{g}\otimes_{f^{-1}\Omega^\bullet(X)}\Omega^\bullet(Y).
\end{equation}
\begin{defn}[$B\mathfrak{g}$ functor of points, \cite{Co11b,GG14}]
Let $\Delta^n$ be the standard $n$-simplex in $\mathbb{R}^n.$ For $B\mathfrak{g}$ an $\Linf$-space, its \textit{functor of points} is the functor
\begin{equation}
MC_{B\mathfrak{g}}:\mathbf{DMan}^{op}\rightarrow\mathbf{sSets},
\end{equation}
which sends a derived manifold $\mathcal{M}=(M,\A)$ to the simplicial set $MC_{B\mathfrak{g}}(\mathcal{M})$ whose $n$-simplices are given by pairs $(f,\alpha)$, where $f:M\rightarrow X$ is a smooth map and $\alpha$ is a solution of the Maurer--Cartan equation in the nilpotent curved $\Linf$-algebra $f^*\mathfrak{g}\otimes_{\Omega^\bullet(M)}I_{\mathcal{M}}\otimes_{\mathbb{R}}\Omega^\bullet(\Delta^n)$, where $I_{\mathcal{M}}$ is the nilpotent ideal. In other words,
\begin{equation}
\label{mc_func_points}
MC_{B\mathfrak{g}}(\mathcal{M})=\bigsqcup_{f\in \mathcal{C}^\infty(M,X)}MC(f^*\mathfrak{g}\otimes_{\Omega^\bullet(M)}I_{\mathcal{M}}).
\end{equation}
\end{defn}
\begin{thm}[\cite{GG14}]
For any $\Linf$-space, its functor of points is a derived stack.
\end{thm}
\begin{rmk}
Since in \eqref{mc_func_points} we are tensoring with the nilpotent ideal $I_{\mathcal{M}}$ instead of the whole algebra, the $\Linf$-algebra is nilpotent. This reflects the idea that the deformation functor $ MC_{B\mathfrak{g}}$ should deform only the nilpotent directions of a derived manifold.
\end{rmk}
When $X$ is a complex manifold, the following theorem provides us with all the needed properties for its $\Linf$-space.
\begin{thm}[\cite{Co11b}]
Let $X$ be a complex manifold and let $\Omega^{\#}(X)$ be the de Rham complex endowed with the zero differential. There exists an $\Linf$-space $X_{\Bar{\partial}}=(X,\mathfrak{g}_X)$ with the following properties:
\begin{enumerate}
\item $\mathfrak{g}_X=\Omega^{\#}(X)\otimes_{\mathcal{C}^{\infty}(X)}T^{1,0}X[-1]$ as a $\Omega^{\#}(X)$-module.
\item $C^\bullet(\mathfrak{g}_X)\cong \Omega^\bullet(X)\otimes_{\mathcal{C}^{\infty}(X)}\Jet^\mathrm{hol}_X$ as a $\Omega^{\bullet}(X)$-algebra.
\item The jet prolongation map
\begin{equation}
\mathcal{C}^{\infty}(X)\hookrightarrow\Omega^\bullet(X)\otimes_{\mathcal{C}^{\infty}(X)}\Jet^\mathrm{hol}_X\cong C^\bullet(\mathfrak{g}_X)
\end{equation}
is a quasi-isomorphism of complexes of sheaves.
\end{enumerate}
\end{thm}
\subsection{Derived mapping spaces}
\label{der_map_spaces}
For an $\Linf$-space, we can think of its functor of points $MC_{B\mathfrak{g}}$ as the derived stack of maps into $B\mathfrak{g}$. With this idea in mind, in this section, we will see that if $(M,\A)$ is a derived manifold, a subset of the space of maps $(M,\A)\rightarrow(X,\mathfrak{g})$ is itself represented by an $\Linf$-space.
Hence, let us define a new \textit{simplicial presheaf} (see \cite{GG14}) on the site of $\mathbf{DMan}$ given by
\begin{alignat}{1}
MC^{\mathcal{M}}_{B\mathfrak{g}}: \mathbf{DMan}^{op}&\rightarrow\mathbf{sSets},\\
\mathcal{N}&\mapsto MC_{B\mathfrak{g}}(\mathcal{M}\times \mathcal{N}).
\end{alignat}
This functor is again a derived stack since $MC_{B\mathfrak{g}}$ is a derived stack. In particular, this is the derived stack of maps from $\mathcal{M}$ to $B\mathfrak{g}$. In perturbation theory, when we have a space of fields given by the space of maps $\mathcal{M}\rightarrow B\mathfrak{g}$, we perturb around a subset of these maps, usually the constant maps. Here we do the same. We will consider the sub-simplicial presheaf
\begin{equation}
\reallywidehat{MC}^{\mathcal{M}}_{B\mathfrak{g}}(\mathcal{N})\subset MC^{\mathcal{M}}_{B\mathfrak{g}}(\mathcal{N})
\end{equation}
of Maurer--Cartan solutions in which the underlying smooth map $M\rightarrow X$ is constant. More precisely, for an auxiliary derived manifold $(\mathcal{N}, \mathcal{B})$, consider
\begin{equation}
\reallywidehat{MC}^{\mathcal{M}}_{B\mathfrak{g}}(\mathcal{N})\subset MC_{B\mathfrak{g}}(\mathcal{N}\times \mathcal{M})
\end{equation}
which consists of Maurer--Cartan elements $(f, \alpha)$ such that the underlying smooth map $f:N\times M\rightarrow X$ factors through the projection onto $M$. Costello showed that this space is itself an $\Linf$-space under certain conditions. This is useful for us since we would like $ \reallywidehat{MC}^{\mathcal{M}}_{B\mathfrak{g}}$ to represent the space of fields of a classical field theory.
\begin{prp}[\cite{Co11a,Co11b}]
Let $\mathcal{M}=(M,\A)$ be a derived manifold with nilpotent sheaf of ideals $I$ and with the property that, filtering $\A$ by the powers of the nilpotent ideal, the cohomology of $\Gr^i\A(M)$ for $i\geq 1$ is concentrated in degrees $\geq 1$. \\
Let $(X, \mathfrak{g})$ be an $\Linf$-space such that the cohomology of the sheaf of $\Linf$-algebras $\mathfrak{g}_{\mathrm{red}}\coloneqq\mathfrak{g}/\Omega^{\geq 1}(X)$ is concentrated in degrees $\geq 1$.\\
Then, the restricted Maurer--Cartan functor $\reallywidehat{MC}^{\mathcal{M}}_{B\mathfrak{g}}$ is weakly equivalent to the functor of points of the $\Linf$-space $(X, \mathfrak{g}\otimes \mathcal{A}(M))$.
\end{prp}
\begin{notat}
From now on, we will write $\reallywidehat{\Maps}(\mathcal{M}, B\mathfrak{g})$ for the $\Linf$-space $(X,\mathcal{A}(M)\otimes \mathfrak{g})$.
\end{notat}
\subsection{Shifted symplectic structures}
In \cite{Sch93}, Schwarz gave a definition of a shifted symplectic structure on a dg manifold. Since on a dg manifold all spaces of tensors are cochain complexes, the space of $i$-forms $\Omega^i(M)$ on a dg manifold is a cochain complex, whose differential is called the \textit{internal} differential. Moreover, we also have the de Rham differential $d_\mathrm{dR}:\Omega^{i}(M)\rightarrow\Omega^{i+1}(M)$. According to Schwarz, a symplectic form is an element of the complex of 2-forms that is closed with respect to both the de Rham differential and the internal differential. This element is also required to be non-degenerate. If the internal degree\footnote{The internal degree is also called ``ghost" degree.} of the form is $k$, it is said to be $k$-shifted. The physically relevant degrees are $-1$ for the space of bulk fields (BV formalism) and 0 for the boundary phase space (BFV formalism).
In \cite{PTTV13}, Pantev et al. gave a definition of shifted symplectic structure in the context of derived geometry using the language of derived Artin stacks. Here, a closed $2$-form is a cocycle in the truncated de Rham complex
\begin{equation}
\Omega^2_{cl}(X)\coloneqq \bigg(\bigoplus_{k\geq 2} \Omega^k(X)[-k+2], d_\mathrm{dR} \bigg)
\end{equation}
shifted in such a way that $\Omega^2(X)$ is in degree zero.
\begin{rmk}
Note that a closed $2$-form is given by a sequence of forms $(\omega_2,\dots, \omega_k,\dots)$, with $\omega_k$ a $k$-form and only finitely many of the $\omega_k$ nonzero, such that $d_\mathrm{dR}\omega_k=\pm d_{\mathrm{int}}\omega_{k+1}$ for $k\geq 2$, where $d_\mathrm{int}$ is the internal differential. Therefore, to say that a 2-form is closed we need to specify more data than just the 2-form itself. Hence, being closed is a datum; it is no longer a property, as it is in the smooth case.
\end{rmk}
In particular, a 2-form is symplectic when it is non-degenerate in a suitable sense (see \cite{PTTV13}).
In \cite{CG16}, it is shown that a symplectic form of degree $k$ in the sense of Schwarz is the same as a degree $k-2$ non-degenerate invariant symmetric pairing on $\mathfrak{g}$. Moreover, we have the following lemma, which closes the circle between all these apparently different notions of a symplectic form.
\begin{lmm}[\cite{CG16}]
Let $\mathfrak{g}$ be a finite dimensional $\Linf$-algebra. A $k$-shifted symplectic structure in the sense of \cite{PTTV13} on $B\mathfrak{g}$ is the same as a degree $k-2$ non-degenerate invariant symmetric pairing on $\mathfrak{g}$.
\end{lmm}
Hence, a $k$-shifted symplectic structure on an $\Linf$-space $(X,\mathfrak{g})$ can be defined to be such a pairing on $\mathfrak{g}$.
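\begin{exm}
For instance, if $\mathfrak{g}$ is a finite dimensional dg Lie algebra equipped with a non-degenerate invariant symmetric pairing of degree $-3$, then $B\mathfrak{g}$ carries a $(-1)$-shifted symplectic structure, which is the degree relevant for the space of bulk fields in the BV formalism.
\end{exm}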
\begin{exm}
Let $X$ be a complex manifold of dimension $2n$. By endowing $X$ with a holomorphic symplectic form (a non-degenerate 2-form on $T^{1,0}X$ which is closed under $d_X=\partial+\Bar{\partial}$, with $\partial$ the holomorphic differential and $\Bar{\partial}$ the antiholomorphic differential), the $\Linf$-space $X_{\Bar{\partial}}$ associated to $X$ becomes 0-shifted symplectic.
\end{exm}
\section{Examples of Feynman graphs for the BFV boundary operator in the $\mathbb{B}$-representation}
\label{app:feyn}
Here, we present the graphs appearing in the BFV boundary operator in the $\mathbb{B}$-representation up to three bulk vertices (black or red), using the Feynman rules in Table \ref{class:Tab_coeff_split_fr}. Let us consider $\boldsymbol{\Omega}^{\mathbb{B}}_3=\boldsymbol{\Omega}^{\mathbb{B}}_{3,0}+\boldsymbol{\Omega}^{\mathbb{B}}_{2,1}+\boldsymbol{\Omega}^{\mathbb{B}}_{1,2}+\boldsymbol{\Omega}^{\mathbb{B}}_{0,3}$. We present
\begin{itemize}
\item the graphs appearing in $\boldsymbol{\Omega}^{\mathbb{B}}_{3,0}$ in Figs. \ref{fig:omega30_1}, \ref{fig:omega30_2} and \ref{fig:omega30_3};
\item the graphs appearing in $\boldsymbol{\Omega}^{\mathbb{B}}_{2,1}$ in Fig. \ref{fig:omega21};
\item the graphs appearing in $\boldsymbol{\Omega}^{\mathbb{B}}_{1,2}$ in Fig. \ref{fig:omega12};
\item the graphs appearing in $\boldsymbol{\Omega}^{\mathbb{B}}_{0,3}$ in Fig. \ref{fig:omega03}.
\end{itemize}
We note that all the boundaries in the figures are assumed to be $\partial_2\Sigma_3$.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\node[dot] (d) at (0, 0.65);
\vertex (f) at (0, 1.85);
\vertex (a) at (-3, -1.2);
\vertex (b) at (3, -1.2);
\node[dot] (z) at (0, -1);
\vertex (x) at (-0.5, -0.5);
\vertex (y) at (0.5, -0.5);
\diagram*{
(f) -- [fermion] (d),
(d) -- [fermion] m [dot],
(a) -- (b),
m -- (x),
m -- (y),
(x) -- [fermion] (z),
(y) -- [fermion] (z),
};
\end{feynman}
\draw (-2.7,-1.2) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -1.2);
\node[dot] (x) at (-0.5, -0.5);
\diagram*{
(d) -- [fermion] m [dot],
(a) -- (b),
m -- [fermion] (x),
(x) -- [fermion] (z),
m -- [fermion] (z)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -1.2);
\node[dot] (x) at (-0.5, -0.5);
\diagram*{
(d) -- [fermion] m [dot],
(a) -- (b),
m -- [anti fermion] (x),
(x) -- [ fermion] (z),
m -- [anti fermion] (z),
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\node[dot] (d) at (0, 0.65);
\vertex (f) at (0, 1.85);
\vertex (a) at (-3, -1.2);
\vertex (b) at (3, -1.2);
\node[dot] (z) at (0, -1);
\vertex (x) at (-0.5, -0.5);
\vertex (y) at (0.5, -0.5);
\diagram*{
(f) -- [fermion] (d),
(d) -- [fermion] m [dot],
(a) -- (b),
m -- [anti fermion] (x),
m -- [anti fermion] (y),
(x) -- (z),
(y) -- (z),
};
\end{feynman}
\draw (-2.7,-1.2) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\node[dot] (d) at (0, 0.65);
\vertex (f) at (0, 1.85);
\vertex (a) at (-3, -1.2);
\vertex (b) at (3, -1.2);
\node[dot] (z) at (0, -1);
\vertex (x) at (-0.5, -0.5);
\vertex (y) at (0.5, -0.5);
\diagram*{
(f) -- [fermion] (d),
(d) -- [anti fermion] m [dot],
(a) -- (b),
m -- [anti fermion] (x),
m -- [anti fermion] (y),
(x) -- (z),
(y) -- (z),
};
\end{feynman}
\draw (-2.7,-1.2) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, 1.85);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -1.2);
\node[dot] (x) at (-0.5, -0.5);
\diagram*{
(d) -- [fermion] m [dot],
(a) -- (b),
m -- [ fermion] (x),
(x) -- [ anti fermion] (z),
m -- [ anti fermion] (z),
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\node[dot] (d) at (0, 0.65);
\vertex (f) at (0, 2.25);
\vertex (a) at (-3, -0.8);
\vertex (b) at (3, -0.8);
\node[dot] (x) at (-0.5, -0.5);
\vertex (y) at (0.5, -0.5);
\diagram*{
(f) -- [fermion] (d),
(d) -- [fermion] m [dot],
(a) -- (b),
m -- [ fermion] (x),
m -- [anti fermion] (y),
(x) -- (y),
};
\end{feynman}
\draw (-2.7,-0.8) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -1.2);
\node[dot] (x) at (-0.5, -0.5);
\diagram*{
(d) -- [fermion] m [dot],
(a) -- (b),
m -- [ fermion] (x),
(x) -- [ fermion] (z),
m -- [ anti fermion] (z),
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\node[dot] (d) at (0, 0.65);
\vertex (f) at (0, 2.25);
\vertex (a) at (-3, -0.8);
\vertex (b) at (3, -0.8);
\node[dot] (x) at (-0.5, -0.5);
\vertex (y) at (0.5, -0.5);
\diagram*{
(f) -- [fermion] (d),
(d) -- [anti fermion] m [dot],
(a) -- (b),
m -- [ fermion] (x),
m -- (y),
(x) -- [ anti fermion] (y),
};
\end{feynman}
\draw (-2.7,-0.8) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\caption{First part of the graphs appearing in $\boldsymbol{\Omega}^{\mathbb{B}}_{3,0}$. However, most of them give no contribution: by Kontsevich's vanishing lemma \cite{Ko03}, all graphs with a vertex at which exactly one arrow ends and exactly one arrow starts vanish, and graphs with double edges, i.e. two edges connecting the same two vertices, also vanish. This can be seen by using Kontsevich's angle form propagator on $\mathbb{H}^3$.}
\label{fig:omega30_1}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.325\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 0.5);
\vertex (e) at (0, 1.2);
\vertex (a) at (-2.75, -1.4);
\vertex (b) at (2.75, -1.4);
\node[dot] (z) at (0, -1.2);
\vertex (x) at (-0.5,0);
\vertex (y) at (0.5,0);
\node[dot] (q) at (0, -0.5);
\diagram*{
(x) -- (d),
(y) -- (d),
(a) -- (b),
(e) --[fermion] (d),
(q) --[anti fermion] (x),
(q) --[anti fermion] (y),
(q) -- [anti fermion] (z),
};
\end{feynman}
\draw (-2.15,-1.4) arc (180:0:2.15);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.325\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 0.5);
\vertex (e) at (0, 1.2);
\vertex (a) at (-2.75, -1.4);
\vertex (b) at (2.75, -1.4);
\node[dot] (z) at (0, -1.2);
\vertex (x) at (-0.5,0);
\vertex (y) at (0.5,0);
\node[dot] (q) at (0, -0.5);
\diagram*{
(x) -- (d),
(y) -- (d),
(a) -- (b),
(e) --[fermion] (d),
(q) --[anti fermion] (x),
(q) --[anti fermion] (y),
(q) -- [ fermion] (z),
};
\end{feynman}
\draw (-2.15,-1.4) arc (180:0:2.15);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.325\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 0.5);
\vertex (e) at (0, 1.2);
\vertex (a) at (-2.75, -1.4);
\vertex (b) at (2.75, -1.4);
\node[dot] (z) at (0, -1.2);
\vertex (x) at (-0.5,0);
\vertex (y) at (0.5,0);
\node[dot] (q) at (0, -0.5);
\diagram*{
(x) -- (d),
(y) -- (d),
(a) -- (b),
(e) --[fermion] (d),
(q) --[anti fermion] (x),
(q) --[ fermion] (y),
(q) -- [ fermion] (z),
};
\end{feynman}
\draw (-2.15,-1.4) arc (180:0:2.15);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.325\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 0.5);
\vertex (e) at (0, 1.2);
\vertex (a) at (-2.75, -1.4);
\vertex (b) at (2.75, -1.4);
\node[dot] (z) at (0, -1.2);
\vertex (x) at (-0.5,0);
\vertex (y) at (0.5,0);
\node[dot] (q) at (0, -0.5);
\diagram*{
(x) -- (d),
(y) -- (d),
(a) -- (b),
(e) --[fermion] (d),
(q) --[fermion] (x),
(q) --[anti fermion] (y),
(q) -- [anti fermion] (z),
};
\end{feynman}
\draw (-2.15,-1.4) arc (180:0:2.15);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hspace{1cm}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 0.5);
\vertex (e) at (0, 1.2);
\vertex (a) at (-2.75, -1.4);
\vertex (b) at (2.75, -1.4);
\node[dot] (z) at (0, -1.2);
\vertex (x) at (-0.5,0);
\vertex (y) at (0.5,0);
\node[dot] (q) at (0, -0.5);
\diagram*{
(x) -- (d),
(y) -- (d),
(a) -- (b),
(e) --[fermion] (d),
(q) --[fermion] (x),
(q) --[fermion] (y),
(q) -- [fermion] (z),
};
\end{feynman}
\draw (-2.15,-1.4) arc (180:0:2.15);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\caption{Second part of the graphs appearing in $\boldsymbol{\Omega}^{\mathbb{B}}_{3,0}$. None of them gives any contribution since, again by Kontsevich's vanishing lemma \cite{Ko03}, all graphs with double edges vanish.}
\label{fig:omega30_2}
\end{figure}
\vfill
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.325\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 0.5);
\vertex (e) at (0, 1.2);
\vertex (a) at (-2.75, -1.4);
\vertex (b) at (2.75, -1.4);
\node[dot] (z) at (0, -1.2);
\vertex (x) at (-0.5,0);
\vertex (y) at (0.5,0);
\node[dot,red] (q) at (0, -0.5);
\diagram*{
(x) -- (d),
(y) -- (d),
(a) -- (b),
(e) --[fermion] (d),
(q) --[anti fermion] (x),
(q) --[anti fermion] (y),
(q) -- [anti fermion] (z),
};
\end{feynman}
\draw (-2.15,-1.4) arc (180:0:2.15);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.325\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 0.5);
\vertex (e) at (0, 1.2);
\vertex (a) at (-2.75, -1.4);
\vertex (b) at (2.75, -1.4);
\node[dot] (z) at (0, -1.2);
\vertex (x) at (-0.5,0);
\vertex (y) at (0.5,0);
\node[dot,red] (q) at (0, -0.5);
\diagram*{
(x) -- (d),
(y) -- (d),
(a) -- (b),
(e) --[fermion] (d),
(q) --[anti fermion] (x),
(q) --[anti fermion] (y),
(q) -- [ fermion] (z),
};
\end{feynman}
\draw (-2.15,-1.4) arc (180:0:2.15);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.325\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 0.5);
\vertex (e) at (0, 1.2);
\vertex (a) at (-2.75, -1.4);
\vertex (b) at (2.75, -1.4);
\node[dot] (z) at (0, -1.2);
\vertex (x) at (-0.5,0);
\vertex (y) at (0.5,0);
\node[dot,red] (q) at (0, -0.5);
\diagram*{
(x) -- (d),
(y) -- (d),
(a) -- (b),
(e) --[fermion] (d),
(q) --[anti fermion] (x),
(q) --[ fermion] (y),
(q) -- [ fermion] (z),
};
\end{feynman}
\draw (-2.15,-1.4) arc (180:0:2.15);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.325\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 0.5);
\vertex (e) at (0, 1.2);
\vertex (a) at (-2.75, -1.4);
\vertex (b) at (2.75, -1.4);
\node[dot] (z) at (0, -1.2);
\vertex (x) at (-0.5,0);
\vertex (y) at (0.5,0);
\node[dot,red] (q) at (0, -0.5);
\diagram*{
(x) -- (d),
(y) -- (d),
(a) -- (b),
(e) --[fermion] (d),
(q) --[fermion] (x),
(q) --[anti fermion] (y),
(q) -- [anti fermion] (z),
};
\end{feynman}
\draw (-2.15,-1.4) arc (180:0:2.15);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hspace{1cm}
\begin{subfigure}[b]{0.325\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 0.5);
\vertex (e) at (0, 1.2);
\vertex (a) at (-2.75, -1.4);
\vertex (b) at (2.75, -1.4);
\node[dot] (z) at (0, -1.2);
\vertex (x) at (-0.5,0);
\vertex (y) at (0.5,0);
\node[dot,red] (q) at (0, -0.5);
\diagram*{
(x) -- (d),
(y) -- (d),
(a) -- (b),
(e) --[fermion] (d),
(q) --[fermion] (x),
(q) --[fermion] (y),
(q) -- [fermion] (z),
};
\end{feynman}
\draw (-2.15,-1.4) arc (180:0:2.15);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\caption{Graphs appearing in $\boldsymbol{\Omega}^{\mathbb{B}}_{1,2}$. None of them gives any contribution since, again by Kontsevich's vanishing lemma \cite{Ko03}, all graphs with double edges vanish.}
\label{fig:omega12}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 1);
\vertex (e) at (-0.5, 2);
\vertex (f) at (0.5, 2);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -0.7);
\node[dot] (s) at (0, -1.4);
\vertex (x) at (-0.5,-0.25);
\vertex (q) at (0.5,-0.25);
\node[dot] (y) at (0,0.25);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(f) --[fermion] (d),
(d) -- [anti fermion] (y),
(y) --[anti fermion] (x),
(y) --[anti fermion] (q),
(z) -- (x),
(z) -- (q),
(z) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 1);
\vertex (e) at (-0.5, 2);
\vertex (f) at (0.5, 2);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -0.7);
\node[dot] (s) at (0, -1.4);
\vertex (x) at (-0.5,-0.25);
\vertex (q) at (0.5,-0.25);
\node[dot] (y) at (0,0.25);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(f) --[fermion] (d),
(d) -- [anti fermion] (y),
(y) --[ fermion] (x),
(y) --[ fermion] (q),
(z) -- (x),
(z) -- (q),
(z) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 1);
\vertex (e) at (0, 2);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -0.7);
\node[dot] (s) at (0, -1.4);
\vertex (x) at (0.5,0.5);
\vertex (p) at (-0.5,0.5);
\vertex (y) at (-2,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(p) -- (d),
(x) -- (d),
(p) --[fermion] m[dot],
(x) --[fermion] m,
m -- [anti fermion] (z),
(y) -- [fermion] (z),
(z) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 1);
\vertex (e) at (0, 2);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -0.7);
\node[dot] (s) at (0, -1.4);
\vertex (x) at (0.5,0.5);
\vertex (p) at (-0.5,0.5);
\vertex (y) at (-2,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(p) -- (d),
(x) -- (d),
(p) --[anti fermion] m[dot],
(x) --[anti fermion] m,
m -- [anti fermion] (z),
(y) -- [fermion] (z),
(z) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 0.25);
\vertex (e) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -0.7);
\node[dot] (s) at (0, -1.4);
\node[dot] (x) at (-0.5,-0.25);
\vertex (y) at (-1,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(x) -- (d),
(x) --[anti fermion] (d),
(x) --[anti fermion] (z),
(d) -- [ fermion] (z),
(y) -- [fermion] (x),
(z) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 0.25);
\vertex (e) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -0.5);
\node[dot] (s) at (0, -1.4);
\node[dot] (x) at (-0.5,-0.25);
\vertex (p) at (0.5,-0.9);
\vertex (q) at (-0.5,-0.9);
\vertex (y) at (-0.5,1.75);
\vertex (n) at (-1,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(x) -- (d),
(x) --[anti fermion] (d),
(d) -- [ fermion] (z),
(y) -- [fermion] (x),
(n) -- [fermion] (x),
(p) --[fermion] (s),
(p) -- (z),
(q) --[fermion] (s),
(q) -- (z)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 0.25);
\vertex (e) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -0.5);
\node[dot] (s) at (0, -1.4);
\node[dot] (x) at (-0.5,-0.25);
\vertex (p) at (0.75,-0.5);
\vertex (y) at (-0.5,1.75);
\vertex (n) at (-1,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(x) --[anti fermion] (z),
(d) -- [ fermion] (z),
(y) -- [fermion] (x),
(n) -- [fermion] (x),
(z) --[fermion] (s),
(d) -- (p),
(p) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 0.25);
\vertex (e) at (-1, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -0.5);
\node[dot] (s) at (0, -1.4);
\node[dot] (x) at (0.75,-0.1);
\vertex (p) at (0.75,-0.5);
\vertex (y) at (+0.75,1.75);
\vertex (n) at (+1.25,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (z),
(x) -- (d),
(x) --[anti fermion] (d),
(d) -- [ anti fermion] (z),
(y) -- [fermion] (x),
(n) -- [fermion] (x),
(z) --[fermion] (s),
(p) --[fermion] (s),
(p) -- (d)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 0.25);
\vertex (e) at (-1, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -0.5);
\node[dot] (s) at (0, -1.4);
\node[dot] (x) at (0.75,-0.1);
\vertex (y) at (0,1.75);
\vertex (n) at (+1.25,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (z),
(x) -- (d),
(x) --[ fermion] (d),
(d) -- [ anti fermion] (z),
(y) -- [fermion] (d),
(n) -- [fermion] (x),
(z) --[fermion] (s),
(x) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot] (d) at (0, 0.25);
\vertex (e) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (-0.5, -0.5);
\node[dot] (s) at (0, -1.4);
\node[dot] (x) at (0.75,-0.1);
\vertex (y) at (+0.75,1.75);
\vertex (n) at (+1.25,1.75);
\vertex (q) at (-0.5,1.75);
\vertex (r) at (-0.75, -0.9);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(x) -- (d),
(x) --[anti fermion] (d),
(s) -- [anti fermion] (z),
(y) -- [fermion] (x),
(n) -- [fermion] (x),
(r) --[fermion] (s),
(z) -- (r),
(d) --[fermion] (s),
(q) --[fermion] (z)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\caption{Third part of the graphs appearing in $\boldsymbol{\Omega}^{\mathbb{B}}_{3,0}$. Most of them do not contribute since, again by Kontsevich's vanishing lemma \cite{Ko03}, all graphs with double edges vanish.}
\label{fig:omega30_3}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\node[dot] (d) at (0, 0.65);
\vertex (f) at (0, 1.85);
\vertex (a) at (-3, -1.2);
\vertex (b) at (3, -1.2);
\node[dot] (z) at (0, -1);
\vertex (x) at (-0.5, -0.5);
\vertex (y) at (0.5, -0.5);
\diagram*{
(f) -- [fermion] (d),
(d) -- [fermion] m [dot, red],
(a) -- (b),
m -- (x),
m -- (y),
(x) -- [fermion] (z),
(y) -- [fermion] (z),
};
\end{feynman}
\draw (-2.7,-1.2) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -1.2);
\node[dot] (x) at (-0.5, -0.5);
\diagram*{
(d) -- [fermion] m [dot, red],
(a) -- (b),
m -- [fermion] (x),
(x) -- [fermion] (z),
m -- [fermion] (z)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -1.2);
\node[dot] (x) at (-0.5, -0.5);
\diagram*{
(d) -- [fermion] m [dot, red],
(a) -- (b),
m -- [anti fermion] (x),
(x) -- [ fermion] (z),
m -- [anti fermion] (z),
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\node[dot] (d) at (0, 0.65);
\vertex (f) at (0, 1.85);
\vertex (a) at (-3, -1.2);
\vertex (b) at (3, -1.2);
\node[dot] (z) at (0, -1);
\vertex (x) at (-0.5, -0.5);
\vertex (y) at (0.5, -0.5);
\diagram*{
(f) -- [fermion] (d),
(d) -- [fermion] m [dot, red],
(a) -- (b),
m -- [anti fermion] (x),
m -- [anti fermion] (y),
(x) -- (z),
(y) -- (z),
};
\end{feynman}
\draw (-2.7,-1.2) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\node[dot] (d) at (0, 0.65);
\vertex (f) at (0, 1.85);
\vertex (a) at (-3, -1.2);
\vertex (b) at (3, -1.2);
\node[dot] (z) at (0, -1);
\vertex (x) at (-0.5, -0.5);
\vertex (y) at (0.5, -0.5);
\diagram*{
(f) -- [fermion] (d),
(d) -- [anti fermion] m [dot, red],
(a) -- (b),
m -- [anti fermion] (x),
m -- [anti fermion] (y),
(x) -- (z),
(y) -- (z),
};
\end{feynman}
\draw (-2.7,-1.2) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -1.2);
\node[dot] (x) at (-0.5, -0.5);
\diagram*{
(d) -- [fermion] m [dot, red],
(a) -- (b),
m -- [ fermion] (x),
(x) -- [ anti fermion] (z),
m -- [ anti fermion] (z),
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\node[dot] (d) at (0, 0.65);
\vertex (f) at (0, 2.25);
\vertex (a) at (-3, -0.8);
\vertex (b) at (3, -0.8);
\node[dot] (x) at (-0.5, -0.5);
\vertex (y) at (0.5, -0.5);
\diagram*{
(f) -- [fermion] (d),
(d) -- [fermion] m [dot, red],
(a) -- (b),
m -- [ fermion] (x),
m -- [anti fermion] (y),
(x) -- (y),
};
\end{feynman}
\draw (-2.7,-0.8) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\vertex (d) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot] (z) at (0, -1.2);
\node[dot] (x) at (-0.5, -0.5);
\diagram*{
(d) -- [fermion] m [dot, red],
(a) -- (b),
m -- [ fermion] (x),
(x) -- [ fermion] (z),
m -- [ anti fermion] (z),
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\centering
\begin{tikzpicture}
\begin{feynman}[every blob={/tikz/fill=white!30,/tikz/inner sep=0.5pt}]
\node[dot] (d) at (0, 0.65);
\vertex (f) at (0, 2.25);
\vertex (a) at (-3, -0.8);
\vertex (b) at (3, -0.8);
\node[dot] (x) at (-0.5, -0.5);
\vertex (y) at (0.5, -0.5);
\diagram*{
(f) -- [fermion] (d),
(d) -- [anti fermion] m [dot, red],
(a) -- (b),
m -- [ fermion] (x),
m -- (y),
(x) -- [ anti fermion] (y),
};
\end{feynman}
\draw (-2.7,-0.8) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\caption{Graphs appearing in $\boldsymbol{\Omega}^{\mathbb{B}}_{2,1}$. Most of them do not contribute since, by Kontsevich's vanishing lemma \cite{Ko03}, all graphs containing a vertex at which exactly one arrow starts and exactly one arrow ends vanish, as do all graphs with double edges.}
\label{fig:omega21}
\end{figure}
\begin{figure}[h]
\centering
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 1);
\vertex (e) at (-0.5, 2);
\vertex (f) at (0.5, 2);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot, red] (z) at (0, -0.7);
\node[dot] (s) at (0, -1.4);
\vertex (x) at (-0.5,-0.25);
\vertex (q) at (0.5,-0.25);
\node[dot,red] (y) at (0,0.25);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(f) --[fermion] (d),
(d) -- [anti fermion] (y),
(y) --[anti fermion] (x),
(z) -- (x),
(y) --[anti fermion] (q),
(z) -- (q),
(z) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 1);
\vertex (e) at (-0.5, 2);
\vertex (f) at (0.5, 2);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot, red] (z) at (0, -0.7);
\node[dot] (s) at (0, -1.4);
\vertex (x) at (-0.5,-0.25);
\vertex (q) at (0.5,-0.25);
\node[dot,red] (y) at (0,0.25);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(f) --[fermion] (d),
(d) -- [anti fermion] (y),
(y) --[ fermion] (x),
(z) -- (x),
(y) --[ fermion] (q),
(z) -- (q),
(z) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 1);
\vertex (e) at (0, 2);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot, red] (z) at (0, -0.7);
\node[dot] (s) at (0, -1.4);
\vertex (x) at (0.5,0.5);
\vertex (p) at (-0.5,0.5);
\vertex (y) at (-2,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(p) -- (d),
(x) -- (d),
(p) --[fermion] m[dot,red],
(x) --[fermion] m,
m -- [anti fermion] (z),
(y) -- [fermion] (z),
(z) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 1);
\vertex (e) at (0, 2);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot, red] (z) at (0, -0.7);
\node[dot] (s) at (0, -1.4);
\vertex (x) at (0.5,0.5);
\vertex (p) at (-0.5,0.5);
\vertex (y) at (-2,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(p) -- (d),
(x) -- (d),
(p) --[anti fermion] m[dot,red],
(x) --[anti fermion] m,
m -- [anti fermion] (z),
(y) -- [fermion] (z),
(z) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 0.25);
\vertex (e) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot, red] (z) at (0, -0.7);
\node[dot] (s) at (0, -1.4);
\node[dot,red] (x) at (-0.5,-0.25);
\vertex (y) at (-1,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(x) -- (d),
(x) --[anti fermion] (d),
(x) --[anti fermion] (z),
(d) -- [ fermion] (z),
(y) -- [fermion] (x),
(z) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 0.25);
\vertex (e) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot, red] (z) at (0, -0.5);
\node[dot] (s) at (0, -1.4);
\node[dot,red] (x) at (-0.5,-0.25);
\vertex (p) at (0.5,-0.9);
\vertex (q) at (-0.5,-0.9);
\vertex (y) at (-0.5,1.75);
\vertex (n) at (-1,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(x) -- (d),
(x) --[anti fermion] (d),
(d) -- [ fermion] (z),
(y) -- [fermion] (x),
(n) -- [fermion] (x),
(p) --[fermion] (s),
(p) -- (z),
(q) --[fermion] (s),
(q) -- (z)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 0.25);
\vertex (e) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot, red] (z) at (0, -0.5);
\node[dot] (s) at (0, -1.4);
\node[dot,red] (x) at (-0.5,-0.25);
\vertex (p) at (0.75,-0.5);
\vertex (y) at (-0.5,1.75);
\vertex (n) at (-1,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(x) --[anti fermion] (z),
(d) -- [ fermion] (z),
(y) -- [fermion] (x),
(n) -- [fermion] (x),
(z) --[fermion] (s),
(d) -- (p),
(p) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 0.25);
\vertex (e) at (-1, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot, red] (z) at (0, -0.5);
\node[dot] (s) at (0, -1.4);
\node[dot,red] (x) at (0.75,-0.1);
\vertex (p) at (0.75,-0.5);
\vertex (y) at (+0.75,1.75);
\vertex (n) at (+1.25,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (z),
(x) -- (d),
(x) --[anti fermion] (d),
(d) -- [ anti fermion] (z),
(y) -- [fermion] (x),
(n) -- [fermion] (x),
(z) --[fermion] (s),
(p) --[fermion] (s),
(p) -- (d)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 0.25);
\vertex (e) at (-1, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot, red] (z) at (0, -0.5);
\node[dot] (s) at (0, -1.4);
\node[dot,red] (x) at (0.75,-0.1);
\vertex (y) at (0,1.75);
\vertex (n) at (+1.25,1.75);
\diagram*{
(a) -- (b),
(e) --[fermion] (z),
(x) -- (d),
(x) --[ fermion] (d),
(d) -- [ anti fermion] (z),
(y) -- [fermion] (d),
(n) -- [fermion] (x),
(z) --[fermion] (s),
(x) --[fermion] (s)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\hfill
\begin{subfigure}[b]{0.4\textwidth}
\begin{tikzpicture}
\begin{feynman}[]
\node[dot,red] (d) at (0, 0.25);
\vertex (e) at (0, 1.75);
\vertex (a) at (-3, -1.4);
\vertex (b) at (3, -1.4);
\node[dot, red] (z) at (-0.5, -0.5);
\node[dot] (s) at (0, -1.4);
\node[dot,red] (x) at (0.75,-0.1);
\vertex (y) at (+0.75,1.75);
\vertex (n) at (+1.25,1.75);
\vertex (q) at (-0.5,1.75);
\vertex (r) at (-0.75, -0.9);
\diagram*{
(a) -- (b),
(e) --[fermion] (d),
(x) -- (d),
(x) --[anti fermion] (d),
(s) -- [anti fermion] (z),
(y) -- [fermion] (x),
(n) -- [fermion] (x),
(r) --[fermion] (s),
(z) -- (r),
(d) --[fermion] (s),
(q) --[fermion] (z)
};
\end{feynman}
\draw (-2.7,-1.4) arc (180:0:2.7);
\end{tikzpicture}
\caption{}
\label{}
\end{subfigure}%
\caption{Graphs appearing in $\boldsymbol{\Omega}^{\mathbb{B}}_{0,3}$. Most of them do not contribute since, again by Kontsevich's vanishing lemma \cite{Ko03}, all graphs with double edges vanish.}
\label{fig:omega03}
\end{figure}
\end{appendix}
\clearpage
\printbibliography
\end{document}
\section{Introduction}
Understanding the limits of controllability of quantum and classical transport has long been considered a topic of great relevance in physics, chemistry, and biology \cite{hanggi2009,lambert2012,cao2020}. In particular, the study of novel materials that exhibit excitation-energy transfer pathways different from those available in nature has recently attracted a great deal of attention. Indeed, the control of transport phenomena at the nanoscale has shown an enormous potential for the development of new light-harvesting technologies for solar energy conversion \cite{photo_book}, enhanced sensing \cite{hodaei_2017,pirandola2018,chen2020}, and even for the design of electronic and photonic circuits capable of performing complex tasks with high efficiency \cite{aspuru2012,lee2018,wang2020,elshaari2020}. In this regard, quantum random walks have emerged as useful tools for the experimental simulation of non-trivial transport phenomena. In general, quantum networks have been implemented on different platforms, such as optical cavities \cite{caruso2011,viciani2015,beaudoin2017}, trapped ions \cite{zahringer2010,blatt2012,trautmann2018}, ultracold atomic lattices \cite{lewenstein2007,bloch2012,preiss2015,dadras2018}, superconducting circuits \cite{peropadre2016,chin2018,yan2019,kjaergaard2020}, and integrated photonics \cite{schreiber2011,sansoni2012,crespi2013,rechtsman2013,caruso2016,armando2018,harris2017,alan2019,magana2019multiphoton,you2020multiparticle}.
Unfortunately, these platforms do not seem to satisfy, at once, all desirable features of a universal simulator, namely full control of the system's parameters, low losses, and scalability. In this work, we demonstrate robust simulation of quantum transport using a state-of-the-art reconfigurable electronic network. This is achieved by constructing a unique mapping that allows us to establish a direct connection between the probability amplitudes of a quantum tight-binding system and the voltages of coupled electrical-oscillator networks. Our platform, which comprises ten fully reconfigurable RLC oscillators, is implemented by means of operational amplifiers and passive linear electrical components. This lets us operate, as many of the aforementioned platforms do, within the single-excitation Hilbert subspace, i.e., the space that describes the dynamics of a single particle in a tight-binding quantum network \cite{roberto-LPL}.
To test the versatility and precision of our platform, we have implemented different quantum transport protocols that demand specific site-frequencies and coupling conditions. In particular, we have explored the ballistic propagation of a single-excitation wavefunction in an ordered lattice and its localization due to stochastically-varying couplings (static disorder), the so-called Anderson localization \cite{anderson1958}. We have implemented the Su-Schrieffer-Heeger (SSH) model \cite{ssh1,ssh2}, where the proper use of alternating coupling values, and fixed site-energies, allows us to directly observe the emergence of one-dimensional edge states. Because of its relevance for scalable quantum computing and communication \cite{nielsen_book,liang2005}, we have implemented the protocol known as perfect state transfer \cite{chris2004,armando2013}, which makes use of a linear chain of qubits (or sites) where the couplings between them follow a precise square-root rule to coherently transfer quantum states.
Finally, we have tested our platform capabilities for mimicking the transport behavior of photosynthetic light-harvesting complexes by implementing the first simulation of the exciton dynamics in the B800 ring of the purple bacteria LH2 complex \cite{cheng2006}.
The dynamics of a single excitation in a system comprising $N$ coupled quantum oscillators is described by the Schr\"odinger equation $i\partial_t\ket{\psi\pare{t}} = \hat{H}\ket{\psi\pare{t}}$, where the Hamiltonian is given by
\begin{equation}\label{Eq:Hamiltonian}
\hat{H}=\sum_{n=1}^{N}\varepsilon_n\ket{n}\bra{n} + \sum_{n\neq m}^{N}J_{nm}\ket{n}\bra{m},
\end{equation}
with $\ket{n}$ denoting the energy density associated with the $n$th oscillator. The $n$th-site energies and the coupling between sites $n$ and $m$ are given by $\varepsilon_n$ and $J_{nm}$, respectively. Then, by expanding the time-dependent wavefunction in the site basis, i.e. $\ket{\psi\pare{t}} = \sum_{n}c_{n}\pare{t}\ket{n}$, it is straightforward to find that the Schr\"odinger equation leads to a set of first-order coupled differential equations of the form $i\partial_{t}c_{n} = \varepsilon_{n}c_{n} + \sum_{n\neq m}^{N}J_{nm}c_{m}$. In the weak-coupling limit ($J_{nm}\ll \varepsilon_{n}$), taking a further time derivative of this equation yields \cite{briggs2011,roberto2013}
\begin{equation}\label{Eq:second_dev_Q}
\frac{d^{2}c_{n}}{dt^{2}} = -\varepsilon_{n}^{2}c_{n} - \varepsilon_{n}\sum_{n\neq m}^{N}2J_{nm}c_{m}.
\end{equation}
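The weak-coupling step above can be checked with a short numerical comparison: for a symmetric two-site system, the exact Schr\"odinger eigenfrequencies are $\varepsilon \pm J$, while Eq. (\ref{Eq:second_dev_Q}) implies $\omega^{2}=\varepsilon^{2}\pm 2\varepsilon J$. The sketch below (illustrative dimensionless numbers, not the circuit values of the experiment) confirms that the two agree to order $(J/\varepsilon)^{2}$.

```python
import numpy as np

# illustrative two-site parameters in the weak-coupling regime, J << eps
eps, J = 1000.0, 5.0

# exact Schrodinger eigenfrequencies of a symmetric two-site system: eps +/- J
w_quantum = np.array([eps + J, eps - J])

# frequencies implied by the second-order equation: omega^2 = eps^2 +/- 2*eps*J
w_classical = np.sqrt(np.array([eps**2 + 2 * eps * J,
                                eps**2 - 2 * eps * J]))

# relative deviation scales as (J/eps)^2 / 2, negligible for weak coupling
rel_err = np.max(np.abs(w_classical - w_quantum) / w_quantum)
```

For $J/\varepsilon = 0.005$ the relative deviation is of order $10^{-5}$, which is why the second-order equation reproduces the quantum dynamics so faithfully in this regime.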
As we will show below, the importance of this expression resides in the fact that it allows us to establish a direct connection between the probability amplitudes $c_{n}$ of a quantum system and the voltages $V_{n}$ in an electrical-oscillator network. To do so, let us consider an array of $N$ inductively-coupled RLC oscillators (see Supplementary Materials for details), where R, L and C stand for resistor, inductor and capacitor, respectively. We can use Kirchhoff's laws to find that the equations of motion for the voltages $V_{n}\pare{t}$ across the capacitors $C_{n}$ are given by
\begin{equation}\label{Eq:second_dev_C}
\begin{split}
\frac{d^{2}V_{n}}{dt^{2}} = \frac{1}{C_{n}} & \left[- \frac{1}{R_{n}}\frac{dV_{n}}{dt} - \frac{V_{n}}{L_{n}} - \sum_{j=n+1}^{N}\frac{V_{n}-V_{j}}{L_{nj}}\right. \\
& \left. \hspace{2mm} + \sum_{j=1}^{j<n}\frac{V_{j} - V_{n}}{L_{jn}} \right],
\end{split}
\end{equation}
where $L_{nj}$ stands for the inductor that couples the $n$th and $j$th oscillators. Remarkably, by writing Eq. (\ref{Eq:second_dev_C}) in the non-dissipative limit, i.e. when $R\rightarrow \infty$, one can find that it is mathematically equivalent to Eq. (\ref{Eq:second_dev_Q}) with
\begin{equation}\label{Eq:Energies}
\varepsilon_{n}^{2} = \frac{1}{C_{n}}\pare{\frac{1}{L_{n}} + \sum_{m\neq n}^{N}\frac{1}{L_{nm}}}, \hspace{2mm} J_{nm} = -\frac{1}{2\varepsilon_{n}L_{nm}C_{n}}.
\end{equation}
This mapping between probability amplitudes, $c_{n}\pare{t}$, and voltages, $V_{n}\pare{t}$, is then completed by adding a non-Hermitian term to the Hamiltonian (\ref{Eq:Hamiltonian}), which accounts for the parasitic losses that are present in its experimental implementation. It is worth mentioning that this non-Hermitian term is determined by analyzing the time-dependent energy in the quantum and electronic models. While the total energy in the quantum system is given by $Q_{\text{q}}(t)=\sum_{n}\left|c_n\right|^2$, the energy stored in and across the ten coupled oscillators is obtained by writing $Q_{\text{cl}}(t)=\frac{1}{2}\sum_{m \neq n}\pare{C_nV_n^2+L_nI_n^2+L_{nm}I_{nm}^2}$, where $I_{n}$ and $I_{nm}$ stand for the currents passing through the oscillator and coupling inductors, respectively. By tracking time-traces for both energies, one can find that the energy decay rates remain in quantitative agreement if the term $\hat{H}_{\text{loss}} = -\frac{i}{2}\sum_{n}\Gamma_{n}\ket{n}\bra{n}$, with $\Gamma_{n} = 1/\pare{R_{n}C_{n}}$ describing the rate at which energy is dissipated, is included in the Hamiltonian (\ref{Eq:Hamiltonian}), see Supplementary Materials for details.
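The circuit-to-Hamiltonian mapping of Eq. (\ref{Eq:Energies}) is straightforward to evaluate numerically. The sketch below computes $\varepsilon_n$ and $J_{nm}$ for two inductively-coupled oscillators; the component values are illustrative stand-ins rather than the experiment's (only the 96 mH coupling inductor echoes the value quoted for the Anderson protocol).

```python
import numpy as np

def circuit_to_hamiltonian(C, L, L_coup):
    """Evaluate Eq. (4): site energies and couplings from circuit components.
    C[n], L[n]: oscillator capacitors/inductors; L_coup[n][m]: coupling
    inductor between oscillators n and m (np.inf where uncoupled)."""
    N = len(C)
    eps = np.array([np.sqrt((1 / L[n] + sum(1 / L_coup[n][m]
                    for m in range(N) if m != n)) / C[n]) for n in range(N)])
    J = np.zeros((N, N))
    for n in range(N):
        for m in range(N):
            if m != n:
                J[n, m] = -1 / (2 * eps[n] * L_coup[n][m] * C[n])
    return eps, J

# illustrative components: 10 nF, 1 mH oscillators, 96 mH coupling inductor
C = [10e-9, 10e-9]
L = [1e-3, 1e-3]
Lc = [[np.inf, 96e-3], [96e-3, np.inf]]
eps, J = circuit_to_hamiltonian(C, L, Lc)
```

With these values $|J_{01}|/\varepsilon_0 \approx 0.5\%$, i.e. the circuit sits safely in the weak-coupling regime that the mapping requires.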
Our current version of the electronic platform comprises ten fully reconfigurable RLC oscillators, where site-frequencies and couplings can be independently selected from a broad range of possible values (see Supplementary Materials and Refs. \cite{roberto2015,alan2017,roberto2018} for details). This allows us to explore different quantum transport protocols, including Anderson localization, the emergence of edge states in the SSH model, the coherent transfer of a quantum state, and the simulation of excitonic energy transport in photosynthetic light-harvesting complexes.
\begin{figure}[b!]
\centering
\includegraphics[width=8.25cm]{Figure1.pdf}
\caption{Time evolution of an excitation (voltage signal) initialized in the central site of a one-dimensional network comprising nine nearest-neighbor-coupled oscillators. The rows (from top to bottom) depict the evolution for increasingly larger degree of disorder: (a-b) $\Delta = 0$, (c-d) $\Delta = 0.5$, and (e-f) $\Delta = 0.9$. The results presented in all panels correspond to the average of 50 different disordered-array time evolutions.}
\label{Fig:Anderson_results}
\end{figure}
\emph{Anderson Localization.-} The localization of a particle's wavefunction in disordered lattices is one of the most fascinating effects in physics \cite{anderson1958}. This fundamental phenomenon, known as Anderson localization, arises from the interference of multiple scattering events. In this scenario, the wavefunction of a propagating particle in a lattice is affected by static disorder, introduced in either the lattice-site energies (diagonal disorder) or the couplings between them (off-diagonal disorder) \cite{armando2011}. We have implemented the Anderson localization protocol by making use of $N=9$ out of the ten available oscillators in our electronic platform. The oscillators are arranged in a one-dimensional nearest-neighbor-coupled lattice described by the Hamiltonian
\begin{eqnarray}
\hat{H}_{\text{AL}} & = & \sum_{n=1}^{N}\pare{\varepsilon_n-i\Gamma_{n}}\ket{n}\bra{n} + \sum_{n = 1}^{N-1}J_{n,n+1}\ket{n}\bra{n+1} \nonumber \\
& + & \sum_{n = 1}^{N-1}J_{n+1,n}\ket{n+1}\bra{n}.
\end{eqnarray}
All site frequencies $\varepsilon_n$ and losses $\Gamma_{n}$ are described by the values presented in the first row of Table I in the Supplementary Materials. The static disorder is introduced through the coupling between sites by randomly selecting the value of each coupling inductor from the uniform distribution $\cor{L_{x}\pare{1 - \Delta}, L_{x}\pare{1 + \Delta}}$, with $L_{x}=96.05$ mH and $\Delta = 0,0.5,0.9$ indicating the degree of the lattice disorder. Figure \ref{Fig:Anderson_results} shows the time evolution of an excitation (voltage signal) initialized in the central site of the one-dimensional lattice. The first column shows the quantum-mechanically-predicted population $\abs{c_{n}}^2$ evolution, whereas the second column shows our experimentally-obtained squared-voltage-signal $\abs{V_{n}}^2$ evolution. Notice that, as one might expect, ballistic propagation of the excitation is observed when disorder is absent ($\Delta = 0$), see Figs. \ref{Fig:Anderson_results}(a-b); while for strong disorder ($\Delta=0.9$) the excitation becomes localized in the central site of the lattice, as depicted in Figs. \ref{Fig:Anderson_results}(e-f). It is important to remark that, given the stochastic nature of Anderson localization, the results shown in each panel of Fig. \ref{Fig:Anderson_results} correspond to the average of 50 different disordered-array time evolutions.
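The disorder-averaged contrast between ballistic spreading and localization can be reproduced with a few lines of numerical propagation. The sketch below uses dimensionless illustrative parameters (not the circuit values of Table I) and the same 50-realization averaging; the uniform site energies are omitted since they only contribute a global phase.

```python
import numpy as np
from scipy.linalg import expm

def populations(J_bonds, t):
    """Evolve a single excitation launched at the centre of a
    nearest-neighbour chain and return the site populations."""
    N = len(J_bonds) + 1
    H = np.zeros((N, N))
    for n, J in enumerate(J_bonds):
        H[n, n + 1] = H[n + 1, n] = J
    psi0 = np.zeros(N, complex)
    psi0[N // 2] = 1.0
    return np.abs(expm(-1j * H * t) @ psi0) ** 2

rng = np.random.default_rng(0)
N, J0, t, runs = 9, 0.1, 20.0, 50
centre = {}
for Delta in (0.0, 0.9):   # ordered vs strongly disordered couplings
    bonds = J0 * rng.uniform(1 - Delta, 1 + Delta, size=(runs, N - 1))
    centre[Delta] = np.mean([populations(b, t)[N // 2] for b in bonds])
```

As in the experiment, the disorder-averaged population remaining on the central site is markedly larger for $\Delta=0.9$ than for the ordered chain, where the excitation spreads ballistically toward the edges.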
\emph{The Su-Schrieffer-Heeger (SSH) Model.-} One of the simplest models in which to study non-trivial topological phenomena, such as the emergence of topologically-protected edge states, is the Su-Schrieffer-Heeger (SSH) model \cite{ssh1,ssh2}. The SSH model describes the hopping of a spinless fermion on a one-dimensional lattice with staggered hopping amplitudes, as shown in the insets of Fig. \ref{Fig:SSH_results}. The chain consists of $N$ unit cells, each of which hosts two sites, one on sublattice $A$ and one on sublattice $B$. We neglect interactions between electrons; consequently, the dynamics of each electron is described by a single-excitation Hamiltonian of the form \cite{asboth2016}
\begin{eqnarray}\label{Eq:SSH}
\hat{H}_{\text{SSH}} &=& \sum_{n=1}^{N}\cor{\pare{\varepsilon_n-i\Gamma_{n}}\ket{n}\bra{n} + J_{\alpha}\pare{\ket{n,B}\bra{n,A} + \text{H.c.}}} \nonumber \\
&+& J_{\beta}\sum_{n = 1}^{N-1}\pare{\ket{n+1,A}\bra{n,B} + \text{H.c.}},
\end{eqnarray}
where the states of the chain are described by $\ket{n,A}$ and $\ket{n,B}$, with the electron's unit cell represented by $n\in\llav{1,2,...,N}$, and H.c. stands for the Hermitian conjugate.
\begin{figure}[t!]
\centering
\includegraphics[width=8.0cm]{Figure2.pdf}
\caption{Time evolution of a voltage signal (excitation) initialized in (a) the edge and (b) the bulk of a ten-oscillator Su-Schrieffer-Heeger (SSH) chain. The fast oscillating signals correspond to the experimentally-measured squared voltages $\abs{V_{n}}^2$, whereas the slowly-varying envelope (solid lines) shows the theoretically-predicted behavior of the quantum populations $\abs{c_{n}}^2$. The insets show the lattice structure, as well as the initial excitation conditions in each case.}
\label{Fig:SSH_results}
\end{figure}
Arguably, the most important feature of the SSH model is the emergence of topologically protected edge modes at the end of the chain, when the intracell coupling $J_{\alpha}$ exceeds the intercell coupling $J_{\beta}$ \cite{Liu2019}. We have experimentally produced these states by making use of all ten sites in our electronic platform. The parameters used for implementing the Hamiltonian in Eq. (\ref{Eq:SSH}) are described in the second row of Table I in the Supplementary Materials. Note that the coupling inductors satisfy the condition $J_{\alpha} = 2J_{\beta}$. Figure \ref{Fig:SSH_results} shows the time evolution of an excitation (voltage signal) initialized at (a) the edge and (b) the bulk of the chain. Note that the fast oscillating signals correspond to the experimentally-measured squared voltages $\abs{V_{n}}^2$. The slowly-varying envelope (solid lines) shows the theoretically-predicted behavior of the quantum populations $\abs{c_{n}}^2$. These results demonstrate two important facts: (1) the quantum probabilities follow the same dynamics as the envelope of the squared voltage signals, and (2) the small frequencies used in our device ($\sim$ $1.5$ kHz) allow for a rather simple extraction of the amplitude and phase of the signals. In general, this is a cumbersome task in experiments working at optical (or higher) frequencies. Finally, note from Fig. \ref{Fig:SSH_results} that the relation $J_{\alpha}=2J_{\beta}$ creates a condition in which any excitation initialized at the edge tends to stay there for a longer time than when energy is injected into any site of the bulk. This is precisely the result of the topological edge protection \cite{ssh1,ssh2}. It is important to remark that, as in other topological-insulator examples, this energy-localization effect becomes stronger as the system's size is increased \cite{bookLuo,pablo2020}.
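A minimal numerical check of edge physics in a dimerized chain is sketched below. It builds a ten-site SSH Hamiltonian with illustrative couplings (not the circuit values of Table I), dimerized so that the bonds at the chain's ends are the weak ones, and verifies the hallmark pair of near-zero-energy eigenmodes localized at the edges.

```python
import numpy as np

def ssh_hamiltonian(n_cells, J_intra, J_inter):
    """SSH chain of n_cells two-site unit cells with open boundaries;
    bonds alternate J_intra, J_inter, J_intra, ... along the chain."""
    N = 2 * n_cells
    H = np.zeros((N, N))
    for i in range(N - 1):
        J = J_intra if i % 2 == 0 else J_inter
        H[i, i + 1] = H[i + 1, i] = J
    return H

# dimerization with the weak bond at the chain's ends (illustrative values)
E, V = np.linalg.eigh(ssh_hamiltonian(5, J_intra=1.0, J_inter=2.0))
k0 = np.argmin(np.abs(E))            # eigenmode closest to zero energy
edge_weight = V[0, k0] ** 2 + V[-1, k0] ** 2
```

For these parameters the near-zero mode carries roughly 70\% of its weight on the two terminal sites, while swapping the two couplings (strong bond at the ends) gaps it out; this is the finite-size fingerprint of the edge protection discussed above.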
\emph{Coherent Transfer of States.-} A key milestone for the development of scalable quantum computing and communication is the coherent transfer of states among numerous sites in an extended network. Remarkably, it has been shown that if coherence is maintained across many sites, the transfer of quantum states can be obtained with extremely high efficiency \cite{nielsen_book,bose2003,bose2007,kay2010}. Indeed, this so-called \emph{perfect state transfer} can be observed by engineering a qubit-chain described by a Hamiltonian of the form \cite{chris2004,armando2013}
\begin{equation}\label{Eq:PT}
\begin{split}
\hat{H}_{\text{CT}} & = \sum_{n=1}^{N}\pare{\varepsilon_n-i\Gamma_{n}}\ket{n}\bra{n} + \sum_{n = 2}^{N} J_{n-1} \ket{n-1}\bra{n} \\
& + \sum_{n = 1}^{N-1} J_{n}\ket{n+1}\bra{n},
\end{equation}
where the couplings follow the square-root relation: $J_{n} = \frac{\pi}{2t_f}\sqrt{n\pare{N-n}}$, with $t_f$ describing the time that an initial one-site excitation takes to be transferred from site $n$ to the site $N-n+1$. It is worth mentioning that although the Hamiltonian in Eq. (\ref{Eq:PT}) was originally proposed for fermionic qubits \cite{bose2003}, its single-excitation nature suggests that it can be implemented in either quantum or classical platforms \cite{roberto-LPL}.
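The square-root coupling rule can be verified directly: in the lossless limit, Eq. (\ref{Eq:PT}) transfers an excitation from site 1 to site $N$ with unit probability at $t = t_f$. A minimal sketch (uniform site energies set to zero, since they only add a global phase):

```python
import numpy as np
from scipy.linalg import expm

def pst_hamiltonian(N, t_f):
    """Chain with the perfect-state-transfer couplings
    J_n = (pi / 2 t_f) * sqrt(n (N - n))."""
    H = np.zeros((N, N))
    for n in range(1, N):
        J = np.pi / (2 * t_f) * np.sqrt(n * (N - n))
        H[n - 1, n] = H[n, n - 1] = J
    return H

N, t_f = 7, 5.6                       # seven sites, transfer time as in the experiment
psi0 = np.zeros(N, complex)
psi0[0] = 1.0
psi_tf = expm(-1j * pst_hamiltonian(N, t_f) * t_f) @ psi0
fidelity = np.abs(psi_tf[-1]) ** 2    # population on the last site at t_f
```

The fidelity evaluates to 1 up to numerical precision; the 61\% measured in the experiment reflects the amplifier losses, which this lossless sketch deliberately omits (adding the $-i\Gamma_{n}$ diagonal terms would reproduce the reduction).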
\begin{figure}[t!]
\centering
\includegraphics[width=8.0cm]{Figure3.pdf}
\caption{Time evolution of a voltage signal (excitation) initialized in the first site of the chain ($n=1$). The inset shows the lattice structure, as well as the initial excitation condition. Note that after the characteristic time $t_{f}=5.6$ s, the signal injected into the first site of the chain ($n=1$) is coherently transferred to the final site ($n=7$) with an efficiency of 0.61, that is, 61\% of the total energy is recovered at the intended final site of the lattice.}
\label{Fig:PT_results}
\end{figure}
In an effort to provide a simple, low-cost platform for simulating quantum state-transfer protocols, we have implemented the Hamiltonian shown in Eq. (\ref{Eq:PT}). For this purpose, we have taken $N=7$ oscillators out of the ten available in the electronic platform. We have arranged them in a chain where all site frequencies, losses and couplings are characterized by the values presented in the third row of Table I in the Supplementary Materials. Note that, in order to maintain the same value for all site-frequencies, the capacitances of the oscillators take different values; this is because the corresponding frequencies strongly depend on the couplings, which change with site position.
Figure \ref{Fig:PT_results} shows the time evolution of an excitation (voltage signal) initialized in the first site of the chain (see the inset in Fig. \ref{Fig:PT_results}). Note that after $t_{f} = 5.6$ s, a signal injected into the first site of the chain ($n=1$) is coherently transferred to the final site ($n=7$) with an efficiency of 0.61. This means that 61\% of the total energy is recovered at the intended final site of the lattice. This rather small value is mainly due to the intrinsic losses ($>$1 k$\Omega$) of the general-purpose operational amplifiers (see Supplementary Materials), which could be reduced by making use of low-noise instrumentation amplifiers \cite{OPAMP_book}.
\emph{Photosynthetic Energy Transport.-} We finally present, for the first time, a simulation of the exciton dynamics in the B800 ring of the purple-bacteria LH2 complex. The LH2 complex of \emph{Rhodopseudomonas acidophila} carries 27 bacteriochlorophyll (BChl) molecules arranged in two concentric rings embedded in the surrounding proteins \cite{cheng2006}. Nine of the BChl molecules form the B800 ring (see inset in Fig. \ref{Fig:LH2_results}), which absorbs maximally at 800 nm, while the other 18 molecules form the B850 ring, which absorbs maximally at 850 nm. The BChl molecules in the B850 ring are closely packed, which leads to strong electronic coupling between adjacent pigments \cite{scholes_1999}, whereas the large distance between adjacent BChl molecules in the B800 ring results in a weak nearest-neighbor coupling.
In the single-excitation basis, the B800 ring can be described by a tight-binding Hamiltonian of the form \cite{cheng2006}
\begin{equation}\label{Eq:LH2}
\hat{H}_{\text{B}800} = \sum_{n=1}^{N}\pare{\varepsilon_{n}-i\Gamma_{n}}\ket{n}\bra{n} + \sum_{n \neq m}^{N} J_{nm} \ket{n}\bra{m},
\end{equation}
where the excitation energies of the BChl molecules and the coupling between them are given by $\varepsilon_{n} = 12450\;\text{cm}^{-1}$ and $J_{nm}=-27\;\text{cm}^{-1}$, respectively. To simulate the dynamics described by the Hamiltonian (\ref{Eq:LH2}), we first note that the rate at which BChl molecules interact is extremely fast compared to the characteristic frequencies of our platform. Therefore, we introduce a proper rescaling factor, which is found to be $\eta = 5.2615\times 10^{12}$. With this factor, we obtain an excitation energy of $\varepsilon_{n} = 446.4\;\text{Hz}$ and a coupling of $J_{nm}=-0.9\;\text{Hz}$. These parameters are set by making use of the values presented in the fourth row of Table I in the Supplementary Materials.
Figure \ref{Fig:LH2_results} shows the dynamics of a voltage signal (excitation) initialized in one of the sites of the B800 ring, as depicted in the inset. Note that the results are presented in a rescaled time-window (3.2 s), which corresponds to a $\sim 0.6$ ps time-evolution in the real molecular system. Moreover, note that the weak coupling between the BChl molecules in the B800 ring results in a slow propagation of the energy among the sites, thus making the system more susceptible to dissipation effects due to its interaction with an environment \cite{croce_book}.
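A minimal numerical sketch of the dynamics generated by Hamiltonian (\ref{Eq:LH2}) in the rescaled units can be written as follows; we use the $-i\Gamma_n/2$ loss convention of the Supplementary Materials and an assumed uniform loss rate $\Gamma = 1/(R_nC_n)$. With uniform losses, the dissipative part factors out of the evolution, so the total population decays exactly as $e^{-\Gamma t}$.

```python
import numpy as np

# Rescaled B800 parameters and an assumed uniform loss rate
N = 9
eps_n = 446.4                     # site excitation energy (rescaled)
J = -0.9                          # nearest-neighbour coupling (rescaled)
Gamma = 1.0 / (1e3 * 1.5e-3)      # assumed: 1/(R_n C_n) ~ 0.67 s^-1

# Hermitian part of the ring Hamiltonian (treated as ordinary frequencies)
H = np.zeros((N, N))
for n in range(N):
    H[n, n] = 2 * np.pi * eps_n
    H[n, (n + 1) % N] = H[(n + 1) % N, n] = 2 * np.pi * J

# Uniform losses factor out: c(t) = exp(-Gamma*t/2) * exp(-i*H*t) c(0)
w, V = np.linalg.eigh(H)
t = 3.2                           # rescaled time window (s)
c0 = np.zeros(N, dtype=complex); c0[0] = 1.0
c_t = np.exp(-Gamma * t / 2) * (V @ (np.exp(-1j * w * t) * (V.conj().T @ c0)))
pops = np.abs(c_t) ** 2
print(pops.sum())  # equals exp(-Gamma*t): uniform losses only rescale the norm
```

Only the overall decay is checked here; the site-to-site spreading over the 3.2 s window depends on the weak coupling $J$, as in Fig. \ref{Fig:LH2_results}.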
\begin{figure}[t!]
\centering
\includegraphics[width=8.0cm]{Figure4.pdf}
\caption{Time evolution of a voltage signal (excitation) initialized in one of the sites of the B800 ring ($n=1$). The inset shows the ring structure, as well as the initial excitation condition. Note that the results are presented in a rescaled time-window (3.2 s), which corresponds to a $\sim 0.6$ ps time-evolution in the real photosynthetic complex.}
\label{Fig:LH2_results}
\end{figure}
To conclude, we have presented a versatile, reconfigurable network for the simulation of quantum transport. Our platform overcomes major limitations of existing protocols for quantum simulation, namely preservation of coherence, full control of the system parameters, low losses, and scalability. We have exploited the negligible decoherence and versatility of our network to induce complex superpositions and interference effects, thus allowing us to simulate Hamiltonians attributed to important quantum transport dynamics. Because of its robustness and flexibility, our device emerges as a promising platform for the simulation of quantum transport phenomena.
This work was supported by CONACyT under the projects CB-2016-01/284372 and A1-S-8317, and by DGAPA-UNAM under the project PAPIIT-IN102920. We acknowledge funding from the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award 0000250387.
\section*{\large{Supplementary materials}}
In this document, we show how an array of $N$ inductively-coupled RLC oscillators can be electronically implemented by making use of functional blocks synthesized with operational amplifiers and passive linear electrical components. Under this scheme, we can independently select site frequencies, couplings and losses from a broad range of possible values. Additionally, we devote a section to discussing the time-dependent energy in the quantum and electronic models.
\section*{Circuit Design and Parameters}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=\textwidth]{Figure1s.pdf}
\end{center}
\protect\caption{Schematics for (a) the oscillators, (b) the couplings and (c) the digital control of the parameters. The input signals are indicated with red nodes, whereas the outputs are denoted with blue ones. $R_{fj}$, $C_{fj}$, $U_j$ and $M_j$ stand for resistors, capacitors, operational amplifiers and analog multipliers, respectively. The voltage across the capacitor, $V_n$, and the current through the oscillator inductor, $I_n$, are the electrical variables of interest, and the label $V_0$ refers to the interconnection of feedback signals. $V_{Rn}$, $V_{Ln}$, $V_{Cn}$ and $V_{IC}$ are voltage signals generated by the DACs, which allow one to configure the system parameters in an easy and accessible way via software. The communication between the microcontroller and the DACs is performed through the SPI protocol, which makes use of the digital signals LDAC, DATA and CLK, and of the configuration bits (LE$j$, $A_0$, $A_1$ and $A_2$) sent to the demultiplexers. (d) Printed circuit board of ten fully reconfigurable RLC oscillators.}
\label{fig:Circuit2}
\end{figure*}
Our experimental setup comprises a network of ten inductively-coupled RLC oscillators, whose dynamics are governed by Eq. (4) of the main manuscript. The oscillators and couplings are electronically implemented with active networks of operational amplifiers (OPAMPs) and passive linear electrical components. It is worth mentioning that the transfer functions of the basic OPAMP networks, namely adders, integrators and gains, implement specific mathematical operations. This allows us to interconnect them to build complex sequences of mathematical functions in which voltage signals represent physical variables of the system being studied. In this representation, the parameters of the system are mapped onto passive components within the active networks, such as resistors and capacitors. Consequently, any change in the parameters would require physically replacing components.
To avoid this, we have merged basic electrical networks with integrated analog multipliers to synthesize voltage-driven components whose values depend on an external voltage signal provided by digital-to-analog converters (DACs) that communicate with a master microcontroller through the serial peripheral interface (SPI) protocol. Because of this remarkable feature, the initial conditions and the system parameters $R_n$, $L_n$, $C_n$ and $L_{nm}$, which control the site frequencies and couplings, can be individually addressed within a wide range of values via software. More importantly, since coupling values can be set to zero, one can control the connection topology between oscillators by enabling or disabling the couplings.
Structurally, our experimental setup is divided into two parts, analog and digital. The former encompasses the oscillators and couplings, both built with purely analog electronic components. Figures \ref{fig:Circuit2}(a) and (b) show the general schemes of the electronic circuits for the oscillators and couplings, respectively. There, $R_{fj}$, $C_{fj}$, $U_j$ and $M_j$ stand for metal resistors (1\% tolerance), polyester capacitors, general-purpose LF353 operational amplifiers and AD633JN analog multipliers (four-quadrant voltage multipliers), respectively.
In Figure \ref{fig:Circuit2}, the input signals are indicated with red nodes, whereas the outputs are denoted with blue ones. $V_n$, $I_n$ and $I_{nm}$ are the electrical variables of interest, namely the voltage across the capacitor and the currents through the oscillator and coupling inductors, respectively. The label $V_0$ refers to the interconnection of an internal signal. In the experimental setup, the parameters of the oscillators and couplings, $R_n$, $L_n$, $C_n$ and $L_{nm}$, as well as the initial conditions $V_n(0)$, are defined by the values of $R_{fj}$, $C_{fj}$, $V_{Rn}$, $V_{Ln}$, $V_{Cn}$ and $V_{IC}$. These quantities satisfy the following relationships
\begin{eqnarray}
\frac{1}{R_n}&=&\frac{R_{f1}V_{Rn}\phi}{R_{f2}}, \nonumber \\
\frac{1}{L_n}&=&\frac{R_{f1}V_{Ln}\phi}{R_{f2}R_{f3}C_{f1}}, \nonumber \\
\frac{1}{C_n}&=&\frac{R_{f1}V_{Cn}\phi}{R_{f2}R_{f3}C_{f1}},
\\
\frac{1}{L_{nm}}&=&\frac{R_{f1}V_{L{nm}}\phi}{R_{f2}R_{f3}C_{f1}}, \nonumber \\
V_n(0)&=&\frac{R_{f1}V_{IC}}{R_{f2}}, \nonumber
\end{eqnarray}
where $\phi=1/10$ is a manufacturing default factor of the analog multiplier, integrated to avoid saturation of the output voltage. Remarkably, the $R_{fj}$ and $C_{fj}$ devices represent the core configuration of the electronic platform and fix the maximum values that the system parameters can take. Furthermore, the voltage signals $V_{Rn}$, $V_{Ln}$, $V_{Cn}$ and $V_{IC}$, coming from the DACs and taking discrete values between 0 V and 5 V with a resolution of 1.22 mV, allow one to independently select such parameters from a broad range of possible values within the defined interval. To operate the operational amplifiers and analog multipliers in a convenient bandwidth, the resistor and capacitor values are set to $R_{f1}=10$ $\text{k}\Omega$, $R_{f2}=5$ $\text{k}\Omega$, $R_{f3}=1$ $\text{k}\Omega$ and $C_{f1}=0.1$ $\mu$F. This configuration allows us to tune the site frequencies from $0$ Hz to $1590$ Hz. Finally, to energize the device, we make use of a stabilized DC power supply (Keithley triple-channel 2231A-30-3), which feeds the $\pm$12 V bias voltages $(+V_s,-V_s)$ to the OPAMPs and the analog multipliers.
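As a quick consistency check (a sketch assuming the standard LC resonance formula $f = 1/(2\pi\sqrt{L_nC_n})$), plugging the maximal DAC voltage of 5 V into the relations above reproduces the quoted upper limit of the tuning range:

```python
import math

# Core configuration of the platform (values quoted above)
Rf1, Rf2, Rf3, Cf1 = 10e3, 5e3, 1e3, 0.1e-6
phi, V_max = 1.0 / 10.0, 5.0     # multiplier factor and top DAC voltage

# Maximal synthesized reciprocal inductance/capacitance (same relation)
inv_L = Rf1 * V_max * phi / (Rf2 * Rf3 * Cf1)
inv_C = Rf1 * V_max * phi / (Rf2 * Rf3 * Cf1)

# Top site frequency f = sqrt(1/(L*C)) / (2*pi)
f_max = math.sqrt(inv_L * inv_C) / (2 * math.pi)
print(round(f_max))   # -> 1592, consistent with the quoted ~1590 Hz limit
```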
\begin{table*}[t!]
\setlength{\arrayrulewidth}{0.2mm}
\setlength{\tabcolsep}{0.5mm}
\setlength{\doublerulesep}{0.6mm}
\extrarowheight = -0.5ex
\renewcommand{\arraystretch}{1.7}
\begin{tabular}{c c c}
\arrayrulecolor{black} \hline
\rowcolor[HTML]{aad5ff}
\textbf{Experiment} & \makebox[5.5cm][c]{\textbf{Site-Frequencies and Losses}} & \textbf{Coupling Coefficients} \\
\rowcolor[HTML]{f0f8ff}
& $C_n = 1.50$ mF & \makebox[5.3cm][c]{$L_{nm} = \cor{L_{x}\pare{1 - \Delta}, L_{x}\pare{1 + \Delta}}$ } \\ \rowcolor[HTML]{f0f8ff}
Anderson Localization & $L_n = 3.35$ mH & $L_x = 96.05$ mH \\ \rowcolor[HTML]{f0f8ff}
& $R_n = 1\;\text{k}\Omega$ & $\Delta = 0,0.5,0.9$ \\
\arrayrulecolor{white}\hline \hline \rowcolor[HTML]{f0f8ff}
& $C_n = 1.50$ mF & $L_{\alpha} = 96.05$ mH \\ \rowcolor[HTML]{f0f8ff}
The SSH Model & $L_n = 3.35$ mH & $L_{\beta} = 192.1$ mH \\ \rowcolor[HTML]{f0f8ff}
& $R_n = 900\;\Omega$ & \\
\hline \hline \rowcolor[HTML]{f0f8ff}
& $C_1 = C_4 = C_7 = 7.54$ mF & $L_{12}=L_{67}=321.36$ mH \\ \rowcolor[HTML]{f0f8ff}
\makebox[4.6cm][c]{Coherent Transfer of States} & $C_2=C_3=C_5=C_6=7.58$ mF & $L_{23}=L_{56}=181.97$ mH \\ \rowcolor[HTML]{f0f8ff}
& $L_n = 1.11$ mH & $L_{34}=L_{45}=75.45$ mH \\ \rowcolor[HTML]{f0f8ff}
& $R_n = 1.5\;\text{k}\Omega$ & \\
\hline \hline \rowcolor[HTML]{f0f8ff}
& $C_n = 1.50$ mF & \\ \rowcolor[HTML]{f0f8ff}
Photosynthetic Transport & $L_n = 3.35$ mH & $L_{nm} = 806.90$ mH \\ \rowcolor[HTML]{f0f8ff}
& $R_n = 1\;\text{k}\Omega$ & \\ \arrayrulecolor{black} \hline
\end{tabular}
\caption{Electrical-component values used in the implementation of the quantum transport protocols presented in the main article.}
\label{Tab:table1}
\end{table*}
As for the digital part, we incorporate digital-to-analog converters (MCP4921, 12-bit resolution), demultiplexers (SN74HC138N, high-speed CMOS 3-to-8 line decoder) and a microcontroller (PIC18 family), which together handle the parameter and initial-condition configuration. Note that the electronic platform contains eighty-five configurable parameters, four per oscillator plus forty-five possible all-to-all couplings, each of them controlled by voltage signals coming from the DACs ($V_{Rn}$, $V_{Ln}$, $V_{Cn}$ and $V_{IC}$). To satisfy this demand, the enable/disable terminal of each DAC is connected to a digital bus managed by the demultiplexers. In this way, with only sixteen lines of the microcontroller we can select a particular DAC, by setting the proper configuration bits (LE$j$, $A_0$, $A_1$ and $A_2$) of the demultiplexers, as well as transmit a desired output voltage to the DAC through the SPI protocol using the control and data bits LDAC, DATA and CLK [see Fig. \ref{fig:Circuit2}(c)].
To ensure a robust connection among the electronic components, we designed and manufactured a printed circuit board (PCB, 40$\times$50 cm) on which the electronic devices were mounted and soldered. The PCB was designed in Altium software and fabricated with a computer-numerical-control (CNC) laser. The electronic realization of the ten fully reconfigurable RLC oscillators on the PCB is shown in Fig. \ref{fig:Circuit2}(d). The module controlling the parameters and initial conditions is indicated with a green square, whereas the analog oscillators and couplings are indicated with blue and red squares, respectively. Both the analog and digital modules of our experimental setup are energized through the main power connector (orange square). The acquisition of the electrical variables (magenta square) is performed with a Digilent oscilloscope (Analog Discovery 2), which directly transfers the information to a computer via USB.
To conclude this section, we finally provide (in Table \ref{Tab:table1}) detailed information regarding the electronic-component values needed for the implementation of the experiments described in the main text.
\section*{Energy Loss Estimation}
In this section we provide a thorough description of how the unavoidable losses, present in our electronic platform, can be accounted for in the quantum tight-binding network model. Let us consider the energy contained in the whole circuit, which is given by
\begin{equation}
Q_{\text{cl}}(t)=\frac{1}{2}\sum_{n} \pare{C_nV_n^2+L_nI_n^2}+\frac{1}{2}\sum_{n \neq m} L_{nm}I_{nm}^2.
\end{equation}
As one might expect, in the presence of losses (or resistance), the total energy of the system decays exponentially \cite{roberto2018}. Of course, in the absence of resistance, the total energy is conserved. Remarkably, in the quantum model, a quantity that follows the same behavior in the presence (or absence) of losses is the trace of the density matrix. We therefore define an energy-like measure of the quantum system through the expression
\begin{equation}
Q_{\text{q}}(t)=\sum_{n}\left|c_n\right|^2.
\end{equation}
Indeed, it is well known that for a closed quantum system, the trace of the system's density matrix is conserved, whereas for a system affected by a dissipative environment, the trace of the reduced density matrix (defined as the density matrix obtained after the environment's degrees of freedom are traced out) decays exponentially \cite{opensys_book}. Along this line, the simplest way of introducing a dissipative process in a quantum system is by including a non-Hermitian term, in the closed system's Hamiltonian, of the form
\begin{equation}
\hat{H}_{\text{loss}} = -\frac{i}{2}\sum_{n}\Gamma_{n}\ket{n}\bra{n},
\end{equation}
with $\Gamma_{n}$ describing the rate at which energy is dissipated to the system's environment.
\begin{figure}[t!]
\centering
\includegraphics[width=8.25cm]{Figure2s.pdf}
\caption{Time evolution of the normalized energy in the electronic platform (solid line) and the quantum dissipative model (dashed line). The dash-dotted line shows an exponential curve fitting. The parameters used for obtaining the energy-curves are those used in the experimental implementation of the SSH model.}
\label{Fig:Energy}
\end{figure}
Figure \ref{Fig:Energy} shows the time evolution of the normalized energy for the electrical circuit (blue solid line) and the trace of the open system's density matrix (red dashed line). Note that both curves follow the same exponentially-decaying behavior (described by the green dash-dotted curve fit), which allows us to establish the relation $\Gamma = 1/\pare{RC}$, where $R=R_n$ and $C=C_n$ stand for the resistance and capacitance of each electrical oscillator in the device, respectively. This important result is what allows us to include the effects of the electronic parasitic losses in the quantum tight-binding model.
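The relation $\Gamma = 1/\pare{RC}$ can be reproduced numerically. The sketch below assumes a parallel RLC topology for each oscillator (our assumption for illustration) with the SSH-experiment component values, propagates the linear circuit equations exactly via an eigendecomposition, and recovers an energy-decay rate of $1/(RC)$ in the underdamped regime.

```python
import numpy as np

# Parallel RLC oscillator with the SSH-experiment values (assumed topology)
R, L, C = 900.0, 3.35e-3, 1.5e-3

# State x = (V, I):  C dV/dt = -I - V/R,   L dI/dt = V
M = np.array([[-1.0 / (R * C), -1.0 / C],
              [1.0 / L, 0.0]])
w, P = np.linalg.eig(M)

def energy(t, x0):
    """Exact propagation x(t) = exp(M t) x0, then E = C V^2/2 + L I^2/2."""
    x = (P @ (np.exp(w * t) * np.linalg.solve(P, x0))).real
    return 0.5 * C * x[0] ** 2 + 0.5 * L * x[1] ** 2

x0 = np.array([1.0, 0.0])   # 1 V initial charge, no current
T = 4.0
rate = np.log(energy(0.0, x0) / energy(T, x0)) / T
print(rate, 1.0 / (R * C))  # fitted energy-decay rate vs Gamma = 1/(RC)
```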
\section{Introduction}
We consider a multivariate autoregressive process with normally distributed noise, defined through
\[
X_t^\varepsilon = AX_{t-1}^\varepsilon+\varepsilon\xi_t,\; t\ge 1,\; X_0^\varepsilon = x_0,
\]
where $X_t^\varepsilon\in \mathbb{R}^d$, $A$ is a real $d\times d$ matrix, $\varepsilon$ is a small positive parameter and $\{\xi_t\}_{t\ge 1}$ is an i.i.d. sequence of multivariate standard normal random variables. We will study the time until the process exits from a set of the type $\{x\in \mathbb{R}^d: |c^T x| < 1\}$ for some vector $c\in \mathbb{R}^d$. Subject to some conditions, we show that the expectation of this exit time is of the order of magnitude $\exp(1/(\varepsilon^2 c^T\Sigma_\infty c))$ for small values of $\varepsilon$, where $\varepsilon^2\Sigma_\infty$ is the covariance matrix of the stationary distribution of the process.
\bigskip\noindent
The corresponding univariate case, where $X_t^\varepsilon\in \mathbb{R}\;\forall t\ge 0$, has been investigated before, by Klebaner and Liptser in \cite{Kle} and by Ruths in \cite{Rut}. In \cite{Kle}, the authors proved a large deviation principle (LDP) for a class of past-dependent models. As an example, they used the univariate
autoregressive process $\{X_t^\varepsilon\}_{t\ge 0}$, where
\[
X_t^\varepsilon = aX_{t-1}^\varepsilon + \varepsilon\xi_t, X_0^\varepsilon = x_0,
\]
where $X_t^\varepsilon\in \mathbb{R}\;\forall t$, $|a|<1$, $\varepsilon$ is a positive parameter and $\{\xi_t\}_{t\ge 1}$ is an i.i.d. sequence of standard normal random variables. This process has a stationary distribution which is normal with mean 0 and variance $\varepsilon^2/(1-a^2)$. Klebaner and Liptser showed that the family of processes $\{X_t^\varepsilon\}_{t\ge 0}$ obeys an LDP with rate of speed $\varepsilon^2$ and rate function
\[
I(\bar u) = \left\{\begin{array}{ll}
\frac12\sum_{t=1}^\infty (u_t-au_{t-1})^2, & u_0 = x_0,\\
\infty , &\mbox{otherwise,}
\end{array}\right.
\]
where $\bar u = (u_0,u_1,\ldots )$, and that this implies that
\begin{equation}
\limsup_{\varepsilon\rightarrow 0}\varepsilon^2\log E\tau^\varepsilon \le\frac12 (1-a^2),
\end{equation}
where $\tau^\varepsilon := \min\{t\ge 1: |X_t^\varepsilon|\ge 1\}$. This upper bound is sharp: in \cite{Rut}, the corresponding lower bound was proved by another method. We also note the correspondence between this bound and the variance of the stationary distribution.
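This scaling can be probed with a crude Monte Carlo sketch (not part of \cite{Kle} or \cite{Rut}; for moderate $\varepsilon$, pre-exponential corrections push the estimate of $\varepsilon^2\log E\tau^\varepsilon$ somewhat above the asymptotic value $(1-a^2)/2$):

```python
import numpy as np

# Monte Carlo estimate of the exit time from (-1, 1) for AR(1)
rng = np.random.default_rng(1)
a, eps = 0.5, 0.25
paths, cap = 400, 100_000
x = np.zeros(paths)
tau = np.full(paths, cap)
alive = np.ones(paths, dtype=bool)
for t in range(1, cap + 1):
    idx = np.flatnonzero(alive)
    if idx.size == 0:
        break
    x[idx] = a * x[idx] + eps * rng.standard_normal(idx.size)
    exited = idx[np.abs(x[idx]) >= 1.0]
    tau[exited] = t
    alive[exited] = False

est = eps**2 * np.log(tau.mean())
print(est)   # compare with the asymptotic value (1 - a**2)/2 = 0.375
```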
\bigskip\noindent
In section 2 of this paper, we establish the corresponding large deviation principle for a family of multivariate processes. We also present a method to get a lower bound for the exit time of normally distributed processes. In section 3 we prove the asymptotics of the exit time of the multivariate autoregressive process. In section 4, we apply the same methods to get a result for the exit time from an interval for the univariate autoregressive process of order $n$, where
\[
X_t^\varepsilon = a_1X_{t-1}^\varepsilon+\ldots + a_nX_{t-n}^\varepsilon + \varepsilon\xi_t,\; t\ge n, \; X_0^\varepsilon = x_0,\ldots , X_{n-1}^\varepsilon = x_{n-1},
\]
where $a_1,\ldots ,a_n$ are real parameters and $\{\xi_t\}_{t\ge n}$ is a sequence of univariate standard normal random variables.
\section{Methods for upper and lower bounds}
In the first two parts of this section, we consider how to use the large deviation principle to get an upper bound of the asymptotics of an exit time from a set for a process. In the third part of the section, we consider another method for the corresponding lower bound, when the process has a normal distribution.
\subsection{The large deviation principle}
The following definition of the large deviation principle is taken from Varadhan (\cite{Var}), with the slight difference that we let the rate of speed be a function of $\varepsilon$ and call it $q(\varepsilon)$, as Klebaner and Liptser also did (\cite{Kle}). The large deviation principle (LDP) is then defined in the following way: Let $\{P_\varepsilon\}$ be a family of probability measures on the Borel subsets of a complete separable metric space $Z$. We say that $\{P_\varepsilon\}$ satisfies the large deviation principle with a rate function $I(\cdot)$ if there exists a function $I$ from $Z$ into $[0,\infty]$ satisfying the following conditions: $0\le I(z)\le\infty\;\forall z\in Z$, $I$ is lower semicontinuous, the set $\{z:I(z)\le l\}$ is a compact set in $Z$ for all $\;l<\infty$ and
\begin{eqnarray*}
& &\limsup_{\varepsilon\rightarrow 0} q(\varepsilon )\log P_\varepsilon (C) \le -\inf_{z\in C}I(z) \;\; \mbox{ for every closed set } C\subset Z \mbox{ and }\\
& &\liminf_{\varepsilon\rightarrow 0} q(\varepsilon )\log P_\varepsilon (G) \ge -\inf_{z\in G}I(z) \;\; \mbox{ for every open set } G\subset Z.
\end{eqnarray*}
We will consider a family of processes $\{X_t^\varepsilon\}_{t\ge 0}$, where $X_t^\varepsilon \in \mathbb{R}^d \;\forall t\ge 0$ and
\begin{equation}\label{process_definition}
X_t^\varepsilon = f(X_{t-1}^\varepsilon,\ldots , X_{t-n}^\varepsilon, \varepsilon\xi_t ) \mbox{ for }t\ge n,
\end{equation}
where $f: (\mathbb{R}^d)^{n+1}\mapsto \mathbb{R}^d$ is a continuous function, $\{\xi_t\}_{t\ge n}$ is an i.i.d. sequence of random variables in $\mathbb{R}^d$, $\varepsilon$ is a positive parameter and the starting values are $X_0^\varepsilon =x_0,\ldots , X_{n-1}^\varepsilon= x_{n-1}$. We will prove a large deviation principle for the family of probability measures induced by $\{X_t^\varepsilon\}_{t\ge 0}$, assuming that a large deviation principle for the family of probability measures induced by $\{\varepsilon\xi_n\}$ holds.
\begin{thm}\label{thm}
Assume that the family of probability measures induced by $\{\varepsilon\xi\}$, where $\xi$ is a copy of $\xi_n$, satisfies a large deviation principle with rate function $I_{\varepsilon\xi}(z)$ and rate of speed $q(\varepsilon)$. Then the large deviation principle holds for the family of probability measures induced by $\{X_t^\varepsilon\}_{t\ge 0}$ with the same rate of speed and the rate function
\[
I(y_0,y_1,y_2,\ldots ) = \inf_{\genfrac{}{}{0pt}{1}{\genfrac{}{}{0pt}{1}{z_t\in \mathbb{R}^d\;\forall t\ge n}{y_t = f(y_{t-1},\ldots , y_{t-n},z_t), t\ge n}}{y_0 = x_0,\ldots , y_{n-1}= x_{n-1}}} \sum_{t=n}^\infty I_{\varepsilon\xi}(z_t).
\]
\end{thm}
\noindent
Proof: We have assumed that an LDP holds for the family of probability measures induced by the family $\{\varepsilon\xi\}$, with the rate function $I_{\varepsilon\xi}(z)$ and the rate of speed $q(\varepsilon)$. By \cite{Lyn}, the LDP then holds for the family of probability measures induced by the family of vectors $\{\varepsilon\xi_t\}_{t=n}^N$ with the same rate of speed and the rate function
\[
I_{\{\varepsilon\xi_t\}_{t=n}^N}(z_n,\ldots , z_N) = \sum_{t=n}^{N} I_{\varepsilon\xi}(z_t),
\]
where $N$ is finite. By the Dawson-G\"{a}rtner theorem (see for example \cite{Dem}), it follows that the LDP holds for the family of probability measures induced by $\{\varepsilon\xi_t\}_{t\ge n}$ with rate of speed $q(\varepsilon)$ and rate function
\[
I_{\{\varepsilon\xi_t\}_{t\ge n}}(z_n,z_{n+1},\ldots) = \sum_{t=n}^\infty I_{\varepsilon\xi}(z_t).
\]
Now, since $f$ is continuous, the mapping $\{\varepsilon\xi_t\}_{t\ge n}\mapsto \{X_t^\varepsilon\}_{t\ge 0}$ is continuous in the space $(\mathbb{R}^d)^\infty$ with the metric $\rho(x,y) = \sum_{j\ge 1}2^{-j}\frac{||x_j-y_j||}{1+ ||x_j-y_j||}$, where $||\cdot ||$ denotes the Euclidean norm on $\mathbb{R}^d$. Thus, we can use the contraction principle (see for example \cite{Dem}). It implies that the LDP for the family of probability measures associated with the family $\{X_t^\varepsilon\}_{t\ge 0}$ holds with rate of speed $q(\varepsilon)$ and rate function
\[
I(y_0, y_1,y_2,\ldots ) = \inf_{\genfrac{}{}{0pt}{1}{\genfrac{}{}{0pt}{1}{(z_n,z_{n+1},\ldots )\in (\mathbb{R}^d)^\infty }{y_t = f(y_{t-1},\ldots , y_{t-n},z_t),t\ge n}}{y_0 = x_0,\ldots , y_{n-1}= x_{n-1}}}I_{ \{ \varepsilon\xi_t \}_{t\ge n}}(z_n,z_{n+1},\ldots ),
\]
where the infimum over the empty set is taken as $\infty$. We can write this rate function as
\[
I(y_0,y_1,y_2,\ldots ) = \inf_{\genfrac{}{}{0pt}{1}{\genfrac{}{}{0pt}{1}{z_t\in \mathbb{R}^d\;\forall t\ge n}{y_t = f(y_{t-1},\ldots , y_{t-n},z_t),t\ge n}}{y_0 = x_0,\ldots , y_{n-1}= x_{n-1}}} \sum_{t=n}^\infty I_{\varepsilon\xi}(z_t)
\]
and the proof is finished.
\subsection{Exit times with the large deviation principle}
We will later use the large deviation principle to get a bound for the exit time from a set for a certain process. To see how this will be done, let us for the moment define the exit time as
\begin{equation}
\tau := \min\{t\ge n: X_t^\varepsilon \notin \Omega\},
\end{equation}
where $X_t^\varepsilon$ is defined as in equation \ref{process_definition} and $\Omega$ is a set in $\mathbb{R}^d$. Assume that the starting points $x_0,\ldots ,x_{n-1}$ of the process belong to $\Omega$. For the expectation of the exit time, we have the following, where $M$ is any integer greater than or equal to $n$:
\begin{eqnarray*}
E_{x_0,\ldots ,x_{n-1}}(\tau )&\le & M + P(\tau > M-1)E_{x_0,\ldots ,x_{n-1}}(\tau | \tau > M-1)\\
&\le & M + P(\tau >M-1)[MP(\tau = M|\tau > M-1) \\
& & + (M+\sup_{x_0,\ldots ,x_{n-1}\in \Omega}E_{x_0,\ldots ,x_{n-1}}(\tau ))P(\tau > M|\tau > M-1)]\\
&\le & 2M + \sup_{x_0,\ldots ,x_{n-1}\in \Omega} E_{x_0,\ldots ,x_{n-1}}(\tau )\cdot \sup_{x_0,\ldots ,x_{n-1}\in \Omega}P_{x_0,\ldots ,x_{n-1}}(\tau > M).
\end{eqnarray*}
Thus, it holds that
\[
\sup_{x_0,\ldots ,x_{n-1}\in \Omega} E_{x_0,\ldots ,x_{n-1}}(\tau ) \le \frac{2M}{\inf_{x_0,\ldots ,x_{n-1}\in \Omega}P_{x_0,\ldots ,x_{n-1}}(\tau\le M)},
\]
or simply that
\begin{equation}
E_{x_0,\ldots ,x_{n-1}}(\tau ) \le \frac{2M}{\inf_{x_0,\ldots ,x_{n-1}\in \Omega}P_{x_0,\ldots ,x_{n-1}}(\tau\le M)}
\end{equation}
for any set of starting points $x_0,\ldots ,x_{n-1}\in \Omega$ and any integer $M\ge n$. If the infimum in the denominator is attained for the starting points $x_0^*,\ldots ,x_{n-1}^*\in\Omega$, the inequality above implies that
\begin{equation}\label{inequality}
\limsup_{\varepsilon\rightarrow 0} q(\varepsilon )\log E_{x_0,\ldots ,x_{n-1}}(\tau) \le -\lim_{\varepsilon\rightarrow 0} q(\varepsilon )\log P_{x_0^*,\ldots ,x_{n-1}^*}(\tau\le M),
\end{equation}
if the right hand side limit exists. Since
\[
P_{x_0^*,\ldots ,x_{n-1}^*}(\tau\le M) = P_{x_0^*,\ldots ,x_{n-1}^*}(X_t^\varepsilon \notin \Omega \mbox{ for some } t\in \{n,\ldots , M\}),
\]
the limit may be calculated if we have a large deviation principle for the family of probability measures induced by $\{X_t^\varepsilon\}_{t\ge 0}$ and if the function $f$ and the set $\Omega$ are suitable.
In sections 3 and 4 we will use this method to get upper bounds for exit times for multivariate autoregressive processes and univariate processes of order $n$, respectively.
\subsection{A lower bound for the exit time of normally distributed variables}
We now leave the large deviation principle for a moment, and consider a method to get a lower bound for an exit time. The following theorem gives a lower bound for the asymptotics of the mean exit time from a symmetric interval for a sequence of univariate normally distributed random variables $\{Y_t^\varepsilon\}_{t\ge 1}$, with mean zero and bounded variance. Thus, in this section we consider the exit time
\[
\tau_{(-1,1)} := \min\{t\ge 1: |Y_t^\varepsilon |\ge 1\}.
\]
\begin{thm}\label{thmlower}
Assume that $\{Y_t^\varepsilon\}_{t\ge 1}$ is a sequence of normally distributed random variables, all with mean 0, and that
\[
{\rm Var}(Y_t^\varepsilon ) \le q(\varepsilon )\sigma^2 \;\forall t\ge 1,
\]
for some $\sigma^2 > 0$ and some positive function $q(\varepsilon )$ where $\lim_{\varepsilon\rightarrow 0} q(\varepsilon) = 0$. Then
\[
\liminf_{\varepsilon\rightarrow 0}q(\varepsilon )\log E\tau_{(-1,1)} \ge \frac{1}{2\sigma^2}.
\]
\end{thm}
\noindent
Proof: Since $Y_t^\varepsilon$ has a normal distribution with mean zero and variance bounded by $q(\varepsilon )\sigma^2$, $E(e^{\lambda Y_t^\varepsilon})\le \exp(\frac12 \lambda^2q(\varepsilon )\sigma^2)$ $\forall t\ge 1$, which implies that also
\[
E(\cosh(\lambda Y_t^\varepsilon)) \le e^{\frac12 \lambda^2q(\varepsilon )\sigma^2}\;\; \forall t\ge 1.
\]
For any $N\ge 1$, we have the following Chernoff-type bound of the probability that the exit time is smaller than or equal to $N$:
\begin{eqnarray*}
P(\tau_{(-1,1)} \le N) &=& P(\max_{1\le t\le [N]}|Y_t^\varepsilon |\ge 1)
= P(\cosh (\lambda\max_{1\le t\le [N]}|Y_t^\varepsilon | )\ge \cosh\lambda )\\
& & \le (\cosh\lambda)^{-1} E(\cosh (\lambda\max_{1\le t\le [N]}|Y_t^\varepsilon | )),
\end{eqnarray*}
which holds for any positive $\lambda$. (In fact, since $\cosh$ is even, the bound holds for any $\lambda\in\mathbb{R}$.) Since
\[
\cosh (\lambda\max_{1\le t\le [N]}|Y_t^\varepsilon | ) = \max_{1\le t\le [N]}\cosh(\lambda Y_t^\varepsilon) \le \sum_{t=1}^{[N]}\cosh(\lambda Y_t^\varepsilon),
\]
it follows that
\[
E(\cosh (\lambda\max_{1\le t\le [N]}|Y_t^\varepsilon | )) \le [N]e^{\frac12 \lambda^2q(\varepsilon )\sigma^2} \le Ne^{\frac12 \lambda^2q(\varepsilon )\sigma^2}.
\]
Thus, we have the bound
\[
P(\tau_{(-1,1)}\le N) \le (\cosh\lambda)^{-1} Ne^{\frac12 \lambda^2q(\varepsilon )\sigma^2} \le 2e^{-\lambda}Ne^{\frac12 \lambda^2q(\varepsilon )\sigma^2},
\]
for any $\lambda > 0$. By choosing $\lambda$ in the optimal way, that is, as $\lambda = 1/(q(\varepsilon )\sigma^2)$, we get the bound
\[
P(\tau_{(-1,1)}\le N) \le 2N \exp(-\frac{1}{2q(\varepsilon ) \sigma^2}).
\]
Now, let $\delta$ be a small positive number and choose $N = \exp(\frac{1}{2q(\varepsilon ) \sigma^2}-\frac{\delta}{q(\varepsilon)})$. Then $P(\tau_{(-1,1)} >N) > 1-2\exp(-\frac{\delta}{q(\varepsilon )})$, which implies that
\[
E\tau_{(-1,1)} \ge NP(\tau_{(-1,1)} > N) \ge \exp(\frac{1}{2q(\varepsilon ) \sigma^2}-\frac{\delta}{q(\varepsilon )})(1-2\exp(-\frac{\delta}{q(\varepsilon )})),
\]
and thus
\[
\liminf_{\varepsilon\rightarrow 0}q(\varepsilon )\log E\tau_{(-1,1)} \ge \frac{1}{2\sigma^2}-\delta.
\]
Since this holds for any $\delta >0$, we get the lower bound
\[
\liminf_{\varepsilon\rightarrow 0}q(\varepsilon )\log E\tau_{(-1,1)} \ge \frac{1}{2\sigma^2},
\]
and the proof is finished.
\medskip
\noindent
{\bf Remark:} If we wanted to consider a one-sided exit time, for example the time until $Y_t^\varepsilon > 1$, we could simply use the exponential function instead of the hyperbolic cosine in the argument above. The resulting lower bound would be the same.
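For the special case of i.i.d. variables $Y_t^\varepsilon \sim N(0, q(\varepsilon)\sigma^2)$, which Theorem \ref{thmlower} also covers, the intermediate bound $P(\tau_{(-1,1)}\le N)\le 2N\exp(-1/(2q(\varepsilon)\sigma^2))$ from the proof can be compared against the exact probability $1-(1-p)^N$ with $p = P(|Y_1^\varepsilon|\ge 1)$; the sketch below confirms the bound numerically.

```python
import math

def tail_exact(N, s):
    # P(max_{t<=N} |Y_t| >= 1) for i.i.d. Y_t ~ N(0, s^2)
    p = math.erfc(1.0 / (s * math.sqrt(2.0)))
    return 1.0 - (1.0 - p) ** N

def tail_bound(N, s):
    # the Chernoff-type bound from the proof, with q(eps)*sigma^2 = s^2
    return 2.0 * N * math.exp(-1.0 / (2.0 * s * s))

for s in (0.2, 0.3, 0.4):
    for N in (10, 100, 1000):
        assert tail_exact(N, s) <= tail_bound(N, s)
print("bound holds on all tested cases")
```

The inequality is in fact a union bound ($1-(1-p)^N\le Np$) combined with the Gaussian tail estimate $p\le 2e^{-1/(2s^2)}$, so it can never fail here.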
\section{Exit times for a multivariate autoregressive process}
In this section, we use the methods described in section 2 to show that the expected exit time of a multivariate autoregressive process from a set $\{x\in \mathbb{R}^d: |c^T x| < 1\}$, where $c\in \mathbb{R}^d$ is a nonzero vector, is of the order of magnitude $\exp(1/(\varepsilon^2 c^T\Sigma_\infty c))$, where $\varepsilon^2\Sigma_\infty$ is the covariance matrix of the stationary distribution of the process.
\subsection{A multivariate autoregressive process}
By a multivariate autoregressive process, we mean a process $\{X_t^\varepsilon\}_{t\ge 0}$, such that
\begin{equation}\label{multivariate_ar}
X_t^\varepsilon = AX_{t-1}^\varepsilon + \varepsilon\xi_t, \; X_0^\varepsilon = x_0,
\end{equation}
where $X_t^\varepsilon \in \mathbb{R}^d\;\forall t$, $A$ is a real $d\times d$ matrix, $\varepsilon$ is a positive parameter and $\{\xi_t\}_{t\ge 1}$ is an i.i.d. sequence of multivariate normal random variables in $\mathbb{R}^d$, with mean zero and covariance matrix $I$ (the unit matrix). For any $t\ge 1$, $X_t^\varepsilon$ has a multivariate normal distribution with mean $EX_t^\varepsilon = A^tx_0$, where $x_0$ is the starting point, and covariance matrix $\varepsilon^2\Sigma_t$, where
\begin{equation}
\Sigma_t = A\Sigma_{t-1}A^T + I, \;\; t\ge 2,
\end{equation}
and $\Sigma_1 = I$. The matrix $\Sigma_t$ can also be written as the sum
\begin{equation}
\Sigma_t = \sum_{i=0}^{t-1} A^i(A^T)^i, \;\; t\ge 1.
\end{equation}
Throughout, we will assume that all eigenvalues of $A$ have absolute values less than one. The process $\{X_t^\varepsilon\}_{t\ge 0}$ then has a stationary distribution, which is multivariate normal with mean $(0,0,\ldots ,0)^T$ and covariance matrix $\varepsilon^2\Sigma_\infty$, where $\Sigma_\infty$ satisfies
\begin{equation}\label{covmatrixequation}
\Sigma_\infty = A\Sigma_\infty A^T + I.
\end{equation}
Of course, the matrix $\Sigma_\infty$ can also be expressed as the sum
\begin{equation}
\Sigma_\infty = \sum_{i=0}^\infty A^i(A^T)^i.
\end{equation}
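In practice, $\Sigma_\infty$ is conveniently approximated by iterating the recursion $\Sigma_t = A\Sigma_{t-1}A^T + I$ until it stabilizes; since all eigenvalues of $A$ lie inside the unit circle, the iteration converges geometrically. The following pure-Python sketch (the helper functions and the stopping rule are illustrative choices, not part of the text) does this for the $2\times 2$ matrix used in the simulation example later in this section:

```python
# Approximate Sigma_inf as the fixed point of Sigma <- A Sigma A^T + I.
# Pure-Python matrix helpers; A is the 2x2 example matrix from the text.

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

def stationary_cov(A, tol=1e-14, max_iter=100000):
    """Iterate Sigma_t = A Sigma_{t-1} A^T + I until the update is below tol."""
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    sigma = [row[:] for row in I]                      # Sigma_1 = I
    for _ in range(max_iter):
        nxt = mat_mul(mat_mul(A, sigma), transpose(A))
        nxt = [[nxt[i][j] + I[i][j] for j in range(n)] for i in range(n)]
        if max(abs(nxt[i][j] - sigma[i][j])
               for i in range(n) for j in range(n)) < tol:
            return nxt
        sigma = nxt
    return sigma

A = [[0.8, 1.0], [0.0, 0.5]]   # eigenvalues 0.8 and 0.5, inside the unit circle
Sigma_inf = stationary_cov(A)
```

For this $A$ the iteration reproduces the exact entries $925/81$, $10/9$ and $12/9$ quoted for $\Sigma_\infty$ in the simulation example below.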
\subsection{Exit times for the multivariate autoregressive process}
For the multivariate autoregressive process $\{X_t^\varepsilon\}_{t\ge 0}$, we will consider the exit time
\begin{equation}
\tau:=\min\{t\ge 1: |c^TX_t^\varepsilon |\ge 1\},
\end{equation}
where $c$ is a vector in $\mathbb{R}^d$, $c\neq (0,\ldots , 0)^T$. We will find the limit of $\varepsilon^2\log E\tau$ as $\varepsilon\rightarrow 0$, by using the methods described in section 2. For the upper bound, we will use the large deviation principle, so we need the following corollary.
\begin{cor}\label{corollary}
The family of probability measures induced by $\{X_t^\varepsilon\}_{t\ge 0}$, where $X_t^\varepsilon$ is defined as in equation \ref{multivariate_ar}, satisfies the large deviation principle with rate of speed $q(\varepsilon) = \varepsilon^2$ and rate function
\[
I(y_0,y_1,\ldots ) = \frac12 \sum_{t=1}^\infty (y_t-Ay_{t-1})^T(y_t-Ay_{t-1}),
\]
where $y_0 = x_0$.
\end{cor}
\noindent
Proof: By using Cram\'{e}r's theorem (see for example \cite{Dem}), one can show that the family of probability measures induced by the family $\{\varepsilon\xi\}$, where $\xi$ is multivariate normal with mean zero and covariance matrix $I$, satisfies the LDP with rate of speed $\varepsilon^2$ and rate function
\[
I_{\varepsilon\xi} (z) = \frac12 z^Tz, \; z\in \mathbb{R}^d.
\]
By using Theorem \ref{thm}, we can deduce that the family of probability measures induced by $\{X_t^\varepsilon\}_{t\ge 0}$ satisfies the LDP with the same rate of speed $\varepsilon^2$ and rate function
\begin{eqnarray*}
I(y_0,y_1,\ldots ) &=& \inf_{\genfrac{}{}{0pt}{1}{\genfrac{}{}{0pt}{1}{z_t\in \mathbb{R}^d}{y_t = Ay_{t-1} + z_t}}{y_0 = x_0}} \sum_{t=1}^\infty \frac12 z_t^Tz_t \\
&=& \frac12\sum_{t=1}^\infty (y_t-Ay_{t-1})^T(y_t-Ay_{t-1}),
\end{eqnarray*}
where $y_0 = x_0$.
\medskip\noindent
For the exit time of a multivariate autoregressive process starting at the origin, we will prove the following theorem:
\begin{thm}\label{ar_thm}
For the exit time $\tau = \min\{t\ge 1: |c^TX_t^\varepsilon|\ge 1\}$, where $\{X_t^\varepsilon\}_{t\ge 0}$ is the multivariate autoregressive process defined in equation \ref{multivariate_ar}, and $x_0 = (0,\ldots ,0)^T$,
\[
\lim_{\varepsilon\rightarrow 0}\varepsilon^2\log E\tau = \frac{1}{2c^T\Sigma_\infty c},
\]
where $\varepsilon^2\Sigma_\infty$ is the covariance matrix of the stationary distribution of the process.
\end{thm}
\noindent
Proof: The theorem follows from lemmas \ref{upper} and \ref{lower} below.
\begin{lemma}\label{upper}
For $\{X_t^\varepsilon\}_{t\ge 0}$ and $\tau$ as in Theorem \ref{ar_thm}, we have
\[
\limsup_{\varepsilon\rightarrow 0}\varepsilon^2\log E_{x_0}\tau \le \frac{1}{2c^T\Sigma_\infty c},
\]
for any $x_0$ such that $|c^Tx_0|<1$.
\end{lemma}
\noindent
Proof: Consider the exit time $\tau = \min\{t\ge 1: |c^TX_t^\varepsilon|\ge 1\}$. This means that we consider exits from the set $\Omega := \{x\in \mathbb{R}^d: |c^Tx| < 1\}$. For this set $\Omega$, $\inf_{x_0\in \Omega}P_{x_0}(\tau\le M) = P_{(0,\ldots ,0)^T}(\tau\le M)$, since $X_t^\varepsilon$ has a normal distribution with mean $A^tx_0$. Thus, inequality \ref{inequality} in section 2.2 implies that
\[
\limsup_{\varepsilon\rightarrow 0}\varepsilon^2 \log E_{x_0}\tau \le -\lim_{\varepsilon\rightarrow 0}\varepsilon^2\log P_{(0,\ldots ,0)^T}(\tau\le M),
\]
where the right hand side limit can be calculated with the LDP that was proven in corollary \ref{corollary}. Since
\[
\{\tau\le M\} = \{\max_{1\le t\le M}|c^TX_t^\varepsilon |\ge 1\},
\]
we have
\[
\lim_{\varepsilon\rightarrow 0}\varepsilon^2\log P_{(0,\ldots ,0)^T}(\tau\le M) = -\inf_{\genfrac{}{}{0pt}{1}{\max_{1\le t\le M}|c^Ty_t |\ge 1,}{y_0 = (0,\ldots ,0)^T}} \frac12\sum_{t=1}^\infty (y_t-Ay_{t-1})^T(y_t-Ay_{t-1}).
\]
Consider this infimum. The following holds:
\begin{eqnarray*}
& &\inf_{\genfrac{}{}{0pt}{1}{\max_{1\le t\le M}|c^Ty_t |\ge 1}{y_0 = (0,\ldots ,0)^T}} \frac12\sum_{t=1}^\infty (y_t-Ay_{t-1})^T(y_t-Ay_{t-1})\\
&=& \inf_{1\le N\le M}\left( \inf_{\genfrac{}{}{0pt}{1}{|c^Ty_N |\ge 1}{y_0 = (0,\ldots ,0)^T}} \frac12\sum_{t=1}^\infty (y_t-Ay_{t-1})^T(y_t-Ay_{t-1})\right)\\
&=& \inf_{1\le N\le M}\left( \inf_{\genfrac{}{}{0pt}{1}{|c^Ty_N |\ge 1}{y_0 = (0,\ldots ,0)^T}} \frac12\sum_{t=1}^N (y_t-Ay_{t-1})^T(y_t-Ay_{t-1})\right),
\end{eqnarray*}
where the last equality holds because we can choose $y_t = Ay_{t-1}$ for $t>N$. We can write $y_N$ as the telescoping sum $\sum_{t=1}^N A^{N-t}(y_t-Ay_{t-1})$, when $y_0=(0,\ldots ,0)^T$. By using the Cauchy-Schwarz inequality, we get
\begin{eqnarray*}
& & \left(\sum_{t=1}^N (y_t-Ay_{t-1})^T(y_t-Ay_{t-1})\right) \cdot \left(\sum_{t=1}^N c^TA^{N-t}(A^{N-t})^Tc \right)\\
&\ge & \left(\sum_{t=1}^N c^TA^{N-t}(y_t-Ay_{t-1})\right)^2 = (c^Ty_N)^2.
\end{eqnarray*}
Equality in the Cauchy-Schwarz inequality is attained when $y_t-Ay_{t-1} = K(A^{N-t})^Tc$, $t=1,\ldots , N$, for any constant $K\in \mathbb{R}$. This holds when
\[
y_t = K\left(\sum_{i=0}^{t-1}A^i(A^T)^i\right)(A^{N-t})^Tc = K\Sigma_t(A^{N-t})^Tc, \mbox{ for } t= 1,\ldots , N,
\]
where $\Sigma_t$ is defined as in section 3.1. By choosing $K = 1/(|c^T\Sigma_Nc|)$, we get $|c^Ty_N|= 1$. Thus, we have now shown that
\begin{eqnarray*}
\inf_{\genfrac{}{}{0pt}{1}{|c^Ty_N |\ge 1,}{y_0 = (0,\ldots ,0)^T}} \frac12\sum_{t=1}^N (y_t-Ay_{t-1})^T(y_t-Ay_{t-1})
= \frac{1}{2\sum_{t=1}^N c^TA^{N-t}(A^T)^{N-t}c} = \frac{1}{2c^T\Sigma_N c}.
\end{eqnarray*}
Since $\Sigma_t = \sum_{i=0}^{t-1}A^i(A^T)^i$ and
\[
c^T\Sigma_tc = c^T\Sigma_{t-1}c + c^TA^{t-1}(A^{t-1})^Tc \ge c^T\Sigma_{t-1}c \;\;\forall t = 2,\ldots , M,
\]
$\{c^T\Sigma_t c\}_{t\ge 1}$ is a positive and increasing sequence. It follows that
\[
\inf_{1\le N\le M} \frac{1}{2c^T\Sigma_Nc} = \frac{1}{2c^T\Sigma_Mc},
\]
and we have shown that
\[
\limsup_{\varepsilon\rightarrow 0}\varepsilon^2\log E_{x_0}\tau \le \frac{1}{2c^T\Sigma_Mc}.
\]
Since this inequality holds for any integer $M\ge 1$, we actually have
\begin{equation}
\limsup_{\varepsilon\rightarrow 0}\varepsilon^2\log E_{x_0}\tau \le \frac{1}{2c^T\Sigma_\infty c},
\end{equation}
and the proof is finished.
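The variational step in this proof can be checked numerically: for the minimizing path $y_t = K\Sigma_t(A^{N-t})^Tc$ with $K = 1/(c^T\Sigma_N c)$, the action $\frac12\sum_{t=1}^N(y_t-Ay_{t-1})^T(y_t-Ay_{t-1})$ should equal $1/(2c^T\Sigma_N c)$ and $c^Ty_N$ should equal $1$. A small pure-Python check (the concrete $A$, $c$ and $N$ are illustrative choices):

```python
def mm(X, Y):  # matrix product
    return [[sum(X[i][k]*Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mv(X, v):  # matrix-vector product
    return [sum(X[i][k]*v[k] for k in range(len(v))) for i in range(len(X))]

def tr(X):
    return [list(r) for r in zip(*X)]

A = [[0.8, 1.0], [0.0, 0.5]]   # illustrative stable matrix
c = [1.0, 1.0]                 # illustrative direction
N = 6                          # exit at step N

# powers A^0..A^N and partial covariances Sigma_t = sum_{i=0}^{t-1} A^i (A^T)^i
pow_A = [[[1.0, 0.0], [0.0, 1.0]]]
for _ in range(N):
    pow_A.append(mm(A, pow_A[-1]))
Sigma = [None] * (N + 1)
S = [[0.0, 0.0], [0.0, 0.0]]
for t in range(1, N + 1):
    P = mm(pow_A[t - 1], tr(pow_A[t - 1]))
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    Sigma[t] = [row[:] for row in S]

w = mv(Sigma[N], c)
cSNc = c[0]*w[0] + c[1]*w[1]
K = 1.0 / cSNc

# the candidate minimizer y_t = K Sigma_t (A^{N-t})^T c and its action
y = [[0.0, 0.0]]               # y_0 = 0
for t in range(1, N + 1):
    v = mv(Sigma[t], mv(tr(pow_A[N - t]), c))
    y.append([K * v[0], K * v[1]])
action = sum(
    0.5 * sum((y[t][i] - mv(A, y[t - 1])[i]) ** 2 for i in range(2))
    for t in range(1, N + 1))
```

As the proof shows, the increments $y_t - Ay_{t-1}$ collapse to $K(A^{N-t})^Tc$, so the computed action matches $1/(2c^T\Sigma_N c)$ to machine precision.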
\begin{lemma}\label{lower}
For $\{X_t^\varepsilon\}_{t\ge 0}$ and $\tau$ as in Theorem \ref{ar_thm}, we have
\[
\liminf_{\varepsilon\rightarrow 0}\varepsilon^2\log E_{(0,\ldots ,0)^T}(\tau ) \ge \frac{1}{2c^T\Sigma_\infty c},
\]
where $E_{(0,\ldots ,0)^T}$ denotes that the starting point of the process is $x_0 = (0,\ldots ,0)^T$.
\end{lemma}
\noindent
Proof: For each $t\ge 1$, $c^TX_t^\varepsilon$ has a univariate normal distribution with mean zero (since $x_0$ is now chosen to be the zero vector) and variance ${\rm Var}(c^TX_t^\varepsilon) = \varepsilon^2c^T\Sigma_t c$. In the proof of lemma \ref{upper}, we showed that $\{c^T\Sigma_t c\}_{t\ge 1}$ is an increasing sequence, and thus we have
\[
{\rm Var}(c^TX_t^\varepsilon) \le \varepsilon^2c^T\Sigma_\infty c \;\;\forall t\ge 1.
\]
The statement of the lemma then follows immediately from theorem \ref{thmlower}.
\bigskip\noindent
We illustrate the result in theorem \ref{ar_thm} by simulating a bivariate process $\{X_t^\varepsilon\}_{t\ge 0}$, where
\[
X_t^\varepsilon = AX_{t-1}^\varepsilon+\varepsilon\xi_t,
\]
where $X_t^\varepsilon = (X_{t,1}^\varepsilon,X_{t,2}^\varepsilon)^T\in \mathbb{R}^2 \;\forall t\ge 0$, $X_0^\varepsilon = (0,0)^T$, $\{\xi_t\}_{t\ge 1}$ is an i.i.d. sequence of bivariate standard normal random variables and $A = \left( \begin{array}{cc}0.8 & 1\\ 0 & 0.5\end{array}\right)$. Since the eigenvalues of $A$ (0.8 and 0.5) are less than one in absolute value, the process has a stationary distribution. Let $c=(1,1)^T$ and consider the exit time $\tau = \min\{t\ge 1: |X_{t,1}^\varepsilon+X_{t,2}^\varepsilon|\ge 1\}$. The matrix $\Sigma_\infty$ is calculated from equation \ref{covmatrixequation}. We get
\[
\Sigma_\infty = \left( \begin{array}{cc}\frac{925}{81} & \frac{10}{9}\\ \frac{10}{9} & \frac{12}{9}\end{array}\right).
\]
Theorem \ref{ar_thm} says that $\lim_{\varepsilon\rightarrow 0}\varepsilon^2\log E\tau = 1/(2c^T\Sigma_\infty c) = 81/2426 \approx 0.03339$. We use the statistical programming package R to simulate paths of the process for a few values of $\varepsilon$. For each value of $\varepsilon$, 100 paths are simulated and the mean exit time is calculated. The results are shown in table \ref{thetable}. For $\varepsilon = 0.12$, the mean exit time is around 84, while it is around 6 000 000 for $\varepsilon = 0.05$. Naturally, the simulations become more and more time-consuming as $\varepsilon$ decreases and the mean exit time increases.
\begin{table}[ht]
\begin{center}
\begin{tabular}{ |l|c|c|c|c|c|c|}
\hline
$\varepsilon$ & 0.12 & 0.10 & 0.08 & 0.07 & 0.06 & 0.05 \\
\hline
$\varepsilon^2\log E\tau$ & 0.0639 & 0.0554 & 0.0473 & 0.0434 & 0.0415 & 0.0389 \\
\hline
\end{tabular}
\caption{Simulation of a bivariate autoregressive process}
\label{thetable}
\end{center}
\end{table}
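The R experiment behind Table \ref{thetable} is easy to reproduce in other languages. The sketch below is a hedged Python version (the function name, seed and number of paths are our choices; with only 200 paths at a single $\varepsilon$ the estimate is rougher than in the table):

```python
import math
import random

def mean_exit_time(A, c, eps, n_paths=200, seed=1):
    """Monte Carlo estimate of E[tau], tau = min{t >= 1: |c^T X_t| >= 1},
    for X_t = A X_{t-1} + eps*xi_t started at the origin."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_paths):
        x = [0.0, 0.0]
        t = 0
        while True:
            t += 1
            xi = [rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)]
            x = [A[0][0]*x[0] + A[0][1]*x[1] + eps*xi[0],
                 A[1][0]*x[0] + A[1][1]*x[1] + eps*xi[1]]
            if abs(c[0]*x[0] + c[1]*x[1]) >= 1.0:
                break
        total += t
    return total / n_paths

A = [[0.8, 1.0], [0.0, 0.5]]
c = [1.0, 1.0]
eps = 0.12
est = eps**2 * math.log(mean_exit_time(A, c, eps))
# Theorem 2 predicts eps^2 log E[tau] -> 81/2426 ~ 0.0334 as eps -> 0;
# at eps = 0.12 the table reports ~0.064, and `est` should be of that order.
```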
\section{Exit times for the autoregressive process of order $n$}
We will now use the methods in section 2 for the univariate autoregressive process of order $n$ with normally distributed noise.
\subsection{The autoregressive process of order $n$}
The autoregressive process of order $n$ is defined as the process $\{X_t^\varepsilon\}_{t\ge 0}$, where
\begin{equation}
X_t^\varepsilon = b_1X_{t-1}^\varepsilon+\ldots + b_nX_{t-n}^\varepsilon + \varepsilon\xi_t,\; X_0^\varepsilon = x_0, \ldots , X_{n-1}^\varepsilon = x_{n-1}.
\end{equation}
Here $X_t^\varepsilon\in \mathbb{R}\;\forall t\ge 0$, $b_1,\ldots , b_n$ are real parameters, $\varepsilon$ is a positive parameter and $\{\xi_t\}_{t\ge n}$ is an i.i.d. sequence of standard normal (univariate) random variables. We consider the exit time from the interval $(-1,1)$, that is,
\begin{equation}\label{ar_n_tau}
\tau_{(-1,1)} = \min\{t\ge n: |X_t^\varepsilon|\ge 1\}.
\end{equation}
The process can actually be seen as a multivariate process. Let $Y_t^\varepsilon := (X_t^\varepsilon,\ldots , X_{t-n+1}^\varepsilon)^T$ $\forall t\ge n-1$. Then $\{Y_t^\varepsilon\}_{t\ge n-1}$ is a multivariate process that satisfies
\[
Y_t^\varepsilon = BY_{t-1}^\varepsilon + \varepsilon (\xi_t,0,\ldots ,0)^T\;\mbox{ for } t\ge n, \; Y_{n-1}^\varepsilon = (x_{n-1},\ldots ,x_0)^T,
\]
where
\[
B = \left(\begin{array}{cccc} b_1 & b_2 &\cdots & b_n\\
1 & 0 &\cdots &0\\
0 & \ddots & 0 & 0\\
0 &\cdots & 1 & 0\end{array}\right).
\]
This process is similar to but not exactly like the multivariate autoregressive process that we considered in section 3. For each $t\ge n$, $Y_t^\varepsilon$ has a multivariate normal distribution with mean $EY_t^\varepsilon = B^{t-n+1}(x_{n-1},\ldots ,x_0)^T$ and covariance matrix $\varepsilon^2\Sigma_t$, where $\Sigma_t$ is given by
\begin{eqnarray*}
\Sigma_t &=& B\Sigma_{t-1}B^T + (1,0,\ldots ,0)^T(1,0,\ldots ,0) \;\mbox{ for } t\ge n+1,\\ \Sigma_n &=& (1,0,\ldots ,0)^T(1,0,\ldots ,0),
\end{eqnarray*}
or by the sum
\[
\Sigma_t = \sum_{k=0}^{t-n}B^k(1,0,\ldots ,0)^T(1,0,\ldots ,0) (B^T)^k.
\]
Throughout this section, we make the assumption that $b_1,\ldots ,b_n$ are such that all eigenvalues of the matrix $B$ have absolute values less than one. Then the process $\{Y_t^\varepsilon\}_{t\ge n-1}$ has a stationary distribution which is normal with the zero vector as mean and covariance matrix $\varepsilon^2\Sigma_\infty$, where
\begin{eqnarray*}
\Sigma_\infty &=& B\Sigma_\infty B^T + (1,0,\ldots ,0)^T(1,0,\ldots ,0), \mbox{ or }\\
\Sigma_\infty &=& \sum_{k=0}^\infty B^k(1,0,\ldots ,0)^T(1,0,\ldots ,0) (B^T)^k.
\end{eqnarray*}
This implies that the original univariate process $\{X_t^\varepsilon\}_{t\ge 0}$ has a stationary distribution which is normal with mean zero and variance $\varepsilon^2\sigma^2$, where $\sigma^2$ is given by
\begin{equation}
\sigma^2 = \sum_{k=0}^\infty (B_{11}^k)^2,
\end{equation}
where $B_{11}^k$ denotes the element at the first row and the first column of the matrix $B^k$.
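Evaluating $\sigma^2$ only requires forming the companion matrix $B$ and summing the squared $(1,1)$ entries of its powers. A pure-Python sketch for an illustrative AR(2) choice of $b_1,b_2$ (the truncation length and the Yule-Walker cross-check are our additions, not part of the text):

```python
def mm(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def companion(b):
    """Companion matrix B for AR(n) coefficients b = [b_1, ..., b_n]."""
    n = len(b)
    return [list(b)] + [[float(j == i) for j in range(n)] for i in range(n - 1)]

def sigma2_series(b, terms=2000):
    """sigma^2 = sum_{k>=0} ((B^k)_{11})^2, truncated at `terms` terms."""
    B = companion(b)
    n = len(b)
    P = [[float(i == j) for j in range(n)] for i in range(n)]   # B^0 = I
    total = 0.0
    for _ in range(terms):
        total += P[0][0] ** 2
        P = mm(B, P)
    return total

b = [0.5, 0.3]     # illustrative AR(2) coefficients; companion eigenvalues lie inside the unit circle
sigma2 = sigma2_series(b)
# Yule-Walker closed form for the AR(2) stationary variance with unit noise variance:
sigma2_exact = (1 - b[1]) / ((1 + b[1]) * ((1 - b[1]) ** 2 - b[0] ** 2))
```

The truncated series agrees with the classical Yule-Walker value, which gives an independent check of the representation $\sigma^2 = \sum_k (B^k_{11})^2$ in this case.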
\subsection{Exit times from an interval}
Under the assumption that the starting points $x_0,\ldots ,x_{n-1}$ are zeroes, we have the following result for the exit time $\tau_{(-1,1)}$:
\begin{thm}
For the autoregressive process $\{X_t^\varepsilon\}_{t\ge 0}$ of order $n$, and the exit time $\tau_{(-1,1)}$,
\[
\lim_{\varepsilon\rightarrow 0} \varepsilon^2 \log E_{(0,\ldots ,0)}\tau_{(-1,1)} = \frac{1}{2\sigma^2},
\]
assuming that all eigenvalues of $B$ are less than one in absolute value, and that $x_0 = \ldots = x_{n-1}= 0$.
\end{thm}
\noindent
Proof: We use the large deviation principle to get an upper bound of the limit. The logarithmic moment generating function of $(\xi,0,\ldots ,0)^T$, where $\xi$ is a standard normal random variable, is
\[
\Lambda(\lambda) = \log E(e^{\lambda^T(\xi,0,\ldots ,0)^T}) = \log E(e^{\lambda_1\xi}) = \frac{\lambda_1^2}{2},
\]
where $\lambda = (\lambda_1,\ldots ,\lambda_n)^T$. Thus, the Fenchel-Legendre transform of $\Lambda (\lambda)$ is
\[
\Lambda^*(z) = \sup_{\lambda\in \mathbb{R}^n}(\lambda^Tz-\Lambda(\lambda)) = \left\{\begin{array}{l} \frac{z_1^2}{2}, \mbox{ if } z_2 = \ldots = z_n = 0\\
\infty, \mbox{ otherwise,}\end{array} \right.
\]
where $z = (z_1,\ldots ,z_n)^T\in \mathbb{R}^n$. Cram\'{e}r's theorem (see for example \cite{Dem}) now gives us that the family of probability measures induced by the family $\{\varepsilon(\xi,0,\ldots ,0)^T\}$ satisfies the large deviation principle with rate of speed $\varepsilon^2$ and rate function $I_{\varepsilon(\xi,0,\ldots ,0)^T}(z) = \Lambda^*(z), z\in \mathbb{R}^n$. By theorem \ref{thm}, the large deviation principle then holds for the family of probability measures induced by $\{Y_t^\varepsilon\}_{t\ge n-1}$ with rate of speed $\varepsilon^2$ and rate function
\begin{eqnarray*}
I(y_{n-1},y_n,\ldots ) &=& \inf_{\genfrac{}{}{0pt}{1}{\genfrac{}{}{0pt}{1}{z_t\in \mathbb{R}^n}{y_t = By_{t-1}+z_t,\; t\ge n}}{y_{n-1}= (x_{n-1},\ldots ,x_0)^T}}\sum_{t=n}^\infty I_{\varepsilon(\xi,0,\ldots ,0)^T}(z_t)\\
&=& \left\{ \begin{array}{ll}\frac12 \sum_{t=n}^\infty ((y_t-By_{t-1})_1)^2 & \mbox{if } (y_t-By_{t-1})_k = 0\\
& \forall k=2,\ldots ,n,\; \forall t \ge n\\
& \mbox{and } y_{n-1} = (x_{n-1},\ldots ,x_0)^T\\
\infty ,& \mbox{otherwise,}\end{array}\right.
\end{eqnarray*}
where $y_{n-1}, y_n,\ldots \in \mathbb{R}^n$ and $(y_t-By_{t-1})_k$ denotes the $k$-th element of the vector $y_t-By_{t-1}$. We now proceed as in the multivariate autoregressive case. This time, we consider exits from the set $\Omega = \{x\in \mathbb{R}^n: |c^Tx|<1\}$ for the vector $c = (1,0,\ldots , 0)^T$. For this $\Omega$, $\inf_{y_{n-1}\in\Omega}P_{y_{n-1}}(\tau_{(-1,1)}\le M) = P_{(0,\ldots ,0)^T}(\tau_{(-1,1)}\le M)$ and
\begin{eqnarray*}
& & \lim_{\varepsilon\rightarrow 0}\varepsilon^2\log P_{(0,\ldots ,0)^T}(\tau_{(-1,1)}\le M) = -\inf_{\genfrac{}{}{0pt}{1}{\max_{n\le t\le M}|c^Ty_t|\ge 1}{y_{n-1} = (0,\ldots ,0)^T}} I(y_{n-1},y_n,\ldots ) \\
&=& -\inf_{n\le N\le M}(\inf_{\genfrac{}{}{0pt}{1}{|c^Ty_N|\ge 1}{y_{n-1} = (0,\ldots ,0)^T}}I(y_{n-1},y_n,\ldots )),
\end{eqnarray*}
where
\begin{equation}\label{ar_n_inf}
\inf_{\genfrac{}{}{0pt}{1}{|c^Ty_N|\ge 1}{y_{n-1} = (0,\ldots ,0)^T}} I(y_{n-1},y_n,\ldots ) = \!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \inf_{\genfrac{}{}{0pt}{1}{\genfrac{}{}{0pt}{1}{|c^Ty_N|\ge 1}{y_{n-1} = (0,\ldots ,0)^T}}{(y_t-By_{t-1})_k = 0, \;2\le k\le n, \forall t\ge n}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\frac12 \sum_{t=n}^N (y_t-By_{t-1})^T(y_t-By_{t-1}).
\end{equation}
As in the multivariate autoregressive case, one can use the Cauchy-Schwarz inequality to show that the sum on the right hand side in equality \ref{ar_n_inf} is larger than or equal to $1/(2c^T\Sigma_N c)$, where $\Sigma_N$ is defined as in section 4.1. Equality is achieved for $y_t = K\Sigma_t(B^{N-t})^Tc$, $t=n,\ldots , N$, where the choice of $K = 1/(|c^T\Sigma_Nc|)$ gives $|c^Ty_N| = 1$. It is easy to check that this sequence $\{y_t\}_{t=n}^N$ indeed satisfies $(y_t-By_{t-1})_k = 0$ for $k=2,\ldots , n$ and $t=n,\ldots , N$. Thus, we get
\[
\inf_{\genfrac{}{}{0pt}{1}{\genfrac{}{}{0pt}{1}{|c^Ty_N|\ge 1}{y_{n-1} = (0,\ldots ,0)^T}}{(y_t-By_{t-1})_k = 0,\; 2\le k\le n, \forall t\ge n}}\frac12 \sum_{t=n}^N (y_t-By_{t-1})^T(y_t-By_{t-1}) = \frac{1}{2c^T\Sigma_N c},
\]
which implies that
\[
\lim_{\varepsilon\rightarrow 0}\varepsilon^2\log P_{(0,\ldots ,0)^T}(\tau_{(-1,1)}\le M) = -\frac{1}{2c^T\Sigma_M c}.
\]
Inequality \ref{inequality} now gives us that $\limsup_{\varepsilon\rightarrow 0}\varepsilon^2\log E_{(x_0,\ldots ,x_{n-1})}\tau_{(-1,1)}\le 1/(2c^T\Sigma_M c)$, and since this holds for any $M\ge n$, we may substitute $\Sigma_\infty$ for $\Sigma_M.$ Since $c = (1,0,\ldots ,0)^T$, $c^T\Sigma_\infty c = \sigma^2$, and we have
\begin{equation}\label{ar_n_upper}
\limsup_{\varepsilon\rightarrow 0}\varepsilon^2 \log E_{(x_0,\ldots , x_{n-1})}\tau_{(-1,1)} \le\frac{1}{2\sigma^2}.
\end{equation}
Thus, we have the desired upper bound for any starting points $x_0,\ldots ,x_{n-1}\in (-1,1)$. For the corresponding lower bound, we make the additional assumption that $x_0 =\ldots = x_{n-1}= 0$. Then $X_t^\varepsilon$ has a normal distribution with mean zero and variance
\[
\sigma_t^2 = \sum_{k=0}^{t-n}(B_{11}^k)^2 \le \sigma^2.
\]
Theorem \ref{thmlower} then immediately gives us
\begin{equation}\label{ar_n_lower}
\liminf_{\varepsilon\rightarrow 0}\varepsilon^2\log E_{(0,\ldots ,0)}\tau_{(-1,1)} \ge \frac{1}{2\sigma^2}.
\end{equation}
The upper and lower bounds in inequalities \ref{ar_n_upper} and \ref{ar_n_lower} together imply that
\begin{equation}
\lim_{\varepsilon\rightarrow 0}\varepsilon^2\log E_{(0,\ldots ,0)}\tau_{(-1,1)} = \frac{1}{2\sigma^2},
\end{equation}
and the proof is finished.
\medskip\noindent
{\bf Acknowledgements:} I would like to thank professor M. A. Lifshits for first showing me the method for lower bounds, and professor G\"{o}ran H\"{o}gn\"{a}s for many encouraging discussions. The financial support of the Academy of Finland (grant no. 127719) is gratefully acknowledged.
\section{Introduction}
The pion and the proton are the most elementary bound states produced by the strong interaction.
Knowledge of their structure is important for testing our understanding of QCD. The electromagnetic (EM) form factor
is one of the simplest non-perturbative quantities reflecting the structure of these bound states.
In 2000, measurements of the ratio of the proton EM form factors by the polarization method \cite{Jones2000,Gayou2002} gave results very different from those obtained by the Rosenbluth method \cite{Andivahis1994,Walker1994}. This suggests that the extraction of the EM form factors from the experimental data is a non-trivial problem. The two-photon-exchange (TPE) effects in unpolarized $ep$ scattering are expected to explain the discrepancy between the results of the polarization method and the Rosenbluth method. Many theoretical methods have been used to estimate the TPE effects, such as the hadronic model
\cite{Blunden03,Kondra05,Blunden05,zhouhq2014}, the GPD method \cite{Chen04,Afana05}, phenomenological parametrizations \cite{Chen07,BK07}, the dispersion relation approach \cite{BK06,BK08,BK11,BK12,BK14,Blunden2017}, pQCD calculations \cite{BK09,Kivel09} and the SCEF method \cite{TPE-SCEF}. The recent experimental results on $R^{2\gamma}\equiv \sigma_{e^{+}p\rightarrow e^{+}p}/\sigma_{e^{-}p\rightarrow e^{-}p}$ \cite{OLYMPUS2017}, which measure the TPE effect directly, show that the estimate from the most recent calculation \cite{Blunden2017} does not match the experimental data very well. All this means that our understanding of the TPE effects in $ep$ scattering still needs to be improved, in both the theoretical and the experimental aspects.
The TPE effects in other processes have also attracted much interest and are discussed in the literature, for example in $e^{+}e^{-} \rightarrow p \overline{p}$ \cite{DianYongChen2008}, $e\pi$ scattering \cite{Blunden2010,YuBingDong2010} and unpolarized $\mu p$ scattering \cite{DianYongChen2013,Tomalak2014,Afanasev2016,zhouhaiqing2017}. In the literature, the TPE effects in the process $e^{+}e^{-} \rightarrow \pi^{+}\pi^{-}$ are usually ignored, since they do not affect the total cross section or the time-like EM form factor of the pion; they do, however, affect the angular dependence of the differential cross section. The EM form factor of the pion in the space-like region at high momentum transfer has played an important role in tests of pQCD factorization \cite{ZhengTaoWei2003,XingGangWu2004,Raha2009-PRD}, although its experimental measurement is not trivial since there is no pion target. The study of the EM form factor of the pion in the time-like region is another window to test pQCD factorization \cite{Raha2010-PLB,HaoChungHu2013-PLB,ShangChen2015-PLB}. The study of the TPE effects in this process plays a similar role in testing pQCD factorization and helps us understand the TPE effects. In this work, we estimate this effect and also clarify some discussions in the literature on the time-like EM form factor of the pion at leading order in pQCD. The paper is organized as follows: in Section II we give a brief introduction to the cross section of $e^{+}e^{-} \rightarrow \pi^{+}\pi^{-}$ and the time-like EM form factor of the pion in pQCD under the one-photon-exchange (OPE) approximation; in Section III we discuss the TPE effects in this process; in Section IV we discuss the inputs used in our numerical estimate; and in Section V we present the numerical results and our conclusions.
\section{$e^+e^-\rightarrow \pi^+\pi^-$ via one-photon-exchange}
\begin{figure}[htbp]
\center{\epsfxsize 3.4 truein\epsfbox{e+e-pi+pi-OPE.eps}}
\caption{Diagrams for $e^{+}e^{-}\rightarrow \pi^{+}\pi^{-}$ with one-photon exchange (OPE).}
\label{figure:Amp-OPE}
\end{figure}
In the OPE approximation, the process $e^+e^- \rightarrow \pi^+\pi^-$ can be described by the diagram shown in Fig.~\ref{figure:Amp-OPE} and the corresponding amplitude can be expressed as
\begin{eqnarray}
\mathcal{M}^{1\gamma}=[{\bar u}(-p_2,m_e)(-ie\gamma^\mu) u(p_1,m_e)]D_{\mu\nu}(q)[-ie (p_4-p_3)^\nu F_{\pi}(Q^2)],
\label{Amp1}
\end{eqnarray}
where $p_1,p_2,p_3$ and $p_4$ are the momenta of the initial electron, initial positron, final $\pi^{-}$ and final $\pi^{+}$, $D_{\mu\nu}(q)$ is the photon propagator, $q=p_1+p_2=p_3+p_4$, $Q^2=q^2$ and $F_{\pi}(Q^2)$ is the time-like EM form factor of the pion, which is defined as
\begin{eqnarray}
\langle \pi^+\pi^-|j_{\mu}(0)|0\rangle \equiv -(p_4-p_3)_{\mu} F_{\pi}(Q^2),
\label{definition-FF}
\end{eqnarray}
with $j_{\mu}=\sum e_i\overline{q}_i\gamma_{\mu}q_i$, $q_i$ the quark fields, $i$ the flavor indexes of the quarks and $e_i$ the corresponding electric charge ($-1$ for electron).
By Eq.(\ref{Amp1}), the unpolarized differential cross section can be expressed as
\begin{eqnarray}
d\sigma_{un}^{1\gamma}=\frac{1}{2}e^2F_\pi(Q^2)\ F^*_\pi(Q^2)\sin^2\theta,
\end{eqnarray}
where $\theta$ is the angle between the three-momenta of the initial electron (${\bf p}_1$) and the final $\pi^{-}$ (${\bf p}_3$) in the center-of-mass frame, and $e=-|e|=-\sqrt{4\pi\alpha_{QED}}$.
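Since $\sin^2\theta$ is symmetric under $\theta\rightarrow\pi-\theta$, the OPE cross section produces no forward-backward asymmetry of the pion angular distribution; any such asymmetry must come from charge-odd contributions such as the TPE terms discussed below. A quick numerical check of this symmetry (the integration routine is an illustrative sketch, not part of the text):

```python
import math

def fb_asymmetry(weight, n=10000):
    """Forward-backward asymmetry of an angular weight w(theta) on [0, pi],
    using simple midpoint integration with the solid-angle factor sin(theta)."""
    forward = backward = 0.0
    for i in range(n):
        th = (i + 0.5) * math.pi / n
        w = weight(th) * math.sin(th) * (math.pi / n)
        if th < math.pi / 2:
            forward += w
        else:
            backward += w
    return (forward - backward) / (forward + backward)

# OPE angular distribution is proportional to sin^2(theta): no asymmetry.
afb = fb_asymmetry(lambda th: math.sin(th) ** 2)
```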
In the large momentum transfer region, perturbative QCD (pQCD) can be applied to estimate the electromagnetic form factor $F_\pi(Q^2)$ \cite{pQCD}. At the leading order in the strong coupling $\alpha_s$, the corresponding Feynman diagrams are shown in Fig.~\ref{figure:FF-OPE} and the corresponding contribution can be expressed as
\begin{figure}[htbp]
\center{\epsfxsize 2.5 truein\epsfbox{FF-timelike-quarkxy-OPEA.eps}\epsfxsize 2.8 truein\epsfbox{FF-timelike-quarkxy-OPEB.eps}}
\center{\epsfxsize 2.5 truein\epsfbox{FF-timelike-quarkxy-OPEC.eps}\epsfxsize 2.8 truein\epsfbox{FF-timelike-quarkxy-OPED.eps}}
\caption{Diagrams for $e^{+}e^{-}\rightarrow \pi^{+}\pi^{-}$ with OPE in the leading order of pQCD.}
\label{figure:FF-OPE}
\end{figure}
\begin{eqnarray}
F_\pi^{(a)}(Q^2)&=&\frac{(p_4-p_3)_\nu}{-ie(p_4-p_3)^2}\int_{0}^{1}dxdy\int_{-\infty}^{\infty}d^2{\bf b}_{1}d^2{\bf b}_{2} \int_{-\infty}^{\infty}\frac{d^2{\bf k_{\perp 1}}}{(2\pi)^2}\frac{d^2{\bf k_{\perp 2}}}{(2\pi)^2}e^{-i{\bf b}_1\cdot{\bf k}_{\perp 1}-i{\bf b}_2\cdot{\bf k}_{\perp 2}} \nonumber\\
&&~~~~~~~~~~~~~~~~~~~\times e^{-S(x,y,b_1,b_2,Q)}S_t(x)S_t(y)T_H^{\nu,(a)},
\end{eqnarray}
where $b_1=|{\bf b}_1|, b_2=|{\bf b}_2|$, $S(x,y,b_1,b_2,Q)$ is the Sudakov factor in $b$ space and $S_t$ is the threshold resummation factor whose expressions can be found in \cite{Sudkov-factor-Sterman,jet-function-LiHN2002} and we also list them in the Appendix.
\begin{eqnarray}
T_H^{\nu,(a)}&=&c_f^{1\gamma} \textrm{Tr}[\Phi^{(fin)}_{\pi^+}(p_4,y,{\bf k}_{\perp 2})(-ig_s\gamma^\sigma)\Phi^{(fin)}_{\pi^-}(p_3,x,{\bf k}_{\perp 1})(-\frac{1}{3}ie\gamma^\nu)S_q(q_q) (-ig_s\gamma^\rho)] D_{\rho\sigma}(q_g),\nonumber \\
\end{eqnarray}
where $c_f^{1\gamma}=\frac{\delta_{ij}}{3}\frac{\delta_{mn}}{3}T^a_{jm}T^b_{ni}\delta_{ab}=\frac{4}{9}$ is the global color factor of the amplitude, $g_s$ is the strong coupling, $-1/3$ is the charge of the $d$-quark, $e=-|e|$ is the electromagnetic coupling, $S_q(q_q)$ and $D_{\rho\sigma}(q_g)$ are the propagators of the quark and gluon without the color indexes, and $q_q$ and $q_g$ are the momenta of the corresponding quark and gluon in the propagators, with
\begin{eqnarray}
{q}_q&\equiv&[xp_3+{\bf k}_{\perp 1}]-[p_3+p_4],\nonumber\\
{q}_g&\equiv&[yp_4+{\bf k}_{\perp 2}]-[-(1-x)p_3+{\bf k}_{\perp 1}],
\end{eqnarray}
and $\Phi^{(fin)}_{\pi^{\pm}}$ are the wave functions of $\pi^{\pm}$ expressed as
\begin{eqnarray}
\Phi^{(fin)}_{\pi^+}(p_4,y,{\bf k}_{\perp 2})&=&\frac{if_\pi}{4}\Big \{\sla{{p}}_4\gamma_5\phi_{\pi}(y)
-\mu_\pi\gamma_5 \Big [\phi^P_\pi(y)-i\sigma_{\mu\nu}\Big(\frac{p_4^\mu p_3^\nu}{p_4\cdot{p_3}}\frac{\phi^\sigma_\pi{'}(y)}{6}
-p_4^\mu\frac{\phi^\sigma_\pi(y)}{6}\frac{\partial}{\partial{\bf k}_{\perp2\nu}}\Big)\Big]\Big\},\nonumber\\
\Phi^{(fin)}_{\pi^-}(p_3,x,{\bf k}_{\perp 1})&=&\frac{if_\pi}{4}\Big\{\sla{{p}}_3\gamma_5\phi_{\pi}(x)
-\mu_\pi\gamma_5\Big[\phi^P_\pi(x)-i\sigma_{\mu\nu}\Big(\frac{p_3^\mu p_4^\nu}{p_3\cdot p_4}\frac{\phi^\sigma_\pi{'}(x)}{6}
-p_3^\mu\frac{\phi^\sigma_\pi(x)}{6}\frac{\partial}{\partial{\bf k}_{\perp1\nu}}\Big)\Big]\Big\},\nonumber \\
\end{eqnarray}
with $f_{\pi}=0.131$ GeV.
After including the contributions from the other diagrams and some algebraic manipulation, the final expression for $F_{\pi}(Q^2)$ can be written as
\begin{eqnarray}
F_\pi(Q^2)&=&\int_{0}^{1}dxdy\int_{0}^{\infty}b_1db_1b_2db_2\alpha_s(\mu^2)e^{-S(x,y,b_1,b_2,Q)}S_t(x)\nonumber\\
&&\frac{16\pi f_\pi^2}{9}Q^2 \Big\{t_0+\frac{\mu_\pi^2}{Q^2}[t_1+t_2+t_3]\Big\} H_0^{(1)}(\sqrt{xy}Qb_2)\nonumber\\
&&\Big [\theta(b_1-b_2)H_0^{(1)}(\sqrt{x}Qb_1)J_0(\sqrt{x}Qb_2)+\theta(b_2-b_1)H_0^{(1)}(\sqrt{x}Qb_2)J_0(\sqrt{x}Qb_1)\Big ],
\label{pion-FF}
\end{eqnarray}
where the scale $\mu$ in the coupling is taken as $\textrm{max}\{\sqrt{x}Q,1/b_1,1/b_2\}$ and
\begin{eqnarray}
t_0&=&-\frac{1}{2}x\phi_\pi(y)\phi_\pi(x),\nonumber\\
t_1&=&(1-x)\phi^P_\pi(y)\phi^P_\pi(x),\nonumber\\
t_2&=&-\frac{(1+x)}{6}\phi^P_\pi(y)\phi^T_\pi(x),\nonumber\\
t_3&=&\frac{1}{3}\phi^P_\pi(y)\phi^\sigma_\pi(x).
\label{t}
\end{eqnarray}
Comparing Eqs.(\ref{pion-FF},\ref{t}) with the expressions used in Refs. \cite{ZhengTaoWei2003,XingGangWu2004,Raha2010-PLB,HaoChungHu2013-PLB,ShangChen2015-PLB}, two properties of Eq.(\ref{pion-FF}) should be clarified. The first is that Eq.(\ref{pion-FF}) is consistent with the result of Ref. \cite{ZhengTaoWei2003} in the space-like region, while the factor $1/3$ in the term $t_3$ differs from the factor $1/2$ given in Ref. \cite{XingGangWu2004}. After careful checks, we conclude that this difference is due to the different treatment of the term $\partial {\bf \sla{k}}_{{\bf\perp} i}/\partial{\bf k}_{\perp i\mu}$: taking it as $\gamma_\perp^{\mu}$ gives $1/3$, while taking it as $\gamma^{\mu}$ gives $1/2$. We take the factor $1/3$ in the final expression. In practical numerical calculations, the contribution from this difference is very small in the space-like region and is usually neglected, while it is not small in the time-like region and should be included. The second property of Eq.(\ref{pion-FF}) is that there is a sign difference in the term $t_2$ between Eq.(\ref{pion-FF}) and the expressions used in Refs. \cite{Raha2010-PLB,HaoChungHu2013-PLB,ShangChen2015-PLB}. After checking, we take Eq.(\ref{pion-FF}) as the final result. Eq.(\ref{pion-FF}) can also be obtained via analytical continuation
of the space-like form factor \cite{ZhengTaoWei2003,XingGangWu2004} to the time-like region, as in the twist-2 case \cite{HaoChungHu2013-PLB}.
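For orientation, with the asymptotic distribution amplitude $\phi_{as}(x)=6x(1-x)$ the textbook collinear leading-order result is $Q^2F_\pi(Q^2) = 16\pi\alpha_s(Q^2)f_\pi^2$, which sets the expected magnitude of Eq. (\ref{pion-FF}) at large $Q^2$. The sketch below evaluates it with a simple one-loop running coupling; $\Lambda_{QCD}$ and $n_f$ are illustrative inputs, and this is not the $k_T$-factorized expression used in the text:

```python
import math

F_PI = 0.131          # pion decay constant in GeV (as in the text)
LAMBDA_QCD = 0.25     # illustrative one-loop Lambda_QCD in GeV (an assumption)
N_F = 4               # illustrative number of active flavors (an assumption)

def alpha_s(mu2):
    """One-loop running coupling alpha_s(mu^2) with beta_0 = 11 - 2 n_f / 3."""
    beta0 = 11.0 - 2.0 * N_F / 3.0
    return 4.0 * math.pi / (beta0 * math.log(mu2 / LAMBDA_QCD ** 2))

def q2_fpi_asymptotic(q2):
    """Q^2 F_pi(Q^2) = 16 pi alpha_s(Q^2) f_pi^2 (collinear LO, asymptotic DA)."""
    return 16.0 * math.pi * alpha_s(q2) * F_PI ** 2

val = q2_fpi_asymptotic(10.0)   # at Q^2 = 10 GeV^2, in GeV^2
```

With these inputs the estimate is a few tenths of a GeV$^2$, slowly decreasing with $Q^2$ through the running of $\alpha_s$.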
\section{$e^+e^-\rightarrow \pi^+\pi^-$ via two-photon-exchange}
When the TPE contributions to the process $e^+e^-\rightarrow \pi^+\pi^-$ are considered, one has the corresponding diagrams shown in Fig.~\ref{figure:Amp-TPE} at the leading order.
\begin{figure}[htbp]
\center{\epsfxsize 2.8 truein\epsfbox{e+e-pi+pi-TPEA.eps}\epsfxsize 2.8 truein\epsfbox{e+e-pi+pi-TPEB.eps}}
\caption{Diagrams for $e^{+}e^{-}\rightarrow \pi^{+}\pi^{-}$ with two-photon exchange (TPE) in the leading order of pQCD.}
\label{figure:Amp-TPE}
\end{figure}
The amplitude corresponding to Fig. \ref{figure:Amp-TPE}(a) can be expressed as
\begin{eqnarray}
i\mathcal{M}^{2\gamma,(a)}&=&\int dxdy \int d^2{\bf b}_1d^2{\bf b}_2\int \frac{d^2{\bf k}_{\perp 1}}{(2\pi)^2}\frac{d^2{\bf k}_{\perp 2}}{(2\pi)^2}e^{-i{\bf b}_1\cdot{\bf k}_{\perp 1}-i{\bf b}_2\cdot{\bf k}_{\perp 2}} \nonumber\\
&&~~~\times e^{-S(x,y,b_1,b_2,Q)}T_H^{2\gamma,(a)} \nonumber \\
&\triangleq &\int K\ast T_H^{2\gamma,(a)},
\end{eqnarray}
where
\begin{eqnarray}
T_H^{2\gamma,(a)}&=&\bar{u}(-{p}_2,s_2)(-ie\gamma^\mu)S_e(q_e)(-ie\gamma^\rho)u({p}_1,s_1)D_{\rho\sigma}(q_1)D_{\mu\nu}(q_2)\nonumber\\
&& c_{2\gamma}\textrm{Tr}[\Phi _{\pi}^{(f)}(p_4,y,{\bf b}_2)(\frac{2}{3}ie\gamma^\nu)\Phi_{\pi}^{(f)}(p_3,x,{\bf b}_1)(-\frac{1}{3}ie\gamma^\sigma)]\nonumber\\
& \triangleq & \bar{u}(-{p}_2,s_2)\gamma^\mu\gamma^\omega\gamma^\rho u({p}_1,s_1)q_{e,\omega}T^{(a)}_{\mu\rho}(Q^2,\theta,{\bf b}_1,{\bf k}_{\perp 1},{\bf b}_2,{\bf k}_{\perp 2}),
\end{eqnarray}
with $c_{2\gamma}=\frac{\delta_{ij}}{3}\frac{\delta_{ij}}{3}=\frac{1}{3}$ the global color factor and the momenta in the propagators
\begin{eqnarray}
{q}_e&=&-{p}_2+q_2,\nonumber\\
{q}_1&=&[xp_3+{\bf k}_{\perp 1}]-[-(1-y)p_4+{\bf k}_{\perp 2}],\nonumber\\
{q}_2&=&[yp_4+{\bf k}_{\perp 2}]-[-(1-x)p_3+{\bf k}_{\perp 1}],
\end{eqnarray}
and
\begin{eqnarray}
&&T^{(a)}_{\mu\rho}(Q^2,\theta,{\bf b}_1,{\bf k}_{\perp 1},{\bf b}_2,{\bf k}_{\perp 2}) \nonumber \\
& = &c_{2\gamma} \textrm{Tr}[\Phi_\pi(p_4,y,{\bf b}_2)(\frac{2}{3}ie\gamma_\mu)\Phi_\pi(p_3,x,{\bf b}_1)(-\frac{1}{3}ie\gamma_\rho)](-ie)^2\frac{-i}{q_1^2+i\epsilon}\frac{-i}{q_2^2+i\epsilon}\frac{i}{q_e^2+i\epsilon}.
\label{T_munu}
\end{eqnarray}
Using the relation
\begin{eqnarray}
\gamma^\mu\gamma^\omega\gamma^\rho=
g^{\mu\omega}\gamma^\rho-g^{\mu\rho}\gamma^\omega+g^{\omega\rho}\gamma^\mu-i\gamma^5\epsilon^{\mu\omega\rho\sigma}\gamma_\sigma,
\end{eqnarray}
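This $\gamma$-matrix identity (with the conventions $g=\mathrm{diag}(+,-,-,-)$, $\epsilon^{0123}=+1$ and $\gamma^5=i\gamma^0\gamma^1\gamma^2\gamma^3$) can be verified by brute force over all index triples; a pure-Python check in the Dirac representation (the matrix helpers are illustrative):

```python
def mm(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(4)] for i in range(4)]

def scal(a, X):
    return [[a*X[i][j] for j in range(4)] for i in range(4)]

def block(A, B, C, D):
    """Assemble a 4x4 matrix from four 2x2 blocks."""
    return [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]

def neg(X):
    return [[-x for x in row] for row in X]

I2, O2 = [[1, 0], [0, 1]], [[0, 0], [0, 0]]
s1, s2, s3 = [[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]

gamma = [block(I2, O2, O2, neg(I2)),    # gamma^0 (Dirac representation)
         block(O2, s1, neg(s1), O2),    # gamma^1
         block(O2, s2, neg(s2), O2),    # gamma^2
         block(O2, s3, neg(s3), O2)]    # gamma^3
gamma5 = scal(1j, mm(mm(gamma[0], gamma[1]), mm(gamma[2], gamma[3])))
metric = [1, -1, -1, -1]

def eps(p):
    """Levi-Civita symbol with eps^{0123} = +1 (0 on repeated indices)."""
    if len(set(p)) < 4:
        return 0
    sign = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                sign = -sign
    return sign

ok = True
for mu in range(4):
    for om in range(4):
        for rho in range(4):
            lhs = mm(mm(gamma[mu], gamma[om]), gamma[rho])
            rhs = [[0] * 4 for _ in range(4)]
            if mu == om:
                rhs = add(rhs, scal(metric[mu], gamma[rho]))
            if mu == rho:
                rhs = add(rhs, scal(-metric[mu], gamma[om]))
            if om == rho:
                rhs = add(rhs, scal(metric[om], gamma[mu]))
            for sg in range(4):
                e = eps((mu, om, rho, sg))
                if e:  # -i gamma^5 eps^{mu om rho sg} gamma_sg, with gamma_sg = g_{sg sg} gamma^sg
                    rhs = add(rhs, scal(-1j * e * metric[sg], mm(gamma5, gamma[sg])))
            if any(lhs[i][j] != rhs[i][j] for i in range(4) for j in range(4)):
                ok = False
```

With the stated conventions all $64$ index combinations match exactly, so `ok` remains `True`.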
the amplitude $i\mathcal{M}^{2\gamma,(a)}$ can be expressed in a similar form as $i\mathcal{M}^{1\gamma}$ and one has
\begin{eqnarray}
&&T_H^{2\gamma,(a)}({p}_1,s_1;{p}_2,s_2;p_3,p_4) \nonumber \\
&=&\bar{u}(-{p}_2,s_2)\gamma^\rho u({p}_1,s_1)q_{e,\omega}T^{(a)}_{\omega\rho}-\bar{u}(-{p}_2,s_2)\gamma^\omega u({p}_1,s_1)q_{e,\omega}T^{(a)}_{\mu\mu}\nonumber \\
&&+\bar{u}(-{p}_2,s_2)\gamma^\mu u({p}_1,s_1)q_{e,\rho}T^{(a)}_{\mu\rho} -\bar{u}(-{p}_2,s_2)\gamma^\sigma u({p}_1,s_1)i\gamma_5\epsilon^{\mu\omega\rho}_{~~~~\sigma}q_{e,\omega}T^{(a)}_{\mu\rho} \nonumber \\
&=& [{\bar u}(-p_2,m_e)\gamma^\mu u(p_1,m_e)][q_{e,\omega}T^{(a)}_{\omega\mu}-q_{e,\mu}T^{(a)}_{\rho\rho}+q_{e,\rho}T^{(a)}_{\mu\rho}]\nonumber \\
&&+[{\bar u}(-p_2,m_e)\gamma_5\gamma^\mu u(p_1,m_e)][-i\epsilon^{\sigma\omega\rho}_{~~~~\mu}q_{e,\omega}T^{(a)}_{\sigma\rho}],\nonumber \\
&\triangleq& [{\bar u}(-p_2,m_e)(-ie\gamma_\mu) u(p_1,m_e)]D^{\mu\nu}(q) T_{\nu}^{(a),eff}\nonumber \\
&&+[{\bar u}(-p_2,m_e)(-ie\gamma_5\gamma_\mu) u(p_1,m_e)]D^{\mu\nu}(q) \bar{T}_{\nu}^{(a),eff},
\end{eqnarray}
with
\begin{eqnarray}
T_{\nu}^{(a),eff}&=&\frac{1}{-ie}\frac{q^2}{-i}[q_{e,\omega}T^{(a)}_{\omega\nu}-q_{e,\nu}T^{(a)}_{\rho\rho}+q_{e,\rho}T^{(a)}_{\nu\rho}],\nonumber \\
\bar{T}_{\nu}^{(a),eff}&=&\frac{1}{-ie}\frac{q^2}{-i}[-i\epsilon^{\sigma\omega\rho}_{~~~~\nu}q_{e,\omega}T^{(a)}_{\sigma\rho}].
\label{T_nu_eff}
\end{eqnarray}
Generally, $T_{\nu}^{(a),eff}$ can be written as $c_1p_{1\nu}+c_2p_{2\nu}+c_3p_{3\nu}$. Using the approximation $m_e=0$, the first two terms give no contribution, so one gets $T_{\nu}^{(a),eff} \propto (p_4-p_3)_\nu$ and finally
\begin{eqnarray}
&&i\mathcal{M}^{2\gamma,(a)} \nonumber \\
&=&\int K \ast [{\bar u}(-p_2,m_e)(-ie\gamma_\mu) u(p_1,m_e)]D^{\mu\nu}(q)T_{\nu}^{(a),eff} \nonumber \\
&&+\int K \ast [{\bar u}(-p_2,m_e)(-ie\gamma_5\gamma_\mu) u(p_1,m_e)]D^{\mu\nu}(q)\bar{T}_{\nu}^{(a),eff}\nonumber\\
&\triangleq& [{\bar u}(-p_2,m_e)(-ie\gamma_\mu) u(p_1,m_e)]D^{\mu\nu}(q)[-ie (p_4-p_3)_\nu \tilde{F}_\pi^{(a)}(Q^2,\theta)],\nonumber \\
&&+[{\bar u}(-p_2,m_e)(-ie\gamma_5\gamma_\mu) u(p_1,m_e)]D^{\mu\nu}(q)[-ie (p_4-p_3)_\nu \tilde{G}_\pi^{(a)}(Q^2,\theta)],
\end{eqnarray}
where $\tilde{F}_\pi^{(a)}(Q^2,\theta)$ and $\tilde{G}_\pi^{(a)}(Q^2,\theta)$ are expressed as
\begin{eqnarray}
\label{FFs of TPE part1}
\tilde{F}_\pi^{(a)}(Q^2,\theta)&=& \int \frac{(p_4-p_3)^\nu}{-ie(p_4-p_3)^2}T_\nu^{(a),eff},\nonumber \\
\tilde{G}_\pi^{(a)}(Q^2,\theta)&=& \int \frac{(p_4-p_3)^\nu}{-ie(p_4-p_3)^2}\bar{T}_\nu^{(a),eff}.
\label{FFs of TPE}
\end{eqnarray}
The contribution from Fig. \ref{figure:Amp-TPE}(b) can be obtained in a similar way. Due to their similar form to $F_{\pi}(Q^2)$, we refer to $\tilde{F}_\pi^{(a)}(Q^2,\theta)$ and $\tilde{G}_\pi^{(a)}(Q^2,\theta)$ as the general form factors in the following; the final expressions for the general form factors can be obtained from Eqs. (\ref{T_munu}), (\ref{T_nu_eff}) and (\ref{FFs of TPE}).
After some calculation, one has
\begin{eqnarray}
\tilde{F}_\pi(Q^2,\theta)& \triangleq &\tilde{F}^{(a)}_\pi(Q^2,\theta)+\tilde{F}^{(b)}_\pi(Q^2,\theta),\nonumber\\
\tilde{F}^{(b)}_\pi(Q^2,\theta)&=&-\tilde{F}^{(a)}_\pi(Q^2,\theta+\pi),
\end{eqnarray}
where
\begin{eqnarray}
\tilde{F}^{(a)}_\pi(Q^2,\theta)&=&\frac{c_{2\gamma} e^2 f_\pi^2 Q^2}{36\pi}\int b_2db_2\int dxdy~e^{-S(x,y,b_1,b_2,Q)}\nonumber\\
&&\times\bigg\{\frac{1}{2}\phi_\pi(x)\phi_\pi(y)Q^2(\!-\!\cos\theta\!+\!x\!+\!y\!-\!1) +\mu_\pi^2\big[\phi^P_\pi(x)\phi^P_\pi(y)(\!-\!\cos\theta\!+\!x\!+\!y\!-\!1)\nonumber\\
&&-\frac{1}{36}\phi^T_\pi(x)\phi^T_\pi(y)(\!-\!\cos\theta\!+\!x\!+\!y\!-\!1)+\frac{1}{24}\phi^T_\pi(x)\phi^\sigma_\pi(y) +\frac{1}{24}\phi^\sigma_\pi(x)\phi^T_\pi(y)\big]\bigg\}\nonumber\\
&&\times \tilde{H}(x,y,Q,b_2,\theta),
\end{eqnarray}
and
\begin{eqnarray}
\tilde{H}(x,y,Q,b_2,\theta)&=&\int d\phi_{b_2} dk_{\perp 3x}e^{-ib_{2x}k_{\perp 3x}}\nonumber\\
&&\times\bigg\{\frac{2\sqrt{2}e^{\frac{|b_{2y}|}{\sqrt{2}}\left(-\sqrt{P_1^{(1)}(x,y,Q,k_{\perp3x},\theta)-i\epsilon}\right)}}{\sqrt{P_1^{(1)}\!(x,y,\!Q,k_{\perp3x},\!\theta)\!-\!i\epsilon} P_2^{(1)}\!(x,y,\!Q,k_{\perp3x},\!\theta)P_3^{(1)}\!(x,y,\!Q,k_{\perp3x},\!\theta)}\nonumber\\
&&-\frac{e^{|b_{2y}|\left(-\sqrt{P_1^{(2)}(x,y,Q,k_{\perp3x})-i\epsilon}\right)}}{\sqrt{P_1^{(2)}(x,y,Q,k_{\perp3x})-i\epsilon} P_2^{(2)}(x,y,Q)P_3^{(2)}(x,y,Q,k_{\perp3x},\theta)}\nonumber\\ &&+\frac{e^{|b_{2y}|\left(-\sqrt{P_1^{(3)}(x,y,Q,k_{\perp3x})-i\epsilon}\right)}}{\sqrt{P_1^{(3)}(x,y,Q,k_{\perp3x})-i\epsilon} P_2^{(3)}(x,y,Q)P_3^{(3)}(x,y,Q,k_{\perp3x},\theta)}\bigg\},
\end{eqnarray}
with $b_{2y} \triangleq b_2 \sin{\phi_{b_2}}$, $b_{2x}\triangleq b_2 \cos{\phi_{b_2}}$, $k_{\perp 3}=k_{\perp 2}-k_{\perp 1}=\{k_{\perp 3x},k_{\perp 3y}\}$, $\epsilon=0^+$ and
\begin{eqnarray}
&&P_1^{(1)}(x,y,Q,k_{\perp3x},\theta)=2 k_{\perp3x}^2+2 k_{\perp3x}Q\sin\theta+Q^2(-\cos\theta(x+y-1)+2xy-x-y+1)\nonumber\\
&&+2m_e^2,\nonumber\\
&&P_2^{(1)}(x,y,Q,k_{\perp3x},\theta)=2 k_{\perp3x}Q\sin\theta+Q^2(-\cos\theta(x+y-1)+x-y+1)+2m_e^2,\nonumber\\
&&P_3^{(1)}(x,y,Q,k_{\perp3x},\theta)=2 k_{\perp3x} Q\sin\theta+Q^2(-\cos\theta(x+y-1)-x+y+1)+2m_e^2,\nonumber\\
&&P_1^{(2)}(x,y,Q,k_{\perp3x})=k_{\perp3x}^2+Q^2(x-1)y,\nonumber\\
&&P_2^{(2)}(x,y,Q)=Q^2 (x-y),\nonumber\\
&&P_3^{(2)}(x,y,Q,k_{\perp3x},\theta)=P_3^{(1)}(x,y,Q,k_{\perp3x},\theta),\nonumber\\
&&P_1^{(3)}(x,y,Q,k_{\perp3x})=k_{\perp3x}^2+Q^2x(y-1),\nonumber\\
&&P_2^{(3)}(x,y,Q)=P_2^{(2)}(x,y,Q),\nonumber\\
&&P_3^{(3)}(x,y,Q,k_{\perp3x},\theta)=P_2^{(1)}(x,y,Q,k_{\perp3x},\theta).
\end{eqnarray}
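The kinematic functions above obey simple $x\leftrightarrow y$ symmetries ($P_1^{(1)}$ is symmetric, $P_2^{(1)}$ and $P_3^{(1)}$ are exchanged, and $P_1^{(2)}$, $P_1^{(3)}$ are exchanged), which provides a useful consistency check on any implementation. A minimal sketch in Python (with $m_e=0$; the parameter values are arbitrary test inputs, not taken from the paper):

```python
import math

def P1_1(x, y, Q, k, theta, me=0.0):
    # P_1^{(1)}: symmetric under x <-> y
    return (2*k**2 + 2*k*Q*math.sin(theta)
            + Q**2*(-math.cos(theta)*(x + y - 1) + 2*x*y - x - y + 1) + 2*me**2)

def P2_1(x, y, Q, k, theta, me=0.0):
    # P_2^{(1)}: maps to P_3^{(1)} under x <-> y
    return 2*k*Q*math.sin(theta) + Q**2*(-math.cos(theta)*(x + y - 1) + x - y + 1) + 2*me**2

def P3_1(x, y, Q, k, theta, me=0.0):
    return 2*k*Q*math.sin(theta) + Q**2*(-math.cos(theta)*(x + y - 1) - x + y + 1) + 2*me**2

def P1_2(x, y, Q, k):
    # P_1^{(2)}: maps to P_1^{(3)} under x <-> y
    return k**2 + Q**2*(x - 1)*y

def P1_3(x, y, Q, k):
    return k**2 + Q**2*x*(y - 1)
```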
Furthermore, the cross section from the interference of $\mathcal{M}^{2\gamma}$ and $\mathcal{M}^{1\gamma}$ can be expressed as
\begin{eqnarray}
d\sigma_{un}^{2\gamma}
&=&\frac{1}{2}e^2\sin^2\theta\,\{2\mathrm{Re}[F^*_\pi(Q^2)\tilde{F}_\pi(Q^2,\theta)]\},
\end{eqnarray}
and there is no contribution from $\tilde{G}_\pi(Q^2,\theta)$.
\section{The input }
In the time-like region, in principle the contributions from the resonances should also be considered. In this work we limit our discussion to the high-energy region and focus on the TPE effects, so we neglect the contributions from the resonances at present; the required inputs are then the same as those used in the space-like region. For simplicity, we directly take $n_f=3$ and $\Lambda=0.2$ GeV in the Sudakov factor and neglect the dependence of $n_f$ and $\Lambda$ on $Q^2$, $1/b_1$ and $1/b_2$. All other inputs are taken to be the same as those used in Ref. \cite{HaoChungHu2013-PLB}, which means the asymptotic two-parton twist-2 and twist-3 DAs are taken as
\begin{eqnarray}
\phi_{\pi}(x)&=&6x(1-x)[1+a_2C^{3/2}_{2}(1-2x)], \nonumber \\
\phi^{P}_{\pi}(x)&=&1, \nonumber \\
\phi^{\sigma}_{\pi}(x)&=&6x(1-x), \nonumber \\
\phi^{T}_{\pi}(x)&=&d\phi^{\sigma}_{\pi}(x)/dx =6(1-2x),
\end{eqnarray}
with $a_2=0.2$ and the Gegenbauer polynomial $C^{3/2}_2(u)=(3/2)(5u^2-1)$. The normalization of the above DAs differs slightly from that in Ref. \cite{HaoChungHu2013-PLB}. The associated chiral scale is taken as $\mu_\pi=1.3$ GeV, the shape parameter in the threshold resummation factor $S_t(x)$ is taken as $c=0.4$, and the renormalization scale used in $\alpha_{S}$ and the Sudakov factor is taken as $\mu=\max(\sqrt{x}Q,1/b_1,1/b_2)$.
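The DAs above satisfy simple analytic checks: $\phi_\pi$ and $\phi^\sigma_\pi$ integrate to one on $[0,1]$ (the Gegenbauer correction integrates to zero), and $\phi^T_\pi$ is the derivative of $\phi^\sigma_\pi$. A small sketch verifying this numerically (pure Python, composite Simpson rule):

```python
def gegenbauer_C32_2(u):
    # C_2^{3/2}(u) = (3/2)(5 u^2 - 1)
    return 1.5 * (5.0 * u * u - 1.0)

def phi_pi(x, a2=0.2):
    # asymptotic twist-2 DA with the first Gegenbauer correction
    return 6.0 * x * (1.0 - x) * (1.0 + a2 * gegenbauer_C32_2(1.0 - 2.0 * x))

def phi_sigma(x):
    return 6.0 * x * (1.0 - x)

def phi_T(x):
    # phi^T = d phi^sigma / dx
    return 6.0 * (1.0 - 2.0 * x)

def simpson(f, a, b, n=2000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0
```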
Other forms of DAs are also used for estimation, and the numerical results show that the form factors are somewhat sensitive to the input DAs. Since our focus is on the TPE effects in $e^+e^-\rightarrow \pi^+\pi^-$, we do not discuss in detail the dependence of the pion form factor $Q^2|F_{\pi}(Q^2)|$ on the input DAs.
\section{Numerical results and discussion}
Using the inputs suggested in the last section, the form factors $F_{\pi}(Q^2)$ and $\tilde{F}_\pi(Q^2,\theta)$ can be calculated directly by numerical methods. In our numerical calculation, we use the function NIntegrate in Mathematica to perform the integrations, and the built-in Bessel functions of Mathematica are used directly. The function Vegas in the package Cuba \cite{Cuba} is also used to check the numerical calculation and we find that it gives the same result. We want to point out that integrations involving the Bessel function should be handled carefully. The integration for $Q^2|\tilde{F}_\pi(Q^2,\theta)|$ is computationally heavy; in the practical calculation, we first calculate the results at some points with a relative precision of about $1\%$ and then fit the results.
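As an illustration of why integrands containing Bessel functions must be handled carefully: direct quadrature over a long oscillatory range converges poorly, whereas splitting the range at the zeros of the Bessel function and accelerating the resulting alternating series is robust. The sketch below (assuming NumPy/SciPy rather than the Mathematica setup used in the paper) tests this strategy on the known value $\int_0^\infty J_0(x)\,dx=1$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, jn_zeros

def bessel_tail_integral(n_zeros=100):
    """Integrate J0 over [0, inf) by splitting at its zeros and
    accelerating the alternating partial sums by repeated averaging."""
    zeros = np.concatenate(([0.0], jn_zeros(0, n_zeros)))
    pieces = [quad(j0, a, b)[0] for a, b in zip(zeros[:-1], zeros[1:])]
    partial = np.cumsum(pieces)
    # repeated pairwise averaging (Euler-type series acceleration)
    for _ in range(6):
        partial = 0.5 * (partial[:-1] + partial[1:])
    return partial[-1]
```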
\begin{figure}[!htbp]
\center{\epsfxsize 3.4 truein\epsfbox{OPE-time-abs.eps}\epsfxsize 3.4 truein\epsfbox{OPE-time-phase.eps}}
\caption{Results for $Q^2 F_\pi(Q^2)$ {\it vs.} $Q^2$. The left panel shows $Q^2 |F_\pi(Q^2)|$ {\it vs.} $Q^2$ and the right panel shows the phase of $F_\pi(Q^2)$ {\it vs.} $Q^2$. }
\label{figure:OPE-full}
\end{figure}
The numerical results for $Q^2|F_{\pi}(Q^2)|$ and the phase of $F_{\pi}(Q^2)$ are presented in Fig. \ref{figure:OPE-full}. The red dashed curves refer to the contribution from the twist-2 DA, the blue dotted curves to the contribution from the twist-3 DAs and the black solid curves to their sum. The contribution from the twist-2 DA is almost the same as that presented in \cite{HaoChungHu2013-PLB}. The contribution from the twist-3 DAs is much smaller than that from the twist-2 DA, which is very different from the behavior reported in \cite{HaoChungHu2013-PLB,ShangChen2015-PLB}. For comparison, three results are presented in Fig. \ref{figure:OPE-twist3-comparison} to show the reason for the large difference.
\begin{figure}[!htbp]
\center{\epsfxsize 3.4 truein\epsfbox{OPE-time-twist3-abs-comparison.eps}\epsfxsize 3.4 truein\epsfbox{OPE-time-twist3-phase-comparison.eps}}
\caption{Comparison of the contributions from twist-3 DAs to $Q^2 |F_\pi(Q^2)|$ and the phase of $F_\pi(Q^2)$ with different expressions. The olive dashed curves labelled ``twist-3-refs" refer to the results obtained by replacing $t_1+t_2+t_3$ in Eq. (\ref{pion-FF}) with $t_1-t_2$, which was given in \cite{Raha2010-PLB} and then used in Refs. \cite{HaoChungHu2013-PLB,ShangChen2015-PLB}; the pink dash-dotted curves labelled ``twist-3-corrected" refer to the results obtained by replacing $t_1+t_2+t_3$ in Eq. (\ref{pion-FF}) with $t_1+t_2$; and the black solid curves labelled ``twist-3-full" refer to the results from Eq. (\ref{pion-FF}).}
\label{figure:OPE-twist3-comparison}
\end{figure}
In Fig. \ref{figure:OPE-twist3-comparison}, the olive dashed curves labelled ``twist-3-refs" refer to the results obtained by replacing $t_1+t_2+t_3$ in Eq. (\ref{pion-FF}) with $t_1-t_2$, which was given in \cite{Raha2010-PLB} and then used in Refs. \cite{HaoChungHu2013-PLB,ShangChen2015-PLB}; the pink dash-dotted curves labelled ``twist-3-corrected" refer to the results obtained by replacing $t_1+t_2+t_3$ in Eq. (\ref{pion-FF}) with $t_1+t_2$; and the black solid curves labelled ``twist-3-full" refer to the results from Eq. (\ref{pion-FF}). The numerical results ``twist-3-refs" are almost the same as the corresponding results in Fig. 5 of Ref. \cite{HaoChungHu2013-PLB}. The comparison of the results ``twist-3-refs" and ``twist-3-corrected" shows that there is a large cancellation between the contributions from the terms $t_1$ and $t_2$. The comparison of the results ``twist-3-corrected" and ``twist-3-full" shows that the contribution from the term $t_3$ is also important. This behavior of the $t_3$ contribution is very different from that in the space-like region, where the contribution from this term is small.
The numerical results for $Q^2|\tilde{F}_\pi(Q^2,\theta)|$ {\it vs.} $Q^2$ at $\theta =(1/9,2/9,1/3,4/9)\pi$ are presented in Fig. \ref{figure:FFbar-abs-vs-QQ}. The red dashed curves refer to the contribution from the twist-2 DA, the blue dotted curves to the contribution from the twist-3 DAs and the black solid curves to their sum. One can see that the magnitudes of $Q^2|\tilde{F}_\pi(Q^2,\theta)|$ are about $10\%-20\%$ of $Q^2|F_\pi(Q^2)|$ at small $\theta$, which means the absolute contributions from the TPE effects are not small. This is natural, since naively the ratio is expected to be of order $\alpha_{QED}/\alpha_S$ from Fig. \ref{figure:FF-OPE} and Fig. \ref{figure:Amp-TPE}. This property is different from the TPE corrections in elastic $ep$ scattering at small momentum transfer, where the relative corrections are expected to be of order $\alpha_{QED}$. Furthermore, $Q^2|\tilde{F}_\pi(Q^2,\theta)|$ shows a strong angle dependence, which is the most interesting property distinguishing it from $F_{\pi}(Q^2)$. The explicit dependence of $Q^2|\tilde{F}_\pi(Q^2,\theta)|$ on $\theta$ at $Q^2=(20,50)$ GeV$^2$ is presented in Fig. \ref{figure:FFbar-abs-vs-theta}.
\begin{figure}[!htbp]
\center{\epsfxsize 6 truein\epsfbox{FFbar-abs-vs-QQ-fit.eps}}
\caption{The numerical results for $Q^2|\tilde{F}_\pi(Q^2,\theta)|$ {\it vs.} $Q^2$ at $\theta=(1/9,2/9,1/3,4/9)\pi$ from twist-2 DA (red dashed), twist-3 DAs (blue dotted) and their sum (black solid), respectively.}
\label{figure:FFbar-abs-vs-QQ}
\end{figure}
\begin{figure}[htbp]
\center{\epsfxsize 6 truein\epsfbox{FFbar-abs-vs-theta-fit.eps}}
\caption{The numerical results for $Q^2|\tilde{F}_\pi(Q^2,\theta)|$ {\it vs.} $\theta$ at $Q^2=(20,50)$ GeV$^2$ from twist-2 DA (red dashed), twist-3 DA (blue dotted) and their sum (black solid), respectively.}
\label{figure:FFbar-abs-vs-theta}
\end{figure}
The normalized cross sections $d\sigma_{un}Q^4/\sin^2\theta$ from the OPE (black solid curves) and OPE+TPE (red dashed curves) are presented in Fig. \ref{figure:cross-section}, where one can see a manifest asymmetry in the angular dependence of the cross section after including the TPE effects. The existence of such an asymmetry is a direct signal of the TPE effects, and measurements of this asymmetry can help us understand the TPE effects.
\begin{figure}[htbp]
\center{\epsfxsize 7.5 truein\epsfbox{cross-section.eps}}
\caption{The numerical results for $d\sigma_{un}Q^4/\sin^2\theta$ {\it vs.} $\theta$ at $Q^2=(20,50)$ GeV$^2$ from the OPE (black solid) and OPE+TPE (red dashed), respectively.}
\label{figure:cross-section}
\end{figure}
In summary, in this work the TPE effects in the process $e^+e^- \rightarrow \pi^+\pi^-$ at large momentum transfer are discussed within perturbative QCD (pQCD). The TPE contributions to the cross section are calculated and we find that the asymmetry of the differential cross section in the scattering angle reaches about $10\%-20\%$ at small angles. The time-like electromagnetic form factor of the pion at leading order in $\alpha_S$ from the twist-3 DAs is also discussed, and a comparison of our results with those in the references is presented.
\section{Acknowledgments}
The author Hai-Qing Zhou would like to thank Hsiang-nan Li, Xing-Gang Wu and Shan Cheng for their kind and helpful discussions. This work is supported by the National Natural Science Foundations of China under Grant No. 11375044.
\section{Appendix}
In this Appendix, some expressions used in the practical calculation are listed.
The Sudakov factor $S(x,y,b_1,b_2,Q)$ \cite{Sudkov-factor-Sterman} is expressed as
\begin{eqnarray}
S(x,y,b_1,b_2,Q)&=&s(xQ,b_1)+s(yQ,b_2)+s((1-x)Q,b_1)+s((1-y)Q,b_2)\nonumber\\
&&-\frac{1}{\beta_0}\text{ln}\left(\frac{\hat{t}}{-\hat{b}_1}\right) -\frac{1}{\beta_0}\text{ln}\left(\frac{\hat{t}}{-\hat{b}_2}\right),
\end{eqnarray}
where
\begin{eqnarray}
s(xQ,1/b)&=&\frac{A^{(1)}}{2\beta_0}\hat{q}\text{ln} \left(\frac{\hat{q}}{-\hat{b}}\right) +\frac{A^{(2)}}{4\beta_0^2} \left(\frac{\hat{q}}{-\hat{b}}-1\right) -\frac{A^{(1)}}{2\beta_0}(\hat{b}+\hat{q})\nonumber\\
&&-\frac{4A^{(1)}\beta_1}{16\beta_0^3}\hat{q}
\left[\frac{1+\text{ln}(-2\hat{b})}{-\hat{b}} -\frac{1+\text{ln}(2\hat{q})}{\hat{q}}\right]\nonumber\\
&&-\left[\frac{A^{(2)}}{4\beta_0^2} -\frac{A^{(1)}}{4\beta_0} \text{ln}\left(\frac{1}{2}e^{2\gamma_E-1}\right)\right]\text{ln}\left(\frac{\hat{q}}{-\hat{b}}\right)\nonumber\\
&&-\frac{4A^{(1)}\beta_1}{32\beta_0^3} \left[\text{ln}^2(-2\hat{b})-\text{ln}^2(2\hat{q})\right],
\end{eqnarray}
with
\begin{eqnarray}
&&\hat{t}=\ln\Big(\frac{t}{\Lambda_{QCD}}\Big), \qquad t=\max(\sqrt{x}Q,1/b_1,1/b_2),\nonumber\\
&&\hat{b}=\ln(b\Lambda_{QCD}),\qquad \hat{q}=\ln\Big[\frac{xQ}{\sqrt{2}\Lambda_{QCD}}\Big],\nonumber\\
&&A^{(1)}=C_F=\frac{4}{3},\nonumber\\
&&A^{(2)}=(\frac{67}{27}-\frac{\pi^2}{9})N_c-\frac{10}{27}N_f+\frac{8}{3}\beta_0\ln(\frac{e^{\gamma_E}}{2}),\nonumber\\
&&\beta_0=\frac{11N_c-2N_f}{12}=\frac{9}{4},\qquad \beta_1=\frac{51N_c-19N_f}{24}=4,\nonumber\\
&& N_c=N_f=3.
\end{eqnarray}
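The numerical values quoted for $\beta_0$ and $\beta_1$ follow directly from $N_c=N_f=3$, and $A^{(2)}$ can be evaluated in the same way. A quick check in Python:

```python
import math

N_c = N_f = 3
beta0 = (11 * N_c - 2 * N_f) / 12   # = 9/4 for N_c = N_f = 3
beta1 = (51 * N_c - 19 * N_f) / 24  # = 4
A1 = 4.0 / 3.0                      # C_F
gamma_E = 0.5772156649015329        # Euler-Mascheroni constant
A2 = ((67 / 27 - math.pi ** 2 / 9) * N_c
      - (10 / 27) * N_f
      + (8 / 3) * beta0 * math.log(math.exp(gamma_E) / 2))
```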
The jet function $S_t(x_i)$ \cite{jet-function-LiHN2002} is expressed as
\begin{eqnarray}
S_t(x_i)=\frac{2^{1+2c}\Gamma(3/2+c)}{\sqrt{\pi}\Gamma(1+c)}[x_i(1-x_i)]^c.
\end{eqnarray}
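With the prefactor written above, $S_t$ is normalized to unity on $[0,1]$ for any $c>-1/2$ (a consequence of the Legendre duplication formula for the Gamma function). A numerical sketch for the value $c=0.4$ used in this work:

```python
import math

def S_t(x, c=0.4):
    # jet function / threshold resummation factor
    pref = 2 ** (1 + 2 * c) * math.gamma(1.5 + c) / (math.sqrt(math.pi) * math.gamma(1 + c))
    return pref * (x * (1 - x)) ** c

def simpson(f, a, b, n=20000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0
```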
The running strong coupling $\alpha_S$ \cite{PDG2016} is expressed as
\begin{eqnarray}
\alpha_s(\mu^2)=\frac{\pi}{\beta_0\text{ln}(\mu^2/\Lambda_{QCD}^2)} -\frac{\pi\beta_1\text{ln}(\text{ln}(\mu^2/\Lambda_{QCD}^2))} {\beta_0^3\text{ln}^2(\mu^2/\Lambda_{QCD}^2)}.
\end{eqnarray}
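The two-loop running coupling above is straightforward to evaluate; with $\Lambda=0.2$ GeV it decreases with the scale as expected. A small check (the scale values are arbitrary test points):

```python
import math

def alpha_s(mu2, Lam=0.2, N_c=3, N_f=3):
    # two-loop running strong coupling; mu2 in GeV^2, Lam in GeV
    beta0 = (11 * N_c - 2 * N_f) / 12
    beta1 = (51 * N_c - 19 * N_f) / 24
    L = math.log(mu2 / Lam ** 2)
    return math.pi / (beta0 * L) - math.pi * beta1 * math.log(L) / (beta0 ** 3 * L ** 2)
```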
The motivation for this work arises from certain tasks in image processing, where the robustness of methods plays an important role.
In this context, the Student-$t$ distribution and the closely related Student-$t$ mixture models became popular
in various image processing tasks.
In~\cite{VS14} it has been shown that Student-$t$ mixture models are superior
to Gaussian mixture models for mode\-ling image patches and the authors proposed an application in image compression.
Image denoising based on Student-$t$ models was addressed in~\cite{LS2019} and image
deblurring in~\cite{DHWMZ2019,YYG2018}.
Further applications include robust image segmentation~\cite{BM18,NW12,SNG07} as well as robust registration~\cite{GNL09,ZZDZC14}.
In one dimension and for $\nu=1$, the Student-$t$ distribution coincides with the one-dimensional Cauchy distribution.
This distribution has been proposed to model a very impulsive noise behavior and
one of the first papers which suggested a variational approach in connection with wavelet shrinkage
for denoising of images corrupted by Cauchy noise was
\cite{ALP2002}. A variational method consisting of a data term
that resembles the noise statistics and a total variation regularization term has been introduced in~\cite{MDHY18,SDZ15}.
Based on an ML approach the authors of~\cite{LPS18} introduced a so-called
generalized myriad filter that estimates both the location and the scale parameter of the Cauchy distribution.
They used the filter in a nonlocal denoising approach,
where for each pixel of the image they chose as samples of the distribution those pixels having a similar neighborhood
and replaced the initial pixel by its filtered version.
We also want to mention that a unified framework for images corrupted by white noise
that can handle (range constrained) Cauchy noise as well was suggested in~\cite{LMSS2018}.
In contrast to the above pixelwise replacement, the state-of-the-art algorithm of Lebrun et al.~\cite{LBM13}
for denoising images corrupted by white Gaussian noise
restores the image patchwise based on a maximum a posteriori approach.
In the Gaussian setting, their approach is equivalent to
minimum mean square error estimation, and more general, the resulting estimator can be
seen as a particular instance of a best linear unbiased estimator (BLUE).
For denoising images corrupted by additive Cauchy noise, a similar approach was addressed in \cite{LS2019} based
on ML estimation for the family of Student-t distributions, of which the
Cauchy distribution forms a special case.
The authors call this approach generalized multivariate myriad filter.
However, all these approaches assume that the degree of freedom parameter $\nu$ of the Student-$t$
distribution is known, which might not be the case in practice. In this paper we consider the estimation of the degree of freedom parameter based on an ML approach.
In contrast to maximum likelihood estimators of the location and/or scatter parameter(s) $\mu$ and $\Sigma$,
to the best of our knowledge the question of existence of a joint maximum likelihood estimator has not been analyzed before and in this paper we provide first results in this direction.
Usually the likelihood function of the Student-$t$ distributions and mixture models
are minimized using the EM algorithm derived e.g.\ in~\cite{LLT89,McLK1997,MP98,PM00}.
For fixed $\nu$, there exists an accelerated EM algorithm~\cite{KTV94,MVD97,vanDyk1995}
which appears to be more efficient than the classical one for smaller parameters $\nu$.
We examine the convergence of the accelerated version when the
degree of freedom parameter $\nu$ also has to be estimated. For unknown degrees of freedom, there exists an accelerated version of the EM algorithm as well,
the so-called ECME algorithm~\cite{LR95}, which differs from our algorithm.
Further, we propose two modifications of the $\nu$ iteration step which lead to efficient algorithms
for a wide range of parameters $\nu$. Finally, we address further accelerations of our algorithms by
the squared iterative methods (SQUAREM) \cite{VR2008} and the
damped Anderson acceleration with restarts and $\epsilon$-monotonicity (DAAREM) \cite{HV2019}.
The paper is organized as follows:
In Section \ref{sec:ML} we introduce the Student-$t$ distribution, the negative $\log$-likelihood function $L$
and their derivatives.
The question of the existence of a minimizer of $L$ is addressed in Section \ref{sec:exits_nu_scatter}.
Section \ref{sec:zero_F} deals with the solution of the equation arising when setting the gradient of $L$
with respect to $\nu$ to zero. The results of this section will be important for the convergence
consideration of our algorithms in the
Section \ref{sec:algs}. We propose three alternatives of the classical EM algorithm and prove
that the objective function $L$ decreases for the iterates produced by these algorithms.
Finally, we provide two kinds of numerical results in Section \ref{sec:algs}.
First, we compare the different algorithms by numerical examples which indicate that the new $\nu$ iterations
are very efficient for estimating $\nu$ of different magnitudes.
Second, we come back to the original motivation of this paper and estimate the degree of freedom parameter $\nu$
from images corrupted by one-dimensional Student-$t$ noise.
\section{Likelihood of the Multivariate Student-$t$ Distribution} \label{sec:ML}
The density function of the
$d$-dimensional Student-$t$ distribution $T_\nu(\mu,\Sigma)$ with
$\nu>0$ degrees of freedom, \emph{location} parameter $\mu\in \mathbb{R}^d$ and symmetric, positive definite \emph{scatter matrix} $\Sigma\in \SPD(d)$
is given by
\begin{equation}\label{pdf}
p(x|\nu,\mu,\Sigma) =
\frac{\Gamma\left(\frac{d+\nu}{2}\right)}{\Gamma\left(\frac{\nu}{2}\right)\, \nu^{\frac{d}{2}} \, \pi^{\frac{d}{2}} \,
{\abs{\Sigma}}^{\frac{1}{2}}} \, \frac{1}{\left(1 +\frac1\nu(x-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1}(x-\mu) \right)^{\frac{d+\nu}{2}}},
\end{equation}
with the \emph{Gamma function}
$
\Gamma(s) \coloneqq\int_0^\infty t^{s-1}\mathrm{e}^{-t}\dx[t]
$.
The expectation of the Student-$t$ distribution is $\mathbb{E}(X) = \mu$ for $\nu > 1$
and the covariance matrix is given by $\Cov(X) =\frac{\nu }{\nu-2} \Sigma$ for $\nu > 2$,
otherwise the quantities are undefined.
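For $d=1$ the density reduces to the familiar univariate form, and for $\nu=1$ it coincides with the Cauchy density $1/(\pi(1+x^2))$. A small sketch verifying this special case and the normalization of the density (pure Python, composite Simpson rule):

```python
import math

def student_t_pdf_1d(x, nu, mu=0.0, sigma2=1.0):
    # d = 1 special case of p(x | nu, mu, Sigma)
    c = math.gamma((1 + nu) / 2) / (math.gamma(nu / 2) * math.sqrt(nu * math.pi * sigma2))
    return c * (1 + (x - mu) ** 2 / (nu * sigma2)) ** (-(1 + nu) / 2)

def cauchy_pdf(x):
    return 1.0 / (math.pi * (1.0 + x * x))

def simpson(f, a, b, n=20000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0
```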
The smaller the value of $\nu$, the heavier are the tails of the $T_\nu(\mu,\Sigma)$ distribution.
For $\nu \to \infty$,
the Student-$t$ distribution $T_\nu(\mu,\Sigma)$ converges to the normal distribution $\mathcal{N}(\mu,\Sigma)$ and for $\nu = 0$
it is related to the projected normal distribution on the sphere $\mathbb{S}^{d-1}\subset\mathbb{R}^d$.
Figure~\ref{Fig:different_nu}
illustrates this behavior for the one-dimensional standard Student-$t$ distribution.
\begin{figure}[thb]
\centering
\centering
{\includegraphics[width=0.4\textwidth]{images/Student_t_nu.pdf}}
\caption{Standard Student-$t$ distribution $T_\nu(0,1)$
for different values of $\nu$ in comparison with the standard normal distribution $\mathcal{N}(0,1)$.}\label{Fig:different_nu}
\end{figure}
As the normal distribution, the $d$-dimensional Student-$t$ distribution belongs to the class of \emph{elliptically symmetric distributions}.
These distributions are stable under linear transforms in the following sense: Let $X\sim T_\nu(\mu,\Sigma)$ and $A\in \mathbb{R}^{d\times d}$ be an invertible matrix and let $b\in \mathbb{R}^d$.
Then $AX + b\sim T_\nu(A\mu + b, A\Sigma A^{\mbox{\tiny{T}}})$. Furthermore, the Student-$t$ distribution $T_\nu(\mu,\Sigma)$ admits the following \emph{stochastic representation}, which can be used to generate samples from $T_\nu(\mu,\Sigma)$ based on samples from the multivariate standard normal distribution $\mathcal{N}(0,I)$ and the Gamma distribution $\Gamma\bigl(\tfrac{\nu}{2},\tfrac{\nu}{2}\bigr)$: Let $Z\sim \mathcal{N}(0,I)$ and $Y\sim \Gamma\bigl(\tfrac{\nu}{2},\tfrac{\nu}{2}\bigr)$ be independent, then
\begin{equation} X = \mu + \frac{\Sigma^{\frac{1}{2}}Z}{\sqrt{Y}}\sim T_\nu(\mu,\Sigma).\label{stochastic_representation}
\end{equation}
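The stochastic representation above directly yields a sampler. A one-dimensional sketch in pure Python; for $\nu>2$ the sample mean and variance should approach $\mu$ and $\frac{\nu}{\nu-2}\sigma^2$, which can serve as a sanity check:

```python
import math
import random

def sample_student_t_1d(nu, mu, sigma, n, seed=0):
    # X = mu + sigma * Z / sqrt(Y), Z ~ N(0,1), Y ~ Gamma(nu/2, rate nu/2)
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        y = rng.gammavariate(nu / 2.0, 2.0 / nu)  # scale 2/nu corresponds to rate nu/2
        out.append(mu + sigma * z / math.sqrt(y))
    return out
```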
For i.i.d.\ samples $x_i \in \mathbb R^d$, $i=1,\ldots,n$,
the likelihood function of the Student-$t$ distribution $T_\nu(\mu,\Sigma)$ is given by
\begin{equation*}
\mathcal{L}(\nu,\mu,\Sigma|x_1,\ldots,x_n)
= \frac{\Gamma\left(\frac{d+\nu}{2}\right)^n}{\Gamma\left(\frac{\nu}{2}\right)^n(\pi \nu)^{\frac{nd}{2}}\abs{\Sigma}^{\frac{n}{2}} }
\prod_{i=1}^n \frac{1}{\bigl(1+\frac{1}{\nu}(x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)\bigr)^{\frac{d+\nu}{2}}},
\end{equation*}
and the log-likelihood function by
\begin{align}
\ell(\nu,\mu,\Sigma|x_1,\ldots,x_n)
=&
\, n \, \log\Bigl(\Gamma\left( \tfrac{d+\nu}{2}\right)\Bigr)
- n \log \Bigl( \Gamma\left(\tfrac{\nu}{2}\right)\Bigr)-\tfrac{nd}{2}\log(\pi\nu) \\
&- \frac{n}{2}\log \abs{\Sigma} - \tfrac{d+\nu}{2} \sum_{i=1}^n \log\left(1+\frac{1}{\nu}(x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu) \right).
\end{align}
In the following, we are interested in the negative log-likelihood function, which up to the factor $\frac{2}{n}$ and weights $w_i = \frac{1}{n}$ reads as
\begin{align} \label{ML}
L(\nu,\mu,\Sigma)
&= -2\log\Bigl(\Gamma\left( \tfrac{d+\nu}{2}\right)\Bigr)+ 2 \log\Bigl(\Gamma\left( \tfrac{\nu}{2}\right)\Bigr) - \nu \log(\nu) \\
&\quad + (d + \nu)\sum_{i=1}^n w_i \log\left(\nu + (x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu) \right)+ \log \abs{\Sigma}.
\end{align}
In this paper, we allow for arbitrary weights from the open probability simplex
$
\mathring \Delta_n \coloneqq \big\{w = (w_1,\ldots,w_n) \in \mathbb R_{>0}^n: \sum_{i=1}^n w_i = 1 \big\}
$.
In this way, we might express different levels of confidence in single samples or handle the occurrence of multiple samples.
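For uniform weights $w_i=1/n$, the objective $L$ differs from $-\frac{2}{n}\ell$ only by the constant $d\log\pi$ that was dropped above. A one-dimensional sketch checking this relation numerically (pure Python; the sample values are arbitrary):

```python
import math

def neg_obj_L(nu, mu, sigma2, xs, ws):
    # the rescaled negative log-likelihood L(nu, mu, Sigma) for d = 1
    d = 1
    s = sum(w * math.log(nu + (x - mu) ** 2 / sigma2) for x, w in zip(xs, ws))
    return (-2 * math.lgamma((d + nu) / 2) + 2 * math.lgamma(nu / 2)
            - nu * math.log(nu) + (d + nu) * s + math.log(sigma2))

def loglik(nu, mu, sigma2, xs):
    # log-likelihood ell(nu, mu, Sigma) for d = 1
    d = 1
    n = len(xs)
    s = sum(math.log(1 + (x - mu) ** 2 / (nu * sigma2)) for x in xs)
    return (n * math.lgamma((d + nu) / 2) - n * math.lgamma(nu / 2)
            - n * d / 2 * math.log(math.pi * nu) - n / 2 * math.log(sigma2)
            - (d + nu) / 2 * s)
```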
Using
$
\frac{\partial \log(\abs{X})}{\partial X} = X^{-1}
$ and
$
\frac{\partial a^{\mbox{\tiny{T}}} X^{-1}b }{\partial X} =- {(X^{-{\mbox{\tiny{T}}}})}a b^{\mbox{\tiny{T}}} {(X^{-{\mbox{\tiny{T}}}})},
$
see~\cite{PP08}, the derivatives of $L$ with respect to $\mu$, $\Sigma$ and $\nu$ are given by
\begin{align*}
\frac{\partial L}{\partial \mu}(\nu,\mu,\Sigma)
& = -2(d+\nu )\sum_{i=1}^n w_i \frac{ \Sigma^{-1}(x_i-\mu)}{\nu + (x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)},\\
\frac{\partial L}{\partial \Sigma}(\nu,\mu,\Sigma)
& = - (d+\nu ) \sum_{i=1}^n w_i \frac{ \Sigma^{-1}(x_i-\mu)(x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} }{\nu + (x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)}+\Sigma^{-1},\\
\frac{\partial L}{\partial \nu}(\nu,\mu,\Sigma )
& =
\phi\left(\frac{\nu}{2}\right) - \phi \left(\frac{\nu + d}{2}\right) + \sum_{i=1}^n w_i \left( \frac{\nu + d}{\nu + (x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)}\right.\\
& \quad \left. - \log\left(\frac{\nu + d}{\nu + (x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)} \right) - 1\right),
\end{align*}
with
$$\phi(x) \coloneqq \psi(x) - \log (x), \qquad x >0$$
and the \emph{digamma function}
$$
\psi(x) = \frac{\mathrm{d}}{\mathrm{d}x}\log\left(\Gamma(x)\right) = \frac{\Gamma'(x)}{\Gamma(x)}.
$$
Setting the derivatives to zero results in the equations
\begin{align}
0 &= \sum_{i=1}^n w_i \frac{x_i-\mu}{\nu+(x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)},\label{mult_cond_a}\\
I &= (d+\nu)\sum_{i=1}^n w_i \frac{\Sigma^{-\frac{1}{2}}(x_i-\mu)(x_i-\mu)^{\mbox{\tiny{T}}} {\Sigma^{-\frac{1}{2}}} }{\nu+(x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)} ,\label{mult_cond_S}\\
0 &= F\left(\frac{\nu }{2} \right) \coloneqq \phi\left(\frac{\nu }{2}\right) - \phi\left(\frac{\nu +d}{2}\right) \\
&\quad + \sum_{i=1}^n w_i \left( \frac{\nu + d}{\nu + (x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)}- \log\left(\frac{\nu + d}{\nu + (x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)} \right) - 1\right). \label{ML_nu}
\end{align}
Computing the trace of both sides of~\eqref{mult_cond_S} and using the linearity and permutation invariance of the trace operator we obtain
\begin{align}
d& = \tr(I)
=
(d+\nu)\sum_{i=1}^n w_i \frac{\tr\bigl(\Sigma^{-\frac{1}{2}}(x_i-\mu)(x_i-\mu)^{\mbox{\tiny{T}}} {\Sigma^{-\frac{1}{2}}}\bigr)}{\nu+(x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)} \\
&= (d+\nu)\sum_{i=1}^n w_i \frac{(x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)}{\nu+(x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)},
\end{align}
which yields
\begin{equation} \label{trace_1}
1= (d+\nu) \sum_{i=1}^n w_i \frac{1}{\nu+(x_i-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i-\mu)}.
\end{equation}
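For fixed $\nu$, the scatter equation \eqref{mult_cond_S} can be solved by the classical fixed-point (EM-type) iteration, and at the solution the identity \eqref{trace_1} must hold. A one-dimensional sketch with $\mu=0$ (pure Python; the sample values and the fixed $\nu$ are arbitrary test choices):

```python
def scatter_fixed_point_1d(xs, ws, nu, iters=500):
    # fixed-point iteration for d = 1, mu = 0:
    # sigma2 <- (d + nu) * sum_i w_i x_i^2 / (nu + x_i^2 / sigma2)
    sigma2 = 1.0
    for _ in range(iters):
        sigma2 = (1 + nu) * sum(w * x * x / (nu + x * x / sigma2)
                                for x, w in zip(xs, ws))
    return sigma2
```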
We are interested in critical points of the negative log-likelihood function $L$, i.e. in solutions $(\mu,\Sigma,\nu)$ of
\eqref{mult_cond_a} - \eqref{ML_nu}, and in particular in minimizers of $L$.
\section{Existence of Critical Points}\label{sec:exits_nu_scatter}
In this section, we examine whether the negative log-likelihood function $L$ has a minimizer, where we restrict our attention to the case $\mu = 0$. For an approach how to extend the results to arbitrary $\mu$ for fixed $\nu$ we refer to \cite{LS2019}. To the best of our knowledge, this is the first work that provides results in this direction. The question of existence is, however, crucial in the context of ML estimation, since it lays the foundation for any convergence result for the EM algorithm or its variants. In fact, the authors of~\cite{LLT89} observed the divergence of the EM algorithm in some of their numerical experiments, which is in accordance with our observations.
For \emph{fixed} $\nu >0$, it is known that there exists a unique solution of \eqref{mult_cond_S},
and for $\nu = 0$ there exist solutions of \eqref{mult_cond_S}
which differ only by a multiplicative positive constant, see, e.g., \cite{LS2019}.
In contrast, if we do not fix $\nu$, we roughly have to distinguish between two cases:
either the samples tend to come from a Gaussian distribution, i.e.\ $\nu\to\infty$, or they do not.
The results are presented in Theorem \ref{thm:Existence of Scatter}.
We make the following general assumption:
\begin{Assumption}\label{Ass:lin_ind}
Any subset of $\le d$ samples $x_i$, $i \in \{1,\ldots,n\}$ is linearly independent and
$\max\{w_i:i=1,\ldots,n\}<\frac{1}{d }$.
\end{Assumption}
For $\mu = 0$, the negative log-likelihood function becomes
\begin{align}
L(\nu,\Sigma)
&\coloneqq -2\log\left(\Gamma\left(\frac{d+\nu}{2}\right)\right)+2\log\left(\Gamma\left(\frac\nu2\right)\right)-\nu\log(\nu)\\
&\quad +(d+\nu)\sum_{i=1}^n w_i \log\left(\nu+ x_i^{\mbox{\tiny{T}}}\Sigma^{-1}x_i\right)+\log(\abs{\Sigma})\\
&=-2\log\left(\Gamma\left(\frac{d+\nu}{2}\right)\right)+2\log\left(\Gamma\left(\frac\nu2\right)\right)-\nu\log(\nu)\\
&\quad +(d+\nu)\log(\nu)+(d+\nu)\sum_{i=1}^nw_i\log\left(1+ \frac1\nu x_i^{\mbox{\tiny{T}}}\Sigma^{-1}x_i\right)+\log(\abs{\Sigma}).
\end{align}
Further, for a fixed $\nu>0$, set
\begin{equation*}
L_\nu(\Sigma) \coloneqq (d+\nu)\sum_{i=1}^n w_i \log\left(\nu+ x_i^{\mbox{\tiny{T}}}\Sigma^{-1}x_i\right)+\log(\abs{\Sigma}).
\end{equation*}
To prove the next existence theorem we will need two lemmas, whose proofs are given in the appendix.
\begin{Theorem} \label{thm:Existence of Scatter}
Let $x_i \in \mathbb R^d$, $i=1,\ldots,n$ and $w \in \mathring \Delta_n$ fulfill Assumption \ref{Ass:lin_ind}.
Then exactly one of the following statements holds:
\begin{enumerate}
\item[(i)] There exists a minimizing sequence $(\nu_r,\Sigma_r)_r$ of $L$,
such that $\{\nu_r:r\in\mathbb N\}$ has a finite cluster point. Then we have
$\argmin_{(\nu,\Sigma)\in\mathbb{R}_{>0}\times\mathrm{SPD}(d)} L(\nu,\Sigma)\neq\emptyset$
and every
$(\hat \nu,\hat \Sigma)\in\argmin_{(\nu,\Sigma)\in\mathbb{R}_{>0}\times\mathrm{SPD}(d)}L(\nu,\Sigma)$
is a critical point of $L$.
\item[(ii)]
For every minimizing sequence $(\nu_r,\Sigma_r)_r$ of $L(\nu,\Sigma)$ we have
$\lim\limits_{r\to\infty} \nu_r=\infty$. Then
$(\Sigma_r)_r$ converges to the maximum likelihood estimator $\hat\Sigma=\sum_{i=1}^n w_ix_ix_i^{\mbox{\tiny{T}}}$
of the normal distribution $\mathcal{N}(0,\Sigma)$.
\end{enumerate}
\end{Theorem}
\begin{proof}
\textbf{Case 1:} Assume that there exists a minimizing sequence $(\nu_r,\Sigma_r)_r$ of $L$, such that $(\nu_r)_r$ has a bounded subsequence.
In particular, using Lemma \ref{lem:rike}, we have that $(\nu_r)_r$ has a cluster point $\nu^* >0$
and a subsequence $(\nu_{r_k})_k$ converging to $\nu^*$.
Clearly, the sequence $(\nu_{r_k},\Sigma_{r_k})_k$ is again a minimizing sequence so that we skip the second index in the following.
By Lemma \ref{lem:lik}, the set $\overline{\{\Sigma_r:r\in\mathbb N\}}$
is a compact subset of $\mathrm{SPD}(d)$.
Therefore there exists a subsequence $(\Sigma_{r_k})_k$
which converges to some $\Sigma^*\in\mathrm{SPD}(d)$. Now we have by continuity of $L(\nu,\Sigma)$ that
$$
L(\nu^*,\Sigma^*)=\lim\limits_{k\to\infty}L(\nu_{r_k},\Sigma_{r_k})=\min_{(\nu,\Sigma)\in\mathbb{R}_{>0}\times\mathrm{SPD}(d)} L(\nu,\Sigma).
$$
\textbf{Case 2:} Assume that for every minimizing sequence $(\nu_r,\Sigma_r)_r$ it holds that $\nu_r\to\infty$ as $r\to \infty$.
We rewrite the negative log-likelihood function as
\begin{align}
L(\nu,\Sigma)
&=
2\log \left( \frac{\Gamma\left(\frac\nu2\right)\left(\frac{\nu}{2}\right)^{\frac{d}{2}} }{ \Gamma\left(\frac{d+\nu}{2} \right)} \right)
+d \log(2)
+(d+\nu) \sum_{i=1}^n w_i
\log \left(1+\frac1\nu x_i^{\mbox{\tiny{T}}} \Sigma^{-1}x_i\right)+\log(\abs{\Sigma}).
\end{align}
Since
\[\lim_{\nu \rightarrow \infty} \frac{\Gamma\left(\frac\nu2\right)\left(\frac{\nu}{2}\right)^{\frac{d}{2}} }{ \Gamma\left(\frac{d+\nu}{2} \right)}=1,\]
we obtain
\begin{equation}\label{eq:asym_likelihood}
\lim\limits_{r\to\infty}L(\nu_r,\Sigma_r)=
d\log(2)+ \lim\limits_{r \to \infty} \left( (d+\nu_r)\sum_{i=1}^nw_i\log\left(1+\frac1{\nu_r} x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right)+\log(\abs{\Sigma_r}) \right).
\end{equation}
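The Gamma-ratio limit used here follows from Stirling's formula. It can also be observed numerically via log-gamma evaluations (the helper below is our own sketch):

```python
import math

def gamma_ratio(nu, d):
    # Gamma(nu/2) (nu/2)^(d/2) / Gamma((d+nu)/2), computed stably in log space
    return math.exp(math.lgamma(nu / 2) + (d / 2) * math.log(nu / 2)
                    - math.lgamma((d + nu) / 2))

vals = [gamma_ratio(nu, d=3) for nu in (10.0, 100.0, 1000.0, 10000.0)]
```

For $d=3$ the ratio moves from roughly $0.93$ at $\nu=10$ towards $1$ as $\nu$ grows.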
Next we show by contradiction that $\overline{\{\Sigma_r:r\in\mathbb N\}}$ is a bounded subset of $\mathrm{SPD}(d)$:
Denote the eigenvalues of $\Sigma_r$ by $\lambda_{r1}\geq\cdots\geq\lambda_{rd}$.
Assume that either $\{\lambda_{r1}:r\in\mathbb N\}$ is unbounded
or that $\{\lambda_{rd}:r\in\mathbb N\}$
has zero as a cluster point.
Then, we know by \cite[Theorem 4.3]{LS2019}
that there exists a subsequence of $(\Sigma_r)_r$, which we again denote by $(\Sigma_r)_r$,
such that for any fixed $\nu>0$ it holds
\begin{equation}\label{eq:fixed_nu_to_infty}
\lim\limits_{r\to\infty} L_\nu (\Sigma_r)=\infty.
\end{equation}
Since for every fixed $x>0$ the sequence $k\mapsto\left(1+\frac{x}{k}\right)^k$ is monotone increasing, we have for $\nu_r\geq d+1$
\begin{align}
(d+\nu_r)\sum_{i=1}^n w_i \log\left(1+\frac1{\nu_r}x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right)
&=\sum_{i=1}^n w_i \log\left(\left(1+\frac1{\nu_r}x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right)^{\nu_r+d}\right)\\
&\geq \sum_{i=1}^n w_i \log\left(\left(1+\frac1{\nu_r}x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right)^{\nu_r}\right)\\
&\geq \sum_{i=1}^n w_i \log\left(\left(1+\frac1{d+1}x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right)^{d+1}\right)\\
&= (d+1)\sum_{i=1}^n w_i \log\left(1+\frac1{d+1}x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right)\\
&\geq (d+1)\sum_{i=1}^n w_i \log\left(1+x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right) - (d+1)\log(d+1).
\end{align}
By \eqref{eq:asym_likelihood} this yields
\begin{align}
\lim\limits_{r\to\infty}L(\nu_r,\Sigma_r)
&\geq
d\log(2) - (d+1)\log(d+1) + \lim\limits_{r\to\infty} \left( (d+1)\sum_{i=1}^n w_i \log\left(1+x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right)+\log(\abs{\Sigma_r}) \right)
\\
&=d\log(2)- (d+1)\log(d+1) +\lim\limits_{r\to\infty}L_1(\Sigma_r)=\infty.
\end{align}
This contradicts the assumption that $(\nu_r,\Sigma_r)_r$ is a minimizing sequence of $L$.
Hence $\overline{\{\Sigma_r:r\in\mathbb N\}}$ is a bounded subset of $\mathrm{SPD}(d)$.
\\[1ex]
Finally, we show that any subsequence of $(\Sigma_r)_r$ has a subsequence which converges to $\hat\Sigma=\sum_{i=1}^n w_i x_ix_i^{\mbox{\tiny{T}}}$.
Then the whole sequence $(\Sigma_r)_r$ converges to $\hat \Sigma$.\\
Let $(\Sigma_{r_k})_k$ be a subsequence of $(\Sigma_r)_r$.
Since it is bounded, it has a convergent subsequence $(\Sigma_{r_{k_l}})_l$
which converges to some $\tilde\Sigma\in\overline{\{\Sigma_r:r\in\mathbb N\}}\subset\mathrm{SPD}(d)$.
For simplicity, we denote $(\Sigma_{r_{k_l}})_l$ again by $(\Sigma_r)_r$.
Since $(\Sigma_r)_r$ converges in $\mathrm{SPD}(d)$, we know that also $(x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i)_r$ converges and is bounded.
By $\lim\limits_{r\to\infty}\nu_r=\infty$ we know that the functions $x\mapsto\left(1+\frac{x}{\nu_r}\right)^{\nu_r}$
converge locally uniformly to $x\mapsto \exp(x)$ as $r\to\infty$.
Thus we obtain
\begin{align}
&
\lim\limits_{r\to\infty}(d+\nu_r)\sum_{i=1}^n w_i \log\left(1+\frac1{\nu_r}x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right)\\
&=
\lim\limits_{r\to\infty}\sum_{i=1}^n w_i\log\left(\left(1+\frac1{\nu_r}x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right)^{d+\nu_r}\right)
\\
&=
\lim\limits_{r\to\infty} \sum_{i=1}^n w_i \log\left(\left(1+\frac1{\nu_r}x_i^{\mbox{\tiny{T}}}
\Sigma_r^{-1}x_i\right)^{\nu_r}\left(1+\frac1{\nu_r}x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right)^d\right)\\
&=
\sum_{i=1}^n w_i \log\left(\lim\limits_{r\to\infty}\left(1+\frac1{\nu_r}x_i^{\mbox{\tiny{T}}} \Sigma_r^{-1}x_i\right)^{\nu_r}\right)\\
&=
\sum_{i=1}^n w_i \log\left(\exp(x_i^{\mbox{\tiny{T}}}\tilde{\Sigma}^{-1}x_i)\right)=\sum_{i=1}^n w_ix_i^{\mbox{\tiny{T}}} \tilde\Sigma^{-1}x_i.
\end{align}
Hence we have
\begin{align}
\inf_{(\nu,\Sigma)\in\mathbb{R}_{>0}\times\mathrm{SPD}(d)}L(\nu,\Sigma)=\lim\limits_{r\to\infty} L(\nu_r,\Sigma_r)
=d\log(2)+\sum_{i=1}^n w_ix_i^{\mbox{\tiny{T}}}\tilde\Sigma^{-1}x_i+\log(|\tilde\Sigma|).
\end{align}
By taking the derivative with respect to $\Sigma$ we see that the right-hand side is minimal if and only if
$\Sigma=\hat\Sigma=\sum_{i=1}^nw_ix_ix_i^{\mbox{\tiny{T}}}$.
On the other hand, by similar computations as above we get
\begin{align}
\inf_{(\nu,\Sigma)\in\mathbb{R}_{>0}\times\mathrm{SPD}(d)}L(\nu,\Sigma)
&\leq
\lim\limits_{r\to\infty} L(\nu_r,\hat\Sigma)\\
&= d\log(2) + \log(|\hat\Sigma|)
+ \lim\limits_{r \rightarrow \infty} (d+\nu_r) \sum_{i=1}^n w_i \log \big(1+\frac1{\nu_r}x_i^{\mbox{\tiny{T}}} \hat \Sigma^{-1}x_i\big)\\
&= d\log(2) + \log(|\hat\Sigma|) + \sum_{i=1}^n w_ix_i^{\mbox{\tiny{T}}} \hat\Sigma^{-1}x_i,
\end{align}
so that $\tilde\Sigma=\hat\Sigma$. This finishes the proof.
\end{proof}
\section{Zeros of $F$}\label{sec:zero_F}
In this section, we are interested in the existence of solutions of
\eqref{ML_nu}, i.e., in zeros of $F$ for arbitrary fixed $\mu$ and $\Sigma$.
Setting $x \coloneqq \frac{\nu}{2} > 0$, $t \coloneqq \frac{d}{2}$ and
$$
s_i \coloneqq \frac12 (x_i - \mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i - \mu), \quad i=1,\ldots,n,
$$
we rewrite the function $F$ in \eqref{ML_nu} as
\begin{align} \label{function_F}
F(x) &= \phi (x) - \phi(x+t) +
\sum_{i=1}^n w_i \left( \frac{x+t}{x + s_i}- \log\left(\frac{x + t}{x + s_i} \right) - 1\right)
\\
&= \sum_{i=1}^n w_i F_{s_i} (x)
=
\sum_{i=1}^n w_i \big( A(x) + B_{s_i}(x) \big),
\end{align}
where
\begin{equation} \label{Fs}
F_s(x) \coloneqq A(x) + B_s(x)
\end{equation}
and
\begin{align}\label{A+B}
A(x) \coloneqq \phi (x) - \phi(x+t),\qquad
B_s (x) \coloneqq \frac{x+t}{x + s}- \log\left(\frac{x + t}{x + s} \right) - 1.
\end{align}
The digamma function $\psi$ and $\phi = \psi - \log(\cdot)$ are well studied in the literature, see \cite{AS65}.
The value $\phi(x)$ is the expectation of the logarithm of a $\Gamma(x,x)$-distributed random variable.
It holds $-\frac{1}{x} < \phi(x) < - \frac{1}{2x}$ and it is well-known that
$-\phi$ is \emph{completely monotone}. This implies that the negative of $A$
is also completely monotone, i.e. for all $x > 0$ and $m \in \mathbb N_0$ we have
\begin{equation}\label{cm_A}
(-1)^{m+1} \phi^{(m)} (x) > 0, \qquad (-1)^{m+1} A^{(m)} (x) > 0,
\end{equation}
in particular $A < 0$, $A' > 0$ and $A'' < 0$.
Further, it is easy to check that
\begin{align}
&\lim_{x\rightarrow 0} \phi(x) = -\infty, \qquad \lim_{x\rightarrow \infty} \phi(x) = 0^-,\label{asymp_phi}\\
&\lim_{x\rightarrow 0} A(x) = -\infty, \qquad \lim_{x\rightarrow \infty} A(x) = 0^-.\label{asymp_A}
\end{align}
On the other hand, we have $B_s(x) \equiv 0$ if $s=t$, in which case $F_s = A < 0$ and therefore has no zero.
If $s \not = t$, then $B_s$ is \emph{completely monotone}, i.e., for all $x > 0$ and $m \in \mathbb N_0$,
\begin{equation}\label{cm_B}
(-1)^m B_s^{(m)} (x) > 0,
\end{equation}
in particular $B_s> 0$, $B_s' < 0$ and $B_s'' >0$,
and
\begin{equation}\label{asymp_B}
B_s(0) = \frac{t}{s} - \log \left( \frac{t}{s} \right) - 1 > 0, \qquad \lim_{x\rightarrow \infty} B_s (x) = 0^+.
\end{equation}
Hence we have
\begin{equation} \label{eq1}
\lim_{x \rightarrow 0} F_s(x) = -\infty, \qquad \lim_{x \rightarrow \infty} F_s(x) = 0.
\end{equation}
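The listed properties of $B_s$ are easy to observe numerically; the following sketch (the parameter values $t=2$, $s=5$ and the grid are our own choices) evaluates $B_s$ on a few points:

```python
import math

def B(x, s, t):
    """B_s(x) = (x+t)/(x+s) - log((x+t)/(x+s)) - 1."""
    r = (x + t) / (x + s)
    return r - math.log(r) - 1.0

# for s != t: positive, strictly decreasing, with B_s(0) = t/s - log(t/s) - 1
vals = [B(x, 5.0, 2.0) for x in (0.0, 1.0, 2.0, 4.0, 8.0, 16.0)]
```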
If $X \sim {\mathcal N}(\mu,\Sigma)$ is a $d$-dimensional random vector, then $Y \coloneqq (X-\mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (X-\mu) \sim \chi_d^2$
with $\mathbb E (Y) = d$ and $\Var(Y) = 2d$. Thus we would expect
that for samples $x_i$ from such a random variable $X$
the corresponding values $(x_i - \mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i - \mu)$ lie
with high probability in the interval $[d - \sqrt{2d},d+ \sqrt{2d}]$, i.e., $s_i \in [t -\sqrt{t}, t + \sqrt{t}]$.
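As a quick Monte Carlo illustration (our own sketch; the dimension $d=4$, $\mu=0$, $\Sigma=I$ and the sample size are arbitrary choices), the interval $[d-\sqrt{2d}, d+\sqrt{2d}]$, i.e. mean plus/minus one standard deviation of a $\chi^2_d$ variable, captures roughly $70$ to $75$ percent of the mass for moderate $d$:

```python
import math
import random

random.seed(0)
d, n = 4, 20000
lo, hi = d - math.sqrt(2 * d), d + math.sqrt(2 * d)
inside = 0
for _ in range(n):
    # delta = (x - mu)^T Sigma^{-1} (x - mu) with mu = 0, Sigma = I is chi^2_d
    delta = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(d))
    inside += lo <= delta <= hi
frac = inside / n
```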
These considerations are reflected in the following theorem and corollary.
\begin{Theorem}\label{thm:ensure}
For $F_s: \mathbb{R}_{>0} \rightarrow \mathbb{R}$ given by \eqref{Fs} the following relations hold true:
\begin{itemize}
\item[i)]
If $s \in [t - \sqrt{t},t+ \sqrt{t}] \cap \mathbb R_{>0}$, then $F_s(x) < 0$
for all $x >0$ so that $F_s$ has no zero.
\item[ii)]
If $s > 0$ and $s \not \in [t - \sqrt{t},t+ \sqrt{t}]$,
then there exists
$x_+$ such that $F_s(x) >0$ for all $x \ge x_+$.
In particular, $F_s$ has a zero.
\end{itemize}
\end{Theorem}
\begin{proof}
We have
\begin{align}
F_s'(x)
&=
\phi'\left(x\right) - \phi'(x+t) - \frac{(s-t)^2}{(x +s)^2(x+t)}\\
&=
\psi'(x) - \psi'(x+t) - \frac{t}{x(x+t)} - \frac{(s-t)^2}{(x +s)^2(x+t)}.
\end{align}
We want to sandwich $F'_s$ between two rational functions $P_s$ and $P_s + Q$
whose zeros can easily be described.
Since the trigamma function $\psi'$ has the series representation
\begin{equation}\label{psi_series}
\psi'(x) = \sum_{k=0}^{\infty} \frac{1}{(x+k)^{2}},
\end{equation}
see~\cite{AS65},
we obtain
\begin{equation} \label{function_1}
F_s'(x) = \sum_{k=0}^\infty\frac{1}{(x+k)^2} - \frac{1}{(x+k+t)^2} - \frac{t}{x(x+t)} - \frac{(s-t)^2}{(x+s)^2(x+t)}.
\end{equation}
For $x > 0$, we have
$$I(x) = \int_0^\infty \underbrace{\frac{1}{(x+u)^2}-\frac{1}{(x+u+t)^2}}_{g(u)} \, du
=\frac1x-\frac1{x+t} = \frac{t}{(x+t)x}.$$
Let $R(x)$ and $T(x)$ denote the rectangular and trapezoidal rule, respectively,
for computing the integral with step size 1.
Then we verify
\[R(x)=\sum_{k=0}^\infty g(k)=\sum_{k=0}^\infty \frac1{(x+k)^2}-\frac1{(x+k+t)^2}\]
so that
\begin{align}
F_s'(x) &= \left( R(x) - T(x) \right) + \left( T(x) - I(x) \right) - \frac{(s-t)^2}{(x+s)^2(x+t)}\\
& = \frac12 \left(\frac{1}{x^2} -\frac{1}{(x+t)^2} \right)+ \left( T(x) - I(x) \right) - \frac{(s-t)^2}{(x+s)^2(x+t)}.
\end{align}
By considering the first and second derivative of $g$, we see that the integrand in $I(x)$
is strictly decreasing and strictly convex, so that the trapezoidal rule overestimates the integral, i.e., $T(x) - I(x) > 0$.
Thus,
$
P_s(x) < F_s'(x),
$
where
\begin{align}
P_s(x)
&\coloneqq \frac12 \left(\frac{1}{x^2} -\frac{1}{(x+t)^2} \right) - \frac{(s-t)^2}{(x+s)^2(x+t)}
= \frac{(2tx + t^2)(x+s)^2 - 2(s-t)^2 x^2(x+t)}{2x^2(x+s)^2(x+t)^2}\\
&= \frac{p_s(x)}{2x^2(x+s)^2(x+t)^2}
\end{align}
with
$
p_s(x) \coloneqq a_3 x^3 + a_2 x^2 + a_1 x + a_0
$
and
$$
a_0 = t^2s^2 > 0,\quad a_1 = 2st(s+t) > 0, \quad a_2 = t\left(4s+t - 2(s-t)^2\right),\quad a_3 = 2\left( t- (s-t)^2 \right).
$$
We have
\begin{equation} \label{main_coeff}
a_3 \ge 0 \quad \Longleftrightarrow \quad s \in [t - \sqrt{t}, t + \sqrt{t}]
\end{equation}
and
$$a_2 \geq 0 \quad \Longleftrightarrow \quad
s \in \left[t+1-\sqrt{1+\tfrac{5t}{2}},\ t+1 + \sqrt{1+\tfrac{5t}{2}}\right] \supset [t - \sqrt{t}, t + \sqrt{t}]$$
for $t \geq 2$. For $t=\frac12$, it holds $\left[t+1-\sqrt{1+\tfrac{5t}{2}},\ t+1 + \sqrt{1+\tfrac{5t}{2}}\right] = [0,3] \supset [0,t+\sqrt{t}]$.
Thus, for $s \in [t - \sqrt{t}, t + \sqrt{t}]$,
by the sign rule of Descartes, $p_s(x)$ has no positive zero
which implies
$$
0 \le P_s(x) < F_s'(x) \quad \mathrm{for} \quad s \in [t - \sqrt{t}, t + \sqrt{t}] \cap \mathbb R _{>0}.
$$
Hence, the continuous function $F_s$ is monotone increasing
and by \eqref{eq1} we obtain
$F_s (x) < 0$ for all $x > 0$ if $s \in [t - \sqrt{t}, t + \sqrt{t}] \cap \mathbb R _{>0}$.
\\[1ex]
Let $s>0$ and $s \not \in [t - \sqrt{t}, t + \sqrt{t}]$.
By
$$
T(x)-I(x)=\sum_{k=0}^\infty \left( \frac12(g(k+1)+g(k)) - \int_0^1 g(k+u) \, du \right)
$$
and Euler's summation formula, we obtain
$$
T(x) - I(x) = \sum_{k=0}^\infty \frac{1}{12} \left( g'(k+1) - g'(k) \right) - \frac{1}{720} g^{(4)}(\xi_k), \quad \xi_k \in (k,k+1)
$$
with $g'(u) = -\frac{2}{(x+u)^3}+\frac{2}{(x+u+t)^3}$ and $g^{(4)}(u) = \frac{5!}{(x+u)^6}-\frac{5!}{(x+u+t)^6}$,
so that
\begin{align} \label{**}
T(x) - I(x) =& -\frac{1}{12} g'(0) +\sum_{k=0}^\infty \frac16\frac1{(x+\xi_k+t)^6}-\frac16\frac1{(x+\xi_k)^6}\\
<&- \frac{1}{12}g'(0)
=\frac16\frac{3t x^2 + 3t^2x + t^3}{x^3(x+t)^3}.
\end{align}
Therefore, we conclude
$$
F_s'(x) < P_s(x) + \underbrace{\frac16\frac{3t x^2 + 3t^2x + t^3}{x^3(x+t)^3}}_{Q(x)} =
\frac{p_s(x) x (x+t) + (t x^2 + t^2x + \frac13 t^3)(x+s)^2}{2 x^3(x+s)^2(x+t)^3}.
$$
The leading coefficient of the numerator polynomial, i.e., the coefficient of $x^5$, is $2(t-(s-t)^2)$, which by \eqref{main_coeff} is negative whenever $s \not \in [t - \sqrt{t}, t + \sqrt{t}]$.
Therefore, if $s \not \in [t - \sqrt{t}, t + \sqrt{t}]$, then there exists $x_+$ large enough
such that the numerator is negative for all $x \ge x_+$.
Consequently, $F'_s(x) < P_s(x) + Q(x)<0$ for all $x \geq x_+$,
so that $F_s$ is strictly decreasing on $[x_+,\infty)$.
Together with $\lim_{x\rightarrow\infty} F_s(x)=0$ from \eqref{eq1} this yields $F_s(x)>0$ for all $x \ge x_+$,
and since $\lim_{x\rightarrow 0} F_s(x)=-\infty$, the continuous function $F_s$ has a zero.
\end{proof}
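The sign dichotomy of Theorem \ref{thm:ensure} can be illustrated numerically. The sketch below uses a hand-rolled digamma (recurrence plus asymptotic series, accurate to about $10^{-9}$); for $t=2$ the critical interval is $[t-\sqrt{t}, t+\sqrt{t}] \approx [0.59, 3.41]$, and the test values of $x$ and $s$ are our own choices:

```python
import math

def digamma(x):
    # psi(x) via the recurrence psi(x) = psi(x+1) - 1/x and an asymptotic series
    acc = 0.0
    while x < 8.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return (acc + math.log(x) - 0.5 / x
            - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252)))

def phi(x):
    return digamma(x) - math.log(x)

def F_s(x, s, t):
    """F_s(x) = A(x) + B_s(x)."""
    r = (x + t) / (x + s)
    return phi(x) - phi(x + t) + r - math.log(r) - 1.0
```

For $s$ inside the interval, $F_s$ stays negative, while for $s$ outside it becomes positive for large $x$ and hence crosses zero.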
The following corollary states that $F_s$ has exactly one zero if $s > t+ \sqrt{t}$.
Unfortunately, we do not have such a result for $s < t - \sqrt{t}$.
\begin{Corollary}
Let $F_s: \mathbb{R}_{>0} \rightarrow \mathbb{R}$ be given by \eqref{Fs}. If $s >t + \sqrt{t}$, $t \ge 1$, then
$F_s$ has exactly one zero.
\end{Corollary}
\begin{proof}
By Theorem \ref{thm:ensure}ii) and since $\lim_{x\rightarrow 0} F_s(x) = -\infty$ and
$\lim_{x\rightarrow \infty} F_s(x) = 0^+$, it remains to prove that $F_s'$ has at most one zero.
Let $x_0>0$ be the smallest number such that $F_s'(x_0)=0$.
We prove that $F_s'(x)<0$ for all $x>x_0$.
To this end, we show that $h_s(x)\coloneqq F_s'(x)(x+s)^2(x+t)$ is strictly decreasing.
By \eqref{function_1} we have
\begin{align} \label{function_h}
h_s(x) &= (x+s)^2(x+t)\left(\sum_{k=0}^\infty\frac{1}{(x+k)^2} - \frac{1}{(x+k+t)^2} - \frac{t}{x(x+t)} \right)- (s-t)^2,
\end{align}
and for $s>t$ further
\begin{align}
h_s'(x)
&= \left(2(x+s)(x+t)+ (x+s)^2\right)\left(\sum_{k=0}^\infty\frac{1}{(x+k)^2} - \frac{1}{(x+k+t)^2} - \frac{t}{x(x+t)} \right) \\
&\quad + (x+s)^2(x+t)\left(\sum_{k=0}^\infty\frac{-2}{(x+k)^3} + \frac{2}{(x+k+t)^3} + \frac{t(2x+t)}{x^2(x+t)^2} \right)\\
&\leq 3(x+s)^2 \left(\sum_{k=0}^\infty\frac{1}{(x+k)^2} - \frac{1}{(x+k+t)^2} - \frac{t}{x(x+t)} \right)\\
&\quad + (x+s)^2(x+t)\left(\sum_{k=0}^\infty\frac{-2}{(x+k)^3} + \frac{2}{(x+k+t)^3} + \frac{t(2x+t)}{x^2(x+t)^2} \right) \\
&= (x+s)^2 (R(x)-I(x)),
\end{align}
where $I(x)$ is the integral and $R(x)$ the corresponding rectangular rule
with step size 1 of the function $g\coloneqq g_1 + g_2$ defined as
\begin{equation}
g_1(u)\coloneqq 3\left( \frac{1}{(x+u)^2} - \frac{1}{(x+ t + u)^2}\right),
\quad
g_2(u)\coloneqq(x+t)\left( \frac{-2}{(x+u)^3} + \frac{2}{(x+t+ u)^3}\right).
\end{equation}
We show that $R(x)-I(x) <0$ for all $x>0$.
Let $T(x)$, $T_i(x)$ be the trapezoidal rules with step size 1 corresponding to $I(x)$ and
$I_i(x)=\int_{0}^{\infty} g_i(u)du$, $i=1,2$.
Then it follows
$$
R(x)- I(x) = R(x) - T(x) + T(x) - I(x)
=R(x) - T(x) + T_1(x) - I_1(x) + T_2(x) - I_2(x).
$$
Since $g_2$ is a concave function, the trapezoidal rule underestimates the corresponding integral, and we conclude $T_2(x) - I_2(x)<0$.
Using Euler's summation formula in \eqref{**} for $g_1$, we get
\begin{align}
T_1(x) - I_1(x) &= -\frac{1}{12}g_1'(0) - \frac{1}{720}\sum_{k=0}^{\infty} g_1^{(4)}(\xi_k), \quad \xi_k\in(k,k+1).
\end{align}
Since $g_1^{(4)}$ is a positive function, we can write
\begin{align}
R(x) - I(x) &< R(x) - T(x) + T_1(x) - I_1(x) \leq \frac{1}{2} g(0) -\frac{1}{12}g_1'(0)\\
&= \frac{3}{2}\left( \frac{1}{x^2}-\frac{1}{(x+t)^2}\right) +
\frac{1}{2}(x+t) \left( \frac{-2}{x^3} + \frac{2}{(x+t)^3}\right) -
\frac{1}{2}\left( \frac{-1}{x^3} + \frac{1}{(x+t)^3}\right)\\
&=\frac{t}{2} \, \frac{(- 3 t + 3 )x^2 +(- 5 t^2 + 3t)x -2 t^3 +t^2}{x^3(x+t)^3}.
\end{align}
For $t \ge 1$, all coefficients of the numerator polynomial are nonpositive and the constant term $-2t^3+t^2$ is negative, so that $R(x)-I(x)<0$ for all $x>0$. This implies that $h_s$
is strictly decreasing.
\end{proof}
Theorem \ref{thm:ensure} implies the following corollary.
\begin{Corollary}\label{cor:ensure}
For $F: \mathbb{R}_{>0} \rightarrow \mathbb{R}$ given by \eqref{function_F} and
$\delta_i \coloneqq (x_i - \mu)^{\mbox{\tiny{T}}} \Sigma^{-1} (x_i - \mu)$, $i=1,\ldots,n$, the following relations hold true:
\begin{itemize}
\item[i)]
If $\delta_i \in [d - \sqrt{2d},d+ \sqrt{2d}] \cap \mathbb R_{>0}$ for all $i\in \{1,\ldots,n\}$, then $F(x) < 0$
for all $x >0$ so that $F$ has no zero.
\item[ii)]
If $\delta_i > 0$ and $\delta_i \not \in [d - \sqrt{2d},d+ \sqrt{2d}]$ for all $i\in \{1,\ldots,n\}$,
there exists
$x_+$ such that $F(x) >0$ for all $x \ge x_+$.
In particular, $F$ has a zero.
\end{itemize}
\end{Corollary}
\begin{proof}
Consider $F = \sum_{i=1}^n w_i F_{s_i}$.
If $\delta_i \in [d - \sqrt{2d},d+ \sqrt{2d}] \cap \mathbb R_{>0}$ for all $i\in \{1,\ldots,n\}$,
then we have by Theorem \ref{thm:ensure} that $F_{s_i} (x) < 0$ for all $x>0$.
Clearly, the same holds true for the whole function $F$ such that it cannot have a zero.
If $\delta_i \not \in [d - \sqrt{2d},d+ \sqrt{2d}]$ for all $i\in \{1,\ldots,n\}$, then we know
by Theorem \ref{thm:ensure} that there exist $x_{i+} > 0$ such that $F_{s_i} (x) > 0$ for
$x \ge x_{i+}$. Thus, $F(x) > 0$ for $x \ge x_+ \coloneqq \max_i(x_{i+})$.
Since $\lim_{x \rightarrow 0} F(x) = -\infty$
this implies that $F$ has a zero.
\end{proof}
\section{Algorithms} \label{sec:algs}
In this section, we propose alternatives to the classical EM algorithm
for computing the parameters of the Student-$t$ distribution, along with convergence results.
In particular, we are interested in estimating the degree of freedom parameter $\nu$,
where the function $F$ is of particular interest.
\\
\textbf{Algorithm \ref{alg:EM}} with weights $w_i = \frac{1}{n}$, $i=1,\ldots,n$,
is the classical EM algorithm.
Note that the function in the third update of the M-Step
\begin{align} \label{poly_EM}
\Phi_r \left( \frac{\nu}{2} \right)
&\coloneqq
\phi \left( \frac{\nu}{2} \right)
\underbrace{ - \, \phi \left( \frac{\nu_r + d}{2} \right)
+ \sum_{i=1}^n w_i \left( \gamma_{i,r} - \log( \gamma_{i,r} ) - 1 \right)}_{c_r}
\end{align}
has a unique zero since by \eqref{asymp_phi} the function $\phi < 0$
is monotone increasing with $\lim_{x \rightarrow \infty} \phi(x) = 0^-$ and $c_r > 0$.
Concerning the convergence of the EM algorithm it is known
that the values of the objective function $L(\nu_r,\mu_r,\Sigma_r)$ are monotone decreasing in $r$ and that
a subsequence of the iterates
converges to a critical point of $L(\nu,\mu,\Sigma)$ if such a point exists, see \cite{Byrne2017}.
\\
\begin{algorithm}[!ht]
\caption{EM Algorithm (EM)} \label{alg:EM}
\begin{algorithmic}
\State \textbf{Input:} $x_1,\ldots,x_n\in \mathbb{R}^d$, $n \geq d+1$, $w \in \mathring \Delta_n$
\State \textbf{Initialization:}
$\nu_0 = \eps>0$, $\mu_0 =\frac{1}{n} \sum\limits_{i=1}^n x_i$,
$\Sigma_0 =\frac{1}{n}\sum\limits_{i=1}^n (x_i-\mu_0)(x_i-\mu_0)^{\mbox{\tiny{T}}}$
\For{$r=0,\ldots$}
\vspace{0.2cm}
\textbf{E-Step:} Compute the weights
\begin{align*}
\delta_{i,r} &= (x_i-\mu_r)^{\mbox{\tiny{T}}} \Sigma_r^{-1} (x_i-\mu_r)\\
\gamma_{i,r} &= \frac{\nu_r + d}{ \nu_r + \delta_{i,r} }
\end{align*}
\hspace*{0.2cm} \textbf{M-Step:} Update the parameters
\begin{align*}
\mu_{r+1}
&=
\frac{ \sum\limits_{i=1}^{n} w_i \gamma_{i,r} x_i}{ \sum\limits_{i=1}^{n} w_i\gamma_{i,r} }
\\
\Sigma_{r+1}
&=
\sum\limits_{i=1}^{n} w_i \gamma_{i,r} (x_i-\mu_{r+1})(x_i-\mu_{r+1})^{\mbox{\tiny{T}}}
\\
\nu_{r+1}& \; = \;
\text{ zero of } \;
\phi\left(\frac{\nu}{2}\right) -\phi\left( \frac{\nu_r + d}{2}\right)
+
\sum_{i=1}^n w_i\left( \gamma_{i,r} - \log(\gamma_{i,r} ) - 1 \right)
\end{align*}
\EndFor
\end{algorithmic}
\end{algorithm}
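A compact Python sketch of one iteration of Algorithm \ref{alg:EM} (our own implementation; the digamma helper and the bisection bracket are ad hoc, and the $\nu$ update exploits that $\Phi_r$ is increasing with a unique zero):

```python
import math
import numpy as np

def digamma(x):
    # psi(x) via the recurrence psi(x) = psi(x+1) - 1/x and an asymptotic series
    acc = 0.0
    while x < 8.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return (acc + math.log(x) - 0.5 / x
            - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252)))

def phi(x):
    return digamma(x) - math.log(x)

def em_step(x, w, nu, mu, Sigma):
    """One E- and M-step of the weighted EM algorithm for the Student-t."""
    d = x.shape[1]
    diff = x - mu
    delta = np.sum(diff @ np.linalg.inv(Sigma) * diff, axis=1)   # delta_{i,r}
    gamma = (nu + d) / (nu + delta)                              # gamma_{i,r}
    mu_new = (w * gamma) @ x / np.sum(w * gamma)
    diff_new = x - mu_new
    Sigma_new = (w * gamma * diff_new.T) @ diff_new
    # nu update: unique zero of phi(nu/2) + c_r with c_r > 0, by bisection
    c = -phi((nu + d) / 2) + np.sum(w * (gamma - np.log(gamma) - 1.0))
    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid / 2) + c < 0 else (lo, mid)
    return 0.5 * (lo + hi), mu_new, Sigma_new

rng = np.random.default_rng(0)
x = rng.standard_normal((200, 2))                 # Gaussian toy data
w = np.full(200, 1.0 / 200)
nu, mu, Sigma = 1.0, x.mean(axis=0), np.cov(x.T, bias=True)
for _ in range(5):
    nu, mu, Sigma = em_step(x, w, nu, mu, Sigma)
```

On Gaussian data the estimated $\nu$ grows across the iterations, consistent with case (ii) of Theorem \ref{thm:Existence of Scatter}.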
\textbf{Algorithm \ref{alg:aEM}} differs from the EM algorithm in the iteration of $\Sigma$, where the
normalization factor $\frac{1}{\sum\limits_{i=1}^n w_i \gamma_{i,r}}$ is now incorporated.
The computation of this factor requires no additional computational effort,
but speeds up the performance in particular for smaller $\nu$.
Such kind of acceleration was suggested in \cite{KTV94,MVD97}.
\emph{For fixed $\nu \ge 1$}, it was shown in \cite{vanDyk1995} that this algorithm is indeed an EM algorithm
arising from another choice of the hidden variable than used in the standard approach, see also \cite{Laus2019}.
Thus, it follows for fixed $\nu \ge 1$ that the sequence $L(\nu ,\mu_r,\Sigma_r)$ is monotone decreasing.
However, we also iterate over $\nu$. In contrast to the EM Algorithm \ref{alg:EM}
our $\nu$ iteration step depends on $\mu_{r+1}$ and $\Sigma_{r+1}$
instead of $\mu_{r}$ and $\Sigma_{r}$. This is important for our convergence results.
Note that in either case, the accelerated algorithm can no longer be interpreted as an EM algorithm, so that
the convergence results for the classical EM approach are no longer available.
Let us mention that a Jacobi variant of Algorithm \ref{alg:aEM} for \emph{fixed} $\nu$, i.e.,
$$
\Sigma_{r+1}
=
\sum\limits_{i=1}^{n} \frac{w_i\gamma_{i,r} (x_i-\mu_{r})(x_i-\mu_{r})^{\mbox{\tiny{T}}} }{\sum_{i=1}^n w_i \gamma_{i,r}},
$$
with $\mu_r$ instead of $\mu_{r+1}$, including a convergence proof, was suggested in \cite{LS2019}.
The main reason for this index choice was that we were able to prove monotone convergence of a simplified version
of the algorithm for estimating the location and scale of Cauchy noise ($d=1$, $\nu = 1$),
which could not be achieved for the variant incorporating $\mu_{r+1}$, see \cite{LPS18}.
This simplified version is known as myriad filter in image processing.
In this paper, we keep the original variant from the EM algorithm \eqref{eq:aem} since we are mainly interested in the
computation of $\nu$.
Instead of the above algorithms we suggest to take the critical point equation \eqref{ML_nu} more
directly into account in the next two algorithms.
\\
\begin{algorithm}[!ht]
\caption{Accelerated EM-like Algorithm (aEM)} \label{alg:aEM}
\begin{algorithmic}
\State Same as Algorithm \ref{alg:EM} except for
\begin{align}
\Sigma_{r+1}
&=
\sum\limits_{i=1}^{n} \frac{w_i\gamma_{i,r} (x_i-\mu_{r+1})(x_i-\mu_{r+1})^{\mbox{\tiny{T}}} }{\sum_{i=1}^n w_i \gamma_{i,r}}\label{eq:aem}\\
\nu_{r+1}& \; = \;
\text{ zero of } \;
\phi\left(\frac{\nu}{2}\right) -\phi\left( \frac{\nu_r + d}{2}\right)
+
\sum_{i=1}^n w_i\left( \frac{\nu_r+d}{\nu_r+\delta_{i,r+1}} - \log\left(\frac{\nu_r+d}{\nu_r+\delta_{i,r+1}} \right) - 1 \right)
\end{align}
\end{algorithmic}
\end{algorithm}
\textbf{Algorithm \ref{alg:MMF}} computes a zero of
\begin{equation} \label{eq:alternative}
\Psi_r \left( \frac{\nu}{2} \right)
\coloneqq
\phi \left( \frac{\nu}{2} \right) - \phi \left( \frac{\nu + d}{2} \right)
+ \underbrace{ \sum_{i=1}^n w_i\left( \frac{\nu_r+d}{\nu_r+\delta_{i,r+1}} - \log\left(\frac{\nu_r+d}{\nu_r+\delta_{i,r+1}} \right) - 1 \right) }_{b_r}.
\end{equation}
This function has a unique zero since by \eqref{asymp_A} the function $A(x) = \phi(x) -\phi(x+t) < 0$ is monotone increasing with
$\lim_{x \rightarrow \infty} A(x)= 0^-$ and $b_r > 0$.
\\
\begin{algorithm}[!ht]
\caption{Multivariate Myriad Filter (MMF)} \label{alg:MMF}
\begin{algorithmic}
\State Same as Algorithm \ref{alg:aEM} except for
\begin{equation} \label{eq:one_step}
\nu_{r+1} \; = \;
\text{ zero of } \;
\phi\left(\frac{\nu}{2}\right) -\phi\left( \frac{\nu + d}{2}\right)
+
\sum_{i=1}^n w_i\left( \frac{\nu_r+d}{\nu_r+\delta_{i,r+1}} - \log\left(\frac{\nu_r+d}{\nu_r+\delta_{i,r+1}} \right) - 1 \right)
\end{equation}
\end{algorithmic}
\end{algorithm}
Finally, \textbf{Algorithm \ref{alg:GMMF}} computes the update of $\nu$
by directly finding a zero of the whole function $F$ in \eqref{ML_nu} given $\mu_r$ and $\Sigma_r$.
The existence of such a zero was discussed in the previous section.
The zero computation is done by an inner loop which iterates the update step of $\nu$ from Algorithm \ref{alg:MMF}.
We will see that this iteration indeed converges to a zero of $F$.
\\
\begin{algorithm}[!ht]
\caption{General Multivariate Myriad Filter (GMMF)} \label{alg:GMMF}
\begin{algorithmic}
\State Same as Algorithm \ref{alg:aEM} except for
\begin{align}
\nu_{r+1}&= \; \text{ zero of }
\phi\left( \frac{\nu}{2} \right)
-\phi\left( \frac{\nu +d}{2} \right)
+ \sum_{i=1}^n w_i \left( \frac{\nu + d}{\nu + \delta_{i,r+1}} - \log\left( \frac{\nu + d}{\nu + \delta_{i,r+1}} \right) - 1 \right)
\end{align}
\For{$l=0,\ldots$}
\begin{align}
&\nu_{r,0} = \nu_r\\
&\nu_{r,l+1} = \; \text{ zero of } \; \phi\left( \frac{\nu}{2} \right)
-\phi\left( \frac{\nu +d}{2} \right)
+ \sum_{i=1}^n w_i \left( \frac{\nu_{r,l} + d}{\nu_{r,l} + \delta_{i,r+1}} -
\log\left( \frac{\nu_{r,l} + d}{\nu_{r,l} + \delta_{i,r+1}} \right) - 1 \right)
\end{align}
\EndFor
\end{algorithmic}
\end{algorithm}
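The $\nu$ updates of the MMF and of the GMMF inner loop can be sketched together: one MMF step solves $\phi(\nu/2)-\phi((\nu+d)/2)+b_r=0$ by bisection, and the GMMF inner loop iterates this step with the $\delta_i$ held fixed. The digamma helper, the bracketing strategy and the sample values ($d=2$, three $\delta_i$, uniform weights) are our own; the bracketing loop assumes $b_r>0$, which holds unless all $\delta_i$ equal $d$:

```python
import math

def digamma(x):
    # psi(x) via the recurrence psi(x) = psi(x+1) - 1/x and an asymptotic series
    acc = 0.0
    while x < 8.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return (acc + math.log(x) - 0.5 / x
            - inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252)))

def phi(x):
    return digamma(x) - math.log(x)

def F(nu, d, deltas, weights):
    """The critical point function F for fixed mu and Sigma."""
    val = phi(nu / 2) - phi((nu + d) / 2)
    for dl, w in zip(deltas, weights):
        g = (nu + d) / (nu + dl)
        val += w * (g - math.log(g) - 1.0)
    return val

def mmf_step(nu_r, d, deltas, weights):
    """Zero of nu -> phi(nu/2) - phi((nu+d)/2) + b_r, found by bisection."""
    b = sum(w * ((nu_r + d) / (nu_r + dl)
                 - math.log((nu_r + d) / (nu_r + dl)) - 1.0)
            for dl, w in zip(deltas, weights))
    f = lambda nu: phi(nu / 2) - phi((nu + d) / 2) + b
    lo, hi = 1e-9, 1.0
    while f(hi) < 0:          # f increases to the limit b > 0, so a bracket exists
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

d, deltas, weights = 2, [0.1, 8.0, 3.0], [1 / 3, 1 / 3, 1 / 3]
nu = 2.0
for _ in range(100):          # GMMF inner loop with fixed deltas
    nu = mmf_step(nu, d, deltas, weights)
```

In accordance with Lemma \ref{prop:inner_loop}, the iterates increase monotonically here (the start value satisfies $F(\nu_{r,0})<0$) and settle at a zero of $F$.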
In the rest of this section, we prove that the sequence $(L(\nu_r,\mu_r,\Sigma_r))_r$ generated by Algorithms \ref{alg:aEM} and \ref{alg:MMF} decreases in each iteration step
and that there exists a subsequence of the iterates which
converges to a critical point.
We will need the following auxiliary lemma.
\begin{Lemma}\label{prop:inner_loop}
Let $F_a,F_b\colon \mathbb{R}_{>0}\to\mathbb{R}$ be continuous functions,
where $F_a$ is strictly increasing and $F_b$ is strictly decreasing.
Define $F\coloneqq F_a + F_b$.
For any initial value $x_0 >0$ assume that the sequence generated by
$$
x_{l+1} = \text{ zero of } F_a(x)+F_b(x_l)
$$
is uniquely determined, i.e., the functions on the right-hand side have a unique zero.
Then it holds
\begin{enumerate}
\item[i)] If $F(x_0)<0$, then $(x_l)_l$ is strictly increasing and $F(x) < 0$ for all $x \in [x_l,x_{l+1}]$, $l \in \mathbb N_0$.
\item[ii)] If $F(x_0)>0$, then $(x_l)_l$ is strictly decreasing and $F(x) >0$ for all $x \in [x_{l+1},x_{l}]$, $l \in \mathbb N_0$.
\end{enumerate}
Furthermore, assume that there exists
$x_->0$ with $F(x) <0$ for all $x<x_-$
and
$x_+>0$ with $F(x)>0$ for all $x>x_+$.
Then, the sequence $(x_l)_l$ converges to a zero $x^*$ of $F$.
\end{Lemma}
\begin{proof}
We consider the case i) that $F(x_0)<0$. Case ii) follows in a similar way.
We show by induction that $F(x_l)<0$ and that $x_{l+1} > x_l$ for all $l \in \mathbb N$.
Then it holds for all $l\in\mathbb N$ and $x \in(x_l,x_{l+1})$ that
$F_a(x) + F_b(x) < F_a(x) + F_b(x_l) < F_a(x_{l+1} ) + F_b(x_l) = 0$.
Thus $F(x) < 0$ for all $x \in [x_l,x_{l+1}]$, $l \in \mathbb N_0$.
\\[1ex]
\textbf{Induction step.} Let $F_a(x_l)+F_b(x_l)<0$.
Since $F_a(x_{l+1})+F_b(x_l) =0 > F_a(x_l)+F_b(x_l)$
and $F_a$ is strictly increasing,
we have $x_{l+1}>x_l$.
Using that $F_b$ is strictly decreasing, we get $F_b(x_{l+1})<F_b(x_l)$
and consequently
$$
F(x_{l+1}) = F_a(x_{l+1}) + F_b(x_{l+1}) < F_a(x_{l+1}) + F_b(x_l)=0.
$$
Assume now that $F(x)>0$ for all $x>x_+$.
Since the sequence $(x_l)_l$ is strictly increasing and $F(x_l) < 0$
it must be bounded from above by $x_+$.
Therefore it converges to some $x^*\in \mathbb{R}_{>0}$.
Now, it holds by the continuity of $F_a$ and $F_b$ that
$$
0 =\lim\limits_{l\to\infty} F_a(x_{l+1}) + F_b(x_l) = F_a(x^*) + F_b(x^*) = F(x^*).
$$
Hence $x^*$ is a zero of $F$.
\end{proof}
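A toy instance of the lemma (our own construction, not from the text): take $F_a(x)=\log(x)$, strictly increasing, and $F_b(x)=1/x-2$, strictly decreasing. The update then has the closed form $x_{l+1}=\exp(2-1/x_l)$, and starting from $x_0=1$ with $F(x_0)=-1<0$ the iterates increase towards a zero of $F$:

```python
import math

# F_a(x) = log(x) is strictly increasing, F_b(x) = 1/x - 2 strictly decreasing;
# the zero in x of F_a(x) + F_b(x_l) is x = exp(2 - 1/x_l)
F = lambda x: math.log(x) + 1.0 / x - 2.0

xs = [1.0]                    # F(1) = -1 < 0, so case i) of the lemma applies
for _ in range(60):
    xs.append(math.exp(2.0 - 1.0 / xs[-1]))
x_star = xs[-1]
```

The sequence increases monotonically and its limit $x^\star \approx 6.3$ satisfies $F(x^\star)=0$.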
For the setting in Algorithm \ref{alg:GMMF}, Lemma \ref{prop:inner_loop} implies the following corollary.
\begin{Corollary}
Let $F_a (\nu) \coloneqq \phi \left( \frac{\nu}{2} \right) - \phi\left(\frac{\nu+d}{2} \right)$
and
$F_b (\nu) \coloneqq \sum_{i=1}^n w_i \left( \frac{\nu + d}{\nu + \delta_{i,r+1}} -
\log\left( \frac{\nu + d}{\nu + \delta_{i,r+1}} \right) - 1 \right)$, $r \in \mathbb{N}_0$.
Assume that there exists $\nu_+ > 0$ such that $F \coloneqq F_a + F_b >0$ for all $\nu \ge \nu_+$.
Then, the sequence $(\nu_{r,l})_l$ generated by the $r$-th inner loop of Algorithm \ref{alg:GMMF}
converges to a zero of $F$.
\end{Corollary}
Note that by Corollary \ref{cor:ensure} the above condition on $F$ is fulfilled in each iteration step, e.g.
if $\delta_{i,r} \not \in [d - \sqrt{2d} , d + \sqrt{2d}]$ for $i=1,\ldots,n$ and $r \in\mathbb{N}_0$.
\begin{proof}
From the previous section we know that $F_a$ is strictly increasing and $F_b$ is strictly decreasing.
Both functions are continuous.
If $F(\nu_r) < 0$, then we know from Lemma \ref{prop:inner_loop} that $(\nu_{r,l})_l$ is increasing and converges to a zero $\nu_r^*$
of $F$.
If $F(\nu_r) > 0$, then we know from Lemma \ref{prop:inner_loop} that $(\nu_{r,l})_l$ is decreasing.
The condition that there exists
$x_-\in\mathbb{R}_{>0}$ with $F(x) <0$ for all $x<x_-$ is fulfilled since $\lim_{x \rightarrow 0} F(x) = -\infty$.
Hence, by Lemma \ref{prop:inner_loop}, the sequence converges to a zero $\nu_r^*$
of $F$.
\end{proof}
To prove that the objective function decreases in each step of the Algorithms \ref{alg:aEM} - \ref{alg:GMMF} we need the following lemma.
\begin{Lemma}\label{lem:function_decreasing}
Let $F_a,F_b\colon \mathbb{R}_{>0}\to\mathbb{R}$ be continuous functions,
where $F_a$ is strictly increasing and $F_b$ is strictly decreasing.
Define $F\coloneqq F_a + F_b$ and let $G\colon \mathbb{R}_{>0}\to \mathbb{R}$ be an antiderivative of $F$, i.e.
$F= \frac{\mathrm{d}}{\mathrm{d}x} G$.
For an arbitrary $x_0 >0$, let $(x_{l})_l$ be the sequence generated by
$$
x_{l+1} = \text{ zero of } F_a(x) + F_b(x_l).
$$
Then the following holds true:
\begin{enumerate}
\item[i)] The sequence $(G(x_{l}))_l$ is monotone decreasing with $G(x_l)=G(x_{l+1})$
if and only if $x_0$ is a critical point of $G$.
If $(x_{l})_l$ converges, then the limit $x^*$ fulfills
$$
G(x_0) \geq G(x_1) \geq G(x^*),
$$
with equality if and only if $x_0$ is a critical point of $G$.
\item[ii)] Let $F = \tilde F_a + \tilde F_b$ be another splitting of $F$
with continuous functions $\tilde F_a, \tilde F_b$, where the first one is strictly increasing and the second one strictly decreasing.
Assume that $\tilde F_a'(x) > F_a'(x)$ for all $x>0$.
Then it holds for $y_1 \coloneqq \text{ zero of } \tilde F_a(x) + \tilde F_b(x_0)$
that $G(x_0) \geq G(y_1) \geq G(x_1)$
with equality if and only if $x_0$ is a critical point of $G$.
\end{enumerate}
\end{Lemma}
\begin{proof}
i) If $F(x_0)=0$, then $x_0$ is a critical point of $G$.
Let $F(x_0)<0$. By Lemma \ref{prop:inner_loop} we know that $(x_{l})_l$ is strictly increasing
and that $F(x) < 0$ for $x \in [x_l,x_{l+1}]$, $l \in \mathbb{N}_0$.
By the fundamental theorem of calculus it holds
$$
G(x_{l+1})=G(x_{l})+\int_{x_{l}}^{x_{l+1}} F(\nu) d\nu.
$$
Thus, $G(x_{l+1})<G(x_{l})$.
Let $F(x_0)>0$. By Lemma \ref{prop:inner_loop} we know that $(x_{l})_l$ is strictly decreasing
and that $F(x) > 0$ for $x \in [x_{l+1},x_{l}]$, $l \in \mathbb{N}_0$.
Then
$$
G(x_{l}) = G(x_{l+1})+\int_{x_{l+1}}^{x_{l}} F(\nu) d\nu
$$
implies $G(x_{l+1})<G(x_{l})$.
Now, the rest of assertion i) follows immediately.
\\[1ex]
ii) It remains to show that $G(x_1)\leq G(y_1)$. Let $F(x_0) <0$.
Then we have $y_1\geq x_0$ and $x_1\geq x_0$.
By the fundamental theorem of calculus we obtain
\begin{align}
F(x_0) + \int_{x_0}^{x_1} F_a'(x)dx &= F_a(x_0)+\int_{x_0}^{x_1} F_a'(x) dx + F_b (x_0) = F_a (x_1) + F_b (x_0)=0,\\
F(x_0) + \int_{x_0}^{y_1} \tilde F_a'(x)dx&=\tilde F_a(x_0)+\int_{x_0}^{y_1}\tilde F_a'(x) dx+\tilde F_b(x_0) =\tilde F_a(y_1)+\tilde F_b(x_0)=0.
\end{align}
This yields
\begin{equation}\label{eq:int_equal}
\int_{x_0}^{x_1} F_a'(x) dx=\int_{x_0}^{y_1}\tilde F_a'(x)dx,
\end{equation}
and since $\tilde F'_a(x) > F'_a(x)$ it follows that $y_1\leq x_1$, with equality if and only if $x_0=x_1$,
i.e., if $x_0$ is a critical point of $G$.
Since $F(x)<0$ on $(x_0,x_1)$ it holds
$$
G(x_1)=G(y_1)+\int_{y_1}^{x_1}F(x) dx \leq G(y_1),
$$
with equality if and only if $x_0=x_1$.
The case $F(x_0) >0$ can be handled similarly.
\end{proof}
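The mechanism of part i) can be checked on a toy splitting. Here we take $F_a(x)=\log(x)$ (strictly increasing) and $F_b(x)=1/x-2$ (strictly decreasing); these are illustrative choices only, not the function $F$ arising from the likelihood.

```python
import math

# Toy instance of the inner iteration in part i): F = F_a + F_b with
# F_a(x) = log(x) strictly increasing and F_b(x) = 1/x - 2 strictly
# decreasing.  Illustrative choices only, not the paper's F.
def F_a(x): return math.log(x)
def F_b(x): return 1.0 / x - 2.0
def F(x):   return F_a(x) + F_b(x)

def G(x):
    # an antiderivative of F, so that G' = F
    return x * math.log(x) + math.log(x) - 3.0 * x

def inner_step(x_l):
    # x_{l+1} = zero of F_a(x) + F_b(x_l):  log(x) + 1/x_l - 2 = 0
    return math.exp(2.0 - 1.0 / x_l)

xs = [1.0]                      # F(x_0) = -1 < 0
for _ in range(10):
    xs.append(inner_step(xs[-1]))
```

In accordance with the lemma, since $F(x_0)<0$ the iterates increase monotonically towards the zero of $F$, while $G$ strictly decreases along them.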
Lemma \ref{lem:function_decreasing} implies the following relation between the values of the objective function
$L$ for Algorithms \ref{alg:aEM}--\ref{alg:GMMF}.
\begin{Corollary}\label{lem:likelihood_decreasing_nu}
For fixed $\nu_r>0$, $\mu_r\in\mathbb{R}^d$, $\Sigma_r\in\mathrm{SPD}(d)$, define $\mu_{r+1}$, $\Sigma_{r+1}$,
$\nu_{r+1}^{\mathrm{aEM}}$, $\nu_{r+1}^{\mathrm{MMF}}$ and $\nu_{r+1}^{\mathrm{GMMF}}$ by Algorithms \ref{alg:aEM}, \ref{alg:MMF} and \ref{alg:GMMF},
respectively. For the GMMF algorithm assume that the inner loop converges.
Then it holds
$$
L(\nu_r,\mu_{r+1},\Sigma_{r+1})
\geq L(\nu_{r+1}^{\mathrm{aEM}},\mu_{r+1},\Sigma_{r+1})
\geq L(\nu_{r+1}^{\mathrm{MMF}},\mu_{r+1},\Sigma_{r+1})
\geq L(\nu_{r+1}^{\mathrm{GMMF}},\mu_{r+1},\Sigma_{r+1}).
$$
Equality holds true if and only if $\frac{\mathrm{d}}{\mathrm{d}\nu}L(\nu_r,\mu_{r+1},\Sigma_{r+1})=0$ and
in this case $\nu_{r} = \nu_{r+1}^{\mathrm{aEM}} = \nu_{r+1}^{\mathrm{MMF}} = \nu_{r+1}^{\mathrm{GMMF}}$.
\end{Corollary}
\begin{proof}
For $G(\nu) \coloneqq L(\nu,\mu_{r+1},\Sigma_{r+1})$, we have
$\frac{\mathrm{d}}{\mathrm{d}\nu} L(\nu,\mu_{r+1},\Sigma_{r+1}) = F(\nu)$,
where
$$
F(\nu) \coloneqq \phi\left( \frac{\nu}{2} \right)
-\phi\left( \frac{\nu +d}{2} \right)
+ \sum_{i=1}^n w_i \left( \frac{\nu + d}{\nu + \delta_{i,r+1}} -
\log\left( \frac{\nu + d}{\nu + \delta_{i,r+1}} \right) - 1 \right).
$$
We use the splitting
$$F = F_a + F_b = \tilde F_a + \tilde F_b$$
with
$$
F_a (\nu)\coloneqq
\phi\left(\frac\nu2 \right)- \phi\left(\frac{\nu + d}{2} \right), \quad
\tilde F_a \coloneqq \phi\left(\frac\nu2 \right)
$$
and
$$
F_b(\nu) \coloneqq
\sum_{i=1}^n w_i \left( \frac{\nu + d}{\nu + \delta_{i,r+1}} -
\log\left( \frac{\nu + d}{\nu + \delta_{i,r+1}} \right) - 1 \right), \quad
\tilde F_b (\nu)\coloneqq - \phi \left(\frac{\nu+d}{2} \right) + F_b(\nu).
$$
By the considerations in the previous section we know that $F_a$, $\tilde F_a$ are strictly increasing
and $F_b$, $\tilde F_b$ are strictly decreasing.
Moreover, since $\phi' > 0$ we have $\tilde F'_a > F'_a$.
Hence it follows from Lemma \ref{lem:function_decreasing}(ii) that
$L(\nu_r,\mu_{r+1},\Sigma_{r+1}) \ge L(\nu_{r+1}^{\mathrm{aEM}},\mu_{r+1},\Sigma_{r+1}) \ge L(\nu_{r+1}^{\mathrm{MMF}},\mu_{r+1},\Sigma_{r+1})$.
Finally, we conclude by Lemma \ref{lem:function_decreasing}(i) that
$L(\nu_{r+1}^{\mathrm{MMF}},\mu_{r+1},\Sigma_{r+1}) \ge L(\nu_{r+1}^{\mathrm{GMMF}},\mu_{r+1},\Sigma_{r+1})$.
\end{proof}
Concerning the convergence of the three algorithms we have the following result.
\begin{Theorem}\label{cor:likelihood_decreases}
Let $(\nu_r,\mu_r,\Sigma_r)_r$ be the sequence generated by Algorithm \ref{alg:aEM}, \ref{alg:MMF} or \ref{alg:GMMF}, respectively,
starting with arbitrary initial values $\nu_0 >0,\mu_0\in\mathbb{R}^d,\Sigma_0\in\mathrm{SPD}(d)$.
For the GMMF algorithm we assume that in each step the inner loop converges.
Then it holds for all $r\in\mathbb N_0$ that
$$
L(\nu_r,\mu_r,\Sigma_r) \geq L(\nu_{r+1},\mu_{r+1},\Sigma_{r+1}),
$$
with equality if and only if $(\nu_r,\mu_r,\Sigma_r)=(\nu_{r+1},\mu_{r+1},\Sigma_{r+1})$.
\end{Theorem}
\begin{proof}
By the general convergence results of the accelerated EM algorithm for fixed $\nu$, see also
\cite{LS2019}, it holds
$$
L(\nu_r,\mu_{r+1},\Sigma_{r+1})\leq L(\nu_r,\mu_r,\Sigma_r),
$$
with equality if and only if $(\mu_r,\Sigma_r)=(\mu_{r+1},\Sigma_{r+1})$.
By Corollary \ref{lem:likelihood_decreasing_nu} it holds
$$
L(\nu_{r+1},\mu_{r+1},\Sigma_{r+1})\leq L(\nu_r,\mu_{r+1},\Sigma_{r+1}),
$$
with equality if and only if $\nu_r=\nu_{r+1}$.
The combination of both results proves the claim.
\end{proof}
\begin{Lemma}\label{lem:Tcont}
Let $T = (T_1, T_2, T_3): \mathbb{R}_{>0} \times \mathbb{R}^d \times \SPD(d) \rightarrow \mathbb{R}_{>0} \times \mathbb{R}^d \times \SPD(d)$
be the operator of one iteration step of Algorithm \ref{alg:aEM} (or \ref{alg:MMF}).
Then $T$ is continuous.
\end{Lemma}
\begin{proof}
We show the statement for Algorithm \ref{alg:MMF}. For Algorithm \ref{alg:aEM} it can be shown analogously.
Clearly the mapping $(T_2,T_3) (\nu,\mu,\Sigma)$ is continuous.
Note that
$$T_1(\nu,\mu,\Sigma) = \text{zero of } \Psi(x, \nu,T_2(\nu,\mu,\Sigma),T_3(\nu,\mu,\Sigma)),$$
where
\begin{align}
\Psi(x,\nu,\mu,\Sigma)
&=\phi\left(\frac{x}{2}\right)-\phi\left(\frac{x+d}{2}\right)\\
&+\sum_{i=1}^n w_i\left(\frac{\nu+d}{\nu+(x_i-\mu)^T\Sigma^{-1}(x_i-\mu)}-\log\left(\frac{\nu+d}{\nu+(x_i-\mu)^T\Sigma^{-1}(x_i-\mu)}\right)-1\right).
\end{align}
It is sufficient to show that the zero of $\Psi$ depends continuously on $\nu$, $T_2$ and $T_3$.
The continuously differentiable function $\Psi$ is strictly increasing in $x$, with $\frac{\partial}{\partial x} \Psi(x,\nu,T_2,T_3)>0$.
By $\Psi(T_1,\nu,T_2,T_3)=0$, the Implicit Function Theorem yields the following statement:
There exists an open neighborhood $U\times V$ of $(T_1,\nu,T_2,T_3)$ with $U\subset\mathbb{R}_{>0}$
and $V\subset \mathbb{R}_{>0}\times\mathbb{R}^d\times\SPD(d)$ and a continuously differentiable function $G\colon V\to U$
such that for all $(x,\nu,\mu,\Sigma)\in U\times V$ it holds
$$
\Psi(x,\nu,\mu,\Sigma)=0 \quad \text{if and only if}\quad G(\nu,\mu,\Sigma)=x.
$$
Thus the zero of $\Psi$ depends continuously on $\nu$, $T_2$ and $T_3$.
\end{proof}
This implies the following theorem.
\begin{Theorem}
Let $(\nu_r,\mu_r,\Sigma_r)_r$ be the sequence generated by Algorithm \ref{alg:aEM} or \ref{alg:MMF}
with arbitrary initial values $\nu_0 >0,\mu_0\in\mathbb{R}^d,\Sigma_0\in\mathrm{SPD}(d)$.
Then every cluster point of $(\nu_r,\mu_r,\Sigma_r)_r$ is a critical point of $L$.
\end{Theorem}
\begin{proof}
The mapping $T$ defined in Lemma \ref{lem:Tcont} is continuous. Further we know from its definition that $(\nu,\mu,\Sigma)$
is a critical point of $L$ if and only if it is a fixed point of $T$. Let $(\hat\nu,\hat\mu,\hat\Sigma)$ be a cluster point of $(\nu_r,\mu_r,\Sigma_r)_r$.
Then there exists a subsequence $(\nu_{r_s},\mu_{r_s},\Sigma_{r_s})_s$ which converges to $(\hat\nu,\hat\mu,\hat\Sigma)$.
Further we know by Theorem \ref{cor:likelihood_decreases} that $L_r=L(\nu_r,\mu_r,\Sigma_r)$ is decreasing.
Since $(L_r)_r$ is bounded from below, it converges. Now it holds
\begin{align}
L(\hat \nu,\hat \mu,\hat \Sigma)&=\lim_{s\to\infty}L(\nu_{r_s},\mu_{r_s},\Sigma_{r_s})\\
&=\lim_{s\to\infty}L_{r_s}=\lim_{s\to\infty}L_{r_s+1}\\
&=\lim_{s\to\infty}L(\nu_{r_s+1},\mu_{r_s+1},\Sigma_{r_s+1})\\
&=\lim_{s\to\infty}L(T(\nu_{r_s},\mu_{r_s},\Sigma_{r_s}))=L(T(\hat\nu,\hat\mu,\hat\Sigma)).
\end{align}
By Theorem \ref{cor:likelihood_decreases} and the definition of $T$ we have that $L(\nu,\mu,\Sigma)=L(T(\nu,\mu,\Sigma))$ if and only if $(\nu,\mu,\Sigma)=T(\nu,\mu,\Sigma)$. By the definition of the algorithm this is the case if and only if $(\nu,\mu,\Sigma)$ is a critical point of $L$. Thus $(\hat\nu,\hat\mu,\hat\Sigma)$ is a critical point of $L$.
\end{proof}
\section{Numerical Results} \label{sec:numerics}
In this section we present numerical examples of the developed theory. First, we compare the four different algorithms in Subsection \ref{sec:comp}.
Then, in Subsection \ref{sec:accel}, we address further accelerations of our algorithms by
SQUAREM \cite{VR2008} and DAAREM \cite{HV2019} and show also a comparison with the ECME algorithm \cite{LR95}.
Finally, in Subsection \ref{sec:images}, we provide an application in image analysis by determining the degree of freedom parameter
in images corrupted by Student-$t$ noise.
\subsection{Comparison of Algorithms} \label{sec:comp}
In this section, we compare the numerical performance of the classical EM algorithm \ref{alg:EM}
and the proposed Algorithms \ref{alg:aEM}, \ref{alg:MMF} and \ref{alg:GMMF}. To this end, we perform the following Monte Carlo simulation: based on the stochastic representation of the Student-$t$ distribution, see equation~\eqref{stochastic_representation}, we draw $n=1000$ i.i.d. realizations of the $T_\nu(\mu,\Sigma)$ distribution
with location parameter $\mu=0$ and different scatter matrices $\Sigma$
and degrees of freedom parameters $\nu$. Then, we use Algorithms \ref{alg:EM}, \ref{alg:aEM}, \ref{alg:MMF} and \ref{alg:GMMF} to compute the ML-estimator $(\hat\nu,\hat\mu,\hat\Sigma)$.
We initialize all algorithms with the sample mean for $\mu$ and the sample covariance matrix for $\Sigma$. Furthermore, we initialize $\nu_0=3$, and in all algorithms the zero of the respective function is computed by Newton's method.
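The $\nu$-steps of all algorithms amount to finding the zero of a scalar function of the form $F(\nu)=\phi(\nu/2)-\phi((\nu+d)/2)+\sum_i w_i(\gamma_i-\log\gamma_i-1)$ with $\gamma_i=(\nu+d)/(\nu+\delta_i)$. The sketch below assumes $\phi(x)=\psi(x)-\log(x)$ with the digamma function $\psi$, a choice consistent with the property $\phi'>0$ used above; the exact definition of $\phi$ is fixed earlier in the paper. For robustness of the illustration it uses a bracketing solver in place of Newton's method.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

# Sketch of the nu-step: find the zero of
#   F(nu) = phi(nu/2) - phi((nu+d)/2) + sum_i w_i (g_i - log(g_i) - 1),
#   g_i = (nu+d)/(nu+delta_i).
# ASSUMPTION: phi(x) = digamma(x) - log(x); a bracketing solver is used
# here instead of the Newton iteration employed in the experiments.
def F(nu, delta, w, d):
    phi = lambda x: digamma(x) - np.log(x)
    g = (nu + d) / (nu + delta)
    return phi(nu / 2) - phi((nu + d) / 2) + np.sum(w * (g - np.log(g) - 1.0))

def nu_update(delta, w, d, lo=1e-3, hi=1e4):
    # delta: squared Mahalanobis distances, w: weights summing to one
    return brentq(F, lo, hi, args=(delta, w, d))
```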
As a stopping criterion we use the following relative distance:
$$
\frac{ \sqrt{ \| \mu_{r+1} - \mu_r \|^2 + \| \Sigma_{r+1} -\Sigma_r \|_F^2} }{ \sqrt{\|\mu_r\|^2+\|\Sigma_r\|_F^2} } + \frac{ \sqrt{(\log(\nu_{r+1})-\log(\nu_r))^2}}{\abs{\log(\nu_r)}}<10^{-5}.
$$
We take the logarithm of $\nu$ in the stopping criterion, because $T_\nu(\mu,\Sigma)$ converges to the normal distribution as $\nu\to\infty$
and therefore the difference between $T_\nu(\mu,\Sigma)$ and $T_{\nu+1}(\mu,\Sigma)$ becomes small for large $\nu$.
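Written out as code, the stopping criterion reads:

```python
import numpy as np

# The relative-distance stopping criterion from above: Euclidean norm for
# mu, Frobenius norm for Sigma, and a relative log-scale distance for nu
# (undefined for nu_old = 1, exactly as in the displayed formula).
def stopped(nu_old, mu_old, Sigma_old, nu_new, mu_new, Sigma_new, tol=1e-5):
    num = np.sqrt(np.sum((mu_new - mu_old) ** 2)
                  + np.sum((Sigma_new - Sigma_old) ** 2))
    den = np.sqrt(np.sum(mu_old ** 2) + np.sum(Sigma_old ** 2))
    rel_nu = abs(np.log(nu_new) - np.log(nu_old)) / abs(np.log(nu_old))
    return bool(num / den + rel_nu < tol)
```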
To quantify the performance of the algorithms, we count the number of iterations until the stopping criterion is reached.
Since the inner loop of the GMMF is potentially time consuming, we additionally measure the execution time until the stopping criterion is reached.
This experiment is repeated $N=10\,000$ times for each $\nu\in\{1,2,5,10,100\}$.
Afterwards, we calculate the average number of iterations and the average execution times.
The results are given in Table \ref{tab:performance}.
We observe that the performance of the algorithms is nearly independent of the choice of $\Sigma$.
Further, we see that the performance of the aEM algorithm
is always better than that of the classical EM algorithm.
Moreover, all algorithms need more time to estimate large $\nu$.
This seems natural, since the likelihood function becomes very flat for large $\nu$.
The GMMF needs the lowest number of iterations.
However, for small $\nu$ the execution time of the GMMF
is larger than that of the MMF and the aEM algorithm.
This can be explained by the fact
that the $\nu$-step has a smaller relevance for small $\nu$ but is still time consuming in the GMMF.
The MMF needs slightly more iterations
than the GMMF, but if $\nu$ is not extremely large its execution time is smaller
than that of the GMMF and the aEM algorithm. In summary, we propose the MMF algorithm as the algorithm of choice.
\begin{table}[htp]
\begin{center}
\resizebox*{!}{8cm}{
\begin{tabular}{c|c|c c c c}
$\Sigma$ & $\nu$ & EM & aEM & MMF & GMMF\\\hline
\multirow{5}{7em}{$\left(\begin{array}{cc}0.1&0\\0&0.1\end{array}\right)$}
& $1$&$\n62.32\pm\n2.50$&$\n23.44\pm\n0.79$&$22.16\pm0.75$&$\mathbf{20.61\pm0.70}$\\
& $2$&$\n46.17\pm\n1.82$&$\n26.42\pm\n1.08$&$21.48\pm0.94$&$\mathbf{17.79\pm0.80}$\\
& $5$&$\n50.42\pm11.22$&$\n49.97\pm\n7.48$&$25.28\pm2.61$&$\mathbf{12.14\pm1.73}$\\
& $10$&$122.62\pm31.74$&$117.40\pm31.65$&$38.16\pm4.51$&$\mathbf{14.32\pm0.96}$\\
& $100$&$531.07\pm91.41$&$528.14\pm92.19$&$53.66\pm6.98$&$\mathbf{10.76\pm2.07}$\\\hline
\multirow{5}{7em}{$\left(\begin{array}{cc}1&0\\0&1\end{array}\right)$}
& $1$&$\n62.34\pm\n2.52$&$\n23.43\pm\n0.78$&$22.16\pm0.75$&$\mathbf{20.59\pm0.70}$\\
& $2$&$\n46.20\pm\n1.81$&$\n26.43\pm\n1.07$&$21.49\pm0.94$&$\mathbf{17.79\pm0.80}$\\
& $5$&$\n50.68\pm10.86$&$\n50.06\pm\n7.42$&$25.31\pm2.58$&$\mathbf{12.06\pm1.75}$\\
& $10$&$122.72\pm31.65$&$117.51\pm31.56$&$38.18\pm4.50$&$\mathbf{14.28\pm0.97}$\\
& $100$&$531.75\pm90.98$&$528.84\pm91.75$&$53.62\pm6.94$&$\mathbf{10.64\pm2.02}$\\\hline
\multirow{5}{7em}{$\left(\begin{array}{cc}10&0\\0&10\end{array}\right)$}
& $1$&$\n62.35\pm\n2.55$&$\n23.44\pm\n0.78$&$22.15\pm0.76$&$\mathbf{20.59\pm0.71}$\\
& $2$&$\n46.27\pm\n1.82$&$\n26.45\pm\n1.08$&$21.51\pm0.95$&$\mathbf{17.81\pm0.80}$\\
& $5$&$\n50.71\pm11.21$&$\n50.15\pm\n7.61$&$25.34\pm2.63$&$\mathbf{12.08\pm1.78}$\\
& $10$&$122.44\pm30.66$&$117.19\pm30.56$&$38.17\pm4.46$&$\mathbf{14.27\pm0.96}$\\
& $100$&$533.21\pm89.80$&$530.27\pm90.57$&$53.64\pm6.93$&$\mathbf{10.62\pm2.01}$\\\hline
\multirow{5}{7em}{$\left(\begin{array}{cc}2&-1\\-1&2\end{array}\right)$}
& $1$&$\n62.32\pm\n2.55$&$\n23.43\pm\n0.78$&$22.15\pm0.76$&$\mathbf{20.60\pm0.70}$\\
& $2$&$\n46.22\pm\n1.82$&$\n26.43\pm\n1.09$&$21.50\pm0.94$&$\mathbf{17.80\pm0.80}$\\
& $5$&$\n50.76\pm11.12$&$\n50.21\pm\n7.52$&$25.35\pm2.59$&$\mathbf{12.09\pm1.75}$\\
& $10$&$122.37\pm31.01$&$117.17\pm30.92$&$38.13\pm4.49$&$\mathbf{14.30\pm0.96}$\\
& $100$&$530.89\pm91.36$&$527.96\pm92.15$&$53.68\pm7.07$&$\mathbf{10.75\pm2.08}$
\end{tabular}}\,
\vspace{0.5cm}
\resizebox*{!}{8cm}{
\begin{tabular}{c|c|c c c c}
$\Sigma$ & $\nu$ & EM & aEM & MMF & GMMF\\\hline
\multirow{5}{7em}{$\left(\begin{array}{cc}0.1&0\\0&0.1\end{array}\right)$}
& $1$&$0.008469\pm0.00111$&$0.003511\pm0.00044$&$\mathbf{0.003498\pm0.00044}$&$0.006954\pm0.00114$\\
& $2$&$0.006428\pm0.00069$&$0.003995\pm0.00042$&$\mathbf{0.003409\pm0.00036}$&$0.005388\pm0.00061$\\
& $5$&$0.007237\pm0.00208$&$0.007768\pm0.00181$&$0.004133\pm0.00085$&$\mathbf{0.003752\pm0.00100}$\\
& $10$&$0.017421\pm0.00532$&$0.017991\pm0.00567$&$0.006187\pm0.00122$&$\mathbf{0.005796\pm0.00110}$\\
& $100$&$0.070024\pm0.01306$&$0.075191\pm0.01418$&$0.008146\pm0.00131$&$\mathbf{0.005601\pm0.00097}$\\\hline
\multirow{5}{7em}{$\left(\begin{array}{cc}1&0\\0&1\end{array}\right)$}
& $1$&$0.008645\pm0.00090$&$0.003581\pm0.00034$&$\mathbf{0.003572\pm0.00036}$&$0.007126\pm0.00098$\\
& $2$&$0.006431\pm0.00074$&$0.003989\pm0.00044$&$\mathbf{0.003417\pm0.00039}$&$0.005427\pm0.00071$\\
& $5$&$0.006883\pm0.00162$&$0.007352\pm0.00128$&$0.003939\pm0.00058$&$\mathbf{0.003550\pm0.00079}$\\
& $10$&$0.016434\pm0.00439$&$0.016964\pm0.00470$&$0.005869\pm0.00089$&$\mathbf{0.005493\pm0.00077}$\\
& $100$&$0.072309\pm0.01507$&$0.077724\pm0.01624$&$0.008363\pm0.00155$&$\mathbf{0.005773\pm0.00117}$\\\hline
\multirow{5}{7em}{$\left(\begin{array}{cc}10&0\\0&10\end{array}\right)$}
& $1$&$0.008839\pm0.00108$&$0.003664\pm0.00043$&$\mathbf{0.003639\pm0.00042}$&$0.007217\pm0.00104$\\
& $2$&$0.006516\pm0.00075$&$0.004054\pm0.00048$&$\mathbf{0.003449\pm0.00039}$&$0.005428\pm0.00065$\\
& $5$&$0.007293\pm0.00207$&$0.007799\pm0.00180$&$0.004149\pm0.00082$&$\mathbf{0.003740\pm0.00098}$\\
& $10$&$0.020598\pm0.00659$&$0.021193\pm0.00683$&$0.007228\pm0.00167$&$\mathbf{0.006834\pm0.00155}$\\
& $100$&$0.078682\pm0.01969$&$0.084275\pm0.02087$&$0.009039\pm0.00213$&$\mathbf{0.006246\pm0.00160}$\\\hline
\multirow{5}{7em}{$\left(\begin{array}{cc}2&-1\\-1&2\end{array}\right)$}
& $1$&$0.008837\pm0.00107$&$0.003648\pm0.00039$&$\mathbf{0.003641\pm0.00041}$&$0.007207\pm0.00104$\\
& $2$&$0.006481\pm0.00070$&$0.004016\pm0.00041$&$\mathbf{0.003433\pm0.00036}$&$0.005413\pm0.00061$\\
& $5$&$0.006968\pm0.00167$&$0.007440\pm0.00129$&$0.003965\pm0.00055$&$\mathbf{0.003561\pm0.00077}$\\
& $10$&$0.016608\pm0.00442$&$0.017107\pm0.00468$&$0.005920\pm0.00092$&$\mathbf{0.005499\pm0.00076}$\\
& $100$&$0.072354\pm0.01509$&$0.077586\pm0.01619$&$0.008385\pm0.00153$&$\mathbf{0.005715\pm0.00114}$
\end{tabular}
}
\end{center}
\caption{Average number of iterations (top) and execution times (bottom) and the corresponding standard deviations of the different algorithms.}
\label{tab:performance}
\end{table}
In Figure \ref{fig:conv_speed_plots} we exemplarily show the functional values $L(\nu_r,\mu_r,\Sigma_r)$
of the four algorithms and samples generated for different values of $\nu$ and $\Sigma=I$.
Note that the $x$-axis of the plots is in log-scale.
We see that the convergence speed (in terms of the number of iterations)
of the EM algorithm is much slower than that of the MMF/GMMF.
For small $\nu$ the convergence speed
of the aEM algorithm is close to that of the GMMF/MMF,
but for large $\nu$ it is close to that of the EM algorithm.
In Figure \ref{fig:hist_plots} we show the histograms of the
$\nu$-output of $1000$ runs for different values of $\nu$ and $\Sigma=I$.
Since the $\nu$-outputs of all algorithms are very close together, we only plot the output of the GMMF. We see that the accuracy of the estimation of $\nu$ decreases with increasing $\nu$. This can be explained by the fact that the likelihood function becomes very flat for large $\nu$, so that the estimation of $\nu$ becomes much harder.
\begin{figure}
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/Like_nu1}
\caption{$\nu=1$.}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/Like_nu2}
\caption{$\nu=2$.}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/Like_nu5}
\caption{$\nu=5$.}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/Like_nu10}
\caption{$\nu=10$.}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/Like_nu100}
\caption{$\nu=100$.}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/Like_nu200}
\caption{$\nu=200$.}
\end{subfigure}\hfill
\caption{Plots of $L(\nu_r,\mu_r,\Sigma_r)$ on the y-axis and $r$ on the x-axis for all algorithms.
}
\label{fig:conv_speed_plots}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/hist_nu1}
\caption{$\nu=1$.}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/hist_nu2}
\caption{$\nu=2$.}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/hist_nu5}
\caption{$\nu=5$.}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/hist_nu10}
\caption{$\nu=10$.}
\end{subfigure}
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/hist_nu100}
\caption{$\nu=100$.}
\end{subfigure}\hfill
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{images/hist_nu200}
\caption{$\nu=200$. }
\end{subfigure}\hfill
\caption{Histograms of the output $\nu$ from the algorithms.}
\label{fig:hist_plots}
\end{figure}
\subsection{Comparison with other Accelerations of the EM Algorithm}\label{sec:accel}
In this section, we compare our algorithms with the Expectation/Conditional Maximization Either (ECME) algorithm \cite{LR1994, LR95} and apply the SQUAREM acceleration \cite{VR2008} as well as the damped Anderson Acceleration (DAAREM) \cite{HV2019} to our algorithms.
\paragraph{ECME algorithm:}
The ECME algorithm was first proposed in \cite{LR1994}.
Some numerical examples of the behavior of the ECME algorithm for estimating the parameters $(\nu,\mu,\Sigma)$ of a Student-$t$ distribution $T_\nu(\mu,\Sigma)$ are given in \cite{LR95}.
The idea of ECME is to replace the M-Step of the EM algorithm by the following update of the parameters $(\nu_r,\mu_r,\Sigma_r)$:
first, we fix $\nu=\nu_r$ and compute the update $(\mu_{r+1},\Sigma_{r+1})$
of the parameters $(\mu_r,\Sigma_r)$ by performing one step of the EM algorithm for fixed degrees of freedom (CM1-Step).
Second, we fix $(\mu,\Sigma)=(\mu_{r+1},\Sigma_{r+1})$ and compute the update $\nu_{r+1}$ of $\nu_r$ by maximizing the likelihood function with respect to $\nu$ (CM2-Step).
The resulting algorithm is given in Algorithm \ref{alg:ECME}.
It is similar to the GMMF (Algorithm \ref{alg:GMMF}), but uses the $\Sigma$-update of the EM algorithm (Algorithm \ref{alg:EM}) instead of the $\Sigma$-update of the aEM algorithm (Algorithm \ref{alg:aEM}).
The authors of \cite{LR1994} showed a similar convergence result as for the EM algorithm.
Alternatively, we could prove Theorem \ref{cor:likelihood_decreases}
for the ECME algorithm analogously as for the GMMF algorithm.\\
\begin{algorithm}[!ht]
\caption{ECME Algorithm (ECME)} \label{alg:ECME}
\begin{algorithmic}
\State \textbf{Input:} $x_1,\ldots,x_n\in \mathbb{R}^d$, $n \geq d+1$, $w \in \mathring \Delta_n$
\State \textbf{Initialization:}
$\nu_0 = \eps>0$, $\mu_0 =\frac{1}{n} \sum\limits_{i=1}^n x_i$,
$\Sigma_0 =\frac{1}{n}\sum\limits_{i=1}^n (x_i-\mu_0)(x_i-\mu_0)^{\mbox{\tiny{T}}}$
\For{$r=0,\ldots$}
\vspace{0.2cm}
\textbf{E-Step:} Compute the weights
\begin{align*}
\delta_{i,r} &= (x_i-\mu_r)^{\mbox{\tiny{T}}} \Sigma_r^{-1} (x_i-\mu_r)\\
\gamma_{i,r} &= \frac{\nu_r + d}{ \nu_r + \delta_{i,r} }
\end{align*}
\hspace*{0.2cm} \textbf{CM1-Step:} Update the parameters
\begin{align*}
\mu_{r+1}
&=
\frac{ \sum\limits_{i=1}^{n} w_i \gamma_{i,r} x_i}{ \sum\limits_{i=1}^{n} w_i\gamma_{i,r} }
\\
\Sigma_{r+1}
&=
\sum\limits_{i=1}^{n} w_i \gamma_{i,r} (x_i-\mu_{r+1})(x_i-\mu_{r+1})^{\mbox{\tiny{T}}}
\end{align*}
\hspace*{0.2cm} \textbf{CM2-Step:} Update the parameter
\begin{align*}
\nu_{r+1}&= \; \text{ zero of }
\phi\left( \frac{\nu}{2} \right)
-\phi\left( \frac{\nu +d}{2} \right)
+ \sum_{i=1}^n w_i \left( \frac{\nu + d}{\nu + \delta_{i,r+1}} - \log\left( \frac{\nu + d}{\nu + \delta_{i,r+1}} \right) - 1 \right)
\end{align*}
\EndFor
\end{algorithmic}
\end{algorithm}
Next, we consider two acceleration schemes of arbitrary fixed point algorithms $\vartheta_{r+1}=G(\vartheta_r)$. In our case $\vartheta\in\mathbb{R}^p$ is given by $(\nu,\mu,\Sigma)$ and $G$ is given by one step of Algorithm \ref{alg:EM}, \ref{alg:aEM}, \ref{alg:MMF}, \ref{alg:GMMF} or \ref{alg:ECME}.
\paragraph{SQUAREM Acceleration:}
The first acceleration scheme, called squared iterative methods (SQUAREM) was proposed in \cite{VR2008}.
The idea of SQUAREM is to update the parameters $\vartheta_r=(\nu_r,\mu_r,\Sigma_r)$ in the following way:
we compute $\vartheta_{r,1}=G(\vartheta_r)$ and $\vartheta_{r,2}=G(\vartheta_{r,1})$.
Then, we calculate $s=\vartheta_{r,1}-\vartheta_r$ and $v=(\vartheta_{r,2}-\vartheta_{r,1})-s$.
Now we set $\vartheta'=\vartheta_r-2\alpha s+\alpha^2 v$ and define the update $\vartheta_{r+1}=G(\vartheta')$, where $\alpha$ is chosen as follows.
First, we set $\alpha=\min(-\tfrac{\|s\|_2}{\|v\|_2},-1)$. Then we compute $\vartheta'$ as described before.
If $L(\vartheta')<L(\vartheta_r)$, we keep our choice of $\alpha$.
Otherwise we update $\alpha$ by $\alpha=\tfrac{\alpha-1}{2}$.
Note that this scheme terminates as long as $\vartheta_r$ is not a critical point of $L$ by the following argument:
it holds that $\vartheta_r+2s+v=\vartheta_{r,2}$,
which implies that $\lim_{\alpha\to-1}L(\vartheta_r-2\alpha s+\alpha^2v)=L(\vartheta_{r,2})\leq L(\vartheta_r)$,
with equality if and only if $\vartheta_r$ is a critical point of $L$,
since all our algorithms have the property that $L(\vartheta)\geq L(G(\vartheta))$ with equality if and only if $\vartheta$ is a critical point of $L$.
By construction this scheme ensures that the negative log-likelihood values of the iterates are decreasing.
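One SQUAREM step for a generic fixed-point map $G$ and objective $L$ can be sketched as follows; the cap on the number of backtracking steps and the fallback to the plain double step $\vartheta_{r,2}$ are safeguards added for this sketch.

```python
import numpy as np

# One SQUAREM step: two inner steps of the fixed-point map G, a step
# length alpha, and backtracking of alpha towards -1 until the objective
# L (the negative log-likelihood) does not increase.
def squarem_step(theta, G, L, max_backtrack=20):
    theta1 = G(theta)
    theta2 = G(theta1)
    s = theta1 - theta
    v = (theta2 - theta1) - s
    if np.linalg.norm(v) == 0.0:            # degenerate case: accept double step
        return theta2
    alpha = min(-np.linalg.norm(s) / np.linalg.norm(v), -1.0)
    theta_prime = theta2                    # fallback corresponds to alpha = -1
    for _ in range(max_backtrack):
        candidate = theta - 2.0 * alpha * s + alpha ** 2 * v
        if L(candidate) <= L(theta):
            theta_prime = candidate
            break
        alpha = (alpha - 1.0) / 2.0         # move alpha towards -1
    return G(theta_prime)
```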
\paragraph{Damped Anderson Acceleration with Restarts and $\epsilon$-Monotonicity (DAAREM):}
The DAAREM acceleration was proposed in \cite{HV2019}.
It is based on the Anderson acceleration, which was introduced in \cite{A1965}.
As for the SQUAREM acceleration, we want to solve
the fixed point equation $\vartheta=G(\vartheta)$ with $\vartheta=(\nu,\mu,\Sigma)$ using the iteration $\vartheta_{r+1}=G(\vartheta_r)$.
We also use the equivalent formulation to solve $f(\vartheta)=0$, where $f(\vartheta)=G(\vartheta)-\vartheta$.
For a fixed parameter $m\in\mathbb{N}_{>0}$, we define $m_r=\min(m,r)$.
Then, one update of $\vartheta_r$ using the Anderson Acceleration is given by
\begin{align}
\vartheta_{r+1}=&G(\vartheta_r)-\sum_{j=1}^{m_r} (G(\vartheta_{r-m_r+j})-G(\vartheta_{r-m_r+j-1}))\gamma_j^{(r)}\label{eq:AA_update}\\
=&\vartheta_r+f(\vartheta_r)-\sum_{j=1}^{m_r} ((\vartheta_{r-m_r+j}-\vartheta_{r-m_r+j-1})-(f(\vartheta_{r-m_r+j})-f(\vartheta_{r-m_r+j-1})))\gamma_j^{(r)},
\end{align}
with $\gamma^{(r)}=(\mathcal{F}_r^{\mbox{\tiny{T}}}\mathcal{F}_r)^{-1}\mathcal{F}_r^{\mbox{\tiny{T}}} f(\vartheta_r)$,
where the columns of $\mathcal{F}_r\in\mathbb{R}^{p\times m_r}$ are given by $f(\vartheta_{r-m_r+j+1})-f(\vartheta_{r-m_r+j})$ for $j=0,...,m_r-1$.
An equivalent formulation of update step \eqref{eq:AA_update} is given by
\begin{align}
\vartheta_{r+1}=\vartheta_r+f(\vartheta_r)-(\mathcal{X}_r+\mathcal{F}_r)\gamma^{(r)},
\end{align}
where the columns of $\mathcal{X}_r\in\mathbb{R}^{p\times m_r}$ are given by $\vartheta_{r-m_r+j+1}-\vartheta_{r-m_r+j}$ for $j=0,...,m_r-1$.
The Anderson acceleration can be viewed as a special case of a multisecant quasi-Newton procedure to solve $f(\vartheta)=0$. For more details we refer to \cite{FS2009, HV2019}.\\
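A minimal sketch of one undamped Anderson update, with $\gamma^{(r)}$ obtained from the least-squares problem $\min_\gamma\|\mathcal{F}_r\gamma-f(\vartheta_r)\|_2$:

```python
import numpy as np

# One (undamped) Anderson-acceleration update as in the displayed formula:
#   theta_{r+1} = theta_r + f(theta_r) - (X_r + F_r) gamma^{(r)}.
def anderson_step(thetas, residuals):
    """thetas / residuals: the last m_r+1 iterates theta and residuals
    f(theta) = G(theta) - theta, oldest first, as 1-d numpy arrays."""
    X = np.column_stack([thetas[j + 1] - thetas[j]
                         for j in range(len(thetas) - 1)])
    F = np.column_stack([residuals[j + 1] - residuals[j]
                         for j in range(len(residuals) - 1)])
    f_r = residuals[-1]
    gamma, *_ = np.linalg.lstsq(F, f_r, rcond=None)   # (F^T F)^{-1} F^T f_r
    return thetas[-1] + f_r - (X + F) @ gamma
```

For an affine map in one dimension with $m_r=1$ this reduces to the secant method and hits the fixed point exactly.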
The DAAREM acceleration modifies the Anderson acceleration in three points.
The first modification is to restart the algorithm after $m$ steps.
That is, to set $m_r=\min(m,c_r)$ instead of $m_r=\min(m,r)$, where $c_r\in\{1,...,m\}$ is given by $c_r=((r-1)\,\mathrm{mod}\,m)+1$.
The second modification is to add a damping term in the computation of the coefficients $\gamma^{(r)}$.
This means that $\gamma^{(r)}$ is given by $\gamma^{(r)}=(\mathcal{F}_r^{\mbox{\tiny{T}}}\mathcal{F}_r+\lambda_r I)^{-1}\mathcal{F}_r^{\mbox{\tiny{T}}} f(\vartheta_r)$ instead of $\gamma^{(r)}=(\mathcal{F}_r^{\mbox{\tiny{T}}}\mathcal{F}_r)^{-1}\mathcal{F}_r^{\mbox{\tiny{T}}} f(\vartheta_r)$.
The parameter $\lambda_r$ is chosen such that
\begin{align}
\|(\mathcal{F}_r^{\mbox{\tiny{T}}}\mathcal{F}_r+\lambda_r I)^{-1}\mathcal{F}_r^{\mbox{\tiny{T}}} f(\vartheta_r)\|_2^2=\delta_r\|(\mathcal{F}_r^{\mbox{\tiny{T}}}\mathcal{F}_r)^{-1}\mathcal{F}_r^{\mbox{\tiny{T}}} f(\vartheta_r)\|_2^2\label{eq:DAAREM_lambda}
\end{align}
for some damping parameters $\delta_r$. We initialize the $\delta_r$ by $\delta_1=\tfrac1{1+\alpha^{\kappa}}$ and decrease the exponent of $\alpha$ in each step by $1$ up to a minimum of $\kappa-D$ for some parameter $D\in\mathbb{N}_{>0}$.
The third modification is to enforce that the negative log-likelihood function $L$ does not increase by more than $\epsilon$ within one iteration step.
To do this, we compute the update $\vartheta_{r+1}$ using the Anderson acceleration. If $L(\vartheta_{r+1})>L(\vartheta_r)+\epsilon$, we use our original fixed point algorithm in this step, i.e.\ we set $\vartheta_{r+1}=G(\vartheta_r)$.
We summarize the DAAREM acceleration in Algorithm \ref{alg:DAAREM}. In our numerical experiments we use for the parameters the values suggested by \cite{HV2019}, that is $\epsilon=0.01$, $\epsilon_c=0$, $\alpha=1.2$, $\kappa=25$, $D=2\kappa$ and $m=\min(\lceil\tfrac{p}2\rceil,10)$, where $p$ is the number of parameters in $\vartheta$.
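Equation \eqref{eq:DAAREM_lambda} determines $\lambda_r$ only implicitly. Since its left-hand side is strictly decreasing in $\lambda_r$, the parameter can be computed by a simple bracket-and-bisect search, as the following sketch shows; any one-dimensional root finder works here, and \cite{HV2019} may use a different one.

```python
import numpy as np

# Solve eq. (DAAREM_lambda) for lambda_r by bisection: the squared norm of
# the damped coefficient vector is strictly decreasing in lambda, so a
# bracket [lo, hi] with a sign change always exists for 0 < delta < 1.
def damped_gamma(Fm, f, delta, tol=1e-10):
    FtF = Fm.T @ Fm
    Ftf = Fm.T @ f
    target = delta * np.sum(np.linalg.solve(FtF, Ftf) ** 2)
    norm2 = lambda lam: np.sum(
        np.linalg.solve(FtF + lam * np.eye(FtF.shape[0]), Ftf) ** 2)
    lo, hi = 0.0, 1.0
    while norm2(hi) > target:               # grow the bracket
        hi *= 2.0
    while hi - lo > tol * (1.0 + hi):
        mid = 0.5 * (lo + hi)
        if norm2(mid) > target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return np.linalg.solve(FtF + lam * np.eye(FtF.shape[0]), Ftf), lam
```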
\begin{algorithm}[!ht]
\caption{DAAREM acceleration} \label{alg:DAAREM}
\begin{algorithmic}
\State \textbf{Input:} Parameters $\epsilon\geq0$, $\epsilon_c\geq0$, $\alpha>1$, $\kappa\geq0$, $D\geq0$, $m\geq1$
\State \textbf{Initialization:} Initialize $\vartheta_0=(\nu_0,\mu_0,\Sigma_0)$ as in the corresponding fixed point algorithm.
\State Set $c_1=1$, $s_1=0$, $\vartheta_1=\vartheta_0+f(\vartheta_0)$, $L^*=L(\vartheta_1)$.
\For{r=1,2,...}
\State Set $m_r=\min(m,c_r)$, $\delta_r=\tfrac1{1+\alpha^{\kappa-s_r}}$ and compute $f_r=f(\vartheta_r)$.
\State Define the columns of $\mathcal{F}_r,\mathcal{X}_r\in\mathbb{R}^{p\times m_r}$ by $f_{r-m_r+j+1}-f_{r-m_r+j}$ and $\vartheta_{r-m_r+j+1}-\vartheta_{r-m_r+j}$, respectively, $j=0,...,m_r-1$.
\State Define $\lambda_r$ by \eqref{eq:DAAREM_lambda} and set $\gamma^{(r)}=(\mathcal{F}_r^{\mbox{\tiny{T}}}\mathcal{F}_r+\lambda_r I)^{-1}\mathcal{F}_r^{\mbox{\tiny{T}}} f_r$.
\State Set $t_{r+1}=\vartheta_r+f_r-(\mathcal{X}_r+\mathcal{F}_r)\gamma^{(r)}$.
\If{$L(t_{r+1})\leq L(\vartheta_r)+\epsilon$}
\State Set $\vartheta_{r+1}=t_{r+1}$ and $s_\text{new}=s_r+1$.
\Else
\State Set $\vartheta_{r+1}=\vartheta_r+f_r$ and $s_\text{new}=s_r$.
\EndIf
\If{$r\,\mathrm{mod}\,m=0$}
\If{$L(\vartheta_{r+1})> L^*+\epsilon_c$}
\State Set $s_\text{new}=\max\{s_\text{new}-m,-D\}$
\EndIf
\State Set $c_{r+1}=1$ and $L^*=L(\vartheta_{r+1})$.
\Else
\State Set $c_{r+1}=c_r+1$.
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
\paragraph{Simulation Study:}
To compare the performance of all of these algorithms, we perform again a Monte Carlo simulation. As in the previous section, we draw $n=100$ i.i.d.~realizations of $T_\nu(\mu,\Sigma)$ with $\mu=0$, $\Sigma=0.1\,\mathrm{Id}$ and $\nu\in\{1,2,5,10,100\}$. Then, we use each of the Algorithms \ref{alg:EM}, \ref{alg:aEM}, \ref{alg:MMF}, \ref{alg:GMMF} and \ref{alg:ECME} to compute the ML-estimator $(\hat\nu,\hat\mu,\hat\Sigma)$. We use each of these algorithms with no acceleration, with SQUAREM acceleration and with DAAREM acceleration.\\
We use the same initialization and stopping criterion as in the previous section and repeat this experiment $N=1000$ times. To quantify the performance of the algorithms, we count the number of iterations and measure the execution time. The results are given in Table \ref{tab:performance2}.
We observe that for nearly any choice of the parameters the performance of the GMMF is better than that of the ECME. For small $\nu$, the performance of the SQUAREM-aEM is also very good. On the other hand, for large $\nu$ the SQUAREM-GMMF behaves very well. Further, for any choice of $\nu$ the performance of the SQUAREM-MMF is close to that of the best algorithm.
\begin{table}[htp]
\begin{center}
\resizebox*{7cm}{!}{
\begin{sideways}
\begin{tabular}{c|c c c c c}
Algorithm&$\nu=1$&$\nu=2$&$\nu=5$&$\nu=10$&$\nu=100$\\\hline
EM&$62.24\pm2.47$&$46.20\pm1.84$&$50.14\pm11.01$&$122.45\pm30.81$&$530.72\pm\n89.11$\\
aEM&$23.39\pm0.75$&$26.46\pm1.08$&$49.60\pm\n7.55$&$117.21\pm30.74$&$527.77\pm\n89.92$\\
MMF&$22.13\pm0.73$&$21.51\pm0.96$&$25.12\pm\n2.63$&$\n38.17\pm\n4.47$&$\n53.98\pm\phantom{0}\n7.06$\\
GMMF&$20.56\pm0.67$&$17.79\pm0.79$&$12.06\pm\n1.73$&$\n14.35\pm\n0.97$&$\n10.86\pm\phantom{0}\n2.10$\\
ECME&$60.81\pm2.41$&$40.73\pm1.97$&$29.07\pm\n1.81$&$\n22.12\pm\n3.81$&$\n12.81\pm\phantom{0}\n2.96$\\\hline
DAAREM-EM&$22.09\pm4.05$&$22.26\pm4.59$&$20.39\pm\n5.42$&$\n24.72\pm\n6.34$&$\n28.09\pm\phantom{0}\n6.93$\\
DAAREM-aEM&$15.52\pm1.57$&$14.90\pm2.39$&$15.35\pm\n3.22$&$\n17.84\pm\n4.41$&$\n20.07\pm\phantom{0}\n3.68$\\
DAAREM-MMF&$15.16\pm1.45$&$14.02\pm2.09$&$13.12\pm\n2.09$&$\n14.99\pm\n3.62$&$\n66.86\pm630.74$\\
DAAREM-GMMF&$14.11\pm1.04$&$12.81\pm1.46$&$\n9.61\pm\n1.27$&$\phantom{0}\n9.84\pm\n1.46$&$\n10.15\pm\phantom{0}\n2.10$\\
DAAREM-ECME&$22.69\pm4.71$&$19.15\pm3.50$&$17.06\pm\n3.33$&$\n16.89\pm\n3.75$&$\n12.35\pm\phantom{0}\n3.90$\\\hline
SQUAREM-EM&$26.36\pm2.25$&$21.77\pm4.56$&$21.43\pm\n3.13$&$\n46.01\pm10.72$&$111.24\pm\n40.47$\\
SQUAREM-aEM&$15.32\pm0.98$&$14.86\pm0.86$&$22.87\pm\n2.26$&$\n43.57\pm\n8.29$&$\n38.56\pm\n35.35$\\
SQUAREM-MMF&$15.47\pm1.09$&$14.05\pm1.40$&$14.18\pm\n1.56$&$\n18.40\pm\n1.21$&$\n22.41\pm\phantom{0}\n9.39$\\
SQUAREM-GMMF&$\mathbf{13.30\pm1.49}$&$\mathbf{11.99\pm0.16}$&$\mathbf{\n9.02\pm\n0.49}$&$\mathbf{\phantom{0}\n8.90\pm\n0.80}$&$\mathbf{\phantom{0}\n8.28\pm\phantom{0}\n1.29}$\\
SQUAREM-ECME&$24.25\pm2.79$&$19.20\pm1.96$&$18.48\pm\n3.12$&$\n17.98\pm\n3.33$&$\n13.41\pm\phantom{0}\n3.41$
\end{tabular}
\end{sideways}}\quad
\resizebox*{7cm}{!}{
\begin{sideways}
\begin{tabular}{c|c c c c c}
Algorithm&$\nu=1$&$\nu=2$&$\nu=5$&$\nu=10$&$\nu=100$\\\hline
EM&$0.00890\pm0.00163$&$0.00644\pm0.00074$&$0.00682\pm0.00158$&$0.01659\pm0.00432$&$0.07076\pm0.01350$\\
aEM&$0.00365\pm0.00056$&$0.00401\pm0.00049$&$0.00732\pm0.00128$&$0.01706\pm0.00465$&$0.07513\pm0.01416$\\
MMF&$0.00369\pm0.00075$&$0.00342\pm0.00039$&$0.00390\pm0.00052$&$0.00589\pm0.00085$&$0.00834\pm0.00151$\\
GMMF&$0.00763\pm0.00193$&$0.00540\pm0.00061$&$0.00355\pm0.00074$&$0.00551\pm0.00063$&$0.00599\pm0.00112$\\
ECME&$0.01998\pm0.00343$&$0.01214\pm0.00137$&$0.00927\pm0.00114$&$0.00801\pm0.00105$&$0.00684\pm0.00157$\\\hline
DAAREM-EM&$0.00728\pm0.00163$&$0.00726\pm0.00158$&$0.00652\pm0.00180$&$0.00796\pm0.00218$&$0.00905\pm0.00233$\\
DAAREM-aEM&$0.00554\pm0.00095$&$0.00519\pm0.00097$&$0.00530\pm0.00124$&$0.00613\pm0.00160$&$0.00687\pm0.00141$\\
DAAREM-MMF&$0.00553\pm0.00090$&$0.00500\pm0.00084$&$0.00463\pm0.00082$&$0.00529\pm0.00137$&$0.02410\pm0.22518$\\
DAAREM-GMMF&$0.00837\pm0.00185$&$0.00679\pm0.00091$&$0.00491\pm0.00081$&$0.00601\pm0.00086$&$0.00772\pm0.00201$\\
DAAREM-ECME&$0.01527\pm0.00351$&$0.01061\pm0.00175$&$0.00968\pm0.00171$&$0.00993\pm0.00189$&$0.00825\pm0.00207$\\\hline
SQUAREM-EM&$0.00456\pm0.00081$&$0.00372\pm0.00077$&$0.00375\pm0.00068$&$0.00831\pm0.00220$&$0.02299\pm0.00837$\\
SQUAREM-aEM&$\mathbf{0.00291\pm0.00050}$&$0.00269\pm0.00029$&$0.00441\pm0.00065$&$0.00913\pm0.00203$&$0.00795\pm0.00621$\\
SQUAREM-MMF&$0.00308\pm0.00059$&$\mathbf{0.00268\pm0.00035}$&$\mathbf{0.00270\pm0.00041}$&$\mathbf{0.00373\pm0.00041}$&$0.00474\pm0.00184$\\
SQUAREM-GMMF&$0.00569\pm0.00129$&$0.00400\pm0.00040$&$0.00304\pm0.00042$&$0.00375\pm0.00046$&$\mathbf{0.00420\pm0.00080}$\\
SQUAREM-ECME&$0.01153\pm0.00222$&$0.00722\pm0.00086$&$0.00717\pm0.00112$&$0.00761\pm0.00090$&$0.00727\pm0.00182$
\end{tabular}
\end{sideways}}
\end{center}
\caption{Average numbers of iterations (top) and average execution times (bottom), together with the corresponding standard deviations, for the different algorithms.}
\label{tab:performance2}
\end{table}
\subsection{Unsupervised Estimation of Noise Parameters} \label{sec:images}
Next, we provide an application in image analysis. To this aim, we consider images corrupted by one-dimensional Student-$t$ noise
with $\mu=0$ and unknown $\Sigma \equiv \sigma^2$ and $\nu$.
We provide a method for estimating $\nu$ and $\sigma$ in an unsupervised way.
The basic idea is to consider constant areas of an image,
where the signal-to-noise ratio is low and differences between pixel values
are solely caused by the noise.
\paragraph{Constant area detection:}
In order to detect constant regions in an image, we adopt an idea presented in~\cite{SDA15}.
It is based on Kendall's $\tau$-coefficient, which is a measure of rank correlation,
and the associated $z$-score, see~\cite{Ken38,Ken45}.
In the following, we briefly summarize the main ideas behind this approach.
For finding constant regions we proceed as follows: First, the image grid $\mathcal{G}$ is partitioned into $K$ small,
non-overlapping regions $\mathcal{G}= \bigcup_{k=1}^K R_k$, and for each region we consider the hypothesis testing problem
\begin{align}
\mathcal{H}_0&\colon R_k\text{ is constant}\qquad \text{vs.}\qquad
\mathcal{H}_1\colon R_k\text{ is not constant} \label{constant_test}.
\end{align}
To decide whether to reject $\mathcal{H}_0$ or not, we observe the following: Consider a fixed region $R_k$
and let $I, J\subseteq R_k$ be two disjoint subsets of $R_k$ with the same cardinality. Denote with $u_I$ and $u_J$ the vectors
containing the values of $u$ at the positions indexed by $I$ and $J$. Then, under $\mathcal{H}_0$, the vectors $u_I$ and $u_J$
are uncorrelated (in fact even independent) for all choices of $I, J\subseteq R_k$ with $I\cap J = \emptyset$ and $|I|=|J|$.
As a consequence, the rejection of $\mathcal{H}_0$ can be reformulated as the question whether we can find $I,J$ such that $u_I$ and $u_J$
are significantly correlated, since in this case there has to be some structure in the image region $R_k$
and it cannot be constant. Now, in order to quantify the correlation, we make use of Kendall's $\tau$-coefficient and the associated $z$-score.
The key
idea is to focus on the rank (i.e., on the relative order) of the values
rather than on the values themselves. In this vein, a block is considered homogeneous if
the ranking of the pixel values is uniformly distributed, regardless of the spatial arrangement of the pixels.
In the following, we assume that we have extracted two disjoint subsequences $x = u_I$ and $y = u_J$
from a region $R_k$ with $I$ and $J$ as above.
Let $(x_i,y_i)$ and $(x_j,y_j)$ be two pairs of observations. Then, the pairs are said to be
\begin{equation*}
\begin{cases}
\text{concordant} & \text{if } x_i<x_j \text{ and } y_i<y_j\\& \text{or }
x_i>x_j \text{ and } y_i>y_j,\\
\text{discordant} & \text{if } x_i<x_j \text{ and } y_i>y_j\\& \text{or } x_i>x_j \text{ and } y_i<y_j,\\
\text{tied} & \text{if } x_i=x_j \text{ or } y_i=y_j.
\end{cases}
\end{equation*}
Next, let $x,y\in \mathbb{R}^n$ be two sequences without tied pairs and let $n_c$ and $n_d$ be the number of concordant and discordant pairs, respectively.
Then, \emph{Kendall's $\tau$ coefficient}~\cite{Ken38} is defined as $\tau\colon \mathbb{R}^n\times \mathbb{R}^n\to [-1,1]$,
\begin{equation*}
\tau(x,y) = \frac{n_c - n_d}{\frac{n(n-1)}{2}}.
\end{equation*}
From this definition we see that if the agreement between the two rankings is perfect, i.e.\ the two rankings are identical,
then the coefficient attains its maximal value $1$. At the other extreme, if the disagreement between the two rankings is perfect,
that is, one ranking is the reverse of the other, then the coefficient attains its minimal value $-1$.
If the sequences $x$ and $y$ are uncorrelated, we expect the coefficient to be approximately zero. Denoting with $X$ and $Y$
the underlying random variables that generated the sequences $x$ and $y$, we have the following result,
whose proof can be found in~\cite{Ken38}.
\begin{Theorem}\label{Theo:tau_asymptotic}
Let $X$ and $Y$ be two sequences of random variables under $\mathcal{H}_0$ without tied pairs.
Then, the random variable $\tau(X,Y)$ has an expected value of 0 and a variance of $\frac{2(2n+5)}{9n(n-1)}$.
Moreover, for $n\to \infty$, the associated \emph{$z$-score} $z\colon \mathbb{R}^n\times \mathbb{R}^n\to \mathbb{R}$,
\begin{align*}
z(x,y) = \frac{3\sqrt{n(n-1)}}{\sqrt{2(2n+5)}}\tau(x,y)=\frac{3\sqrt{2}(n_c - n_d)}{\sqrt{n(n-1)(2n+5)}}
\end{align*}
is asymptotically standard normal distributed,
\begin{equation*}
z(X,Y)\overset{n\to \infty}{\sim}\mathcal{N}(0,1).
\end{equation*}
\end{Theorem}
With a slight adaptation, Kendall's $\tau$ coefficient can be generalized to sequences with tied pairs, see~\cite{Ken45}.
As a consequence of Theorem~\ref{Theo:tau_asymptotic}, for a given significance level $\alpha\in (0,1)$,
we can use the quantiles of the standard normal distribution to decide whether to reject $\mathcal{H}_0$ or not.
In practice, we cannot test any kind of region and any kind of disjoint sequences.
As in~\cite{SDA15}, we restrict our attention to quadratic regions and pairwise comparisons of neighboring pixels.
We use four kinds of neighboring relations (horizontal,
vertical, and two diagonal neighbors) and thus perform four tests in total. We reject the hypothesis $\mathcal{H}_0$
that the region is constant as soon as one of the four tests rejects it.
Note that by doing so, the final significance level is smaller than the initially chosen one.
We start with blocks of size $64\times 64$
whose side-length is incrementally decreased until enough constant areas are found.
\\
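For illustration, the homogeneity test above can be sketched in a few lines of Python; the function names and the significance threshold of $1.96$ (two-sided $5\%$ level) are our own choices, not part of the original procedure.

```python
import math

def kendall_z(x, y):
    """z-score of Kendall's tau for two equally long sequences
    without tied pairs (cf. the asymptotic normality theorem)."""
    n = len(x)
    nc = nd = 0  # concordant / discordant pair counts
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                nc += 1
            elif s < 0:
                nd += 1
    # z = 3*sqrt(2)*(nc - nd) / sqrt(n*(n-1)*(2n+5))
    return 3.0 * math.sqrt(2.0) * (nc - nd) / math.sqrt(n * (n - 1) * (2 * n + 5))

def looks_constant(x, y, z_crit=1.96):
    """Keep H0 (region is constant) if |z| stays below the
    standard-normal quantile, here 1.96 for a two-sided 5% test."""
    return abs(kendall_z(x, y)) < z_crit
```

In the full procedure this test is run four times per block (horizontal, vertical, and two diagonal neighbor pairings), and $\mathcal{H}_0$ is rejected as soon as one of the four tests rejects it.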
\paragraph{Parameter estimation.}
In each constant region we consider the pixel values in the region as i.i.d.\
samples of a univariate Student-$t$ distribution $T_\nu(\mu,\sigma^2)$,
where we estimate the parameters using Algorithm~\ref{alg:MMF}.
After estimating the parameters in each found constant region,
the estimated location parameters $\mu$ are discarded,
while the estimated scale parameters $\sigma$ and degrees of freedom $\nu$
are averaged to obtain the final estimate of the global noise parameters.
At this point, since both $\nu$ and $\sigma$ influence the resulting distribution in a multiplicative way,
one might use a geometric mean instead of an arithmetic one, as it is slightly less affected by outliers.
In Figure~\ref{Fig:constant_area} we illustrate this procedure for two different noise scenarios.
The left column in each figure depicts the detected constant areas.
The middle and right columns show histograms of the estimated values for $\nu$ and $\sigma$, respectively.
For the constant area detection we use the code of~\cite{SDA15}\footnote{\url{https://github.com/csutour/RNLF}}.
The true parameters used to generate the noisy images were $\nu=1$ and $\sigma = 10$ for the top row and $\nu=5$ and $\sigma = 10$
for the bottom row, while the obtained estimates are (geometric mean in brackets)
$\hat{\nu} = 1.0437$ ($1.0291$)
and
$\hat{\sigma}= 10.3845$ ($10.3111$) for the top row and $\hat{\nu}= 5.4140$ ($5.0423$) and $\hat{\sigma}=10.5500$ ($10.1897$) for the bottom row.
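The pooling step just described amounts to the following minimal sketch (the function name is ours):

```python
import math

def pool_estimates(values, geometric=True):
    """Combine per-region parameter estimates into one global value.
    The geometric mean is slightly more robust to outlier regions."""
    if geometric:
        return math.exp(sum(math.log(v) for v in values) / len(values))
    return sum(values) / len(values)
```

For instance, with the two estimates $\{1, 100\}$ the arithmetic mean is $50.5$, while the geometric mean is $10$, illustrating the damping of outliers.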
\begin{figure}[htp]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=0.98\textwidth]{images/hom_areas_1.pdf}
\caption{Noisy image with detected homogeneous areas.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{images/hist_nu_1.pdf}
\caption{Histogram of estimates for $\nu$.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{images/hist_sigma_1.pdf}
\caption{Histogram of estimates for $\sigma^2$.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=0.98\textwidth]{images/hom_areas_5.pdf}
\caption{Noisy image with detected homogeneous areas.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{images/hist_nu_5.pdf}
\caption{Histogram of estimates for $\nu$.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{images/hist_sigma_5.pdf}
\caption{Histogram of estimates for $\sigma^2$.}
\end{subfigure}
\caption[]{Unsupervised estimation of the noise parameters $\nu$ and $\sigma^2$. }\label{Fig:constant_area}
\end{figure}
A further example is given in Figure \ref{Fig:constant_area_muehle}. Here, the obtained estimates are (geometric mean in brackets)
$\hat{\nu} = 1.0075$ ($0.99799$)
and
$\hat{\sigma}= 10.2969$ ($10.1508$) for the top row and $\hat{\nu}= 5.4184$ ($5.1255$) and $\hat{\sigma}=10.2295$ ($10.1669$) for the bottom row.
\begin{figure}[htp]
\centering
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[height=4.7cm]{images/hom_areas_muehle1}
\caption*{Noisy image with detected homogeneous areas.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[height=4.7cm]{images/hist_nu_muehle1}
\caption*{Histogram of estimates for $\nu$.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[height=4.7cm]{images/hist_sigma_muehle1}
\caption*{Histogram of estimates for $\sigma^2$.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[height=4.7cm]{images/hom_areas_muehle5}
\caption*{Noisy image with detected homogeneous areas.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[height=4.7cm]{images/hist_nu_muehle5}
\caption*{Histogram of estimates for $\nu$.}
\end{subfigure}
\begin{subfigure}[t]{0.32\textwidth}
\centering
\includegraphics[height=4.7cm]{images/hist_sigma_muehle5}
\caption*{Histogram of estimates for $\sigma^2$.}
\end{subfigure}
\caption[]{Unsupervised estimation of the noise parameters $\nu$ and $\sigma^2$. }\label{Fig:constant_area_muehle}
\end{figure}
\section{Introduction}
Turbulence is ubiquitous in astrophysical plasmas in both local and high-redshift universe
\citep{BraL14}.
It accompanies the large scale structure formation
and amplifies cosmic magnetic fields
\citep{Ryu08}.
It influences multi-scale diverse astrophysical processes, such as star formation
\citep{Mckee_Ostriker2007},
cosmic ray propagation
\citep{XY13},
magnetic reconnection and particle acceleration
\citep{Zh11,LaR20}.
The fundamental problem of turbulence is turbulent statistics
\citep{Chan49}.
The statistical studies of astrophysical turbulence greatly benefit from the
recent development of turbulence measurement techniques, including, e.g.,
the principal component analysis
\citep{Hey97},
Velocity Channel Analysis
\citep{LP00},
Velocity Coordinate Spectrum
\citep{LP06},
core velocity dispersion
\citep{Qi12},
polarization variance analysis
and polarization spatial analysis
\citep{LP16},
velocity gradient technique
\citep{Yu17}.
Statistical measurements of velocity field
\citep{Chep10, Xu20,Li20},
density field
\citep{Armstrong95,Burk09,CheL10},
magnetic field
\citep{Han04,Gae11},
and other observables associated with turbulence
\citep{XZ16,XuZ17}
reveal both the properties and important roles of turbulence in
the interstellar medium (ISM) and intracluster medium (ICM).
Turbulence in the intergalactic medium (IGM) is closely related to the formation of large scale structure in the universe.
For the turbulence of non-primordial origin,
the possible driving mechanisms include cosmological shocks in filaments
\citep{Ryu08}
and supernovae-driven galactic outflows
\citep{Evo11}.
The intergalactic turbulence significantly affects
the dynamics of baryon fluid, galaxy-IGM interplay, amplification of magnetic fields, and enrichment of metals in the IGM through cosmic time
\citep{Evo10}.
Despite the observational and numerical evidence indicating the presence of intergalactic turbulence
(e.g., \citealt{Rau01,Lap11}),
unlike the turbulence in the ISM and ICM,
the statistical properties of intergalactic turbulence are poorly constrained by observations, as
the detection and measurements of the tenuous IGM are very challenging.
Moreover, the statistical analysis of the large-scale intergalactic turbulence is infeasible
with current computational resources
\citep{Lap11}.
Transient extragalactic radio bursts, such as fast radio bursts (FRBs),
have their dispersion measures (DMs) dominated by the contribution of the IGM
\citep{Lor07,Tho13,Pat16}
and are powerful probes of the intergalactic turbulence
\citep{Macq13,XZ16,Rav16}.
Besides the scattering effect that causes temporal broadening of individual FRBs
\citep{Macq13,Zhu18},
density fluctuations induced by intergalactic turbulence can also give rise to fluctuations in DMs of different FRBs.
Similar to using Galactic pulsars to sample the interstellar turbulence
\citep{Armstrong95,XuZ17},
we can also use a substantial population of FRBs to sample the intergalactic turbulence.
With a range of separations between sight lines through the IGM,
FRBs can provide the measurement on the scale-dependent DM fluctuations induced by the multi-scale intergalactic turbulence.
For the first time, we perform a statistical measurement of the intergalactic turbulence by using a population of FRBs.
In this Letter,
we apply the statistical method developed by
\citet{LP16}
(hereafter LP16)
for extended sources to point sources.
The same statistical approach can also be used for, e.g., other extragalactic point sources
\citep{XuZ16},
molecular cloud cores
\citep{Xu20},
and Galactic pulsars, to study the fluctuations of observables in various media
and the associated astrophysical processes.
The basic formalism of the statistical method is presented in \S 2.
In \S 3, we compare the measured structure function of DMs of FRBs with our theoretical expectation.
Discussion and conclusions are given in \S 4.
\section{Structure function analysis of DMs}
\label{sec: sfdm}
In a turbulent medium,
we consider that the correlation function (CF) of electron density fluctuations $\delta n_e$ follows the
power-law scaling,
\begin{equation}\label{eq: fopwcf}
\begin{aligned}
\xi(R,\Delta l) &= \langle \delta n_e(\bm{X_1},l_1) \delta n_e (\bm{X_2},l_2)\rangle \\
&= \langle \delta n_e^2 \rangle \frac{L_i^m}{L_i^m + (R^2 + \Delta l^2)^\frac{m}{2}} ,
\end{aligned}
\end{equation}
where $\bm{X}$ is the 2D position of the source on the sky plane, $l$ is the distance along the line of sight (LOS),
$R = |\bm{X_1} - \bm{X_2}|$ is the projected separation between sources,
$\Delta l = l_1 - l_2$,
and the angle brackets denote an ensemble average.
$R$ can be converted to the angular separation $\theta$ by $\theta = R / L$.
Here $L$ is the size of the turbulent medium that extends from the observer to a distance $L$.
The above power-law form of CF is commonly used for describing fluctuations in observables induced by turbulence
(LP16; \citealt{XuZ16,Xu20}).
The correlation length $L_i$ and the power-law index $m$ characterize the statistical properties of turbulence.
$m$ is related to the 3D power-law index of a turbulent spectrum $\alpha$ by
\begin{equation}
\alpha = -m-3.
\end{equation}
We note that for Kolmogorov turbulence, $m=2/3$ and $\alpha = -11/3$.
To calculate the structure function (SF) of dispersion measures
$\text{DM} = \int n_e dl $,
where $n_e$ is the electron density,
we consider two cases with
(1) a single thin turbulent screen between the sources and the observer with the screen thickness much smaller than
the distances of the sources from the observer
(Fig. \ref{fig: sketa}),
and (2) a turbulent volume along the entire LOS containing both the sources and the observer (Fig. \ref{fig: sketb}).
In the former case, only the components of DMs from the turbulent screen are correlated.
Case (1): a thin turbulent screen.~
In this case, the SF of DMs is
\begin{equation}
\begin{aligned}
D(R) &= \langle [\text{DM} (\bm{X_1}) - \text{DM} (\bm{X_2})]^2 \rangle \\
& = \Big\langle \Big[ \int_0^L dl n_e (\bm{X_1},l) - \int_0^L dl n_e (\bm{X_2}, l) \Big]^2 \Big\rangle \\
& = 4 \langle \delta n_e^2 \rangle \int_0^L d\Delta l (L-\Delta l) \\
& ~~~~~~ \Bigg[ \frac{L_i^m}{L_i^m + \Delta l^m} - \frac{L_i^m}{L_i^m + (R^2 + \Delta l^2)^\frac{m}{2}} \Bigg] ,
\end{aligned}
\end{equation}
where the expression in Eq. \eqref{eq: fopwcf} is used.
When the thickness of the turbulent screen $L$ is larger than $L_i$, it has asymptotic scalings in different regimes
(LP16),
\begin{subnumcases}
{ D(R) \approx \label{eq: drthc} }
4 \langle \delta n_e^2 \rangle L_i^{-m} L R^{m+1}, ~~~~~~~~R<L_i, \label{eq: steiner}\\
4 \langle \delta n_e^2 \rangle L_i^m L R^{-m+1} ,~~~~ L_i<R<L,\\
4 \langle \delta n_e^2 \rangle L_i^m L^{-m+2}, ~~~~~~~~~~~ R > L.
\end{subnumcases}
For a steep turbulent spectrum dominated by large-scale turbulent fluctuations
\citep{LP04}
with $\alpha < -3$, e.g., Kolmogorov turbulence,
$L_i$ is the outer scale of density fluctuations, and only
Eq. \eqref{eq: steiner} is applicable.
We then have
\begin{subnumcases}
{ D(R) \approx \label{eq: drsscst}}
4 \langle \delta n_e^2 \rangle L_i^{-m} L R^{m+1}, ~~~~~~~~R<L_i, \\
4 \langle \delta n_e^2 \rangle L_i L , ~~~~~~~~~~~~~~~~~~~~~~R>L_i.
\end{subnumcases}
The dependence on $R$ is seen when $R$ is in the inertial range of turbulence ($< L_i$).
At $R > L_i$, DMs become uncorrelated, and $D(R)$ remains constant.
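For a steep spectrum, the asymptotic form above can be encoded as a simple piecewise model (a sketch; variable names are ours):

```python
def sf_thin_screen(R, dn2, L_i, L, m):
    """Asymptotic SF of DMs for a thin turbulent screen of thickness L
    with a steep spectrum: D ~ R^(m+1) below the outer scale L_i,
    then a plateau 4 <dn_e^2> L_i L above it."""
    if R < L_i:
        return 4.0 * dn2 * L_i**(-m) * L * R**(m + 1)
    return 4.0 * dn2 * L_i * L
```

The two branches match continuously at $R = L_i$, and the sub-$L_i$ branch scales as $R^{m+1}$ ($R^{5/3}$ for Kolmogorov turbulence).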
Case (2): a turbulent volume along the entire LOS.~
In a different case with both the sources and the observer within the same turbulent volume,
the SF of DMs is
\begin{align}
D(R,l_1,l_2) & = \langle [\text{DM} (\bm{X_1},l_1) - \text{DM} (\bm{X_2},l_2)]^2 \rangle \nonumber\\
& = \Big\langle \Big[ \int_0^{l_1} dl n_e (\bm{X_1},l) - \int_0^{l_2} dl n_e (\bm{X_2}, l) \Big]^2 \Big\rangle \nonumber\\
& \approx 2 D^+(R,l_+) + \frac{1}{2} \Lambda (\Delta l)^2 . \label{eq: tdzzd}
\end{align}
Compared with Case (1) with a localized thin turbulent screen,
the LOS integral here is not limited by the screen thickness,
but is taken over the entire path from the observer to the source.
The difference between the distances of sources $\Delta l $ only enters the second term.
The dependence on $R$ appears in the first term
(LP16),
\begin{equation}
\begin{aligned}
D^+(R,l_+) & = 2 \langle \delta n_e^2 \rangle \int_0^{l_+} d\Delta l (l_+ -\Delta l) \\
& ~~~~~~ \Bigg[ \frac{L_i^m}{L_i^m + \Delta l^m} - \frac{L_i^m}{L_i^m + (R^2 + \Delta l^2)^\frac{m}{2}} \Bigg] ,
\end{aligned}
\end{equation}
where $l_+ = (l_1 + l_2)/2$.
If we consider distant sources from the observer with $l_+ > L_i $, then we can reach
\begin{subnumcases}
{ D^+(R,l_+) \approx }
2 \langle \delta n_e^2 \rangle L_i^{-m} l_+ R^{m+1}, ~~~~~~R<L_i,\\
2 \langle \delta n_e^2 \rangle L_i^m l_+ R^{-m+1} , L_i<R<l_+,\\
2 \langle \delta n_e^2 \rangle L_i^m l_+^{-m+2}, ~~~~~~~~~~~ R > l_+,
\end{subnumcases}
which is similar to Eq. \eqref{eq: drthc} but $L$ is replaced by $l_+$.
For the second term of $D(R,l_1,l_2)$ in Eq. \eqref{eq: tdzzd},
the coefficient $\Lambda$ (LP16) can be simplified to
\begin{align}
\Lambda &= \xi(0,l_+) - \xi(R,l_+) + 2\xi(R,0) \nonumber\\
&= \langle \delta n_e^2 \rangle \Bigg[ \frac{L_i^m}{L_i^m + l_+^m} -
\frac{L_i^m}{L_i^m + (R^2 + l_+^2)^\frac{m}{2}} \nonumber\\
& ~~~~ + 2 \frac{L_i^m}{L_i^m + R^m } \Bigg ], \nonumber\\
& \approx
\begin{cases}
2 \langle \delta n_e^2 \rangle , ~~~ R< L_i \\
0, ~~~~~~~~~~~~ R > L_i ,
\end{cases}
\end{align}
where the expression in Eq. \eqref{eq: fopwcf} is used.
We again consider a steep turbulent spectrum with $\alpha < -3$. Based on the above expressions, we now approximately have
\begin{subnumcases}
{ D(R,l_1,l_2) \approx \label{eq: extdsf} }
4 \langle \delta n_e^2 \rangle L_i^{-m} R^{m+1} l_+ + \langle \delta n_e^2 \rangle (\Delta l)^2, \nonumber \\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~R<L_i,\\
4 \langle \delta n_e^2 \rangle L_i l_++ \langle \delta n_e^2 \rangle (\Delta l)^2, \nonumber \\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~R > L_i .
\end{subnumcases}
The quantities related to the distances of sources, i.e., $l_+$, $\Delta l$, do not distort the power-law scaling of $D(R,l_1,l_2)$
with $R$.
Next, by averaging over $l_+$ and $\Delta l$, we obtain
\begin{align}
D(R)
& = \frac{1}{2L} \int_{-L}^L \frac{d \Delta l}{L-\Delta l} \int_{|\Delta l| /2 }^{L-|\Delta l| /2} d l_+ D(R,l_1,l_2) \label{eq: fadazf}\\
& \approx
\begin{cases}
2 \langle \delta n_e^2 \rangle L_i^{-m} L R^{m+1} +\frac{1}{3} \langle \delta n_e^2 \rangle L^2 , ~~~ R< L_i \label{eq: resavfr}\\
2 \langle \delta n_e^2 \rangle L_i L +\frac{1}{3} \langle \delta n_e^2 \rangle L^2 , ~~~~~~~~~~~~~~~~~ R > L_i .
\end{cases}
\end{align}
It has a similar form as Eq. \eqref{eq: drsscst}, but here
$L$ is the length of the entire turbulent volume along the LOS containing both the sources and the observer.
Besides, the extra second term at $R < L_i$ arises from the different distances of sources,
which adds ``noise" to the scaling of $D(R)$ with $R$
revealed by the first term.
In Eq. \eqref{eq: fadazf},
we assume that the distance differences can range from $0$ to $L$, but in fact for distant sources from the observer under consideration,
they mainly occupy a subvolume
within the range of distances $[L_0, L]$, where $L_0 > L_i$.
Therefore, $D(R)$ should be adjusted as
\begin{align}
D(R) &= \frac{1}{2(L-L_0)} \int_{-L+L_0}^{L-L_0} d \Delta l \\
& ~~~~~ \frac{1}{L-L_0 -\Delta l} \int_{L_0+|\Delta l| /2 }^{L-|\Delta l| /2} d l_+ D(R,l_1,l_2) \\
& \approx
\begin{cases}
2 \langle \delta n_e^2 \rangle L_i^{-m} (L+L_0) R^{m+1} +\frac{1}{3} \langle \delta n_e^2 \rangle (L-L_0)^2 , \\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ R< L_i \label{eq: disingd}\\
2 \langle \delta n_e^2 \rangle L_i (L+L_0) +\frac{1}{3} \langle \delta n_e^2 \rangle (L-L_0)^2 , \\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ R > L_i .
\end{cases}
\end{align}
As a result, compared with Eq. \eqref{eq: resavfr}, we see an
increase of the first term by a factor of $(1+ L_0/L)$ and
a decrease of the second term by a factor of $(L-L_0)^2 / L^2$,
leading to a significantly reduced level of ``noise".
\begin{figure}[htbp]
\centering
\subfigure[Case (1)]{
\includegraphics[width=4cm]{sketgla.jpg}\label{fig: sketa}}
\subfigure[Case (2)]{
\includegraphics[width=4cm]{sketext.jpg}\label{fig: sketb}}
\caption{ Sketches of (a) a thin turbulent screen between the sources (FRBs) and the observer and
(b) a turbulent volume along the entire LOS containing both the sources and the observer.
The open circles indicate the 2D positions of FRBs projected on the sky plane. }
\label{fig: sket}
\end{figure}
\section{SF of DMs of FRBs}
Using the most up-to-date published population of FRBs
\citep{Pat16}
\footnote{http://www.frbcat.org},
we calculate the SF of their measured total DMs as
\begin{equation}
D (\theta) = \langle (\text{DM} (\bm{X_1}) - \text{DM} (\bm{X_2}))^2 \rangle,
\end{equation}
which is the average value of the squared DM differences of all pairs of FRBs at a given angular separation.
Here $\bm{X}$ is the projected position of an FRB on the sky plane, $\theta$ is the angular separation between projected positions,
and the angle brackets denote the spatial average at a fixed $\theta$.
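In practice this estimator can be computed directly from the catalog positions and DMs; a minimal numpy sketch (the binning choices and function name are ours):

```python
import numpy as np

def dm_structure_function(lon_deg, lat_deg, dm, bin_edges_deg):
    """Binned SF D(theta): mean squared DM difference over all source
    pairs whose angular separation falls in each bin."""
    lon = np.radians(np.asarray(lon_deg, dtype=float))
    lat = np.radians(np.asarray(lat_deg, dtype=float))
    dm = np.asarray(dm, dtype=float)
    i, j = np.triu_indices(len(dm), k=1)          # all unordered pairs
    # great-circle separation via the spherical law of cosines
    cosd = (np.sin(lat[i]) * np.sin(lat[j])
            + np.cos(lat[i]) * np.cos(lat[j]) * np.cos(lon[i] - lon[j]))
    theta = np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0)))
    dsq = (dm[i] - dm[j]) ** 2
    idx = np.digitize(theta, bin_edges_deg)
    return np.array([dsq[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(1, len(bin_edges_deg))])
```

Empty bins are returned as NaN; confidence intervals on each bin (as in the figures below) would require a bootstrap over pairs, which we omit here.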
From the sky distribution of FRBs with measured DMs shown in Fig. \ref{fig: map},
we see that they sample the turbulent fluctuations along the LOS in different directions.
So our measurement is unlikely to be biased toward the turbulent structure in any particular direction.
The result for the SF is displayed in Fig. \ref{fig: sf1}, where the error bars show $95\%$ confidence intervals.
The error bars are larger toward small $\theta$ because fewer pairs of FRBs are available at small $\theta$.
Based on the above analysis, we use a function
\begin{equation}\label{eq: fitfun}
D (\theta \lesssim 13.8^\circ) [\text{pc$^2$ cm$^{-6}$}] = \alpha (\theta [^\circ] ) ^ \beta + \gamma
\end{equation}
to fit the data points at small $\theta$.
We find that for the best least-squares fit, there are
\begin{equation}\label{eq: fitpa}
\begin{aligned}
& \alpha = 8595 \pm 1.03\times10^4, \\
& \beta = 1.68 \pm 0.44, \\
& \gamma = 5.13\times10^4 \pm 5.87\times10^4,
\end{aligned}
\end{equation}
where the uncertainties are given at $68\%$ confidence.
$D(\theta)$ saturates and basically remains constant at $\theta > 13.8^\circ$.
\begin{figure}[htbp]
\centering
\includegraphics[width=9.5cm]{mapnew.jpg}
\caption{ FRBs with measured DMs on the sky in Galactic coordinates.
The circle size scales with DM.
The color coding gives the DM values. }
\label{fig: map}
\end{figure}
\begin{figure*}[htbp]
\centering
\subfigure[]{
\includegraphics[width=8.5cm]{sf1new.jpg}\label{fig: sf1}}
\subfigure[]{
\includegraphics[width=8.5cm]{dmesfnew.jpg}\label{fig: sfe}}
\caption{ (a) $D (\theta)$ vs. $\theta$ for 112 FRBs. Error bars indicate $95\%$ confidence intervals.
The dashed line is the fit to the data points at small $\theta$ with the fitting function and parameters given by Eqs. \eqref{eq: fitfun} and \eqref{eq: fitpa}.
(b) Same as (a) but for $D_E (\theta)$.
The dashed line shows the fit (Eq. \eqref{eq: fitfun}) with
$\alpha = 1.13\times10^4 \pm 1.31\times10^4$,
$\beta = 1.60 \pm 0.43$,
and $\gamma = 4.17\times10^4 \pm 6.50\times10^4$,
where the uncertainties are given at 68$\%$ confidence. }
\label{fig: sf}
\end{figure*}
The SF of DMs of FRBs at cosmological distances can be decomposed into its Galactic component $D_G$
and extragalactic component $D_E$,
\begin{equation}\label{eq: gendedg}
\begin{aligned}
&~~~~~~D(R) \\
&= \langle [\text{DM}_G (\bm{X_1}) + \text{DM}_E(\bm{X_1}) - \text{DM}_G (\bm{X_2}) - \text{DM}_E (\bm{X_2})]^2 \rangle \\
& = \underbrace{\langle [\text{DM}_G (\bm{X_1}) - \text{DM}_G (\bm{X_2}) ]^2 \rangle}_{D_G} \\
& ~~~~ + \underbrace{\langle [\text{DM}_E (\bm{X_1}) - \text{DM}_E (\bm{X_2}) ]^2 \rangle}_{D_E} , \\
\end{aligned}
\end{equation}
where $\text{DM}_G$ and $\text{DM}_E$ are the Galactic and extragalactic components of the total DM, respectively.
We next consider two different cases with the power-law behavior of $D(\theta)$ dominated by
(1) the Galactic ISM,
or (2) the IGM.
(1) The Galactic ISM.~
If the DMs$_E$ toward different FRBs are uncorrelated, then $D_E$ is independent of $R$.
We can write Eq. \eqref{eq: gendedg} as
\begin{equation}
D(R) = D_G(R) + C,
\end{equation}
where $C$ is a constant representing $D_E$.
In this situation, Case (1) in Section \ref{sec: sfdm} applies, and
our Galaxy acts as a thin turbulent screen with the thickness $L$ given by the average path length through the Galactic ISM.
We consider that the Galactic interstellar turbulence has
a steep power-law spectrum
and its driving scale is much smaller than $L$
\citep{Armstrong95,CheL10,Chep10}.
Accordingly, $D_G$ can be described by Eq. \eqref{eq: drsscst}, and thus there is
\begin{subnumcases}
{ D(\theta) \approx }
4 \langle \delta n_e^2 \rangle L_i^{-m} L^{m+2} \theta^{m+1} + C , \theta<L_i/L, \label{eq: galadmi}\\
4 \langle \delta n_e^2 \rangle L_i L + C , ~~~~~~~~~~~~~~~~~~~~\theta>L_i/L,
\end{subnumcases}
where we use $\theta = R/L$ as the angular separation corresponding to $R$.
We compare Eq. \eqref{eq: galadmi} with the fit to the measured $D(\theta)$ in Eq. \eqref{eq: fitfun}.
To explain the observations, there should be
\begin{equation}
\begin{aligned}
& m+1 = 1.68, \\
& 4 \langle \delta n_e^2 \rangle L_i^{-m} L^{m+2} \Big(\frac{\pi}{180}\Big)^{m+1} = 8595, \\
& C = 5.13\times10^4 , \\
& \frac{L_i}{L} \approx 0.24.
\end{aligned}
\end{equation}
From the above constraints one can easily get
\begin{equation}
\langle \delta n_e^2 \rangle L^2 [\text{pc$^2$ cm$^{-6}$}] = 7.35\times10^5.
\end{equation}
It requires that the typical DM$_G$ of an FRB is
\begin{equation}
\text{DM}_G [\text{pc cm$^{-3}$}] \approx n_e L \sim \sqrt{\langle \delta n_e^2 \rangle} L = 857.
\end{equation}
Obviously, this value is much larger than those of pulsars in the high Galactic latitude region where most FRBs were detected
\citep{Cord19}.
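As a back-of-the-envelope cross-check of these numbers, the following script (ours) simply re-evaluates the algebra above with the fitted values $m+1=1.68$, amplitude $8595$, and $L_i/L\approx0.24$:

```python
import math

# Fit results (Eq. fitpa): D(theta) = alpha * theta^beta + gamma, with
# beta = m + 1 = 1.68, alpha = 8595 pc^2 cm^-6 deg^-beta, and the
# break at theta ~ 13.8 deg giving L_i / L ~ 0.24.
m = 1.68 - 1.0
alpha = 8595.0
ratio = 0.24                       # L_i / L
deg = math.pi / 180.0              # radians per degree

# 4 <dn_e^2> L_i^{-m} L^{m+2} (pi/180)^{m+1} = alpha  with  L_i = ratio * L
# =>  <dn_e^2> L^2 = alpha / (4 ratio^{-m} (pi/180)^{m+1})
dn2_L2 = alpha / (4.0 * ratio**(-m) * deg**(m + 1.0))   # pc^2 cm^-6
dm_gal = math.sqrt(dn2_L2)                              # ~ sqrt(<dn_e^2>) L
print(dn2_L2, dm_gal)   # roughly 7.3e5 pc^2 cm^-6 and 857 pc cm^-3
```

This reproduces $\langle \delta n_e^2 \rangle L^2 \approx 7.35\times10^5$ pc$^2$ cm$^{-6}$ and the implied DM$_G \approx 857$ pc cm$^{-3}$ quoted above.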
In fact, the IGM is believed to be the dominant source of dispersion for most FRBs
\citep{Iok03,Ino04,Lor07,Tho13}.
In Fig. \ref{fig: sfe}, we present the SF of DMs$_E$, where
$\text{DM}_E = \text{DM} - \text{DM}_G$, and DM$_G$ is estimated based on the
NE2001 Galactic electron density model
\citep{Cor02,Pat16}.
\footnote{Here we exclude the source with DM$_G > $ DM. }
The difference between Fig. \ref{fig: sf1} and Fig. \ref{fig: sfe} is marginal, which confirms the negligible Galactic contribution to
$D(\theta)$.
(2) The IGM. ~
If the DMs$_E$ are correlated so that $D_E$ is a function of $R$,
then $D(R)$ mainly reflects the statistical properties of the intergalactic turbulence given $D_E \gg D_G$.
By probing the intergalactic turbulence along the entire LOS, we are dealing with Case (2) in Section \ref{sec: sfdm}.
Hence we approximately have
\begin{align}
D(\theta) & \approx D_E(\theta) \\
& \approx
\begin{cases}
2 \langle \delta n_e^2 \rangle L_i^{-m} (L+L_0) L^{m+1} \theta^{m+1} \\
~~~~~~~~~~~~~~~~~~~+\frac{1}{3} \langle \delta n_e^2 \rangle (L-L_0)^2 ,
~~~~~~ \theta< L_i /L \label{eq: frbing}\\
2 \langle \delta n_e^2 \rangle L_i (L+L_0) +\frac{1}{3} \langle \delta n_e^2 \rangle (L-L_0)^2 , \\
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \theta > L_i /L,
\end{cases}
\end{align}
where $\theta = R/L$ and $L$ is the depth of the intergalactic turbulent volume that FRBs sample.
Here we use Eq. \eqref{eq: disingd} under the consideration that FRBs are distant sources from the observer and the distances of most FRBs are
larger than $L_0$, which can be constrained by the observational result (see below).
Similar to the earlier analysis,
the comparison between Eq. \eqref{eq: frbing} and the fit to data (Eqs. \eqref{eq: fitfun} and \eqref{eq: fitpa}) leads to
\begin{equation}
\begin{aligned}
& m+1 = 1.68, \\
& 2 \langle \delta n_e^2 \rangle L_i^{-m} (L+L_0) L^{m+1} \Big(\frac{\pi}{180}\Big)^{m+1} = 8595 ,\\
& \frac{1}{3} \langle \delta n_e^2 \rangle (L-L_0)^2 = 5.13\times10^4, \\
& \frac{L_i}{L} \approx 0.24.
\end{aligned}
\end{equation}
From these relations we obtain
\begin{align}
& m = 0.68, \label{eq: turpinx}\\
& \frac{L_0}{L} \approx 0.59,\\
& \frac{L_i}{L} \approx 0.24. \label{eq: corrtur}
\end{align}
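These values follow from the constraints by elementary algebra; a short script (ours) reproducing the quadratic solution for $L_0/L$:

```python
import math

m, ratio = 0.68, 0.24                       # m and L_i / L from the fit
deg = math.pi / 180.0
# u = <dn_e^2> L^2,  a = L_0 / L.  The two amplitude constraints read
#   2 u (1 + a) ratio^{-m} deg^{m+1} = 8595   and   u (1 - a)^2 / 3 = 5.13e4
k1 = 8595.0 / (2.0 * ratio**(-m) * deg**(m + 1.0))   # = u * (1 + a)
k2 = 3.0 * 5.13e4                                    # = u * (1 - a)^2
r = k2 / k1
# (1 - a)^2 = r (1 + a)  =>  a^2 - (2 + r) a + (1 - r) = 0
a = ((2.0 + r) - math.sqrt((2.0 + r)**2 - 4.0 * (1.0 - r))) / 2.0
print(a)   # ~0.59, i.e. L_0 / L ~ 0.59
```

The quadratic has a second root larger than unity, which is unphysical and discarded.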
Eq. \eqref{eq: turpinx} indicates that the intergalactic turbulence follows the Kolmogorov scaling ($m=2/3$).
We note that the Kolmogorov scaling also applies to magnetized turbulence
\citep{GS95,LV99,CLV_incomp},
which would not be distorted by the presence of intergalactic magnetic fields
\citep{Ryu08}.
\begin{figure*}[htbp]
\centering
\subfigure[]{
\includegraphics[width=8.5cm]{pdfdme.jpg}\label{fig: pdf}}
\subfigure[]{
\includegraphics[width=8.5cm]{dmred.jpg}\label{fig: zdm}}
\caption{ (a) $\text{DM}_E$ distribution for the entire FRB sample. The thick solid line shows the
kernel density estimate of the distribution. The peak of the distribution at $\text{DM}_{Ep} = 306.3$ pc cm$^{-3}$
is indicated by the vertical dashed line.
(b) DM$_\text{IGM}$-$z$ relation (solid line).
The dashed line marks $z \approx 0.36$ corresponding to $\text{DM}_{Ep}$.}
\end{figure*}
By using Eq. \eqref{eq: corrtur}, we can also evaluate the driving scale of intergalactic turbulence,
which is about one order of magnitude smaller than $L$.
From the $\text{DM}_E$ distribution (see Fig. \ref{fig: pdf}), where we subtract DM$_G$ based on the NE2001 model (see above),
we find the peak at $\text{DM}_{Ep} \approx 306.3$ pc cm$^{-3}$.
The relation between the intergalactic component of DM, DM$_\text{IGM}$, and redshift $z$ was derived by
\citet{Deng14}.
Its numerical value
\citep{Zha18}
\begin{equation}
\text{DM}_\text{IGM} \approx 807~ \text{pc cm}^{-3} \int_0^z \frac{(1+z) dz}{ [\Omega_m (1+z)^3 + \Omega_\Lambda]^\frac{1}{2}}
\end{equation}
is shown in Fig. \ref{fig: zdm},
where $\Omega_m = 0.3089\pm 0.0062$
and $\Omega_\Lambda = 0.6911\pm0.0062$ are the matter density parameter and dark energy density parameter
\citep{Pla16}.
By assuming $\text{DM}_E \approx \text{DM}_\text{IGM} $, we see that
the redshift corresponding to $\text{DM}_{Ep}$ is approximately $0.36$.
The LOS comoving distance for $z = 0.36$ is
$1455$ Mpc.
We adopt $L = 1455$ Mpc as the size of the intergalactic turbulent volume sampled by most FRBs
and obtain
$L_i \approx 350$ Mpc
(Eq. \eqref{eq: corrtur}) as the estimated driving scale of intergalactic turbulence.
This is of the same order of magnitude as the scale of galaxy superclusters
\citep{Oort83},
indicative of a possible connection between the formation of superclusters and intergalactic turbulence.
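The inversion of the DM$_\text{IGM}$--$z$ relation and the resulting distance scales can be reproduced with a short numerical sketch. The Hubble constant is not quoted in the text; $H_0 = 67.74$ km s$^{-1}$ Mpc$^{-1}$ (the Planck 2015 value consistent with the $\Omega_m$, $\Omega_\Lambda$ above) is assumed here, and small differences from the quoted $z \approx 0.36$ and $L = 1455$ Mpc reflect rounding.

```python
import math

OMEGA_M, OMEGA_L = 0.3089, 0.6911   # Planck parameters quoted in the text
H0 = 67.74                          # km/s/Mpc, assumed (Planck 2015)
C_KMS = 299792.458                  # speed of light, km/s

def E(z):
    return math.sqrt(OMEGA_M * (1.0 + z)**3 + OMEGA_L)

def integral(f, z, n=2000):
    # simple trapezoidal quadrature on [0, z]
    h = z / n
    return h * (0.5 * (f(0.0) + f(z)) + sum(f(i * h) for i in range(1, n)))

def dm_igm(z):
    # DM_IGM ~ 807 pc cm^-3 * int_0^z (1+z') dz' / E(z')
    return 807.0 * integral(lambda zp: (1.0 + zp) / E(zp), z)

# invert DM_IGM(z_p) = DM_Ep = 306.3 pc cm^-3 by bisection
lo, hi = 0.0, 2.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if dm_igm(mid) < 306.3:
        lo = mid
    else:
        hi = mid
z_p = 0.5 * (lo + hi)

# LOS comoving distance at z_p, and the inferred driving scale
L = (C_KMS / H0) * integral(lambda zp: 1.0 / E(zp), z_p)
L_i = 0.24 * L   # Eq. (corrtur)
```

Running this gives $z_p \approx 0.35$--$0.36$, $L \approx 1.45$ Gpc and $L_i$ of order $350$ Mpc, consistent with the values adopted above.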
Our result can be treated as tentative evidence for Kolmogorov intergalactic turbulence up to scales
of the order of $100$ Mpc.
Upcoming observations of a larger population of FRBs will be used for further testing of this result.
\section{Conclusions}
Despite its astrophysical and cosmological significance,
the large-scale intergalactic turbulence and its statistical properties are poorly constrained by both observations and simulations.
FRBs, with their cosmological distances and isotropic sky distribution, can serve as unique probes
of the intergalactic turbulence.
This work further demonstrates the universality of turbulence in the universe
and provides information on the turbulence properties in the range of length scales beyond that of earlier measurements
(see Fig. \ref{fig: tursky}).
\begin{figure}[h!]
\centering
\includegraphics[width=8.5cm]{turscalnew.jpg}
\caption{ 3D power-law index $|\alpha|$ of turbulence vs. the range of length scales where the turbulent power spectrum is measured
in the Milky Way
\citep{Armstrong95,CheL10},
Hydra A galaxy cluster
\citep{Vog05},
the Coma galaxy cluster
\citep{Schue04},
and in the IGM taken from this work.
The shaded region indicates the uncertainty.
The dashed line marks the Kolmogorov index. }
\label{fig: tursky}
\end{figure}
The SF of DMs of FRBs provides a direct measurement of the multi-scale turbulent fluctuations in electron density in the turbulent volume
that FRB signals travel through.
As the FRB signal passes through its host galaxy, the IGM, and the Milky Way, its DM includes multiple components.
The resulting SF of DMs also contains the Galactic and extragalactic components.
The latter is mainly contributed by the IGM under the assumption of generally small host contributions to DMs
\citep{Sha18}.
The power-law behavior of SF at small angular separations
is expected from the energy cascade of turbulence in the inertial range.
As the turbulent fluctuations in different host galaxies are uncorrelated,
this power-law feature of SF can only come from either the Galactic interstellar turbulence or the intergalactic turbulence.
The SF saturates at large angular separations as the electron density fluctuations are uncorrelated on scales beyond the inertial range of turbulence.
It is well established and tested that the Galactic ISM is turbulent and the turbulence has a characteristic Kolmogorov power spectrum
in the warm ionized medium
\citep{Armstrong95,Han04,CheL10}.
A comparison between the observationally measured SF and a theoretically modeled SF dominated by the Galactic ISM shows that,
although the Kolmogorov power-law scaling can be explained,
the Galactic DMs are too small to account for the measured SF amplitude.
This is also confirmed by the minor difference between the SF of total DMs and that of extragalactic DMs with the Galactic contributions
subtracted based on the NE2001 model.
The large amplitude and power-law behavior of SF
lead to the conclusion that
the large and correlated DM fluctuations originate from the IGM.
The comparison with the measured SF
suggests that the intergalactic turbulence has
a Kolmogorov scaling
and a large driving scale on the order of $100$ Mpc corresponding to the transition angular separation where the SF saturates.
The Kolmogorov velocity spectrum of cosmological turbulence up to the scale of
superclusters ($\sim 100$ Mpc), which is the largest scale of inhomogeneities in the universe,
is suggested by some cosmological models
(e.g., \citealt{Oze78}).
However,
it is known that structure formation models involving primordial cosmic turbulence face some observational difficulties
\citep{Gol93},
and the role of hydrodynamics beyond the scales of galaxy clusters remains an unsolved problem.
The current measured SF especially at small angular separations
suffers from small source statistics and thus has a large uncertainty.
Future observational tests with a larger population of FRBs are necessary for further studying the intergalactic turbulence
and its cosmological implications for structure formation scenarios.
\acknowledgments
S.X. acknowledges the support for Program number HST-HF2-51400.001-A provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
\bibliographystyle{apj}
\label{intro}
In QCD the strong interaction among quarks depends on their color charge.
When quarks are placed in a medium, this color charge is screened due to density and temperature effects~\cite{Fukushima:2010bq}.
If the density and/or the temperature increases beyond a certain critical value, one expects that the interactions between quarks no longer confine them inside a hadron, so that they are free to travel longer distances and deconfine.
This transition from a confined to a deconfined phase is usually referred to as the deconfinement phase transition.
A separate phase transition takes place when the realization of chiral symmetry shifts from a Nambu-Goldstone phase to a Wigner-Weyl phase.
Based on lattice QCD evidence~\cite{Petreczky:2012rq}, one expects these two phase transitions to take place at approximately the same temperature at zero chemical potential.
At finite density these two transitions can arise at different critical temperatures.
The result would be a quarkyonic phase, where chiral symmetry is restored but the quarks and gluons remain confined.
In order to characterize the properties of these phase transitions it has been customary to study the behavior of corresponding order parameters as functions of the temperature $T$ and the baryon chemical potential $\mu$, namely the trace of the Polyakov loop (PL) $\Phi$ (deconfinement phase transition) and quark anti-quark chiral condensate $\langle\bar{\psi} \psi\rangle$ (chiral symmetry restoration), respectively.
Another important parameter in the discussion of these phase transitions is the role that an external magnetic field may play, inducing changes in the critical temperature, in the location of the critical end point, etc~\cite{Miransky:2015ava}.
However, in this work we will not refer to magnetic field effects, since the goal of our discussion is to compare the Polyakov loop order parameter with another QCD deconfinement parameter that has been introduced in the literature~\cite{Bochkarev:1986es} in the form of the squared energy threshold, $s_0(T)$, for the onset of perturbative QCD (PQCD) in hadronic spectral functions.
For a recent general review see Ref.~\cite{Ayala:2016vnt}.
Around this energy, and at zero temperature, the resonance peaks in the spectrum are either no longer present or become very broad.
The smooth hadronic spectral function thus approaches the PQCD regime.
With increasing temperature approaching the critical temperature for deconfinement, one would expect hadrons to disappear from the spectral function which should then be described entirely by PQCD.
When both $T$ and $\mu$ are nonzero, lattice QCD simulations cannot be used, because of the sign problem in the fermionic determinant.
Therefore, one needs to resort either to mathematical constructions to overcome the above limitation, or to model calculations.
The two deconfinement order parameters mentioned before: the trace of the PL ($\Phi$) and the continuum threshold ($s_0$) can be used to realize a phenomenological description of the deconfinement transition at finite temperature and density.
The natural framework to determine $s_0$ has been that of QCD sum rules~\cite{QCDSRreview}.
This quantum field theory framework is based on the operator product expansion (OPE) of current correlators at short distances, extended beyond perturbation theory, and on Cauchy's theorem in the complex $s$-plane.
The latter is usually referred to as quark-hadron duality.
Vacuum expectation values of quark and gluon field operators effectively parametrize the effects of confinement.
An extension of this method to finite temperature was first outlined in~\cite{Bochkarev:1986es}.
Further evidence supporting the validity of this program was provided in~\cite{Dominguez:1994re}, followed by a large number of applications~\cite{Dominguez,QCDT}.
To analyze the role of the PL, we will concentrate on nonlocal Polyakov$-$Nambu$-$Jona-Lasinio (nlPNJL) models~\cite{Blaschke:2007np,Contrera:2007wu,Contrera:2009hk,Hell:2008cc,Hell:2009by,Carlomagno:2013ona}, in which quarks move in a background color field and interact through covariant nonlocal chirally symmetric four point couplings.
These approaches, which can be considered as an improvement over the (local) PNJL model~\cite{Meisinger:1995ih,Fukushima:2003fw,Megias:2004hj,Ratti:2005jh,Roessner:2006xn,Mukherjee:2006hq,Sasaki:2006ww}, offer a common framework to study both the chiral restoration and deconfinement transitions.
In fact, the nonlocal character of the interactions arises naturally in the context of several successful approaches to low-energy quark dynamics~\cite{Schafer:1996wv,Roberts:1994dr,Roberts:2000aa}, and leads to a momentum dependence in the quark propagator that can be made consistent~\cite{Noguera:2008cm} with lattice results~\cite{bowman,Parappilly:2005ei,Furui:2006ks}.
In view of the above mentioned points, the aim of the present work is to study the relation between both order parameters for the deconfinement transition at finite temperature and chemical potential, $\Phi$ and $s_0$, using the thermal finite energy sum rules (FESR) with inputs obtained from nlPNJL models.
\section{Finite energy sum rules}
\label{fesr}
We begin by considering the (charged) axial-vector current correlator at $T=0$
\begin{align}
\Pi_{\mu\nu}(q^2) &= i\int d^4x \,e^{iq\cdot x}\,
\langle 0| T(A_\mu(x) A_\nu(0))|0 \rangle, \nonumber \\
&= - g_{\mu\nu} \, \Pi_1(q^2) + q_\mu q_\nu \Pi_0(q^2) \;,
\label{correlator}
\end{align}
where $A_\mu(x) = :\bar{u}(x) \gamma_\mu \gamma_5 d(x):$ is the axial-vector current, $q_\mu = (\omega, \vec{q})$ is the four-momentum transfer, and the functions $\Pi_{0,1}(q^2)$ are free of kinematical singularities.
Concentrating on the function $\Pi_0(q^2)$ and writing the OPE beyond perturbation theory in QCD \cite{QCDSRreview}, one of the two pillars of the sum rule method, one has
\begin{equation}
\Pi_0(q^2)|_{\mbox{\tiny{QCD}}} = C_0 \, \hat{I} + \sum_{N=1} C_{2N} (q^2,\mu^2) \langle \hat{\mathcal{O}}_{2N} (\mu^2) \rangle \;,
\label{OPE}
\end{equation}
where $\mu^2$ is a renormalization scale.
The Wilson coefficients $C_N$ depend on the Lorentz indices and quantum numbers of the currents.
Finally, the local gauge invariant operators ${\hat{\mathcal{O}}}_N$ are built from the quark and gluon fields in the QCD Lagrangian. The vacuum expectation values of these operators, $\langle \hat{\mathcal{O}}_{2N} (\mu^2) \rangle$, dubbed condensates, parametrize nonperturbative effects and have to be extracted from experimental data or model calculations.
These operators are ordered by increasing dimensionality and the Wilson coefficients, calculable in PQCD, fall off by corresponding powers of $-q^2$.
The unit operator above has dimension $d=0$ and $C_0 \hat{I}$ stands for the purely perturbative contribution.
Hence, this OPE factorizes short distance physics, encapsulated in the Wilson coefficients, and long distance effects parametrized by the vacuum condensates.
The second pillar of the QCD sum rules technique is Cauchy's theorem in the complex squared energy $s$-plane
\begin{align}
\frac{1}{\pi}\int_{0}^{s_0} ds\ f(s)\ \mbox{Im}\ \Pi_0 (s)|_{\mbox{\tiny{HAD}}} = \nonumber \\
-\frac{1}{2\pi i}\oint_{C(|s_0|)}ds\ f(s)\ & \Pi_0 (s)|_{\mbox{\tiny{QCD}}} \;,
\label{disprel}
\end{align}
where $f(s)$ is an arbitrary analytic function, and the radius of the circle $s_0$ is large enough for QCD and the OPE to be used on the circle.
The integral along the real $s$-axis involves the hadronic spectral function.
This equation is the mathematical statement of what is usually referred to as quark-hadron duality.
Using the OPE, Eq.(\ref{OPE}), and an integration kernel $f(s) = s^N \; (N=1,2,\cdots)$ one obtains the FESR
\begin{align}
(-)^{N-1} C_{2N} \langle {\mathcal{\hat{O}}}_{2N}\rangle = 4 \pi^2 \int_0^{s_0} ds\, s^{N-1} \,\frac{1}{\pi} {\mbox{Im}} \Pi_0(s)|_{\mbox{\tiny{HAD}}}
\nonumber \\
- \frac{s_0^N}{N} \left[1+{\mathcal{O}}(\alpha_s)\right] \;\; (N=1,2,\cdots) \;.
\label{FESR}
\end{align}
For $N=1$, the dimension $d=2$ term in the OPE does not involve any condensate, as it is not possible to construct a gauge invariant operator of such a dimension from the quark and gluon fields.
There is no evidence for such a term (at $T=0$) from FESR analyses of experimental data on $e^+ e^-$ annihilation and $\tau$ decays into hadrons \cite{Dominguez:1999xa, Dominguez:2006ct}.
At high temperatures, though, there seems to be evidence for some $d=2$ term \cite{Megias:2009ar}.
However, the analysis to be reported here is performed at lower values of $T$, so that we can safely ignore this contribution in the sequel.
The dimension $d=4$ term, a renormalization group invariant quantity, is given by
\begin{equation}
C_4 \langle \hat{\mathcal{O}}_{4} \rangle =
\frac{\pi}{6} \langle \alpha_s G^2\rangle + 2 \pi^2 (m_u + m_d) \langle\bar{q} q \rangle \ .
\label{C4}
\end{equation}
The leading power correction of dimension $d=6$ is the four-quark condensate, which in the vacuum saturation approximation~\cite{QCDSRreview} becomes
\begin{equation}
C_6 \langle \hat{\mathcal{O}}_{6} \rangle = \frac{896}{81} \,\pi^3 \, \alpha_s \,|\langle \bar{q} q \rangle|^2\;,
\label{C6}
\end{equation}
which has a very mild dependence on the renormalization scale.
This approximation has no solid theoretical justification, other than its simplicity.
Hence, there is no reliable way of estimating corrections, which in fact appear to be rather large from comparisons between Eq. (\ref{C6}) and direct determinations from data~\cite{Dominguez:2006ct}.
The extension of this program to finite temperature is fairly straightforward~\cite{Bochkarev:1986es,Dominguez:1994re,Ayala:2011vs}, with the Wilson coefficients in the OPE, Eq.(\ref{OPE}), remaining independent of $T$ at leading order in $\alpha_s$, and the condensates developing a temperature dependence.
Radiative corrections in QCD involve now an additional scale, i.e. the temperature, so that $\alpha_s \equiv \alpha_s(\mu^2,T)$.
This problem has not yet been solved successfully.
Nevertheless, from the size of radiative corrections at $T=0$ one does not expect any major loss of accuracy in results from thermal FESR to leading order in PQCD, as long as the temperature is not too high, say $T \lesssim 200 \, {\mbox {MeV}}$.
Essentially all applications of FESR at $T \neq 0$ have been done at leading order in PQCD, thus implying a systematic uncertainty at the level of 10 \%.
In the static limit ($\vec{q} \rightarrow 0$), to leading order in PQCD, and for $T\neq 0$ and $\mu \neq 0$ the function $\Pi_0(q^2)|_{\mbox{\tiny{QCD}}}$ in Eq.(\ref{correlator}) becomes $\Pi_0(\omega^2, T, \mu)|_{\mbox{\tiny{QCD}}}$; to simplify the notation we shall omit the $T$ and $\mu$ dependence in the sequel.
A straightforward calculation of the spectral function in perturbative QCD, at finite temperature and finite density gives
\begin{align}
\frac{1}{\pi} {\mbox{Im}}\Pi_0(s)|_{\mbox{\tiny{PQCD}}}
=
\frac{1}{4\pi^2}\left[1-\tilde{n}_+\left(\frac{\sqrt{s}}{2}\right)
-\tilde{n}_-\left(\frac{\sqrt{s}}{2}\right)\right] \nonumber \\
-\frac{2}{\pi^2} \;T^2 \;\delta (s)\; \left[
{\mbox{Li}}_2(-e^{\mu/T})
+ {\mbox{Li}}_2(-e^{-\mu/T})\right],
\label{pertQCD}
\end{align}
where ${\mbox{Li}}_2(x)$ is the dilogarithm function, $s=\omega^2$, and
\begin{equation}
\tilde{n}_\pm(x)=\frac{1}{e^{(x\mp \mu)/T}+1}
\label{F-D}
\end{equation}
are the Fermi-Dirac thermal distributions for particles and antiparticles,
respectively.
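The dilogarithms entering Eq.~(\ref{pertQCD}) can be evaluated with elementary tools. The sketch below uses the defining series for arguments of modulus at most one, and the standard inversion identity ${\rm Li}_2(-x)+{\rm Li}_2(-1/x) = -\pi^2/6 - \tfrac{1}{2}\ln^2 x$ otherwise; at $\mu=0$ one recovers $2\,{\rm Li}_2(-1)=-\pi^2/6$, so the coefficient of the $\delta(s)$ piece reduces to $T^2/3$.

```python
import math

def li2_neg(y):
    """Dilogarithm Li2(-y) for y > 0: series for y <= 1,
    inversion identity Li2(-y) = -pi^2/6 - ln(y)^2/2 - Li2(-1/y) otherwise."""
    if y > 1.0:
        return -math.pi**2 / 6.0 - 0.5 * math.log(y)**2 - li2_neg(1.0 / y)
    return sum((-y)**k / k**2 for k in range(1, 4000))

def n_tilde(x, mu, T, sign):
    """Fermi-Dirac distribution, Eq. (F-D): sign = +1 for particles,
    sign = -1 for antiparticles."""
    return 1.0 / (math.exp((x - sign * mu) / T) + 1.0)

def delta_term(T, mu):
    """Coefficient of the delta(s) piece of (1/pi) Im Pi_0, Eq. (pertQCD)."""
    return -2.0 * T**2 / math.pi**2 * (
        li2_neg(math.exp(mu / T)) + li2_neg(math.exp(-mu / T)))
```

For instance, `delta_term(T, 0.0)` returns $T^2/3$, while at $\mu \neq 0$ the identity gives ${\rm Li}_2(-e^{\mu/T})+{\rm Li}_2(-e^{-\mu/T}) = -\pi^2/6 - \mu^2/(2T^2)$ exactly.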
In the hadronic sector we assume pion-pole dominance of the hadronic spectral function, i.e.\ we take the continuum threshold $s_0$ to lie below the first radial excitation, of mass $M_{\pi_1} \simeq 1300\;{\mbox{MeV}}$.
This is a very good approximation at finite $T$, as we expect $s_0$ to be monotonically decreasing with increasing temperature.
In this case,
\begin{equation}
\frac{1}{\pi}{\mbox{Im}}\Pi_0 (s)|_{\mbox{\tiny{HAD}}}
= 2 \; f_\pi^2(T,\mu_B)\; \delta (s-m_\pi^2),
\label{HAD}
\end{equation}
where $f_\pi(T,\mu_B)$ is the pion decay constant at finite $T$ and $\mu$, with $f_\pi(0,0) =92.21 \pm 0.14 \;{\mbox{MeV}}$ \cite{Agashe:2014kda}.
Notice that we will not include in our spectral function the $a_1$ resonance part obtained from the $\tau$-decay data~\cite{Dominguez:2012bs}, since there is still no counterpart in the SU(2) nlPNJL model for the description of the hadronic axial-vector resonance.
A zero temperature analysis has been done for the vector case in Ref.~\cite{Villafane:2016ukb}.
Turning to the FESR, Eq.(\ref{FESR}), with $N=1$ and no dimension $d=2$ condensate, and using Eqs.(\ref{pertQCD}) and (\ref{HAD}) one finds
\begin{align}
\int_0^{s_0(T,\mu)} ds \, \left[1 - \tilde{n}_+\left(\frac{\sqrt{s}}{2}\right)
-\tilde{n}_-\left(\frac{\sqrt{s}}{2}\right)\right] = \nonumber\\
8 \pi^2 f_\pi^2(T,\mu) + 8 T^2 \left[{\mbox{Li}}_2(-e^{\mu/T})
+ {\mbox{Li}}_2(-e^{-\mu/T})\right] .
\label{FESRTMU}
\end{align}
This is a transcendental equation determining $s_0(T,\mu)$ in terms of $f_\pi(T,\mu)$.
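At $\mu=0$, where ${\rm Li}_2(-1)=-\pi^2/12$ turns the dilogarithm term into $-(4/3)\pi^2 T^2$, this transcendental equation can be solved by simple bisection. The sketch below assumes, purely for illustration, a temperature-independent $f_\pi = f_\pi(0,0)$; in the full analysis $f_\pi(T,\mu)$ is supplied by the nlPNJL model. Units are GeV.

```python
import math

def n_f(x):
    """Fermi thermal function n_F(x) = 1/(1+e^x), guarded against overflow."""
    if x > 50.0:
        return 0.0
    return 1.0 / (1.0 + math.exp(x))

def lhs(s0, T, n=4000):
    """int_0^s0 ds [1 - 2 n_F(sqrt(s)/(2T))], trapezoidal rule."""
    h = s0 / n
    f = lambda s: 1.0 - 2.0 * n_f(math.sqrt(s) / (2.0 * T))
    return h * (0.5 * (f(0.0) + f(s0)) + sum(f(i * h) for i in range(1, n)))

def s0_of_T(T, fpi=0.09221):
    """Solve Eq. (FESRTMU) at mu = 0 for s0(T), GeV^2."""
    rhs = 8.0 * math.pi**2 * fpi**2 - (4.0 / 3.0) * math.pi**2 * T**2
    lo, hi = 1e-6, 4.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if lhs(mid, T) < rhs:   # the integral is increasing in s0
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

At very low $T$ this reproduces the well-known $T=0$ value $s_0 \simeq 8\pi^2 f_\pi^2 \approx 0.67\;{\rm GeV}^2$, and with $f_\pi$ held fixed $s_0$ decreases only by a few percent up to $T=100$ MeV; the physical decrease of $s_0(T)$ is driven by the thermal drop of $f_\pi(T)$.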
For completeness, the other two thermal FESR at zero chemical potential are given by \cite{Dominguez:2012bs},
\begin{align}
- C_{4}\langle {\mathcal{\hat{O}}}_{4}\rangle(T) = 4 \pi^2 \int_0^{s_0(T)} ds\, s \frac{1}{\pi} {\mbox{Im}}\, \Pi_0(s)|_{\mbox{\tiny{HAD}}}
\nonumber \\
- \int_0^{s_0(T)}ds \, s \left[1 - 2 n_F\left(\frac{\sqrt{s}}{2 T}\right)\right] ,
\label{FESRT2}
\end{align}
\begin{align}
C_{6}\langle {\mathcal{\hat{O}}}_{6}\rangle(T) = 4 \pi^2 \int_0^{s_0(T)} ds\, s^2 \frac{1}{\pi} {\mbox{Im}}\, \Pi_0(s)|_{\mbox{\tiny{HAD}}}
\nonumber \\
- \int_0^{s_0(T)}ds \; s^2 \left[1 - 2 n_F\left(\frac{\sqrt{s}}{2 T}\right)\right] \;,
\label{FESRT3}
\end{align}
where $n_F(x)=1/(1+e^x)$ is the Fermi thermal function.
\section{Thermodynamics at finite density in the PNJL model}
\label{thermo}
We consider a nonlocal SU(2) chiral quark model that includes quark couplings to the color gauge fields.
The corresponding Euclidean effective action is given by~\cite{Contrera:2010kz,Pagura:2011rt}
\begin{align}
S_{E} = \int d^{4}x\ \bigg\{
\bar{\psi}(x)\left( -i\gamma_{\mu}D_{\mu}
+\hat{m}\right) \psi(x) - \nonumber \\
\left. \frac{G_{S}}{2} \Big[ j_{a}(x)j_{a}(x)- j_{P}%
(x)j_{P}(x)\Big]+ \ {\cal U}\,(\Phi[A(x)]) \right\rbrace \ ,
\label{action}%
\end{align}
where $\psi$ is the $N_{f}=2$ fermion doublet $\psi\equiv(u,d)^T$, and $\hat{m}={\rm diag}(m_{u},m_{d})$ is the current quark mass matrix. In what follows we consider isospin symmetry, $m_{u}=m_{d}=m$.
The fermion kinetic term in Eq.~(\ref{action}) includes a covariant derivative $D_\mu\equiv\partial_\mu - iA_\mu$, where $A_\mu$ are color gauge fields.
The nonlocal currents $j_{a}(x),j_{P}(x)$ are given by
\begin{align}
j_{a}(x) & =\int d^{4}z\ {\cal G}(z)\ \bar{\psi}\left( x+\frac{z}{2}\right)
\ \Gamma_{a}\ \psi\left( x-\frac{z}{2}\right) \ ,\nonumber\\
j_{P}(x) & =\int d^{4}z\ {\cal F}(z)\ \bar{\psi}\left( x+\frac{z}{2}\right)
\ \frac{i {\overleftrightarrow{\rlap/\partial}}}{2\ \kappa_{p}}
\ \psi\left( x-\frac{z}{2}\right)\ ,
\label{currents}%
\end{align}
where,
$\Gamma_{a}=(\leavevmode\hbox{\small1\kern-3.8pt\normalsize1},i\gamma
_{5}\vec{\tau})$ and $u(x^{\prime}){\overleftrightarrow{\partial}%
}v(x)=u(x^{\prime})\partial_{x}v(x)-\partial_{x^{\prime}}u(x^{\prime})v(x)$.
The functions ${\cal G}(z)$ and ${\cal F}(z)$ in Eq.~(\ref{currents}) are nonlocal covariant form factors characterizing the corresponding interactions.
Notice that the four currents $j_a(x)$ require a common form factor ${\cal G}(z)$ in order to guarantee chiral invariance, while the coupling $j_{P}(x)j_{P}(x)$ is self-invariant under chiral transformations.
The scalar-isoscalar component of the $j_{a}(x)$ current will generate a momentum dependent quark mass in the quark propagator, while the ``momentum'' current $j_{P}(x)$ will be responsible for a momentum dependent quark wave function renormalization (WFR)~\cite{Noguera:2008cm,Contrera:2010kz,Pagura:2011rt}; if the latter is not included, the mass function in the quark propagator cannot be compared with lattice results.
Now we perform a bosonization of the theory, introducing bosonic fields $\sigma_{1,2}(x)$ and $\pi_a(x)$, and integrating out the quark fields.
Details of this procedure can be found e.g.\ in Ref.~\cite{Noguera:2008cm}.
In order to analyze the properties of meson fields it is necessary to go beyond the mean field approximation, considering quadratic fluctuations in the Euclidean action:
\begin{align}
\label{spiketa}
S_E^{\rm quad} &= \dfrac{1}{2} \int \frac{d^4 p}{(2\pi)^4} \sum_{M}\ r_M\
G_M(p^2)\ \phi_M(p)\, \bar\phi_M(-p) \ ,
\end{align}
where meson fluctuations $\delta\sigma_a$, $\delta\pi_a$ have been translated to a charged basis $\phi_M$, with $M$ labeling the scalar and pseudoscalar mesons ($\sigma,\pi^0$, $\pi^\pm$) plus the $\sigma_2$ field, and $G_M$ are the inverse dressed propagators.
The coefficient $r_M$ is 1 for charge eigenstates $M=\sigma_i,\pi^0$, and 2 for $M=\pi^+$.
Meson masses are then given by the equations
\begin{equation}
G_M(-m_M^2)\ =\ 0 \ ,
\end{equation}
where the full expressions for the one-loop functions $G_M(q)$ can be found in Ref.~\cite{Noguera:2008cm,Carlomagno:2013ona}.
In addition, physical states have to be normalized through
\begin{equation}
\tilde{\phi}_M(p)=Z_M^{-1/2}\ \phi_M(p)\ ,
\end{equation}
where
\begin{equation}
\label{zr}
Z_M^{-1}=\frac{dG_M(p)}{dp^2}\bigg\vert_{p^2=-m_M^2} \ .
\end{equation}
At finite temperature, the meson masses are obtained by solving $G_P (- m_P^2, 0) = 0$.
The mass values determined by these equations are the spatial ``screening-masses'' corresponding to the zeroth Matsubara mode, and their inverses describe the persistence lengths of these modes at equilibrium with the heat bath~\cite{Contrera:2009hk}.
At zero temperature, one can also calculate the weak decay constants of pseudoscalar mesons.
These are given by the matrix elements of the axial currents $ A_\mu^a$ between the vacuum and the physical meson states,
\begin{equation}
\label{fpiab}
\imath f_{ab}(p^2) \; p_\mu=\langle 0 \vert A_\mu^a(0) \vert \delta\pi_b(p)
\rangle\ .
\end{equation}
The matrix elements can be calculated from the expansion of the Euclidean effective action in the presence of external axial currents,
\begin{equation}
\label{der}
\langle 0 \vert A_\mu^a(0) \vert \delta\pi_b(p) \rangle = \frac{\delta^2
S_E}{\delta A_\mu^a \delta \pi_b(p)}\bigg\vert_{A_\mu^a=\delta\pi_b=0} \ .
\end{equation}
Performing the derivative of the resulting expressions with respect to the renormalized meson fields, we can finally identify the corresponding pion weak decay constant~\cite{Noguera:2008cm,Carlomagno:2013ona}
\begin{equation}
f_{\pi}=\frac{m_{c}\; Z^{-1/2}_\pi}{m_{\pi}^{2}}\; F_{0}(-m_{\pi}^{2})\ ,
\label{fpi}%
\end{equation}
with
\begin{align}
F_{0}(p^{2})=8\, N_{c}\int\frac{d^{4}q}{(2\pi)^{4}}\ g(q)\;\frac
{Z(q^{+})Z(q^{-})}{D(q^{+})D(q^{-})}\times \nonumber \\
\left[q^{+}\cdot q^{-}+M(q^{+})M(q^{-})\right]
\label{fpi2}
\end{align}
where $q^{\pm}=q\pm p/2\,$ and $D(q)=q^{2}+M^{2}(q)$, with $M(p)$ and $Z(p)$ defined as
\begin{align}
M(p) &= Z(p) \left[m_q + \bar\sigma_1 \ g(p) \right] \ , \nonumber \\
Z(p) &= \left[ 1 - \bar\sigma_2 \ f(p) \right]^{-1}\ .
\label{mz}
\end{align}
Here $g(p)$ and $f(p)$ are the Fourier transforms of the form factors in Eq.~(\ref{currents}).
\vspace*{0.5cm}
Since we are interested in the deconfinement and chiral restoration critical temperatures, we extend the bosonized effective action to finite temperature $T$ and chemical potential $\mu$.
This will be done using the standard imaginary time formalism.
Concerning the gauge fields $A_\mu$, we assume that quarks move on a constant background field $\phi = A_4 = i A_0 = i g\,\delta_{\mu 0}\, G^\mu_a \lambda^a/2$, where $G^\mu_a$ are SU(3) color gauge fields.
Then the traced Polyakov loop, which in the infinite quark mass limit can be taken as an order parameter of confinement, is given by $\Phi=\frac{1}{3} {\rm Tr}\, \exp( i \phi/T)$.
For the light quark sector the trace of the Polyakov loop turns out to be an approximate order parameter, in the same way as the chiral quark condensate is an approximate order parameter for chiral symmetry restoration outside the chiral limit.
We work in the so-called Polyakov gauge~\cite{Diakonov:2004kc}, where the matrix $\phi$ is given a diagonal representation $\phi = \phi_3 \lambda_3 + \phi_8 \lambda_8$.
This leaves only two independent variables, $\phi_3$ and $\phi_8$.
Owing to the charge conjugation properties of the QCD Lagrangian, the expectation values $\langle \Phi \rangle$ and $\langle \Phi^* \rangle$ of the conjugate Polyakov loop fields must be real quantities~\cite{Dumitru:2005ng,Roessner:2006xn}.
This means $ \Phi = \Phi^*$ for the mean field configurations that satisfy the gap equations.
The constraint that $\Phi = \Phi^*$ be real then implies $\phi_8=0$, leaving only $\phi_3$ as an independent variable; therefore $\Phi = [ 2 \cos(\phi_3/T) + 1 ]/3$.
Thus, in the mean field approximation (MFA), and following the same prescriptions as in previous works, see e.g.~Refs.~\cite{GomezDumm:2001fz,GomezDumm:2004sr}, the thermodynamical potential $\Omega^{\rm MFA}$ at finite temperature $T$ and chemical potential $\mu$ is given by
\begin{equation}
\label{omegareg}
\Omega^{\rm MFA} \ = \ \Omega^{\rm reg} + \Omega^{\rm free} +
\mathcal{U}(\Phi,T) + \Omega_0 \ ,
\end{equation}
where
\begin{widetext}
\begin{align}
\Omega^{\rm reg} &= \,- \,4 T \sum_{c=r,g,b} \ \sum_{n=-\infty}^{\infty}
\int \frac{d^3\vec p}{(2\pi)^3} \ \log \left[ \frac{ (\rho_{n,
\vec{p}}^c)^2 + M^2(\rho_{n,\vec{p}}^c)}{Z^2(\rho_{n, \vec{p}}^c)}\right]+
\frac{\bar\sigma_1^2 + \kappa_p^2\; \bar\sigma_2^2}{2\,G_S} \ , \nonumber \\
\Omega^{\rm free} \ &= \ -4 T \int \frac{d^3 \vec{p}}{(2\pi)^3}\;
\sum_{c=r,g,b} \ \sum_{s=\pm 1}\mbox{Re}\;
\ln\left[ 1 + \exp\left(-\;\frac{\epsilon_p + i s \phi_c}{T}
\right)\right]
\ ,
\label{granp}
\end{align}
\end{widetext}
here $\bar\sigma_{1,2}$ are the mean field values of the scalar fields. We have also defined
\begin{equation}
\Big({\rho_{n,\vec{p}}^c} \Big)^2 =
\Big[ (2 n +1 )\pi T + \phi_c - \imath \mu \Big]^2 + {\vec{p}}\ \! ^2 \ ,
\end{equation}
the sums over color indices run over $c=r,g,b$, with the color background fields components being $\phi_r = - \phi_g = \phi_3$, $\phi_b = 0$, and $\epsilon_p = \sqrt{\vec{p}^{\;2}+m^2}\;$.
The term $\Omega^{\rm reg}$ is the regularized quark contribution, $\Omega^{\rm free}$ is the thermodynamical potential of a free fermion gas in the presence of the color background field, and the last term in Eq.~(\ref{omegareg}) is just a constant fixed by the condition that $\Omega^{\rm MFA}$ vanishes at $T=\mu=0$.
The effective gauge field self-interactions are given by the Polyakov loop potential $\mathcal{U}(\Phi,T)$. At finite temperature $T$, it is usual to take for this potential a functional form based on properties of pure gauge QCD.
One possible Ansatz is that based on the logarithmic expression of the Haar measure associated with the SU(3) color group integration.
The corresponding potential is given by~\cite{Roessner:2006xn}
\begin{align}
\frac{{\cal{U}}_{\rm log}(\Phi ,T)}{T^4} =\ -\,\frac{1}{2}\, a(T)\,\Phi^2 \;+\nonumber \\
\;b(T)\, \log\left(1 - 6\, \Phi^2 + 8\, \Phi^3
- 3\, \Phi^4 \right) \ ,
\label{ulog}
\end{align}
where
\begin{align}
a(T) &= a_0 +a_1 \left(\dfrac{T_0}{T}\right) + a_2\left(\dfrac{T_0}{T}\right)^2 \ , \nonumber\\
b(T) &= b_3\left(\dfrac{T_0}{T}\right)^3 \ .
\label{log}
\end{align}
The parameters can be fitted to pure gauge lattice QCD data to properly reproduce the corresponding equation of state and the Polyakov loop behavior~\cite{Roessner:2006xn}.
The values of $a_i$ and $b_i$ are constrained by the condition of reaching the Stefan-Boltzmann limit at $T \rightarrow \infty$ and by imposing the presence of a first-order phase transition at $T_0$, which is a further parameter of the model.
At the critical temperature, the Polyakov loop potential develops a second degenerate minimum, giving rise to a first-order phase transition.
In the absence of dynamical quarks, from lattice calculations one expects a deconfinement temperature $T_0 = 270$~MeV.
However, it has been argued that in the presence of light dynamical quarks this temperature scale should be adequately reduced to about 210 and 190~MeV for the case of two and three flavors, respectively, with an uncertainty of about 30 MeV~\cite{Schaefer:2007pw}. In this work we will use $T_0 = 208$~MeV.
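The first-order character of ${\cal U}_{\rm log}$ in the pure gauge sector ($T_0=270$ MeV) can be checked directly with a short numerical sketch. The parameter values $a_0=3.51$, $a_1=-2.47$, $a_2=15.2$, $b_3=-1.75$ used below are the standard fit of Ref.~\cite{Roessner:2006xn} (to be checked against that reference); note the convenient factorization $1-6\Phi^2+8\Phi^3-3\Phi^4=(1-\Phi)^3(1+3\Phi)$.

```python
import math

# U_log parameters fitted to pure gauge lattice data (Roessner et al.)
A0, A1, A2, B3 = 3.51, -2.47, 15.2, -1.75
T0 = 0.270  # GeV, pure gauge deconfinement temperature

def u_log(phi, T):
    """U_log(Phi, T) / T^4, Eq. (ulog); the log argument equals
    (1 - Phi)^3 (1 + 3 Phi), positive for 0 <= Phi < 1."""
    a = A0 + A1 * (T0 / T) + A2 * (T0 / T)**2
    b = B3 * (T0 / T)**3
    return -0.5 * a * phi**2 + b * math.log((1.0 - phi)**3 * (1.0 + 3.0 * phi))

def phi_min(T, n=2000):
    """Location of the global minimum of U_log on a grid in [0, 1)."""
    grid = [0.999 * i / n for i in range(n)]
    return min(grid, key=lambda p: u_log(p, T))
```

Scanning in temperature, the global minimum sits at $\Phi=0$ below $T_0$ and jumps discontinuously to $\Phi\gtrsim 0.4$ just above it, the expected first-order behavior of the pure gauge theory.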
Besides the logarithmic function in Eq.~(\ref{ulog}), a widely used potential is that given by a polynomial function based on a Ginzburg-Landau Ansatz~\cite{Ratti:2005jh,Scavenius:2002ru}:
\begin{align}
\frac{{\cal{U}}_{\rm poly}(\Phi ,T)}{T ^4} \ = \ -\,\frac{b_2(T)}{2}\, \Phi^2
-\,\frac{b_3}{3}\, \Phi^3 +\,\frac{b_4}{4}\, \Phi^4 \ ,
\label{upoly}
\end{align}
where
\begin{align}
b_2(T) = a_0 +a_1 \left(\dfrac{T_0}{T}\right) + a_2\left(\dfrac{T_0}{T}\right)^2
+ a_3\left(\dfrac{T_0}{T}\right)^3\ .
\label{pol}
\end{align}
Once again, the parameters can be fitted to pure gauge lattice QCD results to reproduce the corresponding equation of state and Polyakov loop behavior (numerical values can be found in Ref.~\cite{Ratti:2005jh}).
Given the full form of the thermodynamical potential, the mean field values $\bar\sigma_{1,2}$ and $\phi_{3}$ can be obtained as solutions of the coupled set of gap equations
\begin{equation}
\frac{\partial \Omega^{\rm MFA}}{\partial\bar\sigma_{1}} \ = \
\frac{\partial \Omega^{\rm MFA}}{\partial\bar\sigma_{2}} \ = \
\frac{\partial \Omega^{\rm MFA}}{\partial\phi_3} \ = \ 0 \ .
\label{fullgeq}
\end{equation}
In order to fully specify the model under consideration, we proceed to fix the model parameters as well as the nonlocal form factors $g(q)$ and $f(q)$. We consider here Gaussian functions
\begin{align}
g(q) &= \mbox{exp}\left(-q^{2}/\Lambda_{0}^{2}\right) \ , \nonumber \\
f(q) &= \mbox{exp}\left(-q^{2}/\Lambda_{1}^{2}\right)\ ,
\label{regulators}
\end{align}
which guarantee a fast ultraviolet convergence of the loop integrals.
The values of the five free parameters can be found in~\cite{Pagura:2011rt}.
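The resulting momentum dependence of $M(p)$ and $Z(p)$, Eq.~(\ref{mz}), with the Gaussian regulators of Eq.~(\ref{regulators}) can be illustrated with a short sketch. The numerical values below are illustrative placeholders only, chosen to be of the typical magnitude of such fits (with $\bar\sigma_2<0$ so that $Z(0)<1$, as on the lattice); they are not the actual parameter set of Ref.~\cite{Pagura:2011rt}.

```python
import math

# Illustrative placeholder parameters (NOT the fit of Pagura et al.)
M_CURRENT = 0.0024      # current quark mass m, GeV (assumed)
SIGMA1 = 0.52           # mean field sigma_1, GeV (assumed)
SIGMA2 = -0.37          # mean field sigma_2, dimensionless (assumed)
LAMBDA0 = 0.815         # GeV (assumed)
LAMBDA1 = 1.7           # GeV (assumed)

def g(p2):  # scalar form factor, Eq. (regulators), p2 in GeV^2
    return math.exp(-p2 / LAMBDA0**2)

def f(p2):  # WFR form factor
    return math.exp(-p2 / LAMBDA1**2)

def Z(p2):  # quark wave function renormalization, Eq. (mz)
    return 1.0 / (1.0 - SIGMA2 * f(p2))

def M(p2):  # momentum-dependent quark mass, Eq. (mz)
    return Z(p2) * (M_CURRENT + SIGMA1 * g(p2))
```

With such values $M(0)$ is of the order of a constituent quark mass ($\sim 0.35$--$0.4$ GeV) and falls off monotonically to the current mass $m$ at large Euclidean momenta, while $Z(p)\to 1$, the qualitative behavior required for consistency with lattice propagators.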
Once the mean field values are obtained, the behavior of other relevant quantities as functions of the temperature and chemical potential can be determined.
We concentrate, in particular, on the chiral quark condensate $\langle\bar{q}q\rangle = \partial\Omega^{\rm MFA}_{\rm reg}/\partial m$ and the traced Polyakov loop $\Phi$, which will be taken as order parameters for the chiral restoration and deconfinement transitions, respectively.
The associated susceptibilities will be defined as $\chi_{\rm ch} = \partial\,\langle\bar qq\rangle/\partial m$ and $\chi_{\rm PL} = d \Phi / d T$.
\section{Results}
\label{results}
In order to determine the relation between the two order parameters for the deconfinement transition, namely the perturbative QCD threshold $s_0$ and the trace of the Polyakov loop $\Phi$, as functions of the temperature and chemical potential, we begin our analysis by studying the finite energy sum rules at zero density.
At $\mu=0$, Eq.~(\ref{FESRTMU}) becomes
\begin{align}
8 \pi^2 f^2_\pi(T) &= \frac{4}{3} \pi^2 T^2 + \int_0^{s_0(T)}ds \,\left[1 - 2\, n_F \left(\frac{\sqrt{s}}{2 T} \right) \right] \;, \label{FESRT1}
\end{align}
where the pion decay constant at finite temperature and/or chemical potential is calculated using Eq.~(\ref{fpi}) together with Eq.~(\ref{fpi2}) as
\begin{align}
F_{0}(p^{2})=8\, T \sum_{c,n} \int\frac{d^{3}\vec{q}}{(2\pi)^{4}}\ g({\rho_{n,\vec{q}}^c})\;
\frac{Z({\rho_{n,\vec{q}}^c}^{+})Z({\rho_{n,\vec{q}}^c}^{-})}{D({\rho_{n,\vec{q}}^c}^{+})D({\rho_{n,\vec{q}}^c}^{-})}\ \times \nonumber \\
\left[ {\rho_{n,\vec{q}}^c}^{+}\cdot {\rho_{n,\vec{q}}^c}^{-}+M({\rho_{n,\vec{q}}^c}^{+}%
)M({\rho_{n,\vec{q}}^c}^{-})\right]
\label{fpi3}
\end{align}
where ${\rho_{n,\vec{q}}^c}^{\pm}={\rho_{n,\vec{q}}^c} \pm p/2\,$.
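The structure of Eq.~(\ref{FESRT1}) can be made concrete with a small numerical sketch that solves it for $s_0(T)$ by bisection. The $f_\pi(T)$ profile used below is a schematic crossover stand-in, assumed purely for illustration, and not the nlPNJL output of Eq.~(\ref{fpi3}):

```python
import math

# Solve the mu = 0 FESR, Eq. (FESRT1):
#   8 pi^2 f_pi(T)^2 = (4/3) pi^2 T^2 + int_0^{s0} [1 - 2 n_F(sqrt(s)/2T)] ds
# for the continuum threshold s0(T), in MeV units.

def n_F(x):
    """Fermi-Dirac thermal factor."""
    return 1.0 / (math.exp(x) + 1.0)

def fesr_rhs(s0, T, n=400):
    """Right-hand side of Eq. (FESRT1), midpoint-rule integration."""
    ds = s0 / n
    integral = sum((1.0 - 2.0 * n_F(math.sqrt((i + 0.5) * ds) / (2.0 * T))) * ds
                   for i in range(n))
    return (4.0 / 3.0) * math.pi**2 * T**2 + integral

def solve_s0(fpi_T, T, s_hi=2.0e6):
    """Bisect for s0; collapses to ~0 once the equation admits no solution."""
    target = 8.0 * math.pi**2 * fpi_T**2
    lo, hi = 0.0, s_hi
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if fesr_rhs(mid, T) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

fpi0, Tc = 92.4, 170.0  # illustrative vacuum f_pi and crossover temperature
def fpi(T):
    """Schematic chiral-crossover profile for f_pi(T) (assumption)."""
    return fpi0 * math.sqrt(0.5 * (1.0 - math.tanh((T - Tc) / 10.0)))
```

With this toy input the threshold decreases with $T$ and drops to zero once the right-hand side at $s_0=0$ already exceeds $8\pi^2 f_\pi^2(T)$, mimicking the deconfinement behavior discussed below.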
It is known that in local versions of the PNJL model, at zero chemical potential, the restoration of chiral symmetry and the deconfinement transition take place at different temperatures (see e.g.~Refs.~\cite{Fu:2007xc,Costa:2008dp}), usually separated by approximately $20$~MeV.
Therefore, it is interesting to compare the results obtained in a nonlocal and in a local PNJL model, the latter parametrized according to Ref.~\cite{Ratti:2005jh}.
In Fig.~\ref{fig:lvsnl} we plot the continuum threshold and the trace of the PL for the nonlocal (local) PNJL model in solid (dashed) line, for the logarithmic and polynomial effective potentials.
As expected from previous results, in the local version the two transitions do not occur simultaneously.
In this case, the PQCD threshold vanishes at a critical temperature, $T_c^{s_0}$, located between the chiral critical temperature $T_c^{\chi}$ and the PL deconfinement temperature $T_c^{\Phi}$ (both obtained from the corresponding susceptibilities). Hence, although a direct relation between $s_0$ and $\Phi$ cannot be established, the continuum threshold in any case vanishes before the restoration of chiral symmetry, in agreement with general arguments~\cite{Bochkarev:1986es}.
In the case of the nonlocal PNJL model, for both effective potentials, $s_0$ and $\Phi$ lead to a similar critical temperature for the deconfinement transition, approximately $T_c \sim 170$~MeV.
These temperatures are summarized in Table~\ref{TableI:tes_c}.
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{pkv_s0_nomu}
\caption{\label{fig:lvsnl} Continuum threshold (red line) and trace of the Polyakov loop (blue line) as a function of the temperature for nonlocal (solid line) and local PNJL model (dashed line) at zero chemical potential for logarithmic (upper panel) and polynomial (lower panel) effective potentials.}
\end{figure}
\begin{table}
\begin{tabular}{c c c c c}
\hline
& \multicolumn{2}{c}{Logarithmic} & \multicolumn{2}{c}{Polynomial} \\
\hline
& Nonlocal & Local & Nonlocal & Local \\
\hline
$T_c^{\chi}$ [MeV] & 171 & 205 & 176 & 201 \\
$T_c^{\Phi}$ [MeV] & 171 & 171 & 174 & 183 \\
$T_c^{s_0}$ [MeV] & 171 & 189 & 170 & 190 \\
\hline
\end{tabular}
\caption{Chiral critical temperatures $T_c^{\chi}$, deconfinement temperatures $T_c^{\Phi}$ and $T_c^{s_0}$ for the local and nonlocal PNJL model with logarithmic and polynomial effective potentials.}
\label{TableI:tes_c}
\end{table}
The value obtained at zero temperature for the continuum threshold, $s_0 \sim 670$~MeV, is rather small but in good agreement with other sum-rule calculations that use LQCD results as input.
The main reason for this low value is the pion pole approximation for the spectral function: when additional information is incorporated, for instance the $a_1$ resonance, the value of $s_0(T=0)$ increases substantially~\cite{Dominguez:2012bs}.
For completeness, and in addition to the main goal of this article, we can estimate the gluon condensate and the four-quark condensate from the higher-order FESR, Eqs.~(\ref{FESRT2}) and (\ref{FESRT3}).
The former shows the expected behavior, with a finite value at zero temperature; it decreases monotonically as a function of temperature, vanishing at $T \sim 170$~MeV.
The four-quark condensate, plotted in Fig.~\ref{fig:qqqq}, is compared, according to the vacuum saturation approximation (VSA), with the square of the chiral quark condensate obtained within the $SU(2)$ nlPNJL model.
If we assume that this approximation is exact, from Eqs.~(\ref{C6}) and (\ref{FESRT3}), at zero temperature and in the chiral limit, we obtain $\alpha_s = \dfrac{108\ \pi^3}{7}\dfrac{f_\pi^6}{|\langle \bar{q}q \rangle|^2} \simeq 1.6$ (a very similar result is obtained away from the chiral limit), meaning that the VSA underestimates $C_6 \langle \hat{\mathcal{O}}_{6} \rangle$.
This value is considerably higher than recent estimates of the strong coupling at low energies based on completely different approaches~\cite{Pich:2016bdg,Deur:2016tte}: the first relies on a recent analysis of the ALEPH data for $\tau$ decay, whereas the second is a general review covering different perspectives.
From Fig.~\ref{fig:qqqq} we see that, for both Polyakov effective potentials, the VSA result is about $40\%$ smaller than the four-quark condensate obtained from the FESR at zero temperature, in qualitative agreement with estimates based on $K^0$--$\bar{K}^0$ mixing~\cite{Chetyrkin:1988yr}.
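The arithmetic of the $\alpha_s$ estimate above is easy to reproduce. The inputs below ($f_\pi\simeq 92.4$~MeV and $\langle\bar qq\rangle\simeq(-240~\mathrm{MeV})^3$) are representative vacuum values assumed for illustration, not the fitted model output:

```python
import math

# Reproduce alpha_s = (108 pi^3 / 7) * f_pi^6 / |<qq>|^2 with
# representative (assumed) vacuum values, in MeV units.
f_pi = 92.4       # MeV, illustrative
qq = -240.0**3    # MeV^3, illustrative chiral condensate
alpha_s = (108.0 * math.pi**3 / 7.0) * f_pi**6 / qq**2
assert 1.4 < alpha_s < 1.7  # ~1.6, as quoted in the text
```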
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{qqqq.eps}
\caption{\label{fig:qqqq} Four-quark condensate in the vacuum saturation approximation with $\alpha_s=1$~\cite{Deur:2016tte} (blue line) and $C_6 \langle \hat{\mathcal{O}}_{6} \rangle$ (red line) for the logarithmic (polynomial) Polyakov effective potential in solid (dashed) line, at zero density as a function of the temperature.}
\end{figure}
Lattice QCD calculations indicate that, at zero chemical potential, chiral symmetry restoration and the deconfinement transition take place at the same critical temperature. This behavior has been verified in nlPNJL models~\cite{Contrera:2010kz,Carlomagno:2013ona} and was also obtained from finite energy sum rules~\cite{Ayala:2011vs}.
The next natural step is to extend our analysis to a finite density scenario, to identify the relation between $s_0(T,\mu)$ and $\Phi(T,\mu)$.
In Fig.~\ref{fig:munocero} we plot, for the logarithmic Polyakov effective potential, the normalized quark condensate $\langle\bar qq\rangle/\langle\bar qq\rangle_0$, the trace of the PL $\Phi$ and the continuum threshold $s_0$ as functions of the temperature for three different values of chemical potential.
In the middle panel we choose $\mu=139$~MeV, which corresponds to the chemical potential of the critical end point, $\mu_{CEP}$. For values of $\mu$ smaller than $\mu_{CEP}$ the chiral restoration proceeds via a crossover transition, while beyond this critical density a first-order phase transition occurs.
This value, together with the critical temperature $T_{CEP} = 161$~MeV, determines the coordinates of the critical end point.
All the results presented here were obtained with Gaussian regulators (see Eq.~(\ref{regulators})).
Nevertheless, similar outcomes would be obtained if other form factors were employed, for instance a lattice-inspired momentum dependence (Lorentzian regulator)~\cite{Carlomagno:2013ona}, or if the wave-function renormalization (WFR) were neglected altogether~\cite{Contrera:2009hk}.
It turns out that the chiral and deconfinement critical temperatures show only a minor dependence on the explicit shape used to parameterize the form factors~\cite{Pagura:2011rt,Carlomagno:2015hea}.
\begin{figure}[h]
\includegraphics[width=0.45\textwidth]{pkv_s0_mu.eps}
\caption{\label{fig:munocero} Continuum threshold (solid red line), trace of the Polyakov loop (black dashed lined) and the normalized quark condensate (blue dotted line) as a function of the temperature at a constant density for the logarithmic effective potential.}
\end{figure}
In the upper panel of Fig.~\ref{fig:munocero}, where $\mu=100$~MeV, we see that the chiral and deconfinement transitions are crossovers occurring at the same critical temperature.
The peak of the Polyakov-loop susceptibility and the point where the continuum threshold vanishes occur at approximately the same temperature, $T_c \sim 166$~MeV.
When $\mu$ becomes equal to or larger than $\mu=139$~MeV, the order parameter for chiral symmetry restoration develops a discontinuity, signaling a first-order phase transition.
This gap in the quark condensate also induces a jump in the trace of the PL (see the middle and lower panels of Fig.~\ref{fig:munocero}). The value of $\Phi$ at the discontinuity indicates that at this temperature the system remains confined but in a chirally restored state. This region is usually referred to as the quarkyonic phase~\cite{McLerran:2007qj,McLerran:2008ua}.
For chemical potentials larger than that of the critical end point, the thermal equation has no solution beyond the critical temperature: the term proportional to the dilogarithm becomes too negative and Eq.~(\ref{FESRTMU}) cannot be satisfied.
The continuum threshold therefore terminates at a finite value at the chiral critical temperature (see the middle and lower panels of Fig.~\ref{fig:munocero}).
In this way we see that the Polyakov loop and the continuum threshold provide the same information: when chiral symmetry is restored, both $s_0$ and $\Phi$ show that we are still in a confined phase. This characterizes the occurrence of a quarkyonic phase.
\section{Summary and conclusions}
\label{finale}
In this article we discuss whether two widely used order parameters for the deconfinement transition, the continuum threshold and the trace of the Polyakov loop, provide the same physical insight.
To accomplish this analysis, we use finite energy sum rules for the axial-vector current correlator.
In this framework, one can define the continuum threshold as the energy at which the resonance peaks in the spectrum become very broad.
The Polyakov loop, on the other hand, is a gauge-invariant thermal Wilson loop, charged under the center of the color group, which is expected to vanish in the confined phase and to be nonzero in the deconfined phase.
The idea was to carry out the FESR program, saturating the spectral function with the pion pole approximation.
The input parameters we used in the spectral function, namely the pion mass, the pion decay constant and the chiral quark condensate, were obtained from a nonlocal SU(2) Polyakov-NJL model with Gaussian form factors.
In this way we establish the connection between both approaches.
At zero density, we compare the trace of the Polyakov loop and the continuum threshold for the local and the nonlocal version of a PNJL model.
We determine, for the nlPNJL model, that the continuum threshold vanishes at the same temperature where the Polyakov susceptibility has its maximum value.
In the case of the local PNJL model, $s_0$ vanishes between the critical temperature for the deconfinement transition, as obtained from the Polyakov-loop analysis, and the chiral restoration temperature. The fact that both deconfinement temperatures are smaller than the chiral critical temperature is in agreement with other analyses.
At finite chemical potential, we find that for both deconfinement parameters, beyond the critical end point chemical potential, the system remains in its confined phase even when the chiral symmetry is restored.
This is an evidence for the appearance of a quarkyonic phase.
We conclude that our analysis gives strong support to the idea that both deconfinement parameters indeed provide the same kind of physical information.
\section*{Acknowledgements}
This work has been partially funded by CONICET (Argentina) under Grant No.\ PIP 449; by the National University of La Plata (Argentina), Project No.\ X718; by FONDECYT (Chile), under grants No. 1130056, 1150171 and 1150847; and by Proyecto Basal (Chile) FB 0821.
\section*{Introduction}
\noindent A small variation in how we look at a mathematical concept may sometimes shed light on hidden facts. For instance, continuity and differentiability are both limit concepts but are defined differently; the latter, however, reveals more geometric information about a function than the former. Similarly, the generalization of a mathematical concept, besides being a great source of motivation in its own right, not only simplifies various intricate facts pertaining to it but extends its applicability to a wider class of problems. For instance, the first proof of the prime number theorem, a celebrated theorem about the real numbers, goes through the techniques of complex analysis. Likewise, there have been several generalizations of the notion of derivative to a fractional derivative since L'Hospital first posed this question to Leibniz in his letter \cite{Leibniz} in $1695$, asking for a meaningful interpretation of $\;\frac{d^{1/2}y}{d x^{1/2}}$. Various types of fractional derivative have been introduced so far, and most of these definitions carry an integral form. Only a few, however, have attracted the attention of mathematicians and become popular in the world of fractional calculus, namely the Riemann-Liouville, Caputo, Hadamard, Grunwald-Letnikov and Riesz fractional derivatives. To gain a good insight into fractional calculus, the reader is advised to go through \cite{Miller,Podlubny,Oldham}.\\
As pointed out above, most definitions of the fractional derivative use an integral form. In contrast, R. Khalil introduced a limit-based definition of a fractional derivative \cite{Khalil70} in $2014$, calling it the conformable fractional derivative in analogy with the standard one. However, his definition does not cover zero and negative numbers. We hereby define a new derivative, referred to as the deformable derivative, which is much simpler than Khalil's, overcomes this shortcoming, and applies to a wider class of functions.\\
\begin{definition} Let $f(t)$ be a real-valued function defined on an interval $(a,b)$. For a given number $\alpha$,
$0\leqslant\alpha\leqslant 1$, we define the {\it deformable derivative} by the following limit:
\begin{equation*}
\lim_{\epsilon\rightarrow0}\frac{(1+\epsilon\beta)f(t+\epsilon\alpha)-f(t)}{\epsilon},\quad\mbox{where}\;\;\alpha+\beta=1.\tag{*}
\end{equation*}
If this limit exists, we denote it by $D^\alpha f(t)$.
\end{definition}
\begin{remark}
One can note that definition (*) is compatible with $\alpha= 0,1$: if $\alpha=0$, then $D^0 f(t)=f(t)$, which is the usual convention, and if $\alpha=1$, then
$Df(t)=f'(t)$. Therefore it can be deemed a new fractional derivative with respect to the parameter $\alpha$. Throughout the article, unless otherwise specified, we assume that $0<\alpha\leqslant 1.$ The deformable derivative for a given $\alpha$ is sometimes referred to as the $\alpha$-derivative as well.
\end{remark}
\indent In the first section we derive a formula connecting the $\alpha$-derivative and the ordinary derivative, viz. $D^\alpha f(t)=\beta f(t)+\alpha D f(t)$, and conclude that $\alpha$-differentiability of a function is the same as differentiability, in the sense that the existence of one implies that of the other. Section two focuses on some basic properties of the deformable derivative. We illustrate geometrically the behavior of the operator $D^\alpha$ on some elementary functions; these examples exhibit how the deformable derivative sits between a function and its derivative. Section three discusses the extensions of Rolle's, Mean-Value and Taylor's theorems in the context of the deformable derivative. The next section introduces the fractional integral operator defined as follows:
\[
I^{\alpha}_{a}f(t)=\frac{1}{\alpha}e^{\frac{-\beta }{\alpha}t}\int_{a}^{t}e^{\frac{\beta }{\alpha}x}f(x)dx
\]
for a continuous function $f$ over the interval $(a,b)$. Some basic properties of this fractional integral operator $I^{\alpha}_a$ are also discussed. In the last section we solve some fractional differential equations using these operators.\\
\section{\textbf{Preliminary Results}}
This section exhibits the relation of the deformable derivative to the function and its ordinary derivative, which is where the name deformable derivative comes from. The relation further exposes the interesting fact that the graph of the deformable derivative lies linearly between those of the function and its derivative.
\bigskip
The first result is quite natural and asserts that {\it differentiability implies $\alpha$-differentiability}. The proof connects the two operators.
\begin{theorem}\label{diff.1}
A function $f$ differentiable at a point $t\in (a,b)$ is $\alpha$-differentiable at that point for any $\alpha$. Moreover, in this case we have
\begin{equation}
D^\alpha f(t)=\beta f(t)+\alpha D f(t),\quad\mbox{where\;\;} \alpha+\beta=1.\tag{**}
\end{equation}
\end{theorem}
\begin{proof}
By definition we have
\begin{eqnarray*}
D^{\alpha}f(t)&=&\lim\limits_{\epsilon\rightarrow0}\frac{\left(1+\epsilon\beta\right)f(t+\alpha \epsilon)-f(t)}{\epsilon}\\
&=& \lim\limits_{\epsilon\rightarrow 0}\left(\frac{f(t+\alpha \epsilon)-f(t)}{\epsilon}+\beta f(t+\alpha \epsilon)\right)\\
&=& \alpha\cdot D f(t)+\beta\cdot \lim\limits_{\epsilon\rightarrow0}f(t+\alpha \epsilon).
\end{eqnarray*}
Both limits exist since $f$, being differentiable at $t$, is continuous there as well; in particular $\lim\limits_{\epsilon\rightarrow0}f(t+\alpha \epsilon)=f(t)$. Hence the theorem follows.
\end{proof}
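As a quick numerical sanity check of relation (**), not part of the proof, the limit quotient of definition (*) at small $\epsilon$ can be compared with the closed form $\beta f(t)+\alpha Df(t)$:

```python
import math

# Check of relation (**): the difference quotient of definition (*)
# at small epsilon should agree with beta*f(t) + alpha*f'(t).

def deformable(f, t, alpha, eps=1e-6):
    """Quotient ((1 + eps*beta) f(t + eps*alpha) - f(t)) / eps."""
    beta = 1.0 - alpha
    return ((1.0 + eps * beta) * f(t + eps * alpha) - f(t)) / eps

f, df = math.sin, math.cos
t, alpha = 1.3, 0.4
closed_form = (1.0 - alpha) * f(t) + alpha * df(t)
assert abs(deformable(f, t, alpha) - closed_form) < 1e-4
```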
\bigskip
The second result of the section, also a natural one, concerns the question: {\it does $\alpha$-differentiability imply continuity}? The answer is affirmative. However, to prove the claim we need an auxiliary result concerning locally bounded functions. A function is said to be {\it locally bounded} at a point if it is bounded in some neighbourhood of that point. Precisely, a function $f$ defined on $(a,b)$ is locally bounded at $t$ if there are positive numbers $M$ and $\delta$ such that
\[
\bigl\lvert f(t+\epsilon)\bigr\rvert\leqslant M\quad \mbox{whenever}\;\;\bigl\lvert\epsilon\bigr\rvert<\delta.
\]
Here $\delta$ is chosen sufficiently small so that $~t+\epsilon \in(a,b)$.\\
\begin{lemma}\label{locally.bounded}
Suppose $f$ is $\alpha$-differentiable in $(a,b)$ with respect to some $\alpha$. Then $f$ is locally bounded there.
\end{lemma}
\begin{proof}
Since $f$ is $\alpha$-differentiable at any $t\in(a,b)$, there exists a number $\delta>0$ such that
\begin{eqnarray*}
\bigl\lvert (1+\epsilon\beta)f(t+\epsilon\alpha)-f(t)-\epsilon\cdot D^\alpha f(t)\bigr\rvert \leqslant \bigl\lvert\epsilon\bigr\rvert,\quad &&\mbox{whenever}\;\;\bigl\lvert\epsilon\bigr\rvert<\delta\\
\Rightarrow\quad \bigl\lvert (1+\epsilon\beta)f(t+\epsilon\alpha)\bigr\rvert \leqslant |\epsilon|+\bigl\lvert f(t)+\epsilon\cdot D^\alpha f(t)\bigr\rvert,\quad &&\mbox{whenever}\;\;\bigl\lvert\epsilon\bigr\rvert<\delta\\
\leqslant |\epsilon|\left(1+\bigl\lvert D^\alpha f(t)\bigr\rvert \right)+\bigl\lvert f(t)\bigr\rvert,\quad &&\mbox{whenever}\;\;\bigl\lvert\epsilon\bigr\rvert<\delta\\
\Rightarrow\quad \bigl\lvert f(t+\epsilon\alpha)\bigr\rvert \leqslant \frac{|\epsilon|\left(1+\bigl\lvert D^\alpha f(t)\bigr\rvert \right)+\bigl\lvert f(t)\bigr\rvert}{\bigl\lvert 1+\beta \epsilon\bigr\rvert} \quad &&\mbox{whenever}\;\;\bigl\lvert\epsilon\bigr\rvert<\delta
\end{eqnarray*}
Choosing $\delta$ small enough that $\bigl\lvert 1+\beta\epsilon\bigr\rvert\geqslant 1/2$ (say $\delta<1/(2|\beta|)$ when $\beta\neq0$), this yields that $f$ is locally bounded at $t$.
\end{proof}
The next theorem establishes the claim that $\alpha$-differentiability implies continuity.\\
\begin{theorem}\label{diff.2}
Let $f$ be $\alpha$-differentiable at a point $t$ for some $\alpha$. Then it is continuous there.
\end{theorem}
\begin{proof}
For continuity, it suffices to prove the following:
\[
\lim\limits_{\epsilon\rightarrow0}\left(f(t+\epsilon\alpha)-f(t)\right)=0.
\]
The left hand side can also be written as:
\begin{eqnarray*}
&&\lim\limits_{\epsilon\rightarrow0}\frac{\left(1+\epsilon\beta\right)f(t+\epsilon\alpha)-f(t)-\epsilon\beta f(t+\epsilon\alpha)}{\epsilon}\,\epsilon\\
&&=\lim\limits_{\epsilon\rightarrow0}\left(\frac{\left(1+\epsilon\beta\right)f(t+\epsilon\alpha)-f(t)}{\epsilon}\cdot\epsilon-\beta\epsilon\cdot f(t+\epsilon\alpha)\right)\\
&&= D^\alpha f(t)\cdot 0-\beta\lim\limits_{\epsilon\rightarrow0}\epsilon f(t+\epsilon\alpha)\quad(\mbox{by hypothesis})\\
&&=-\beta\lim\limits_{\epsilon\rightarrow0}\epsilon f(t+\epsilon\alpha)=0.\quad(\mbox{by Lemma \ref{locally.bounded}})
\end{eqnarray*}
This completes the proof.
\end{proof}
A stronger version of Theorem \ref{diff.2}, obtained as an easy consequence, is given in the following corollary:
\begin{corollary}\label{diff.3}
An $\alpha$-differentiable function $f$ defined in $(a,b)$ is differentiable as well.
\end{corollary}
\begin{proof}
For the existence of the derivative we use its definition:
\begin{eqnarray*}
Df(t)&=& \frac{1}{\alpha}\cdot\lim\limits_{\epsilon\rightarrow 0}\frac{f(t+\alpha \epsilon)-f(t)}{\epsilon}\\
&=&\frac{1}{\alpha}\cdot\lim\limits_{\epsilon\rightarrow 0}\frac{\left(1+\epsilon\beta\right)f(t+\alpha \epsilon)-f(t)-\epsilon\beta f(t+\alpha \epsilon)}{\epsilon}\\
\Rightarrow \quad D f(t) &=& \frac{1}{\alpha}\left( \lim\limits_{\epsilon\rightarrow 0}\frac{\left(1+\epsilon\beta\right)f(t+\alpha \epsilon)-f(t)}{\epsilon}-\beta\cdot \lim\limits_{\epsilon\rightarrow0}f(t+\alpha \epsilon)\right)
\end{eqnarray*}
Using the hypothesis and Theorem \ref{diff.2}, the result follows.
\end{proof}
We summarise all this by saying that the two concepts, $\alpha$-differentiability and differentiability of a function defined in $(a,b)$, are equivalent in the sense that each implies the other. We record this as a separate theorem.\\
\begin{theorem}\label{diff.4}
Let $f$ be defined in $(a,b)$. For any $\alpha$, $f$ is $\alpha$-differentiable if and only if it is differentiable.
\end{theorem}
\begin{remark}\label{rem.1.1}
We remark here that, over the interval $(a,b)$, the existence of the $\alpha$-derivative for one particular value of $\alpha>0$ is sufficient for the existence of the $\alpha$-derivative for all other values of $\alpha$.
\end{remark}
Though the most important case is $\alpha\in [0,1]$, what happens if $\alpha \in (n,n+1]$ for a natural number $n$, and what is the definition then?
\\
\begin{definition}
Suppose $f$ is $n$-times differentiable at $t\in (a,b).$ For a given $\alpha\in(n,n+1]$, we extend the deformable derivative in a natural way and define it by the following limit:
\[
D^\alpha f(t)=\lim\limits_{\epsilon\rightarrow 0}\frac{\left(1+\epsilon\{\beta\}\right)D^{n}f\left(t+\epsilon\{\alpha\}\right)-D^{n}f(t)}{\epsilon}
\]
where $\{\alpha\}$ is the fractional part of $\alpha$ and $\{\alpha\}+\{\beta\}=1$.
\end{definition}
As a consequence of the above definition, if $f^{(n+1)}$ exists, we have
\[
D^\alpha f(t)=\{\beta\}D^{n}f(t)+\{\alpha\}D^{n+1}f(t).
\]
\section{\textbf{Basic Properties of Deformable Derivative}}
\noindent Apart from discussing some fundamental properties of the deformable derivative, such as linearity and commutativity, this section lists the $\alpha$-derivatives of some elementary functions and gives a geometric illustration of $D^{\alpha}f$.
\begin{theorem}\label{properties.1}
The operator $D^{\alpha}$ possesses the following properties:
\begin{itemize}
\item [$($\rm a$)$] Linearity: $D^{\alpha} \left(a f+b g\right)=a D^{\alpha} f+b D^{\alpha} g$
\item [$($\rm b$)$] Commutativity: $D^{\alpha_1}\cdot D^{\alpha_2}=D^{\alpha_2}\cdot D^{\alpha_1}$. Hence in general, for any positive integer $n\in \mathbb N$ we have $D^\alpha \cdot D^n=D^n\cdot D^\alpha$
\item [$($\rm c$)$] For a constant function $k$, \;$D^\alpha(k)=\beta k$.
\item [$($\rm d$)$] $D^{\alpha} (f\cdot g)= \left(D^{\alpha}f\right)\cdot g+\alpha f\cdot D g.$ Hence $D^\alpha$ does not obey the Leibniz rule.
\end{itemize}
\end{theorem}
\begin{proof}
Linearity is evident from definition. Commutativity follows readily by noticing the symmetries in the expression below:
\[
D^{\alpha_{1}}\left(D^{\alpha_{2}}f\right)
=\beta_{1}\beta_{2}f+(\alpha_{1}\beta_{2}+\alpha_{2}\beta _{1})D f+\alpha_{1}\alpha_{2}D^2 f,
\]
where $\alpha_i+\beta_i=1$ for $i=1,2$. Using the relation $D^\alpha =\beta I+\alpha D$, parts (c) and (d) are easily established. The violation of the Leibniz rule in part (d) motivates regarding $D^\alpha$ as a fractional derivative; readers are advised to see \cite{Tarasov}.
\end{proof}
Most familiar functions behave well with respect to differentiation, so their deformable derivatives can be obtained from expression (**) of Theorem \ref{diff.1}. We list the deformable derivatives of some elementary functions in the following proposition.
\begin{proposition}
$~$
\begin{enumerate}
\item $ D^{\alpha} (t^r )=\beta t^r+r\alpha t^{r-1}$, \quad $r\in \mathbb{R}$.
\item $D^{\alpha}(e^t)=e^t.$
\item $D^{\alpha}(\sin t)=\beta\sin t+\alpha\cos t$.
\item $D^{\alpha}(\log t)=\beta\log t+\frac{\alpha}{t}$, $t>0$.
\end{enumerate}
\end{proposition}
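These closed forms can be checked numerically against the limit definition (*); a minimal sketch:

```python
import math

# Check each entry of the proposition against the limit definition (*).

def d_alpha(f, t, alpha, eps=1e-7):
    beta = 1.0 - alpha
    return ((1.0 + eps * beta) * f(t + eps * alpha) - f(t)) / eps

alpha, beta, t, r = 0.6, 0.4, 2.0, 1.5
cases = [
    (lambda x: x**r, beta * t**r + r * alpha * t**(r - 1)),      # entry (1)
    (math.exp,       math.exp(t)),                               # entry (2): e^t is a fixed point
    (math.sin,       beta * math.sin(t) + alpha * math.cos(t)),  # entry (3)
    (math.log,       beta * math.log(t) + alpha / t),            # entry (4), t > 0
]
for func, expected in cases:
    assert abs(d_alpha(func, t, alpha) - expected) < 1e-4
```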
It is intuitively clear that the operator $D^\alpha$ is continuous with respect to the parameter $\alpha$; we leave the proof to the reader. Instead we focus on the geometric realization of the idea with some examples. The following figures depict not only this continuity but also show how the operator deforms a function into its derivative.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{squareeps}
\caption{$D^\alpha$ operating on $t^2$}
\label{fig:1}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{ratioaleps}
\caption{$D^\alpha$ operating on $t^{3/2}$}
\label{fig:2}
\end{subfigure}
\\
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{sineps}
\caption{$D^\alpha$ operating on $\sin t$}
\label{fig:3}
\end{subfigure}
~
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{logeps}
\caption{$D^\alpha$ operating on $\log t, t > 0$}
\label{fig:4}
\end{subfigure}
\caption{$D^\alpha$-derivative of different functions, $\alpha \in (0,1)$}
\end{figure}
\section{\textbf{Some Useful Theorems on the Deformable Derivative}}
In this section we extend Rolle's, the Mean Value and Taylor's theorems to the deformable derivative with respect to $\alpha$.
\begin{theorem}\label{Rolle}{\bf\text{(Rolle's theorem on deformable derivative)}}
Let $f:[a,b]\longrightarrow \mathbb{R}$ be a function satisfying:
\begin{itemize}
\item [$($\rm i$)$] $f$ is continuous on $[a,b]$
\item [$($\rm ii$)$] $f$ is $\alpha$-differentiable in $(a,b)$
\item [$($\rm iii$)$] $f(a)=f(b).$
\end{itemize}
Then, there exists a point $c\in (a,b)$ such that $D^\alpha f(c)=\beta f(c)$.
\end{theorem}
\begin{proof}
By Theorem \ref{diff.4}, $f$ is differentiable in $(a,b)$. Thus $f$ satisfies all the conditions of the classical Rolle's theorem, so there is a point $c\in(a,b)$ such that $D f(c)=0$. Hence, using equation (**) of Theorem \ref{diff.1}, we have $D^\alpha f(c)=\beta f(c)$.
\end{proof}
As in the classical setting, the Mean Value theorem is a consequence of Rolle's theorem; the same holds for the deformable derivative.
\begin{theorem}\label{meanvalue}
{\bf\text{(Mean Value theorem on deformable derivative)}}
Let $f:[a,b]\longrightarrow \mathbb{R}$ be a function satisfying:
\begin{itemize}
\item [$($\rm i$)$] $f$ is continuous on $[a,b]$
\item [$($\rm ii$)$] $f$ is $\alpha$- differentiable in $(a,b)$.
\end{itemize}
Then, there exists a point $c\in (a,b)$ such that
\[
D^\alpha f(c)=\beta f(c)+\alpha\frac{f(b)-f(a)}{b-a}.
\]
\end{theorem}
\begin{proof}
We consider the function $g$ defined by:
\[
g(t)=f(t)-f(a)-\frac{f(b)-f(a)}{b-a}\,(t-a).
\]
Notice that $g$ satisfies all the conditions of Rolle's Theorem \ref{Rolle}, so there exists $c\in(a,b)$ such that $D^\alpha g(c)=\beta g(c)$, i.e. $D g(c)=0$. This yields the desired expression in the theorem.
\end{proof}
\begin{theorem}\label{Taylor}
{\bf\text{(Taylor's theorem)}}
Suppose $f$ is $n$-times $\alpha$-differentiable such that all $\alpha$-derivatives are continuous on $[a,a+h].$ Then
\[
f(a+h)=\sum\limits_{k=0}^{n-1}\frac{h^{k}}{k!\alpha^{k}}\left(D^{\alpha}_{k}f(a)-\beta\frac{(1-\theta)^{k-n+1}h}{\alpha n}D^{\alpha}_{k}f(a+\theta h)\right)+\frac{h^{n}}{n!\alpha^{n}}D^{\alpha}_{n}f(a+\theta h)
\]
where $D_k^\alpha= D^\alpha D^\alpha\cdots D^\alpha$ $(k$ times$)$ and $0<\theta<1$.
\end{theorem}
\begin{proof}
Consider a function $\phi$ defined by:
\begin{equation}
\phi(t)=\sum\limits_{k=0}^{n-1}\frac{(a+h-t)^k}{k!\alpha^k}D^{\alpha}_k f(t) + \frac{A}{n!\alpha^{n}}(a+h-t)^{n},
\end{equation}
where $A$ is a constant chosen such that $\phi(a+h)=\phi(a)$. This yields
\begin{equation}
\frac{A}{n!\alpha^{n}}h^{n}=f(a+h)-\sum\limits_{k=0}^{n-1} \frac{h^{k}}{k!\alpha^{k}}D^{\alpha}_{k}f(a)
\end{equation}
Now by hypothesis, $\phi$ is $\alpha$-differentiable in $(a,a+h)$. Using part (d) of theorem \ref{properties.1}, the $\alpha$-derivative $D^{\alpha}\phi$ is given by
\begin{equation}
D^{\alpha}\phi(t)=\frac{(a+h-t)^{n-1}}{\alpha^{n-1}(n-1)!}D^{\alpha}_{n}f(t)+\frac{A}{\alpha^{n}n!}\left(\beta(a+h-t)^{n}-\alpha n(a+h-t)^{n-1}\right)
\end{equation}
Hence $\phi$ satisfies all the conditions of Rolle's Theorem \ref{Rolle}. So there is some $\theta\in(0,1)$ such that
\[
D^{\alpha}\phi(a+\theta h)=\beta\phi(a+\theta h).
\]
Using equations ($1$), ($2$) and ($3$), we have
\[
f(a+h)=\sum\limits_{k=0}^{n-1}\frac{h^{k}}{k!\alpha^{k}}\left(D^{\alpha}_{k}f(a)-\beta\frac{(1-\theta)^{k-n+1}h}{\alpha n}D^{\alpha}_{k}f(a+\theta h)\right)+\frac{h^{n}}{n!\alpha^{n}}D^{\alpha}_{n}f(a+\theta h)
\]
This completes the proof.
\end{proof}
\section{\textbf{Fractional Integral}}
\noindent The fractional integral, being an inverse operator to the fractional derivative, plays an equally important role in the field of fractional calculus. This section defines a fractional integral as an inverse operator for the deformable derivative. Some basic properties of this fractional integral are also discussed. All functions considered in this section are assumed to be continuous.\\
\begin{definition}
Let $f$ be a continuous function defined on $[a,b]$. We define the {\it $\alpha$-fractional integral} of $f$, denoted by $I^{\alpha}_{a}f$, by the integral
\begin{equation*}
I^{\alpha}_{a}f(t)=\frac{1}{\alpha}e^{\frac{-\beta }{\alpha}t}\int_{a}^{t}e^{\frac{\beta }{\alpha}x}f(x)dx,\quad\mbox{where}\;\alpha+\beta=1,\;\alpha\in(0,1].\tag{\#}
\end{equation*}
\end{definition}
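Before turning to its properties, here is a numerical sanity check of definition (\#): differentiating $g=I^{\alpha}_{a}f$ gives $g'=-\tfrac{\beta}{\alpha}\,g+\tfrac{1}{\alpha}f$, so that $\beta g+\alpha g'=f$. A sketch with a midpoint-rule quadrature:

```python
import math

# Check that definition (#) inverts the deformable derivative:
# for g = I_a^alpha f one has g' = -(beta/alpha) g + f/alpha,
# hence beta*g + alpha*g' = f.

def frac_integral(f, a, t, alpha, n=2000):
    """Midpoint-rule evaluation of I_a^alpha f(t), Eq. (#)."""
    beta = 1.0 - alpha
    h = (t - a) / n
    acc = sum(math.exp(beta / alpha * (a + (i + 0.5) * h)) * f(a + (i + 0.5) * h)
              for i in range(n)) * h
    return math.exp(-beta / alpha * t) * acc / alpha

f = math.cos
a, t, alpha = 0.0, 1.0, 0.7
beta = 1.0 - alpha
g = lambda s: frac_integral(f, a, s, alpha)
h = 1e-4
dg = (g(t + h) - g(t - h)) / (2.0 * h)   # ordinary derivative of g at t
assert abs(beta * g(t) + alpha * dg - f(t)) < 1e-3
```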
Some basic properties of this fractional integral are contained in next theorem.
\begin{theorem}\label{properties.2}
The operator $I^{\alpha}_a$ possesses the following properties:
\begin{itemize}
\item [$($\rm a$)$] Linearity: $I^{\alpha}_{a}\left(b f+c g\right)=b I^{\alpha}_{a}f+c I^{\alpha}_{a}g.$
\item [$($\rm b$)$] Commutativity:
$I_{a}^{\alpha_1}I_{a}^{\alpha_2}=I_{a}^{\alpha_2}I_{a}^{\alpha_1}$,
where $\alpha_i+\beta_i=1$,\;$i=1,2.$
\end{itemize}
\end{theorem}
\begin{proof}
Linearity readily follows from definition (\#). For commutativity, we consider
\begin{eqnarray*}
I_{a}^{\alpha_{1}}I_{a}^{\alpha_{2}}f(t) &=& I_{a}^{\alpha_{1}}\left(\frac{1}{\alpha_{2}}e^{-\frac{\beta_{2}}{\alpha_{2}}t}\int\limits_{a}^{t}e^{\frac{\beta_{2}}{\alpha_{2}}\theta}f(\theta)d\theta\right)\\
&=& \frac{1}{\alpha_{1}}e^{-\frac{\beta_{1}}{\alpha_{1}}t}\int\limits_{a}^{t}e^{\frac{\beta_{1}}{\alpha_{1}}x}\left(\frac{1}{\alpha_{2}}e^{-\frac{\beta_{2}}{\alpha_{2}}x}\int\limits_{a}^{x}e^{\frac{\beta_{2}}{\alpha_{2}}\theta}f(\theta)d\theta\right)dx\\
&=& \frac{1}{\alpha_{1}\alpha_{2}}e^{-\frac{\beta_{1}}{\alpha_{1}}t}\int\limits_{a}^{t}\int\limits_{a}^{x}e^{(\frac{\beta_{1}}{\alpha_{1}}-\frac{\beta_{2}}{\alpha_{2}})x}e^{\frac{\beta_{2}}{\alpha_{2}}\theta}f(\theta)\; d\theta dx\\
&=& \frac{1}{\alpha_{1}\alpha_{2}}e^{-\frac{\beta_{1}}{\alpha_{1}}t}\int\limits_{a}^{t}\int\limits_{\theta}^{t}e^{(\frac{\beta_{1}}{\alpha_{1}}-\frac{\beta_{2}}{\alpha_{2}})x}e^{\frac{\beta_{2}}{\alpha_{2}}\theta}f(\theta)\;dx d\theta\\
&=& \frac{1}{\alpha_{1}\alpha_{2}}e^{-\frac{\beta_{1}}{\alpha_{1}}t}\int\limits_{a}^{t}e^{\frac{\beta_{2}}{\alpha_{2}}\theta}f(\theta)\left(\int\limits_{\theta}^{t}e^{(\frac{\beta_{1}}{\alpha_{1}}-\frac{\beta_{2}}{\alpha_{2}})x}dx\right)d\theta\\
&=&\frac{1}{\beta_{1}\alpha_{2}-\beta_{2}\alpha_{1}}\left(e^{-\frac{\beta_{2}}{\alpha_{2}}t}\int\limits_{a}^{t}e^{\frac{\beta_{2}}{\alpha_{2}}\theta}f(\theta)d\theta-e^{-\frac{\beta_{1}}{\alpha_{1}}t}\int\limits_{a}^{t}e^{\frac{\beta_{1}}{\alpha_{1}}\theta}f(\theta)d\theta\right)\\
&=& \frac{1}{\beta_{1}\alpha_{2}-\beta_{2}\alpha_{1}}\left(\alpha_{2}I^{\alpha_{2}}_{a}-\alpha_{1}I^{\alpha_{1}}_{a}\right)f(t)
\end{eqnarray*}
Interchanging the role of $\alpha_1$ and $\alpha_2$, we have
\[
I_{a}^{\alpha_{2}}I_{a}^{\alpha_{1}}f(t)= \frac{1}{\beta_{2}\alpha_{1}-\beta_{1}\alpha_{2}}\left(\alpha_{1}I^{\alpha_{1}}_{a}-\alpha_{2}I^{\alpha_{2}}_{a}\right)f(t)
=I_{a}^{\alpha_{1}}I_{a}^{\alpha_{2}}f(t).
\]
This completes the proof.
\end{proof}
The next theorem is a version of the fundamental theorem of calculus; roughly, it says that the fractional integral $I_a^\alpha$ is an inverse operation to $\alpha$-differentiation $D^\alpha$.
\begin{theorem}\label{FTC}
{\bf\text{(Inverse Property)}} Let $f$ be a continuous function defined on $[a,b].$ Then $I_{a}^{\alpha}f$ is $\alpha$-differentiable in $(a,b)$; in fact, we have $D^{\alpha}\left(I_{a}^{\alpha}f(x)\right)=f(x).$ Conversely, suppose $f$ has a continuous $\alpha$-derivative $g$ over $(a,b)$, that is, $g=D^\alpha f$. Then we have
\[
I^{\alpha}_{a}\left(D^\alpha f(t)\right)=I^{\alpha}_{a}\left(g(t)\right)=f(t)-e^{\frac{\beta}{\alpha}(a-t)}f(a).
\]
\end{theorem}
\begin{proof}
Since $f$ is continuous, in view of Theorem \ref{diff.4}, $I^\alpha_a f$ is $\alpha$-differentiable. Setting $g=I^\alpha_a f$, we have
\[
D^{\alpha}\left(I_{a}^{\alpha}f(t)\right)=D^\alpha g(t)=\alpha Dg(t)+\beta g(t)
\]
We know that a particular solution of the differential equation $\alpha Dg+\beta g=f$ is given by
\[
g(t)=\frac{1}{\alpha}e^{\frac{-\beta }{\alpha}t}\int_{a}^{t}e^{\frac{\beta }{\alpha}x}f(x)dx;
\]
indeed, differentiating under the integral sign gives $Dg(t)=-\frac{\beta}{\alpha}g(t)+\frac{1}{\alpha}f(t)$, so that $\alpha Dg(t)+\beta g(t)=f(t)$. This completes the first part of the theorem. For the second part, we have
\begin{eqnarray*}
g(t)&=& D^\alpha f(t)=\alpha Df(t)+\beta f(t)\\
\Rightarrow\quad I^\alpha_a g(t) &=& \alpha I^\alpha_a\left( Df(t) \right)+\beta I^\alpha_a f(t)\\
&=& e^{\frac{-\beta }{\alpha}t} \int_{a}^{t}e^{\frac{\beta }{\alpha}x}f^\prime(x)dx +\beta I^\alpha_a f(t)\\
&=& e^{\frac{-\beta }{\alpha}t}\left( \left[e^{\frac{\beta }{\alpha}x} f(x) \right]_a^t-\frac{\beta}{\alpha}\int_{a}^{t}e^{\frac{\beta }{\alpha}x}f(x)dx\right)
+\beta I^\alpha_a f(t)\\
\Rightarrow\quad I^\alpha_a g(t) &=& f(t)- e^{\frac{\beta }{\alpha}(a-t)}f(a).
\end{eqnarray*}
This completes the second part.
\end{proof}
We end the section with a list of fractional integrals of some elementary functions in the following proposition, and leave their verification to the reader.
\begin{proposition}$~$
\begin{enumerate}
\item $I^{\alpha}_a\sin t=\frac{1}{\alpha^2+\beta^2}\left(\beta \sin t-\alpha\cos t + e^{\frac{\beta}{\alpha}(a-t)} \left(\alpha\cos a -\beta\sin a\right)\right)$
\item $I^{\alpha}_a e^{t}= \left(e^{t} - e^{\frac{(a-\beta t)}{\alpha}}\right)$
\item $I^{\alpha}_a\lambda =\frac{\lambda}{\beta} \left(1-e^{\frac{\beta}{\alpha}(a-t)}\right),$ where $\lambda$ is a constant.
\item $I^{\alpha}_0 t^n=\frac{1}{\beta}\left(\sum_{k=0}^{n}\frac{(-1)^k\; n!}{(n-k)!}\left(\frac{\alpha}{\beta}\right)^k t^{n-k} + (-1)^{n+1}\; n!\left(\frac{\alpha}{\beta}\right)^n e^{-\frac{\beta}{\alpha}t}\right)$
\end{enumerate}
\end{proposition}
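The closed forms in the proposition can be checked numerically against definition (\#). The sketch below (plain Python; the quadrature step count and the test values of $\alpha$, $a$, $t$ and $\lambda$ are arbitrary choices for illustration) verifies items (1) and (3):

```python
import math

def frac_integral(f, a, t, alpha, n=20000):
    # Alpha-fractional integral of definition (#):
    # I_a^alpha f(t) = (1/alpha) e^{-beta t/alpha} * int_a^t e^{beta x/alpha} f(x) dx,
    # with beta = 1 - alpha, evaluated by the composite trapezoidal rule.
    beta = 1.0 - alpha
    h = (t - a) / n
    g = lambda x: math.exp(beta * x / alpha) * f(x)
    s = 0.5 * (g(a) + g(t)) + sum(g(a + k * h) for k in range(1, n))
    return math.exp(-beta * t / alpha) * h * s / alpha

alpha, a, t = 0.7, 0.5, 2.0
beta = 1.0 - alpha

# Item (1): closed form of I_a^alpha sin t
closed_sin = (beta * math.sin(t) - alpha * math.cos(t)
              + math.exp(beta / alpha * (a - t))
              * (alpha * math.cos(a) - beta * math.sin(a))) / (alpha ** 2 + beta ** 2)
num_sin = frac_integral(math.sin, a, t, alpha)

# Item (3): closed form of I_a^alpha applied to a constant lam
lam = 2.5
closed_const = lam / beta * (1.0 - math.exp(beta / alpha * (a - t)))
num_const = frac_integral(lambda x: lam, a, t, alpha)

print(abs(num_sin - closed_sin) < 1e-6, abs(num_const - closed_const) < 1e-6)  # True True
```

Both closed forms agree with the quadrature to well within the trapezoidal error for this step count.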
\section{\textbf{Applications to Fractional Differential Equations}}
We solve some simple linear fractional differential equations using the deformable derivative as the $D^\alpha$ operator. In the first example we discuss the method of solving a homogeneous linear fractional differential equation, and in the second a non-homogeneous linear one.\\
\begin{example} Consider the fractional differential equation:
\[
D^{\alpha} y(t)+P(t) y(t)=0,
\]
where $P(t)$ is continuous. Using expression (**), the equation gets transformed to
\begin{eqnarray*}
\alpha Dy+ \beta y+P(t) y &=& 0\\
\Rightarrow\quad Dy+\frac{\left(\beta+P(t)\right)}{\alpha}y &=& 0.
\end{eqnarray*}
This is a simple first-order linear ordinary differential equation, whose general solution is given by
\[
y = C e^{\displaystyle{\frac{-\left(\beta t+\int P(t)dt\right)}{\alpha}}}
\]
where $C$ is an arbitrary constant.
\end{example}
\begin{example} We now consider a non-homogeneous linear fractional equation:
\[
D^{1/2}y+y=te^{-t}.
\]
This can be written as
\[
\frac{1}{2}y+\frac{1}{2}Dy + y = te^{-t}\quad
\Rightarrow \quad Dy + 3y = 2te^{-t}
\]
The general solution of this equation is given by
\[
y(t)=Ce^{-3t}+\left (t-\frac{1}{2}\right)e^{-t}
\]
where $C$ is a constant.
\end{example}
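The stated solution can be sanity-checked numerically. Below is a minimal sketch (plain Python; the constant $C$, the test points and the finite-difference step are arbitrary choices) that evaluates $D^{1/2}y$ through the defining expression $D^\alpha y=\alpha Dy+\beta y$, with $Dy$ approximated by a central difference:

```python
import math

def y(t, C=1.7):
    # General solution from the example: y = C e^{-3t} + (t - 1/2) e^{-t}
    return C * math.exp(-3 * t) + (t - 0.5) * math.exp(-t)

def D_alpha(f, t, alpha=0.5, h=1e-6):
    # Deformable derivative D^alpha f = alpha*Df + beta*f, beta = 1 - alpha,
    # with Df approximated by a central difference.
    beta = 1.0 - alpha
    return alpha * (f(t + h) - f(t - h)) / (2 * h) + beta * f(t)

for t in (0.3, 1.0, 2.5):
    lhs = D_alpha(y, t) + y(t)      # D^{1/2} y + y
    rhs = t * math.exp(-t)
    assert abs(lhs - rhs) < 1e-6
print("solution verified")
```

The residual $D^{1/2}y + y - te^{-t}$ vanishes up to finite-difference error for any value of $C$, as expected.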
\begin{example} The fractional differential equation:
\[
D^{\alpha_{2}}[D^{\alpha_{1}}y(t)]=0
\]
is equivalent to the following second order homogeneous differential equation:
\[
\alpha_{1}\alpha_{2}D^2y+(\alpha_{1}\beta_{2}+\alpha_{2}\beta_{1})Dy+\beta_{1}\beta_{2}y=0.
\]
The roots of its auxiliary equation are:
\[
\frac{-\beta_{1}}{\alpha_{1}}\; \mbox{and}\;\frac{-\beta_{2}}{\alpha_{2}}.
\]
Hence, in the case of distinct roots, the general solution of the fractional differential equation is
\[
y=C_1e^{\displaystyle{-\frac{\beta_1}{\alpha_1}t}}+C_2e^{\displaystyle{-\frac{\beta_2}{\alpha_2}t}},
\]
and in the case of repeated roots (\emph{i.e.}, $\alpha_{1}=\alpha_{2}=\alpha$), we have
\[
y=(C_1+C_2t)e^{\displaystyle{-\frac{\beta}{\alpha}t}}.
\]
\end{example}
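As with the previous example, the distinct-roots case can be verified numerically; the sketch below (plain Python; the values of $\alpha_1$, $\alpha_2$, $C_1$, $C_2$ and the test points are arbitrary choices) checks that $D^{\alpha_2}[D^{\alpha_1}y]$ vanishes on the claimed general solution:

```python
import math

def D_alpha(f, t, alpha, h=1e-5):
    # Deformable derivative D^alpha f = alpha*Df + (1 - alpha)*f,
    # with Df approximated by a central difference.
    return alpha * (f(t + h) - f(t - h)) / (2 * h) + (1 - alpha) * f(t)

a1, a2 = 0.4, 0.7                    # alpha_1, alpha_2 (distinct)
b1, b2 = 1 - a1, 1 - a2              # beta_i = 1 - alpha_i
C1, C2 = 2.0, -1.3                   # arbitrary constants

y = lambda t: C1 * math.exp(-b1 / a1 * t) + C2 * math.exp(-b2 / a2 * t)

inner = lambda t: D_alpha(y, t, a1)  # D^{alpha_1} y
for t in (0.2, 1.0, 3.0):
    assert abs(D_alpha(inner, t, a2)) < 1e-4   # D^{alpha_2}[D^{alpha_1} y] ~ 0
print("general solution verified")
```

Each exponential is annihilated by the corresponding first-order factor, so the nested operator kills the whole linear combination.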
We end the paper with some important questions that are yet to be answered.
\begin{itemize}
\item [$($\rm i$)$] What is the geometric interpretation and physical significance of the deformable derivative?
\item [$($\rm ii$)$] Is there any similarity between the classical fractional derivative and the deformable derivative?
\item [$($\rm iii$)$] The deformable derivative is equivalent to, but not the same as, the ordinary derivative; can it therefore be used to analyse functions in ways the ordinary derivative cannot?
\end{itemize}
\bibliographystyle{amsplain}
\subsection{Jailhouse: Concepts and Rationale}
Safely running real-time workloads of mixed criticality on multi-core
systems~\cite{linuxlife} next to Linux is a common industrial requirement
in many domains. Contemporary multi-core platforms typically feature more
CPU cores---Hardware Threads (HARTs) in \five{RISC} parlance---than workloads, and critical tasks can be exclusively assigned to
dedicated, isolated CPU cores.
Linux, together with its feature-rich
ecosystem, can then execute uncritical tasks on the remaining CPU cores.
Embedded virtualisation is a promising approach for implementing safe isolation of
different workloads. Execution domains, including Linux, run as guests of a hypervisor.
This approach is, for example, implemented by
XtratuM\cite{xtratum}, NOVA~\cite{nova}, and PikeOS~\cite{pikeos}.
Static hardware partitioning is a special case of embedded virtualisation;
it \emph{exclusively} assigns hardware resources to compute domains.
Exclusive assignment of hardware resources includes exclusive assignment of
physical CPU cores to logical domains. Hence, static hardware partitioning
assumes that available computational resources exceed required computational power.
Consequently, no scheduler is required by the hypervisor, which avoids scheduling
overhead. Virtualisation extensions ensure safe cross-domain isolation.
Our approach is based on Jailhouse, a thin Linux-based partitioning
hypervisor that targets real-world systems. Motivated by the
exokernel concept~\cite{exokernel}, our aim is to reduce the
hypervisor to a minimum level of abstraction. Our goal is to minimise
the hypervisor's interaction with guests, with the intention of
preserving key quality parameters of any guest software regardless of
if it is executed natively, or under the presence of a control structure.
With this approach, guests inherit real-time guarantees of the
underlying hardware by design.
Besides unavoidable hardware overhead due to the virtualisation of the system
(\emph{e.g.}\xspace, second level page table translation~\cite{drepper:costofvirtualisation}),
no further software-induced overhead due to the existence of a VMM occurs during
operation.
A small code base is a precondition for certifiability for critical environments. The reduction
of guest interaction ensures the maintenance of the platform's
real-time capabilities by design---if no interceptions take place,
the hypervisor cannot introduce increased latencies.
Running Linux in uncritical partitions of the system is a requirement for many
real-world use cases. Therefore, we partition a booted Linux system, instead of
booting Linux on a partitioned system. This offloads complex hardware
initialisation to Linux, and ensures a small code base of the hypervisor, as
only a few platform specific drivers are required (during the operational
phase, Linux is lifted into the state of a virtual machine).
To create new isolated domains, specific hardware resources (\emph{e.g.}\xspace, CPUs, memory,
peripheral devices) are offlined and removed from Linux. The hypervisor is then
called to create a new domain, which has raw access to these resources. Secondary
real-time operating systems, including Linux, or even bare-metal applications
can be loaded into the domains. Jailhouse does not paravirtualise any resources
as it exclusively assigns resources to computing domains.
The hypervisor shall only be active during its boot phase (the initialisation
of the hypervisor) and during the partitioning phase (creation, initialisation
and boot of new domains). During the operational phase (system is partitioned,
and all partitions are running), the goal is that no further action is taken by
the hypervisor.
\subsection{RISC-V Platform Virtualisation}
\subsubsection{Virtualisation Architecture}
While the \five{RISC} platform is designed to be fully virtualisable even without
dedicated virtualisation extensions (\emph{i.e.}\xspace, via trap-and-emulate mechanisms), the
hypervisor extension, which allows for executing
most instructions of virtual guests natively, has recently been ratified by
\emph{RISC-V International}. \five{RISC} implements three basic
privilege modes: (1) Machine-Mode (M-Mode) where usually the Supervisor Binary
Interface (SBI)---a BIOS-like firmware---resides, (2) Supervisor-Mode (S-Mode),
typically used for the privileged operating system and (3) User-Mode (U-Mode)
for unprivileged user-level applications. When the hypervisor extension is
active, the S-Mode is utilised by the hypervisor and called \emph{Hypervisor
extended-Supervisor (HS-Mode)}. Guests run in Virtualised Supervisor (VS-Mode),
which provides shadows of key registers to minimise interventions by
the hypervisor.
\subsubsection{Memory Management}
The memory management unit (MMU) is virtualisation aware: page tables are
resolved transparently for guests using a second translation stage for guest
physical memory to host physical memory conversion. No hypervisor intervention
is needed for page table walks and modifications. The two-stage address
translation process~\cite{drepper:08:acmqueue} does reduce performance,
especially on TLB misses. However, TLB misses can be reduced by using huge
pages in the second \emph{G-Stage} translation level. As the MMU counterpart
for IO-devices (IOMMU) is still under specification process~\cite{iommu}, its
desired memory protection features and virtualisation capabilities are lacking
for devices that use direct memory access (DMA) features. This makes direct
assignment of such devices to a guest---via techniques like interrupt
remapping---impossible, and guests cannot use such devices without
heavy hypervisor intervention.
\subsubsection{Interrupt Controller}
An architectural weakness of current generation \five{RISC} is the \emph{Platform
Level Interrupt Controller} (PLIC), which is the first generation standard
interrupt controller.
The \five{RISC} hypervisor extension defines an \emph{interrupt pass-through}
mechanism. The intention is to allow interrupt requests to raise exceptions in
guests without mediation by the hypervisor. As such, this feature is
highly desirable for a hypervisor focused on the real-time domain.
Unfortunately, having been developed earlier than the hypervisor extension,
neither the \emph{Core Local Interrupt Controller} (CLINT) nor PLIC allow
for direct interrupt remapping to the guest. Any interrupt (timer,
software, external) therefore first arrives at the hypervisor (timer and
software interrupts even make another detour through M-mode), before
having to be injected into the targeted domain. Due to further design
misconceptions of the interrupt controllers, additional hypervisor
intervention is needed for the guest to mark an interrupt as being handled
(\emph{claim}) and as finished (\emph{complete}). This is because the
registers associated with claim and complete are memory-mapped for multiple HARTs on
the same memory page, which means we cannot rely on the MMU for access control. Therefore, a superordinate control structure is needed to
prevent a cell from (un-)intentionally interfering with any other cell's
interrupts. These problems heavily affect interrupt latency and thus real-time
capabilities.
\subsubsection{Hyperthreading}
As \five{RISC} does not support hyperthreading, there are far fewer possibilities for
malicious inter-cell interference than on platforms like Intel (\emph{e.g.}\xspace, Spectre,
Meltdown). However, last level caches are shared and not yet
partitionable, which opens up the possibility for influencing other cell
latencies via cache pollution~\cite{pinto:21:first}.
\subsection{Benchmarking Setup}
To test performance implications of the hypervisor overhead on \emph{real
hardware}, we use the \textit{Xilinx Virtex UltraScale+ VCU118 FPGA}, using the
\five{NOEL}~\cite{noelv_gaisler} bitstream, which is a synthesisable VHDL model of a
\five{RISC} processor that implements Hypervisor Extensions. H-Extensions and the
interrupt controllers follow the final, ratified
specifications. While there is an open-source bitstream available, we used the
commercial one, which supports performance optimisations and L1 and last level
caches (LLCs). The \five{NOEL} has six HARTs, each of which has a dedicated L1
cache, while sharing a common LLC. HARTs and caches run at 100MHz.
For static hardware partitioning, we use Jailhouse as hypervisor.
We perform micro-benchmarks to quantify additional overheads and latencies
due to the existence of the hypervisor. All micro-benchmarks are conducted in
the following measurement scenarios:
\begin{enumerate}[label=(\Alph*)]
\item as a bare-metal application without an underlying hypervisor,
\item with Jailhouse in a statically partitioned execution domain (parallel to Linux),
\item as (B), but with additional load in the Linux partition.
\end{enumerate}
In scenario (A), we measure the \emph{baseline} of the raw system, that is,
overheads and latencies without the existence of a hypervisor. (A) represents
the \emph{raw noise} of the platform that we cannot fall below. Scenario (B)
represents the base overhead that exists due to the existence of the
hypervisor. Finally, scenario (C) simulates conditions in a real asymmetric
multiprocessing (AMP) environment: arbitrary load on a neighbouring execution
domain to stress shared system components, such as caches or system buses.
For our micro-benchmarks, we implemented our own minimalist operating system,
which is publicly available as Open Source Software.\colfootnote{Refer to
\url{https://github.com/lfd/grinch}. We call it the \emph{Grinch}, as it
benchmarks \five{NOEL}, which---apart from being a \five{RISC} implementation---is also
French for \emph{Christmas}.}
We selected micro-benchmarks to measure relevant code paths where the
hypervisor has to actively intervene in typical real-time scenarios, such as
cyclic timer interrupts, IPIs, external interrupts and frequent firmware calls,
such as those used in \five{RISC} for remote fences.
\begin{figure}[t]
\includegraphics[trim={3.6cm 0 0 0 }]{generated/fig1}
\caption{Illustration of the cross-systems code path for our
benchmarks. Dots/squares mark traps; squares are unavoidable, while dots arise from HV interaction. For the IPI round trip measurement (teal),
the path is also traversed in backward direction, as indicated by the loop.
The additional load in the Linux domain to perturb the measurement
is optional, and only generated in scenario (C).}\label{fig:overview}
\end{figure}
\begin{figure*}
\includegraphics{generated/histogram}
\caption{Measurement Results (notice the double logarithmic axes): CPU cycles taken for our benchmarks---comparing performance bare-metal vs.\ with hypervisor (with and without load on other cores).}
\label{fig:results}
\end{figure*}
\subsection{Benchmark \#1---IRQ Reinjection}
As mentioned before, any IRQ on \five{RISC} is received in S-mode and re-injected into
VS-mode. Basically, there are three types of IRQs: timer, IPI, and external
interrupts (peripheral devices). External IRQs are managed by the interrupt
controller (\emph{i.e.}\xspace, PLIC).
In our first benchmark, we investigate timers (shown in Figure~\ref{fig:overview} in dotted ochre),
as they do not need interaction
with the PLIC, but still need to be injected by the hypervisor. Further, while
reading the current timer value can be done without hypervisor interaction,
programming the timer requires interaction with the SBI~\cite{sbispec}, which
results in moderation by the hypervisor. Any SBI call must be moderated by the hypervisor
to ensure that the call has no cross-domain effects (\emph{e.g.}\xspace, CPU offlining, which
is conducted via SBI, must not affect a neighbouring domain).
Typically, the overhead that is required for setting the timer only plays a
subordinate role, as the time required to set the timer and its expiration
time are usually far apart. However, for the sake of completeness,
we quantify all overheads in~\cref{fig:results}.
The essential measurement is the timer
\href{https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/cyclictest/start}{jitter}:
The difference between scheduled and actual arrival time of the timer IRQ in
(V)S-Mode. In a virtualised scenario, the hypervisor receives the timer
(2)---again via detour through the SBI (3)---and
directly injects it by setting the corresponding pending-bit (2)--(1). When the
timer arrives, our benchmark will set the next timer expiration time to a point
in future. This will automatically clear the pending flag~\cite{sbispec}.
\subsection{Benchmark \#2---IPI Round Trip Time}
As IPI Round Trip Time (RTT), shown in Figure~\ref{fig:overview} in solid teal,
we define the time that is required for sending an IPI to a secondary target
HART, and back. The only task of the target is to send the IPI back to the
initial sender. We chose this measurement, as IPIs are frequently used by
operating systems for signalling and synchronisation purposes in real-time
contexts.
On \five{RISC}, IPIs are raised on the platform via SBI, where the target is
specified as an argument. The SBI call must be intercepted by the hypervisor
(2) as the domain membership of the target must be verified. After
verification, the IPI is propagated to the firmware (3), where it is finally
sent. From now on, the sender actively polls for the returning IPI. On receiver
side, the IPI first arrives at the hypervisor (6)---again via detour through
SBI---which injects the IPI into the guest (7)--(8) by setting the appropriate
pending bit. The guest software actively polls on the pending bit, so once the
guest sees the IPI, it sends an IPI back to the sender. The same path is
traversed backwards again: Moderation of the IPI, arrival at the sender in the
hypervisor, re-injection.
In total, four hypervisor interceptions are required for the IPI RTT
measurement: moderation for sender, arrival at receiver, moderation for
receiver, arrival at sender.
\subsection{Benchmark \#3---PLIC Emulation}
The PLIC interrupt controller offers no virtualisation possibilities.
Furthermore, the memory layout of the PLIC is unfavourably organised (\emph{e.g.}\xspace,
cross-hart configuration interfaces reside on the same memory page).
This requires that accesses to the PLIC must be completely emulated.
\newcommand{\rnum}[1]{\MakeUppercase{\romannumeral #1}}
The PLIC processes arriving external IRQs as follows:
\begin{itemize}
\item Physical arrival: set the external IRQ pending bit: (a)
\item Interruption of S-mode: (b)
\item \emph{Claim}ing the IRQ (\emph{i.e.}\xspace, read from PLIC register): (c)--(d)
\item Acknowledgement (\emph{complete}) of the IRQ (\emph{i.e.}\xspace, write to a PLIC
register): (e)--(f).
\end{itemize}
Under the presence of a hypervisor, the IRQ, shown in Figure~\ref{fig:overview}
in dash-dotted black, first arrives in HS-Mode. The hypervisor re-injects the
external interrupt to its guest, which will be interrupted.\colfootnote{This is the
first trap, comparable with the arrival of a timer interrupt.} The guest claims the
IRQ by reading the PLIC claim/complete register, which requires hypervisor
moderation, as well as the acknowledgement of the IRQ. The time required
for moderation can be found in~\cref{fig:results}.
\subsection{Benchmark \#4---Synchronous Traps}
Synchronous traps, shown in Figure~\ref{fig:overview} in dashed grey, arise when certain
privileged instructions are executed from less privileged modes like VS-mode. The
processor traps into higher privileged modes, where the
instructions are handled (\emph{e.g.}\xspace, for permission checks). We
measure the overhead of the remote fence (\emph{rfence}) firmware call, which is
frequently used to enforce ordering constraints on memory operations. It has
to detour through the SBI (3). The call from VS-Mode to SBI
is moderated by the hypervisor by trapping into S-Mode (2). We measure
synchronous traps cycle-precise using the \emph{rdcycle} instruction before and
after the trap.
\section{Introduction}
\label{sec:intro}
\input{1-intro.tex}
\section{Related Work}
\label{sec:rel}
\input{2-related_work.tex}
\section{Architecture}
\label{sec:arch}
\input{3-arch_jh.tex}
\input{3-arch_riscv.tex}
\section{Evaluation}
\label{sec:evaluation}
\input{4-evaluation.tex}
\section{Discussion}
\label{sec:discussion}
\input{5-discussion.tex}
\section{Conclusion}
\label{sec:conclusion}
\input{6-conclusion.tex}
\printbibliography
\end{document}
\section{Acknowledgements}
We thank Christine Aidala, Shanshan Cao, Paul Caucal, Stefan Hoche, Weiyao Ke, Kara Mattioli, Jared Reiten, Peter Skands, Alba Soto-Ontoso, Varun Vaidya, and Nima Zardoshti
for their helpful comments, discussions, and questions, as well as collaboration on related work. We thank Jack Holguin, Johannes Michel, and Jingjing Pan for comments on the draft. I.M. thanks Philip Burrows, Jim Brau and Michael Peskin for making him aware of the existence of SLD. K.L. was supported by the LDRD program of LBNL and the U.S. DOE under contract number DE-AC02-05CH11231 and DE-SC0011090. B.M. and I.M. are supported by start-up funds from Yale University.
\section{Introduction}
Super-resolution is the process of obtaining a high resolution (HR) image from one or more low resolution (LR) images. Classical reconstruction based image super-resolution requires multiple low-resolution images with sub-pixel misalignment at the same scale, whereas single image super-resolution requires a database of LR and HR matched pairs to learn a mapping function between the patch pairs at different scales~\cite{glasner}. Given a low resolution image during testing, this learned function or representation can be used to reconstruct the corresponding HR image.
Since the advent of deep learning technologies in the past decade, super-resolution algorithms have shown remarkable improvement in the quality of the reconstructed image. Most of the work reported in the literature have used mean square error (MSE) loss function to minimize the error between the reconstructed model output and the ground truth image~\cite{srcnn14}~\cite{subpixel}~\cite{rkagrnatural}~\cite{rkagrdocument1}~\cite{rkagrdocument2}~\cite{rkagrdocument3}. Minimizing this loss function may reduce the high frequency content in the reconstructed image and thus may blur the edges in it. Also, the reconstructed image may not lie precisely in the manifold of the HR image. Researchers have endeavored to find ways to solve this problem to a good extent as can be seen in SRGAN~\cite{srgan}, where the authors claim that the reconstructed output lies precisely in the manifold of HR images, even if the reconstructed images have less peak signal to noise ratio (PSNR) and structural similarity (SSIM). Ledig et al. \cite{srgan} have used a weighted combination of MSE loss, content loss~\cite{perceptualloss} and adversarial loss to reconstruct the HR image. This approach requires a deep architecture, such as the VGG net~\cite{vggnet}, to obtain the local covariance structure in the image. Most of the image transformation tasks use mean square error as loss function, which provides smooth transformed images.
Our main contributions in this paper are as follows:
\begin{itemize}
\item We have performed a large number of experiments to obtain a robust loss function that improves the performance of the existing algorithms that employ MSE loss function.
\item While training, we apply the Canny edge detector \cite{canny} to the reconstructed output (in batches) and also separately to the corresponding ground truth image to compute the proposed mean square Canny error (MSCE), and assign weights (convex combination) based on our experiments, i.e., the loss function can be given as: $ \mu \times MSE + (1-\mu)\times MSCE $.
\item Our approach guarantees performance improvement in terms of PSNR and SSIM over the existing approaches, if the model is trained on one dataset and tested on different datasets as mentioned in Tables~\ref{result_table_all1} and~\ref{result_table_all2}.
\item Our model does not incur additional overhead in terms of computation during testing to obtain the performance gain reported in Tables~\ref{result_table_all1} and~\ref{result_table_all2} due to our proposed MSCE loss function.
\end{itemize}
\section{Related work}
Super-resolution and image denoising can be considered image transformation tasks. In super-resolution, a LR image is fed to a transformation network, such as a multilayer neural network, to generate a HR image. Most image processing tasks, such as image denoising and super-resolution, minimize a per-pixel loss function to obtain the reconstruction. In this work, our focus is on improving the quality of existing super-resolution algorithms such as SRCNN~\cite{srcnn14} and ESPCN~\cite{subpixel} that use a per-pixel loss function. The recently proposed perceptual loss function has shown significant improvement in the perceptual quality of images. Simonyan et al.~\cite{featurevisualization} use perceptual loss for feature visualization. Gatys et al.~\cite{Gatystexture} and \cite{Gatysstyle} use perceptual loss for texture synthesis and style transfer, respectively. These approaches solve an optimization problem and are hence slower.
Justin Johnson et al~\cite{Johnson} and Pandey et al~\cite{rkagrart} use the benefits of per-pixel as well as perceptual loss funtions and propose a computationally efficient, optimization-free approach that provides results for image transformation tasks that are qualitatively similar to those of the above optimization-based approaches. The super-resolution algorithm SRGAN~\cite{srgan} uses a weighted combination of three different loss functions, namely mean square error, perceptual and adversarial loss to obtain a sharper reconstruction. The images reconstructed by these methods perceptually look sharper, even if they have low values of PSNR and SSIM.
In this work, our focus is on improving the perceptual quality, PSNR and SSIM without incurring any additional computational overhead during testing by the addition of a new, robust loss function that aims to preserve the edge information.
\section{The proposed edge-preserving MSCE loss function}
We employ the Canny edge detector~\cite{canny} to detect the edges in the reconstructed and ground truth images. We have chosen this algorithm since the Canny operator provides the most reliable edges among all the edge detection algorithms in the literature, and also satisfies all the general edge detection criteria.
Most of the recent papers on image super-resolution and denoising use the mean square error as the loss function. This loss function may smooth the edge components in an image. We preserve the edges by defining the loss function as a convex combination of the mean square error loss and our edge-preserving loss, as follows:
Suppose the training set consists of image pairs $\{L_{i},H_{i}\}$ ; $ i = 1... N $, where N is the total number of training examples. The model $\Theta$, parameterized by $ \lambda$, predicts the output $ O_{j}$ for a given input $L_{j}$. Let C denote the Canny operator. Let $C(\Theta_{\lambda}(L_{j}))$ be the resultant image obtained by applying the Canny operator to the predicted output image, $O_{j} = \Theta_{\lambda}(L_{j})$. The proposed edge-preserving loss function, called the mean square Canny error (MSCE), is given by:
\begin{equation}
\scriptsize{
Loss =\underbrace{\mu\times\frac{1}{N}\sum\limits_{j=1}^{N}\parallel \Theta_{\lambda}(L_{j})-H_{j}\parallel_{F}^{2}}_{MSE \hspace{0.1cm}Loss\hspace{0.1cm}(l_{mse})} + \underbrace{(1-\mu)\times\frac{1}{N}\sum\limits_{j=1}^{N}\parallel C(\Theta_{\lambda}(L_{j}))-C(H_{j})\parallel_{F}^{2}}_{Edge\hspace{0.1cm}preserving\hspace{0.1cm} loss\hspace{0.1cm}(l_{edge})}}
\label{equation1}
\end{equation}
The first term in the equation above is the mean square loss function used to minimize the error between the reconstructed output and the ground truth image. The second term in the loss function is the edge preserving loss function. After a large number of experiments, the weighing factor $\mu$ has been fixed to lie in the range $0.8\leq \mu \leq 0.99$. To minimize this loss function, Adam optimizer~\cite{adam} is used with learning rate $(lr) =0.001$, $\beta_{1}=0.999$ and $\beta_{2}=0.99$.
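As a concrete illustration of equation~\ref{equation1}, the sketch below evaluates the combined loss on a pair of images. Since the Canny detector involves several stages (smoothing, non-maximum suppression, hysteresis), this NumPy sketch substitutes a crude thresholded-Sobel edge map for the operator $C$ and only computes the loss value; the image size, threshold and random data are assumptions for illustration, while in training the actual Canny operator is applied batch-wise as described above:

```python
import numpy as np

def edge_map(img, thresh=0.25):
    # Crude stand-in for the Canny operator C: thresholded Sobel gradient
    # magnitude (the paper uses the full Canny detector).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx, gy = np.zeros_like(img), np.zeros_like(img)
    p = np.pad(img, 1, mode="edge")
    for i in range(3):
        for j in range(3):
            win = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win        # horizontal gradient
            gy += kx.T[i, j] * win      # vertical gradient
    mag = np.hypot(gx, gy)
    return (mag > thresh * mag.max()).astype(float) if mag.max() > 0 else mag

def combined_loss(output, target, mu=0.85):
    # Convex combination mu*l_mse + (1 - mu)*l_edge of equation (1),
    # up to the exact norm/averaging convention.
    l_mse = np.mean((output - target) ** 2)
    l_edge = np.mean((edge_map(output) - edge_map(target)) ** 2)
    return mu * l_mse + (1 - mu) * l_edge

rng = np.random.default_rng(0)
hr = rng.random((32, 32))                          # ground-truth image
out = hr + 0.05 * rng.standard_normal((32, 32))    # reconstructed output
print(combined_loss(out, hr) > combined_loss(hr, hr))  # True: a perfect match gives 0
```

Note that the Canny operator is non-differentiable, so this sketch only illustrates how the loss value is composed, not how gradients are propagated during training.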
\subsection{Choosing the value of $\mu$}
\begin{itemize}
\item {\bf Exhaustive Experimentation} : We performed experiments varying $\mu $ in the range $0.8 \leq \mu \leq 0.99$ by incrementing its value by 0.01 each time. We found that the models were consistently giving better results for the particular values of $\mu$ = 0.84, 0.85 and 0.86. For the results reported in the Figures~\ref{Fig1},~\ref{Fig2},~\ref{Fig3} and~\ref{Fig4} and Tables~\ref{result_table_all1} and ~\ref{result_table_all2}, the value of $\mu$ used is 0.85. \\
\item {\bf Dynamic choice of $\mu$}: While performing the experiments, we found that sometimes values of $\mu$ (still in the range $0.8\leq \mu \leq 0.99$) other than the three specific ones mentioned above gave better results. We made a list of those different values of $\mu$ and tried each of them in parallel in each epoch. For the subsequent epoch, we select the model corresponding to the least value of the loss function. Let $l_{mse}$ and $l_{edge}$ denote the mean square error loss and our edge-preserving loss, respectively, as mentioned in equation~\ref{equation1}. Let $ \{ \hat{\lambda} ,\hat{ \mu\ } \} $ be the optimal model parameters and $\mu$ be the weighing parameter currently chosen during training. In each epoch, we selected the value of $\mu$ that minimized the loss function on the right hand side of equation~\ref{equation2}:
\begin{equation}
\hat{\mu} = \underset{\mu}{\operatorname{argmin}}\{\mu \times l_{mse}+ (1-\mu)\times l_{edge}\}
\label{equation2}
\end{equation}
We used the earlier approach for calculating the loss in our experiments, the results of which are reported in the Tables. A dynamic choice of the value of $\mu$ gives similar results in a smaller number of epochs. It could be explored further to possibly achieve still better results.
\end{itemize}
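The per-epoch selection of equation~\ref{equation2} can be sketched as follows; the function name and candidate list are illustrative assumptions, not part of the paper's code.

```python
def pick_mu(l_mse, l_edge, candidates):
    # Evaluate the weighted loss of Eq. (2) for each candidate weighting
    # factor and return the candidate giving the smallest value.
    return min(candidates, key=lambda mu: mu * l_mse + (1 - mu) * l_edge)
```

Since the objective is linear in $\mu$, the minimum over a fixed pair $(l_{mse}, l_{edge})$ lies at an end of the candidate range; during training, however, $l_{mse}$ and $l_{edge}$ themselves change from epoch to epoch.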
\section{Datasets used for training and testing}
The models are trained on the DIV2K~\cite{div2k} training dataset with the original architectures (without changing the architectural details of the existing models) proposed in the respective papers. We have performed testing on the datasets Set5~\cite{set5}, Set14~\cite{set14}, BSD~\cite{bsd100} and URBAN100~\cite{urban100} for the scale factors of 2, 3, 4 and 8. We have found a consistent performance gain over the original models, in terms of PSNR and SSIM, on all the datasets on which our MSCE loss function has been tested so far. These results are shown quantitatively in Tables~\ref{result_table_all1} and~\ref{result_table_all2}.
\section{Experiments and Discussion}
We have performed extensive experiments on different super-resolution algorithms proposed recently, by augmenting the original loss function with our proposed mean square Canny error loss function.
We have validated the effectiveness of our proposed MSCE loss function on the recent techniques of SRCNN~\cite{srcnn14} and ESPCN~\cite{subpixel}. We found that the addition of our MSCE loss leads to better results and the improvement is consistent on both methods across different upscaling factors of 2, 3, 4 and 8.
\section{Results}
Figures~\ref{Fig1},~\ref{Fig2},~\ref{Fig3} and~\ref{Fig4} show the results qualitatively: one set obtained by passing the input image directly to the original models SRCNN~\cite{srcnn14} and ESPCN~\cite{subpixel} with the loss functions used in the original papers, and the other obtained after augmenting the loss function with our MSCE loss function.
Tables~\ref{result_table_all1} and~\ref{result_table_all2} list the quantitative results obtained by the two super-resolution methods on the datasets Set5, Set14, URBAN and BSD for different upscaling factors, together with the corresponding values obtained after the methods are modified by our MSCE loss function.
{\bf Note 1:} Table~\ref{result_table_all1} lists the results obtained from LR images created by downsampling using plain bicubic interpolation. In contrast, the results reported in Table~\ref{result_table_all2} are obtained by first blurring the image with a Gaussian filter of radius 2 and then downsampling by bicubic interpolation to obtain the LR images at different scales.
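The degradation pipeline of Table~\ref{result_table_all2} can be sketched as below. This is a minimal numpy sketch under stated assumptions: the Gaussian kernel width (sigma equal to the radius) and the naive decimation step are our simplifications, whereas the paper downsamples by bicubic interpolation.

```python
import numpy as np

def gaussian_kernel(radius):
    # 1-D Gaussian kernel; taking sigma equal to the radius is an
    # assumption made only for this illustration.
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / radius) ** 2)
    return k / k.sum()

def blur_then_downsample(img, radius=2, scale=2):
    # Separable Gaussian blur followed by naive decimation; the paper
    # uses bicubic interpolation for the downsampling step instead.
    k = gaussian_kernel(radius)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out[::scale, ::scale]
```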
\begin{figure}
\subfloat[][SRCNN original]{\includegraphics[width=3cm]{ppt3_orig_srcnn2}\label{ppt s-original}}
\subfloat[][SRCNN MSCE]{\includegraphics[width=3cm]{ppt3_custom_srcnn2}\label{ppt s-custom}}
\subfloat[][ESPCN original]{\includegraphics[width=3cm]{ppt3_orig_espcn2}\label{ppt e-original}}
\subfloat[][ESPCN MSCE]{\includegraphics[width=3cm]{ppt3_custom_espcn2}\label{ppt e-custom}}
\caption{Qualitative comparison of the results for an upscale factor of 2, when the ppt image from Set14 is directly fed to the original model and the model modified with MSCE loss trained by us. (a) The output image reconstructed by the original SRCNN model. (b) The output image reconstructed by SRCNN model modified with MSCE loss function. (c) Image reconstructed by the original ESPCN model. (d) Output image reconstructed by ESPCN model modified with MSCE loss function.}
\label{Fig1}
\end{figure}
\begin{figure}
\centering
\subfloat[][SRCNN original]{\includegraphics[width=3cm]{comic_ori}\label{comic original}}
\subfloat[][SRCNN MSCE]{\includegraphics[width=3cm]{comic_cus}\label{comic custom}}
\subfloat[][ESPCN original]{\includegraphics[width=3cm]{comic_orig_espcn3}\label{comic e-original}}
\subfloat[][ESPCN MSCE]{\includegraphics[width=3cm]{comic_custom_espcn3}\label{comic e-custom}}
\caption{Comparison of the results for an upscale factor of 3, when the comic image from Set14 is directly fed to the original model and the model modified with MSCE loss trained by us. (a) Output image reconstructed by the original SRCNN model. (b) Output image reconstructed by SRCNN model modified by MSCE loss function. (c) Output image reconstructed by the original ESPCN model. (d) Output image reconstructed by ESPCN model modified by MSCE loss function.}
\label{Fig2}
\end{figure}
\begin{figure}
\subfloat[][SRCNN original]{\includegraphics[width=3cm]{baby_GT_ori_srcnn4}\label{baby original}}
\subfloat[][SRCNN MSCE]{\includegraphics[width=3cm]{baby_GT_cus_srcnn4}\label{baby custom}}
\subfloat[][ESPCN original]{\includegraphics[width=3cm]{baby_GT_orig_espcnn4}\label{baby e-original}}
\subfloat[][ESPCN MSCE]{\includegraphics[width=3cm]{baby_GT_custom_espcnn4}\label{baby e-custom}}
\caption{Comparison of the results for an upscale factor of 4, when the baby input image from Set5 is directly fed to the original model trained by us and the model modified with MSCE loss trained by us. (a) Output image reconstructed by the original SRCNN model. (b) Output image reconstructed by SRCNN model modified by MSCE loss function. (c) Output image reconstructed by the original ESPCN model. (d) Output image reconstructed by ESPCN model modified by MSCE loss function.}
\label{Fig3}
\end{figure}
\begin{figure}
\subfloat[][SRCNN original]{\includegraphics[width=3cm]{baboon_orig_srcnn8}\label{baboon s-original}}
\subfloat[][SRCNN MSCE]{\includegraphics[width=3cm]{baboon_custom_srcnn8}\label{baboon s-custom}}
\subfloat[][ESPCN original]{\includegraphics[width=3cm]{baboon_orig_espcn8}\label{baboon e-original}}
\subfloat[][ESPCN MSCE]{\includegraphics[width=3cm]{baboon_custom_espcn8}\label{baboon e-custom}}
\caption{Comparison of the results for an upscale factor of 8, when the baboon input image from Set14 is directly fed to the original model trained by us and the model modified with MSCE loss trained by us. (a) Output image reconstructed by the original SRCNN model. (b) Output image reconstructed by SRCNN model modified by MSCE loss function. (c) Output image reconstructed by the original ESPCN model. (d) Output image reconstructed by ESPCN model modified by MSCE loss function.}
\label{Fig4}
\end{figure}
\begin{figure}
\subfloat[][SRCNN 2x] {\includegraphics[width=3.1cm]{ppt3_orig_srcnn2_d}}
\subfloat[][SRCNN MSCE 2x] {\includegraphics[width=3.1cm]{ppt3_custom_srcnn2_d}}
\subfloat[][ESPCN 2x] {\includegraphics[width=3.1cm]{ppt3_orig_espcn2_d}}
\subfloat[][ESPCN MSCE 2x] {\includegraphics[width=3.1cm]{ppt3_custom_espcn2_d}}\\
\subfloat[][SRCNN 3x] {\includegraphics[width=3.1cm]{img1_orig_srcnn3_d}}
\subfloat[][SRCNN MSCE 3x] {\includegraphics[width=3.1cm]{img1_custom_srcnn3_d}}
\subfloat[][ESPCN 3x] {\includegraphics[width=3.1cm]{img1_orig_espcn3_d}}
\subfloat[][ESPCN MSCE 3x] {\includegraphics[width=3.1cm]{img1_custom_espcn3_d}}\\
\subfloat[][SRCNN 4x] {\includegraphics[width=3.1cm]{lenna_orig_srcnn4_d}}
\subfloat[][SRCNN MSCE 4x] {\includegraphics[width=3.1cm]{lenna_custom_srcnn4_d}}
\subfloat[][ESPCN 4x] {\includegraphics[width=3.1cm]{lenna_orig_espcn4_d}}
\subfloat[][ESPCN MSCE 4x] {\includegraphics[width=3.1cm]{lenna_custom_espcn4_d}}\\
\subfloat[][SRCNN 8x] {\includegraphics[width=3.1cm]{img_orig_srcnn8_d}}
\subfloat[][SRCNN MSCE 8x] {\includegraphics[width=3.1cm]{img_custom_srcnn8_d}}
\subfloat[][ESPCN 8x] {\includegraphics[width=3.1cm]{img_orig_espcn8_d}}
\subfloat[][ESPCN MSCE 8x] {\includegraphics[width=3.1cm]{img_custom_espcn8_d}}
\caption{Comparison of the results obtained on down-sampled (by bicubic interpolation without blurring) images on different upscaling factors. (a), (b), (c) and (d) have been down-sampled by a factor of 2 and reconstructed. (e), (f), (g) and (h) have been down-sampled by a factor of 3 and reconstructed. (i), (j), (k) and (l) have been down-sampled by a factor of 4 and reconstructed. (m), (n), (o) and (p) have been down-sampled by a factor of 8 and reconstructed.}
\label{Fig5}
\end{figure}
\begin{table*}[!]
\centering
\caption{$P_{*}$ and $S_{*}$ are the PSNR and SSIM values obtained by the method \(*\) for the upscaling factors of 2, 3, 4 and 8, whereas $P_{*}^{c}$ and $S_{*}^{c}$ are the corresponding PSNR and SSIM values obtained after augmenting the loss function by the MSCE loss function designed by us. All the models other than bicubic (non-learnable) have been trained on DIV2K training dataset. For testing, we have used 4 datasets, namely Set5, Set14, Urban and BSD.}
\begin{tabular}{| l l | c c | c c| c c| c c| c c|}
\hline
\multicolumn{2}{|l |}{Dataset} & $P_{bicubic}$ & $S_{bicubic}$ & $P_{srcnn}$&$S_{srcnn}$& $\bf P_{srcnn}^{c}$&$\bf S_{srcnn}^{c}$& $P_{espcn}$ &$S_{espcn}$ &$\bf P_{espcn}^{c}$ & $ \bf S_{espcn}^{c}$ \\
\hline
\multirow{4}{*}{Set5} & 2x &27.02 & 0.92& 28.44&0.93 &28.57 &0.93 &26.48 &0.92 &26.59 &0.92\\
& 3x & 25.41 & 0.89 &26.59 &0.90 &26.75 &0.91 &25.882 &0.91 &25.888 &0.91\\
& 4x & 21.96 & 0.79&23.22 &0.82 &23.37 &0.83 &22.35 &0.81 &22.49 &0.82\\
& 8x & 18.10 & 0.61 &18.740 &0.63 &18.743 &0.63 &18.33 &0.62 &18.43 &0.62\\
\hline
\multirow{4}*{Set14} & 2x &24.10 & 0.86 &25.22 &0.88 &25.32 &0.88 &23.50 &0.87 &23.56 &0.87\\
& 3x & 22.65 & 0.81 &23.62 & 0.84 &23.68 &0.84 &23.06 &0.84 &23.06 &0.84\\
& 4x & 20.01 & 0.70 & 20.96 & 0.73 &21.04 &0.73 &20.12 &0.71 &20.32 &0.72\\
& 8x & 17.13 & 0.53 & 17.57 & 0.56 &17.58 &0.56 &17.20 &0.56 & 17.27&0.56\\
\hline
\multirow{4}*{Urban} & 2x & 20.66 & 0.84 &22.26 &0.87 &22.44 &0.87 &21.38 &0.87 &21.42 &0.87\\
& 3x & 20.22 & 0.79 &21.47 &0.83 &21.53 &0.83 &21.18 &0.83 &21.18 &0.83\\
& 4x & 16.92 & 0.65 &17.81 &0.69 &17.84 &0.69 &17.54 &0.70 &17.59 &0.70\\
& 8x & 14.63 & 0.48 &15.04 &0.50 &15.04 &0.50 &14.94 &0.507 &14.99 &0.509\\
\hline
\multirow{4}*{BSD} & 2x &25.88 & 0.89& 25.96&0.90 &26.18 &0.90 &23.36 &0.87 &23.41 &0.88\\
& 3x & 21.86 & 0.77 &22.49 &0.81 &22.54 &0.81 &22.34 &0.81 &22.35 &0.81\\
& 4x & 21.43 & 0.73&22.08 &0.77 &22.13 &0.77 &21.28 &0.76 &21.41 &0.77\\
& 8x & 18.43 & 0.57 & 18.78&0.59 &18.81 &0.59 &18.47 &0.587 &18.58 &0.589\\
\hline
\end{tabular}
\label{result_table_all1}
\end{table*}
\begin{table*}[!]
\centering
\caption{$P_{*}$ and $S_{*}$ are the PSNR and SSIM values obtained by the method \(*\) at the different upscaling factors of 2, 3, 4 and 8, whereas $P_{*}^{c}$ and $S_{*}^{c}$ are the corresponding PSNR and SSIM values obtained by augmenting the loss function by the MSCE loss function designed by us. All the models other than bicubic (non-learnable) are trained on DIV2K (blurred by Gaussian blurring, then downsampled by bicubic) training dataset. For testing, we use 4 datasets, namely Set5, Set14, Urban and BSD.}
\begin{tabular}{| l l | c c | c c| c c| c c| c c|}
\hline
\multicolumn{2}{|l |}{Dataset} & $P_{bicubic}$ & $S_{bicubic}$ & $P_{srcnn}$&$S_{srcnn}$& $\bf P_{srcnn}^{c}$&$\bf S_{srcnn}^{c}$& $P_{espcn}$ &$S_{espcn}$ &$\bf P_{espcn}^{c}$ & $ \bf S_{espcn}^{c}$ \\
\hline
\multirow{4}{*}{Set5} & 2x &21.10 & 0.77& 23.96&0.83 &24.06 &0.84 &21.21 &0.75 &21.87 &0.79\\
& 3x & 21.63& 0.79 &22.38 &0.85 &24.75 &0.86 & 22.50 & 0.80 & 22.88 & 0.83\\
& 4x & 20.12 & 0.72&21.92 &0.78 &21.94 &0.78 &21.53 &0.77 &21.92 &0.78\\
& 8x & 17.72 & 0.59 &18.17 &0.61 &18.34 &0.61 &18.48 &0.613 &18.56 &0.614\\
\hline
\multirow{4}*{Set14} & 2x &19.35 & 0.67 & 21.75 & 0.76 &21.78 &0.76 &19.50 &0.67 &20.08 &0.70\\
& 3x & 19.84 & 0.69 &20.64 & 0.77 &22.25 &0.78 & 20.63&0.73 &20.99 &0.74\\
& 4x & 18.67 & 0.62 & 20.02 & 0.69 &20.07 &0.69 &19.75 &0.68 &20.04 &0.69\\
& 8x & 16.84 & 0.52 & 17.16 & 0.54 &17.29 &0.54 &17.35 &0.55 & 17.43&0.54\\
\hline
\multirow{4}*{Urban} & 2x & 16.57 & 0.63 &18.87 &0.74 &18.86 &0.74 &16.93 &0.63 &17.33 &0.66\\
& 3x & 17.63 & 0.66 &18.96 &0.76 &20.03 &0.76 &18.68 &0.71 &18.87 &0.72\\
& 4x & 15.85 & 0.58 &17.05 &0.65 &17.07 &0.65 &16.96 &0.64 &17.05 &0.64\\
& 8x & 14.42 & 0.47 &14.74 &0.49 &14.81 &0.49 &14.96 &0.49 &14.97 &0.48\\
\hline
\multirow{4}*{BSD} & 2x &21.00 & 0.71& 23.27&0.80 &23.28 &0.80 &20.92 &0.71 &21.67 &0.75\\
& 3x & 19.82 & 0.66 &20.27 &0.75 &21.59 &0.75 &20.30 &0.70 &20.60 &0.72\\
& 4x & 20.13 & 0.67&21.32 &0.73 &21.35 &0.73 &21.06 &0.73 &21.38 &0.73\\
& 8x & 18.18 & 0.56 & 18.38&0.58 &18.51 &0.58 &18.70 &0.58 &18.70 &0.58\\
\hline
\end{tabular}
\label{result_table_all2}
\end{table*}
\section{Conclusion}
A large number of research papers have been published in the recent past proposing different models or algorithms that work reasonably well. The unique contribution of our work is that it improves the performance of existing methods, rather than proposing yet another technique. In this paper, we have proposed a robust edge-preserving loss function that adds a performance gain in terms of PSNR and SSIM to any existing model, without increasing the computational cost involved in testing. We train the existing model by adding a weighted Canny-edge-based loss. Minimizing this loss function helps to preserve the edges by giving more weight to them. As shown by the Tables of results, the PSNR and SSIM values obtained after including our MSCE loss function are consistently better.
\pagebreak
\section{Introduction}
Today, finding one's geographical position is a routine matter. Every smartphone can determine its geographical longitude, latitude and height above sea level with an accuracy of a few tens of meters \cite{hulbert2001accuracy,wing2005consumer}. As we know from everyday experience, technology makes our life much easier than before. The problem arises when we become accustomed to a technology and depend on it: when access to the technology is lost, we can do nothing but wait for the problem to be solved before resuming our normal life. We therefore always need alternatives that are more independent of technology, and usually the most important reference is the nature around us \cite{pappalardi2001alternatives}. One part of that nature is the sky, to which we ordinarily pay no attention at all, even though it covers half of our observable solid angle.
In ancient times, navigators used the stars as references to find their way, the right directions and their positions. With the development of technology, the measurements have become more accurate \cite{secroun2001high}. Izadmehr et al. \cite{IzadmehrArXiv180600607I} worked to improve these measurements; this report optimizes the errors further by using as many stars as possible in the pictures taken, and employs a very light system so as to be a portable, low-price instrument that makes our position in the Earth coordinate system (CS) as accessible as possible. Meanwhile, the direction of north at the position of the observer is also accessible by this method, with an accuracy of 6 arcsec. The benefits of the instrument are as follows:
\begin{enumerate}
\item Providing angle with true north not magnetic north.
\item It is low cost. The electronic board costs about \$200. We need a camera with an angular resolution better than 0.0025 degrees, which most cameras today provide.
\item It is portable: the board, the camera and a laptop together weigh approximately 3 to 4 kg.
\item It requires only a narrow field of view of less than 12 degrees.
\end{enumerate}
The details of the instrument and its capabilities have been presented in Izadmehr et al. \cite{IzadmehrArXiv180600607I}. In this report, the accuracy is optimized by a new method based on least squares, using about 80 to 100 stars in each picture taken of the sky.
Chapter 2 presents the positioning procedure, its details and its sequential steps. Chapter 3 presents the calculation method of this paper, a new method that improves the accuracy of the results via least squares by using as many stars as possible in each picture. Chapter 4 presents the results of 50 nights of observations. Each night, 100 images were taken of one part of the sky with an exposure time of 0.5 seconds, with the 50 nights covering different parts of the sky. Using the optimization method, the positioning accuracy improved by up to a factor of 50. Chapter 5 concludes with the results and the future plans of this work.
\section{Positioning procedure}
The main goal of this section is the calculation of the observer's longitude and latitude $(\lambda, \varphi)$ using the star pattern of the night sky. $\lambda$, $\varphi$ and the north direction are outputs of the following procedure:
\begin{equation}\label{eq:1}
A_2 A_3 \textbf{W} = A_4 A_1 \textbf{V}
\end{equation}
where $\textbf{W}$ and $\textbf{V}$ are three-dimensional unit vectors in the camera and reference CSs, respectively, and the $A_i$ are $3\times3$ rotation matrices projecting vectors from one system to another (Figure~\ref{fig:Schematic_design_of_coordinates_conversions}).
Matrix $A_1$ rotates ICRF\footnote{International Celestial Reference Frame} to ITRF\footnote{International Terrestrial Reference Frame}.
Matrix $A_4$ rotates ITRF to the observers CSs. (Figure~\ref{fig:Schematic_design_of_coordinates_conversions})
On the left-hand side of Eq.~\ref{eq:1}, the picture taken by the camera is projected to the observer's CS, and the result is compared with that of the right-hand side.
Matrix $A_3$ projects the picture taken of the night sky onto the reference plane of the inclinometer CS. The optimization of this paper, which improves the result of our positioning, concerns the matrix $A_3$: the uncertainty due to the mutual installation of the camera and inclinometer is dominant in the configuration of the system, and we therefore decrease it by a least square method.
Matrix $A_2$ projects the reference plane of the gravity CS to the observer's CS. The inclinometer plane is projected to the horizontal plane of the observer using the two output angles of the inclinometer, with an accuracy of 0.0025 degrees.
The main plane of the observer's local CS lies in its horizontal plane, but the two systems are mismatched by the angle between their axes. The x-axis of the horizontal plane points north, and the angle between the x-axis of the observer's CS and the north direction is the north angle, which is one of the outputs of the system for the observer.
\begin{figure}
\centering
\subfloat[Configuration of the used coordinate systems. ]{%
\includegraphics[width=0.75\textwidth]{Coordinate_systems_shematic.png} %
}
\includegraphics[width=0.75\textwidth]{Schematic_design_of_coordinates_conversions-eps-converted-to.pdf}%
\caption{
\textbf{W} and \textbf{V} are unit vectors in camera CS and ICRF, respectively.\\\hspace{\textwidth}
$A_1$ is the rotational matrix which converts star vectors in ICRF into ITRF.\\\hspace{\textwidth}
$A_2$ converts sensor CS into the local CS.\\\hspace{\textwidth}
$A_3$ converts camera CS into the sensor CS.\\\hspace{\textwidth}
$A_4$ converts ITRF into the observer's local CS.}
\label{fig:Schematic_design_of_coordinates_conversions}%
\end{figure}
\section{Calculating rotational matrix between camera and inclinometer Coordinate System ( matrix $A_3$)}
The error in matrix $A_3$ can be minimized by a calibration procedure. The procedure is recursive. First, matrix $A_4$ is calculated for the latitude and longitude of the observer and then multiplied by the $\textbf{V}$ vector to obtain the vector in the local CS. The angles of the rotation matrix between the camera CS and the local CS are calculated first; using these angles, the camera CS is rotated to be projected onto the horizon plane. The inclinometer is aligned to the plumb line using the inclinometer outputs. This procedure is not the most accurate one. The best solution is to calculate $A_3$ at a known location and use it as a constant: the star vectors $\textbf{W}$, $\textbf{V}$ are used for the known observer latitude and longitude. First, $A_4$ is decomposed into its rotational matrices:
\begin{equation}\label{eq:A4}
A_4= R_3(c)R_2(\frac{\pi}{2}-\lambda)R_3(\beta)
\end{equation}
where $\beta$ and $\lambda$ are the longitude and latitude of the observer, respectively.
Using $A_4$, Eq.~\ref{eq:1} becomes:
\begin{equation}\label{eq:2}
R_3(-c)A_2 A_3 \textbf{W} = R_2(\frac{\pi}{2}-\lambda)R_3(\beta) A_1 \textbf{V}
\end{equation}
Latitude and longitude are available at known points, but $c$ depends on the camera direction. On the right-hand side of Eq.~\ref{eq:2}, $A_1$, $\textbf{V}$, $\beta$ and $\lambda$ are all known, so it becomes:
\begin{equation}\label{eq:3}
A' \textbf{W} = \textbf{V$_{Alt-AZ}$}
\end{equation}
where \textbf{V$_{Alt-AZ}$} is a fully known vector in the horizontal CS and $A'$ is $R_3(-c)A_2 A_3$. \textbf{V$_{Alt-AZ}$} and \textbf{W} are available star vectors for each star on the picture taken of the night sky. Therefore, $A'$ can be optimized by the least square method, as described in section \ref{calculation_rotational_matrix_from_star_vectors}.
\subsection{Calculation of the optimized rotational matrix from the star vectors} \label{calculation_rotational_matrix_from_star_vectors}
The algorithms for calculating this rotational matrix divide into two categories. The first uses a minimal set of data and then solves three possibilities of the nonlinear equations to obtain the attitude \cite{wang2014}. This category is generally called deterministic,
a name which has been popularized by Wertz \cite{wertz2012}. The best-known deterministic algorithm in current use is the TRIAD algorithm \cite{shuster2004}.
The remaining algorithms, generally called optimal, determine the attitude by minimizing
an appropriate cost function \cite{markley1993attitude}. In contrast to the deterministic methods, which use only two vectors, the new method uses all stars extracted from the sky picture.
The equation $A\textbf{W} = \textbf{V}$ can be solved by minimization of the non-negative equation \cite{wahba1965least}:
\begin{equation}\label{eq:4}
L(A) = \sum_{i=1}^{n} |\textbf{W}'_i - A\textbf{V}'_i|^2
\end{equation}
where $n$ is the number of identified stars in the picture. For a typical camera with aperture 120 mm, focal length 600 mm and an exposure time of 0.5 seconds under a normal sky with a limiting magnitude of 9.5, the number of identified stars, $n$, is usually about 80 to 100 \cite{IzadmehrArXiv180600607I}.
Eq.~\ref{eq:4} can be broken down to:
\begin{equation}\label{eq:5}
\begin{aligned}
L_x(A) = \sum_{i=1}^{n} |{W_x}'_i - (A_{0,0}{V'_x}_i+A_{0,1}{V'_y}_i+A_{0,2}{V'_z}_i)|^2\\
L_y(A) = \sum_{i=1}^{n} |{W_y}'_i - (A_{1,0}{V'_x}_i+A_{1,1}{V'_y}_i+A_{1,2}{V'_z}_i)|^2\\
L_z(A) = \sum_{i=1}^{n} |{W_z}'_i - (A_{2,0}{V'_x}_i+A_{2,1}{V'_y}_i+A_{2,2}{V'_z}_i)|^2
\end{aligned}
\end{equation}
These three equations are solved independently, and each of them provides one row of matrix $A$. ALGLIB \cite{bochkanov2011alglib} has been used to solve each independent equation of Eq.~\ref{eq:5}. ALGLIB reduces the matrix to bidiagonal form and then diagonalizes it using the QR algorithm. This simple method is quite efficient, and the algorithm can be sped up significantly \cite{goodall199313}.
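As a hypothetical alternative to the ALGLIB routine, the row-wise least-squares solution of Eq.~\ref{eq:5} can be sketched with numpy's SVD-based solver; the function name and interface below are ours, chosen for illustration only.

```python
import numpy as np

def fit_rotation_rows(V, W):
    # V, W: (n, 3) arrays of star direction vectors in the two frames.
    # Each row of A in Eq. (5) is an independent linear least-squares
    # problem; note that no orthogonality constraint is enforced here,
    # matching the row-wise formulation of the text.
    A = np.empty((3, 3))
    for k in range(3):
        A[k], *_ = np.linalg.lstsq(V, W[:, k], rcond=None)
    return A
```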
Matrix $A_3$ is then calculated from Eq.~\ref{eq:6}:
\begin{equation}\label{eq:6}
A' = R_3(-c)A_2A_3
\end{equation}
In Eq.~\ref{eq:6}, $A_2= R_2(-b)R_1(-a)$, where $a$ and $b$ are the rotation angles from the sensor outputs. Therefore, $R_3(-c)A_2$ is equal to:
\begin{equation}\label{eq:7_1}
R_3(-c)A_2=
\left[
\begin{array}{ccc}
C_cC_b & C_cS_bS_a+S_cC_a & C_cS_bC_a-S_cS_a \\
-S_cC_b & -S_cS_bS_a+C_cC_a & -S_cS_b C_a-C_cS_a \\
-S_b & C_bS_a & C_bC_a
\end{array}
\right]
\end{equation}
where $C_i$ and $S_i$ stand for $\cos(i)$ and $\sin(i)$, respectively.
In Eq.~\ref{eq:6}, the elements of $A_3$ and the angle $c$ are unknown. Because $c$ is unknown, not all elements of the matrix product $R_3(-c)A_2A_3$ can be used. However, the third row of $R_3(-c)A_2$ is independent of the angle $c$ (Eq.~\ref{eq:7_1}). Three equations are extracted from Eq.~\ref{eq:6}:
\begin{equation}\label{eq:7}
\begin{aligned}
-S_bA_{0,0}+C_bS_aA_{1,0}+C_bC_aA_{2,0} = A'_{2,0}\\
-S_bA_{0,1}+C_bS_aA_{1,1}+C_bC_aA_{2,1} = A'_{2,1}\\
-S_bA_{0,2}+C_bS_aA_{1,2}+C_bC_aA_{2,2} = A'_{2,2}
\end{aligned}
\end{equation}
Eq.~\ref{eq:7}, shows three independent equations, with 9 unknown elements. Eq.~\ref{eq:7} can be converted to a matrix equation:
\begin{equation}\label{eq:8}
\left[
\begin{array}{ccc}
A_{0,0} & A_{1,0} & A_{2,0} \\
A_{0,1} & A_{1,1} & A_{2,1} \\
A_{0,2} & A_{1,2} & A_{2,2}
\end{array}
\right]
\times
\left[
\begin{array}{c}
-S_b\\
C_bS_a\\
C_bC_a
\end{array}
\right]
=
\left[
\begin{array}{c}
A'_{2,0}\\
A'_{2,1}\\
A'_{2,2}
\end{array}
\right]
\end{equation}
This equation could be written as:
\begin{equation}\label{eq:9}
A'' \textbf{V}' = \textbf{W}'
\end{equation}
where $\textbf{V}'$ and $\textbf{W}'$ are obtained from the image data, the star catalog, the latitude, the longitude and the inclinometer outputs. For each picture, the two vectors $\textbf{V}'$ and $\textbf{W}'$ are created. Therefore, for a set of $\textbf{V}'$ and $\textbf{W}'$ vectors from different images, the matrix elements are obtained by the optimization method described previously.
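The key property used in this derivation, that the third row of $R_3(-c)A_2$ does not depend on $c$, can be verified numerically with a short sketch. The sign conventions of the rotation matrices below are an assumption on our part, chosen so that the third row reproduces Eq.~\ref{eq:7}.

```python
import numpy as np

def R1(t):
    # Frame rotation about the x-axis (sign convention assumed).
    c, s = np.cos(t), np.sin(t)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def R2(t):
    # Frame rotation about the y-axis (sign convention assumed).
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

def R3(t):
    # Frame rotation about the z-axis (sign convention assumed).
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def third_row(a, b, c):
    # Third row of R3(-c) R2(-b) R1(-a); R3 only mixes the first two
    # rows, so the result is independent of the camera angle c.
    return (R3(-c) @ R2(-b) @ R1(-a))[2]
```

For any $a$, $b$ the returned row equals $(-\sin b,\ \cos b \sin a,\ \cos b \cos a)$, whatever the value of $c$.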
\section{Results}
Positioning errors have been evaluated for two situations: one without calculating $A_3$, merely minimizing its effect through the instrument alignment, and the other applying the least square method and calculating $A_3$ with the presented method. Latitude and longitude for 50 different locations and nights are used for both conditions; each night, 100 images were taken in the same direction for the investigation.
The average and standard deviation over each night have been calculated. Since the images of one part of the sky each night were taken in the same direction, the standard deviation shows the error contributed by the calculation and image-processing procedures, while the average error indicates the error contributed by the inclinometer.
In Figures~\ref{fig:old_latitude_error} and~\ref{fig:old_longitude_error}, the positioning results without calculating $A_3$ are shown. In Figures~\ref{fig:LatitudeResults} and~\ref{fig:LongitudeResults}, the results with $A_3$ calculated are shown. The average absolute deviation reduces from $30.168$ to $4.77$ arcseconds for latitude and from $48.6$ to $5.53$ arcseconds for longitude (Figure~\ref{fig:Reduced_error_shematic}).
\begin{figure}
\centerline{\includegraphics[width=0.7\textwidth]{latitude_error-eps-converted-to.pdf}}
\caption{Latitude error in 50 nights as well as different locations and camera directions. Average absolute deviation of latitude is $0.5028$ arcminutes. Each point is obtained by at least the average of 100 images.
\label{fig:old_latitude_error}}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.7\textwidth]{longitude_error-eps-converted-to.pdf}}
\caption{Longitude error in 50 nights as well as different locations and camera directions. Average absolute deviation of longitude is $0.816$ arcminutes. Each point is obtained by at least the average of 100 images.
\label{fig:old_longitude_error}}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.7\textwidth]{Latitude-results.png}}
\caption{Latitude error in 50 nights as well as different locations and camera directions. Average absolute deviation of latitude is $4.77$ arcseconds. Each point is obtained by at least the average of 100 images.
\label{fig:LatitudeResults}}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.7\textwidth]{Longitude-results.png}}
\caption{Longitude error in 50 nights as well as different locations and camera directions. Average absolute deviation of longitude is $5.53$ arcseconds. Each point is obtained by at least the average of 100 images.
\label{fig:LongitudeResults}}
\end{figure}
\begin{figure}
\centerline{\includegraphics[width=0.7\textwidth]{Reduced_error_shematic.png}}
\caption{Scale of the old and new errors. For latitude, the error has been reduced from $30.168$ arcseconds to $4.77$ arcseconds; for longitude, from $48.6$ arcseconds to $5.53$ arcseconds.
\label{fig:Reduced_error_shematic}}
\end{figure}
\section{Conclusion}
To obtain results better than about 5.5 arcseconds in longitude and 4.5 arcseconds in latitude (or approximately 200 meters), a more accurate inclinometer is needed. Instead of the SCA-100T1, the A701-2 from Jewell Instruments can be used, which reduces the error by a factor of 12.5 but increases the price by a factor of up to 20. On the other hand, the exposure time of the pictures taken of the sky would need to be decreased, which requires more advanced optics with greater light-gathering power; this again increases the price and of course reduces the portability of the instrument. Therefore, with the accessible facilities, accuracies of about 5.5 and 4.5 arcseconds are the most optimized results. This accuracy is only a few times weaker than GPS \cite{kaartinen2015accuracy}, making the instrument a very good alternative when GPS is not accessible or does not work well.
\bibliographystyle{spmpsci}
\section{Introduction}
This talk does not pretend to be a professional exercise in the history
of science, but all cosmologists have a fascination with origins:
thus we should expect to have a reasonable familiarity with the developments
that set our subject in motion. There is indeed a conventional narrative that
has been repeated in compressed form in innumerable classrooms and
public talks -- generally centring on \citet{Hubble1929} as having provided the first observational
evidence for an expanding universe. But this conference
represents the convergence of many individual trajectories
of re-evaluation, all reflecting a growing recognition that our standard tale
is seriously at variance with the actual events.
Chief among the casualties of this over-simplification has been V.M. Slipher.
His existence was not hidden, and he appears on the first page
of the textbook by \citet{Peebles1971}, which was a great influence on my
generation of cosmologists. But a proper appreciation of Slipher's work was
perhaps hindered by the fact that
all of his major papers appeared in the obscurity
of internal Lowell reports, or journals that ceased publication.
In 2004, I tried and failed to discover electronic versions of
Slipher's seminal papers anywhere on the web. But the Royal Observatory Edinburgh
is fortunate enough to possess an outstanding collection of
historical journals, so I was able to track down the originals and make scans.
Since then, I am proud to say that my
complaints to ADS have had an effect: not only are Slipher's main
papers all now listed (Slipher's 1917 masterpiece lacked any entry whatsoever in the
database), but you can also find the scans I made through ADS.
Reading these papers cannot fail to generate an enormous respect
for Slipher as a scientist:
they are confidently argued, and make some
points that are astonishingly perceptive with the aid of 21st-century
hindsight. Given the confusion that naturally attended the first engagement
with modern cosmological questions, this is all the more impressive. When
all is unclear, it is tempting to hedge papers with so many qualifications
that no conclusion ever emerges -- but the mark of a great scientist is to
stick your neck out and state firmly what you believe to be true.
Slipher achieves this on a number of occasions, and his conclusions
have stood the test of time.
The intention of this presentation is to try to illuminate
the magnitude of Slipher's achievements by viewing them through the eyes of a working
cosmologist, and going back to the analysis of the original data.
In particular, by comparing with modern data, the aim is to understand
why Slipher did not use the general tendency for galaxies to be
redshifted as evidence for the expansion of the universe -- but how
he came to reach an under-appreciated conclusion of similar importance.
\section{Slipher's great papers}
Before focusing on Slipher's most important paper (\citejap{Slipher1917}), it
is worth giving a brief overview of his achievements during the
period when he was the lone pioneer of `nebular' spectroscopy.
During all of this, it should be clearly borne in mind that the
nature of the nebulae was unclear in this period; although the
`island universe' hypothesis of distant stellar systems was
a known possibility, a considerable weight of opinion viewed
the spiral nebulae as planetary systems in formation.
In \citet{Slipher1913} the blueshift of Andromeda was measured to be $300\kms$.
This velocity was very high by the standards of the time, and there
could understandably be skepticism about whether this really
was a Doppler shift (cf. the quasar
redshift controversy). Slipher trenchantly asserts that ``\dots we have
at the present no other interpretation for it. Hence we may conclude
that the Andromeda Nebula is approaching the solar system\dots''. Since
the blueshift is now believed to be induced by dark-matter density
perturbations, it is amusing to note Slipher's speculation that the
nebula ``might have encountered a dark star''.
\citet{Slipher1914} was unknown to me prior to my 2004 archival search, but
appears to be the first demonstration that spiral galaxies rotate.
This would make Slipher a figure of importance, even if he had done
nothing else. A striking contrast with modern `publish or perish'
culture is Slipher's statement that he believed he had data showing
the tilt of spectral lines, but was not fully satisfied; therefore he
waited an entire year until he could repeat and check the results.
The paper in which Slipher
presented his results to the American Astronomical Society
(\citejap{Slipher1915}; August
1914 meeting) is perhaps the most well-known.
Out of 15 galaxies, 11 were clearly redshifted, and he
received a standing ovation after reporting this fact.
It must have been clear to all present
that this was an observation of deep significance -- even if the
interpretation was lacking at the time.
The 1917 paper is the most extensive of Slipher's works on
nebular spectroscopy, but surprisingly it seems to
be less well known than the papers of 1913 and 1915, and
I had never seen any mention of its contents prior to reading it
for the first time in 2004. The redshift:blueshift
ratio has now risen to 21:4, but it is the interpretation that is
startling.
\section{Slipher's intellectual leap of 1917}
Although the mean redshift of the 1917 sample is large and positive, Slipher does
not draw what might today be regarded as the obvious conclusion:
\medskip
{\narrower\noindent
The mean of the velocities with regard to sign
is positive, implying that the nebulae are receding
with a velocity of nearly 500 km.
This might suggest that the spiral nebulae are
scattering but their distribution on the sky is not
in accord with this since they are inclined to cluster.
}
\medskip
The term ``scattering'' clearly denotes a tendency to recede in
all directions, which must be regarded as the most basic symptom
of an expanding universe. The reason Slipher does not
state this as a conclusion is because there is an issue of
reference frame. Astronomers of this era were completely familiar
with the fact that the Sun moves with respect to the nearby
stars, inducing a dipole pattern in the observed velocities. It
must therefore have seemed entirely natural to fit a dipole pattern
to the sky distribution of velocities.
Slipher makes this analysis, deducing a mean velocity of $700\kms$
for the Sun and thus
noting that we are not at rest with respect to
the other galaxies on average. He then
makes a tremendous intellectual leap, which is described in language
of a beautiful clarity:
\begin{figure}[ht]
\centering
\begin{tabular}{c}
\includegraphics[scale=0.7]{moll.eps}\\
\includegraphics[scale=0.7]{moll2.eps} \\
\end{tabular}
\caption{The sky distribution of Slipher's 1917 galaxies, in Mollweide
projection of celestial coordinates, indicating the galactic plane at
$b=\pm15^\circ$. Velocities covering the range $-300\kms$ to $+1000\kms$
are coded red solid (redshift) or blue open (blueshift), with the width of the symbol
being proportional to the magnitude of the velocity. The top panel shows the
observed velocities; the lower panel shows the same data after removal of the
best-fitting dipole. Removal of the dipole reduces the mean velocity from
$502\kms$ to $145\kms$ with a dispersion of $414\kms$.}\label{peacockfig01}
\end{figure}
\medskip
{\narrower\noindent
We may in like manner determine our motion relative to the spiral
nebulae, when sufficient material becomes available. A preliminary
solution of the material at present available indicates that we are
moving in the direction of right ascension 22 hours and declination
$-22^\circ$ with a velocity of about 700 km. While the number of
nebulae is small and their distribution poor this result may still
be considered as indicating that we have some such drift through
space. For us to have such motion and the stars not show it means
that our whole stellar system moves and carries us with it. It has
for a long time been suggested that the spiral nebulae are stellar
systems seen at great distances \dots This theory, it seems to me,
gains favor in the present observations.
}
\medskip
This argument is a dizzying shift of perspective: we start in the
Milky Way looking out at the nebulae, from whose dipole reflex motion
Slipher correctly infers that the entire Milky Way is in motion at a
previously undreamed-of speed. This is almost as shocking a discovery
as Copernicus's proposal that the Earth is in motion. But then the
perspective shifts, and suddenly Slipher imagines himself to be within
one of the nebulae -- looking out at the Milky Way and other nebulae:
since they all have rms motions in the region of $400\kms$, they must
clearly all be the same kind of thing. Hence the nebulae are hugely
distant analogues of the Milky Way. This, remember, is 8 years before
Hubble detected Cepheids in Andromeda and settled the `island
universe' question directly. Slipher was not actually the first to
consider measuring the motion of the Milky Way in this fashion (see
\citejap{Sullivan1916}; \citejap{Truman1916}; \citejap{Young1916};
\citejap{Paddock1916}; \citeauthor{Wirtz1916} \citeyear{Wirtz1916},
\citeyear{Wirtz1917}; I thank Michael Way for pointing out these
references). But all these investigations used Slipher's data, and
somehow their conclusions lack his confidence and compact clarity
regarding the physical implications -- although it would be
interesting to know if he was motivated by these earlier papers.
Given the neatness of the argument that Slipher uses here, one can
hardly complain that he does not focus on the fact that the mean
redshift is non-zero, even after adjustment for the best-fitting
dipole. Indeed, this feature is not statistically compelling: the mean
redshift after dipole subtraction is $145\kms$ with an rms of
$414\kms$, which is only a $1.8\sigma$ deviation from zero. This transformation
of the data can
be seen at work in the sky distributions of Slipher's data shown in
Figure \ref{peacockfig01}:
the largest velocities are concentrated around $\alpha=12^h$,
$\delta=0^\circ$ (dominated by the Virgo cluster), and so can be
heavily reduced by an appropriate dipole -- even though the pattern of
residuals shown in the lower panel of Figure \ref{peacockfig01}
is clearly non-random.
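Slipher's solar-motion correction is just a linear least-squares fit of $v = -{\bf v}_\odot\cdot{\bf\hat r}$, followed by asking whether the residual mean differs from zero by more than its standard error $\sigma_v/\sqrt{N}$. A minimal sketch of that procedure follows; the sky positions, solar vector and scatter below are invented for illustration, not Slipher's actual table.

```python
import numpy as np

def fit_dipole(ra, dec, v):
    """Least-squares fit of model v = -v_sun . rhat:
    minimise the mean square velocity after removing a solar dipole.
    ra, dec in radians; returns v_sun and the corrected velocities."""
    rhat = np.column_stack([np.cos(dec) * np.cos(ra),
                            np.cos(dec) * np.sin(ra),
                            np.sin(dec)])
    v_sun, *_ = np.linalg.lstsq(-rhat, v, rcond=None)
    resid = v + rhat @ v_sun          # velocities corrected for solar motion
    return v_sun, resid

# Invented sky: a solar dipole plus 400 km/s random scatter (not Slipher's data)
rng = np.random.default_rng(1)
n = 25
ra = rng.uniform(0.0, 2.0 * np.pi, n)
dec = np.arcsin(rng.uniform(-1.0, 1.0, n))    # isotropic sky positions
v_sun_true = np.array([600.0, -300.0, -200.0])
rhat = np.column_stack([np.cos(dec) * np.cos(ra),
                        np.cos(dec) * np.sin(ra),
                        np.sin(dec)])
v = -rhat @ v_sun_true + rng.normal(0.0, 400.0, n)

v_sun, resid = fit_dipole(ra, dec, v)
mean, sigma = resid.mean(), resid.std(ddof=1)
significance = mean / (sigma / np.sqrt(n))    # mean over its standard error
print(v_sun, mean, sigma, significance)
```

The `significance` computed here is exactly the statistic quoted in the text: the residual mean divided by $\sigma_v/\sqrt{N}$.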
It is a great pity that Slipher
did not revisit this analysis with the redshifts he continued
to accumulate. Rather than write a further paper, he was content
simply to have these results appear in Eddington's (\citeyear{Eddington1923})
book (page 162). By this time, there were 41 velocities, of which 36 were
positive; the most negative remained at the $-300\kms$ of M31,
whereas 5 objects had redshifts above $1000\kms$, including
$1800\kms$ for NGC584, which came close to doubling the maximum
velocity of the 1917 data. If we repeat Slipher's 1917 analysis
with the expanded 1923 dataset, the mean velocity after subtracting
the best-fitting dipole rises to $201\kms$ with an
rms of $508\kms$; this is now a $2.5\sigma$ deviation from zero,
and so Slipher's velocities alone give a very clear signal of
a general tendency towards expansion (in fact, for reasons explained
below, this analysis greatly underestimates the significance of the effect).
Eddington does not attempt a dipole analysis, but his discussion
of Slipher's data clearly focuses on the high mean value of the raw data as
representing a general tendency for galaxies to be redshifted, albeit
with some dispersion. However, this is not simply an abstract statement about
the pattern of the numbers: for reasons explained in the
following section, Eddington had a theoretical
expectation of a general redshift.
\section{The theoretical prior}
By the time of Slipher's 1917 analysis, the theorists
were on the march. Two years after the
creation of General Relativity, \citet{Einstein1917} had created his
static cosmological model, introducing the cosmological
constant for the purpose. This is a wonderful paper, which can be
read in English in e.g. \citet{Bernstein1986}, and the basic
argument is one that Newton might almost have generated.
Consider an infinite uniform sea of matter, which we want to
be static (an interesting question is whether Einstein was
influenced by data in imposing this criterion, or whether
he took it to be self-evident): we want zero gravitational
force, so both the gravitational potential, $\Phi$, and
the density, $\rho$, have to be constant. The trouble is,
this is inconsistent with Poisson's equation, $\nabla^2\Phi
= 4\pi G \rho$. The `obvious' solution (argues Einstein) is
that the equation must be wrong, and he proposes instead
\be
\nabla^2\Phi + \lambda\Phi = 4\pi G \rho,
\ee
where $\lambda$ has the same logical role as the
$\Lambda$ term he then introduces into the field equations.
In fact, this is not the correct static Newtonian limit of
the field equations, which is
$\nabla^2\Phi + \Lambda = 4\pi G \rho$. But either
equation solves the question posed to Newton by
Richard Bentley concerning the fate of an infinite
mass distribution; Newton opted for a static model despite
the inconsistency analysed above:
\medskip
{\narrower\noindent \dots it seems to me that if the matter of our sun
and planets, and all the matter of the universe, were evenly
scattered throughout all the heavens, and every particle had an
innate gravity towards all the rest, and the whole space, throughout
which this matter was scattered, was but finite; the matter on the
outside of this space would by its gravity tend towards all the
matter on the inside, and by consequence fall down into the middle
of the whole space, and there compose one great spherical mass. But
if the matter was evenly dispersed throughout an infinite space, it
would never convene into one mass\dots
}
\noindent
(see e.g. pp. 94-105 of \citejap{Janiak2004}).
With the advantage of hindsight, Newton seems tantalisingly close at this time
(10 December 1692) to anticipating Friedmann by over
200 years and predicting a dynamical universe.
But at almost the same time as Einstein's work,
the first non-static cosmological
model was enunciated by \citet{deSitter1917} -- based on the same $\Lambda$ term that
was intended to ensure a static universe. It is interesting
to compare the original forms of the metric in these two
models, as they are rather similar:
\begin{eqnarray}
{\rm Einstein:}\quad d\tau^2 &=& - dr^2 - R^2\sin^2(r/R)d\psi^2 + dt^2 \\
{\rm de Sitter:}\quad d\tau^2 &=& - dr^2 - R^2\sin^2(r/R)d\psi^2 + \cos^2(r/R)dt^2
\end{eqnarray}
Staring at these naively, it is tempting to conclude that clocks
slow down at large distances, $\propto \cos(r/R)$, where $R$ is
a characteristic curvature radius of spacetime. In this
case, a redshift-distance relation would be predicted to be
$z\simeq r^2/2R^2$ i.e. quadratic in distance. But we have
made an unjustified assumption here, which is that a free
particle (or galaxy) will remain at constant $r$, which we
know does not actually happen. For this reason, the
correct redshift-distance relation is linear at lowest order. This was
first demonstrated by \citeauthor{Weyl1923} (\citeyear{Weyl1923}; the 5th
edition of his book -- frustratingly, the common Dover reprint is the
4th edition). This was also shown independently by
\citet{Silberstein1924} and \citeauthor{Lemaitre1927}
(\citeyear{Lemaitre1927}; see \citejap{Lemaitre1931} for an English
translation). Interestingly, Eddington (\citeyear{Eddington1923}) proves (on
page 161) that test particles near the
origin experience an outward acceleration proportional to
distance, and from his discussion he clearly sees that this
motion will make a contribution to the observed redshift --
but he never clearly states that the leading effect is thus
a linear term in $D(z)$.
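For reference, the naive quadratic prediction quoted above follows from a small-$r$ expansion, under the (unjustified) assumption that the emitter stays at fixed $r$, so that its clock rate relative to the origin is just the $\cos(r/R)$ factor in the de Sitter metric:
\be
1+z = {1\over\cos(r/R)} \simeq 1 + {r^2\over 2R^2}
\quad\Rightarrow\quad z \simeq {r^2\over 2R^2}.
\ee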
News of this prediction seems to have spread
rapidly, and there were soon a number of attempts
to look for a linear relation between redshift and distance.
It should be made clear that no-one at this stage was thinking
about an expanding universe (Friedmann was perhaps an
exception, but he was decoupled from the interplay between
theory and experiment in the West). The aim was to
search for the `de Sitter effect' and thus
`measure the radius of curvature of spacetime'.
This game can be played with any set of objects where
radial velocities exist, together with some indicator of
distance.
A number
of people (\citejap{Silberstein1924}; \citejap{Wirtz1924};
\citejap{Lundmark1924}) tried this, and the paper by
Lundmark is particularly impressive and comprehensive.
In general, distances to galaxies were lacking
at that time, although the detection of novae in M31 had suggested a
distance around 500 kpc, which is not too far off. What Lundmark did
was to assume that galaxies were standard objects; thus he was able to estimate distances
in units of the M31 distance, based on diameters and on apparent
magnitudes (these agree reasonably well). The distances clearly
correlate with Slipher's redshifts, as shown in Lundmark's Figure
5 (recreated here as Figure \ref{peacockfig02}). Lundmark was not as impressed
with his result as perhaps he ought to have been:
``\dots we find that there may be a relation between the
two quantities, although not a very definite one''.
But despite the scatter, a positive correlation
of distance and redshift does exist, of a significance so
obvious that it hardly needs formal quantification.
Thus by 1924 it was clear that radial velocities tended to be positive,
and to increase with distance, even if it was not possible
to say with any confidence that the redshifts scaled linearly with distance.
In any case, we reiterate that the physical understanding of the
meaning of any distance-redshift relation still
had some way to go in 1924. Despite Eddington's insight
that there was a kinematical effect at work, the common
interpretation of the de Sitter model in the above papers
was the static view that redshifts simply probed the
curvature of spacetime. And even in 1929 Hubble would mention
``the de Sitter effect'' and Eddington's argument for a kinematical
contribution without actually saying that expansion
dominates locally.
\begin{figure}[ht]
\center{\includegraphics[scale=0.5]{lundmark.eps}}
\caption{\citet{Lundmark1924} searched systematically for a linear
distance-redshift relation, using a variety of classes of
astronomical object. His most impressive result was obtained
using nebulae. Lacking any direct distance estimates for these,
he took a standard-candle approach, in which relative distances
were measured using the apparent magnitudes and/or diameters of
nebulae; results were quoted in units where the distance to
M31 was unity. The solid line shows the modern truth, assuming
$H_0=73\kmsmpc$ and a distance to M31 of 0.79 Mpc. It can be seen
that Lundmark's approach works reasonably well out to
$D/D_{\rm M31}\simeq 25$, but thereafter comes adrift as dwarf
galaxies are assigned incorrectly high distances (8 further dwarfs
exist at larger values of $D$, and these are not shown).}\label{peacockfig02}
\end{figure}
\section{Comparison with modern data}
How well might the studies of a century ago be expected to work with
modern data? Today, we can measure relative distances to a
precision of order 5\% using Cepheid variable stars out to $D\simeq 30\mpcoh$,
or out to almost arbitrary distances using SNe Ia (with the aid of the Hubble
Space Telescope in both
cases). The traditional distance ladder starting with star clusters
within the Milky Way can be used, as in the HST Key Program value of
$H_0=72\pm 8\kmsmpc$ (\citejap{Freedman2001}), or a more accurate value
obtained by absolute calibration of the Cepheid distance scale using
the maser galaxy NGC4258, yielding $H_0=73.8\pm 2.4\kmsmpc$ (\citejap{Riess2011}).
Supernovae are especially useful in studying the expansion at larger
distances, since they can readily be detected to $z\simeq1$
(or beyond with effort) -- hence the ability of the SNe Hubble diagram
to probe cosmic acceleration. Figure \ref{peacockfig03}
shows the SNe Hubble relation
in Lundmark's form out to $D=60D_{\rm M31}$, where we can see that
the relation has quiet and noisy regions: the deviation from
uniform expansion is episodic. If we had data only at
$D<30D_{\rm M31}$, there would hardly be evidence for any
correlation between distance and redshift, much less a linear
relation. Things only improve when we probe to 40 or
50 times $D_{\rm M31}$.
Once we get close enough that Cepheids can be detected
(20 Mpc or so) they are a better probe than SNe, since they
are simply more numerous while the distance precision is
comparable. Figure \ref{peacockfig04} plots local Cepheid data and shows that,
closer than the noisy region at $D=20D_{\rm M31}$, we are lucky
enough to experience an unusually quiet part of the Hubble
flow (a fact that has puzzled many workers: e.g.
\citejap{Governato1997}). Although the SNe data show that
this is in fact globally unrepresentative, it is clear that
one could be forgiven for claiming a well-defined
linear $D(z)$ relation given results out to
$D=20D_{\rm M31}$ (although not with any great significance
for distance limits half as large).
\begin{figure}[ht]
\center{\includegraphics[scale=0.55]{sn_tonry.eps}}
\caption{Data on nearby SNe Ia (taken from the compilation of \citejap{Tonry2003})
give accurate enough distances that
we can see clearly the dispersion in the $D-z$ relation caused
by peculiar velocities. This is sporadic: we can have lucky
regions where the
dispersion is low, and others, such as around $D/D_{\rm M31}=20$,
where it blows up. Typically,
we can see that high-precision distances to perhaps
$D/D_{\rm M31}=50$ would be required for a convincing demonstration of an
underlying linear relation. Again, the solid line
shows the modern truth, assuming
$H_0=73\kmsmpc$ and a distance to M31 of 0.79 Mpc.}\label{peacockfig03}
\end{figure}
\begin{figure}[ht]
\center{\includegraphics[scale=0.50]{hst_key.eps}}
\caption{The local Hubble flow is `colder' than in typical regions,
so in fact a linear $D-z$ relation might be detected given
data to $D/D_{\rm M31}\simeq15-20$. This is demonstrated by the
local Cepheid data (taken from \citejap{Freedman2001}).
Again, the solid line shows the modern truth, assuming
$H_0=73\kmsmpc$ and a distance to M31 of 0.79 Mpc.}\label{peacockfig04}
\end{figure}
\begin{figure}[ht]
\center{\includegraphics[scale=0.50]{hubble.eps}}
\caption{Hubble's 1929 data (largely Slipher's velocities, recall), on the
same scale as the previous plot. Again, the solid line
shows the modern truth, assuming
$H_0=73\kmsmpc$ and a distance to M31 of 0.79 Mpc.
Because the distances are expressed as a
ratio, the form of this plot is independent of the assumed
absolute distance calibration, so that the incorrect Cepheid
calibration used by Hubble in deriving $H\simeq 500\kmsmpc$
does not contribute here. Nevertheless, the slope of the relation
is completely wrong, so Hubble's distance estimates were hugely
in error through an independent additional effect.}\label{peacockfig05}
\end{figure}
\section{Hubble's 1929 analysis}
With the above perspective, what are we to make of \citeauthor{Hubble1929}'s
1929 paper, in which a relation between distance and redshift
was announced?
Hubble used a sample of 24 nebulae, 20 of which
had redshifts measured by Slipher; and with a maximum redshift of
$1100\kms$ the sample is no deeper than that available to Slipher
in 1917. One might therefore have expected that the mean redshift
after dipole subtraction would not be significantly non-zero.
But treating Hubble's sample in the same way as Slipher's reduces the mean
redshift from $373\kms$ to $197\kms$ with an rms of $343\kms$ -- which
is a $2.8\sigma$ deviation from zero.
The extra significance is aided by the inclusion of the LMC and
SMC: being Southern objects, they constrain the dipole velocity more
strongly and prevent solutions in which the mean redshift is made
as low as is the case with Slipher's sky coverage. Since the Magellanic systems
are so nearby as to be almost part of the Milky Way, the case for including
them is not obvious.
Let us now see what is added by Hubble's distance data.
His greatest distance was only the
rather modest $D=7.5D_{\rm M31}$, so one might have expected no significant
claims: we have seen from modern data that a linear distance-redshift
relation would not reveal itself clearly with data of twice
this depth. But in fact Hubble's data
do exhibit a correlation between
distance and redshift (Figure \ref{peacockfig05}).
This plot is made in the Lundmark
form used earlier, showing distance as $D/D_{\rm M31}$; and
comparing with the
line representing modern `truth' we see that the slope is
completely wrong. It should be clearly noted that this is not
the same phenomenon as Hubble's overestimation
of $H_0$, because we have used different coordinates. Plotting distance
as $D/D_{\rm M31}$ should remove any calibration errors; thus
Hubble's distances suffer from an entirely distinct additional problem of
internal inconsistency, in addition to the well-known
miscalibration of his Cepheid scale. The symptom
is effectively that the distances for all the most distant objects
are strongly underestimated. This could be suggestive of Malmquist
bias: the distances presented by Hubble go well beyond what
was possible with Cepheids in those days, so Hubble had switched
to using the brightest individual stars as standard candles.
These have a substantial dispersion, so the most distant
galaxies for which such distances can be inferred will be those
where individual stars are abnormally luminous -- causing the
distances to be underestimated. The effect is however extremely
large (roughly a factor 2 in distance), and a simpler alternative explanation is
that Hubble may have simply mistaken compact HII regions in the more
distant galaxies for individual stars (\citejap{Sandage1958}). This is
undeniably a great irony: by combining Slipher's effectively perfect
velocity data with distance estimates that are so badly flawed,
Hubble nevertheless routinely receives sole credit for the
discovery of the expanding universe (including the
assertion that he measured the redshifts, which is frequently
encountered in popular accounts -- and too often even in those
written by professional scientists).
\begin{table}
\caption{Statistics of various early redshift samples, showing the
influence of correction for Solar motion. This is quoted in Cartesian
components within a J2000 coordinate system, in units of $\kms$.
The dipole is the least-squares fit to a dipole-only model; but nevertheless
the mean residual ($\langle v\rangle$) can be significantly positive, when the population
standard deviation ($\sigma_v$) is converted to a standard error ($\sigma_v/N^{1/2}$).
In the case of \citet{Hubble1929} we show results with the full sample and also
excluding the LMC/SMC.
}
\begin{center}
\begin{tabular}{| c | r | r r r | r r | c |} \hline
Sample & $N$ & $v^\odot_x$ & $v^\odot_y$ &$v^\odot_z$ & $\langle v \rangle$ & $\sigma_v$ & Significance \\ \hline
S17 & 25 & 0 & 0 & 0 & 502 & 422 & $5.9\sigma$ \\
S17 & 25 & 566 & $-356$ & $-268$ & 145 & 414 & $1.8\sigma$ \\
S23 & 41 & 0 & 0 & 0 & 571 & 439 & $8.3\sigma$ \\
S23 & 41 & 467 & $-856$ & $-298$ & 201 & 508 & $2.5\sigma$ \\
H29 & 24 & 0 & 0 & 0 & 373 & 371 & $4.9\sigma$ \\
H29 & 24 & 462 & $-317$ & $-117$ & 197 & 343 & $2.8\sigma$ \\
H29 & 22 & 0 & 0 & 0 & 386 & 385 & $4.7\sigma$ \\
H29 & 22 & 426 & $-205$ & $-200$ & 159 & 370 & $2.0\sigma$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{The equivalent of Table 1, but now assuming a model
of an explicit non-zero (constant) mean velocity. Note
that, with respect to the fits of Table 1, the best-fitting
dipole is different and the dispersion
in the residuals is smaller -- representing a more
significant detection of a non-zero mean.
}
\begin{center}
\begin{tabular}{| c | r | r r r | r r | c |} \hline
Sample & $N$ & $v^\odot_x$ & $v^\odot_y$ &$v^\odot_z$ & $\langle v \rangle$ & $\sigma_v$ & Significance \\ \hline
S17 & 25 & 246 & $25$ & $-430$ & 566 & 328 & $8.6\sigma$ \\
S23 & 41 & 81 & $-109$ & $661$ & 805 & 364 & $14.1\sigma$ \\
H29 & 24 & 323 & $-267$ & 113 & 315 & 306 & $5.0\sigma$ \\
H29 & 22 & 322 & $-483$ & 424 & 493 & 286 & $8.1\sigma$ \\ \hline
\end{tabular}
\end{center}
\end{table}
\begin{table}
\caption{Statistics of Hubble's 1929 data. The population standard deviation, $\sigma_v$,
about the best-fitting linear model is given with and without dipole correction.
Results are given with the full sample and also
excluding the LMC/SMC. The units of $H$ are $\kms$, since distances are in
units of $D_{\rm M31}$.
}
\begin{center}
\begin{tabular}{| r | r r r | r r |} \hline
$N$ & $v^\odot_x$ & $v^\odot_y$ &$v^\odot_z$ & $H$ & $\sigma_v$ \\ \hline
24 & 0 & 0 & 0 & 373 & 371 \\
24 & 67 & $-219$ & 189 & 462 & 192 \\
22 & 0 & 0 & 0 & 386 & 385 \\
22 & 68 & $-235$ & 205 & 467 & 199 \\ \hline
\end{tabular}
\end{center}
\end{table}
Another interesting aspect of Hubble's analysis is that he assumes from
the start a model that includes a linear $D(z)$ relation as well as a reflex dipole:
\be
v = HD - {\bf v}_\odot \cdot {\bf\hat r}.
\ee
The famous $v-D$ plot from his 1929 paper shows not the
raw velocities, but rather the velocities corrected by the
dipole that best-fits the above relation -- i.e. the plot
has been manipulated in order to make a linear relation
look as good as possible. Admittedly, Hubble does state that
the data ``\dots indicate a linear correlation between distances
and velocities, whether the latter are used directly or corrected for solar motion.'',
but we are not shown the uncorrected plot -- and we have seen
in the case of Slipher's data that the Solar motion can change
the picture very substantially.
It is therefore worth looking carefully at the statistics of the
various samples that have been discussed, and these are collected
in Table 1. From this, it is apparent that Hubble was a little
fortunate with his
1929 data: the mean redshift
after dipole correction is substantially more significant
than Slipher's 1917 results -- and more so than even
Slipher's much deeper data of 1923. But this ceases to be
true when the LMC and SMC are removed from the 1929 sample.
Hubble's sample is therefore poised to deliver evidence
for an expanding universe, even before adding distance data.
Because of these non-zero mean velocities, we should
make it clear that
there are (at least) three distinct models worth considering:
\be
\eqalign{
(1)\ v &= - {\bf v}_\odot \cdot {\bf\hat r} \cr
(2)\ v &= \bar v - {\bf v}_\odot \cdot {\bf\hat r} \cr
(3)\ v &= HD - {\bf v}_\odot \cdot {\bf\hat r}.
}
\ee
Model 1 is all that has been considered so far.
\citet{Hubble1929} quotes $H=513\pm 60\kmsmpc$, which represents an
$8.6\sigma$ detection of a linear $D(z)$ (model 3) in comparison
to model 1 (pure dipole). But this is not the right question, since
we have seen that the mean redshift in Hubble's data is clearly
non-zero. Since model (3) naturally predicts a non-zero mean,
we have to ask whether the distance data add anything
significant beyond this fact. There are a number of ways in
which this can be assessed, and the simplest is to look at
the size of the residuals about the best-fitting linear+dipole
model. Table 2 shows these results, again with and without
the Magellanic systems.
The striking feature of Table 2 is that the standard deviations
in the residuals are smaller than in Table 1. This may seem
puzzling at first sight, since in each case the only correction
made to the data has been to remove a dipole. But in model
1 (which is what was used by \citejap{Slipher1917}), we are attempting
to minimise the mean square velocity, not the dispersion about
the mean; this naturally pushes the mean low.
If we are open to the possibility of a non-zero
mean, then we need to minimise the standard deviation. This
yields a different dipole and a more secure detection of
the mean. Indeed, the remarkable conclusion of Table 2
is that Slipher's data alone provide a very secure
detection of a non-zero mean velocity: $8.6\sigma$ in
1917 and $14.1\sigma$ in 1923. This significance is
slightly overestimated because of the reduction in
degrees of freedom caused by best-fitting the dipole.
But this would only be a slight effect -- especially with
the $N=41$ of 1923. This huge significance vindicates Eddington's ready
acceptance of a non-zero mean velocity without the need
for a detailed analysis.
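The three models differ only in their design matrices, so they can be compared in a few lines of least squares. The sketch below uses synthetic data (an invented Hubble-like term, dipole and scatter), so only the logic of Tables 1 and 2, not their numbers, is reproduced.

```python
import numpy as np

# Invented data: a linear H*D term, a solar dipole, and 150 km/s scatter
rng = np.random.default_rng(2)
n = 24
ra = rng.uniform(0.0, 2.0 * np.pi, n)
dec = np.arcsin(rng.uniform(-1.0, 1.0, n))
rhat = np.column_stack([np.cos(dec) * np.cos(ra),
                        np.cos(dec) * np.sin(ra),
                        np.sin(dec)])
D = rng.uniform(0.5, 8.0, n)                  # distances in units of D_M31
H_true = 100.0                                # km/s per D_M31 (invented)
v_sun_true = np.array([300.0, -200.0, 150.0])
v = H_true * D - rhat @ v_sun_true + rng.normal(0.0, 150.0, n)

def fit(X, v):
    """Least-squares fit of v = X @ p; return parameters and rms residual."""
    p, *_ = np.linalg.lstsq(X, v, rcond=None)
    return p, (v - X @ p).std(ddof=1)

_, s1 = fit(-rhat, v)                                   # (1) pure dipole
_, s2 = fit(np.column_stack([np.ones(n), -rhat]), v)    # (2) mean + dipole
p3, s3 = fit(np.column_stack([D, -rhat]), v)            # (3) H*D + dipole
print(s1, s2, s3, p3[0])
```

Model 2 nests model 1, so its residual dispersion can only fall; and when a genuine distance-velocity signal is present, model 3 reduces it further still, which is the pattern seen between Tables 1, 2 and 3.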
We now consider the fits of model 3, which are given in Table 3.
Model 3 is clearly a better fit than model
2: for Hubble's full sample, the rms residual is reduced from
$306\kms$ to $192\kms$ -- but is this reduction significant?
The question is whether the low rms might be simply a
statistical fluctuation downwards from a true value of around 300.
For Gaussian distributions, the standard deviation, $\epsilon$, on the
estimate of the population standard deviation, $s$, is
$\epsilon = s/[2(N-1)]^{1/2}$. We are therefore
comparing $192\pm28$ with $306\pm 45$, which is only
a $2.1\sigma$ difference. But this
statement applies only to independent samples, whereas we
have the same data fitted with two different models.
A better way to deal with this objection, plus the issue that the
dipole is fitted to the data, is to use
Monte Carlo: we hypothesise that there is no information
in Hubble's distances, so we randomly permute them, and fit
in each case a model of linear $D(z)$ plus dipole. This allows us to compute
how often the rms is lowered by as much as is observed relative
to model 2.
The answer is about 1 in 23,000 for Hubble's full sample
(a $3.9\sigma$ deviation), or 1 in 3000 if we ignore
the LMC/SMC (a $3.5\sigma$ deviation). Thus the distance
estimates do contain evidence for a correlation between
distance and redshift -- but at a lesser additional degree of
significance than the basic fact that the mean redshift
tends to be positive.
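The permutation argument can be sketched directly: shuffle the distances, refit the linear-plus-dipole model each time, and count how often the rms residual drops as low as it does with the true distance assignment. The code below runs this on synthetic data (invented distances, dipole and scatter), so the 1-in-23,000 quoted above is not reproduced, only the procedure.

```python
import numpy as np

# Invented data with a genuine distance-velocity correlation
rng = np.random.default_rng(3)
n = 24
ra = rng.uniform(0.0, 2.0 * np.pi, n)
dec = np.arcsin(rng.uniform(-1.0, 1.0, n))
rhat = np.column_stack([np.cos(dec) * np.cos(ra),
                        np.cos(dec) * np.sin(ra),
                        np.sin(dec)])
D = rng.uniform(0.5, 8.0, n)
v = 100.0 * D - rhat @ np.array([300.0, -200.0, 150.0]) \
    + rng.normal(0.0, 150.0, n)

def rms_linear_dipole(D, v):
    """rms residual about the best-fitting v = H*D - v_sun . rhat model."""
    X = np.column_stack([D, -rhat])
    p, *_ = np.linalg.lstsq(X, v, rcond=None)
    return (v - X @ p).std(ddof=1)

observed = rms_linear_dipole(D, v)
# Null hypothesis: the distances carry no information, so a random
# permutation of them should do as well as the true assignment
trials = 2000
count = sum(rms_linear_dipole(rng.permutation(D), v) <= observed
            for _ in range(trials))
p_value = (count + 1) / (trials + 1)          # add-one rule avoids p = 0
print(observed, p_value)
```

Because the dipole is refitted for every permutation, the test automatically accounts for the loss of degrees of freedom that complicates the naive Gaussian comparison of the two rms values.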
The picture that emerges from this study is thus that Hubble's 1929
work was perhaps more an exercise in
validation of a linear $D(z)$ than a discovery.
Hubble's closing quote that ``\dots the velocity-distance
relation may represent the de Sitter effect\dots'' shows
that he was certainly aware of the theoretical prediction
that motivated earlier studies, such as that of \citet{Lundmark1924}.
Hubble is not explicit in his introduction about the role
that theory played in his work, although he did state that previous
(un-named) investigations had sought
``\dots a correlation between apparent radial velocities
and distances, but so far the results have not been
convincing''. Since this previous work was motivated
by a search for the de Sitter effect, we can conclude
that Hubble was influenced by the same theoretical
prior as Lundmark in 1924 -- and it is debatable which
of these investigations achieved greater success in
tracking down their quarry.
\section{Peculiar velocities today}
\subsection{Velocities and structure formation}
Slipher's demonstration that the Milky Way is not at rest is as
revolutionary a moment as Bradley's proof in 1728 from
stellar aberration that the Earth is in motion. We see
this effect today most clearly in the dipole component of the
Cosmic Microwave Background,
which measures the Solar motion as $368\kms$. The fact
that this differs from Slipher's $700\kms$ is further proof that
his sample of galaxies is not deep enough to be a fair sample
of the universe, from which one could really expect to
measure the expansion.
But Slipher's data were deep enough to
show that all galaxies have a random component
to their velocities, so that the universe contains a
peculiar velocity field. These deviations from the general
expansion have been of great importance in
cosmological research over the past several decades. The
significance of peculiar velocities is that they must have
their origin in the gravitational forces that cause the
growth of cosmic inhomogeneities. If the dimensionless
density fluctuation, $\delta$, is defined by
$\rho = (1+\delta)\langle\rho\rangle$, then conservation
of mass requires
\be
{\partial\delta\over\partial t} = -\nabla\cdot\left[(1+\delta)\,{\bf u}\right]
\simeq -\nabla\cdot{\bf u},
\ee
where ${\bf u}$ is a comoving peculiar velocity: the physical peculiar
velocity is $\delta {\bf v} = a {\bf u}$, where $a(t)$ is the
dimensionless cosmic scale factor. The last equality holds in
the linear limit of small density fluctuations.
The perturbation growth rate is more commonly written in terms of the
logarithmic derivative, $f_g$:
\be
f_g\equiv{\partial \ln\delta\over \partial \ln a} =
{1\over H\delta} {\partial\delta\over\partial t} = -{1\over H\delta}\,\nabla\cdot{\bf u}.
\ee
In this form, the growth rate depends purely on the density of the
universe, and in years gone by this was seen as a powerful
route towards measuring the matter density. Today, with the density
measured accurately via the CMB, the focus
has shifted to using the growth of structure as a test of Einstein's
relativistic theory of gravity. This boils down to the common approximation
\be
f_g \simeq \Omega_m(a)^\gamma,
\ee
where $\gamma\simeq 0.55$ for Einstein gravity, largely
independent of the value of any cosmological constant, but
non-standard gravity models can yield values that differ
from this by several tenths
(\citejap{Peebles1980}; \citejap{Lahav1991}; \citejap{Linder2007}).
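This approximation is easy to evaluate numerically; the sketch below assumes a flat $\Lambda$-dominated model with $\Omega_m=0.25$ and neglects radiation:

```python
import numpy as np

def omega_m_of_a(a, om0=0.25):
    """Matter density parameter Omega_m(a) in a flat LCDM model
    (radiation neglected)."""
    return om0 * a**-3 / (om0 * a**-3 + (1.0 - om0))

def growth_rate(a, om0=0.25, gamma=0.55):
    """f_g = dln(delta)/dln(a) via the Omega_m(a)^gamma approximation."""
    return omega_m_of_a(a, om0) ** gamma

z = np.array([0.0, 0.5, 1.0, 2.0])
fg = growth_rate(1.0 / (1.0 + z))   # rises towards unity at high redshift
```

At high redshift the universe is close to Einstein-de Sitter, so $f_g \to 1$; dark energy suppresses the growth rate at late times.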
The motivation for thinking about deviations from Einstein gravity is not simply
that it is always a good idea to verify fundamental
assumptions of a field where possible. The possibility that
Einstein's theory may be incorrect derives its motivation from
the most radical constituent of modern cosmology: the deduction
that roughly 75\% of the mean density is contributed by a
nearly uniform component termed dark energy. So far, the
properties of this substance are empirically indistinguishable
from a cosmological constant or vacuum energy, but are we really
sure that the dark energy exists? The doubt comes not through
uncertain data, but because the inference derives entirely from
the expansion history of the universe, which is interpreted
via the Friedmann equation
\be
H^2(a) = H_0^2\left(\Omega_r a^{-4} + \Omega_m a^{-3}
+ (1-\Omega_{\rm total})a^{-2} + \Omega_v \right).
\ee
Empirically, it is impossible to match the data on $H(a)$
using only known matter constituents without adding a constant term on the
right-hand side.
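A direct way to see this is to tabulate the dimensionless Hubble parameter $E(a)\equiv H(a)/H_0$ from the Friedmann equation; the density parameters below are illustrative assumptions:

```python
def e_of_a(a, om_r=8.4e-5, om_m=0.25, om_v=0.75):
    """E(a) = H(a)/H0 from the Friedmann equation; the curvature term
    uses Omega_total = Omega_r + Omega_m + Omega_v."""
    om_tot = om_r + om_m + om_v
    return (om_r * a**-4 + om_m * a**-3
            + (1.0 - om_tot) * a**-2 + om_v) ** 0.5

def adot(a, **kw):
    """da/dt in units of H0: adot = a * E(a)."""
    return a * e_of_a(a, **kw)

# adot increasing with a near a = 1 signals acceleration, which
# known matter constituents alone cannot produce:
accelerating = adot(1.0) > adot(0.9)
matter_only = adot(1.0, om_v=0.0, om_m=0.25) > adot(0.9, om_v=0.0, om_m=0.25)
```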
But this might simply say that the Friedmann equation is
wrong; it could be that some alternative to Einstein gravity
might generate a Friedmann equation containing a constant term
without needing to introduce dark energy as a physical
substance. The way to distinguish between these options is to look
for a scale dependence of any gravitational modifications,
and the peculiar velocities associated with the
growth of structure are a perfect tool for this job,
since they measure the strength of gravity on scales
of $\sim 10 - 100 \mpcoh$. As a result, studies of the
growth rate of perturbations have, together with gravitational
lensing, assumed huge importance in recent years as an
industry has built up around cosmological tests of
gravity (see e.g. \citejap{Jain2010}).
\subsection{Direct velocity measurements}
There are two main ways in which the growth rate can be
measured, and the first to receive attention was the most
direct: estimate the peculiar velocity field from data.
To do this requires some means of estimating distances, since
\be
\delta v = v - HD
\ee
(assuming low enough redshifts that the cosmological and
Doppler peculiar redshifts simply add; at higher redshifts
we should multiply the $1+z$ factors). Taking the divergence
of the peculiar velocities inferred in this way is problematic
since we only observe the radial component. This can be cured by
adding the assumption that density perturbations under
gravitational instability are expected to be in the growing
mode, in which the velocities are irrotational.
Thus ${\bf u} = -\nabla\psi$, where $\psi$ is a velocity
potential -- which can be measured by integrating along the
line of sight (\citejap{Bertschinger1989}).
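A one-dimensional sketch of this line-of-sight integration follows; the exponential velocity profile and all numbers are purely illustrative:

```python
import numpy as np

# Toy radial peculiar-velocity profile along one line of sight.
r = np.linspace(0.0, 100.0, 201)        # distance, Mpc/h
u_r = 300.0 * np.exp(-r / 40.0)         # km/s, illustrative profile

# psi(r) = -integral_0^r u_r dr' (trapezoid rule), so that u = -grad(psi):
steps = 0.5 * (u_r[1:] + u_r[:-1]) * np.diff(r)
psi = -np.concatenate([[0.0], np.cumsum(steps)])

# Consistency check: differentiating the potential recovers u_r.
u_recovered = -np.gradient(psi, r)
```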
There are two problems with this method. The difficulty of
principle is that the divergence of ${\bf u}$ is proportional
to $\delta$ times $f_g$, so we need to know the absolute
level of density fluctuations. This is not so easy when using
galaxies as tracers, because they are {\it biased\/}:
$\delta_{\rm gal}\simeq b\delta$ on large scales. Thus
we measure not $\Omega_m^\gamma$, but $\Omega_m^\gamma/b$.
The second difficulty is the practical one: the only tracers
of peculiar velocities that have high space densities are galaxies,
so we need to treat them as some kind of standard candles
in order to deduce distances. Even with luminosities calibrated
by an internal velocity (the `Tully-Fisher method' for spirals;
the `fundamental plane' for ellipticals), the distances are
good to only around 20\%, and this scatter necessitates
careful statistical treatment in order to avoid Malmquist bias
and related effects.
A number of studies appeared in the 1990s claiming that these
problems could be cured (e.g. \citejap{Sigad1998}), and the consistent
result was a high value of $\Omega_m^\gamma/b$, close to unity.
It was possible to argue from the statistics of the collapse of
dark-matter haloes that $b$ should never be very much less
than unity for any given class of galaxy (e.g. \citejap{Cole1989};
\citejap{Mo1996}), and therefore these results were seen as
supporting a high matter density -- most naturally $\Omega_m=1$.
This flat model was known to be in good agreement with the
early limits on CMB fluctuations; these ruled out low-density
open models, so that a cosmological constant was the only option
if a low matter density $\Omega_m\simeq 0.2$ was preferred. A
good body of evidence existed at that time (ranging from large-scale galaxy
clustering to the baryon fraction in rich clusters) to suggest
that $\Omega_m=1$ was too high, so there were strong arguments
in favour of a $\Lambda$-dominated model (\citejap{Efstathiou1990});
the last resistance to such a model crumbled with the
arrival of the high-redshift supernova data in the late 1990s.
From this point on, direct use of peculiar velocity estimates has
been somewhat neglected. No convincing explanation has really been given
for why the 1990s velocity measurements gave what is now
considered to be too high a density, and indeed discrepant
results continue to exist in the form of apparent `streaming
velocities' that are inconsistent with what we think we know
about the mass distribution (e.g. \citejap{Watkins2009};
\citejap{Kashlinsky2009}). But this is probably a common
situation in science: where the evidence for a standard model is
strong, discrepant results are most likely to be flawed and so
a community is rightly reluctant to invest too much effort in
understanding what has gone wrong. Sometimes this approach
will ignore the key to a revolution, of course, but the large
dispersion in individual peculiar velocity estimates means that
a claimed rejection of $\Lambda$CDM based on such data will
continue to be treated skeptically.
\subsection{Redshift-space distortions}
Nevertheless, peculiar velocities remain a major tool in
conventional cosmology. This is because of the existence
of major galaxy redshift surveys, where up to $10^6$ galaxies
are used to build up a picture of the 3D distribution of
luminous matter. Such surveys have turned out to be
fantastic statistical tools, because the power spectrum of
fluctuations contains characteristic lengths that can be
measured and used as a diagnostic of conditions in the
early universe. Chief among these are the relatively broad
curvature in the spectrum around the horizon size at
matter-radiation equality, and the sharper feature at the
acoustic horizon following last scattering. These are
mainly sensitive to the density of the universe, and gave some
of the first evidence for low-density models, as mentioned above.
Today, the frontier is to measure the angular
scale corresponding to these lengths as a function
of redshift, mapping the $D(z)$ relation with standard rulers.
But the 3D picture given by redshift surveys is distorted
in the radial direction by peculiar velocities, and in a
complicated way that is correlated with the actual structures
to be studied. Rather than being a bug, this is a feature:
it causes observed galaxy clustering to be anisotropic in
a way that allows a very precise statistical characterization of
the amplitude of peculiar velocities.
\begin{figure}[ht]
\center{\includegraphics[scale=0.65]{xi2d_x4.eps}}
\caption{Redshift-space clustering as measured in the GAMA survey,
split by colour of galaxy, together with theoretical models
allowing for different degrees of linear flattening and of
the pairwise dispersion (red galaxies to the left; blue to
the right). As expected, the higher-mass haloes hosting red
galaxies give them a higher pairwise dispersion and a higher
bias -- thus a lower large-scale flattening.}\label{peacockfig06}
\end{figure}
Redshift-space distortions of clustering were first given a comprehensive
analysis by \citet{Kaiser1987}. In the limit of a distant observer, where all
pairs subtend small angles, the apparent anisotropic power spectrum
for some biased tracer is given in linear theory by
\be
\eqalign{
P(k,\mu) &= P_m(k) (b+f_g\mu^2)^2 \cr
&= b^2P_m(k) (1+\beta \mu^2)^2; \quad \beta\equiv f_g/b,\cr}
\ee
where $P_m(k)$ is the linear matter power spectrum,
$f_g$ is the desired growth factor, $f_g\equiv d\ln\delta/d\ln a$ and
$b$ is a linear bias parameter.
It is common to find this model extended to allow for `Fingers of God',
in which the density field is convolved radially by random virialized velocities in haloes.
Most usually an exponential pairwise velocity distribution is adopted,
with rms $\sigma_p$ (expressed as a length),
leading to Lorentzian line-of-sight damping in Fourier space:
\be
P_s(k,\mu)= b^2P(k) (1+\beta \mu^2)^2 / (1+k^2\mu^2\sigma_p^2/2).
\ee
The original derivation is only valid for small density fluctuations
in the linear regime, but this expression has been used with some success
inserting the non-linear real-space power spectrum of galaxies in
place of $b^2P(k)$.
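The damped Kaiser model can be written compactly as a function of $(k,\mu)$; the parameter values below are illustrative assumptions only:

```python
import numpy as np

def p_s(k, mu, p_real, beta=0.4, b=1.2, sigma_p=4.0):
    """Anisotropic redshift-space power: Kaiser boost times Lorentzian
    Finger-of-God damping (sigma_p is the pairwise dispersion,
    expressed as a length in Mpc/h)."""
    kaiser = (1.0 + beta * mu**2) ** 2
    damping = 1.0 / (1.0 + k**2 * mu**2 * sigma_p**2 / 2.0)
    return b**2 * p_real * kaiser * damping

# On large scales (small k) radial pairs are boosted relative to
# transverse ones; on small scales the dispersion damping wins:
large = p_s(0.02, 1.0, 1.0) / p_s(0.02, 0.0, 1.0)   # > 1
small = p_s(1.0, 1.0, 1.0) / p_s(1.0, 0.0, 1.0)     # < 1
```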
An example of such modelling is shown in
Figure \ref{peacockfig06}, which presents preliminary results from the GAMA
survey (\citejap{Driver2011}). Here we see the galaxy population
split by colour, with the result that the red population shows
larger fingers of God, and less pronounced large-scale flattening
(a smaller value of $\beta$). Both these results can be understood
in terms of the typical mass of the dark-matter haloes hosting the
galaxies: where this is larger, the small-scale velocity dispersion
is larger and the large-scale clustering amplitude increases
(which reduces $\beta$, since it is $\propto 1/b$).
The bias parameter is hard to predict a priori,
meaning that this method is unable to yield a direct measurement
of the perturbation growth rate without additional assumptions.
The way this is
dealt with in practice is to realise that the real-space
clustering amplitude of galaxies is observable, so that the
bias can be measured if a model for the mass fluctuations
is assumed. At the level of a consistency check, this can be
a standard $\Lambda$CDM model taken from CMB and other data.
A slightly more general way of putting this is to say that galaxy
data determine $b\sigma_8$, where $\sigma_8$ is the usual
normalization measure of density fluctuations: the linear-theory
extrapolated fractional rms variation when averaged in spheres
of radius $8\mpcoh$. Thus the slightly unlovely combination
$f_g(z)\sigma_8(z)$ can be measured in an approximately
model-independent fashion. A compilation of recent
estimates of this quantity is shown in Figure \ref{peacockfig07}, which
shows impressive consistency with the standard model,
indicating that Einstein's relativistic theory of gravity
can be verified at about the 5-10\% level on scales
$\sim 10-30\mpcoh$ over a wide range of cosmological time.
This is hardly yet at the level of precision of solar-system
tests, but this limit will in due course be brought down to the
per cent level by future experiments such as ESA's Euclid
satellite. This should be launched around 2020 (\citejap{Laureijs2011}), and
will provide redshifts for around 50 million galaxies in a redshift band around
$z\simeq 2$, as opposed to current studies which are based on
in total around one million galaxies in the smaller volume at
$z<1$.
\begin{figure}[ht]
\center{\includegraphics[scale=0.5]{fgzplot2.eps}}
\caption{A plot of various measurements of the linear growth of
density perturbations inferred from redshift-space distortions
(see e.g. \citejap{Beutler2012} for a compilation of the data).
The solid line shows a default flat $\Lambda$-dominated model
with $\Omega_m=0.25$ and $\sigma_8=0.8$, which matches the
data very well (perhaps too well: 9 out of 10 measurements
agree with the theory to within 1$\sigma$).}\label{peacockfig07}
\end{figure}
\section{Discussion and conclusions}
It is irresistible to speculate about how Slipher might feel
were he able to hear us talk calmly of having measured a
million galaxy redshifts, and how we plan to increase
this number a hundredfold -- when each of his measurements
cost him several nights standing alone in the cold. But
from the anecdotes aired at this meeting, one suspects
he might not have been all that jealous, since he seems
to have found a deep attraction in the basic process of
observing. And it is undeniable that something is in danger
of being lost as we pursue large-scale cosmology with an
industrial efficiency; there are declining opportunities
for young astronomers to work at telescopes and experience
that sense of a mystical connection to the cosmos that
comes from standing by a telescope in the dark under
a clear sky. As the machines become larger, one way
of retaining that sense of wonder is to remember the
efforts of the pioneers.
And Slipher was a great pioneer; not simply through his instrumental
virtuosity in achieving reliable velocities where others had failed,
but through the clarity of reasoning he applied. Respect for what he
did and did not claim can only be increased by the exercise presented
here of analyzing his data as if the information was freshly available,
and trying as best we can to rid the mind of modern preconceptions. At the depth
to which he worked, and with the restrictions of sky coverage, it was
hard for the signature of a general expansion to stand out.
But it is hard not to wonder
what would have happened if data for more southerly or even slightly more distant
galaxies had been available; Slipher's comment in 1917 that
generally positive velocities ``\dots might suggest that the spiral
nebulae are scattering\dots'' suggests that he would have been open to
the conclusion of a general expansion. As shown above, such a result
can actually be obtained from Slipher's 1917 data
(a $>8\sigma$ detection of a non-zero mean velocity, even after
allowance for the best-fitting Solar dipole), and the signal
rises to $14\sigma$ with the expanded dataset that Slipher
gave freely to Eddington and others in 1923. It is more than a little
surprising that no-one attempted to repeat Slipher's 1917 work
with this expanded material, in which case Slipher could have
been clearly established as the discoverer of the expanding universe.
Unlike Hubble and other
workers from the 1920s, Slipher in 1917 lacked the theoretical
prior of a predicted linear distance-redshift relation, which de Sitter
only published the same year. Slipher was simply looking for a message
that emerged directly from the data, and it is therefore all the more
impressive that he was able to reach his beautiful 1917 conclusions
concerning the motion of the Milky Way and the nature of spiral nebulae as similar
stellar systems. But this is characteristic of Slipher's work: right from
his early assertion that the velocity of M31 must be Doppler in origin,
he was willing to stick his neck out and state firm conclusions when
he believed that these were supported by the data.
Rather than using hindsight to regret that he did not focus
on the non-zero mean velocity of his data, we should look on
with admiration at how much he was nevertheless able to learn from the
observations he had gathered.
By adding distance data to existing velocities, \citet{Hubble1929} claimed
not only that the mean velocity was a redshift, but that redshift correlated
linearly with distance. We have seen that Hubble was fortunate in a number
of ways to have been able to make such a claim with the material to hand:
(1) peculiar velocities are unusually low in the local
volume; (2) his mean redshift was higher than Slipher's in 1917,
despite the sample containing no greater velocities;
(3) he included the LMC and SMC, which could be viewed
as unjustified; (4) his distance estimates were flawed
in two distinct ways. Also, Hubble considered from the
outset only the hypothesis of a linear relation between
distance and redshift, and never asked how much his
information added to the simple statement that the mean
velocity was positive (which we have seen accounts for
the majority of the statistical weight in his result).
Hubble admitted that he was following up previous
searches for a distance-redshift correlation, and
these studies were explicitly motivated by the theoretical
prior of the de Sitter effect. If this prediction had
been absent in 1929, one wonders if claims of a
linear distance-redshift relation would have been made at
that time.
If the data in 1929 were really too shallow for a truly robust proof
of a linear distance-redshift relation, when was this first
seen unequivocally? Credit is often given to \citet{Hubble1931},
who pushed the maximum velocity out to $20,000\kms$ -- ten times
what had been achieved by Slipher. But the distances used in that
paper were based on the same unjustified
assumption used by Lundmark in 1924: that
galaxies could be treated as standard objects.
Indeed, Hubble gives a pre-echo of this argument in his
1929 paper, referring to the large redshift of NGC7619.
Because galaxies at these distances lacked any sort
of well-justified distance estimates,
one could imagine that the 1931 paper should have received a
good deal of critical skepticism
-- but by this time a linear $D(z)$ was already regarded as
having been proved.
In fact, right through the 1980s,
cosmology journals and conferences were treated to a continuing
critique of a linear $D(z)$ as deduced from galaxy data
by Irving Segal (e.g. \citejap{Segal1989}). Segal made major contributions to quantum
field theory, and could hardly be dismissed as a crank; the
basic problem is that, even when calibrated dynamically
as in the Tully-Fisher method, the scatter in galaxy properties
is so large that getting distances to better than around 20\%
is not feasible. Thus it was only really in the
1990s, with HST extending the reach of Cepheids and SNe Ia giving
accurate distances, that we could verify what had been generally
assumed to be true since 1929/1931.
But if the work on the distance scale in the 1990s closed the chapter
on the local distance-redshift relation that was begun in the 1920s,
Slipher's other main legacy to modern cosmology remains as relevant
as ever. The peculiar velocity field that he discovered has become
one of the centrepieces of modern efforts to measure the nature of
gravity on cosmological scales. Hence we have come full circle, from
assuming the correctness of Einstein's relativistic gravity (and of the de
Sitter solution in particular) to search for
evidence of expansion in the 1920s, to the present-day use of data on peculiar velocities
to tell us if the theory is correct. Slipher would probably have
been happy to see things being done in this direction.
\section{Introduction}
One of the most fundamental questions that can be asked about jets
associated with active galactic nuclei (AGN) is how they evolve from
their dense, gas-rich parsec-scale environments out to scales of
hundreds of kiloparsecs, well outside their host galaxies. The
capability of radio wavelength interferometers to penetrate the dense
gas and dust in the centers of AGN host galaxies at high resolution
has brought us tantalizingly close to fully answering this
question. In this review, I briefly describe our current understanding
of young radio jet evolution, and the
relative role played by jet-environment interactions. I begin in
\S2 by discussing what has been learned from statistical
population studies, and devote Sections~3 and 4 to numerical jet
simulations and individual VLBA case studies that have improved our
understanding of interactions between AGN jets and their parsec-scale
environments.
\section{\label{evolution}Evolution of young AGN jets}
Our current knowledge of radio jet evolution owes a great deal to the
gigahertz-peaked spectrum (GPS) class of radio source, which comprises
roughly $10\%$ of flux-limited samples at
cm-wavelengths. Originally classified in early surveys as `compact
doubles' by \cite{PM82}, subsequent improvements in VLBI capabilities
revealed weak central components, and in some cases faint bridges of
emission connecting them with bright outer features. It was soon
recognized that these AGN were miniature versions of the classical
kpc-scale lobe-core-lobe radio galaxies, with similar total radio
powers, but over a thousand times smaller in extent.
Based on observed size trends in GPS and compact steep spectrum (CSS)
sources (e.g., \citealt*{JS02}), self-similar expansion models (e.g.,
\citealt*{Beg96,BDO97}) were developed in which the overall linear extent
of the jets grow in proportion with their hotspot diameters. These
hotspots remain in ram pressure equilibrium with the external medium,
which implies that the evolution of the source is strongly dictated by
the density profile of the ISM. Numerical simulations (see \S~3) of
jets expanding into power-law external density profiles confirmed that
a large bow shock forms ahead of the hotspot, allowing the latter to
expand smoothly and propagate outward relatively unimpeded. Unlike the
dentist drill model for kpc-scale lobes, very little side-to-side
motion is expected for the pc-scale hotspot. Spectacular confirmation
of these models came with the first measurements of hotspot proper
motions in GPS radio galaxies (e.g., \citealt*{OCP99}), which
displayed predominantly outward (non-transverse) motion. The derived
kinematic ages, based on constant expansion, were typically $\sim
1000$ y \citep{GTP05}, confirming that these were in fact recently
launched jets.
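The kinematic-age estimate is a simple ratio of hotspot separation to its measured growth rate; the numbers below are typical illustrative values, not a specific source:

```python
# Constants (SI-adjacent conveniences).
PC_M = 3.0857e16        # metres per parsec
C_M_S = 2.9979e8        # speed of light, m/s
YR_S = 3.156e7          # seconds per year
ARCSEC_RAD = 1.0 / 206265.0

def kinematic_age_yr(sep_mas, mu_mas_yr):
    """Age assuming constant expansion at the measured separation rate."""
    return sep_mas / mu_mas_yr

def expansion_speed_c(mu_mas_yr, d_mpc):
    """Transverse hotspot separation speed in units of c at distance d_mpc."""
    rad_per_yr = mu_mas_yr * 1e-3 * ARCSEC_RAD
    return rad_per_yr * d_mpc * 1e6 * PC_M / (YR_S * C_M_S)

# Illustrative GPS-like numbers: 30 mas hotspot separation growing
# at 0.03 mas/yr for a source at ~1 Gpc.
age = kinematic_age_yr(30.0, 0.03)       # ~1000 yr
v_c = expansion_speed_c(0.03, 1000.0)    # ~0.5c apparent separation speed
```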
The first problems with the standard scenario arose with detailed
studies of population statistics. In a steady-state population, one
would expect a rather flat distribution of kinematic ages, but in
fact, the observed one is peaked at young ages
\citep{GTP05}. A similar conclusion had been reached previously
by independent authors who considered the luminosity functions of GPS
sources (e.g., \citealt*{Beg96,RTP96}). Given their high luminosities,
the young radio sources were too numerous compared to their more aged
radio galaxy cousins, implying that they must either dim rapidly or die
out completely before reaching sizes of a few kpc. A lingering issue
of current debate is the relative importance of AGN fueling and
environmental interactions in dictating the evolution of radio jets at
this critical evolutionary stage.
\subsection{AGN fueling and intermittent jet activity}
Although a simple argument for intermittent jet activity in AGN can be
found in the fact that only $\sim10\%$ of all AGN associated with
super-massive black holes are radio loud, yet the lifetimes of
individual AGN are on the order of a few hundred Myr, true `smoking
gun'-type evidence has become available only relatively recently. The
most compelling has been the discovery of the `double-double' class of
radio galaxy \citep{SDR00}, of which roughly a dozen are currently
known \citep{MTM06}. These sources contain two sets of nested radio
lobes, which are symmetric with respect to a central component
associated with the active nucleus. The inner double resembles in many
ways a GPS source, with a peaked radio spectrum, bright hotspots, and
fast expansion speed. The outer lobe structures, on the other hand,
have sizes comparable to those of the largest known radio
galaxies. The notable gap in radio emission between the two
components is indicative of a long quiescent period, on the order of
$10^6-10^7$ yr, in which the jet was presumably switched off (e.g.,
\citealt*{OKB01}).
Understanding intermittent jet activity in AGN is undoubtedly an
important factor in building a complete model of jet evolution (e.g.,
\citealt*{RB97}). However, it is still a nascent field in which the
necessary statistical samples (needed because of the long evolutionary
timescales involved) are still being gathered. As I will describe
below, considerably greater progress has been made in understanding the
role played by jet-ISM interactions in affecting AGN jet evolution.
\subsection{Basic forms of jet-ISM interaction}
Because they are relatively light compared to their external
environments (density contrasts on the order of $10^{-3}$,
e.g., \citealt*{Krause03}), AGN jets are highly susceptible to external
interactions, which can be classified roughly into three main areas:
\begin{itemize}
\item {\bf Bow shock-hotspot interaction} at the jet terminus, as
in the standard models described above.
\item {\bf Cloud collisions}, which can cause bending and disruption of the flow.
\item {\bf Entrainment}, leading to shear layers, deceleration,
instabilities, and possible particle acceleration at the jet boundaries.
\end{itemize}
Although much is known about the physics of entrainment in
kiloparsec-scale jets, progress on parsec-scales has been limited by
several factors. These include the difficulty of observing faint,
diffuse emission at the jet boundaries with limited dynamic-range
VLBI, as well as a paucity of bright, nearby AGN jets which we can
resolve in a transverse direction to the flow. Furthermore, studies of
the crucial 100-1000 milliarcsecond region where jets may undergo
strong internal changes due to entrainment have been hampered by the
lack of a suitable interferometer matching the sensitivity of the VLA
or VLBA. For these reasons I will concentrate hereafter on the issue
of jet interactions with dense clouds in the nuclear region of the
host galaxy.
\section{\label{sims}Numerical jet-cloud simulations}
Numerical simulations continue to play a vital role in understanding
the structure and evolution of AGN jets, by providing the ability to
test various scenarios under controlled conditions. Early numerical
jet-medium interaction studies were able to reproduce classical bow
shock and hotspot structures by propagating supersonic outflows into
external media with uniform density and pressure gradients (e.g.,
\citealt*{HN90}). The extension of MHD codes to the fully
three-dimensional, relativistic regime has made it possible to
robustly examine powerful jet evolution through a more realistic,
non-uniform medium for the first time. I describe here two such
studies (\citealt{CW07}, and \citealt*{SB07}), that are of particular
relevance to young jet evolution.
The simulations of \cite{CW07} employ a fully 3-D, pure hydrodynamic
code to simulate the passage of the relativistic jet through a
two-phase medium. The latter consists of a single dense cloud embedded
in a constant-pressure gas. They examined cases of both high ($\Gamma
= 7$) and low ($\Gamma = 2.29$) Lorentz factor jets striking the cloud
slightly off-axis. During the interaction, an oblique shock forms in
the jet, causing it to bend. Unlike previous non-relativistic studies
(e.g., \citealt*{wang00,higgins99}), the flow itself does not undergo
any significant deceleration or decollimation, and remains stable
after the interaction event. By varying the cloud-to-ambient medium
ratio, the authors find that the highest deflections occur in the case
of low-Mach number jets hitting denser clouds, with cloud density
being the dominant factor. Thicker clouds end up being less
encompassed by the bow shock, allowing earlier interaction with the
Mach disk and stronger oblique shocks in the flow. The clouds
themselves can actually survive the event, provided the cloud/jet
density contrast is high enough to suppress most Kelvin-Helmholtz
instabilities. These regions of shocked gas may be important star
formation sites (see \S~4) and may play a role in creating the
emission-line/jet alignment effect in AGN (e.g.,
\citealt*{MBS87}).
\cite{SB07} investigate the more general case of a
jet propagating through an inhomogeneous medium in the form of a
massive ($10^{10} \;\mathrm{M_{\sun}}$), turbulently supported disk
plus a hot ($10^7$ K) ISM. Like \cite{CW07}, they use a fully 3-D pure
hydrodynamic code, although in this case a non-relativistic one for
which they derive relativistic scaling parameters according to
\cite{KF96}. In the initial phase of their simulations of a $\sim
10^{43}\; \mathrm {erg\; s^{-1}}$ jet, the morphology looks strikingly
different from that seen in other studies that assume a uniform ISM,
in that the flow attempts to seek out and pass through the
lowest-density locations in the clumpy (fractal) medium. In doing so,
multiple channels are formed and reformed, followed by the formation
of quasi-spherical bubbles around the jet and counter-jets that expand
outward. Making simple assumptions about the gas emissivity, the
authors find that these bubbles should be prominent in hard
X-rays. Once the jet reaches the outer edge of the disk and clears the
last obstruction, a stable, linear outflow develops, containing the
standard re-collimation and bow shock structures. At this point it
pierces the expanding bubble and evolves as in the uniform medium
case.
The authors find a good deal of similarity between the predicted radio
emission from their simulations and the compact symmetric object (CSO)
4C~31.04 \citep{CFG95}. This young radio source is characterized by a
large asymmetry in its jet and counter-jet structure, as well as lobe
spectral index gradients that are difficult to reconcile with standard
models of cocoon backflow \citep{giro03}. Comparison with their
simulations led \cite{SB07} to suggest that the western lobe may be
near the end of the breakout phase, whereas the eastern lobe is at a
slightly earlier stage of evolution. The strong apparent northward
deflection of the western lobe flow at the hotspot is also reminiscent
of structure found in the simulations of \cite{CW07}.
The conclusion that can be drawn from these studies is that powerful
relativistic jets are not likely to be permanently stifled by either
direct jet-cloud collisions or a dense, clumpy external
medium. Instead, it is more likely that they all pass through an
evolutionary stage in which the flow may be bent and not necessarily
well-collimated. The duration of this stage is largely determined by
the power of the jet, and to a lesser extent, the jet/medium density
contrast. The good initial agreement between these preliminary
simulations and observed jet structure suggests that through careful
study of jet morphologies of young radio sources, it may be possible
to identify the precursors to both high- and low-power radio
galaxies, as well as to characterize their early evolutionary paths.
\section{\label{cases}VLBA studies of jet-environment interactions}
In addition to providing measurements of kinematic expansion speeds,
the VLBA provides a variety of unique tools for studying jet-medium
interactions on parsec scales. These include HI absorption
measurements, Faraday de-polarization and electric vector rotation
measurements at sub-milliarcsecond resolution levels. I discuss here
several recent VLBA studies of ISM interactions in weak Seyfert
jets, as well as in powerful blazars.
\subsection{Seyfert galaxies}
The relative proximity (15-20 Mpc) of Seyfert galaxies makes them
ideal targets for investigating jet-environment effects with the VLBA
at spatial resolutions approaching several thousand A.U. Given that
their jet powers are typically a factor of 100-1000 smaller than
radio-loud quasars (e.g., \citealt*{gold99}), they are much more
subject to entrainment and disruption (e.g.,
\citealt*{deyoung06}). Their sporadic accretion rate also offers the
chance to examine in detail the effects of central engine disruption
on jet structure.
\subsubsection{NGC 4151:}
The nearly-face on Seyfert 1.5 galaxy NGC 4151 has been the subject of
many intensive VLBI studies, due to its well-defined, two-sided,
$\sim100$ parsec-long radio jets, as well as the large quantity of
neutral gas in its nuclear region. HST imaging has revealed numerous
ionized gas clouds in an inner region that is extended about an axis
roughly aligned with the radio jets \citep{hutchings98, kaiser00}. The
spatial geometry of the narrow-line region suggests a thick molecular
torus aligned perpendicular to the jet, which is confirmed by
$\mathrm{H_2}$ measurements \citep{fernan99}. VLBA absorption data
have also provided evidence for an inner HI ring \citep{ulvestad98,
mundell03}. The radio spectral flattening and brightness enhancement
of the jet at this location led \cite{mundell03} to suggest that this
marks a site of jet-ISM interaction. Although the VLBA images lack
sufficient dynamic range to fully examine the extremely weak surface
brightness structure, the jet does undergo an abrupt deviation at this
point, in a manner similar to the jet-cloud simulations of
\cite{CW07}. \cite{mundell03} found the HI absorption line profiles to
vary significantly toward different portions of the jet, indicating a
medium composed of clumpy dense clouds with a variety of
velocities. Although they speculate that some of the other bright
knots in the jet may be the result of jet-cloud encounters, the
authors rule out shock ionization as the main source of the NLR, based
on its imprecise alignment with respect to the radio jet, and the
presence of several low-velocity clouds very near the jet that show no
signs of interaction.
\subsubsection{NGC 3079:}
This is another good example of a Seyfert jet in a dense environment,
albeit in this case the galaxy is viewed nearly edge-on
\citep{sosa01}. Using a series of VLBA measurements over a six year
period, \cite{middel07} have discovered complex kinematics and
variable jet emission in this source. They found one bright jet knot
initially moving at nearly 0.1 c, only to watch it decelerate and
become virtually stationary during the final year of their
observations. During this time its flux density increased and its
spectrum changed to a convex free-free/synchrotron-self absorbed
profile. This behavior is consistent with that expected from the
jet-cloud simulations described in \S~\ref{sims}. Furthermore, the
source contains several steeper spectrum features well off the main
jet axis, which could perhaps be remnants of earlier flow channels
as predicted by \cite{SB07}. NGC 3079 thus provides an excellent example
of the potential of multi-epoch VLBA studies for exploring the
kinematics of jet-cloud interactions at exceedingly high spatial
resolution.
\subsubsection{PKS 1345+12:}
The ultra-luminous infrared galaxy IRAS 13451+1232 is a recent merger
system with significantly distorted optical morphology and a binary
nucleus, the northwest of which has been classified as a Seyfert 2
(e.g., \citealt*{scoville00}). The latter also contains a spectacular
radio jet (PKS 1345+12), which extends nearly 200 pc in a continuous,
sinusoidal pattern \citep{LKV03}. The counter-jet is also visible, but
only out to $\sim 50$ pc from the nucleus. Although these properties
are consistent with the CSO class, this object is unique in that
\cite{LKV03} measured speeds of 1 c in the innermost
jet region, as well as high fractional polarization at the location of
the southern hotspot. The latter is significant as it implies a
continuous resupply of energy, i.e., the southern jet is not stifled
by this very gas rich galaxy.
By fitting to the apparent ridge line, apparent speeds, and
jet/counter-jet ratio, \cite{LKV03} concluded that the jet follows a
three-dimensional, conical helix aligned 82 degrees from our line of
sight, with an intrinsic flow speed of $\sim 0.8$ c. Similar sinusoidal
ridge lines seen in other CSOs and blazars have led various authors to
conclude that these may be the result of growing Kelvin-Helmholtz
instability modes, driven by small perturbations at the jet nozzle and
excited by interaction with the medium at the jet boundaries. The
northern counter-jet shows a deviation from the predicted best-fit
helical path, and is truncated at the site of dense HI absorption
($>10^{22} \;\mathrm{cm^{-2}}$; \citealt*{morganti05}). This appears
therefore to be a clear case where asymmetries in the external
environment have a strong differential impact on the morphology and
evolutionary rates of the jet and counter-jet of a young radio source.
\subsection{Blazar Jets}
Despite their much larger distances, blazar jets can also serve as
useful probes of parsec-scale jet interactions. First, because they
are viewed directly down the opening in the obscuring torus, there is
much less de-polarization, meaning that the jet polarization and
magnetic field properties can be directly studied. This also means
that any intervening gas can be potentially studied via Faraday
rotation measures (e.g., \citealt*{ZT05}). Second, any slight
deviations in the flow that may be caused by interactions are greatly
magnified by projection effects. Finally, because of Doppler effects,
there are many examples of blazars where over a century of jet evolution is
compressed into a span of only a few years of observing time (e.g.,
\citealt*{KL04}).
\subsubsection{3C 279:}
The powerful jet in the quasar 3C 279 was one of the first in
which superluminal motion was witnessed, and it has been the target of
intensive study in a variety of wave-bands. The jet has been
regularly imaged since 1994 by the 2 cm Survey \citep{KL04} and MOJAVE
\citep{LH05} programs with the VLBA at a wavelength of 2 cm. Shorter
wavelength (7 mm) VLBA monitoring \citep{jorstad04,jorstad05} has
revealed a regular swing in the ejection direction of the jet close to
the nozzle, over a timescale of 3 years. \cite{homan03} describe one
prominent jet feature (C4) that was ejected in late 1984, which moved
steadily along a linear path for over a decade with an apparent speed
of 8 c, before suddenly undergoing an increase in brightness and
change in polarization angle in 1998. These events were followed
shortly thereafter by a rapid apparent acceleration to 13 c, and a
change in trajectory of 26 degrees. Under the most conservative
assumptions, \cite{homan03} found that these changes were consistent
with an intrinsic bend of only 0.5 to 1 degree. Given that the
brightening and polarization changed {\it before} the change in
trajectory, the most plausible scenario is one in which C4 is
interacting with the external environment. Furthermore, the direction
of the new trajectory closely matches that of another feature ejected
several decades previously, which rules out a random jet-cloud
collision. The authors suggest instead that the event represents a
collimation of the jet resulting from a jet-boundary interaction at a
de-projected distance $\gtrsim 1$ kpc from the nucleus. Since this is
the first such event to be witnessed in an AGN jet, it is difficult as
yet to draw solid conclusions on the validity of this model. However,
large intensive VLBA monitoring programs such as MOJAVE \citep{LH05}
may soon provide additional examples for further study.
\subsubsection{3C 120:}
Although classified as a Seyfert 1, this nearby (z = 0.033)
broad-lined galaxy shares many properties with blazars, including
superluminal motions of up to 6 c, a one-sided radio jet, and flux
variability. \cite{axon89} found high-velocity emission line
components in the host galaxy that suggested interaction between the
jet and gas clouds in the NLR. The excellent spatial resolution (0.1
pc) achievable by the VLBA at 43 GHz has enabled detailed study of its
jet evolution in both total intensity and linear polarization
\citep{gomez01, jorstad05}. The jet is resolved perpendicular to the
flow direction, and a distinct asymmetry is seen between the northern
and southern edges. In particular, \cite{gomez01} have found a
distinct region in the southern edge, approximately 8 pc
(de-projected) from the base of the jet, where moving jet features
show marked changes as they pass through. These include a brightening
in flux density, and a rotation of their polarization electric vector
position angles (EVPAs). These events are different from that
witnessed in 3C 279, since in this case no accelerations are
seen. \cite{gomez01} conclude that the most likely explanation is
interaction with a cloud, which causes Faraday rotation of the EVPAs,
and shocking of the jet material. There is also an indication of a
slight bend at the interaction site, although the jet remains
well-collimated downstream. Ideally it would be useful to study
additional examples of this type of interaction, but unfortunately
there are still very few known bright jets that are close enough to be
resolved transversely by the VLBA, and yet have viewing angles small
enough not to be heavily de-polarized by foreground nuclear gas.
\section{Summary}
High-resolution radio observations of young radio jets associated with
gigahertz-peaked spectrum AGN have led to considerable insight into
the evolutionary processes of AGN jets. Kinematic and population
studies have shown that these young radio sources undergo a
significant decline in numbers when they reach sizes of $\sim 1$
kpc. VLBA studies of individual jets have provided clear evidence for
interaction with clouds in their external environment, suggesting
stifling by dense gas as a possible cause. However, detailed
numerical simulations of jet-environment interactions indicate that
dense, clumpy environments can only temporarily stifle the flow of
powerful jets, even in the case of direct jet-cloud
collisions. Furthermore, the discovery of ``double-double'' galaxies has
provided solid evidence of recurrent jet activity in powerful AGN. It
therefore appears likely that variable accretion rates play a major
role in determining the evolutionary paths of many AGN. The enhanced
resolution and sensitivity of upcoming facilities such as VSOP-II, the
EVLA, and the SKA should provide many new opportunities for studying
the evolution of young radio sources and their interactions with their
external environment.
\section{Introduction}
One of the great triumphs of theoretical physics is Onsager's solution of the two-dimensional Ising model of statistical mechanics \cite{Onsager44}. As his solution included the case of anisotropic couplings, it yields also the spectrum of the associated Ising quantum spin chain. It is therefore quite surprising that his method of solution is not widely known, and seems to have gained a reputation for incomprehensibility. This is rather a shame, since not only is the calculation quite clear and elegant,\footnote{The reputation likely arose because Onsager wrote out the details of the calculation, instead of following the currently fashionable practice of treating essential knowledge as supplemental information.} but he describes a very interesting approach to the problem.
What Onsager did was to show that the transfer matrix can be constructed in terms of operators that are elements of a very elegant and simple infinite-dimensional Lie algebra now bearing his name. Other elements of the Onsager algebra are not part of the transfer matrix, but still have nice commutation properties. These other operators thus can be used to construct raising and lowering operators that map between eigenstates of the transfer matrix or quantum Hamiltonian with different energies. The exact spectrum then can be computed by exploiting one further property: the particular representation of the algebra arising in the Ising model is finite-dimensional, with size depending only linearly on the number of sites. Namely, with periodic boundary conditions, the elements of this representation themselves obey a nice periodicity condition, allowing quasiparticle momenta to be defined and quantised.
Soon after Onsager's work, Kaufman realised \cite{Kaufman49a,Kaufman49b} that fermionic operators arise naturally in the Ising model, allowing a closely related but distinct approach to solving the model using a Jordan-Wigner transformation \cite{Schultz64}. This fermionic method is even easier, and so for the most part the Onsager algebra itself was no longer exploited. Moreover, the elements of the Onsager algebra in the Ising model are all free-fermion bilinears, and their commutation relations are nice because any commutator of such fermion bilinears yields a linear combination of bilinears.
It thus seemed sensible to expect that the Onsager algebra is merely one of the many marvellous properties of free-fermionic systems, and so only occurs in such. This expectation, however, is simply wrong. Motivated by some curious observations by Howes, Kadanoff and den Nijs \cite{Howes83}, von Gehlen and Rittenberg made the remarkable observation that the Onsager algebra is obeyed by operators in an $n$-state clock model \cite{Gehlen84}. They then show that the algebra allows construction of an infinite series of commuting local conserved charges, strongly suggesting a certain chiral clock model commuting with them is integrable.\footnote{Ironically, the Onsager algebra here does not shed much light on the original curious observations of \cite{Howes83}, which instead are best understood by utilising parafermionic operators \cite{Fradkin80,Fendley12}.} This model is indeed integrable, and now bears the name of the superintegrable chiral Potts model. The integrability allows many of its properties to be computed \cite{Albertini89}, but the Onsager algebra is not heavily utilised in this analysis.
The Onsager algebra cannot be used to solve chiral clock models directly because its elements here do not have the simple periodicity property that the Ising presentation has. Thus what is free-fermionic about Onsager's original solution is the periodicity property, not the algebra itself. However, while progress has been made in understanding how the Onsager algebra relates to more standard approaches to integrability (see \cite{Baseilhac18} and references therein), the question remains: what more can the Onsager algebra tell us about properties of clock models?
The purpose of this paper is to define and analyse a series of clock models that have the Onsager algebra as a symmetry algebra: all its elements commute with the Hamiltonian and transfer matrix. We believe this is the simplest set of such models. We dub them the self-dual $U(1)$ clock models, as the ${\mathbb Z}_n$ symmetry is promoted to a full $U(1)$ here. Although these models turn out to be special cases of the integrable XXZ chains of higher spin, the Onsager-algebra symmetry results in a number of striking properties not well understood in the more general setting. In particular, the spectrum should contain degeneracies because this symmetry algebra is non-abelian. We show that for any $n$, these degeneracies do appear, organising the states into multiplets of size $2^N$ for integer $N$. More general degeneracies in the XXZ spectrum have been found by using a ``loop-group'' symmetry \cite{Korff01,FabriciusMcCoy,FabriciusMcCoy2,NishinoDeguchi}, but the approach here is much simpler. Indeed, our results are quite reminiscent of the appearance of Yangian symmetries in long-range quantum-spin chains \cite{Haldane94}.
The basic idea behind our approach is to exploit the combination of self-duality with $U(1)$ symmetry. Kramers-Wannier duality originally arose in the Ising model, relating a partition function in the disordered phase to one in the ordered, with the phase transition occurring at the self-dual coupling \cite{Kramers41}. One of the main motivations for introducing clock models with ${\mathbb Z}_n$ symmetry and Potts models with $S_n$ symmetry was to give other models exhibiting the same type of duality \cite{Baxter82}. We consider nearest-neighbour self-dual clock models whose Hamiltonians and transfer matrices preserve a $U(1)$ symmetry. The self-duality means that the models must exhibit {\em two} $U(1)$ symmetries, the original one generated by an operator $Q$, and another one generated by its dual $\widehat{Q}$.
The key observation is that these two $U(1)$ symmetry operators do not commute, but in fact generate the Onsager algebra! This proves remarkably easy to see. Namely, in a nearest-neighbour Hamiltonian such as ours, acting with $\widehat{Q}$ can change the eigenvalues of $Q$ only by $0,\pm n$. We show this fact explicitly below in section \ref{sec:Onsager}. Thus the dual $U(1)$ operator can be decomposed into a sum of three terms as
\begin{equation}
\widehat{Q} = {Q}^0 + {Q}^+ + {Q}^- \,,
\label{qhatdecomposition}
\end{equation}
where $Q^\pm$ change the charge by $\pm n$, so that
\begin{align}
\big[Q,\,{Q}^0\big]=0\ ,\qquad\quad \big[Q,\, {Q}^\pm\big] = \pm n\,{Q}^\pm \ .
\label{Qpmalg}
\end{align}
Because $\widehat{Q}$ can be decomposed in such a fashion, it follows immediately that
\begin{align*}
\big[Q,\,\widehat{Q}\big] &= n\big(Q^+-Q^-\big)\ ,\cr
\big[Q,\big[Q,\,\widehat{Q}\big]\big] &= n^2\big(Q^+ +Q^-\big)\ ,\cr
\Big[Q,\big[Q,\big[Q,\,\widehat{Q}\big]\big]\Big] &= n^3\big(Q^+ -Q^-\big)\ .
\end{align*}
Therefore
\begin{align}
\Big[Q,\big[Q,\big[Q,\,\widehat{Q}\big]\big]\Big]= n^2 \big[Q,\,\widehat{Q}\big]\ ,
\label{Dolan1}
\end{align}
and then self-duality requires
\begin{align}
\Big[\widehat{Q},\big[\widehat{Q},\big[\widehat{Q},\,Q\big]\big]\Big] =n^2 \big[\widehat{Q},\,Q\big]\ .
\label{Dolan2}
\end{align}
The relations \eqref{Dolan1} and \eqref{Dolan2} are known as the {\em Dolan-Grady relations} \cite{Dolan81}. Repeatedly commuting with $Q$ and $\widehat{Q}$ subject to these constraints generates the Onsager algebra. Moreover, using solely the Dolan-Grady relations and the Jacobi identity allows the full infinite-dimensional Lie algebra to be written out explicitly with no further constraints \cite{Davies1,Davies2}. We give this algebra in \eqref{Onsager} below. It is amusing to note that the superintegrable chiral Potts Hamiltonians, the place where this chapter in the story started, are in this language simply
\begin{align}
H_{\rm SI}= Q +\lambda \widehat{Q}\
\label{HSI}
\end{align}
for some real coupling $\lambda$.
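Since the derivation of \eqref{Dolan1} uses only the decomposition \eqref{qhatdecomposition} and the commutators \eqref{Qpmalg}, it can be illustrated numerically with random matrices carrying the required charge structure. The following NumPy sketch (all variable names are ours, chosen for illustration) verifies the triple-commutator identity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim = 3, 12

q = rng.integers(-4, 5, size=dim).astype(float)   # arbitrary integer U(1) charges
Q = np.diag(q)

# split a random matrix into charge sectors using [Q, A]_{ij} = (q_i - q_j) A_{ij},
# keeping only the pieces that shift the charge by 0 or +-n, as for Qhat in the text
M = rng.normal(size=(dim, dim))
diff = q[:, None] - q[None, :]
Q0 = np.where(diff == 0, M, 0.0)
Qp = np.where(diff == n, M, 0.0)
Qm = np.where(diff == -n, M, 0.0)
Qhat = Q0 + Qp + Qm

def comm(A, B):
    return A @ B - B @ A

assert np.allclose(comm(Q, Qp), n * Qp)           # eq. (Qpmalg)
assert np.allclose(comm(Q, Qm), -n * Qm)
assert np.allclose(comm(Q, comm(Q, comm(Q, Qhat))), n**2 * comm(Q, Qhat))   # eq. (Dolan1)
```

The check passes for any diagonal $Q$ with integer charges, which is exactly the content of the argument above.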
In section 2, we define the self-dual $U(1)$-invariant $n$-state quantum Hamiltonian and show how the Onsager algebra appears as a symmetry algebra. We also demonstrate another remarkable feature connected to the presence of the Onsager algebra: the Hamiltonians can be split into left- and right-moving pieces that commute with each other. These allow the definition of a set of commuting chiral Hamiltonians that interpolate between the ferromagnetic and antiferromagnetic cases while remaining integrable.
In section 3, we start to explore the degeneracies resulting from the Onsager symmetry algebra. Because of the lack of periodicity of the generators, we cannot derive the multiplicities directly. Instead, we show explicitly in the $n=2$ free-fermion case how the degenerate multiplets are $2^N$ dimensional, and present numerical evidence that a similar structure persists for all $n$.
In section 4, we relate our Hamiltonians to those of the spin-$(n-1)/2$ integrable XXZ chains, and use the correspondence to define a set of commuting transfer matrices. We bring the Onsager algebra into the transfer-matrix setting by showing how transfer matrices built using non-fundamental representations of the quantum-group algebra $U_q(sl_2)$ provide generating functions for the Onsager elements.
In section 5, we analyse the spectrum using the coordinate Bethe ansatz. In this approach the degeneracies stemming from the Onsager symmetry are a consequence of the appearance of exact $n$-string solutions of the Bethe equations, known \cite{Baxtercompleteness} but not heavily studied. We use these solutions to start understanding how to make precise the structure of the degenerate multiplets.
In section 6, we combine the results of sections 4 and 5 to go further in characterising the degeneracies. In particular, we utilise the $T$-$Q$ relations familiar from integrable models \cite{Baxter82} to define operators that create and annihilate the exact $n$-strings. We then give our conclusions in section 7.
\section{The \texorpdfstring{$U(1)$}{U1}-invariant clock models and their symmetries}
\label{sec:symmetries}
\subsection{The self-dual model}
The Hilbert space for models we study consists of an $n$-state quantum ``spin'' on each of the $L$ sites of a chain, i.e. $(\mathbb{C}^n)^{\otimes L}$. The operators $\tau_j,\sigma_j$ act non-trivially only on the $j^{\text{th}}$ spin, i.e.\ as $\tau_j=1\otimes 1\otimes\dots1\otimes\tau\otimes1\otimes\dots 1$. They generalise the Pauli matrices and satisfy
\begin{align}
\sigma_j^n=\tau_j^n = 1\,, \qquad
\sigma_j^\dagger = \sigma_j^{n-1}\,, \qquad
\tau_j^\dagger = \tau_j^{n-1}\,, \qquad
\sigma_j \tau_j = \omega \tau_j \sigma_j \,, \qquad
\sigma_j \tau_k = \tau_k \sigma_j\,,
\label{algebra}
\end{align}
where the parameter $\omega= e^{2 i \pi /n} $ and $j\ne k$. Very little of what follows will require an explicit matrix representation, but a basis where the $\tau_j$ are all diagonal is given by taking
\begin{align}
\tau= \left(
\begin{array}{cccc}
1 & & & \\
& \omega & &\\
& & \ddots &\\
& & & \omega^{n-1}
\end{array}
\right) \,,
\qquad
\sigma = \left(
\begin{array}{cccc}
0 & 1 & & \\
& \ddots & \ddots & \\
& & \ddots & 1\\
1 & & & 0
\end{array}
\right) \, .
\label{tausigma}
\end{align}
Thus in this basis $\tau$ can be thought of as measuring the value of the spin, while $\sigma$ shifts it.
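As a quick sanity check, the explicit matrices \eqref{tausigma} can be verified numerically to satisfy the algebra \eqref{algebra}; a short NumPy sketch (variable names ours):

```python
import numpy as np
from numpy.linalg import matrix_power

n = 5
omega = np.exp(2j * np.pi / n)

tau = np.diag(omega ** np.arange(n))     # diagonal clock matrix of eq. (tausigma)
sigma = np.roll(np.eye(n), 1, axis=1)    # ones above the diagonal, plus the corner

identity = np.eye(n)
assert np.allclose(matrix_power(tau, n), identity)          # tau^n = 1
assert np.allclose(matrix_power(sigma, n), identity)        # sigma^n = 1
assert np.allclose(tau.conj().T, matrix_power(tau, n - 1))  # tau^dagger = tau^{n-1}
assert np.allclose(sigma.conj().T, matrix_power(sigma, n - 1))
assert np.allclose(sigma @ tau, omega * tau @ sigma)        # sigma tau = omega tau sigma
```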
The simplest, and most widely studied, version of the $n$-state clock chain has Hamiltonian
\begin{align}
H_{\mathbb{Z}_n} = - \sum_{j=1}^L
\sum_{a=1}^{n-1} \alpha_a (\tau_j)^a
- \sum_{j=1}^L \sum_{a=1}^{n-1} \widehat{\alpha}_a (\sigma_j^\dagger \sigma_{j+1})^a \,,
\label{HZn}
\end{align}
where hermiticity requires that the couplings obey $\alpha_a^* = \alpha_{n-a}$, $\widehat{\alpha}_a^* = \widehat{\alpha}_{n-a}$.
This Hamiltonian is invariant under the global $\mathbb{Z}_n$ symmetry $\tau_j \to \omega \tau_j$. A famous special case called the $n$-state Potts model arises by equating $\alpha_j=\alpha_{j'}$ and $\widehat{\alpha}_j=\widehat{\alpha}_{j'}$ for all $j,j'$, and so promotes the symmetry to the permutation group $S_n$. However, \eqref{HZn} need not have any symmetries other than $\mathbb{Z}_n$; for example taking any of the $\alpha_a$ complex breaks time-reversal symmetry, while taking any $\widehat{\alpha}_a$ complex breaks spatial parity symmetry.
One reason these models were introduced and widely studied is that they generalize the quantum Ising chain (the $n=2$ case) in a fairly natural way. In particular, they allow for Kramers-Wannier duality \cite{Kramers41}, exchanging high and low temperatures in the corresponding classical model. The most important part of the duality transformation for this translation-invariant system
can be taken to be
\begin{align}
\tau_j
\longrightarrow
\sigma_j^\dagger \sigma_{j+1}\,,\qquad\quad
\sigma_j^\dagger \sigma_{j+1}\longrightarrow
\tau_{j+1}\ ,
\label{duality}
\end{align}
up to some subtleties with boundary conditions. The key observation is that the duality transformation preserves
the algebra \eqref{algebra}. Duality interchanges the two types of terms, and so the model is self-dual with periodic boundary conditions when $\alpha_a = \widehat{\alpha}_a$ for all $a$.
This Hamiltonian \eqref{HZn} is typically not integrable for $n>2$. A well-known integrable case corresponds to the self-dual point of the $n$-state Potts model, where $\alpha_j=\alpha_{j'}=\widehat{\alpha}_j=\widehat{\alpha}_{j'}$. This chain describes the transition (second-order for $n\le 4$ \cite{Baxter82,Duminil15} and first-order for $n>4$ \cite{Duminil16}) between an ordered phase with $S_n$ symmetry breaking and a disordered phase. A self-dual integrable point with $\mathbb{Z}_n\times \mathbb{Z}_2$ symmetry is critical for all $n$ \cite{Fateev82}, and in the continuum is described by the ``parafermion'' conformal field theory \cite{Fateev85}. There exists a two-parameter integrable deformation \cite{Perk97} called the ``chiral Potts model'', although this model has in general only a $\mathbb{Z}_n$ symmetry, not $S_n$. The superintegrable Hamiltonian \eqref{HSI} is a one-parameter subset of this model.
The purpose of this paper is to analyse in depth another integrable model generalising \eqref{HZn} to have an even larger symmetry, promoting the ${\mathbb{Z}_n}$ symmetry to a full $U(1)$ symmetry. The key observation is that a particular linear combination of powers of the $\tau$ matrix defined in \eqref{tausigma} gives a $U(1)$ symmetry generator $S^z$.
Namely, the operator $Q$ defined by
\begin{align}
Q=\sum_{j=1}^L S^z_j \ ,\qquad\quad S^z_j =\sum_{a=1}^{n-1}\frac{1}{1-\omega^{-a}} (\tau_j)^a
\label{Qdef}
\end{align}
is a $U(1)$ charge.
The single-site operator $S^z$ is that occurring in the spin-$(n-1)/2$ representation of the $SU(2)$ algebra, as using the explicit form for $\tau$ in \eqref{tausigma} gives the diagonal $n\times n$ matrix
whose entries are $(S^z)_{bb'} = \frac{1}{2}(n+1-2b)\delta_{bb'}$. Acting with $\sigma_j$ on an eigenstate of $Q$ gives a state whose eigenvalue of $Q$ either increases by 1 or decreases by $n-1$. Thus the operator $Q$ does not commute with the Hamiltonian \eqref{HZn}, because terms in the latter can violate conservation of $Q$ by $\pm n$.
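The identification of the sum in \eqref{Qdef} with the diagonal spin-$(n-1)/2$ matrix can be confirmed numerically; for example (NumPy, names ours):

```python
import numpy as np
from numpy.linalg import matrix_power

n = 4
omega = np.exp(2j * np.pi / n)
tau = np.diag(omega ** np.arange(n))

# S^z as the weighted sum of powers of tau in eq. (Qdef)
Sz = sum(matrix_power(tau, a) / (1 - omega ** (-a)) for a in range(1, n))

# expected diagonal entries (n + 1 - 2b)/2 for b = 1, ..., n
b = np.arange(1, n + 1)
assert np.allclose(Sz, np.diag((n + 1 - 2 * b) / 2))
```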
We instead consider another nearest-neighbour Hamiltonian that does commute with $Q$. The trick is to combine $\tau$ and $\sigma$ operators to remove the $U(1)$-violating processes. The unique such $U(1)$-invariant Hamiltonian with self-duality and only nearest-neighbour interactions is then
\begin{align}
H_n =
i \sum_{j=1}^L \sum_{a=1}^{n-1} \frac{1}{1-\omega^{-a}}
\Bigg[(2a-n) \left(\tau_j^a + (\sigma_j^\dagger \sigma_{j+1})^a \right)
+ \sum_{b=1}^{n-1} \frac{1 - \omega^{-a b}}{1-\omega^{-b}} \left( \tau_j^b (\sigma_j^\dagger \sigma_{j+1})^a + (\sigma_j^\dagger \sigma_{j+1})^a \tau_{j+1}^b \right) \Bigg]\,.
\label{Hnchiral}
\end{align}
It is worth noting that when this Hamiltonian is written in terms of parafermionic operators \cite{Fradkin80,Fendley12}, each term involves at most only three consecutive such operators, explaining how the model can be both self-dual and nearest-neighbour while still being more complicated than \eqref{HZn}.
While the self-duality of $H_n$ is apparent in the form (\ref{Hnchiral}), the $U(1)$ conservation is not. Although it is not difficult to show directly that it indeed commutes with $Q$, it is more illuminating to rewrite it in terms of
\begin{align}
S^+_j\equiv \sigma_j\left(1-\frac{1}{n}\sum_{a=0}^{n-1}(\tau_j)^a\right)\ ,
\qquad\quad S^-_j= (S^+_j)^\dagger\ .
\end{align}
Acting on a single site in the basis \eqref{tausigma} where $\tau$ is diagonal, $S^\pm$ has matrix elements $(S^\pm)_{bb'}= \delta_{b',b\pm 1}$. These generators therefore satisfy
\begin{align}
\big[Q,\,S^\pm_j\big] = \pm S^\pm_j\ .
\label{QScomm}
\end{align}
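The matrix form of $S^\pm$ and the commutators \eqref{QScomm} can likewise be checked numerically on a single site (a single site suffices, since $Q$ acts site by site); a NumPy sketch, with names ours:

```python
import numpy as np
from numpy.linalg import matrix_power

n = 4
omega = np.exp(2j * np.pi / n)
tau = np.diag(omega ** np.arange(n))
sigma = np.roll(np.eye(n), 1, axis=1)

proj = sum(matrix_power(tau, a) for a in range(n)) / n    # projects onto the tau = 1 state
Splus = sigma @ (np.eye(n) - proj)
Sminus = Splus.conj().T
Sz = sum(matrix_power(tau, a) / (1 - omega ** (-a)) for a in range(1, n))

# S^+ is the bare superdiagonal (the wrap-around term of sigma is projected out),
# so it raises S^z by exactly one
assert np.allclose(Splus, np.diag(np.ones(n - 1), 1))
assert np.allclose(Sz @ Splus - Splus @ Sz, Splus)
assert np.allclose(Sz @ Sminus - Sminus @ Sz, -Sminus)
```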
Then we show in the Appendix that $H_n$ can be rewritten in the remarkably simple form
\begin{align}
H_n =
i \sum\limits_{j=1}^L \sum\limits_{a=1}^{n-1} \frac{1}{1-\omega^{-a}} \biggl[ (2a-n) \tau_j^a + n \left(S_j^+S_{j+1}^-\right)^{n-a} - n \left(S_j^- S_{j+1}^+\right)^a \biggr]\,.
\label{HnchiralSpm}
\end{align}
Using \eqref{QScomm} shows immediately that the Hamiltonian $H_n$ is $U(1)$ invariant.
The commutator $[S^+,\,S^-]$ is not proportional to $S^z$, so the three do {\em not} satisfy the $SU(2)$ commutation relations and the model does not have an $SU(2)$ symmetry. However, we exploit in section \ref{sec:transfermatrices} their connection to representations of the quantum-group algebra $U_q(sl_2)$, a deformation of $SU(2)$.
We refer to the model with Hamiltonian $H_n$ in
\eqref{Hnchiral} or \eqref{HnchiralSpm} as the {\it self-dual $U(1)$-invariant clock model}. This model has appeared before as a particular case of the integrable XXZ chain of spin $(n-1)/2$, as we will detail in section \ref{sec:XXZ}.
For $n=2$, it is bilinear in fermionic operators and so a free theory; we solve it in section \ref{sec:n2}. For $n=3$, it also has arisen in the study of models based on the Temperley-Lieb algebra \cite{Ikhlef,Ikhlef2}. In a separate paper \cite{Phasediagram}, we will describe the rich physics of a Hamiltonian given by a linear combination of \eqref{HZn} and \eqref{HnchiralSpm}.
\subsection{Onsager symmetry}
\label{sec:Onsager}
We will devote much of this paper to describing the many interesting symmetries of $H_n$. One remarkable feature of the Hamiltonian $H_n$ is that despite its being a strongly interacting spin chain for $n>2$, it is quite simple to show that it has a symmetry algebra with an infinite number of generators as $L\to\infty$. This feature arises because $H_n$ is self-dual and commutes with $Q$. It therefore must also commute with the dual of $Q$:
\begin{align}
\big[\widehat{Q},\,H\big]=0\qquad \hbox{ for }\quad \widehat{Q} = \sum_{j=1}^L \sum_{a=1}^{n-1} \frac{1}{1-\omega^{-a}}(\sigma_j^\dagger \sigma_{j+1})^a \, .
\end{align}
The Hamiltonian thus has a second $U(1)$ symmetry. The interesting symmetries arise because $Q$ and $\widehat{Q}$ do {\em not} commute with each other. Repeatedly commuting $Q$ and $\widehat{Q}$ gives rise to an infinite-dimensional Lie algebra called the {\em Onsager algebra} \cite{Onsager44}.
A remarkable feature of our self-dual Hamiltonian $H_n$ is that since both $Q$ and $\widehat{Q}$ commute with it, all the Onsager-algebra elements do as well. Moreover, as explained in the introduction, thinking about $Q$ as generating a $U(1)$ symmetry allows the key relations of the algebra to be found with almost no work. The reason why $Q$ and $\widehat{Q}$ do not commute, and why $\widehat{Q}$ can be split as (\ref{qhatdecomposition}), is that acting with $(\sigma^\dagger_j\sigma_{j+1})^a$ can change the charge under $Q$ by $\pm n$. The ensuing Dolan-Grady conditions (\ref{Dolan1},\ref{Dolan2}), together with the Jacobi identity, give the Onsager algebra, as proved in \cite{Davies2}.
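Both Dolan-Grady relations can also be verified directly for the clock-model charges on a small periodic chain. The following NumPy sketch builds $Q$ and $\widehat{Q}$ for $n=3$ and $L=3$ (helper names ours):

```python
import numpy as np
from numpy.linalg import matrix_power
from functools import reduce

n, L = 3, 3
omega = np.exp(2j * np.pi / n)
I = np.eye(n)
tau = np.diag(omega ** np.arange(n))
sigma = np.roll(np.eye(n), 1, axis=1)
Sz1 = sum(matrix_power(tau, a) / (1 - omega ** (-a)) for a in range(1, n))

def embed(ops):
    """Tensor a dict {site: single-site operator} into the L-site chain."""
    return reduce(np.kron, [ops.get(j, I) for j in range(L)])

def comm(A, B):
    return A @ B - B @ A

Q = sum(embed({j: Sz1}) for j in range(L))
# Qhat = sum_j sum_a (sigma_j^dagger sigma_{j+1})^a / (1 - omega^{-a}), periodic
Qhat = sum(embed({j: matrix_power(sigma.conj().T, a),
                  (j + 1) % L: matrix_power(sigma, a)}) / (1 - omega ** (-a))
           for j in range(L) for a in range(1, n))

assert not np.allclose(comm(Q, Qhat), 0)   # the two U(1) charges do not commute
assert np.allclose(comm(Q, comm(Q, comm(Q, Qhat))), n**2 * comm(Q, Qhat))
assert np.allclose(comm(Qhat, comm(Qhat, comm(Qhat, Q))), n**2 * comm(Qhat, Q))
```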
The Onsager algebra is typically given in the form originally found by Onsager. While this form is natural when writing the elements in terms of Majorana fermion operators, it obscures the $U(1)$ structure. To make the $U(1)$ structure more apparent, we instead display this algebra in terms of a set of generators $Q^0_m$, $Q^+_m$, and $Q^-_m$, with $m$ an integer and $Q^\pm_{-m} \equiv -Q^{\pm}_m$ and $Q^0_{-m}\equiv Q^0_m$. Denoting $Q^0_0\equiv 4Q/n$ and $Q^r_1\equiv 4Q^r/n$ for $r=0,\pm$, the Onsager algebra is\footnote{Onsager's convention is to describe the elements by two sets of operators $A_m$ and $G_m=-G_{-m}$. The two generators are $A_0 = 4Q/n$ and $A_1 = 4\widehat{Q}/n$, and in general, our elements are related by $Q^0_m=(A_m+A_{-m})/2$ and $Q^\pm_m = (A_m-A_{-m} \pm 2G_m)/4$.}
\begin{align}
[{Q}_l^r, {Q}_m^r] &= 0 \cr
\big[{Q}_l^-, {Q}_m^+\big] &= {Q}_{m+l}^0 - {Q}_{m-l}^0 \cr
\big[{Q}_l^-,{Q}_m^0\big] &=2 \Big( {Q}_{m+l}^- - {Q}_{m-l}^- \Big) \cr
\big[{Q}_l^+, {Q}_m^0\big] &= 2 \Big( {Q}_{m-l}^+ - {Q}_{m+l}^+ \Big) \label{Onsager} \ .
\end{align}
We are not aware of a closed-form expression for the $Q^r_m$ in the clock models. However, like the Hamiltonian, the $Q^r$ have a nice expression in terms of $S^\pm_j$:
\begin{align}
Q^0 &= \sum_{j=1}^L \sum_{a=1}^{n-1}
\frac{1}{1-\omega^{-a}}
\Big[
( S_j^- S_{j+1}^+)^{a} - \omega^{-a}( S_j^+ S_{j+1}^-)^{a}
\Big]
\ ,\cr
Q^+ &= \sum_{j=1}^L \sum_{a=1}^{n-1}
\frac{1}{1-\omega^a }
(S_j^+)^{a} (S_{j+1}^+)^{n-a}
\ ,
\label{QSpm}
\end{align}
with $Q^-=(Q^+)^\dagger$.
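For $n=2$, where $S^\pm=\sigma^\pm$ and $\omega=-1$, the expression \eqref{QSpm} reduces to $Q^+=\frac{1}{2}\sum_j\sigma^+_j\sigma^+_{j+1}$, and the commutation properties are easy to confirm numerically. The sketch below ($L=4$, dense matrices, illustrative only) checks that $Q^+$ commutes with $H_2$ while raising the $U(1)$ charge by $n=2$:

```python
import numpy as np

L = 4
I2 = np.eye(2, dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = sp.conj().T                                  # sigma^-
sz = np.diag([1.0, -1.0]).astype(complex)

def chain_op(mats):
    """Tensor-product embedding of single-site matrices into the L-site chain."""
    out = np.ones((1, 1), dtype=complex)
    for j in range(L):
        out = np.kron(out, mats.get(j, I2))
    return out

# H_2, the charge Q, and the charge-raising generator Q^+ (periodic chain)
H2 = 1j * sum(chain_op({j: sp, (j + 1) % L: sm})
              - chain_op({j: sm, (j + 1) % L: sp}) for j in range(L))
Q = 0.5 * sum(chain_op({j: sz}) for j in range(L))
Qp = 0.5 * sum(chain_op({j: sp, (j + 1) % L: sp}) for j in range(L))

comm = lambda X, Y: X @ Y - Y @ X
print(np.max(np.abs(comm(Qp, H2))))           # Q^+ commutes with H_2
print(np.max(np.abs(comm(Q, Qp) - 2 * Qp)))   # [Q, Q^+] = 2 Q^+, i.e. charge +2
```

This makes explicit the $\pm n$ charge shift responsible for the degeneracies discussed in section \ref{sec:spectrum}.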
Such a rich non-abelian symmetry should be expected to come with interesting physical consequences. We will start to examine these in section \ref{sec:spectrum}.
\subsection{Chiral decomposition}
\label{sec:chiraldecomposition}
Comparing the explicit expressions (\ref{HnchiralSpm}) for $H_n$ and (\ref{QSpm}) for $Q^0$ leads to another interesting feature of the model: $H_n$ can be split into two commuting pieces. Namely,
define
\begin{align}
H_{\rm R} =
i \sum_{j=1}^L \sum_{a=1}^{n-1}
\frac{1}{1-\omega^{-a}}
\Big[
n
\left( S_{j}^- S_{j+1}^+\right)^a
+
\frac{1}{2}(2a-n)
\left( \tau_j\right)^a
\Big] \ ,\qquad\quad H_{\rm L}=(H_{\rm R})^\dagger\ .
\label{HRdef}
\end{align}
We use the subscripts $\rm R$ and $\rm L$ because the non-diagonal pieces in
$H_{\rm R}$ and $H_{\rm L}$ contain the parts of $H_n$ that carry $U(1)$ charge toward the right and the left, respectively. These operators were chosen so that
\begin{align}
H_n &= H_{\rm R} + H_{\rm L} \,,\\
Q^0 &= \frac{i}{n} \left(H_{\rm R} - H_{\rm L} \right) \,.
\label{decomposition}
\end{align}
As described in section \ref{sec:Onsager}, $[Q^0,\,H_n]=0$ by construction. Thus we immediately find
\begin{equation}
[ H_{\rm R} , H_{\rm L} ] = 0 \,.
\label{HLHR0}
\end{equation}
In other words, the Hamiltonian $H_n$ can be split into a sum of left- and right-moving parts that commute with each other! It is worth noting that while this decomposition holds for twisted boundary conditions as well as periodic, the analogous $H_{\rm L}$ and $H_{\rm R}$ do not commute for open boundary conditions.
Thus defining
\begin{equation}
H(\alpha) = e^{i \alpha} H_{\rm R} + e^{-i \alpha} H_{\rm L} \,,
\label{Halpha}
\end{equation}
gives a one-parameter family of commuting Hamiltonians obeying $H(0)=H_n$ and $H(\pi)=-H_n$, while the ``maximally chiral'' Hamiltonian $H(\pi/2)$ is proportional to $Q^0$. Since these Hamiltonians all commute with one another, they share the same eigenspaces, a fact that will prove quite useful in our analysis.
However, it is important to note that $H(\alpha)$ commutes with all the Onsager generators only for $\alpha=0$ or $\pi$; only the $Q^0_m$ commute with $H(\alpha)$ for all $\alpha$.
Another decomposition of the Hamiltonian as the sum of two commuting pieces has been found for the case $n=3$, in terms of Temperley-Lieb generators \cite{Ikhlef}. Interestingly, this splitting is different from the one presented here, or from any linear combination of $H_{\rm R}$ and $H_{\rm L}$. The two commuting Hamiltonians presented in \cite{Ikhlef} do not conserve the $U(1)$ charge individually, and so might signal an additional symmetry of our models.
\subsection{The Onsager algebra for \texorpdfstring{$n=2$}{n=2}}
To give a little more intuition into the Onsager algebra, we write its generators out explicitly in the $n=2$ case using fermionic operators.
The $U(1)$-invariant self-dual Hamiltonian for $n=2$ in terms of Pauli matrices is
\begin{align}
H_2 = \frac{1}{2}\sum_{j=1}^L\left(\sigma_j^x\sigma_{j+1}^y-\sigma_j^y\sigma_{j+1}^x\right)=i\sum_{j=1}^L\left(\sigma_j^+\sigma_{j+1}^--\sigma_j^-\sigma_{j+1}^+\right).
\end{align}
The $U(1)$ charge operators commuting with $H_2$ are
\begin{align}
Q=\frac{1}{2}\sum_{j=1}^L\sigma_j^z\ ,\qquad\quad \widehat{Q}=\frac{1}{2}\sum_{j=1}^L\sigma_j^x\sigma^x_{j+1}\ .
\end{align}
This Hamiltonian can be split into two as $H_2=H_{\rm L}+H_{\rm R}$, where
\begin{align}
H_{\rm L}=i\sum_{j=1}^L\sigma_j^+\sigma_{j+1}^-\,,\qquad H_{\rm R}=-i\sum_{j=1}^L\sigma_j^-\sigma_{j+1}^+=(H_{\rm L})^\dagger\,.
\label{split2}
\end{align}
It is simple to check that $[H_{\rm L},H_{\rm R}]=0$ for periodic boundary conditions (but not for open ones).
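A direct numerical check of this statement, and of the resulting mutual commutativity of the family \eqref{Halpha}, is straightforward; the standalone sketch below uses dense matrices at $L=4$ and is illustrative only:

```python
import numpy as np

L = 4
I2 = np.eye(2, dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = sp.conj().T                                  # sigma^-

def chain_op(mats):
    """Tensor-product embedding of single-site matrices into the L-site chain."""
    out = np.ones((1, 1), dtype=complex)
    for j in range(L):
        out = np.kron(out, mats.get(j, I2))
    return out

def H_left(nbonds):
    """H_L = i sum_j sigma^+_j sigma^-_{j+1}, summed over `nbonds` bonds."""
    return 1j * sum(chain_op({j: sp, (j + 1) % L: sm}) for j in range(nbonds))

comm = lambda X, Y: X @ Y - Y @ X

HL_per, HL_open = H_left(L), H_left(L - 1)        # periodic vs open chain
c_per = np.max(np.abs(comm(HL_per, HL_per.conj().T)))
c_open = np.max(np.abs(comm(HL_open, HL_open.conj().T)))
print(c_per, c_open)   # the first vanishes, the second does not

# consequently H(alpha) = e^{i alpha} H_R + e^{-i alpha} H_L commute pairwise
a, b = 0.3, 1.1
Ha = np.exp(1j * a) * HL_per.conj().T + np.exp(-1j * a) * HL_per
Hb = np.exp(1j * b) * HL_per.conj().T + np.exp(-1j * b) * HL_per
print(np.max(np.abs(comm(Ha, Hb))))
```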
To give a nice expression for the Onsager elements, we use a Jordan-Wigner transformation to complex fermions:
\begin{align}
c_j=\sigma_j^-\prod\limits_{l<j}\sigma_l^z\ , \qquad c_j^\dagger=\sigma_j^+\prod\limits_{l<j}\sigma_l^z\ .
\end{align}
These operators obey the usual anticommutation relations
\begin{equation}
\{c_j,c_l\}=\{c_j^{\dag},c_l^{\dag}\}=0\,, \qquad \{c_j,c_l^{\dag}\}=\delta_{jl}.
\end{equation}
In terms of the fermions, the $U(1)$ charge is simply
\begin{align}
Q
=-\frac{L}{2}+\sum_{j=1}^L c^\dagger_j c_j\ ,
\end{align}
so $Q$ up to a shift measures the fermion number. The commutator of the fermions with $Q$ is simple, namely $[Q,c_j^\dagger]=c^\dagger_j$ and $[Q,c_j]=-c_j$. For simplicity we assume that $L$ is even, so that the eigenvalues of $Q$ are integers.
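These algebraic statements can be confirmed directly; the following sketch (dense matrices, $L=4$, illustrative only) builds the Jordan-Wigner fermions and checks the anticommutators, the expression for $Q$, and $[Q,c^\dagger_j]=c^\dagger_j$:

```python
import numpy as np

L = 4
I2 = np.eye(2, dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = sp.conj().T                                  # sigma^-
sz = np.diag([1.0, -1.0]).astype(complex)

def chain_op(mats):
    """Tensor-product embedding of single-site matrices into the L-site chain."""
    out = np.ones((1, 1), dtype=complex)
    for j in range(L):
        out = np.kron(out, mats.get(j, I2))
    return out

# Jordan-Wigner: c_j = sigma^-_j prod_{l<j} sigma^z_l
def c(j):
    mats = {l: sz for l in range(j)}
    mats[j] = sm
    return chain_op(mats)

cs = [c(j) for j in range(L)]
cd = [m.conj().T for m in cs]
anti = lambda X, Y: X @ Y + Y @ X

ccr_ok = all(np.allclose(anti(cs[j], cd[l]), (j == l) * np.eye(2 ** L))
             and np.allclose(anti(cs[j], cs[l]), 0)
             for j in range(L) for l in range(L))

Q = 0.5 * sum(chain_op({j: sz}) for j in range(L))
q_ok = np.allclose(Q, -L / 2 * np.eye(2 ** L) + sum(cd[j] @ cs[j] for j in range(L)))
raise_ok = np.allclose(Q @ cd[0] - cd[0] @ Q, cd[0])
print(ccr_ok, q_ok, raise_ok)
```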
The Hamiltonian is then
\begin{align}
H_2=i\sum_{j=1}^{L-1} \left(c^\dagger_j c^{}_{j+1} + c_j^{}c^\dagger_{j+1}\right)-i(-1)^Q\left(c^\dagger_L c^{}_{1} + c^{}_Lc^\dagger_{1}\right)
\label{H2ferm}
\end{align}
where $(-1)^Q=\prod_j \sigma_j^z$ in this basis measures whether the number of spin-down particles is even or odd, or, equivalently, fermion-number parity. This twist factor $-(-1)^Q$ arises because of the non-locality of the map from spins to fermions.
In terms of the fermions, the dual $U(1)$ charge is
\begin{align}
\widehat{Q}=\frac{1}{2}\sum\limits_{j=1}^{L}(-1)^{{\cal T}_{j+1}}\left(c_j-c_j^{\dag}\right)\left(c_{j+1}+c_{j+1}^{\dag}\right) .
\end{align}
where the twisting is defined by
\[{\cal T}_{s}=(Q+1)\lfloor(s-1)/L \rfloor
\]
with $\lfloor x \rfloor$ the floor of $x$. In this form it is obvious how to split $\widehat{Q}$ into the $Q^r$: all terms involving any $c_j^\dagger c_{j+1}^\dagger$ are contained in $Q^+$, all with $c_jc_{j+1}$ are in $Q^-$, with the others having zero charge and so in $Q^0$.
Since commutators of bilinears in fermions give bilinears, the Onsager elements are also bilinears in the fermions. A little bit of algebra then yields
\begin{align}
Q_m^0=(-1)^m\sum\limits_{j=1}^L(-1)^{{\cal T}_{j+m}}\left(c_j^{\dag}c_{j+m}-c_jc_{j+m}^{\dag}\right)\,,
\qquad Q_m^+=(-1)^m\sum\limits_{j=1}^L(-1)^{{\cal T}_{j+m}} c_j^{\dag}c_{j+m}^{\dag}\,,\quad
\label{Qmfermion}
\end{align}
where $Q_m^-=(Q_m^+)^\dagger$ as always. It is thus obvious that $[Q,Q_m^r]= 2 rQ_m^r$.
From these explicit expressions it is also clear that the Onsager elements are periodic: $Q^r_{m+L}=-(-1)^Q Q^r_m$, and so $Q^r_{m+2L}=Q_m^r$. Intuitively, one can think of each shift by $L$ as wrapping the Jordan-Wigner string around one more time. Such elegant periodicity in $m$ is a consequence of the free-fermion nature of $n=2$; we have verified by brute force that for general $n$ there is no such linear relation among Onsager elements under shifts linear in $L$.
\comment{
In the presence of the $U(1)$ symmetry, it is natural to use Dirac fermions
\begin{equation}
c_j=\frac{1}{2}\left(\gamma_{2j-1}-i\gamma_{2j}\right)\,,\qquad c_j^{\dag}=\frac{1}{2}\left(\gamma_{2j-1}+i\gamma_{2j}\right)\,
\end{equation}
\[\gamma_{2j-1}=\sigma_j^x\prod\limits_{l<j}\sigma_l^z\ , \qquad \gamma_{2j}=i\sigma_j^x\sigma_j^z\prod\limits_{l<j}\sigma_l^z\ .\]
$\{\gamma_a,\gamma_b\}=2\delta_{jk}$. The Hamiltonian becomes
\begin{align}
H_2=-\frac{i}{2}\sum_{j=1}^{2L-2}\gamma_j\gamma_{j+2} +\frac{i}{2}(-1)^Q\left(\gamma_{2L-1}\gamma_1+\gamma_{2L}\gamma_2\right),
\end{align}
It is worth noting that (\ref{H2def}) makes apparent another way of splitting $H_2$ into two
commuting Hamiltonians, simply by considering terms with even and odd $j$. The resulting Hamiltonians however are individually Hermitian, and so not equivalent to those in \eqref{split2}.
}
\section{The degeneracies}
\label{sec:spectrum}
The many symmetries described in section \ref{sec:symmetries} suggest that the self-dual Hamiltonians $H_n$ are integrable. In section \ref{sec:betheansatz} we use the Bethe ansatz to show that indeed this is so.
Moreover, the fact that the symmetry generators obey a non-abelian algebra indicates that there should be degeneracies in the spectrum. Namely, since the Onsager generators $Q_m^\pm$ do not commute with $Q$, acting with them on an energy eigenstate must give another state with the same energy but with charge changed by $\pm n$. The purpose of this section is to characterise these degeneracies in a simple manner, before plunging into the detailed technical analysis.
\subsection{The degeneracies for \texorpdfstring{$n=2$}{n=2}}
\label{sec:n2}
It is highly illuminating to start by analysing the $n=2$ case. Since the Hamiltonian $H_2$ in (\ref{H2ferm}) is bilinear in free-fermion operators, the entire spectrum can be computed, and the degeneracies due to the Onsager algebra can be isolated.
The Hamiltonian can be diagonalised by Fourier transforming these fermions as
\begin{align}
c_k=\frac{1}{\sqrt{L}}\sum\limits_{j=1}^Le^{ij\left(k-\frac{\pi}{2}\right)}c_j\,,\qquad c_k^{\dag}=\frac{1}{\sqrt{L}}\sum\limits_{j=1}^Le^{-ij\left(k-\frac{\pi}{2}\right)}c_j^{\dag}\,,
\end{align}
with $k=2m\pi/L+\pi/2$ for $(-1)^Q=-1$ and $k=(2m+1)\pi/L+\pi/2$ for $(-1)^Q=1$; we have added the extra $\pi/2$ for later convenience. We then find
\begin{align}
H_{\rm L}=i\sum\limits_ke^{-i\left(k-\frac{\pi}{2}\right)}n_k\,,\qquad H_{\rm R}=-i\sum\limits_ke^{i\left(k-\frac{\pi}{2}\right)}n_k,
\end{align}
where $n_k=c_k^{\dag}c_k$ is the fermion number operator. Thus
\begin{align}
H=-2\sum\limits_k n_k\cos k\ ,\qquad\quad Q=-\frac{L}{2} +\sum_k n_k\ .
\end{align}
Note that modes of momenta $k$ and $-k$ have equal energies, while those of $k$ and $\pi-k$ have opposite energies, a fact which will be of crucial importance. Acting on a state with $c^\dagger_k$ or $c_k$ either annihilates the state or changes the charge, that is the total fermion number, by $+1$ or $-1$ respectively.
The ground states in each spin-parity sector are then found by filling all of the negative-energy levels ($\left|k\right|<\pi/2$), while leaving the positive ones empty. There are a few subtleties here on ground-state degeneracies arising from zero modes, but these are unimportant for the subsequent discussion. To obtain excited states in each sector, we then act with pairs of annihilation and creation operators: $c_kc_q$, $c_kc_q^{\dag}$ or $c_k^{\dag}c_q^{\dag}$. This allows us to generate the full spectrum of the model.
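As a concrete check, the many-body spectrum built this way can be compared against exact diagonalisation of the spin form of $H_2$. The sketch below does so at $L=4$ (where $(-1)^Q=(-1)^N$, $N$ being the fermion number, since $L/2$ is even); it is illustrative only:

```python
import numpy as np
from itertools import combinations

L = 4
I2 = np.eye(2, dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = sp.conj().T                                  # sigma^-

def chain_op(mats):
    """Tensor-product embedding of single-site matrices into the L-site chain."""
    out = np.ones((1, 1), dtype=complex)
    for j in range(L):
        out = np.kron(out, mats.get(j, I2))
    return out

# exact diagonalisation of the spin form of H_2 (periodic chain)
H = 1j * sum(chain_op({j: sp, (j + 1) % L: sm})
             - chain_op({j: sm, (j + 1) % L: sp}) for j in range(L))
ed = np.sort(np.linalg.eigvalsh(H))

# free-fermion reconstruction: fill N single-particle levels of energy -2 cos k,
# with k = 2m*pi/L + pi/2 for (-1)^Q = -1 (N odd, L = 4)
# and  k = (2m+1)*pi/L + pi/2 for (-1)^Q = +1 (N even)
levels = []
for shift, n_mod_2 in [(0.0, 1), (np.pi / L, 0)]:
    eps = [-2 * np.cos(2 * m * np.pi / L + shift + np.pi / 2) for m in range(L)]
    for N in range(L + 1):
        if N % 2 == n_mod_2:
            levels.extend(sum(occ) for occ in combinations(eps, N))

ff = np.sort(np.array(levels))
print(np.max(np.abs(ed - ff)))   # the two spectra coincide
```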
Understanding the degeneracies due to the Onsager algebra is straightforward in terms of the fermions.
Because of the periodicity of the Onsager algebra $Q^r_{m+2L}=Q_m^r$ for $n=2$, the
momentum-space versions of the Onsager elements are quite simple. Using the explicit expressions \eqref{Qmfermion} and defining
\begin{align}
\mathcal{Q}(k)\equiv-\frac{i}{\sqrt{L}}\sum\limits_{m=1}^{L-1}\sin \biggl(m\left(k+\frac{\pi}{2}\right)\biggr)Q_m^{-}
\label{SQ}
\end{align}
gives
\begin{align}
\mathcal{Q}^{\dag}(k)=c_k^{\dag}c_{\pi-k}^{\dag}\ ,\qquad\quad \mathcal{Q}(k)=c_{\pi-k}c_k\ .
\label{Sdef}
\end{align}
Acting with these operators
leaves the energy invariant, changes the momentum by $\pi$, and alters the $U(1)$ charge by $\pm 2$.
An example of the action of $\mathcal{Q}^\dagger$ on a particular state is illustrated in Figure \ref{fig:fermions}. This action is non-trivial only on states with both a hole in the Fermi sea at momentum $k$, and no particle at momentum $\pi-k$. Acting with $\mathcal{Q}(k)$ is non-trivial only on states with a filled level at momentum $\pi-k$ and without a hole at $k$. In terms of the usual quasiparticle picture, $\mathcal{Q}^\dagger(k)$ creates a particle and annihilates an antiparticle, and vice versa for $\mathcal{Q}(k)$. Such an action indeed changes the charge by $+2$ and $-2$ respectively, while leaving the energy invariant.
\begin{figure}[tb]
\begin{center}
\begin{tikzpicture}[scale=1.4]
\draw[black, line width=1] (0,1) cos (1,0);
\draw[black, line width=1] (1,0) sin (2,-1);
\draw[black, line width=1] (2,-1) cos (3,0);
\draw[black, line width=1] (3,0) sin (4,1);
\draw[->,black, line width=0.7] (-0.2,0)-- (4.2,0) node[right] {$k$};
\draw[dashed,black, line width=0.7] (0,0)node[below] {$0$} -- (0,1);
\draw[dashed,black, line width=0.7] (4,0)node[below] {$2\pi$} -- (4,1);
\draw[fill=white] (0,1) circle (0.075);
\draw[fill=white] (0.2,0.951) circle (0.075);
\draw[fill=white] (0.4,0.809) circle (0.075);
\draw[fill=white] (0.6,0.5878) circle (0.075);
\draw[fill=white] (0.8,0.309) circle (0.075);
\draw[fill=white] (1,0) circle (0.075);
\draw[fill=white] (1.2,-0.309) circle (0.075);
\draw[fill=black] (1.4,-0.5878) circle (0.075);
\draw[fill=white] (1.6,-0.809) circle (0.075);
\draw[fill=black] (1.8,-0.951) circle (0.075);
\draw[fill=white] (2,-1) circle (0.075);
\draw[fill=black] (2.2,-0.951) circle (0.075);
\draw[fill=black] (2.4,-0.809) circle (0.075);
\draw[fill=black] (2.6,-0.5878) circle (0.075);
\draw[fill=black] (2.8,-0.309) circle (0.075);
\draw[fill=black] (3,0) circle (0.075);
\draw[fill=white] (3.2,0.309) circle (0.075);
\draw[fill=white] (3.4,0.5878) circle (0.075);
\draw[fill=white] (3.6,0.809) circle (0.075);
\draw[fill=white] (3.8,0.951) circle (0.075);
\draw[fill=white] (4,1) circle (0.075);
\draw[->, line width=0.7] (5,0) -- (6,0);
\begin{scope}[shift={(7,0)}]
\draw[black, line width=1] (0,1) cos (1,0);
\draw[black, line width=1] (1,0) sin (2,-1);
\draw[black, line width=1] (2,-1) cos (3,0);
\draw[black, line width=1] (3,0) sin (4,1);
\draw[->,black, line width=0.7] (-0.2,0)-- (4.2,0) node[right] {$k$};
\draw[dashed,black, line width=0.7] (0,0)node[below] {$0$} -- (0,1);
\draw[dashed,black, line width=0.7] (4,0)node[below] {$2\pi$} -- (4,1);
\draw[fill=white] (0,1) circle (0.075);
\draw[fill=white] (0.2,0.951) circle (0.075);
\draw[red,fill=red] (0.4,0.809) circle (0.075);
\draw[fill=white] (0.6,0.5878) circle (0.075);
\draw[fill=white] (0.8,0.309) circle (0.075);
\draw[fill=white] (1,0) circle (0.075);
\draw[fill=white] (1.2,-0.309) circle (0.075);
\draw[fill=black] (1.4,-0.5878) circle (0.075);
\draw[red,fill=red] (1.6,-0.809) circle (0.075);
\draw[fill=black] (1.8,-0.951) circle (0.075);
\draw[fill=white] (2,-1) circle (0.075);
\draw[fill=black] (2.2,-0.951) circle (0.075);
\draw[fill=black] (2.4,-0.809) circle (0.075);
\draw[fill=black] (2.6,-0.5878) circle (0.075);
\draw[fill=black] (2.8,-0.309) circle (0.075);
\draw[fill=black] (3,0) circle (0.075);
\draw[fill=white] (3.2,0.309) circle (0.075);
\draw[fill=white] (3.4,0.5878) circle (0.075);
\draw[fill=white] (3.6,0.809) circle (0.075);
\draw[fill=white] (3.8,0.951) circle (0.075);
\draw[fill=white] (4,1) circle (0.075);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{
Action of the operator $\mathcal{Q}^{\dag}(k)$ on a given eigenstate of the $n=2$ Hamiltonian. If the levels $k$ and $\pi-k$ were both vacant in the original state, $\mathcal{Q}^{\dag}(k)$ creates a degenerate eigenstate with two more fermions (in red), increasing the $U(1)$ charge by 2.
}
\label{fig:fermions}
\end{figure}
Applying the $\mathcal{Q}^{\dag}(k)$ and $\mathcal{Q}(k)$ operators to general states clearly leads to degeneracies. Since $\mathcal{Q}(\pi-k)=-\mathcal{Q}(k)$, we can restrict consideration to $|k|<\pi/2$.
To find the full structure of the multiplets with these degeneracies, consider an energy eigenstate $|s_{\rm min}\rangle$ annihilated by all $\mathcal{Q}(k)$. In this state, each pair of levels $k$ and $\pi-k$ can be occupied by at most one fermion. Let $N_s$ be the number of such pairs completely unoccupied, and $N'_s$ the number of pairs with exactly one level occupied. Since there are $L$ levels, $N_s+N_s'=L/2$ with our assumption that $L$ is even.
The charge of this state must therefore be
\[Q_{\text{min}}=-L/2+N_s' = -N_s\ .\]
There are $N_s$ different values of $k$ such that $\mathcal{Q}^\dagger(k)|s_{\rm min}\rangle\ne 0$. Acting with any of these once increases the charge by $2$, giving a multiplet of states with the same energy. Since $[\mathcal{Q}^\dagger(k),\mathcal{Q}^\dagger(k')]=0$ and $(\mathcal{Q}^\dagger(k))^2=0$, the total degeneracy of this multiplet is $2^{N_s}$, with the number of states $d_p$ at each charge $Q=2p-N_s$ given by
\begin{equation}
d_p = {{N_s}\choose{p}} \ .
\end{equation}
The relation (\ref{SQ},\ref{Sdef}) between the Onsager elements and fermion bilinears is nice because of the periodicity of the Onsager elements under $m\to m+2L$, a property that does not generalise to arbitrary $n$. Nonetheless, degeneracies analogous to those for $n=2$ are not simply a free-fermionic fluke; they are the subject of the rest of the paper.
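The multiplet structure just described is easy to see in a direct diagonalisation. The sketch below (dense matrices, $L=4$, illustrative only) block-diagonalises the spin form of $H_2$ by charge sector and exhibits, for instance, the $N_s=1$ doublet at $E=-2$ spanning charges $Q=\mp 1$, with multiplicities $d_p=\binom{1}{p}$:

```python
import numpy as np

L = 4
I2 = np.eye(2, dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)   # sigma^+
sm = sp.conj().T                                  # sigma^-

def chain_op(mats):
    """Tensor-product embedding of single-site matrices into the L-site chain."""
    out = np.ones((1, 1), dtype=complex)
    for j in range(L):
        out = np.kron(out, mats.get(j, I2))
    return out

H = 1j * sum(chain_op({j: sp, (j + 1) % L: sm})
             - chain_op({j: sm, (j + 1) % L: sp}) for j in range(L))

def charge(b):
    """Q = -L/2 + (number of up spins) for computational basis state b."""
    ups = sum(1 for j in range(L) if not (b >> (L - 1 - j)) & 1)
    return ups - L // 2

# H conserves Q, so it is block diagonal in the charge sectors
spectra = {}
for q in range(-L // 2, L // 2 + 1):
    idx = [b for b in range(2 ** L) if charge(b) == q]
    spectra[q] = np.sort(np.linalg.eigvalsh(H[np.ix_(idx, idx)]))

# the sectors Q = -1 and Q = +1 are degenerate level by level; the level
# E = -2 appears once in each (a 2^1-dimensional multiplet) and nowhere else
print(spectra[-1], spectra[1], spectra[0])
```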
\subsection{Structure of degeneracies for general \texorpdfstring{$n$}{n}}
\label{sec:degeneracies}
Degeneracies should be expected as a general feature of models with a non-abelian symmetry, but constructing the analog of $\mathcal{Q}^\dagger(k)$ for $n>2$ requires considerable work. In sections \ref{sec:betheansatz} and \ref{sec:stringcreation} we give this construction by utilising exact $n$-string solutions of the Bethe equations. Happily, the detailed calculation is not needed to understand the degeneracies qualitatively. Thus we start our general analysis by giving here some numerics for $n=3$ that illustrate this structure nicely.
The Onsager elements $Q^{\pm}_m$ shift the charge by $\pm n$, but still commute with the Hamiltonian. Thus we expect degenerate multiplets with charges differing by multiples of $n$. Some numerical results for $H_3$ using exact diagonalisation can be found in tables \ref{table1} and \ref{table2}. The presence of such degeneracies is readily apparent in both of these.
\begin{table}[h]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$Q=-6$ & $Q=-3$ & $Q=0$ & $Q=3$ & $Q=6$ & $L\to\infty$ & CFT\\
\hline
& & 0 & & & 0 & 0 \\
\hline
& & 0.992453634448 & & & 1.0014 & 1\\
\hline
& & 1.979146217630 & & & 2.0040 & 2\\
\hline
2.870426956543& 2.870426956543 $\times 4$ & 2.870426956543 $\times 6$& 2.870426956543 $\times 4$& 2.870426956543 & 3.0309 & 3 \\
\hline
\end{tabular}
}
\begin{center}
\resizebox{.7\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|}
\hline
$Q=-2$ & $Q=1$ & $Q=4$ & $L\to\infty$ & CFT\\
\hline
0.334282995064 & & & 0.3334 & 1/3 \\
\hline
1.313397175669 & 1.313397175669 $\times 2$ & 1.313397175669 & 1.3364 & 4/3\\
\hline
2.2629252374614 & 2.2629252374615 $\times 2$ & 2.2629252374616 & 2.3436 & 7/3\\
\hline
\end{tabular}
}
\end{center}
\caption{Low-lying energy levels of $H_3$ for $L=16$ with momentum $k=0$ in sectors of various $Q$; the spectra for $Q\to-Q$ are identical. The energies are shifted so the ground-state energy is 0 and rescaled by $L/(2\pi v_F)$, where $v_F=9/2$ is the Fermi velocity. The $\times m$ indicates that there are $m$ levels with this energy, up to differences $< 10^{-10}$. The $L\to \infty$ column gives the extrapolation of the energy to infinite lattice length from a quadratic fit in $1/L$ of the values for $L=12,14,16$. The last column consists of the predictions from the $c=3/2$ CFT.}
\label{table1}
\end{table}
To make the results even more informative, we have shifted all the energies by a constant (the same in all sectors), and rescaled them by $L/(2\pi v_F)$, where $v_F$ is a ``Fermi'' velocity $v_F=9/2$. This value for the velocity turns out to be derivable using the Bethe ansatz, but here can be simply viewed as a rescaling that reveals a striking feature beyond the degeneracies: levels within a sector are typically approximately split by integers (or half-integers in a few cases).
This splitting by integers leads to the expectation that the continuum limit of this spin chain is described by a conformal field theory (CFT). This limit is also implied by the mapping onto the spin-$(n-1)/2$ XXZ chain described in section \ref{sec:XXZ}. Earlier work indicates that $H_n$ should scale to a CFT Hamiltonian with central charge $3(n-1)/(n+1)$, while $-H_n$ scales to one with central charge $1$ \cite{XXZSCFT,XXZSCFTFrahm}. In both cases, we have refined and checked these predictions, identifying exactly which CFT it is (including finding the radius of the bosonic field present). We thus include in the tables a column which gives the energies for this level in the corresponding CFT, and defer further analysis of the CFTs to future work. Worth noting, however, is that the CFT degeneracies are even larger than those on the lattice, as indicated in table \ref{table2}, where levels distinct on the lattice but presumably degenerate in the CFT are separated by dashed horizontal lines.
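For reference, the $L\to\infty$ column in the tables is obtained from a quadratic fit in $1/L$ whose intercept gives the extrapolated value; the sketch below illustrates the procedure on hypothetical (made-up) finite-size values, not the paper's actual data:

```python
import numpy as np

# hypothetical scaled energies at L = 12, 14, 16 (illustrative only)
Ls = np.array([12.0, 14.0, 16.0])
E = np.array([0.990, 0.995, 0.998])

# fit E(L) = a + b/L + c/L^2 and read off the intercept a = E(L -> infinity)
x = 1.0 / Ls
coeffs = np.polyfit(x, E, 2)      # returns [c, b, a], highest power first
E_inf = np.polyval(coeffs, 0.0)   # extrapolated value at 1/L = 0
print(E_inf)
```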
\begin{table}[h]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|}
\hline
$Q=-3$ & $Q=0$ & $Q=3$ & $L\to\infty$ & CFT \\
\hline
0.89191865865 & 0.89191865865 $\times 2$ & 0.89191865865 & 0.8751 & 7/8 \\
\hline
2.67243586227 $\times 2$ & 2.67243586227 $\times 4$ & 2.67243586227 $\times 2$ & 2.9072 & \\
\cdashline{1-4}
2.77803728576 & 2.77803728576 $\times 2$ & 2.77803728576 & 2.8865 & 23/8 \\
\cdashline{1-4}
2.86395394892 & 2.86395394892 $\times 2$& 2.86395394892 & 2.8863 & \\
\hline
\end{tabular}
\quad
\begin{tabular}{|c|c|c|c|c|}
\hline
$Q=-5$ & $Q=-2$ & $Q=1$ & $L\to\infty$ & CFT \\
\hline
& & 0.211743760 & 0.2084 & $\frac{5}{24}\approx 0.2083$ \\
\hline
& 2.0994556102 $\times 2$ & 2.0994556102 $\times 2$ & 2.2202 & \\
\cdashline{1-4}
2.2088764136 & 2.2088764136 $\times 2$ & 2.208876413 & 2.2121 & $\frac{53}{24}\approx 2.2083$\\
\cdashline{1-4}
& & 2.232900511518 & 2.2133 & \\
\hline
\end{tabular}
}
\caption{Low-lying energy levels of $H_3$ as in table \ref{table1}, except with $k=\pi$.}
\label{table2}
\end{table}
We have presented numerical data for $H_3$, but we have checked $-H_3$ as well as higher $n$. We find that in all cases, degeneracies occur between states of $U(1)$-charge differing by multiples of $n$, exact up to high numerical precision. All states can be grouped into degenerate multiplets characterised by a ``highest-weight'' and ``lowest-weight'' pair in the sectors of charge $Q_{\rm max}$ and $Q_{\rm min}$ respectively, such that $Q_{\rm max}-Q_{\rm min}=Nn$, for some integer $N$. We find that for a given degenerate multiplet, the number of states inside the sector with $Q-Q_{\rm min} = p n$ is given by
\begin{equation}
d_p = {{N}\choose{p}} \,.
\label{binomials}
\end{equation}
The total degeneracy of the tower is therefore
\begin{equation}
d = \sum_{p=0}^N {{N}\choose{p}} = 2^N \,
\label{binomials2}
\end{equation}
just as in the free-fermion case of $H_2$.
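A concrete instance of \eqref{binomials} can be read off from table \ref{table1}: the level $2.870426956543$ spans charges $Q=-6$ to $Q=6$, so that

```latex
% worked example of the binomial multiplicities for this level:
% Q_max - Q_min = 12 = N n with n = 3, so N = 4 and
\begin{equation*}
d_p = \binom{4}{p} = 1,\;4,\;6,\;4,\;1
\quad\text{for}\quad Q = -6,\,-3,\,0,\,3,\,6\,,
\qquad
d = 2^4 = 16\,,
\end{equation*}
```

matching the multiplicities listed in the table.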
The structure of multiplets is illustrated schematically in Figure \ref{fig:degeneracies}. For $N$ even, there is a unique $Q_c$ with maximal $d_Q$, while for $N$ odd there are two values $Q_{c_1}$ and $Q_{c_2}$ with maximal $d_Q$. These values are the ``centre(s)'' of the multiplet and are always found to satisfy $-n<Q_c<n$ ($-n<Q_{c_1}<Q_{c_2}<n$).
If $Q_{\rm max}$ is not a multiple of $n$, neither is $Q_{\rm min}$, and there is a second multiplet degenerate with the first but with all $Q\to -Q$.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=1.15]
\draw[fill=black,-latex] (-7,-1) -- (-7,1) node[above] {Energy};
\node at (-6,-1) {$-6$};
\node at (-5,-1) {$-5$};
\node at (-4,-1) {$-4$};
\node at (-3,-1) {$-3$};
\node at (-2,-1) {$-2$};
\node at (-1,-1) {$-1$};
\node at (0,-1) {$0$};
\node at (1,-1) {$1$};
\node at (2,-1) {$2$};
\node at (3,-1) {$3$};
\node at (4,-1) {$4$};
\node at (5,-1) {$5$};
\node at (6,-1) {$6$};
\foreach \x in {-0.25,-0.15,-0.05,0.05,0.15,0.25}
{ \draw[fill=black] (\x,-0.5) circle (0.05);
}
\foreach \x in {-0.15,-0.05,0.05,0.15}
{ \draw[fill=black] (-3+\x,-0.5) circle (0.05);
\draw[fill=black] (3+\x,-0.5) circle (0.05);
}
\foreach \x in {0}
{ \draw[fill=black] (-6+\x,-0.5) circle (0.05);
\draw[fill=black] (6+\x,-0.5) circle (0.05);
}
\draw[black,fill=black] (-5,-0.) circle (0.05);
\draw[black,fill=black] (-2.07,-0.) circle (0.05);
\draw[black,fill=black] (-1.93,-0.) circle (0.05);
\draw[black,fill=black] (1,-0.) circle (0.05);
\draw[black] (5,-0.) circle (0.05);
\draw[black] (2.07,-0.) circle (0.05);
\draw[black] (1.93,-0.) circle (0.05);
\draw[black] (-1,-0.) circle (0.05);
\draw[black,fill=black] (-4,0.5) circle (0.05);
\draw[black,fill=black] (-1.07,0.5) circle (0.05);
\draw[black,fill=black] (-0.93,0.5) circle (0.05);
\draw[black,fill=black] (2,0.5) circle (0.05);
\draw[black] (4,0.5) circle (0.05);
\draw[black] (1.07,0.5) circle (0.05);
\draw[black] (0.93,0.5) circle (0.05);
\draw[black] (-2,0.5) circle (0.05);
\draw[fill=black] (-3,1) circle (0.05);
\draw[fill=black] (-0.07,1) circle (0.05);
\draw[fill=black] (0.07,1) circle (0.05);
\draw[fill=black] (3,1) circle (0.05);
\end{tikzpicture}
\end{center}
\caption{
Schematic representation of the degeneracies for the $n=3$ model on a chain of $L=6$ sites. The numbers at the bottom represent the charge $Q$, and full and empty circles are used to distinguish between different multiplets at the same energy. Degeneracies occur between sectors of charge differing by multiples of $n$, and the number of states of a given degenerate tower in each sector is given by a binomial coefficient.}
\label{fig:degeneracies}
\end{figure}
The multiplicities behave in essentially the same fashion for all $n$. However, the models do not have a free-fermionic interpretation for $n>2$, and instead are strongly interacting, as will become apparent via the Bethe-ansatz analysis of these models in section \ref{sec:betheansatz}. The underlying reason for this structure seems to have little to do with fermions, free or not, and everything to do with Onsager. Indeed the Onsager algebra \eqref{Onsager} is independent of $n$, so its allowed representations will be independent as well. Free fermions give irreducible representations of the algebra of dimension $2^N$, and we know of no other representations. Thus it should not be surprising that for any $n$ the only representations that appear are of dimension $2^N$.
\subsection{Splitting the degeneracies}
\label{sec:chiralnumerics}
As discussed in section \ref{sec:chiraldecomposition}, the Hamiltonian $H_n$ can be split into two commuting chiral pieces, so that a one-parameter family $H(\alpha)$ of Hamiltonians can be constructed.
Although the charge-neutral Onsager elements $Q^0_m$ still commute with $H(\alpha)$, the charged elements ${Q_m^\pm}$ do not. Since the latter are what give the exact lattice degeneracies, we expect that these degeneracies are split when $\alpha\ne 0,\pi$. This splitting turns out to be a valuable tool in gaining further insight into these degeneracies.
To give an illustration, the spectrum at $L=6$ as a function of $\alpha$ is shown in Figure \ref{fig:crossings0}. From the left panel, we observe that the degeneracies are indeed lifted for $\alpha \neq 0$. The evolution of the spectrum of \eqref{Halpha} as $\alpha$ is varied between $0$ and $\pi$ is illustrated in the right panel of Figure \ref{fig:crossings0}: the ground state undergoes a series of crossings as $\alpha$ is varied. As $L\to \infty$ we expect these crossings to become dense. In section \ref{sec:alphafamily} we use the Bethe ansatz to give a more precise characterisation of these ground-state level crossings.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.7]{crossings0.pdf}
\hspace{1cm}
\includegraphics[scale=0.4]{crossings1.pdf}
\end{center}
\caption{
Complete spectrum of the Hamiltonian $H(\alpha)$ for $n=3$ and $L=6$ sites.
The left plot shows the lifting of degeneracies between various sectors when $\alpha$ increases from $0$.
The right plot shows the levels of the $Q=0$ sector as $\alpha$ is varied between $0$ and $\pi$. The successive ground states are highlighted in red.
}
\label{fig:crossings0}
\end{figure}
\section{Unified picture through transfer matrices and quantum groups}
\label{sec:transfermatrices}
We have shown that the self-dual $U(1)$ clock models possess a slew of exact degeneracies owing to the presence of the non-abelian Onsager algebra as a symmetry. Moreover, numerical evidence suggests that the structure of the degeneracies is very simple. To make further progress, we exploit the models' integrability. Since a $U(1)$ symmetry is present, the coordinate Bethe ansatz is applicable, and we pursue this approach in section \ref{sec:CBA}. This analysis allows us to understand how the degeneracies are described within the Bethe-ansatz framework, as well as to extract much physical information about the model. The coordinate Bethe ansatz, however, still does not allow us to fully understand the multiplet structure.
Thus before implementing the Bethe ansatz, we describe how to set our Hamiltonians and symmetry algebras in a deeper approach commonly used in integrable models. This approach requires constructing a family of commuting transfer matrices, from which the Hamiltonians are recovered in a particular limit. We show that this is possible not only for the self-dual Hamiltonians $H_n$, but also for their chiral parts $H_{\rm L}$ and $H_{\rm R}$, for which we construct transfer matrices as well. Even more strikingly, we can find a transfer matrix that gives a generating function for the elements of the Onsager algebra. We find these transfer matrices by utilising various types of representations of quantum-group algebras \cite{Gomez}. This has a side benefit of giving a nice interpretation of some representations not commonly arising in physics.
\subsection{Correspondence with the higher spin XXZ chains}
\label{sec:XXZ}
A useful starting point is to show how our Hamiltonians can be recast as higher-spin XXZ chains with highly fine-tuned (but still nearest-neighbour) interactions that make them integrable. The connection of the Onsager algebra with such chains has long been known \cite{Roan,Dasmahapatra}, but here we provide a direct construction. Not only does this recasting make finding the corresponding transfer matrix straightforward, but it also gives insight into how these particular models are special.
In the form \eqref{HnchiralSpm}, the Hamiltonians $H_n$ are not symmetric under spatial parity, whereas the XXZ models are. Parity symmetry (up to boundary conditions) is restored by the change of basis
\begin{equation}
H_n = \left( U_1 U_2 \ldots U_L \right)^{-1} \widetilde{H}_n \left( U_1 U_2 \ldots U_L\right) \,,\quad\quad
U_j \equiv e^{ij\pi \left(1+\frac{1}{n}\right) S_j^z}\ .
\label{basischange}
\end{equation}
The Hamiltonian in this new basis,
\begin{equation}
\widetilde{H}_n =
- \sum_{j=1}^L \sum_{a=1}^{n-1}
\frac{1}{2\sin \frac{\pi a}{n} }
\left[
n (-1)^a \left(
\left( S_{j}^- S_{j+1}^+\right)^a
+
\left( S_{j}^+ S_{j+1}^-\right)^a
\right)
+
(n-2a)
\left( e^{i \frac{ \pi}{n}} \tau_j\right)^a
\right] \,,
\label{Hntilde}
\end{equation}
is manifestly parity-symmetric in the bulk, although periodic boundary conditions in the original $H_n$ are now \emph{twisted} as\footnote{While the degeneracies and Onsager algebra symmetry described in the previous sections are tied to the choice of boundary conditions \eqref{twist}, we note in passing that for $n$ odd the model \eqref{Hntilde} with periodic boundary conditions commutes with another version of the Onsager algebra, generated by $Q$ and its dual under the modified duality transformation $\tau_j
\longrightarrow e^{- i \frac{\pi(n+1)}{n}}
\sigma_j^\dagger \sigma_{j+1}
\longrightarrow
\tau_{j+1} $, yielding
\begin{equation}
\widehat{Q}'
=
\sum_{j=1}^L \sum_{a=1}^{n-1} \frac{(-1)^a}{2 i \sin\frac{\pi a}{n}} ( \sigma_j^\dagger \sigma_{j+1})^a
\,.
\nonumber
\end{equation}
}
\begin{equation}
S_{L+1}^\pm = (-1)^L e^{\pm i\pi L/{n}} S_{1}^\pm \,.
\label{twist}
\end{equation}
For example, for $n=2$,
\begin{equation}
\widetilde{H}_2 =
\sum_{j=1}^{L} \left(
\sigma_j^+ \sigma_{j+1}^- + \sigma_j^- \sigma_{j+1}^+
\right)
=
\frac{1}{2}\sum_{j=1}^L
\left(
\sigma_j^x \sigma_{j+1}^x
+
\sigma_j^y \sigma_{j+1}^y
\right)
\,.
\label{H2new}
\end{equation}
The spin-$1/2$ XXZ Hamiltonian is
\[H_{{\rm XXZ},1/2}=\widetilde{H}_2+\frac{q+q^{-1}}{2} \sum_{j=1}^{L} \sigma^z_j\sigma_{j+1}^z\ \]
and is integrable for any value of the parameter $q$, with gapless behaviour for $|q|=1$ and gapped otherwise \cite{Baxter82}. The Hamiltonian $\widetilde{H}_2$ therefore corresponds to $q=e^{i\pi/2}$, a special case often called the XX model, well known to be free-fermionic.
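The equality of the two forms in \eqref{H2new} rests on the two-site identity $\sigma^+\otimes\sigma^- + \sigma^-\otimes\sigma^+ = \tfrac12\left(\sigma^x\otimes\sigma^x+\sigma^y\otimes\sigma^y\right)$, which the following minimal numpy check (our own illustration, not part of the text) confirms:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sp = (sx + 1j * sy) / 2              # sigma^+
sm = (sx - 1j * sy) / 2              # sigma^-

# sigma^+ (x) sigma^- + sigma^- (x) sigma^+  =  (sigma^x (x) sigma^x + sigma^y (x) sigma^y)/2
lhs = np.kron(sp, sm) + np.kron(sm, sp)
rhs = (np.kron(sx, sx) + np.kron(sy, sy)) / 2
assert np.allclose(lhs, rhs)
```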
The integrable spin-1 generalisation of the XXZ chain is found by taking the spin-chain limit of the ``19-vertex'' transfer matrix of \cite{FZ}. Again, the integrable line can be parametrised by $q$ with the same gapless/gapped behaviour \cite{XXZSCFT}. The explicit form, however, is much more complicated here, given that for spin $1$, the operators $(S^\pm)^2$ no longer vanish. It is easy to check though that, when $q=e^{i\pi/3}$, the form simplifies and reduces to $\widetilde{H}_3$ from \eqref{Hntilde}. Although we will not exploit it here, it is worth mentioning that the integrable spin-1 chain possesses a very interesting non-local supersymmetry that results in degeneracies between chains of different $L$ \cite{Hagendorf}.
This supersymmetry commutes with the Onsager symmetries described here.
Higher-spin XXZ Hamiltonians are found by utilising a procedure called ``fusion'' \cite{Kulish81}. As the name indicates, the idea is very much a generalisation of fusing spin-1/2 representations of the $SU(2)$ algebra to get higher-spin representations. Here however the representations involved are of the quantum-group algebra $U_q(sl_2)$, a one-parameter deformation of $SU(2)$. This algebra has three generators
$\mathbf{S}^+, \mathbf{S}^-, \mathbf{S}^z$ obeying
\begin{align}
q^{2\mathbf{S}^z} \mathbf{S}^\pm q^{-2\mathbf{S}^z} = q^{\pm 2} \mathbf{S}^\pm
\,, \qquad\quad
[\mathbf{S}^+, \mathbf{S}^-] =
\frac{q^{2\mathbf{S}^z}-q^{-2\mathbf{S}^z}}{q-q^{-1}}\,\,.
\label{qdef}
\end{align}
The relations reduce to $SU(2)$ when $q\to\pm 1$, but in general are not those of a Lie algebra.
The representation theory of quantum-group algebras depends substantially on whether or not the parameter $q$ is a root of unity. The reason is apparent in \eqref{qdef}: in representations where the eigenvalues of $\mathbf{S}^z$ are integer or half-integer like in $SU(2)$, the right-hand-side of the latter relation can vanish for $q^n=\pm 1$ for some integer $n$. For any $q$, there occur spin-$S$ representations with $S$ a non-negative integer or half-integer. These act on a chain of $(2S+1)$-state quantum systems, and the action on a single site with basis states $\{|m\rangle\}_{m=-S, \ldots S}$ is
\begin{eqnarray}
\mathbf{S}^z |m \rangle &=& m |m \rangle \,, \qquad m = -S,\ldots,S
\label{spinSz}
\\
\mathbf{S}^\pm |m \rangle &=& \sqrt{ [S+1\pm m] [S \mp m]} | m \pm 1 \rangle \,,
\label{spinS}
\end{eqnarray}
where we have introduced the usual notation
\[[x] \equiv \frac{q^x-q^{-x}}{q-q^{-1}} \ .\]
For $q^n=\pm 1$, the representation of spin $n/2$ is reducible, not surprising given that (\ref{spinS}) makes it clear that the action of $\mathbf{S}^\pm$ can vanish on all states. This reducibility is familiar in physics in the fusion categories arising in anyons or conformal field theory \cite{MooreReshetikhin}.
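The relations \eqref{qdef} and the representation \eqref{spinSz}, \eqref{spinS} are straightforward to verify numerically. The sketch below (our own choices: $S=1$ and a generic real $q$; all variable names are ours) also confirms that $[n]$ vanishes at $q=e^{i\pi/n}$, the source of the reducibility just described:

```python
import numpy as np

q, S = 1.7, 1.0                      # generic real q, spin-1 representation

def box(x):                          # q-number [x] = (q^x - q^-x)/(q - q^-1)
    return (q ** x - q ** (-x)) / (q - 1 / q)

dim = int(2 * S + 1)
m = np.arange(-S, S + 1)             # basis labels m = -S, ..., S
Sz = np.diag(m)
Sp = np.zeros((dim, dim))
for k in range(dim - 1):             # S^+ |m> = sqrt([S+1+m][S-m]) |m+1>
    Sp[k + 1, k] = np.sqrt(box(S + 1 + m[k]) * box(S - m[k]))
Sm = Sp.T
qSz = np.diag(q ** (2 * m))          # the operator q^{2 S^z}

# relation 1: q^{2S^z} S^+ q^{-2S^z} = q^2 S^+
assert np.allclose(qSz @ Sp @ np.linalg.inv(qSz), q ** 2 * Sp)
# relation 2: [S^+, S^-] = (q^{2S^z} - q^{-2S^z})/(q - q^{-1})
assert np.allclose(Sp @ Sm - Sm @ Sp, (qSz - np.linalg.inv(qSz)) / (q - 1 / q))

# at q = e^{i pi/n} the q-number [n] vanishes, so the spin-n/2 action degenerates
qr = np.exp(1j * np.pi / 3)          # n = 3
assert abs((qr ** 3 - qr ** (-3)) / (qr - 1 / qr)) < 1e-12
```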
Using various properties of the representation theory of quantum-group algebras makes the construction of integrable higher-spin XXZ Hamiltonians straightforward, although technically intricate \cite{XXZSSogo,XXZSBabu,XXZSKiri}. We find that
\begin{equation}
\widetilde{H}_n \qquad \longleftrightarrow \qquad \mbox{spin-$\frac{n-1}{2}$ XXZ chain at $q=e^{i\pi/n}$.}
\label{identification}
\end{equation}
Although closed-form expressions for the higher-spin Hamiltonians can be found in \cite{XXZSH}, their limit as $q\to e^{i\pi/n}$ is singular, since many terms vanish there: $\widetilde{H}_n$ is much simpler than for generic $q$. This happens because at this value of $q$, we have on each site the highest-spin irreducible representation, so its tensor products used to construct the Hamiltonian are reducible. We thus demonstrate this correspondence for arbitrary $n$ indirectly below, by showing in section \ref{sec:CBA} that the Bethe equations are the same for the two models.
\subsection{Transfer-matrix construction}
The spin-$S$ XXZ Hamiltonians can be generated from a set of commuting transfer matrices written as \cite{Korepinbook,Gomez}
\begin{equation}
T(u) = \mathrm{Tr}_\mathcal{A} \left( e^{i \varphi \mathbf{S^z}} \mathcal{L}_L(u) \ldots \mathcal{L}_1(u) \right) \,.
\label{XXZTu}
\end{equation}
This is pictured in Figure \ref{fig:TM}, with the auxiliary space $\mathcal{A}$ the horizontal line.
The objects $\mathcal{L}_j(u)$, the so-called Lax operators, are $(2S+1)\times (2S+1)$ matrices acting on the respective sites of the chain, and whose entries are operators $\mathbf{S}^+, \mathbf{S}^-, \mathbf{S}^z$ acting on $\mathcal{A}$, itself $(2S+1)$-dimensional.
The trace is over $\mathcal{A}$, and we have included in \eqref{XXZTu} a factor $e^{i \varphi \mathbf{S}^z}$ in order to allow for twisted boundary conditions. To make $\widetilde{H}_n$ periodic we set $\varphi=0$, while to recover the twisted boundary condition in \eqref{twist} we need to choose
\begin{equation}
\varphi = L\frac{\pi(n+1)}{n} \,.
\label{twistphi}
\end{equation}
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=1.2]
\begin{scope}[yshift=0]
\draw[blue,rounded corners=5pt,line width=1pt] (0.25,0.5) --(0,0.5) -- (-0.25,0.25) -- (0,0) -- (10,0) -- (10.25,0.25) -- (10.,0.5) -- (9.75,0.5);
\foreach \x in {1,2.5,4,8}
{ \draw[line width=1pt] (\x,-.75) -- (\x,.75);
\draw[fill=white,line width=1pt] (\x-0.35,-.25) rectangle (\x+0.35,.25);
}
\node at (6,-0.5) {\Huge $\ldots$};
\node at (1,-1) {$1$};
\node at (2.5,-1) {$2$};
\node at (4,-1) {$3$};
\node at (8,-1) {$L$};
\draw[white, fill=white,line width=1pt] (9,-.25) rectangle (9.6,.25);
\node at (9.3,0) {\small $e^{i \varphi \mathbf{S}^z}$};
\node at (1,0) {\small $\mathcal{L}_1(\lambda)$};
\node at (2.5,0) {\small $\mathcal{L}_2(\lambda)$};
\node at (4,0) {\small $\mathcal{L}_3(\lambda)$};
\node at (8,0) {\small $\mathcal{L}_L(\lambda)$};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{
The transfer matrix for the spin-$S$ XXZ chain. The auxiliary space $\mathcal{A}$ is represented in blue, and is traced over. This construction allows for twisted boundary conditions when an additional $e^{i \varphi \mathbf{S}^z}$ acting on $\mathcal{A}$ is inserted.}
\label{fig:TM}
\end{figure}
The simplest case is that of the spin $S=1/2$ chain, where the Lax operators are given by
\begin{equation}
\mathcal{L}(u) =
\left(
\begin{array}{cc}
[\frac{u}{ \gamma} +\frac{1}{2}+\mathbf{S}^z ]
&
\mathbf{S}^-
\\
\mathbf{S}^+
&
[\frac{u}{ \gamma} +\frac{1}{2}-\mathbf{S}^z ]
\end{array}
\right) \,.
\label{lax12}
\end{equation}
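As an illustration, the commuting-family property can be checked numerically on a small chain. The following sketch (all concrete choices, i.e.\ $\gamma=\pi/5$ with $q=e^{i\gamma}$ so that $[x]=\sin(\gamma x)/\sin\gamma$, a chain of $L=3$ sites, and zero twist, are ours) builds $T(u)$ directly from \eqref{XXZTu} and \eqref{lax12}, and also verifies that $T(0)$ is a permutation matrix, the one-site translation operator:

```python
import numpy as np
from itertools import product

gamma = np.pi / 5                 # generic anisotropy, q = e^{i*gamma}
L = 3                             # number of spin-1/2 sites

def box(x):                       # q-number [x] = sin(gamma*x)/sin(gamma), entrywise
    return np.sin(gamma * x) / np.sin(gamma)

# spin-1/2 generators on the auxiliary space (basis ordered m = +1/2, -1/2)
szd = np.array([0.5, -0.5])
Sp = np.array([[0., 1.], [0., 0.]])
Sm = Sp.T

def lax(u):
    """Site-space 2x2 matrix whose entries are operators on the auxiliary space."""
    x = u / gamma + 0.5
    return [[np.diag(box(x + szd)), Sm],
            [Sp, np.diag(box(x - szd))]]

def transfer(u):
    Lx = lax(u)
    dim = 2 ** L
    T = np.zeros((dim, dim), dtype=complex)
    for a in product(range(2), repeat=L):      # a[j] = state of site j+1
        for b in product(range(2), repeat=L):
            M = np.eye(2, dtype=complex)
            for j in range(L):                 # accumulates L_L(u) ... L_1(u)
                M = Lx[a[j]][b[j]] @ M
            ia = sum(a[j] * 2 ** j for j in range(L))
            ib = sum(b[j] * 2 ** j for j in range(L))
            T[ia, ib] = np.trace(M)
    return T

T1, T2 = transfer(0.23), transfer(-0.41)
assert np.allclose(T1 @ T2, T2 @ T1)           # [T(u), T(v)] = 0

T0 = transfer(0.0)                             # T(0): one-site translation
assert np.allclose(np.abs(T0) * (1 - np.abs(T0)), 0)  # entries are 0 or 1
assert np.allclose(np.abs(T0).sum(axis=1), 1)         # one entry per row
```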
The fusion procedure then gives the higher-spin versions \cite{XXZSSogo,XXZSBabu,XXZSKiri, XXZSH}. The Lax operator for $S=1$ is for example derived explicitly in \cite{VernierPiroli}, and is
\begin{eqnarray}
\mathcal{L}(u) =
\left(
\begin{array}{ccc}
[\frac{u}{ \gamma} +\mathbf{S}^z ]
[\frac{u}{ \gamma} +1+\mathbf{S}^z ]
&
\mathbf{S}^- [\frac{u}{ \gamma} +\mathbf{S}^z ]
&
(\mathbf{S}^-)^2
\\
\mathbf{S}^+ [\frac{u}{ \gamma} +1+\mathbf{S}^z ]
&
\mathbf{S}^+ \mathbf{S}^- + [\frac{u}{ \gamma} +1+\mathbf{S}^z ][\frac{u}{ \gamma} -\mathbf{S}^z ]
&
\mathbf{S}^- [\frac{u}{ \gamma} -1+\mathbf{S}^z ]
\\
(\mathbf{S}^+)^2
&
\mathbf{S}^+ [\frac{u}{ \gamma} -\mathbf{S}^z ]
&
[\frac{u}{ \gamma} +1-\mathbf{S}^z ]
[\frac{u}{ \gamma} -\mathbf{S}^z ]
\end{array}
\right) \,.
\label{lax1}
\end{eqnarray}
In all cases, the transfer matrices depend on an extra parameter $u$ called the spectral parameter. A fundamental property of the construction is that the transfer matrices associated with different spectral parameters commute with one another:
\begin{equation}
[T(u),T(v)]=0 \,.
\end{equation}
One can generate a set of mutually commuting local charges by taking the successive logarithmic derivatives of $T(u)$ about $u=0$, with the Hamiltonian the first one, i.e.
\begin{align}
\widetilde{H}_n = T(0)^{-1} T'(0)\ .
\end{align}
An alternate but equivalent description is to reorganize the matrix elements of the Lax operators into $R$ matrices $R_{\mathcal{A},j}(u)$. These $(2S+1)^2 \times (2S+1)^2$ matrices act on the tensor product of $\mathcal{A}$ with the fundamental representation. The transfer matrix is then
\begin{equation}
T(u) = \mathrm{Tr}_\mathcal{A} \left( e^{i \varphi \mathbf{S^z}} R_{\mathcal{A}L}(u) \ldots R_{\mathcal{A}1}(u) \right) \,.
\label{XXZTuR}
\end{equation}
An important property of the $R$ matrices is that at $u=0$, $R_{\mathcal{A},j}(0) \propto \mathcal{P}_{\mathcal{A},j}$, the permutation operator acting as $\mathcal{P}_{\mathcal{A},j} |a\rangle_{\mathcal{A}} \otimes |b\rangle_j = |b\rangle_{\mathcal{A}} \otimes |a\rangle_{j}$. It is customary to introduce the matrices $\check{R}_{\mathcal{A},j}(u) = \mathcal{P}_{\mathcal{A},j} R_{\mathcal{A},j}(u)$, which have the property that $\check{R}(0)$ is proportional to the identity.
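These properties are immediate to verify for $S=1/2$, where the $R$ matrix read off from \eqref{lax12} is the symmetric six-vertex one. The snippet below (our conventions: auxiliary leg first, generic $\gamma=\pi/5$, $[x]=\sin(\gamma x)/\sin\gamma$) checks that $R(0)=\mathcal{P}$ and $\check{R}(0)=\mathbb{1}$, and also the Yang--Baxter equation in difference form, which underlies the commutation of the transfer matrices:

```python
import numpy as np

gamma = np.pi / 5                     # generic anisotropy, q = e^{i*gamma}

def box(x):                           # q-number [x] = sin(gamma*x)/sin(gamma)
    return np.sin(gamma * x) / np.sin(gamma)

def R(u):
    """4x4 R matrix on (auxiliary) x (site), read off from the S=1/2 Lax operator."""
    x = u / gamma + 0.5
    szd = np.array([0.5, -0.5])
    Sp = np.array([[0., 1.], [0., 0.]])
    Sm = Sp.T
    Lx = [[np.diag(box(x + szd)), Sm], [Sp, np.diag(box(x - szd))]]
    out = np.zeros((4, 4))
    for a in range(2):
        for b in range(2):
            for al in range(2):
                for be in range(2):
                    out[al * 2 + a, be * 2 + b] = Lx[a][b][al, be]
    return out

P = np.zeros((4, 4))                  # permutation P |a>|b> = |b>|a>
for a in range(2):
    for b in range(2):
        P[b * 2 + a, a * 2 + b] = 1

assert np.allclose(R(0), P)           # R(0) is the permutation operator
assert np.allclose(P @ R(0), np.eye(4))   # Rcheck(0) = P R(0) = identity

# Yang-Baxter: R12(u-v) R13(u) R23(v) = R23(v) R13(u) R12(u-v)
I2 = np.eye(2)
S23 = np.kron(I2, P)                  # swap of legs 2 and 3
R12 = lambda u: np.kron(R(u), I2)
R23 = lambda u: np.kron(I2, R(u))
R13 = lambda u: S23 @ R12(u) @ S23
u, v = 0.31, -0.17
assert np.allclose(R12(u - v) @ R13(u) @ R23(v), R23(v) @ R13(u) @ R12(u - v))
```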
\subsection{The chiral Hamiltonians from nilpotent representations}
\label{sec:nilpotent}
The transfer matrices described above are called {\it fundamental}, in the sense that the auxiliary space and the physical sites carry the same spin-$S$ representation. {\it Non-fundamental} transfer matrices are built by using other representations of the quantum-group algebra \cite{Gomez} for the auxiliary space $\mathcal{A}$. Such transfer matrices have been used extensively in the recent literature on quantum quenches or quantum transport, as generators of quasi-local conserved charges (see e.g.\ \cite{Prosen1,Prosen2,VernierPiroli,DeLuca}).
The structure of these representations is particularly rich at the points $q^{n}=\pm 1$, where there occur representations with no analog in the $SU(2)$ Lie algebra. At $q^n=-1$, the quantum group $U_q(sl_2)$ has additional $2S+1$-dimensional representations referred to as {\it nilpotent}, {\it semi-cyclic} or {\it cyclic}. Whereas the latter type has arisen previously in studies of the Onsager algebra in the superintegrable chiral Potts model \cite{NishinoDeguchi,Roan}, we describe here how all three arise naturally in our models. The corresponding transfer matrices allow both the chiral Hamiltonians and the Onsager elements to be expressed in an elegant algebraic fashion.
The nilpotent representations are parametrised by a continuous number $\alpha$, as
\begin{eqnarray}
\mathbf{S}^z |m \rangle &=& (m + \alpha ) |m \rangle \,, \qquad m = -S,\ldots,S
\nonumber
\\
\mathbf{S}^+ |m \rangle &=& - [m - S + 2 \alpha] | m + 1 \rangle \,,
\nonumber
\\
\mathbf{S}^- |m \rangle &=& [m + S] | m - 1 \rangle \,.
\label{nilpotent}
\end{eqnarray}
Here and from now on we fix $S=(n-1)/2$ and $\gamma = {\pi/n}$, so that $q=e^{i\gamma}=e^{i\pi/n}$.
For $\alpha=0$, it is easy to check that this reduces to the usual spin-$S$ representation \eqref{spinSz}, \eqref{spinS}. Otherwise the representation is non-unitary, and is sometimes referred to as the ``complex-spin representation''.
Using these new generators $\mathbf{S}^z, \mathbf{S}^+, \mathbf{S}^-$ inside the definition of the Lax operator given in the previous section, we now have a two-parameter ($u$ and $\alpha$) family of transfer matrices which, crucially, all commute with one another, and so in particular commute with the fundamental transfer matrix \eqref{XXZTu} and the Hamiltonian $\widetilde{H}_n$.
We label these more general objects as $T(\lambda, \bar{\lambda})$, using the parameters
\begin{equation}
\lambda = i u \,, \qquad \bar{\lambda} = i \gamma \alpha \,,
\end{equation}
so that $T(u)=T(-i\lambda,0)$. These transfer matrices obey
\begin{equation}
[T(\lambda, \bar{\lambda}), T(\lambda', \bar{\lambda}')] = 0 \,.
\end{equation}
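This commutativity can be verified numerically on small chains. The sketch below is our own minimal implementation (assumptions: $n=2$, $L=3$ sites, zero twist, the nilpotent representation \eqref{nilpotent} placed in the auxiliary space of the Lax operator \eqref{lax12}, and the basis ordered $m=+1/2,-1/2$); it checks that transfer matrices with different $u$ and $\alpha$, including the fundamental one at $\alpha=0$, commute:

```python
import numpy as np
from itertools import product

n, L = 2, 3
gamma = np.pi / n                      # q = e^{i*gamma} = i

def box(x):                            # q-number [x] = sin(gamma*x)/sin(gamma)
    return np.sin(gamma * x) / np.sin(gamma)

def transfer(u, alpha):
    # nilpotent auxiliary representation for S = 1/2, basis (m=+1/2, m=-1/2)
    szd = np.array([0.5 + alpha, -0.5 + alpha])
    Sp = np.zeros((2, 2)); Sp[0, 1] = -box(2 * alpha - 1)  # S^+|-1/2> = -[2a-1]|+1/2>
    Sm = np.zeros((2, 2)); Sm[1, 0] = box(1.0)             # S^-|+1/2> = [1]|-1/2>
    x = u / gamma + 0.5
    Lx = [[np.diag(box(x + szd)), Sm], [Sp, np.diag(box(x - szd))]]
    dim = 2 ** L
    T = np.zeros((dim, dim), dtype=complex)
    for a in product(range(2), repeat=L):
        for b in product(range(2), repeat=L):
            M = np.eye(2, dtype=complex)
            for j in range(L):         # accumulates L_L(u) ... L_1(u)
                M = Lx[a[j]][b[j]] @ M
            T[sum(a[j] << j for j in range(L)),
              sum(b[j] << j for j in range(L))] = np.trace(M)
    return T

Ta = transfer(0.23, 0.37)              # generic (u, alpha)
Tb = transfer(-0.41, 0.0)              # alpha = 0: fundamental transfer matrix
Tc = transfer(0.11, -0.52)
for X, Y in [(Ta, Tb), (Ta, Tc), (Tb, Tc)]:
    assert np.linalg.norm(X @ Y - Y @ X) < 1e-8
```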
Because for the nilpotent representation both the auxiliary space and the physical spins have the same dimension $2S+1$, the logarithmic derivatives with respect to both $\lambda$ and $\bar{\lambda}$ generate independent {\it local} conserved charges. Remarkably, these are the chiral Hamiltonians:
\begin{eqnarray}
\left. i \frac{\mathrm{d}}{\mathrm{d}\lambda} \log T(\lambda,0) \right|_{\lambda=0} &=& \frac{2}{n}\widetilde{H}_n = \frac{2}{n} \left(\widetilde{H}_{\rm R} + \widetilde{H}_{\rm L}\right) \,,
\label{TH}
\\
\left. i \frac{\mathrm{d}}{\mathrm{d}\bar{\lambda}} \log T(0,\bar{\lambda}) \right|_{\bar{\lambda}=0} &=& \frac{2}{n} \left(\widetilde{H}_{\rm R} - \widetilde{H}_{\rm L}\right)
\,,
\label{THbar}
\end{eqnarray}
where the tildes arise from the change of basis \eqref{basischange}. Thus the decomposition of $H_n$ into the sum of commuting pieces ${H}_{\rm R}$ and ${H}_{\rm L}$ is expressed very nicely by using an uncommon quantum-group representation. For $n=3$, this fact was known from a classification of three-state quantum chains solvable by coordinate Bethe Ansatz \cite{Ragoucy1,Ragoucy2}, where $\widetilde{H}_{\rm R}$ and $\widetilde{H}_{\rm L}$ are part of a continuous family of Hamiltonians associated with special representations of $U_q(sl_2)$ at roots of unity.
Even more remarkably, the transfer matrices themselves factorise as
\begin{align}
T(\lambda, \bar{\lambda}) = T(0,0)^{-1} T^{}_{\rm R}(\lambda_{\rm R}) T^{}_{\rm L}(\lambda_{\rm L}) \,,
\label{factorization}
\end{align}
where $T(0,0)=T(0)$ is the one-site translation operator, and where we have introduced
\begin{align}
T^{}_{\rm R}(\lambda_{\rm R}) = T\left(\frac{\lambda_{\rm R}}{2},\frac{\lambda_{\rm R}}{2}\right)\ , \qquad\quad
T^{}_{\rm L}(\lambda_{\rm L}) = T\left(\frac{\lambda_{\rm L}}{ 2}, -\frac{\lambda_{\rm L}}{ 2}\right) \ .
\end{align}
The transfer matrices generate the chiral Hamiltonians as
\begin{align}
i\left. \frac{\mathrm{d}}{\mathrm{d}\lambda_{\rm R}} \log T_{\rm R}(\lambda_{\rm R}) \right|_{\lambda_{\rm R}=0} = \frac{2}{n} \widetilde{H}_{\rm R} \ ,\qquad\quad
i\left. \frac{\mathrm{d}}{\mathrm{d}\lambda_{\rm L}} \log T_{\rm L}(\lambda_{\rm L}) \right|_{\lambda_{\rm L}=0} &= \frac{2}{n} \widetilde{H}_{\rm L} \ .
\label{HTRL}
\end{align}
The transfer matrices $T_{\rm R}(\lambda_{\rm R})$ and $T_{\rm L}(\lambda_{\rm L})$ not only form commuting families but commute with one another as well. As our naming indicates, these are purely chiral, in that their action carries $U(1)$ charge towards the right and the left respectively. The easiest way to prove this is to rewrite the transfer matrix in the $R$-matrix form \eqref{XXZTuR}. In the nilpotent representation, these matrices turn out to be
upper and lower triangular.
Another important property of the nilpotent representations \eqref{nilpotent} is that they are reducible for $\alpha = \pm 1$ (and for any integer value of $\alpha$ not a multiple of $n$). The auxiliary space then is effectively of dimension $n-1$. One can easily check that the actions of the generators $\mathbf{S}^{z,\pm}$ in the reduced auxiliary spaces for $\alpha=\pm 1$ are equivalent up to a change of basis, so the two corresponding transfer matrices are equal. In terms of $T_{\rm R}$ and $T_{\rm L}$, this translates into the following identity
\begin{equation}
T_{\rm L}(\lambda)T_{\rm R}(\lambda+i\pi/n)
=
T_{\rm R}(\lambda)T_{\rm L}(\lambda+i\pi/n) \,,
\label{TLTRTLTR}
\end{equation}
as can be checked by direct implementation on the lattice.
\subsection{The Onsager algebra from semi-cyclic representations}
\label{sec:OnsagerTM}
An even more general class of representations of the quantum-group algebra is formed by the so-called (semi-)cyclic representations.
Transfer matrices built out of these representations do not conserve the $U(1)$ charge but only a ${\mathbb Z}_n$ subgroup. Moreover, they do not commute with one another, nor in general with the $T(\lambda, \bar{\lambda})$ constructed in the previous subsection. However, under some circumstances \cite{Arnaudon}, one can construct such transfer matrices that do commute with the fundamental one $T(u)=T(\lambda,0)$ with $\lambda=iu$.\footnote{A technical complication is that these circumstances exclude our case $q=e^{i\pi/n}$. A workaround is to use a simple gauge transformation to relate our spin-$S$ XXZ chains to those with $q=-e^{-i\pi/n}$, where the construction works \cite{XXZSKiri}. As a consequence, we can construct transfer matrices for the (semi)cyclic representations for the latter, which after undoing the gauge transformation commute with $\widetilde{H}_{\rm R} + \widetilde{H}_{\rm L}$. }
Such transfer matrices therefore have precisely the properties of the elements of the Onsager algebra, and we show how to find the latter in the former. The connection between the Onsager algebra and cyclic representations of the quantum group has already been noted widely in the literature, albeit following a different route from the one presented here \cite{Bazhanov}.
Semi-cyclic representations are characterised by two more parameters, $\beta_{\pm}$. The generators can be written as \cite{Gomez,Prosen2,Arnaudon}
\begin{eqnarray}
\mathbf{S}^z |m \rangle &=& (m + \alpha ) |m \rangle \,, \qquad m = -S,\ldots,S
\nonumber
\\
\mathbf{S}^+ |m \rangle &=& \left( \beta_+ \beta_- + [m-S] [m - S + 2 \alpha] \right) | m + 1 \rangle \,,
\qquad
\mathbf{S}^+ |S \rangle = \beta_+ | -S \rangle \,,
\nonumber
\\
\mathbf{S}^- |m \rangle &=& | m - 1 \rangle \,,
\qquad
\mathbf{S}^- |-S \rangle = \beta_- | S \rangle \,,
\label{semicyclic}
\end{eqnarray}
In particular, for $\beta_+ = \beta_- = 0$ these recover the nilpotent generators \eqref{nilpotent} up to a change of basis.
The action of the generators $\mathbf{S}^{\pm}$ in such representations is pictured in Figure \ref{fig:cyclic}.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=1.5]
\begin{scope}[yshift=0]
\foreach \x in {0,0.5,1.5,2}
{ \draw[line width=1.5pt] (0,\x) -- (1,\x);
}
\node at (0.5,1.1) {\Huge $\vdots$};
\node at (0.5,0.1) {\small $-S$};
\node at (0.5,0.6) {\small $-S+1$};
\node at (0.5,1.6) {\small $S-1$};
\node at (0.5,2.1) {\small $S$};
\draw[blue,line width=1pt,-latex] (1.1,0) arc (-90:90:0.25) ;
\draw[blue,line width=1pt,-latex] (1.1,1.5) arc (-90:90:0.25) ;
\draw[blue,line width=1pt,-latex] (1.3,2) arc (45:-45:1.4) ;
\node[blue] at (2,1.1) {$\beta_+$};
\draw[red,line width=1pt,-latex] (-0.1,0.5) arc (90:270:0.25) ;
\draw[red,line width=1pt,-latex] (-0.1,2) arc (90:270:0.25);
\draw[red,line width=1pt,latex-] (-0.3,2) arc (135:225:1.4);
\node[red] at (-1,1.1) {$\beta_-$};
\end{scope}
\end{tikzpicture}
\end{center}
\caption{
Action of the quantum-group generators $\mathbf{S}^-$ (in red) and $\mathbf{S}^+$ (in blue) in the (semi-)cyclic representations.
}
\label{fig:cyclic}
\end{figure}
The resulting $U(1)$ charge violation and ${\mathbb Z}_n$ preservation are also apparent, in that e.g.\ ${\bf S}^{\pm}$ in a semi-cyclic representation can change the $U(1)$ charge by $\mp n$.
We define $T_+(\beta_+)$ and $T_-(\beta_-)$ to be the transfer matrices constructed using the semi-cyclic representations with $\beta_-=0$ and $\beta_+=0$ respectively, and $u=\alpha=0$. These commute with $T(\lambda,0)$ and hence with $\widetilde{H}_{\rm R} + \widetilde{H}_{\rm L}$, but not with $T(\lambda,\bar{\lambda})$ in general, and so not with any $\widetilde{H}(\alpha)$ except at $\alpha=0$ or $\pi$.
For $n=2$, the $R$ matrix associated with $T_+(\beta_+)$ is
\begin{equation}
\check{R}_+(\beta_+) =
\left(
\begin{array}{cccc}
1 & & & \\
& 1 & & \\
& & 1 & \\
\beta_+ & & & 1 \\
\end{array}
\right) \,,
\end{equation}
and similarly that associated with $T_-(\beta_-)$ is obtained by transposing the above expression and replacing $\beta_+$ by $\beta_-$. For $n=3$ we find analogously
\begin{equation}
\check{R}_+(\beta_+) =
\left(
\begin{array}{ccccccccc}
1 & & & & & & & & \\
& 1 & & & & & & & \\
& & 1 & & & & & & \\
& & & 1 & & & & & \\
& & & & 1 & & & & \\
\beta_+ & & & & & 1 & & & \\
& & & & & & 1 & & \\
- \beta_+ & & & & & & & 1 & \\
& - \beta_+ & & \beta_+ & & & & & 1 \\
\end{array}
\right) \,,
\end{equation}
and similarly for $\check{R}_-(\beta_-)$.
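The matrices above form a one-parameter group, $\check{R}_+(\beta)\check{R}_+(\beta') = \check{R}_+(\beta+\beta')$, equivalently $\check{R}_+(\beta)=e^{\beta N}$ with $N$ nilpotent; this mirrors the exponential form of $T_\pm(\beta_\pm)$ conjectured below. A quick check for $n=3$ (the matrix entries are hard-coded from the expression above; zero-based indices are ours):

```python
import numpy as np

def R_plus(beta):                # \check R_+(beta) for n = 3, as displayed above
    R = np.eye(9)
    R[5, 0] = beta               # row 6, column 1
    R[7, 0] = -beta              # row 8, column 1
    R[8, 1] = -beta              # row 9, column 2
    R[8, 3] = beta               # row 9, column 4
    return R

b1, b2 = 0.7, -1.3
assert np.allclose(R_plus(b1) @ R_plus(b2), R_plus(b1 + b2))

N = R_plus(1.0) - np.eye(9)      # nilpotent generator: R_plus(beta) = exp(beta*N)
assert np.allclose(N @ N, 0)
```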
From there we immediately recognize
\begin{align}
\left. \frac{\mathrm{d}}{\mathrm{d}\beta_+} \log T_{+}(\beta_+) \right|_{\beta_+=0} = 2 \sin\frac{\pi}{n} ~ \widetilde{Q}^{+} \ ,\qquad\quad
\left. \frac{\mathrm{d}}{\mathrm{d}\beta_-} \log T_{-}(\beta_-) \right|_{\beta_-=0} = 2 \sin\frac{\pi}{n} ~ \widetilde{Q}^{-}
\label{QpmT} \,,
\end{align}
which we conjecture to remain true for larger values of $n$. As always, the tilde in \eqref{QpmT} means to take the unitary transform (\ref{basischange}).
As is typical, higher-order derivatives can be obtained from commutators of the local densities of the first derivatives, namely $\widetilde{Q}^+$ and $\widetilde{Q}^-$ respectively. Since these commutators involve respectively $\mathbf{S}^+$ operators only and $\mathbf{S}^-$ operators only, the Onsager algebra requires that they commute, and so the higher logarithmic derivatives vanish. Thus our
conjecture for the transfer matrices $T_{\pm}(\beta_\pm)$ can be rewritten as
\begin{equation}
T_{\pm}(\beta_\pm) = T(0) e^{(2 \sin\frac{\pi}{n}) \beta_{\pm} \widetilde{Q}^\pm} \,.
\end{equation}
The relations (\ref{THbar},\ref{QpmT}) give the building blocks of the dual $U(1)$ charge, $Q^0$, $Q^+$ and $Q^-$, in terms of the non-fundamental transfer matrices of the higher-spin XXZ quantum chain. All the Onsager elements can be generated by commuting these with each other. It is therefore natural to expect that all the Onsager generators can be expressed in a similarly elegant fashion, and we present a conjecture here.
We start with the operators $Q^0_m$, which we refer to as the ``Onsager Hamiltonians'' \footnote{These generators form a subset of the three-parameter abelian subalgebra $I_m = \kappa (A_{m}+A_{-m})+\kappa^* (A_{m+1}+A_{-m+1})+ \mu (G_{m+1}-G_{m-1})$ of the Onsager algebra \cite{Baseilhac18}, corresponding to $\kappa^* = \mu = 0$.}. Since these are mutually commuting, we expect that these are related to the transfer matrices $T_{\rm R}$ and $T_{\rm L}$ constructed from the nilpotent representation. To this end, we define parameters $\tau_{\rm R}= \tau(\lambda_{\rm R}), \tau_{\rm L}= \tau(\lambda_{\rm L})$ via the function
\begin{equation}
\tau(\lambda) = - \tanh \left(\frac{n}{2} \lambda \right) \, .
\label{taudef}
\end{equation}
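The function $\tau$ satisfies $\tau(\lambda+i\gamma)=\tau(\lambda)^{-1}$, used repeatedly below: shifting $n\lambda/2$ by $i\pi/2$ turns $\tanh$ into $\coth$. A quick numerical confirmation (our snippet; the value of $\lambda$ is arbitrary):

```python
import cmath

n = 3
gamma = cmath.pi / n                   # gamma = pi/n

def tau(lam):                          # tau(lambda) = -tanh(n*lambda/2)
    return -cmath.tanh(n * lam / 2)

lam = 0.31 - 0.12j
# tau(lambda + i*gamma) = 1/tau(lambda)
assert abs(tau(lam + 1j * gamma) - 1 / tau(lam)) < 1e-10
```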
We then define a family of commuting local conserved charges as
\begin{equation}
Q_{{\rm R},m} = \frac{1}{ (m-1)!} \left. \frac{\mathrm{d}^m}{\mathrm{d}\tau_{\rm R}^m} \log T_{\rm R} \right|_{\tau_{\rm R}=0}
\,,
\qquad\quad
Q_{{\rm L},m} = \frac{1}{ (m-1)!} \left. \frac{\mathrm{d}^m}{\mathrm{d}\tau_{\rm L}^m} \log T_{\rm L} \right|_{\tau_{\rm L}=0}
\end{equation}
for any positive integer $m$. Generalising the relations for $m=1$ from \eqref{decomposition},
we expect that the particular combinations $Q_{{\rm R},m}+Q_{{\rm L},m}$ are in direct correspondence with the higher conserved charges generated by the fundamental transfer matrix \eqref{XXZTu}, while the combinations $Q_{{\rm R},m}-Q_{{\rm L},m}$ are related to the Onsager Hamiltonians. We thus conjecture
\begin{eqnarray}
\widetilde{Q}_{2m+1}^{0} &=& Q_{{\rm R},2m+1} - Q_{{\rm L},2m+1}\,, \\
\widetilde{Q}_{2m}^{0} &=& Q_{{\rm R},2m} - Q_{{\rm L},2m} - Q \,.
\end{eqnarray}
We have checked this conjecture on finite chains for $n=2,3$ and several values of $m$ ranging between $1$ and $10$.
We then consider the formal series expansion
\begin{equation}
\frac{2\tau(\lambda)}{n} \left. \frac{\mathrm{d}}{\mathrm{d}\bar{\lambda}} \log T(\lambda,\bar{\lambda}) \right|_{\bar{\lambda}=0}
= \sum_{m=1}^{\infty} {\tau(\lambda)^m} \widetilde{Q}^{0}_m - \frac{\tau(\lambda)^2}{1-\tau(\lambda)^2} Q \,,
\label{gen1}
\end{equation}
and similarly, using that $\tau(\lambda + i \gamma) = \tau(\lambda)^{-1}$,
\begin{equation}
{2\tau(\lambda)^{-1}\over n} \left. \frac{\mathrm{d}}{\mathrm{d}\bar{\lambda}} \log T(\lambda+i\gamma,\bar{\lambda}) \right|_{\bar{\lambda}=0}
= \sum_{m=1}^{\infty} {\tau(\lambda)^{-m}} \widetilde{Q}^{0}_m - \frac{\tau(\lambda)^{-2}}{1-\tau(\lambda)^{-2}} Q \,.
\label{gen2}
\end{equation}
The sum \eqref{gen1}+\eqref{gen2} can be rewritten, after a little rearranging, as the generating function of the Onsager Hamiltonians
\begin{align}
\mathcal{G}^0(\lambda) &\equiv
\frac{n}{2i \pi \cosh(n \lambda)}
\sum_{p \in \mathbb{Z}}
{e^{- |p |\epsilon}}
\tau\left(\lambda-i{\gamma/ 2}\right)^p \widetilde{Q}_p^0 \cr
& = \frac{1}{2 i \pi}
\frac{\mathrm{d}}{\mathrm{d}\lambda}
\log\left[
{
T_{\rm R}\left(\lambda-i\frac{\gamma}{2}+i \epsilon\right)
T_{\rm L}\left(\lambda+i\frac{\gamma}{2}- i \epsilon\right)
\over
T_{\rm L}\left(\lambda-i\frac{\gamma}{2}+i \epsilon\right)
T_{\rm R}\left(\lambda+i\frac{\gamma}{2}- i \epsilon\right)
}
\right] \,.
\label{gen3reg}
\end{align}
A few comments about this conjecture are in order: first, an operator in the denominator means its inverse has to be taken. Since all of the matrices considered here commute with one another, the notation is unambiguous.
Second, we have introduced in \eqref{gen3reg} a small positive number $\epsilon$, which plays the role of a regulator. In the absence of the latter, the expression \eqref{gen3reg} would vanish as a result of \eqref{TLTRTLTR}. The interpretation of the regularized generating function \eqref{gen3reg} will become natural in the Bethe-ansatz framework described below.
The remaining Onsager elements can be generated simply by commuting with $\widetilde{Q}^{\pm}=\widetilde{Q}^{\pm}_1$
as in \eqref{Onsager}. Namely,
\begin{eqnarray}
\mathcal{G}^+(\lambda) &\equiv& [\widetilde{Q}^+, \mathcal{G}^0(\lambda)] = \frac{n^2}{4\pi \cosh^2(n \lambda)}
\sum_{p \in \mathbb{Z}}
{e^{- |p |\epsilon}} \tau\left(\lambda+i{\gamma/ 2}\right)^p \widetilde{Q}_p^+
\nonumber
\\
\mathcal{G}^-(\lambda) &\equiv&
[\widetilde{Q}^-, \mathcal{G}^0(\lambda)] =
\frac{n^2}{4\pi \cosh^2(n \lambda)}
\sum_{p \in \mathbb{Z}}
{e^{- |p |\epsilon}} \tau\left(\lambda+i{\gamma/ 2}\right)^p \widetilde{Q}_p^- \,.
\label{QplusQminus}
\end{eqnarray}
\section{Bethe-ansatz analysis}
\label{sec:betheansatz}
The existence of a $U(1)$ conserved charge suggests that the energies can be computed using the Coordinate Bethe Ansatz (CBA). This construction will allow us to demonstrate the correspondence with the higher-spin XXZ chains, and provide a means to better understand the structure of the degenerate multiplets.
\subsection{Coordinate Bethe ansatz}
\label{sec:CBA}
The CBA procedure starts with the definition of a reference eigenstate (or pseudovacuum), corresponding to the minimal value of the charge $Q$. Labeling the local basis states for each spin as $n-1,n-2, \ldots 0$, according to the eigenvalue of $(n-1)/2 +Q_j$, the pseudovacuum is defined as
\begin{equation}
|\Omega \rangle = |0\ldots 0\rangle \,.
\end{equation}
From now on we shall shift the Hamiltonians $H_n$ by an appropriate identity term to make $H_n|\Omega\rangle=0$.
\paragraph{One-particle eigenstates}
One-particle eigenstates in the basis \eqref{HnchiralSpm} are plane-wave states
\begin{equation}
| k \rangle = \sum_{j} e^{i k j} |1_{j} \rangle
\label{CBA1} \,,
\end{equation}
where $|1_{j} \rangle$ stands for the state $|1\rangle$ on site $j$, and $\ket{0}$ on the others. Requiring the periodicity of the wavefunction imposes the quantization $k \in \frac{2\pi}{L} \mathbb{Z}$.
It will be useful to introduce the shifted momenta
\begin{equation}
\tilde{k} = k - \frac{\pi(n+1) }{n} \,,
\end{equation}
which can be understood as the momenta in the basis \eqref{Hntilde}, that is, the momenta for the associated XXZ chains with twisted boundary conditions.
The energy of such states in terms of the latter is easily checked to be given by
\begin{equation}
\epsilon(\tilde{k}) = \frac{n}{ \sin \frac{\pi}{n}} \left(\cos \tilde{k} + \cos \frac{\pi}{n} \right)
\label{epsilonk}
\,.
\end{equation}
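For $n=2$, the dispersion \eqref{epsilonk} reduces to $\epsilon(\tilde{k}) = 2\cos\tilde{k}$, which can be checked against exact diagonalisation of the one-magnon block of $\widetilde{H}_2$. A minimal sketch (our simplification: periodic boundary conditions, i.e.\ the twist \eqref{twist} is ignored, so the quantised momenta are $2\pi m/L$):

```python
import numpy as np

Lc = 6                                   # chain length
# one-magnon block of \tilde H_2 with periodic BC: the flipped spin hops by one site
H1 = np.zeros((Lc, Lc))
for j in range(Lc):
    H1[j, (j + 1) % Lc] = 1
    H1[j, (j - 1) % Lc] = 1

ev = np.sort(np.linalg.eigvalsh(H1))
disp = np.sort([2 * np.cos(2 * np.pi * m / Lc) for m in range(Lc)])
assert np.allclose(ev, disp)             # matches eps(k) = 2 cos(k) at n = 2
```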
\paragraph{Two-particle eigenstates}
Two-particle states are given by
\begin{equation}
| k_1,k_2 \rangle = \sum_{j_1\leq j_2} \left( A_{12} e^{i (k_1 j_1+k_2 j_2)}+ A_{21} e^{i (k_2 j_1+k_1 j_2)} \right) |1_{j_1}1_{j_2} \rangle \,,
\label{psi2}
\end{equation}
with the convention that $|1_{j}1_{j} \rangle = |2_{j} \rangle$. As follows from examining terms with $j_1$ and $j_2$ far apart, for $|k_1,k_2\rangle$ to be an eigenstate of $H_n$, the energy must be $\epsilon(\tilde{k}_1)+\epsilon(\tilde{k}_2)$. Unwanted terms in $H_n|k_1,k_2\rangle$ with $j_1=j_2\pm 1$ vanish when
\begin{equation}
A_{12} \left(1 +2 \cos \frac{\pi}{n} e^{i \tilde{k}_1} + e^{i (\tilde{k}_1+\tilde{k}_2)} \right) +
A_{21} \left(1 +2 \cos \frac{\pi}{n} e^{i \tilde{k}_2} + e^{i (\tilde{k}_1+\tilde{k}_2)} \right)
=0 \,.
\label{A12_A21}
\end{equation}
There are two types of solutions to \eqref{A12_A21}. One is to have both $A_{12}$ and the factor multiplying $A_{21}$ vanish (or the other way around) \cite{Baxtercompleteness}; these so-called $0=0$ solutions will prove pivotal to our discussion. The other way is for them not to vanish, so that
\begin{equation}
\frac{A_{12}}{A_{21}}= - \frac{1 +2 \cos \frac{\pi}{n} e^{i \tilde{k}_2} + e^{i (\tilde{k}_1+\tilde{k}_2)}}{1 +2 \cos \frac{\pi}{n} e^{i \tilde{k}_1} + e^{i (\tilde{k}_1+\tilde{k}_2)} }
\equiv S(\tilde{k}_1,\tilde{k}_2)
\,.
\label{s_k1_k2}
\end{equation}
Note in particular that $S(\tilde{k},\tilde{k})=-1$, so the wavefunction \eqref{psi2} vanishes when $\tilde{k}_1$ and $\tilde{k}_2$ are equal.
Requiring periodicity of the wavefunction then quantizes the momentum via
\begin{align}
e^{iL\pi(n+1)/n} e^{iL\tilde{k}_1}= S(\tilde{k}_1,\tilde{k}_2)\
\end{align}
and similarly with $\tilde{k}_1 \leftrightarrow \tilde{k}_2$.
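Two algebraic properties of the amplitude \eqref{s_k1_k2} are worth making explicit: $S(\tilde{k},\tilde{k})=-1$, noted above, and $S(\tilde{k}_1,\tilde{k}_2)S(\tilde{k}_2,\tilde{k}_1)=1$, which makes the two quantization conditions mutually consistent. A quick numerical check (our snippet; the momenta are arbitrary):

```python
import cmath
import math

n = 3                                  # number of states per site

def S(k1, k2):
    """Two-particle amplitude S(k1, k2) for the shifted momenta."""
    c = 2 * math.cos(math.pi / n)
    num = 1 + c * cmath.exp(1j * k2) + cmath.exp(1j * (k1 + k2))
    den = 1 + c * cmath.exp(1j * k1) + cmath.exp(1j * (k1 + k2))
    return -num / den

k1, k2 = 0.7, -1.2                     # arbitrary test momenta
assert abs(S(k1, k1) + 1) < 1e-12      # S(k, k) = -1: coinciding momenta excluded
assert abs(S(k1, k2) * S(k2, k1) - 1) < 1e-12   # consistency of the two conditions
```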
\paragraph{$M$-particle eigenstates}
Nothing in these one- or two-particle eigenstates requires the model to be integrable. However, for the analogous Bethe ansatz for the eigenstates to work for more particles, the model must be integrable. This fact is clear for $n=2$, where the $k_j$ are just the free-fermion momenta. Checking that all unwanted terms vanish is straightforward for $n=3$, but making an explicit check gets increasingly difficult for larger values of $n$, as the number of terms in the Hamiltonian increases accordingly. We have, however, verified the validity of the Bethe-ansatz construction by explicit implementation on finite chains, up to three-particle states for $n=4$.
In the following we will therefore take for granted that the Bethe-ansatz construction holds generally, and will describe the general structure of eigenstates. Equivalently, we can just assume that $H_n$ is indeed the appropriate special case of the XXZ chain, and the result follows, since the fusion procedure guarantees the Bethe ansatz will work for all $n$.
The $M$-particle eigenstates are parametrized by a set of pseudomomenta $\{k_1, \ldots k_M\}$ as
\begin{equation}
| k_1,\ldots k_M \rangle = \sum_{j_1 \leq \ldots \leq j_M} \left( \sum_{{\cal P} \in \mathfrak{S}_M} A_{\cal P} e^{i (k_{{\cal P}_1} j_1+ \ldots+ k_{{\cal P}_M} j_M)}
\right)
|1_{j_1} \ldots 1_{j_M} \rangle \,,
\label{psiM}
\end{equation}
where the second sum is over permutations of the set $\{1, \ldots M \}$ (also labeled as orderings $p_1, \ldots p_M$). In this notation, a state where a spin takes on a value $2,\ldots, n-1$ is included by taking two successive $j_m$ to be equal, e.g.\ $\ket{2_j}=\ket{1_j1_j}$. For the $n$-state model, only $n-1$ consecutive $j_m$ can be equal. The $U(1)$ charge of the state is by construction $-L(n-1)/2+M$.
These states (\ref{psiM}) are eigenstates of $H_n$ when the coefficients $ A_{\cal P}$ are related by
\begin{equation}
\left(1 +2 \cos \frac{\pi}{n} e^{i \tilde{k}_{p_j}} + e^{i (\tilde{k}_{p_{j}}+\tilde{k}_{p_{j+1}})} \right) A_{\ldots p_j, p_{j+1} \ldots}
+
\left(1 +2 \cos \frac{\pi}{n} e^{i \tilde{k}_{p_{j+1}}} + e^{i (\tilde{k}_{p_{j}}+\tilde{k}_{p_{j+1}})} \right)
A_{\ldots p_{j+1},p_j \ldots}
=0
\,.
\label{CBAM}
\end{equation}
Note in particular from \eqref{CBAM} that having two coinciding pseudomomenta results in the vanishing of all the coefficients $A_{\cal P}$, as already noticed above for the two-particle states. The pseudomomenta thus obey an exclusion principle, as is typical of Bethe-ansatz eigenstates. Imposing the periodicity of the wavefunction fixes
\begin{equation}
A_{p_1,p_2 \ldots p_M} = e^{i L k_{p_1}} A_{p_2 \ldots p_M, p_1} \ .
\label{CBAperiodicity}
\end{equation}
If both factors in \eqref{CBAM} are non-vanishing, combining it with the periodicity relation \eqref{CBAperiodicity} results in a quantization of the pseudomomenta $k_j$ through
\begin{equation}
e^{i L \frac{\pi(n +1)}{n}} e^{i L \tilde{k}_j} = \prod_{m \neq j}^M S(\tilde{k}_j,\tilde{k}_m) = \prod_{m \neq j}^M - \frac{1 + 2 \cos\frac{\pi}{n} e^{i \tilde{k}_m} + e^{i (\tilde{k}_j+\tilde{k}_m)}}{1 +2 \cos\frac{\pi}{n} e^{i \tilde{k}_j} + e^{i (\tilde{k}_j+\tilde{k}_m)}} \ .
\label{BAEk}
\end{equation}
These coupled polynomial equations, one for each $e^{i\tilde{k}_j}$, are known as the Bethe equations. The energy of the corresponding eigenstate is given solely in terms of their solutions as
$E= \sum_{j} \epsilon(\tilde{k}_j)$, with $\epsilon(\tilde{k})$ given by eq.\ \eqref{epsilonk}.
It is useful to reparametrize the pseudomomenta in terms of the Bethe roots $\{\lambda_j\}$ as
\begin{equation}
e^{i \tilde{k}_j} = \frac{\sinh\left( \lambda_j + i \frac{\pi}{2n}(n-1) \right) }{\sinh\left( \lambda_j - i \frac{\pi}{2n}(n-1) \right)}
\,, \qquad
e^{2 \lambda_j} = \frac{\sin\left( { \tilde{k}_j\over 2} + \frac{\pi}{2n}(n-1) \right) }{\sin\left( {\tilde{k}_j\over 2} - \frac{\pi}{2n}(n-1) \right) } \,.
\label{klambda}
\end{equation}
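The two relations in \eqref{klambda} are mutually inverse, as can be confirmed numerically; a small Python sketch of ours:

```python
import cmath
import math

def e_ik(lam, n):
    # e^{i k~} as a function of the Bethe root lambda, first relation of (klambda)
    a = 1j * math.pi * (n - 1) / (2 * n)
    return cmath.sinh(lam + a) / cmath.sinh(lam - a)

def e_2lam(ktilde, n):
    # e^{2 lambda} as a function of k~, second relation of (klambda)
    a = math.pi * (n - 1) / (2 * n)
    return cmath.sin(ktilde / 2 + a) / cmath.sin(ktilde / 2 - a)

n, lam = 3, 0.4                       # arbitrary real Bethe root
ek = e_ik(lam, n)
assert abs(abs(ek) - 1) < 1e-12       # real roots give real pseudomomenta
ktilde = -1j * cmath.log(ek)          # recover k~ itself
assert abs(e_2lam(ktilde, n) - cmath.exp(2 * lam)) < 1e-12
```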
The Bethe quantization equations \eqref{BAEk} read in terms of the latter
\begin{equation}
e^{i L \frac{\pi(n +1)}{n}}\left( \frac{\sinh\left( \lambda_j - i \frac{\pi}{2n}(n-1) \right) }{\sinh\left( \lambda_j + i \frac{\pi}{2n}(n-1) \right)} \right)^L = \prod_{l \neq j}^M
\frac{ \sinh\left( \lambda_j - \lambda_l - i \frac{\pi}{n}\right) }
{ \sinh\left( \lambda_j - \lambda_l + i\frac{\pi}{n} \right) }
\label{BAEl} \,,
\end{equation}
while the energy becomes
\begin{equation}
E = - \sum_{j=1}^M \frac{n \sin{\pi \over n}}{\cosh(2\lambda_j) + \cos{\pi \over n}} \,.
\label{Energyl}
\end{equation}
Equations \eqref{BAEl} and \eqref{Energyl} match precisely the Bethe equations and energy of the spin-$\frac{n-1}{2}$ XXZ chain with $q=e^{i\gamma}$ and anisotropy parameter $\gamma = \frac{\pi}{n}$ \cite{XXZSSogo,XXZSBabu,XXZSKiri, XXZSH, XXZSCFT,XXZSCFTFrahm,XXZSCFTDFZ}, up to a rescaling of the energy by a factor $2/n$. This completes the identification of our models with the higher-spin XXZ chains.
\subsection{The degeneracies as exact \texorpdfstring{$n$}{n}-strings}
\label{sec:exactstrings}
As we will now see, the degeneracies described in the beginning of this paper have a very natural interpretation within the Bethe ansatz.
Degeneracies of this kind have already been studied for the spin-1/2 XXZ chain at $q$ a root of unity \cite{FabriciusMcCoy,Baxtercompleteness}, and the situation for the case at hand is closely analogous.
The solutions $\{\lambda_j\}$ of the Bethe equations \eqref{BAEl} typically arrange into sets of real roots, or form patterns in the complex plane. Following a standard argument \cite{takahashi}, the roots assemble into {\it strings}, which are sets of roots distant from one another by approximately $i \gamma$ and centered around the real axis. According to the {\it string hypothesis} \cite{takahashi}, as $L\to\infty$ these values approach
\begin{equation}
\lambda + \left( j - \frac{p+1}{2} \right) i \gamma \,, \qquad j=1,\ldots{p} \,,
\label{pstring}
\end{equation}
where the real value $\lambda$ is referred to as the {\it string center}. The set above is called a $p$-string. In addition to these, one also encounters the so-called $(1-)$-strings, or antistrings, which are single roots with imaginary part ${\pi \over 2}$ (see Figure \ref{fig:strings} for an illustration).
Strings inherit the exclusion principle obeyed by the Bethe roots; in particular, two strings of the same length cannot have the same center. It is a common observation in the study of integrable models that most of the relevant eigenstates of a model, in particular all of its low-energy levels in the large-$L$ limit, are described in terms of the above strings.
For instance, the ground state of the spin-1/2 XXZ chain is described by a set (``sea'') of $L/2$ real Bethe roots (or $1$-strings) on the antiferromagnetic side, and by a set of $L/2$ antistrings (where $\lambda_j+i\pi/2$ is real) on the ferromagnetic side. More generally, the ground state of the spin-$S$ XXZ chain is described by a sea of $2S$-strings on the antiferromagnetic side, and a sea of antistrings on the ferromagnetic side \cite{XXZSSogo,XXZSBabu,XXZSKiri, XXZSCFT,XXZSCFTFrahm}.
The energy associated with a configuration of Bethe roots including strings can be recast in the thermodynamic limit, where the strings become exact, as a sum over the string centers. The contribution to the energy of a $p$-string of the form \eqref{pstring} reads, for any spin-$S$ XXZ chain,
\begin{equation}
\lim_{L\to\infty} E_{p \mbox{-}{\rm string}} = - \sum_{j=1}^{p}
\frac{n \sin\gamma}{\cosh(2\lambda + i \gamma(2j-p-1)) + \cos\gamma} \,,
\label{Epstring}
\end{equation}
and is generically non-zero.
For these approximate string solutions, the main difference between $q$ a root of unity and generic $q$ is that in the former case, typically only a finite number of types of string solutions are important as $L\to\infty$. The non-zero energy (\ref{Epstring}) means that such strings are not related to the degeneracies in the spectrum. Degeneracies of the kind we observe can instead arise in the Bethe ansatz from {\it exact $n$-strings}, or {\it exact complete strings}, which occur at a root of unity. As the name indicates, the values \eqref{pstring} of the roots are then exact even at finite size $L$. Indeed, at the values $\gamma = \frac{\pi}{n}$ the string energy \eqref{Epstring} vanishes for $p=n$. Such string solutions have the form
\begin{equation}
\{\mu\}_{n} \equiv
\left\{
\mu+ i\frac{\pi}{2} \,,~
\mu + i\frac{\pi}{2} + i \gamma\,,~
\ldots
,
\mu + i\frac{\pi}{2} + i (n-1) \gamma
\right\} \,,
\label{exactstrings}
\end{equation}
(see Figure \ref{fig:strings}), where the {\it string center} $\mu$ obeys a slightly different definition from that of the ordinary strings above. Since the Bethe roots are defined up to a shift $\lambda_j \to \lambda_j + i \pi$, any cyclic permutation can be performed within \eqref{exactstrings}, so the string center is defined up to shifts by $\pm i \gamma$.
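The two statements about string energies — a generic $p$-string carries non-zero energy, while a complete $p=n$ string at $\gamma = \pi/n$ has exactly zero energy — can be confirmed numerically; a small Python sketch of ours:

```python
import cmath
import math

def string_energy(lam, p, n):
    # energy (Epstring) of a p-string with real center lam, at gamma = pi/n
    g = math.pi / n
    roots = [lam + (j - (p + 1) / 2) * 1j * g for j in range(1, p + 1)]
    return -sum(n * math.sin(g) / (cmath.cosh(2 * r) + math.cos(g)) for r in roots)

assert abs(string_energy(0.6, 2, 3)) > 1e-3     # ordinary strings: non-zero energy
for n in (2, 3, 4, 5):                          # complete p = n strings: zero energy
    assert abs(string_energy(0.6, n, n)) < 1e-10
```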
We will also introduce a similar notation, $\{\tilde{k}\}_{n}$, for the associated (shifted) pseudomomenta, which are related pairwise through
\begin{equation}
1 +2 \cos \frac{\pi}{n} e^{i \tilde{k}_{j+1}} + e^{i (\tilde{k}_j+\tilde{k}_{j+1})} = 0 \,,\qquad j=1, \ldots, n\,,
\label{exactstringk}
\end{equation}
where it is understood that $\tilde{k}_{n+1}=\tilde{k}_{1}$. In the following, we will also occasionally use the terminology {\it ordinary roots} to denote Bethe roots which are not part of an exact $n$-string.
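One can check numerically that feeding the roots \eqref{exactstrings} into the map \eqref{klambda} indeed produces pseudomomenta satisfying the cyclic relations \eqref{exactstringk}; a short Python sketch of ours:

```python
import cmath
import math

def string_momenta(mu, n):
    # e^{i k~_j} for the exact n-string (exactstrings), via the map (klambda)
    g = math.pi / n
    a = 1j * math.pi * (n - 1) / (2 * n)
    lams = [mu + 1j * math.pi / 2 + 1j * g * j for j in range(n)]
    return [cmath.sinh(l + a) / cmath.sinh(l - a) for l in lams]

for n in (3, 4, 5):
    ek = string_momenta(0.5, n)                 # arbitrary real string center
    c = math.cos(math.pi / n)
    for j in range(n):                          # cyclic relations (exactstringk)
        assert abs(1 + 2 * c * ek[(j + 1) % n] + ek[j] * ek[(j + 1) % n]) < 1e-9
```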
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=3.5]
\draw[black] (-1,0) -- (1,0);
\draw[black] (0,-0.6) -- (0,0.8);
\draw[black,dashed] (-1,0.75) node[left] {$\frac{\pi}{2}$} -- (1,0.75);
\draw[black,dotted] (-1,0.25) node[left] {$\frac{\pi}{6}$} -- (1,0.25);
\draw[black,dotted] (-1,-0.25) node[left] {$- \frac{\pi}{6}$} -- (1,-0.25);
\foreach \x in {(-0.679,0.457-0.75),(-0.2946,0.486-0.75),(-0.084,0.4899-0.75),(0.084,0.4899-0.75),(0.2946,0.486-0.75),(0.679,0.457-0.75),
(-0.679,-0.457+0.75),(-0.2946,-0.486+0.75),(-0.084,-0.4899+0.75),(0.084,-0.4899+0.75),(0.2946,-0.486+0.75),(0.679,-0.457+0.75)}
{ \draw[fill=black] \x circle (0.03);
}
\foreach \x in {(-0.8,0),(-0.9,0)}
{ \draw[fill=black] \x circle (0.03);
}
\foreach \x in {(0.2,0.75),(0.5,0.75)}
{ \draw[fill=black] \x circle (0.03);
}
\foreach \x in {(0.42,-0.25),(0.42,0.75),(0.42,0.25)}
{ \draw[red, fill=red] \x circle (0.03);
}
\end{tikzpicture}
\end{center}
\caption{
Example of configuration of Bethe roots for $n=3$. Real roots, ``antistrings'' (of imaginary part $\pi/2$) and 2-strings are shown in black, while an exact 3-string is shown in red.
}
\label{fig:strings}
\end{figure}
In order to understand why exact $n$-strings are special, let us look at their effect on the Bethe equations. Starting from a configuration of roots $\{\lambda_j\}$ that solves \eqref{BAEl}, consider adding to the latter an exact $n$-string of the form \eqref{exactstrings}. The right-hand side of the BAE \eqref{BAEl} for the original roots acquires an additional factor
\begin{equation}
\frac{ \sinh\left( \lambda_j - \mu - i \frac{\pi}{2}- i \frac{\pi}{n}\right) }
{ \sinh\left( \lambda_j - \mu - i \frac{\pi}{2}+ i\frac{\pi}{n} \right) }
\frac{ \sinh\left( \lambda_j - \mu- i \frac{\pi}{2} - 2i \frac{\pi}{n}\right) }
{ \sinh\left( \lambda_j - \mu - i \frac{\pi}{2} \right) }
\ldots
\frac{ \sinh\left( \lambda_j - \mu- i \frac{\pi}{2} - i \pi\right) }
{ \sinh\left( \lambda_j - \mu - i \frac{\pi}{2}-(n-2) i\frac{\pi}{n} \right) }
=1 \,,
\end{equation}
that is, in terms of the associated pseudomomenta,
\begin{equation}
\prod_{\tilde{k}' \in \{\tilde{k}' \}_n} S(\tilde{k}_j,\tilde{k}') = 1 \,,
\label{stringprodS1}
\end{equation}
so the BAE remain satisfied in the presence of the exact $n$-string.
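The telescoping property \eqref{stringprodS1} is easily confirmed numerically in the $\lambda$-parametrization, where the scattering factor of \eqref{BAEl} is $\sinh(\lambda-\lambda'-i\gamma)/\sinh(\lambda-\lambda'+i\gamma)$; a minimal Python sketch of ours:

```python
import cmath
import math

def scat(l1, l2, g):
    # scattering factor between Bethe roots, as on the right-hand side of (BAEl)
    return cmath.sinh(l1 - l2 - 1j * g) / cmath.sinh(l1 - l2 + 1j * g)

n = 4
g = math.pi / n
mu = 0.83                                                        # arbitrary string center
string = [mu + 1j * math.pi / 2 + 1j * g * j for j in range(n)]  # exact n-string roots
lam = -0.29                                                      # arbitrary ordinary root
prod = 1
for lp in string:
    prod *= scat(lam, lp, g)
# the product over a full exact n-string telescopes to 1
assert abs(prod - 1) < 1e-10
```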
Let us turn to the wavefunction, letting $k_1, \ldots k_M$ be the pseudomomenta associated with the original roots $\{\lambda_j\}$, and $\{k_{M+1}\}_n = \{k_{M+1}, \ldots k_{M+n}\}$ those associated with the exact $n$-string.
The wavefunction associated with $k_1, \ldots k_M$ is given by \eqref{psiM}, where all of the coefficients $A_{\mathcal{P}}$ are fixed by \eqref{CBAM}, up to a global rescaling.
Upon adding the exact $n$-string, the wavefunction becomes
\begin{equation}
| k_1,\ldots k_{M+n} \rangle = \sum_{j_1 \leq \ldots \leq j_{M+n}} \left( \sum_{{\cal P} \in \mathfrak{S}_{M +n}} A'_{\cal P} e^{i (k_{{\cal P}_1} j_1+ \ldots+ k_{{\cal P}_{M+n}} j_{M+n})}
\right)
|1_{j_1} \ldots 1_{j_{M+n}} \rangle \,.
\label{psiMn}
\end{equation}
Up to a global rescaling, we can choose
\begin{equation}
A'_{12,\ldots M,M+n, \ldots M+1} = A_{12,\ldots M}
\,,
\end{equation}
which fixes all of the $(M+n)!$ coefficients $A'_{\mathcal{P}}$ through successive applications of \eqref{CBAM} and of the periodicity condition \eqref{CBAperiodicity}.
Let us look in particular at what happens when two momenta within the exact string are permuted, say $k_{M+1}$ and $k_{M+2}$. As a consequence of \eqref{exactstringk}, the coefficients must obey
\begin{equation}
0 \times A'_{12,\ldots M,M+n, \ldots M+2,M+1}+
\left(1 +2 \cos \frac{\pi}{n} e^{i \tilde{k}_{M+1}} + e^{i (\tilde{k}_{M+1}+\tilde{k}_{M+2})} \right)
A'_{12,\ldots M,M+n, \ldots M+1,M+2} = 0\,,
\end{equation}
which imposes $ A'_{12,\ldots M,M+n, \ldots M+1,M+2}=0$.
The remaining nonzero coefficients are all obtained from $A'_{12,\ldots M,M+n, \ldots M+1}$ through permutations made of transpositions within the set of original momenta $k_1, \ldots k_M$ or between the latter and the exact string momenta, but excluding transpositions within the exact string itself.
The resulting wavefunction, if non-vanishing, is an eigenstate of the Hamiltonian $H_n$ with the same energy as the original $M$-particle state, since the periodicity requirement \eqref{CBAperiodicity} (with $M$ replaced by $M+n$) yields no further constraint than the original Bethe equations obeyed by the momenta $k_1, \ldots k_M$. Moreover, since inserting an exact $n$-string adds $n$ more particles, the state with the exact $n$-string has $U(1)$ charge increased by $n$ relative to the corresponding state without it. The exact $n$-strings therefore give degeneracies of exactly the same sort as the Onsager generators do.
While the $M$ original equations are left unaffected by the addition of the exact $n$-string, the $n$ additional equations whose left-hand side involves the exact $n$-string itself are ill-defined and do not apply. The center of the exact $n$-string is thus not constrained by the Bethe ansatz. The existence of exact $n$-string solutions can be inferred from the Bethe equations \eqref{BAEl}, as such string solutions make both numerator and denominator vanish. For this reason, exact $n$-strings are sometimes called ``$0/0$'' solutions \cite{FabriciusMcCoy}.\footnote{The original motivation of \cite{FabriciusMcCoy} for studying such solutions was to investigate the completeness of the Bethe ansatz, as it was put forward that the Bethe equations fail to uniquely determine states with exact strings. As later argued in \cite{Baxtercompleteness}, this is a mere consequence of the many possible choices of basis vectors within degenerate eigenspaces, and the Bethe ansatz is in fact complete in the sense that it furnishes a complete basis for these degenerate spaces.}
Multiple exact $n$-strings can be added to a given configuration of ordinary roots. The study of the resulting Bethe wavefunction proceeds as above. All of the non-zero coefficients are obtained from a reference one through transpositions within the original roots $k_1, \ldots k_M$, between the latter and momenta of the exact strings, or between momenta in two different exact strings, but not within the exact strings themselves. The set of equations obtained from the periodicity of the wavefunction once again amounts to the original set of equations \eqref{BAEk} for the momenta $k_1, \ldots k_M$, and no constraint other than the exclusion principle advocated in the previous section is imposed on the location of the exact $n$-strings.
It follows from this discussion that exact $n$-strings can be arbitrarily added to a Bethe eigenstate to form new eigenstates. Besides the exclusion principle, exact $n$-strings do not influence one another, nor do they affect the quantization equations for the ordinary roots. They can therefore be used to construct degenerate eigenstates in sectors of charges differing by multiples of $n$, which indeed reproduces the structure observed in section \ref{sec:degeneracies}.
\subsection{Quantizing the exact \texorpdfstring{$n$}{n}-strings: the example of \texorpdfstring{$n=3$}{n=3} }
The presence of exact $n$-string solutions of the Bethe equations results in degeneracies between states of charge differing by multiples of $n$. However, nothing in the above construction fixes the number of linearly independent choices of exact strings in a given sector, nor the maximal number of strings that can be added on top of a given eigenstate. In other words, the exact $n$-strings are not quantized by the Bethe equations.
One way of attacking this problem is to utilise
the chiral decomposition \eqref{decomposition}.
Recalling section \ref{sec:chiraldecomposition}, the Hamiltonians $H_{\rm R}$ and $H_{\rm L}$ commute with $H_n$, so they share the same eigenspaces. However, they lift the degeneracies observed in $H_n$.
We therefore expect that constructing the coordinate Bethe ansatz for $H_{\rm R}$ or $H_{\rm L}$ individually should impose some kind of quantization condition on the exact strings.
In this section we will sketch this procedure, quickly specializing to $n=3$ for the sake of simplicity. In section \ref{sec:TQ}, we will describe an alternative derivation valid for general $n$.
The CBA construction for the Hamiltonians $H_{\rm R}$ or $H_{\rm L}$ goes very similarly to that for the full Hamiltonian $H_{\rm R}+H_{\rm L}$.\footnote{For $n=3$ the construction has been presented in \cite{Ragoucy1,Ragoucy2} for a general family of Hamiltonians including $H_{\rm R}$ and $H_{\rm L}$; however, the exact $n$-strings were not considered there.}
The one-particle energies are now given by
\begin{equation}
\epsilon_{\rm L}(\tilde{k}) = \frac{n}{2 \sin \frac{\pi}{n}} \left(e^{i \tilde{k}} + \cos \frac{\pi}{n} \right) \,,\qquad
\epsilon_{\rm R}(\tilde{k}) = \frac{n}{2 \sin \frac{\pi}{n}} \left(e^{-i \tilde{k}} + \cos \frac{\pi}{n} \right)
\label{epsilonkLR}
\,,
\end{equation}
so that the sum $\epsilon_{\rm R}+\epsilon_{\rm L}$ recovers the energy \eqref{epsilonk} of the full Hamiltonian.
For generic sets of pseudomomenta, we recover in the same way as in section \ref{sec:CBA} the equations \eqref{BAEk}, \eqref{BAEl}.
In terms of the parameters $\{\lambda_j\}$ the energies read
\begin{equation}
E_{\rm L} = \sum_{j} \frac{n}{2} \tan \left( i \lambda_j - \frac{\pi}{2n} \right) \,,
\qquad
E_{\rm R} = \sum_j \frac{n}{2} \tan \left(- i \lambda_j - \frac{\pi}{2n} \right) \,.
\label{ELR}
\end{equation}
From \eqref{ELR}, we verify that exact $n$-strings indeed come with nonzero (but opposite) left and right energies.
In order to understand how these are quantized by $H_{\rm R, L}$, we will now specialize to $n=3$ and consider the first few-particle states.
\paragraph{One exact string on top of the pseudovacuum}
The 3-particle states are written as
\begin{equation}
|k_1, k_2, k_3\rangle = \sum_{j_1\leq j_2 \leq j_3}\sum_{\mathcal{P} \in \mathfrak{S}_3} A_\mathcal{P} e^{i (k_{\mathcal{P}_1} j_1 + k_{\mathcal{P}_2} j_2 + k_{\mathcal{P}_3} j_3)} |1_{j_1} 1_{j_2} 1_{j_3}\rangle \,.
\end{equation}
We consider the case where the momenta $k_1, k_2, k_3$ form an exact 3-string, namely they are related through equations \eqref{exactstringk} which read in this case
\begin{equation}
1+ e^{i \tilde{k}_2} + e^{i (\tilde{k}_1+\tilde{k}_2)} =0 \,,\qquad
1+ e^{i \tilde{k}_3} + e^{i (\tilde{k}_2+\tilde{k}_3)} =0 \,,\qquad
1+ e^{i \tilde{k}_1} + e^{i (\tilde{k}_3+\tilde{k}_1)} =0 \,.
\label{3stringk}
\end{equation}
As described in section \ref{sec:exactstrings}, the coefficients $A_\mathcal{P}$ are related pairwise through the scattering factors between the $k_j$, which are either zero or infinite. Three of them vanish, $A_{123} = A_{231}= A_{312} =0$, while the others are related by carrying particles around the system, i.e.\ $A_{abc} = A_{cab} e^{i L k_c}$.
Taking the component of the equation $H_{\rm L} |k_1, k_2, k_3\rangle = E_{\rm L} |k_1, k_2, k_3\rangle$ on the state $|j,j+1,j+1 \rangle$, we obtain
\begin{equation}
\sum_{\mathcal{P} \in \mathfrak{S}_3} A_{\mathcal{P}} e^{i \tilde{k}_{\mathcal{P}_2}}e^{i \tilde{k}_{\mathcal{P}_3}} (1 +
e^{i (\tilde{k}_{\mathcal{P}_2} + \tilde{k}_{\mathcal{P}_3})}
+e^{i \tilde{k}_{\mathcal{P}_1}}+e^{i \tilde{k}_{\mathcal{P}_2}}
)
= 0 \,,
\label{CBA3}
\end{equation}
which, restricting to the non-zero terms and using \eqref{3stringk}, becomes
\begin{equation}
\sum_{\mathcal{P} \in \{ 321, 213, 132 \} } A_{\mathcal{P}} e^{i \tilde{k}_{\mathcal{P}_2}}e^{i \tilde{k}_{\mathcal{P}_3}} e^{i \tilde{k}_{\mathcal{P}_1}} = e^{i (\tilde{k}_1 + \tilde{k}_2 + \tilde{k}_3)} \left(A_{321} + A_{213} + A_{132} \right) =0 \,,
\end{equation}
which we can rewrite as
\begin{equation}
e^{i L k_1} + e^{i L (k_1+k_2)} + e^{i L (k_1+k_2+k_3)} = 0 \,.
\label{quantization3}
\end{equation}
We obtain the analogous equations from cyclic permutations of $k_1, k_2, k_3$, but these are equivalent. Combining them with \eqref{3stringk} yields an
equation for any one momentum,
\begin{equation}
1 + \left(\frac{e^{i \frac{\pi}{n}} }{1 + e^{i \tilde{k}_i}} \right)^L
+
\left(e^{i \frac{2\pi}{n}} e^{-i \tilde{k}_i} \right)^L
= 0 \,,
\end{equation}
that is
\begin{equation}
2 \cos \left(\frac{L \tilde{k}_i}{2} - \frac{L \pi}{n} \right)
\left(2 \cos \frac{\tilde{k}_i}{2} \right)^L = -1
\,.
\end{equation}
This equation indeed provides a quantization for the center of the exact $3$-string.
Working with $H_{\rm R}$ instead of $H_{\rm L}$, one would get the same equation multiplied by an overall minus sign, hence effectively the same quantization. In contrast, working with $H_{\rm R}+H_{\rm L}$ would result in an equation of the type $0=0$, and hence no quantization.
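The equivalence between the two forms of the quantization condition above can be verified numerically: they differ by the overall non-vanishing factor $(2\cos\frac{\tilde k_i}{2})^L e^{iL(\tilde k_i/2 - \pi/n)}$. A small Python sketch of ours:

```python
import cmath
import math

n, L = 3, 8
k = 0.437                                   # arbitrary real test value of k~_i

# exponential form: 1 + (e^{i pi/n}/(1+e^{i k}))^L + (e^{2 i pi/n} e^{-i k})^L
A = (1 + (cmath.exp(1j * math.pi / n) / (1 + cmath.exp(1j * k))) ** L
       + (cmath.exp(2j * math.pi / n) * cmath.exp(-1j * k)) ** L)
# trigonometric form: 2 cos(L k/2 - L pi/n) (2 cos(k/2))^L = -1, written as B = 0
B = 2 * math.cos(L * k / 2 - L * math.pi / n) * (2 * math.cos(k / 2)) ** L + 1
# the two differ by a non-vanishing overall factor
factor = (2 * math.cos(k / 2)) ** L * cmath.exp(1j * L * (k / 2 - math.pi / n))
assert abs(A * factor - B) < 1e-9
```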
\paragraph{One exact string + one particle}
We move on to four-particle states of the form
\begin{equation}
|k', k_1, k_2, k_3\rangle = \sum_{j_0\leq j_1\leq j_2 \leq j_3}\sum_\mathcal{P} A_\mathcal{P} e^{i (k_{\mathcal{P}_0} j_0 + k_{\mathcal{P}_1} j_1 + k_{\mathcal{P}_2} j_2 + k_{\mathcal{P}_3} j_3)} |1_{j_0} 1_{j_1} 1_{j_2} 1_{j_3}\rangle \,,
\end{equation}
where $k_1, k_2, k_3$ form an exact string as in the previous paragraph, while $k' \equiv k_0 \in 2\pi \mathbb{Z}/L$ is a solution of the single-particle quantization condition.
Taking the component of the equation $H_{\rm L} |k', k_1, k_2, k_3\rangle = E_{\rm L} |k', k_1, k_2, k_3\rangle$ on the state $|i,j,j+1,j+1 \rangle$ with $i$ and $j,j+1$ far apart, we get
\begin{equation}
\sum_{\mathcal{P} \in \mathfrak{S}_4} A_{\mathcal{P}}e^{i k_{\mathcal{P}_2}}e^{i k_{\mathcal{P}_3}} (
e^{i k_{\mathcal{P}_0}}+e^{i k_{\mathcal{P}_1}}+e^{i k_{\mathcal{P}_2}}+e^{i (k_{\mathcal{P}_2}+k_{\mathcal{P}_3})}+1)
= 0\,.
\end{equation}
Once again the coefficients $A_{\mathcal{P}}$ vanish for 12 of the 24 permutations, and the remaining 12 are all related to one another through \eqref{CBAM} and \eqref{CBAperiodicity}.
Performing manipulations similar to those of the previous paragraph, we arrive at
\begin{equation}
e^{i L k_1} S(\tilde{k}',\tilde{k}_1) + e^{i L (k_1+k_2)} S(\tilde{k}',\tilde{k}_1)S(\tilde{k}',\tilde{k}_2) + e^{i L (k_1+k_2+k_3)} = 0 \,,
\label{quantization4}
\end{equation}
which once again yields a quantization of the exact string.
\paragraph{One exact string + $M$ particles}
From \eqref{quantization3}, \eqref{quantization4}, we can conjecture that the quantization equation for an exact 3-string on top of a general background of $M$ other particles $\{k'_j\}$ should read
\begin{equation}
e^{i L k_1}
\prod_{j=1}^M S(\tilde{k}'_j,\tilde{k}_1)
+ e^{i L (k_1+k_2)} \prod_{j=1}^M S(\tilde{k}'_j,\tilde{k}_1) S(\tilde{k}'_j,\tilde{k}_2) + e^{i L (k_1+k_2+k_3)}
=0 \,.
\label{quantization5}
\end{equation}
In section \ref{sec:TQ} we will recover this formula (and generalize it to other values of $n$) through another approach utilising the transfer matrix.
A particularly remarkable feature is that the quantization of a given exact $n$-string is affected by other particles, but, due to \eqref{stringprodS1}, not by the presence of other exact $n$-strings. In other words, exact $n$-strings do not interact with one another.
We close this discussion by mentioning another proposal for quantizing the exact strings, which uses a limiting procedure in the anisotropy parameter $\gamma$ \cite{FabriciusMcCoy2}. The two quantizations fundamentally differ in that, while ours should have no relation with the eigenstates at neighbouring values of $\gamma$, that of \cite{FabriciusMcCoy2} should have no relation with the chiral structure $H_{\rm R}, H_{\rm L}$, nor with the underlying Onsager algebra. The two schemes give different results, but we stress that there is no reason why these should coincide: as far as the Hamiltonians $H_n = H_{\rm R} + H_{\rm L}$ (or, equivalently, $\widetilde{H}_n$) are concerned, any quantization of the exact strings gives an equally legitimate eigenstate.
\subsection{The quantization equation of exact \texorpdfstring{$n$}{n}-strings, and its solutions}
\label{sec:quantnstrings}
It is quite natural to expect, as will be recovered in section \ref{sec:TQ} through another approach, that equation \eqref{quantization5} should extend to generic values of $n$ as
\begin{equation}
\sum_{m=1}^{n} ~
\prod_{1 \leq j \leq m}
\left(
e^{i L k_j}
\prod_{p=1}^M S(\tilde{k}'_p, \tilde{k}_j)
\right)
=0
\,,
\label{quantizationTQk}
\end{equation}
where $\{k_1\}_n = \{k_1, \ldots k_n\}$ denote the pseudomomenta within an exact $n$-string, while $\{k'_p\}_{p=1,\ldots M}$ are the remaining particles on top of which the exact $n$-string is quantized.
Using the notation \eqref{exactstrings} for the exact $n$-string and denoting by $\{\lambda_k \}_{k=1,\ldots M}$ the Bethe roots associated with the exterior particles, we can rewrite \eqref{quantizationTQk} as
\begin{equation}
\sum_{m=0}^{n-1} ~
\prod_{1 \leq j \leq m}
\left[
\left(e^{i \frac{\pi(n+1)}{n}}
\frac{\sinh\left(\mu + i\frac{\pi}{2} + i j \gamma + i S \gamma \right)}{\sinh\left(\mu + i\frac{\pi}{2} + i j \gamma - i S \gamma \right)} \right)^L
\prod_{k=1}^M
\frac{\sinh\left(\mu + i\frac{\pi}{2} + i j \gamma - \lambda_k-i \gamma \right)}{\sinh\left(\mu + i\frac{\pi}{2} + i j \gamma - \lambda_k+ i \gamma \right)}
\right]
=0
\label{quantizationTQl}
\,,
\end{equation}
or alternatively
\[
\sum_{m=0}^{n-1} ~
\left(
\frac{e^{i m \frac{\pi}{n}}\sinh\left(\mu + i\frac{\gamma}{2} \right)}{\sinh\left(\mu + i\frac{\gamma}{2} + i m \gamma \right)} \right)^L
\prod_{k=1}^M
\frac{\sinh\left(\mu + i\frac{\pi}{2} - \lambda_k \right)\sinh\left(\mu + i\frac{\pi}{2} - \lambda_k +i \gamma \right)}{\sinh\left(\mu + i\frac{\pi}{2} + i m \gamma - \lambda_k \right)\sinh\left(\mu + i\frac{\pi}{2} + i (m+1)\gamma - \lambda_k \right)}
=0
\,.
\]
These equations are what we denote as the quantization equations for exact $n$-strings. Since their form is unaffected by the presence of other exact $n$-strings within the set of exterior Bethe roots $\{\lambda_k\}_{k=1, \ldots M}$, we shall assume in the following that $\{\lambda_k\}_{k=1, \ldots M}$ are all ordinary roots, namely contain no exact $n$-string.
Let us start with the case where there are no background roots, namely $M=0$. The quantization equation \eqref{quantizationTQl} can be rewritten as a polynomial equation of degree $L(n-1)$ in the variable $e^{2 \mu}$, which has therefore $L(n-1)$ zeroes.
We can check numerically that $e^{2\mu}=0$, that is $\mu = -\infty$, is a zero of \eqref{quantizationTQl} with multiplicity $m_{-\infty}=n-\left( L - n\left\lfloor \frac{L-1}{n} \right\rfloor \right)$.
Such zeroes do not correspond to solutions for exact $n$-strings: since they sit at $\mu \to -\infty$, all $n$ roots of an exact string built out of them would be indistinguishable from one another, and as a result of the exclusion principle between Bethe roots the associated wavefunction would vanish.
We therefore focus on the remaining finite zeroes, whose number is a multiple of $n$, namely
\begin{equation}
(n-1)L - m_{- \infty} = n\left( L - \left\lfloor(L-1)/ n \right\rfloor -1 \right) \,.
\end{equation}
Furthermore, we can check from \eqref{quantizationTQl} that if $\mu$ is a solution, then $\mu+i\gamma$ is a solution. Therefore, the finite zeroes of \eqref{quantizationTQl} form a set of $\left( L - \left\lfloor \frac{L-1}{n} \right\rfloor -1 \right)$ distinct ``exact $n$-strings'', which we parametrize by the set of their centers ${\cal S}$ as
\begin{equation}
\{ \mu_k, \mu_k+i\gamma, \ldots \mu_k + i (n-1)\gamma \,|\, \mu_k \in {\cal S} \} \,.
\label{eq:solutionS}
\end{equation}
Note that ${\cal S}$ is defined up to permutations within each of the exact strings. However, we can check numerically that the solutions of \eqref{quantizationTQl} all have $\rm{Im}\, \mu \in \gamma \mathbb{Z}$, so by convention we can define ${\cal S}$ as the set of all real centers.
For illustration, we show in Figure \ref{fig:solutions} the associated pseudomomenta $\tilde{k}$ for $n=3$ and $L=8$, as well as the associated energy $i (\epsilon_{\rm R} - \epsilon_{\rm L})_{\rm s}$ for the maximally chiral Hamiltonian, eq.\ \eqref{estring} (see next section).
A striking feature of these solutions is their proximity to the values corresponding to $k \in \pi(2\mathbb{Z}+1)/L$, reminiscent of a free-fermionic problem such as the one treated in section \ref{sec:n2}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{stringsolutions.pdf}
\end{center}
\caption{
Solutions of the exact $n$-strings quantization equation in absence of other roots for $n=3$, $L=8$. The solutions are represented as red dots, and we have indicated in comparison the values $k \in \pi(2\mathbb{Z}+1)/L$. The blue curve represents the associated energy for the maximally chiral Hamiltonian $i (H_{\rm R}-H_{\rm L})$, whose expression in terms of the associated string centers $\mu$ is given by \eqref{estring}.}
\label{fig:solutions}
\end{figure}
With a background of $M$ other particles, the quantization equation can now be rewritten as a polynomial equation of degree $(L+M)(n-1)$ in the variable $e^{2 \mu}$. In fact, the degree is reduced when \eqref{quantizationTQl} has zeroes as $\mu \to \infty$. We checked numerically that the multiplicity of these zeroes is given by $m_{\infty} = n-\left( M - n\left\lfloor \frac{M-1}{n} \right\rfloor \right)$, which results in reducing the degree of the polynomial equation to $(L+M)(n-1)- m_{\infty}$.
We now turn to the zeroes at $\mu \to -\infty$, that is $e^{2 \mu} = 0$. By numerical inspection, we see that their multiplicity is $m_{-\infty}=n-\left( L+M - n\left\lfloor \frac{L+M-1}{n} \right\rfloor \right) $.
Out of the remaining finite zeroes, we checked that for each of the $M$ roots $\lambda_k$ there is a zero at $\lambda_k$, as well as $n$ zeroes corresponding to the exact $n$-string built from $\lambda_k$. Once again, the exclusion principle implies that none of these should be considered as solutions for the exact $n$-strings.
Putting everything together, the number of remaining zeroes is then simply
\[
(L+M)(n-1) - m_{\infty}-m_{-\infty}-(n+1)M =
n \left( L - \left \lfloor \frac{M-1}{n} \right\rfloor
- \left \lfloor \frac{L+M-1}{n} \right\rfloor
-2 \right) \equiv nm_{\cal S}\,.
\]
Once again this is a multiple of $n$, giving $m_{\cal S}$ distinct exact $n$-strings of the form \eqref{eq:solutionS}. In contrast with the case of no exterior particles, however, we observe that the imaginary parts are not always of the form $\rm{Im}\, \mu \in \gamma \mathbb{Z}$, so the associated centers cannot always be chosen real.
Let us now fix some notation for what follows. For a given eigenstate, we have defined ${\cal S}$ as the set of solutions of the string quantization equation. This equation is unchanged if the considered eigenstate contained exact $n$-strings in the first place, and we therefore define
\begin{equation}
{\cal S} = \rm{s} ~\cup ~ \bar{\rm s} \,,
\label{Ssbar}
\end{equation}
where $\rm{s}$ and $\bar{\rm s}$ denote the sets of occupied and vacant solutions, respectively.
Looking back at the structure of degeneracies detailed in section \ref{sec:degeneracies}, it is now clear where the binomials of equations \eqref{binomials}, \eqref{binomials2} come from. For a given ``highest-weight'' state corresponding to a certain configuration of Bethe roots, the exact-string quantization equation gives rise to $m_{\cal S}$ solutions. There will therefore be $2^{m_{\cal S}}$ degenerate states, corresponding to the possibilities of having each of these solutions occupied or empty. Moreover, within a sector of fixed charge, that is, with a fixed number $k$ of exact strings, there will be as many degenerate states as ways to choose $k$ of the $m_{\cal S}$ solutions to be occupied.
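This counting amounts to the binomial identity $\sum_{k=0}^{m_{\cal S}} \binom{m_{\cal S}}{k} = 2^{m_{\cal S}}$; as a trivial illustration in Python (the value of $m_{\cal S}$ below is arbitrary):

```python
from math import comb

m_S = 5   # illustrative number of exact-string slots for some eigenstate
# occupying k of the m_S slots gives C(m_S, k) degenerate states with charge
# shifted by k*n; summing over k recovers the full 2^{m_S} degeneracy
assert sum(comb(m_S, k) for k in range(m_S + 1)) == 2 ** m_S
```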
In particular, the ground states of the Hamiltonians $\pm H_{n}$, which are associated with $M=S L$ Bethe roots, have $m_{\cal S}=0$ and are therefore non-degenerate.
\subsection{The ground states of the chiral Hamiltonians \texorpdfstring{$H(\alpha)$}{} }
\label{sec:alphafamily}
Armed with this understanding of the Bethe-ansatz structure, we can return to the family $e^{i \alpha} H_{\rm R} + e^{-i \alpha} H_{\rm L}$, and explore the low-energy physics as $\alpha$ is varied from $0$ to $\pi$.
The physics of the Hamiltonians $\pm H_n$ has previously been explored in their incarnations as higher-spin XXZ chains \cite{XXZSSogo,XXZSBabu,XXZSKiri, XXZSH, XXZSCFT,XXZSCFTFrahm,XXZSCFTDFZ}.
The antiferromagnetic Hamiltonian $H_n$ has a ground state known to be described by a sea of $L/2$ (non-exact) $2S=(n-1)$-strings. The low-lying excitations correspond to making holes close to the edges of this sea, or to creating a finite number of other types of strings. For real values of the anisotropy $\gamma$ these are found to be gapless, and described by a conformal field theory (CFT) of central charge
\begin{equation}
c= \frac{3S}{S+1}=\frac{3(n-1)}{n+1} \,.
\label{cantiferro}
\end{equation}
Aspects of this CFT for the $n=3$ model will be studied in much more detail in \cite{Phasediagram}.
On the ferromagnetic side, corresponding to $H(\pi)=-H_n$, it is known for $S=1/2$ \cite{XXZ} and $S=1$ \cite{Baranowski} that the ground state is described by a sea of $L S$ antistrings.
It is then natural to expect that the same antistring-sea ground state holds for all $S$, as can be seen by computing the energies associated with the various configurations of strings. Indeed, sending $\lambda\to\lambda+i\pi/2$ changes the sign of \eqref{Epstring}, and it is easy to check for the ferromagnet that the resulting energies are positive for all $p$-strings, and negative for antistrings. The ground state is therefore obtained by filling in the maximal number of the latter, namely $L S$. Studying the low-lying excitations and extracting the scaling of the corresponding energies is a standard Bethe-ansatz calculation which we will not pursue here \cite{Korepinbook}; however, it is quite clear from there that the $c=1$ CFT description should hold for all $S$ in the regime where $\gamma$ is real. This is corroborated by a numerical study of the $n=4$ model, which indeed recovers $c=1$.
\footnote{
This value of the central charge, as well as \eqref{cantiferro} on the antiferromagnetic side, holds for the periodic XXZ chains.
The additional boundary twists in $\pm \widetilde{H}_n$ can be interpreted in the CFT language as charges at infinity and result in a lowering of the central charge \cite{XXZ,XXZSCFT}. Note however that for $L\in 2n \mathbb{Z}$ the effect of the twist disappears, and the unscreened central charges are recovered.}
We now turn to the ``maximally chiral'' Hamiltonian $i (H_{\rm R}-H_{\rm L})$. Recalling \eqref{epsilonkLR}, the single-particle energy in terms of $\tilde{k}$ is
\begin{equation}
i(\epsilon_{\rm R}-\epsilon_{\rm L})(\tilde{k}) = \frac{n \sin \tilde{k}}{\sin \frac{\pi}{n}} = n \tanh \lambda\,.
\end{equation}
The energy of an exact $n$-string, obtained by summing the single-particle energies for all $n$ roots, has a particularly simple form in terms of the string center $\mu$:
\begin{equation}
i(\epsilon_{\rm R}-\epsilon_{\rm L})_{\rm s}=
n^2 \tanh n\mu \,.
\label{estring}
\end{equation}
Identifying the Bethe roots associated with the eigenstates of interest by comparing the corresponding energies, we observe that the ground state in that case is composed solely of exact $n$-strings.
As we have seen in section \ref{sec:quantnstrings}, there are in this case $m_{\cal S} = L - \left\lfloor \frac{L-1}{n} \right\rfloor -1 $ solutions of the exact $n$-string quantization equations. Those corresponding to $\tilde{k}<0$ (resp. $\tilde{k}>0$) bring a negative (resp. positive) contribution to the energy, see figure \ref{fig:solutions}.
As with free fermions, the ground state of $i(H_{\rm R} - H_{\rm L})$ is therefore obtained by filling all the negative energy solutions. The corresponding configuration of exact strings is represented for $n=3$, $L=6$ on the middle diagram of figure \ref{fig:rootsalpha}.
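For the example displayed in the middle panel of figure \ref{fig:rootsalpha} ($n=3$, $L=6$), this counting can be spelled out explicitly (a sketch using the formula quoted above):

```python
# Exact n-string solution count for a state built purely of exact
# strings (formula quoted above), illustrated on the n=3, L=6 example
# of figure fig:rootsalpha.

def m_S_pure_strings(L, n):
    return L - (L - 1) // n - 1

n, L = 3, 6
assert m_S_pure_strings(L, n) == 4   # four available solutions
# M = L*S = L*(n-1)/2 Bethe roots arranged into M/n exact n-strings:
M = L * (n - 1) // 2
assert M // n == 2   # the ground state occupies 2 of the 4 solutions
```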
%
%
As we have described in section \ref{sec:chiralnumerics}, varying $\alpha$ causes the ground state to undergo a series of crossings (see figure \ref{fig:crossings0}). In the language of the Bethe ansatz, increasing $\alpha$ from $0$ to $\pi/2$ corresponds to progressively emptying the sea of (approximate) $(n-1)$-strings and filling the sea of exact $n$-strings, with some occasional marginal extra roots ensuring that the total number of roots remains the same.
Similarly, moving from $\alpha = \frac{\pi}{2}$ to $\alpha = \pi$, the crossings result from the emptying of the sea of exact $n$-strings and the filling of the sea of antistrings. This is illustrated in figure \ref{fig:rootsalpha}, where we display the Bethe roots associated with the successive ground states for $n=3$, $L=6$.
It is clear from this mechanism that the number of crossings increases linearly with $L$, and can be expected to become dense throughout the interval $\alpha \in [0,\pi]$ as $L \to \infty$.
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[scale=2.2]
\draw[black] (-0.5,0) -- (0.5,0);
\draw[black] (0,-0.3) -- (0,0.8);
\draw[black,dashed] (-0.5,0.75) node[left] {$\frac{\pi}{2}$} -- (0.5,0.75);
\draw[black,dotted] (-0.5,0.25) node[left] {$\frac{\pi}{6}$} -- (0.5,0.25);
\draw[black,dotted] (-0.5,-0.25) node[left] {$- \frac{\pi}{6}$} -- (0.5,-0.25);
\foreach \x in {(0.211037, 0.27211), (-0.211037, 0.27211), (0.,
0.260135), (0.211037, -0.27211), (-0.211037, -0.27211), (0.,
-0.260135)}
{ \draw[red,fill=red] \x circle (0.03);
}
\begin{scope}[shift={(1.5,0)}]
\draw[black] (-0.5,0) -- (0.5,0);
\draw[black] (0,-0.3) -- (0,0.8);
\draw[black,dashed] (-0.5,0.75)
-- (0.5,0.75);
\draw[black,dotted] (-0.5,0.25)
-- (0.5,0.25);
\draw[black,dotted] (-0.5,-0.25)
-- (0.5,-0.25);
\foreach \x in {(0, 0), (0, -0.242476), (0, 0.242476), (-0.226109, 0.75), (-0.226109,0.25), (-0.226109, -0.25)}
{ \draw[red,fill=red] \x circle (0.03);
}
\end{scope}
\begin{scope}[shift={(3,0)}]
\draw[black] (-0.5,0) -- (0.5,0);
\draw[black] (0,-0.3) -- (0,0.8);
\draw[black,dashed] (-0.5,0.75)
-- (0.5,0.75);
\draw[black,dotted] (-0.5,0.25)
-- (0.5,0.25);
\draw[black,dotted] (-0.5,-0.25)
-- (0.5,-0.25);
\foreach \x in {(-0.0749488, 0.75), (-0.0749488, 0.25), (-0.0749488, -0.25), (-0.306426, 0.75), (-0.306426, 0.25), (-0.306426, -0.25)}
{ \draw[red,fill=red] \x circle (0.03);
}
\end{scope}
\begin{scope}[shift={(4.5,0)}]
\draw[black] (-0.5,0) -- (0.5,0);
\draw[black] (0,-0.3) -- (0,0.8);
\draw[black,dashed] (-0.5,0.75)
-- (0.5,0.75);
\draw[black,dotted] (-0.5,0.25)
-- (0.5,0.25);
\draw[black,dotted] (-0.5,-0.25)
-- (0.5,-0.25);
\foreach \x in {(0.136342, 0.75), (0, 0.75), (-0.136342, 0.75), (-0.389944, 0.75), (-0.389944, 0.25), (-0.389944, -0.25)}
{ \draw[red,fill=red] \x circle (0.03);
}
\end{scope}
\begin{scope}[shift={(6,0)}]
\draw[black] (-0.5,0) -- (0.5,0);
\draw[black] (0,-0.3) -- (0,0.8);
\draw[black,dashed] (-0.5,0.75)
-- (0.5,0.75);
\draw[black,dotted] (-0.5,0.25)
-- (0.5,0.25);
\draw[black,dotted] (-0.5,-0.25)
-- (0.5,-0.25);
\foreach \x in {(-0.20822, 0.75), (-0.524302, 0.75), (0.524302, 0.75), (-0.0597185, 0.75), (0.0597185, 0.75), (0.20822, 0.75)}
{ \draw[red,fill=red] \x circle (0.03);
}
\end{scope}
\end{tikzpicture}
\end{center}
\caption{
Configurations of Bethe roots associated with the successive ground states of the Hamiltonian $H(\alpha)$ for $n=3$ on a chain of $L=6$ sites, as $\alpha$ is varied from $0$ to $\pi$. These are the levels highlighted in red in Figure \ref{fig:crossings0}, and correspond to the states of lowest energy in the intervals $\alpha \in [0,\approx 0.484], [\approx 0.484,\approx 0.972], [\approx 0.972,\approx 1.851], [\approx 1.851,\approx 2.733], [\approx 2.733,\pi]$ respectively.
}
\label{fig:rootsalpha}
\end{figure}
\section{Quantizing the exact \texorpdfstring{$n$}{n}-strings using the transfer matrix}
\label{sec:TQ}
\subsection{The T-Q relations}
Within the quantum-integrability framework, much can be learned from operatorial relations (or functional relations, when viewed at the level of eigenvalues) satisfied by the transfer matrices. A particularly important set are the $T$-$Q$ relations \cite{BaxterTQ}, giving the transfer matrices in terms of Baxter's $Q$ operator. On Bethe states, the latter has by construction the eigenvalues
\begin{align}
Q(\lambda) = \prod_j \sinh(\lambda - \lambda_j) \,.
\label{qfunction}
\end{align}
The $T$-$Q$ relations for the fundamental transfer matrices of the spin-$S$ XXZ chains can be derived from fusion of the spin-1/2 case.
In the following, we will be interested in $T$-$Q$ equations for the transfer matrices based on nilpotent auxiliary representations at a root of unity. Such relations were presented in the case of the spin-1/2 chain in \cite{KorffTQ}, and applications to the study of exact strings can be found in \cite{Korffstrings}. More recent applications of such relations, in particular to the study of quantum quenches and quantum transport, can be found in \cite{DeLuca}.
It is easy to extend these relations to the case at hand here, namely spin-$\frac{n-1}{2}$ chains with twisted boundary conditions. We write
\begin{equation}
T(\lambda,\bar{\lambda}) =
Q\left(\lambda-\bar{\lambda} - i S \gamma \right)
Q\left(\lambda+\bar{\lambda} + i(S+1) \gamma \right)
\sum_{m=-S}^{S} e^{i m \varphi}
\frac{ f\left( \lambda+\bar{\lambda} + i\left( m + 1 /2\right)\gamma \right) }
{
Q\left(\lambda+\bar{\lambda} + i(m+1)\gamma \right) Q\left(\lambda+\bar{\lambda} + i m \gamma \right)
} \,,
\label{TQ}
\end{equation}
which has exactly the same form as that proposed in \cite{DeLuca}, the only differences residing in the definition of the source function, which is here
\begin{eqnarray}
f(\lambda) &=& \left(\prod_{k=1}^{n-1} -i \sinh\left( \lambda +i \left( k-\frac{n}{2}\right) \gamma \right)\right)^{L}
= \left( \frac{1}{2^{n-1}} \frac{\sinh\left(n\left( \lambda - i \frac{\pi}{2} \right)\right)}{\sinh\left( \lambda - i \frac{\pi}{2} \right)} \right)^L \,,
\end{eqnarray}
as resulting from the fusion of spin-1/2 chains into a spin-$S$ chain, as well as in the introduction of twist factors in front of each term (we recall that the case of interest for us is obtained by setting the twist parameter as in eq. \eqref{twistphi}).
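The second equality in the expression for $f$ can be verified numerically; the following standalone sketch (assuming $\gamma=\pi/n$, as appropriate at the root of unity considered here) compares the two forms at a generic complex point, for $L=1$:

```python
# Numerical check (at gamma = pi/n) of the product form of the source
# function f quoted above, before raising to the L-th power.
import cmath
from math import pi

def lhs(lam, n):
    g = pi / n
    p = 1.0 + 0.0j
    for k in range(1, n):
        p *= -1j * cmath.sinh(lam + 1j * (k - n / 2) * g)
    return p

def rhs(lam, n):
    z = lam - 1j * pi / 2
    return cmath.sinh(n * z) / (2 ** (n - 1) * cmath.sinh(z))

for n in (2, 3, 4, 5):
    lam = 0.37 + 0.21j   # generic test point
    assert abs(lhs(lam, n) - rhs(lam, n)) < 1e-10
```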
Equation \eqref{TQ}, though not proved, can be checked extensively against exact diagonalization, using the following method adapted from \cite{FabriciusMcCoy2}: since the transfer matrices $T(\lambda, \bar{\lambda})$, whose entries are trigonometric polynomials in $\lambda$ and $\bar{\lambda}$, share the same set of eigenvectors, it is straightforward to show that their eigenvalues are also trigonometric polynomials in $\lambda$ and $\bar{\lambda}$. For a given eigenvector obtained from exact diagonalization of one of these transfer matrices, we can construct this polynomial explicitly by acting on this eigenvector with $T(\lambda, \bar{\lambda})$. Assuming a functional dependence of the form \eqref{TQ}, where the number of Bethe roots is fixed by the $U(1)$ charge but where the Bethe roots themselves are unknowns, we can write from \eqref{TQ} a trigonometric polynomial equation which should vanish for any $\lambda$, $\bar{\lambda}$. This imposes the cancellation of each coefficient in this equation, which, if a solution exists, fixes the Bethe roots $\{\lambda_j\}$ and confirms the consistency of \eqref{TQ}.
Using this procedure, we have indeed checked the validity of \eqref{TQ} on finite-size chains ($L=4,5,6$), for several values of $n$, and for various eigenstates in different charge sectors.
A first observation to make from \eqref{TQ} is that it can be factorized in the form \eqref{factorization}, yielding two individual $T$-$Q$ equations for $T_{\rm L}$ and $T_{\rm R}$:
\begin{eqnarray}
T_{\rm L}(\lambda_{\rm L}) &=& \frac{ Q\left(\lambda_{\rm L} - i S \gamma \right) }{ Q\left(i S \gamma \right) }
\label{TQL}
\\
T_{\rm R}(\lambda_{\rm R}) &=&
Q\left( -i S \gamma \right)
Q\left(\lambda_{\rm R} + i(S+1) \gamma \right)
\sum_{m=-S}^{S}
\frac{ f\left( \lambda_{\rm R} + i\left( m + 1 /2\right)\gamma \right) }
{
Q\left(\lambda_{\rm R}+i (m+1)\gamma \right) Q\left( \lambda_{\rm R} + i m \gamma \right)
}
\label{TQR} \,,
\end{eqnarray}
which once again can be checked numerically. The manifest asymmetry between \eqref{TQL} and \eqref{TQR} might seem surprising, given that $T_{\rm L}$ and $T_{\rm R}$ are related through a global spin-flip operation. The Bethe ansatz, however, is asymmetric due to the choice of a reference state that breaks the spin-reversal symmetry. The two transfer matrices $T_{\rm L}$ and $T_{\rm R}$ can be understood as the two linearly independent solutions of the $T$-$Q$ equation for the fundamental transfer matrix. These are sometimes referred to as $Q_+$ and $Q_-$, or $Q_{\rm R}$ and $Q_{\rm L}$ in the literature \cite{Bazhanov,KorffTQ,Korffequator}.
It is well-known in the usual case how to recover the Bethe equations from the analyticity properties of the $T$-$Q$ relation \cite{BaxterTQ}. In the present case, we can go further and derive the quantization equation for exact $n$-strings.
For this sake, let us introduce a few notations.
As defined in section \ref{sec:quantnstrings}, for a given eigenstate made of ordinary roots ${\rm r} = \{\lambda_j\}$, the exact $n$-string quantization equation gives rise to a set of solutions $\mathcal{S}={\rm s} \cup \bar{\rm s}$, where ${\rm s}$ and $\bar{\rm s}$ denote respectively the sets of occupied and vacant solutions.
We introduce for each set a different $Q$ function, namely
\begin{align}
Q_{\rm r}(\lambda) &= \prod_{\lambda_j \in {\rm r}} \sinh(\lambda - \lambda_j) \,,
\cr
Q_{\rm s}(\lambda) &= \prod_{\mu_k \in {\rm s}} \sinh\left(\lambda-\mu_k - i \frac{\pi}{2} \right) \sinh\left(\lambda-\mu_k - i \frac{\pi}{2} + i \gamma \right)
\ldots
\sinh\left(\lambda-\mu_k - i \frac{\pi}{2} + i(n-1) \gamma \right)
\,,\cr
Q_{\bar{\rm s}}(\lambda) &= \prod_{\bar{\mu}_k \in \bar{\rm s}} \sinh\left(\lambda-\bar{\mu}_k - i \frac{\pi}{2} \right) \sinh\left(\lambda-\bar{\mu}_k - i \frac{\pi}{2} + i \gamma \right)
\ldots
\sinh\left(\lambda-\bar{\mu}_k - i \frac{\pi}{2} + i(n-1) \gamma \right)\,,
\cr
Q_{\cal S}(\lambda) &= Q_{\rm s}(\lambda) Q_{\bar{\rm s}}(\lambda) \,,
\end{align}
so in particular Baxter's original $Q$ function \eqref{qfunction} reads
\begin{equation}
Q(\lambda) = Q_{\rm r}(\lambda) Q_{\rm s}(\lambda) \,.
\label{eq:QsQr}
\end{equation}
Looking at \eqref{TQR}, it is easy to see that the product $Q_{\rm s}\left(\lambda_{\rm R}+ i(m+1)\gamma \right) Q_{\rm s}\left( \lambda_{\rm R} + i m \gamma \right)$ in the denominator does not depend on $m$, and therefore
\begin{eqnarray}
T_{\rm R}(\lambda_{\rm R})
&=&
\frac{
Q\left( -i S \gamma \right)
Q_{\rm r}\left(\lambda_{\rm R} +i (S+1) \gamma \right)
}
{
Q_{{\rm s}}\left(\lambda_{\rm R} + i \frac{\gamma}{2}\right)
}
\sum_{m=-S}^{S}
e^{i m \varphi} \frac{ f\left( \lambda_{\rm R} + i\left( m + 1 /2\right)\gamma \right) }
{
Q_{\rm r}\left(\lambda_{\rm R}+ i (m+1)\gamma \right) Q_{\rm r}\left( \lambda_{\rm R} + i m \gamma \right)
}
\nonumber \\
\label{TQRstrings}
\end{eqnarray}
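The $m$-independence of the product $Q_{\rm s}\left(\lambda_{\rm R}+i(m+1)\gamma\right) Q_{\rm s}\left(\lambda_{\rm R}+im\gamma\right)$ used in this rewriting follows from $Q_{\rm s}(\lambda+i\gamma)=-Q_{\rm s}(\lambda)$ for each string, since $n\gamma=\pi$ and $\sinh(z+i\pi)=-\sinh z$. A standalone numerical sketch for a single string, with $\gamma=\pi/n$:

```python
# Check of the m-independence used above: for a single exact n-string
# centered at mu (gamma = pi/n), the product
#   Q_s(x + i*(m+1)*gamma) * Q_s(x + i*m*gamma)
# does not depend on m, because Q_s(x + i*gamma) = -Q_s(x).
import cmath
from math import pi

def Q_s(x, mu, n):
    g = pi / n
    p = 1.0 + 0.0j
    for j in range(n):
        p *= cmath.sinh(x - mu - 1j * pi / 2 + 1j * j * g)
    return p

n, mu, x = 3, 0.4 - 0.1j, 0.9 + 0.3j
g = pi / n
# shifting by i*gamma flips the sign of each string factor
assert abs(Q_s(x + 1j * g, mu, n) + Q_s(x, mu, n)) < 1e-12
# hence the product of two consecutive shifts is independent of m
pairs = [Q_s(x + 1j * (m + 1) * g, mu, n) * Q_s(x + 1j * m * g, mu, n)
         for m in range(-1, 3)]
assert all(abs(p - pairs[0]) < 1e-12 for p in pairs)
```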
Following a standard argument \cite{BaxterTQ}, the functions $T_{\rm R}(\lambda_{\rm R})$ and $T_{\rm L}(\lambda_{\rm L})$ are trigonometric polynomials by construction of the transfer matrix, and should therefore have no poles.
Taking for instance $\lambda_{\rm R} \to \mu_k - i \frac{\gamma}{2}$, this imposes that the sum in \eqref{TQRstrings} should vanish at this value, namely,
\begin{equation}
\sum_{m=-S}^{S}
e^{i m \varphi} \frac{ f\left(\mu_k + i m \gamma \right) }
{
Q_{\rm r}\left(\mu_k+ i\left(m- \frac{1}{2}\right)\gamma \right) Q_{\rm r}\left(\mu_k + i\left(m+ \frac{1}{2}\right) \gamma \right)
}
=0 \,,
\label{quantizationTQ}
\end{equation}
which fixes a quantization condition on the exact $n$-string center $\mu_k$.
Dividing \eqref{quantizationTQ} by $f(\mu_k + i S \gamma)$ and multiplying by $Q_{\rm r}\left(\mu_k + i \frac{\pi}{2}\right)Q_{\rm r}\left(\mu_k + i \frac{\pi}{2} - i \gamma\right)$, we indeed recover precisely the quantization equation \eqref{quantizationTQl}. We note that similar results have been obtained for spin-1/2 chains in \cite{Korffstrings}.
We can now factorize \eqref{quantizationTQ} in terms of the solutions of the string quantization equation, which have been described in section \ref{sec:quantnstrings}.
By explicitly implementing all the $Q$ functions above for various eigenstates for $n=3,4,5$ and system sizes $L=3,4,5,6,7$, we check that the following factorization holds
\begin{equation}
\sum_{m=-S}^{S}
\frac{ e^{i m \varphi} f\left(\mu + i m \gamma \right) }
{
Q_{\rm r}\left(\mu+ i\left(m- \frac{1}{2}\right)\gamma \right) Q_{\rm r}\left(\mu + i\left(m+ \frac{1}{2}\right) \gamma \right)
}
\propto
e^{(m_{-\infty}-m_{\infty})\mu}
Q_{\cal S}\left(\mu \right) \,,
\label{quantizationTQfactor}
\end{equation}
where the multiplicities $m_{\pm\infty}$ have been defined in section \ref{sec:quantnstrings}, and where the symbol $\propto$ indicates a numerical proportionality constant independent of $\mu$. The latter can be determined for instance by comparing the limits $\mu \to \infty$ of the two sides of \eqref{quantizationTQfactor}, and depends on the state under consideration. From there, we can rewrite $T_{\rm R}(\lambda_{\rm R})$ as
\begin{eqnarray}
T_{\rm R}(\lambda_{\rm R}) &\propto &
\frac{
Q(-i S \gamma) Q_{\rm r}\left(\lambda_{\rm R} +i (S+1) \gamma \right) }
{
Q_{\rm s}\left(\lambda_{\rm R} + i \frac{\gamma}{2}\right)
}
Q_{\rm s}\left(\lambda_{\rm R} + i \frac{\gamma}{2}\right)
Q_{\bar{\rm s}}\left(\lambda_{\rm R} + i \frac{\gamma}{2}\right)
\nonumber
\\
&\propto &
Q(-i S \gamma) Q_{\rm r}\left(\lambda_{\rm R} +i (S+1) \gamma \right)
Q_{\bar{\rm s}}\left(\lambda_{\rm R} + i \frac{\gamma}{2}\right) \,,
\label{TRconj}
\end{eqnarray}
which will turn out to be useful in the following.
\subsection{The exact string creation/annihilation operators}
\label{sec:stringcreation}
We are finally able to give a precise link between the exact strings and the elements of the Onsager algebra.
To do so, we utilise the generating functions $\mathcal{G}^{0}(\lambda), \mathcal{G}^{\pm}(\lambda)$ introduced in section \ref{sec:OnsagerTM} from the transfer matrix construction.
Consider the commutators of the generating functions $\mathcal{G}^{\pm}(\lambda)$ with the Onsager Hamiltonian $\hat{Q}^0\propto H_{\rm R}-H_{\rm L}$. Using the commutation relations \eqref{Onsager}, we obtain
\begin{equation}
\left[ \hat{Q}^0 , \mathcal{G}^\pm(\lambda) \right]
= \pm \frac{n}{2}(\tau\left(\lambda-i{\gamma/ 2}\right) + \tau\left(\lambda+i{\gamma/ 2}\right)) \mathcal{G}^\pm(\lambda) \,.
\end{equation}
Using the definition \eqref{taudef} of $\tau$ and the expression \eqref{estring} for the energy of an exact $n$-string then gives, in the limit $\epsilon \to 0$,
\begin{equation}
\left[ i(H_{\rm R}-H_{\rm L}) , \mathcal{G}^\pm(\lambda) \right]
\ =\ \pm
n^2 \tanh(n \lambda)\mathcal{G}^\pm(\lambda)
\ =\ \pm
i(\epsilon_{\rm R} - \epsilon_{\rm L})_{\rm s}(\lambda)
\mathcal{G}^\pm(\lambda)
\,.
\label{comGpm}
\end{equation}
The generating functions $\mathcal{G}^\pm(\lambda)$ thus have the same commutation relations with $i(H_{\rm R}-H_{\rm L})$ as would operators creating or annihilating an exact $n$-string with center $\lambda$.
In order to make this correspondence more precise, we need to take proper care of the regulator in \eqref{gen3reg}.
Let us first consider the action of $\mathcal{G}^0(\lambda)$ on a given eigenstate specified by a set of ordinary roots $\{ \lambda_j\}$ and of exact $n$-strings with centers $\{ \mu_k\}$.
The eigenvalues of $T_{\rm R}$ and $T_{\rm L}$ were expressed in terms of the various $Q$ functions in section \ref{sec:TQ}, and we further recall the alternative conjectured expression \eqref{TRconj} for $T_{\rm R}$, which has been verified through extensive numerical checks.
From there we obtain
\begin{equation}
{
T_{\rm R}\left(\lambda-i\frac{\gamma}{2}+i \epsilon\right)
T_{\rm L}\left(\lambda+i\frac{\gamma}{2}- i \epsilon\right)
\over
T_{\rm L}\left(\lambda-i\frac{\gamma}{2}+i \epsilon\right)
T_{\rm R}\left(\lambda+i\frac{\gamma}{2}- i \epsilon\right)
}
=
\frac{Q_{\bar{\rm s}}(\lambda + i \epsilon)Q_{\rm s}(\lambda - i \epsilon)}
{Q_{\rm s}(\lambda + i \epsilon)Q_{\bar{\rm s}}(\lambda - i \epsilon)} \,,
\end{equation}
where we recall that $Q_{\rm s}$ and $Q_{\bar{\rm s}}$ denote products over the occupied and vacant exact $n$-string solutions, respectively. We then have
\begin{equation}
{T_{\rm R}\left(\lambda-i\frac{\gamma}{2}+i \epsilon\right)
T_{\rm L}\left(\lambda+i\frac{\gamma}{2}- i \epsilon\right)
\over
T_{\rm L}\left(\lambda-i\frac{\gamma}{2}+i \epsilon\right)
T_{\rm R}\left(\lambda+i\frac{\gamma}{2}- i \epsilon\right)
}
=
\prod_{\mu \in {\rm s}}
\frac{\lambda - \mu - i \epsilon}
{\lambda - \mu + i \epsilon}
\prod_{\bar{\mu} \in \bar{\rm s}}
\frac{\lambda - \bar{\mu} + i \epsilon}
{\lambda - \bar{\mu} - i \epsilon} \,,
\end{equation}
and so for $\epsilon \to 0^+$,
\begin{align}
\mathcal{G}^0(\lambda) &=
\frac{1}{\pi}
\left(
\sum_{\mu \in {\rm s}} \frac{ \epsilon}{(\lambda-\mu)^2 + \epsilon^2}
-
\sum_{\bar{\mu} \in \bar{\rm s}} \frac{ \epsilon}{(\lambda-\bar{\mu})^2 + \epsilon^2}
\right)
\cr
& \stackrel{\epsilon\to 0^+}{\longrightarrow}
\sum_{\mu \in {\rm s}} \delta(\lambda-\mu)
-
\sum_{\bar{\mu} \in \bar{\rm s}} \delta(\lambda-\bar{\mu}) \,.
\label{lorentzians}
\end{align}
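The limit taken here is the standard representation of the Dirac delta by normalized Lorentzians: each term carries unit weight and concentrates at its center. As a quick standalone check:

```python
# Each Lorentzian (1/pi) * eps / ((x - center)^2 + eps^2) integrates to 1
# and, as eps -> 0, all of its weight concentrates near the center.
from math import atan, pi

def lorentzian_weight(center, eps, a, b):
    # closed-form integral of the Lorentzian over [a, b]
    return (atan((b - center) / eps) - atan((a - center) / eps)) / pi

for eps in (1e-2, 1e-4, 1e-6):
    # total weight over a wide interval is (essentially) 1 ...
    assert abs(lorentzian_weight(0.3, eps, -10, 10) - 1) < 1e-2
    # ... and almost all of it sits in a shrinking window around the center
    assert lorentzian_weight(0.3, eps, 0.3 - 100 * eps, 0.3 + 100 * eps) > 0.99
```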
Consider now $\mathcal{G}^+(\lambda)$ acting on a given eigenstate $|\Psi\rangle$. $\hat{Q}^+ |\Psi\rangle$, if non-vanishing, is degenerate with $| \Psi \rangle$ (with respect to the original Hamiltonian $H_{\rm R}+H_{\rm L}$) and has $n$ more particles, so can be expanded as a combination of all possible exact $n$-strings that can be built on top of $|\Psi\rangle$. In transparent notation,
\begin{equation}
\hat{Q}^+ |\Psi\rangle = \sum_{ \bar{\mu} \in \bar{\rm s}} \alpha_ {\bar{\mu}} | \Psi \cup \{\bar{\mu}\}_n \rangle \,,
\end{equation}
where $\alpha_{\bar{\mu}}$ are some coefficients.
From there,
\begin{align}
\mathcal{G}^+(\lambda) |\Psi\rangle
&= [\hat{Q}^+, \mathcal{G}^0(\lambda)] |\Psi\rangle
\cr
&=
\sum_{\bar{\mu} \in \bar{\rm s}} \alpha_{\bar{\mu}} \delta(\lambda - \bar{\mu}) | \Psi \cup \{\bar{\mu}\}_n \rangle \,.
\end{align}
Thus $\mathcal{G}^+(\lambda)$ creates an exact $n$-string at center $\lambda$ whenever this is allowed.
Similarly, for $\mathcal{G}^{-}(\lambda)$ we find
\begin{eqnarray}
\mathcal{G}^-(\lambda) |\Psi\rangle
=
\sum_{{\mu} \in {\rm s}} \alpha_{{\mu}} \delta(\lambda - {\mu} ) | \Psi \setminus \{ {\mu} \}_n \rangle \,,
\end{eqnarray}
so $\mathcal{G}^-(\lambda)$ annihilates an exact $n$-string at center $\lambda$, whenever this is allowed.
In conclusion, the operators $\mathcal{G}^{\pm}(\lambda)$ are precisely the string creation/annihilation operators!
In order to check the validity of our construction, it is worth having a look at slightly different objects, namely $\epsilon ~\mathcal{G}^{0}(\lambda)$, $\epsilon ~\mathcal{G}^{+}(\lambda)$ and $\epsilon ~\mathcal{G}^{-}(\lambda)$ in the $\epsilon \to 0$ limit.
As for the previously considered $\mathcal{G}^{0}(\lambda)$, $\mathcal{G}^{+}(\lambda)$ and $\mathcal{G}^{-}(\lambda)$, these are zero for most values of $\lambda$, except when $\lambda$ is a solution of the exact $n$-string quantization equation on top of some state. In the latter case, the Lorentzians in \eqref{lorentzians} have a finite $\epsilon \to 0$ limit when multiplied by $\epsilon$, so $\epsilon ~\mathcal{G}^{0}(\lambda)$, $\epsilon ~\mathcal{G}^{+}(\lambda)$ and $\epsilon ~\mathcal{G}^{-}(\lambda)$ become well-defined, finite operators.
As a check, we have constructed the operators $\epsilon ~\mathcal{G}^{\pm}(\lambda)$ explicitly on the lattice, and verified for a few examples that these indeed act as exact $n$-string creation/annihilation operators.
In the $n=2$ case in particular, we can verify that these recover the creation and annihilation operators introduced in section \ref{sec:n2}. Consider indeed the formal expansions \eqref{QplusQminus} for $\mathcal{G}^\pm(\lambda)$. In terms of the pseudomomentum $k$ related to $\lambda$, we have for $n=2$ the simple relation
\begin{equation}
\tau\left(\lambda+i{\gamma/ 2}\right) = e^{i \left(k + \frac{\pi}{2} \right)} \,.
\end{equation}
From there, the generating functions (which we denote by $\mathcal{G}^{\pm}(k)$ by abuse of notation) read
\begin{eqnarray}
\mathcal{G}^{\pm}(k) &=& - \frac{2 \cos k}{\pi}
\sum_{m=1}^{\infty}
{e^{- m \epsilon}} \sin\left(m \left( k + \frac{\pi}{2} \right) \right) \widetilde{Q}_m^{\pm} \,.
\end{eqnarray}
Using the periodicity of the Onsager algebra for $n=2$, $Q_{m+L}^\pm = (-1)^{Q+1} Q_{m}^{\pm}$, the infinite sum is non-vanishing in the $\epsilon\to 0$ limit only when $e^{i k L} = (-1)^{Q+1}$, which is precisely the quantization equation discussed in section \ref{sec:n2}.
We get in that case
\begin{eqnarray}
\lim_{\epsilon \to 0} \epsilon ~ \mathcal{G}^{\pm}(k) &=& - \frac{2 \cos k}{\pi} {\lim_{\epsilon\to 0} \frac{\epsilon}{1-e^{- L \epsilon} }}
\sum_{m=1}^{L-1}
\sin\left(m \left( k + \frac{\pi}{2} \right) \right) \widetilde{Q}_m^{\pm} \nonumber \\
&=& - \frac{2 \cos k}{L \pi}
\sum_{m=1}^{L-1}
\sin\left(m \left( k + \frac{\pi}{2} \right) \right) \widetilde{Q}_m^{\pm} \,,
\end{eqnarray}
which, up to a proportionality factor and the change of basis \eqref{basischange}, precisely recovers the operators ${\cal Q}(k)$ and ${\cal Q}^\dagger(k)$ of section \ref{sec:n2} (see equation \eqref{SQ}).
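The resummation underlying the last two steps can be checked on a generic scalar sequence with the same $L$-periodicity property (a sketch; the operators $\widetilde{Q}^\pm_m$ are replaced here by numbers $c_m$ satisfying $c_{m+L} = s\, c_m$ with $s=\pm 1$):

```python
# Resummation used above: if c_{m+L} = s * c_m with s = +-1, then
#   sum_{m>=1} e^{-m*eps} c_m
#     = (sum_{m=1}^{L} e^{-m*eps} c_m) / (1 - s * e^{-L*eps}),
# and eps / (1 - e^{-L*eps}) -> 1/L as eps -> 0.
import math, random

random.seed(1)
L, s, eps = 5, -1, 1e-3
base = [random.uniform(-1, 1) for _ in range(L)]        # c_1 .. c_L
c = lambda m: s ** ((m - 1) // L) * base[(m - 1) % L]   # periodic extension

full = sum(math.exp(-m * eps) * c(m) for m in range(1, 8000 * L + 1))
resummed = (sum(math.exp(-m * eps) * c(m) for m in range(1, L + 1))
            / (1 - s * math.exp(-L * eps)))
assert abs(full - resummed) < 1e-8

# the geometric factor that produces the 1/L in the text
assert abs(eps / (1 - math.exp(-L * eps)) - 1 / L) < 1e-2
```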
The operators $\mathcal{G}^{r}(\lambda)$, in the $\epsilon \to 0$ limit, make sense as distributions. Another way to build finite-norm lattice operators out of these is to consider integrals over $\lambda$.
Looking back at the definition \eqref{gen3reg}, these can be recast as contour integrals of logarithmic derivatives of the transfer matrices $T_{\rm R}$, $T_{\rm L}$. As an example, the ground state of the maximally chiral Hamiltonian $i(H_{\rm R}- H_{\rm L})$, which as we have seen in section \ref{sec:alphafamily} is made purely of exact $n$-strings, could formally be constructed from a product of such operators. It remains unclear, however, how much practical use such a construction might have.
\section{Conclusion}
We started the paper by explaining how the Onsager algebra is a symmetry algebra of lattice models with both a $U(1)$ symmetry and self-duality. Such a non-abelian symmetry results in exact degeneracies in the spectrum, and we described how they appear in an $n$-state clock model. Moreover, in these models the Onsager algebra is intimately related to a quantum-group algebra, providing a nice physical realisation of non-fundamental representations of the latter. Because this model is not free-fermionic, the Onsager algebra cannot be used to compute the symmetry multiplets exactly, so we resorted to a more detailed coordinate Bethe-ansatz analysis and a set of deep functional relations. The symmetry structure admits an elegant description in terms of exact $n$-string solutions of the Bethe equations, and we showed how to construct operators creating and annihilating them.
Our work suggests a number of future directions to pursue. The superintegrable chiral Potts Hamiltonian \eqref{HSI} is built from the symmetry generators $Q$ and $\widehat{Q}$, and so commutes with our Hamiltonian $H_n$. Since the latter has a $U(1)$ symmetry and so is easily treated using the coordinate Bethe ansatz, it gives a simple and direct way of understanding why the corresponding Bethe equations also arise in the integrable chiral Potts models; previous analyses were somewhat indirect. Our results therefore may provide some new insight into these models and their integrable structure.
The continuum limits of both $H_n$ and $-H_n$ are described by conformal field theories. This implies that the Onsager algebra survives in the continuum limit, as the degeneracies not only survive but are further enhanced. Indeed, an infinite-dimensional symmetry algebra, the Virasoro algebra, is a symmetry of all conformal field theories, and those with a $U(1)$ symmetry like ours have an even larger symmetry generated by a Kac-Moody algebra \cite{Goddard86}. Since some of the conformal structure is already apparent on the lattice \cite{Koo93,Zou17}, it would likely be fruitful to examine the connection of the Onsager algebra with these conformal symmetry algebras. Indeed, it is not difficult to see the connection in the $n=2$ free-fermion case. However, the lattice chiral decomposition of the Hamiltonian into commuting left- and right-moving parts is not the same as the analogous decomposition in the conformal field theory, since the empty state is not the ground state. It would thus be quite interesting to understand what happens at higher $n$.
Even more tantalisingly, our result that the Onsager-algebra symmetry arises from $U(1)$ symmetry and self-duality does not seem to have anything inherently to do with our models being $1+1$ or two-dimensional. Is it possible for Onsager symmetries to arise in higher-dimensional self-dual models?
\subsection*{Acknowledgments}
E.V.\ thanks Eric Ragoucy and Luc Frappat for clarifications about their work \cite{Ragoucy1,Ragoucy2}, as well as Pascal Baseilhac, Samuel Belliard, Bruno Bertini, Azat Gainutdinov, Barry McCoy, Christian Korff, Giuliano Niccoli and Hubert Saleur for discussions. This work was supported by EPSRC through grant EP/N01930X.
\bigskip
\section*{Appendix: The manifestly $U(1)$-invariant form of $H_n$}
\setcounter{equation}{0}
\renewcommand{\theequation}{A\arabic{equation}}
The $U(1)$ charge $Q$ from \eqref{Qdef} is a sum over the $\tau$ operators, and so the only non-commuting terms in the Hamiltonian \eqref{Hnchiral} involve some $(\sigma^\dagger_j\sigma_{j+1})^a$. It is thus useful to rewrite \eqref{Hnchiral} in the form
\begin{align}
H_n = i\sum\limits_{j=1}^L \sum\limits_{a=1}^{n-1} \frac{1}{1-\omega^{-a}}
\biggl[ (2a-n) \tau_j^a + \frac{1}{2} \big(\sigma^\dagger_j\sigma_{j+1}\big)^a\,
\Big({R}^{(n-a)}_{j}-R^{(a)}_{j+1}\Big)\biggr]\ ,
\label{HR}
\end{align}
where
\[
R^{(a)}_j = n-2a - 2\sum_{b=1}^{n-1}\frac{1-\omega^{-ab}}{1-\omega^{-b}}\tau_j^b\ .
\]
The key property of the latter operators is that for $a=0\dots n-1$ their square is proportional to the identity:
$\big(R^{(a)}_j\big)^2 = n^2. $
Their eigenvalues are therefore all $\pm n$. Indeed, in the basis \eqref{tausigma} all the $R^{(a)}_j$ are diagonal, and letting the eigenvalues of the $\tau_j$ be $\omega^{t_j}$ gives
\begin{align}
R^{(a)}_j =
\begin{cases}
-n\qquad & t_j=0\dots a-1\ ,\cr
\ \, n & t_j=a\dots n-1\ .
\end{cases}
\label{Rdiag}
\end{align}
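Both properties of the $R^{(a)}_j$ operators (squaring to $n^2$ and the diagonal form above) are straightforward to verify numerically in the basis where $\tau$ is diagonal; a single-site sketch for $n=3$:

```python
# Check of (R^(a))^2 = n^2 and of the diagonal eigenvalues above, for
# the n-state clock operators in the basis where tau is diagonal.
import numpy as np

n = 3
w = np.exp(2j * np.pi / n)
tau = np.diag([w**t for t in range(n)])   # eigenvalues omega^t

def R(a):
    out = (n - 2 * a) * np.eye(n, dtype=complex)
    for b in range(1, n):
        out -= 2 * (1 - w**(-a * b)) / (1 - w**(-b)) \
               * np.linalg.matrix_power(tau, b)
    return out

for a in range(n):
    Ra = R(a)
    assert np.allclose(Ra @ Ra, n**2 * np.eye(n))      # squares to n^2
    expected = [-n if t < a else n for t in range(n)]  # -n for t<a, +n else
    assert np.allclose(np.diag(Ra).real, expected)
    assert np.allclose(np.diag(Ra).imag, 0)
```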
To show how the $S^\pm$ appear, note that the operator $\sigma^\dagger_{j}$ shifts $t_j$ by $+1\,$mod$\,n$, while $\sigma_{j+1}$ shifts $t_{j+1}$ by $-1\,$mod$\,n$. Thus $\sigma^\dagger_{j}\sigma_{j+1}$ violates $U(1)$ conservation when acting on the states with either $t_j =n-1$ or $t_{j+1}=0$, but not both. Conveniently, from \eqref{Rdiag} it follows that the linear combination ${R}^{(n-a)}_{j}-R^{(a)}_{j+1}$ annihilates these states, preserving the $U(1)$.
Proceeding in this fashion gives
\[ \big(\sigma^\dagger_j\sigma_{j+1}\big)^a\Big({R}^{(n-a)}_{j}-R^{(a)}_{j+1}\Big)= 2n \big(S^+_jS^-_{j+1}\big)^{n-a}- 2n \big(S^-_j S^+_{j+1}\big)^a\ ,
\]
which when used in \eqref{HR} yields the manifestly $U(1)$-invariant Hamiltonian \eqref{HnchiralSpm}.
A common approach to gain a better understanding of Yang-Mills theory, in particular the mechanism of confinement, is to restrict the full path integral to a small subset of gauge field configurations, which are supposed to be of physical importance. Examples are instanton gas and liquid models (cf.\ \cite{Schafer:1996wv} and references therein), ensembles of regular gauge instantons and merons \cite{Lenz:2003jp,Negele:2004hs,Lenz:2007st}, the pseudoparticle approach \cite{Wagner:2005vs,Wagner:2006qn,Wagner:2006du,Szasz:2008qk}, calorons with non-trivial holonomy \cite{Gerhold:2006sk,Gerhold:2006kw}, and models based on center vortices (cf.\ e.g.\ \cite{Faber:1997rp,Engelhardt:1999wr,Engelhardt:2003wm,Rafibakhsh:2007sh,Faber:2008}).
In this paper we apply the pseudoparticle approach to SU(2) Yang-Mills theory and perform a detailed study of the static potential for various representations.
\section{The pseudoparticle approach in SU(2) Yang-Mills theory}
The basic idea of the pseudoparticle approach is to approximate the Yang-Mills path integral
\begin{eqnarray}
\label{EQN001} \Big\langle \mathcal{O} \Big\rangle \ \ = \ \ \frac{1}{Z} \int DA \, \mathcal{O}[A] e^{-S[A]} \quad , \quad S[A] \ \ = \ \ \frac{1}{4 g^2} \int d^4x \, F_{\mu \nu}^a F_{\mu \nu}^a ,
\end{eqnarray}
where $F_{\mu \nu}^a = \partial_\mu A_\nu^a - \partial_\nu A_\mu^a + \epsilon^{a b c} A_\mu^b A_\nu^c$, with a small number of physically relevant degrees of freedom. To this end, the integration over all gauge field configurations in (\ref{EQN001}) is restricted to a small subset, which can be written as a linear superposition of a fixed number of pseudoparticles\footnote{In this paper the term pseudoparticle refers to any gauge field configuration $a_\mu^a$, which is localized in space and in time, not only to solutions of the classical Yang-Mills equations of motion.}:
\begin{eqnarray}
\label{EQN002} A_\mu^a(x) \ \ = \ \ \sum_j \mathcal{A}(j) \mathcal{C}^{a b}(j) a_\mu^b(x-z(j)) ,
\end{eqnarray}
where $j$ is the pseudoparticle index and $\mathcal{A}(j) \in \mathbb{R}$, $\mathcal{C}^{a b}(j) \in \textrm{SO(3)}$ and $z(j) \in \mathbb{R}^4$ are the amplitude, the color orientation and the position of the $j$-th pseudoparticle respectively. The functional integration over all gauge field configurations is defined as the integration over pseudoparticle amplitudes and color orientations:
\begin{eqnarray}
\int DA \, \ldots \quad = \quad \int \left(\prod_j d\mathcal{A}(j) \, d\mathcal{C}(j)\right) \ldots
\end{eqnarray}
For the results presented in this work we have used $625$ ``long range pseudoparticles'', which fall off as $1 / \textrm{distance}$, inside a hypercubic spacetime region (for details regarding this setup cf.\ \cite{Szasz:2008qk}):
\begin{eqnarray}
a_{\mu,\textrm{\scriptsize inst.}}^a(x) \ \ = \ \ \frac{\eta_{\mu \nu}^a x_\nu}{x^2 + \lambda^2} \quad , \quad a_{\mu,\textrm{\scriptsize antiinst.}}^a(x) \ \ = \ \ \frac{\bar{\eta}_{\mu \nu}^a x_\nu}{x^2 + \lambda^2} \quad , \quad a_{\mu,\textrm{\scriptsize akyron}}^a(x) \ \ = \ \ \frac{\delta^{a 1} x_\mu}{x^2 + \lambda^2} .
\end{eqnarray}
The first two types generate transverse gauge field components and are similar to regular gauge instantons and antiinstantons, while the third type, the so-called akyron \cite{Wagner:2006qn}, is responsible for longitudinal gauge field components. We would like to stress that gauge field configurations (\ref{EQN002}) are in general not even close to solutions of the classical Yang-Mills equations of motion, i.e.\ the pseudoparticle approach is not a semiclassical model. The idea is rather to approximate physically relevant gauge field configurations by a small number of degrees of freedom.
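As a rough numerical illustration of the superposition (\ref{EQN002}) (ours, not part of the paper's computation), the sketch below builds a gauge field from instanton-type pseudoparticles with random amplitudes and SO(3) color orientations; the ensemble size, regularization $\lambda$, and all numerical values are arbitrary:

```python
import numpy as np

# 't Hooft symbol eta^a_{mu nu} = eps_{a mu nu} + delta_{a mu} delta_{nu 4}
#                                 - delta_{a nu} delta_{mu 4}
# (indices run 0..3 here, with 3 the Euclidean time direction).
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
eta = np.zeros((3, 4, 4))
eta[:, :3, :3] = eps
for a in range(3):
    eta[a, a, 3] = 1.0
    eta[a, 3, a] = -1.0

def a_inst(x, lam=0.5):
    """Instanton-type profile a_mu^a(x) = eta^a_{mu nu} x_nu / (x^2 + lambda^2);
    returned array has shape (3, 4), indexed as [a, mu]."""
    return np.einsum('amn,n->am', eta, x) / (x @ x + lam ** 2)

def random_so3(rng):
    """Random color orientation C in SO(3) (QR-based sketch)."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0            # enforce det C = +1
    return q

def gauge_field(x, centers, amps, colors, lam=0.5):
    """Superposition A_mu^a(x) = sum_j A(j) C^{ab}(j) a_mu^b(x - z(j))."""
    A = np.zeros((3, 4))
    for z, amp, C in zip(centers, amps, colors):
        A += amp * (C @ a_inst(x - z, lam))
    return A

rng = np.random.default_rng(0)
N = 20                             # small illustrative ensemble (the paper uses 625)
centers = rng.uniform(-1.0, 1.0, size=(N, 4))
amps = rng.normal(size=N)
colors = [random_so3(rng) for _ in range(N)]
A = gauge_field(np.array([0.3, -0.2, 0.1, 0.0]), centers, amps, colors)
```

Restricting the path integral then amounts to integrating over the amplitudes and color orientations at fixed pseudoparticle positions.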
\section{Casimir scaling and adjoint string breaking}
In the following the potential associated with a pair of static color charges $\phi^{(J)}$ and $(\phi^{(J)})^\dagger$ in spin-$J$-representation at separation $R$ is denoted by $V^{(J)}(R)$. In pure Yang-Mills theory there is no string breaking, when the charges are in the fundamental representation ($J = 1/2$). For charges in the adjoint representation ($J = 1$) the situation is different: gluons are able to screen such charges and the connecting gauge string is expected to break, when the charges are separated adiabatically beyond a certain distance; a pair of essentially non-interacting gluelumps is formed.
The starting point to extract the static potential in spin-$J$-representation are ``string trial states''
\begin{eqnarray}
\label{EQN003} S^{(J)}(\mathbf{x},\mathbf{y}) | \Omega \rangle \ \ = \ \ (\phi^{(J)}(\mathbf{x}))^\dagger U^{(J)}(\mathbf{x};\mathbf{y}) \phi^{(J)}(\mathbf{y}) | \Omega \rangle \quad , \quad |\mathbf{x}-\mathbf{y}| \ \ = \ \ R ,
\end{eqnarray}
where $U^{(J)}$ denotes a spatial parallel transporter. We compute temporal correlation functions
\begin{eqnarray}
\mathcal{C}_\textrm{\scriptsize string}^{(J)}(T) \ \ = \ \ \langle \Omega | \Big(S^{(J)}(\mathbf{x},\mathbf{y},T)\Big)^\dagger S^{(J)}(\mathbf{x},\mathbf{y},0) | \Omega \rangle \ \ \propto \ \ \Big\langle W_{(R,T)}^{(J)} \Big\rangle
\end{eqnarray}
and determine the corresponding potential values from their exponential fall-off (for details cf.\ \cite{Szasz:2008qk}).
The numerical result for the fundamental potential is shown in Figure~\ref{FIG001}a (here and in the following we have used the value $g = 12.5$ for the coupling constant). It is linear for large separations, i.e.\ there is confinement. We set the physical scale by fitting $V^{(1/2)}(R) = V_0 + \sigma R$ and by identifying the string tension $\sigma$ with $\sigma_\textrm{\scriptsize physical} = 4.2 / \textrm{fm}^2$. This amounts to a spacetime region of extension $L^4 = (1.85 \, \textrm{fm})^4$.
\begin{figure}[b]
\begin{center}
\input{FIG001.pstex_t}
\caption{\label{FIG001}
\textbf{a)}~The fundamental static potential $V^{(1/2)}$ as a function of the separation $R$.
\textbf{b)}~``Pure Wilson loop static potentials'' $V^{(J)}$ for different representations as functions of the separation $R$.
\textbf{c)}~Ratios $V^{(J)} / V^{(1/2)}$ as functions of the separation $R$ compared to the Casimir scaling expectation.
}
\end{center}
\end{figure}
Numerical results for higher representation potentials ($J=1,\ldots,5/2$) are shown in Figure~\ref{FIG001}b. According to the Casimir scaling hypothesis these potentials are supposed to fulfill
\begin{eqnarray}
V^{(1/2)}(R) \ \ \approx \ \ \frac{V^{(1)}(R)}{8/3} \ \ \approx \ \ \frac{V^{(3/2)}(R)}{5} \ \ \approx \ \ \frac{V^{(2)}(R)}{8} \ \ \approx \ \ \frac{V^{(5/2)}(R)}{35/3}
\end{eqnarray}
for intermediate separations. Figure~\ref{FIG001}c shows that this is the case for the adjoint potential, while there are slight deviations for $J \geq 3/2$. This is in agreement with what has been observed in 4d SU(2) lattice gauge theory \cite{Piccioni:2005un}.
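The denominators above are the ratios $C_J / C_{1/2}$ of SU(2) quadratic Casimir eigenvalues, $C_J = J(J+1)$. A minimal check of these numbers:

```python
from fractions import Fraction

def casimir_ratio(J):
    """Ratio C_J / C_{1/2} of SU(2) quadratic Casimirs, with C_J = J (J + 1)."""
    C = lambda j: j * (j + 1)
    return C(J) / C(Fraction(1, 2))

# J = 1, 3/2, 2, 5/2  ->  8/3, 5, 8, 35/3
ratios = [casimir_ratio(Fraction(k, 2)) for k in range(2, 6)]
```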
Note that there is no sign of string breaking for the adjoint potential even for separations $R \raisebox{-0.5ex}{$\,\stackrel{>}{\scriptstyle\sim}\,$} 1.6 \, \textrm{fm}$. This is because string trial states have poor overlap with the ground state, which is expected to resemble a two-gluelump state. To overcome this problem, one uses a whole set of trial states containing not only string trial states (\ref{EQN003}), but also ``two-gluelump trial states''
\begin{eqnarray}
\sum_{j = x,y,z} G_j(\mathbf{x}) G_j(\mathbf{y}) | \Omega \rangle \quad , \quad G_j(\mathbf{x}) \ \ = \ \ \textrm{Tr}\Big(\phi^{(1)}(\mathbf{x}) B_j(\mathbf{x})\Big) \quad , \quad |\mathbf{x}-\mathbf{y}| \ \ = \ \ R .
\end{eqnarray}
We extract the adjoint potential from the corresponding correlation matrices by solving a generalized eigenvalue problem and by computing effective masses (for details cf.\ \cite{Szasz:2008qk}). Results are shown in Figure~\ref{FIG002}a. The potential saturates at around two times the magnetic gluelump mass (which is $\approx 1000 \, \textrm{MeV}$ at $g = 12.5$ in this regularization \cite{Szasz:2008qk}) at separation $R_\textrm{\scriptsize sb} \approx 1.0 \, \textrm{fm}$. This string breaking distance as well as the observed level ordering (the first excited state is an excited string state for small separations, then becomes a two gluelump state and finally a string state again, etc.) is in agreement with results from lattice computations \cite{Jorysz:1987qj,deForcrand:1999kr}.
\begin{figure}[b]
\begin{center}
\input{FIG002.pstex_t}
\caption{\label{FIG002}
\textbf{a)}~The adjoint static potential $V^{(1)}$ and its first two excitations as functions of the separation $R$.
\textbf{b)}~Overlaps of the ground state approximation to the trial states as functions of the separation $R$.
\textbf{c)}~Overlaps of the first excited state approximation to the trial states as functions of the separation $R$.
}
\end{center}
\end{figure}
To investigate whether the gluonic string really breaks when two static charges are separated adiabatically, we perform a mixing analysis. During the computation of effective masses we obtain approximations of the ground state and the first excited state,
\begin{eqnarray}
| 0 \rangle \ \ \approx \ \ a_\textrm{\scriptsize string}^0 | \textrm{string} \rangle + a_\textrm{\scriptsize 2g-lump}^0 | \textrm{2g-lump} \rangle \quad , \quad | 1 \rangle \ \ \approx \ \ a_\textrm{\scriptsize string}^1 | \textrm{string} \rangle + a_\textrm{\scriptsize 2g-lump}^1 | \textrm{2g-lump} \rangle ,
\end{eqnarray}
where $| \textrm{string} \rangle$ and $| \textrm{2g-lump} \rangle$ are normalized trial states. The overlaps $|a_{\ldots}^j|^2$ are shown as functions of the separation in Figures~\ref{FIG002}b and \ref{FIG002}c. The transition between string and two-gluelump states is rapid but smooth, indicating that string breaking is present in the pseudoparticle approach.
\section{Conclusions and outlook}
We have computed static potentials for various representations within the pseudoparticle approach. While the fundamental static potential is linear for large separations, we clearly observe string breaking for the adjoint representation. Both the string breaking distance $R_\textrm{\scriptsize sb} \approx 1.0 \, \textrm{fm}$ and the level ordering are in agreement with lattice results, and a mixing analysis indicates a rapid but smooth transition between a string and a two-gluelump state, when two static charges are separated adiabatically. Moreover, higher representation potentials exhibit Casimir scaling. We conclude that the pseudoparticle approach is a model that is able to reproduce many essential features of SU(2) Yang-Mills theory.
Currently our efforts are focused on applying the pseudoparticle approach to fermionic theories. First steps in this direction have been successful \cite{Wagner:2007he,Wagner:2007av}. Now we intend to consider QCD, where a cheap computation of exact all-to-all propagators should be possible due to the small number of degrees of freedom involved. Another appealing possibility is an application to supersymmetric theories, where an exact realization of supersymmetry might be possible due to translational invariance present in pseudoparticle ensembles.
\begin{acknowledgments}
MW would like to thank M.~Faber, J.~Greensite and M.~Polikarpov for the invitation to this conference. Moreover, we acknowledge useful conversations with M.~Ammon, G.~Bali, P.~de~Forcrand, H.~Hofmann, E.-M.~Ilgenfritz, F.~Lenz and M.~M\"uller-Preussker.
\end{acknowledgments}
\subsection{Power System Model}
We consider a connected power network composed of $n$ buses indexed by $i \in \mathcal{V} := \{1,\dots, n\} $ and transmission lines denoted by unordered pairs $\{i,j\} \in \mathcal{E}$, where $\mathcal{E}$ is a set of $2$-element subsets of $\mathcal{V}$. As illustrated by the block diagram in Fig. \ref{fig:model2}, the system dynamics are modeled as a feedback interconnection of bus dynamics and network dynamics.
The input signals $p_\mathrm{in} := \left(p_{\mathrm{in},i}, i \in \mathcal{V} \right) \in \real^n$ and $d_\mathrm{p} := \left(d_{\mathrm{p},i}, i \in \mathcal{V} \right) \in \real^n$ represent power injection set point changes and power fluctuations around the set point, respectively, and $n_\omega := \left(n_{\omega,i}, i \in \mathcal{V} \right) \in \real^n $ represents frequency measurement noise. The weighting functions $\hat{W}_\mathrm{p}(s)$ and $\hat{W}_\omega{}(s)$ can be used to adjust the size of these disturbances in the usual way. The output signal $\omega:=\left(\omega_i, i \in \mathcal{V} \right) \in \real^n$ represents the bus frequency deviation from its nominal value. We now discuss the dynamic elements in more detail.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{power_network_model_F_copy.eps}
\caption{Block diagram of power network.}\label{fig:model2}
\end{figure}
\subsubsection{Bus Dynamics} The bus dynamics that maps the net power bus imbalance $u_\mathrm{P} = \left( u_{\mathrm{P},i}, i \in \mathcal{V} \right) \in \real^n$ to the vector of frequency deviations $\omega$ can be described as a feedback loop that comprises a forward-path $\hat{G}(s)$ and a feedback-path $\hat{C}(s)$, where $\hat{G}(s) := \diag {\hat{g}_i(s), i \in \mathcal{V}}$ and $\hat{C}(s) := \diag {\hat{c}_i(s), i \in \mathcal{V}} $ are the transfer function matrices of generators and inverters, respectively.
\paragraph{Generator Dynamics}
The generator dynamics are composed of the standard swing equations with a turbine, i.e.,
\begin{equation}\label{eq:sw}
m_i \dot{\omega}_i = - d_i \omega_i + q_{\mathrm{r},i} +q_{\mathrm{t},i} + u_{\mathrm{P},i} \,,
\end{equation}
where $m_i>0$ denotes the aggregate generator inertia, $d_i>0$ the aggregate generator damping, $q_{\mathrm{r},i}$ the controllable input power produced by the grid-connected inverter, and $q_{\mathrm{t},i}$ the change in the mechanical power output of the turbine.
The turbine does not react to the frequency deviation $\omega_i$ until its magnitude exceeds a preset threshold $\omega_{\epsilon}\geq0$, i.e.,
\begin{equation}\label{eq:turb}
\tau_i\dot q_{\mathrm{t},i}=\varphi_{\omega_\epsilon}(\omega_i) - q_{\mathrm{t},i}
\end{equation}
with
$$\varphi_{\omega_\epsilon}(\omega_i):=
\begin{cases}
-{r_{\mathrm{t},i}^{-1}}(\omega_i+\omega_{\epsilon}) & \omega_i \leq -\omega_{\epsilon}\\
0 & -\omega_{\epsilon} < \omega_i < \omega_{\epsilon}\\
-{r_{\mathrm{t},i}^{-1}}(\omega_i-\omega_{\epsilon}) & \omega_i \geq \omega_{\epsilon}
\end{cases}\,,
$$
where $\tau_i>0$ represents the turbine time constant and $r_{\mathrm{t},i}>0$ the turbine droop coefficient.
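A minimal numerical sketch of the deadband response $\varphi_{\omega_\epsilon}$ and the first-order turbine lag \eqref{eq:turb} (ours; the discretization and all parameter values are illustrative):

```python
def phi(omega, omega_eps, r_t):
    """Deadband turbine response phi_{omega_eps}(omega): zero inside the
    deadband, affine droop response of slope -1/r_t outside."""
    if omega <= -omega_eps:
        return -(omega + omega_eps) / r_t
    if omega >= omega_eps:
        return -(omega - omega_eps) / r_t
    return 0.0

def turbine_step(omega_traj, dt, tau, omega_eps, r_t, q0=0.0):
    """Forward-Euler integration of tau * qdot_t = phi(omega) - q_t along a
    given frequency trajectory."""
    q, out = q0, []
    for w in omega_traj:
        q += dt / tau * (phi(w, omega_eps, r_t) - q)
        out.append(q)
    return out
```

Note that $\varphi_{\omega_\epsilon}$ is continuous at the deadband edges, so the turbine input ramps up smoothly once the threshold is crossed.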
Two special cases of interest are:
\begin{dyn-g}[Standard swing dynamics] \label{dy:sw}
When $|\omega_i(t)| < \omega_{\epsilon}$, the turbines are not triggered and the generator dynamics can be described by the transfer function
\begin{equation}\label{eq:dy-sw}
\hat{g}_i(s) = \frac{1}{m_i s + d_i}
\end{equation}
which is exactly the standard swing dynamics.
\end{dyn-g}
\begin{dyn-g}[Second-order turbine dynamics]
When $\omega_{\epsilon} = 0$, the turbines are constantly triggered and the generator dynamics can be described by the transfer function
\begin{equation} \label{eq:dy-sw-t}
\hat{g}_i(s) = \frac{ \tau_i s + 1 }{m_i \tau_i s^2 + \left(m_i + d_i \tau_i \right) s + d_i + r_{\mathrm{t},i}^{-1}}\;.
\end{equation}
\end{dyn-g}
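As a consistency check (ours, not the paper's), one can verify numerically that closing the turbine lag $\hat q_{\mathrm{t}}(s) = -r_\mathrm{t}^{-1}\hat\omega(s)/(\tau s+1)$ around the swing dynamics $1/(ms+d)$ reproduces the second-order form \eqref{eq:dy-sw-t}; parameter values below are arbitrary:

```python
def g_swing_turbine(s, m, d, tau, rt):
    """Swing dynamics 1/(m s + d) with the turbine feedback
    q_t(s) = -(1/r_t) / (tau s + 1) * omega(s) closed around it."""
    return 1.0 / (m * s + d + (1.0 / rt) / (tau * s + 1.0))

def g_second_order(s, m, d, tau, rt):
    """Claimed second-order form (tau s + 1) / (m tau s^2 + (m + d tau) s + d + 1/r_t)."""
    return (tau * s + 1.0) / (m * tau * s ** 2 + (m + d * tau) * s + d + 1.0 / rt)
```

The two expressions agree identically, since $(ms+d)(\tau s+1) + r_\mathrm{t}^{-1} = m\tau s^2 + (m+d\tau)s + d + r_\mathrm{t}^{-1}$.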
\paragraph{Inverter Dynamics}
Since power electronics are significantly faster than the electro-mechanical dynamics of generators, we assume that each inverter measures the local grid frequency deviation $\omega_i$ and instantaneously updates the output power $q_{\mathrm{r},i}$. Different control laws can be used to map $\omega_i$ to $q_{\mathrm{r},i}$. We represent such laws using a transfer function $\hat{c}_i(s)$. The two most common ones are:
\begin{dyn-i}[Droop Control]
This control law can provide additional droop capabilities and is given by
\begin{equation} \label{eq:dy-dc}
\hat{c}_i(s) = -r_{\mathrm{r},i}^{-1}\;,
\end{equation}
where $r_{\mathrm{r},i}>0$ is the droop coefficient.
\end{dyn-i}
\begin{dyn-i}[Virtual Inertia]
Besides providing additional droop capabilities, this control law can compensate for the loss of inertia and is given by
\begin{equation} \label{eq:dy-vi}
\hat{c}_i(s) = -\left(m_{\mathrm{v},i} s + r_{\mathrm{r},i}^{-1}\right)\;,
\end{equation}
where $m_{\mathrm{v},i}>0$ is the virtual inertia constant.
\end{dyn-i}
\subsubsection{Network Dynamics}
The network power fluctuations $p_\mathrm{e} := \left(p_{\mathrm{e},i}, i \in \mathcal{V} \right) \in \real^n$ are given by a linearized model of the power flow equations~\cite{Purchala2005dc-flow}:
\begin{align}
\hat p_\mathrm{e}(s) = \frac{L_\mathrm{B}}{s} \hat \omega(s)\;,\label{eq:N}
\end{align}
where $\hat p_\mathrm{e}(s)$ and $\hat \omega(s)$ denote the Laplace transforms of $p_\mathrm{e}$ and $\omega$, respectively.\footnote{We use a hat to distinguish a Laplace transform from its time-domain counterpart.} The matrix $L_\mathrm{B}$ is an undirected weighted Laplacian matrix of the network with elements
\[
L_{\mathrm{B},{ij}}=\partial_{\theta_j}{\sum_{k=1}^n|V_i||V_k|b_{ik}\sin(\theta_i-\theta_k)}\Bigr|_{\theta=\theta_0}.
\]
Here, $\theta := \left(\theta_i, i \in \mathcal{V} \right) \in \real^n$ denotes the angle deviation from its nominal, $\theta_0 := \left(\theta_{0,i}, i \in \mathcal{V} \right) \in \real^n$ are the equilibrium angles, $|V_i|$ is the (constant) voltage magnitude at bus $i$, and $b_{ij}$ is the line $\{i,j\}$ susceptance.
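For concreteness, $L_\mathrm{B}$ can be assembled line by line with edge weights $|V_i||V_j|b_{ij}\cos(\theta_{0,i}-\theta_{0,j})$; the sketch below (ours) uses made-up susceptances and equilibrium angles:

```python
import numpy as np

def laplacian(n, lines, V, theta0):
    """Weighted Laplacian L_B: edge {i, j} carries weight
    |V_i| |V_j| b_ij cos(theta0_i - theta0_j)."""
    L = np.zeros((n, n))
    for (i, j, b) in lines:
        w = V[i] * V[j] * b * np.cos(theta0[i] - theta0[j])
        L[i, j] -= w
        L[j, i] -= w
        L[i, i] += w
        L[j, j] += w
    return L

# Small illustrative 3-bus ring (susceptances and angles are arbitrary):
lines = [(0, 1, 5.0), (1, 2, 4.0), (0, 2, 3.0)]
V = np.ones(3)
theta0 = np.array([0.0, 0.1, -0.05])
L_B = laplacian(3, lines, V, theta0)
```

With angle differences below $\pi/2$ all edge weights are positive, so $L_\mathrm{B}$ is symmetric positive semidefinite with a single zero eigenvalue on a connected network.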
\subsubsection{Closed-loop Dynamics}
We will investigate the closed-loop responses of the system in Fig.~\ref{fig:model2} from the power injection set point changes $p_\mathrm{in}$, the power fluctuations around the set point $d_\mathrm{p}$, and frequency measurement noise $n_\omega$ to frequency deviations $\omega$, which can be described compactly by the transfer function matrix
\begin{equation}\label{eq:closed-loop}
\hat{T}(s) := \begin{bmatrix}\hat{T}_{\omega\mathrm{p}}(s) & \hat{T}_{\omega \mathrm{dn}}(s):=\begin{bmatrix}\hat{T}_{\omega\mathrm{d}} (s) & \hat{T}_{\omega\mathrm{n}} (s)\end{bmatrix}\end{bmatrix}\;.
\end{equation}
\begin{rem}[Model Assumptions]
The \emph{linearized} network model \eqref{eq:closed-loop}
implicitly makes the following assumptions which are standard and well-justified for frequency control on transmission networks \cite{kundur_power_1994}:
\begin{itemize}
\item Bus voltage magnitudes $|V_i|$'s are constant;
we are not modeling the dynamics of exciters used for voltage control; these are assumed to operate at a much faster time-scale.
\item Lines $\{i,j\}$ are lossless.
\item Reactive power flows do not affect bus voltage phase angles and frequencies.
\item Without loss of generality, the equilibrium angle difference ($\theta_{0,i}-\theta_{0,j}$) across each line is less than $\pi/2$.
\end{itemize}
For a first principle derivation of the model we refer to \cite[Section VII]{Zhao:2013ts}. For applications of similar models for frequency control within the control literature, see, e.g., \cite{Zhao:2014bp,Li:2016tcns,mallada2017optimal}.
\end{rem}
\begin{rem}[Internal Stability of \eqref{eq:closed-loop}]
Throughout this paper we consider feedback interconnections of positive real and strictly positive real subsystems. Internal stability follows from classical results~\cite{khalil2002nonlinear}. Since the focus of this paper is on performance, we do not discuss internal stability in detail here; we refer the reader to \cite{pm2018tcns} for a thorough treatment of similar feedback interconnections. From now on, a standing assumption (which can be verified) is that the feedback interconnection described in Fig. \ref{fig:model2} is internally stable.
\end{rem}
\subsection{Performance Metrics} \label{ssec:metrics}
Having introduced the model of the power network, we are now ready to define the performance metrics used in this paper to compare different inverter control laws.
\subsubsection{Steady-state Effort Share}
This metric measures the fraction of the power imbalance addressed by inverters, which is calculated as the absolute value of the ratio between the inverter steady-state input power and the total power imbalance, i.e.,
\begin{align}\label{eq:ES}
\mathrm{ES} := \left|\frac{\sum_{i=1}^n \hat{c}_i(0) \omega_{\mathrm{ss},i}}{\sum_{i=1}^n p_{\mathrm{in},i}(0^+) }\right|\;,
\end{align}
when the system $\hat{T}_{\omega\mathrm{p}}$ undergoes a step change in power excitation.
Here, $\hat{c}_i(0)$ is the dc gain of the inverter and $\omega_{\mathrm{ss},i}$ is the steady-state frequency deviation.
\subsubsection{Power Fluctuations and Measurement Noise}
This metric measures how the relative intensity of power fluctuations and measurement noise affect the frequency deviations, as quantified by the $\mathcal{H}_2$ norm of the transfer function $\hat{T}_{\omega\mathrm{dn}}$:
\begin{align}
&\|\hat{T}_{\omega\mathrm{dn}}\|_{\mathcal{H}_2}^2 \label{eq:h2_def_E}\\&:=\!\begin{cases}
\!\displaystyle\frac{1}{2\pi}\!\!\int_{-\infty{}}^\infty{}\!\!\!\tr{\hat{T}_{\omega\mathrm{dn}}(\boldsymbol{j\omega})^\ast \hat{T}_{\omega\mathrm{dn}}(\boldsymbol{j\omega})}\mathrm{d}\boldsymbol{\omega}&\!\!\textrm{if $\hat{T}_{\omega\mathrm{dn}}$ is stable,}\\
\!\infty&\!\!\textrm{otherwise.}\nonumber\footnotemark
\end{cases}
\end{align}
The quantity $\|\hat{T}_{\omega\mathrm{dn}}\|_{\mathcal{H}_2}$ has several standard interpretations in terms of the input-output behavior of the system $\hat{T}_{\omega\mathrm{dn}}$~\cite{g2015tran}. In particular, in the stochastic setting, when the disturbance signals $d_{\mathrm{p},i}$ and $n_{\omega,i}$ are independent, zero mean, unit variance, white noise, then $\lim_{t\to \infty}\mathbb{E} \left[\omega(t)^T \omega(t)\right]=\|\hat{T}_{\omega\mathrm{dn}}\|_{\mathcal{H}_2}^2$.
This means that the sum of the steady-state variances in the output of $\hat{T}_{\omega\mathrm{dn}}$ in response to these disturbances equals the squared $\mathcal{H}_2$ norm of $\hat{T}_{\omega\mathrm{dn}}$. Thus the $\mathcal{H}_2$ norm gives a precise measure of how the intensity of power fluctuations and measurement noise affects the system's frequency deviations.
\footnotetext{$\boldsymbol{j}$ represents the imaginary unit which satisfies $\boldsymbol{j}^2=-1$ and $\boldsymbol{\omega}$ represents the frequency variable.}
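Numerically, the squared $\mathcal{H}_2$ norm of a stable system $C(sI-A)^{-1}B$ can be computed from the controllability Gramian. The sketch below (ours, with illustrative parameters) checks the textbook value $1/(2md)$ for the first-order swing model $1/(ms+d)$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm_sq(A, B, C):
    """Squared H2 norm of C (sI - A)^{-1} B: trace(C P C^T), where the
    controllability Gramian P solves A P + P A^T + B B^T = 0."""
    P = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.trace(C @ P @ C.T))

# First-order swing model g(s) = 1/(m s + d); its squared H2 norm is 1/(2 m d).
m, d = 2.0, 0.5
A = np.array([[-d / m]])
B = np.array([[1.0 / m]])
C = np.array([[1.0]])
```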
\subsubsection{Synchronization Cost}
This metric measures the size of individual bus deviations from the synchronous response when the system $\hat{T}_{\omega\mathrm{p}}$ is subject to a step change in power excitation given by $p_\mathrm{in} = u_0 \mathds{1}_{ t \geq 0 } \in \real^n$, where $u_0 \in \real^n$ is a given vector direction and $\mathds{1}_{ t \geq 0 }$ is the unit-step function \cite{p2017ccc}. This is quantified by the squared $\mathcal{L}_2$ norm of the vector of deviations $\tilde{\omega} := \omega - \bar{\omega} \mathbbold{1}_n \in \real^n$, i.e.,
\begin{equation}\label{eq:sync_cost}
\|\tilde{\omega}\|_2^2 := \sum_{i=1}^n \int_0^\infty \tilde{\omega}_i(t)^2 \mathrm{d}t\;.
\end{equation}
Here, $\bar{\omega}:= \left(\sum_{i=1}^n m_i\omega_i\right)/\left(\sum_{i=1}^n m_i\right)$ is the system frequency that corresponds to the inertia-weighted average of bus frequency deviations and $\mathbbold{1}_n \in \real^n $ is the vector of all ones.
\subsubsection{Nadir} This metric measures the minimum post-contingency frequency of a power system, which can be quantified by the $\mathcal{L}_{\infty}$ norm of the system frequency $\bar{\omega}$, i.e.,
\begin{equation}\label{eq:Nadir}
\|\bar{\omega}\|_\infty := \max_{t\geq0} |\bar{\omega}(t)|\;,
\end{equation}
when the system $\hat{T}_{\omega\mathrm{p}}$ has as input a step change in power excitation \cite{p2017ccc}, i.e., $p_\mathrm{in} = u_0 \mathds{1}_{ t \geq 0 } \in \real^n$. This quantity matters in that deeper Nadir increases the risk of under-frequency load shedding and cascading outrages.
\subsection{Diagonalization}
In order to make the analysis tractable, we require the closed-loop transfer functions to be diagonalizable. This is ensured by the following assumption, which is a generalization of \cite{pm2019preprint,p2017ccc}.
\begin{ass}[Proportionality]\label{ass:proportion}
There exists a proportionality matrix $F := \diag {f_i, i \in \mathcal{V}} \in \real_{\geq 0}^{n \times n} $ such that
\[ \hat{G}(s) = \hat{g}_\mathrm{o}(s) F^{-1} \qquad \text{and} \qquad \hat{C}(s) = \hat{c}_\mathrm{o}(s) F\]
where $\hat{g}_\mathrm{o}(s)$ and $\hat{c}_\mathrm{o}(s)$ are called the representative generator and the representative inverter, respectively.
\end{ass}
\begin{rem}[Proportionality parameters]
The parameters $f_i$ represent the individual machine ratings. The precise definition of the $f_i$ is not critical for our analysis, provided that Assumption \ref{ass:proportion} is satisfied. Other alternatives could include $f_i=m_i$ or $f_i=m_i/m$, where $m$ is, for example, either the average or maximum generator inertia. The practical relevance of Assumption~\ref{ass:proportion} is justified, for example, by the empirical values reported in \cite{oakridge2013}, which show that, at least in order of magnitude, Assumption \ref{ass:proportion} is a reasonable first-cut approximation to heterogeneity.
\end{rem}
Under Assumption~\ref{ass:proportion}, the representative generator of \eqref{eq:dy-sw} and \eqref{eq:dy-sw-t} are given by
\begin{equation}\label{eq:go-sw}
\hat{g}_\mathrm{o}(s) = \frac{1}{m s + d}
\end{equation}
and
\begin{equation}\label{eq:go-sw-tb}
\hat{g}_\mathrm{o}(s) = \frac{ \tau s + 1 }{m \tau s^2 + \left(m + d \tau \right) s + d + r_\mathrm{t}^{-1}}\;, \footnote{We use variables without subscript $i$ to denote parameters of representative generator and inverter.}
\end{equation}
respectively, with $m_i=f_im$, $d_i=f_id$, $r_{\mathrm{t},i}=r_{\mathrm{t}}/f_i$, and $\tau_i=\tau$.
Similarly,
the representative inverters for droop control (DC) \eqref{eq:dy-dc} and virtual inertia (VI) \eqref{eq:dy-vi} are given by
\begin{equation}\label{eq:co-dc}
\hat{c}_\mathrm{o}(s) = -r_\mathrm{r}^{-1}
\end{equation}
and
\begin{equation}\label{eq:co-vi}
\hat{c}_\mathrm{o}(s) = -\left(m_\mathrm{v} s + r_\mathrm{r}^{-1}\right)\;,
\end{equation}
with $m_{\mathrm{v},i}=f_im_{\mathrm{v}}$ and $r_{\mathrm{r},i}=r_{\mathrm{r}}/f_i$.
Using Assumption~\ref{ass:proportion}, we can derive a diagonalized version of \eqref{eq:closed-loop}. First, we rewrite
\[\hat{G}(s) = F^{-\frac{1}{2}} [\hat{g}_\mathrm{o}(s)I_n] F^{-\frac{1}{2}} \quad\text{and}\quad \hat{C}(s) = F^{\frac{1}{2}} [\hat{c}_\mathrm{o}(s)I_n] F^{\frac{1}{2}}\]
as shown in Fig. \ref{fig:diag1}, and after a loop transformation obtain Fig. \ref{fig:diag2}. Then, we define the scaled Laplacian matrix
\begin{equation}\label{eq:scale-L}
L_\mathrm{F} := F^{-\frac{1}{2}} L_\mathrm{B} F^{-\frac{1}{2}}
\end{equation}
by grouping the terms in the upper block of Fig. \ref{fig:diag2}. Moreover, since $L_\mathrm{F} \in \real^{n \times n}$ is symmetric positive semidefinite, it is real orthogonally diagonalizable with non-negative eigenvalues~\cite{Horn2012MA}. Thus, there exists an orthogonal matrix $V \in \real^{n \times n}$ with $V^T V = V V^T = I_n$, such that
\begin{equation}\label{eq:ortho-diag}
L_\mathrm{F} = V \Lambda V^T\;,
\end{equation}
where $\Lambda := \diag{\lambda_k, k \in \{1,\dots, n\}} \in \real_{\geq0}^{n \times n}$ with $\lambda_k$ being the $k$th eigenvalue of $L_\mathrm{F}$ ordered non-decreasingly $(0 = \lambda_1 < \lambda_2 \leq \ldots \leq\lambda_n)$\footnote{Recall that we assume the power network is connected, which means that $L_\mathrm{F}$ has a single eigenvalue at the origin.} and $V := \begin{bmatrix} (\sum_{i=1}^n f_i)^{-\frac{1}{2}} F^{\frac{1}{2}} \mathbbold{1}_n & V_{\bot} \end{bmatrix}$ with $V_{\bot} := \begin{bmatrix} v_2 &\ldots & v_n \end{bmatrix}$ composed of the eigenvectors $v_k$ associated with the $\lambda_k$.\footnote{We use $k$ and $l$ to index dynamic modes but $i$ and $j$ to index bus numbers.} Now, applying \eqref{eq:scale-L} and \eqref{eq:ortho-diag} to Fig. \ref{fig:diag2} and rearranging blocks of $V$ and $V^T$ results in Fig. \ref{fig:diag3}. Finally, moving the block of $\hat{c}_\mathrm{o}(s) I_n$ ahead of the summing junction and combining the two parallel paths produces Fig. \ref{fig:block-diag}, where the boxed part is fully diagonalized.
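The diagonalization \eqref{eq:scale-L}--\eqref{eq:ortho-diag} is easy to verify numerically. The sketch below (ours, on a randomly generated network) also checks that the first column of $V$ is $(\sum_i f_i)^{-1/2}F^{1/2}\mathbbold{1}_n$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Random fully connected network with positive line weights (illustrative):
W = np.triu(rng.uniform(0.5, 2.0, size=(n, n)), k=1)
W = W + W.T
L_B = np.diag(W.sum(axis=1)) - W                  # network Laplacian
f = rng.uniform(0.5, 3.0, size=n)                 # bus rating parameters f_i
Fs = np.diag(f ** -0.5)
L_F = Fs @ L_B @ Fs                               # scaled Laplacian F^{-1/2} L_B F^{-1/2}
lam, V = np.linalg.eigh(L_F)                      # eigenvalues in non-decreasing order
v1 = np.sqrt(f) / np.sqrt(f.sum())                # predicted eigenvector for lambda_1 = 0
```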
Now, by defining
the closed-loop with a forward-path $\hat{g}_\mathrm{o}(s) I_n$ and a feedback-path $\left(\Lambda/s - \hat{c}_\mathrm{o}(s) I_n\right)$ as
\begin{equation*}
\hat{H}_\mathrm{p}(s) = \diag{\hat{h}_{\mathrm{p},k}(s), k \in \{1,\dots, n\}}
\end{equation*}
where
\begin{equation} \label{eq:hp-s}
\hat{h}_{\mathrm{p},k}(s) = \frac{\hat{g}_\mathrm{o}(s)}{1+\hat{g}_\mathrm{o}(s)\left(\lambda_k/s-\hat{c}_\mathrm{o}(s)\right)}\;,
\end{equation}
and $\hat{H}_\omega(s) = \hat{c}_\mathrm{o}(s) \hat{H}_\mathrm{p}(s)$, i.e.,
\begin{equation*}
\hat{H}_\omega(s) = \diag{\hat{h}_{\omega,k}(s), k \in \{1,\dots, n\}}
\end{equation*}
where
\begin{equation} \label{eq:homega-s}
\hat{h}_{\omega,k}(s) = \hat{c}_\mathrm{o}(s)\hat{h}_{\mathrm{p},k}(s)\;,
\end{equation}
the closed-loop transfer functions from $p_\mathrm{in}$, $d_\mathrm{p}$, and $n_\omega$ to $\omega$ become
\begin{subequations}\label{eq:T-diag}
\begin{equation}\label{eq:Tp}
\hat{T}_{\omega\mathrm{p}} (s) = F^{-\frac{1}{2}} V \hat{H}_\mathrm{p}(s) V^T F^{-\frac{1}{2}}\;,
\end{equation}
\begin{equation}\label{eq:Td}
\hat{T}_{\omega\mathrm{d}} (s) = F^{-\frac{1}{2}} V \hat{H}_\mathrm{p}(s) V^T F^{-\frac{1}{2}}\hat{W}_\mathrm{p}(s)\;,
\end{equation}
\begin{equation}
\hat{T}_{\omega\mathrm{n}} (s) = F^{-\frac{1}{2}} V \hat{H}_\omega(s) V^T F^{\frac{1}{2}}\hat{W}_\omega{}(s)\;,
\end{equation}
\end{subequations}
respectively.
Note that, depending on the specific generator and inverter dynamics involved, we may append subscripts to the name of a transfer function without further declaration in the rest of this paper. For example, we may append `T' if the turbine is triggered and `DC' if the inverter uses droop control, as in $\hat{h}_{\mathrm{p},k,\mathrm{T,DC}}(s)$.
\begin{figure*}[!t]
\centering
\subfigure[]
{\includegraphics[width=0.32\textwidth]{diag_step1_F_copy.eps}\label{fig:diag1}}
\hfil
\subfigure[]
{\includegraphics[width=0.32\textwidth]{diag_step2_F_copy.eps}\label{fig:diag2}}
\hfil
\subfigure[]
{\includegraphics[width=0.32\textwidth]{diag_step3_F_copy.eps}\label{fig:diag3}}
\caption{Equivalent block diagrams of power network under proportionality assumption.}
\label{fig:block-diag-process}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{block_diag_F_copy.eps}
\caption{Diagonalized block diagram of power network.}\label{fig:block-diag}
\end{figure}
\subsection{Generic Results for Performance Metrics}
We now derive some important building blocks required for the performance analysis of the system $\hat{T}$ described in \eqref{eq:T-diag}. As described in Section \ref{ssec:metrics}, the sensitivity to power fluctuations and measurement noise can be evaluated through the $\mathcal{H}_2$ norm of the system $\hat{T}_{\omega\mathrm{dn}}$, while the steady-state effort share, synchronization cost, and Nadir can all be characterized by a step response of the system $\hat{T}_{\omega\mathrm{p}}$. There are two scenarios of interest.
\begin{ass}[Proportional weighting scenario]\label{ass:noise}\
\begin{itemize}
\item
The noise weighting functions are given by
\begin{equation*}
\hat{W}_\mathrm{p}(s) = \kappa_\mathrm{p} F^{\frac{1}{2}} \qquad \text{and} \qquad \hat{W}_\omega(s) = \kappa_\omega F^{-\frac{1}{2}},
\end{equation*}
where $\kappa_\mathrm{p}>0$ and $\kappa_\omega>0$ are weighting constants.
\item
$|\omega_i(t)| < \omega_{\epsilon}$, $\forall i \in \mathcal{V}$ and $t\geq0$ such that turbines will not be triggered.
\end{itemize}
\end{ass}
\begin{ass}[Step input scenario]\label{ass:step}\
\begin{itemize}
\item
There is a step change as defined in Section \ref{ssec:metrics} on the power injection set point, i.e., $p_\mathrm{in} = u_0 \mathds{1}_{ t \geq 0 }$, $d_\mathrm{p}= \mathbbold{0}_n$, and $n_\omega = \mathbbold{0}_n$ with $\mathbbold{0}_n \in \real^n $ being the vector of all zeros.
\item
$\omega_{\epsilon} = 0$ such that turbines are constantly triggered.
\end{itemize}
\end{ass}
\begin{rem}[Weighting assumption]
As a natural counterpart of Assumption~\ref{ass:proportion}, we look at the case when the power fluctuations and measurement noise are weighted directly and inversely proportional to the square root of the bus ratings, respectively. In the case of $\hat W_\mathrm{p}(s)$, this is equivalent to assuming that demand fluctuation variances are proportional to the bus ratings, which is in agreement with the central limit theorem. For $\hat W_\mathrm{\omega}(s)$, this is equivalent to assuming the frequency measurement noise variances are inversely proportional to the bus ratings, which is in line with the inverse relationship between jitter variance and power consumption for an oscillator in phase-locked-loop \cite{Weigandt1994}.
\end{rem}
\subsubsection{Steady-state Effort Share}
As indicated by \eqref{eq:ES}, the key to computing the steady-state effort share is the steady-state frequency deviation $\omega_{\mathrm{ss}}$ of the system $\hat{T}_{\omega\mathrm{p}}$. When the system synchronizes, the steady-state frequency deviation is given by $\omega_{\mathrm{ss}} = \omega_{\mathrm{syn}} \mathbbold{1}_n$
and $\omega_{\mathrm{syn}}$ is called the synchronous frequency. In the absence of a secondary control layer, e.g., automatic generation control \cite{d1973tran}, the system can synchronize with a nontrivial frequency deviation, i.e., $\omega_{\mathrm{syn}} \neq 0$.
The following lemma provides a general expression for $\omega_{\mathrm{syn}}$ in our setting.
\begin{lem}[Synchronous frequency]\label{lem:syn-fre}
Let Assumption~\ref{ass:step} hold.
If $q_{\mathrm{r},i}$ is determined by a control law $\hat{c}_i(s)$, then the output $\omega$ of the system $\hat{T}_{\omega\mathrm{p}}$ synchronizes to the steady-state frequency deviation $\omega_{\mathrm{ss}} = \omega_{\mathrm{syn}} \mathbbold{1}_n$ with
\begin{equation}
\omega_{\mathrm{syn}} = \dfrac{\sum_{i=1}^n u_{0,i}}{\sum_{i=1}^n \left( d_i + {r_{\mathrm{t},i}^{-1} - \hat{c}_i(0)} \right)}\;. \label{eq:ome-syn}
\end{equation}
\end{lem}
\begin{proof}
Combining \eqref{eq:sw} and \eqref{eq:N} through the relationship $u_\mathrm{P} = p_\mathrm{in} - p_\mathrm{e}$,
we get the (partial) state-space representation of the system $\hat{T}_{\omega\mathrm{p}}$ as
\begin{subequations}\label{eq:ss}
\begin{align}
\dot{\theta} =&\ \omega \,,\\
M \dot{\omega} =& -D \omega -L_\mathrm{B} \theta + q_\mathrm{r} + q_\mathrm{t} + p_\mathrm{in} \,, \label{eq:ss-fre}
\end{align}
\end{subequations}
where $M := \diag{m_i, i \in \mathcal{V}} \in \real_{\geq0}^{n \times n}$, $D := \diag{d_i, i \in \mathcal{V}} \in \real_{\geq0}^{n \times n}$, $q_\mathrm{r} := \left(q_{\mathrm{r},i}, i \in \mathcal{V} \right) \in \real^n$, and $q_\mathrm{t} := \left(q_{\mathrm{t},i}, i \in \mathcal{V} \right) \in \real^n$.
In steady-state, \eqref{eq:ss} yields
\begin{equation}\label{eq:ss-pf1}
L_\mathrm{B} \omega_{\mathrm{ss}} t = -D \omega_{\mathrm{ss}} -L_\mathrm{B} \theta_{\mathrm{ss}_0} + q_{\mathrm{r},\mathrm{ss}} + q_{\mathrm{t},\mathrm{ss}} + u_0 \,,
\end{equation}
where $(\theta_{\mathrm{ss}_0} + \omega_{\mathrm{ss}} t, \omega_{\mathrm{ss}}, q_{\mathrm{r},\mathrm{ss}}, q_{\mathrm{t},\mathrm{ss}})$ denotes the steady-state solution of \eqref{eq:ss}. Equation \eqref{eq:ss-pf1} indicates that $L_\mathrm{B} \omega_{\mathrm{ss}} t$ is constant and thus $L_\mathrm{B} \omega_{\mathrm{ss}} = \mathbbold{0}_n$. It follows that $\omega_{\mathrm{ss}} = \omega_{\mathrm{syn}} \mathbbold{1}_n$.
Therefore, \eqref{eq:ss-pf1} becomes
\begin{align}
\mathbbold{0}_n =& -D \omega_{\mathrm{syn}} \mathbbold{1}_n -L_\mathrm{B} \theta_{\mathrm{ss}_0} + q_{\mathrm{r},\mathrm{ss}} + q_{\mathrm{t},\mathrm{ss}} + u_0\;, \label{eq:synsw1}
\end{align}
where $q_{\mathrm{r},\mathrm{ss}} = \left( \hat{c}_i(0) \omega_{\mathrm{syn}}, i \in \mathcal{V} \right)\in \real^n$ and $q_{\mathrm{t},\mathrm{ss}} = \left(-r_{\mathrm{t},i}^{-1} \omega_{\mathrm{syn}}, i \in \mathcal{V} \right)\in \real^n$ when $\omega_{\epsilon}=0$ by \eqref{eq:turb}. Pre-multiplying \eqref{eq:synsw1} by $\mathbbold{1}_n^T$ and using the property that $\mathbbold{1}_n^T L_\mathrm{B} = \mathbbold{0}_n^T$, we get the desired result in \eqref{eq:ome-syn}.
\end{proof}
Now, the theorem below provides an explicit expression for the steady-state effort share.
\begin{thm}[Steady-state effort share]\label{thm:ss-es}
Let Assumption~\ref{ass:step} hold. If $q_{\mathrm{r},i}$ is determined by a control law $\hat{c}_i(s)$, then the steady-state effort share of the system {$\hat{T}_{\omega\mathrm{p}}$} is given by
\begin{equation}
\mathrm{ES} = \left|\frac{\sum_{i=1}^n \hat{c}_i(0)}{\sum_{i=1}^n \left( d_i + {r_{\mathrm{t},i}^{-1} - \hat{c}_i(0)} \right) }\right|\;.
\end{equation}
\end{thm}
\begin{proof}
It follows directly from Lemma~\ref{lem:syn-fre} that $\omega_{\mathrm{ss},i}=\omega_{\mathrm{syn}}$ and $\sum_{i=1}^n u_{0,i}=\omega_{\mathrm{syn}} \sum_{i=1}^n \left( d_i + {r_{\mathrm{t},i}^{-1} - \hat{c}_i(0)} \right)$. Plugging these two equations into the definition of ES in \eqref{eq:ES} yields the desired result.
\end{proof}
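To make these expressions concrete, the following minimal numerical sketch evaluates \eqref{eq:ome-syn} and the steady-state effort share for a hypothetical three-bus example; the parameter values ($d_i$, $r_{\mathrm{t},i}^{-1}$, $\hat{c}_i(0)$, $u_0$) are purely illustrative and not part of the analysis.

```python
import numpy as np

# Hypothetical three-bus parameters (illustration only)
d      = np.array([0.10, 0.20, 0.15])     # damping coefficients d_i
rt_inv = np.array([0.50, 0.40, 0.60])     # turbine droop gains r_{t,i}^{-1}
c0     = np.array([-0.30, -0.20, -0.25])  # control gains at s = 0, c_i(0)
u0     = np.array([0.20, -0.10, 0.30])    # step change in power set points

denom = (d + rt_inv - c0).sum()           # sum_i (d_i + r_{t,i}^{-1} - c_i(0))
omega_syn = u0.sum() / denom              # synchronous frequency, eq. (ome-syn)
ES = abs(c0.sum() / denom)                # steady-state effort share
```

For these values $\omega_{\mathrm{syn}} = 0.4/2.7 \approx 0.148$ and $\mathrm{ES} = 0.75/2.7 \approx 0.28$.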
\subsubsection{Power Fluctuations and Measurement Noise}
We seek to characterize the effect of power fluctuations and frequency measurement noise on the frequency variance, i.e., the $\mathcal{H}_2$ norm of the system $\hat{T}_{\omega\mathrm{dn}}$.
We first show that the squared $\mathcal{H}_2$ norm of $\hat{T}_{\omega\mathrm{dn}}$ is a weighted sum of the squared $\mathcal{H}_2$ norm of each $\hat{h}_{\mathrm{p},k}$ and $\hat{h}_{\omega,k}$ in the diagonalized system \eqref{eq:T-diag}.
\begin{thm}[Frequency variance]\label{thm:h2-sum}
Define $\Gamma := V^T F^{-1} V$. If Assumptions~\ref{ass:proportion} and \ref{ass:noise} hold, then
\begin{align*}
\|\hat{T}_{\omega\mathrm{dn} }\|_{\mathcal{H}_2}^2 = \sum_{k=1}^n\Gamma_{kk}\left(\kappa_\mathrm{p}^2 \|\hat{h}_{\mathrm{p},k}\|_{\mathcal{H}_2}^2 + \kappa_\omega^2 \|\hat{h}_{\omega,k}\|_{\mathcal{H}_2}^2\right)\;.
\end{align*}
\end{thm}
\begin{proof}
It follows from \eqref{eq:closed-loop} and \eqref{eq:h2_def_E} that
\begin{align*}
\|\hat{T}_{\omega\mathrm{dn} }\|_{\mathcal{H}_2}^2 \!=&\ \frac{1}{2\pi}\int_{-\infty{}}^\infty \tr{\hat{T}_{\omega\mathrm{d}} (\boldsymbol{j\omega})^\ast \hat{T}_{\omega\mathrm{d}} (\boldsymbol{j\omega}) }\,\mathrm{d}\boldsymbol{\omega} \nonumber\\&+ \frac{1}{2\pi}\int_{-\infty{}}^\infty \tr{ \hat{T}_{ \omega\mathrm{n}} (\boldsymbol{j\omega})^\ast \hat{T}_{ \omega\mathrm{n}} (\boldsymbol{j\omega})}\,\mathrm{d}\boldsymbol{\omega}\\=:&\ \|\hat{T}_{ \omega\mathrm{d}}\|^2_{\mathcal{H}_2}+\|\hat{T}_{ \omega\mathrm{n}}\|^2_{\mathcal{H}_2}.
\end{align*}
We now compute $\|\hat{T}_{ \omega\mathrm{d}}\|^2_{\mathcal{H}_2}$. Using \eqref{eq:Td} and the fact that $\hat{W}_\mathrm{p} (s) = \kappa_\mathrm{p} F^{\frac{1}{2}}$ by Assumption \ref{ass:noise}, we get $\hat{T}_{ \omega\mathrm{d}}(s)=\kappa{}_\mathrm{p}F^{-\frac{1}{2}} V \hat{H}_\mathrm{p}(s) V^T$.
Therefore,
\begin{equation*}
\hat{T}_{\omega\mathrm{d}} (\boldsymbol{j\omega})^\ast \hat{T}_{\omega\mathrm{d}} (\boldsymbol{j\omega}) =\kappa{}_\mathrm{p}^2 V \hat{H}_\mathrm{p}(\boldsymbol{j\omega})^\ast V^T F^{-1} V \hat{H}_\mathrm{p}(\boldsymbol{j\omega}) V^T.
\end{equation*}
Using the cyclic property of the trace, this implies that
\begin{equation*}
\tr{\hat{T}_{\omega\mathrm{d}} (\boldsymbol{j\omega})^\ast \hat{T}_{\omega\mathrm{d}} (\boldsymbol{j\omega}) }=\kappa{}_\mathrm{p}^2 \tr{\hat{H}_\mathrm{p}(\boldsymbol{j\omega})^\ast \Gamma{} \hat{H}_\mathrm{p}(\boldsymbol{j\omega}) },
\end{equation*}
where $\Gamma:=V^TF^{-1}V$. Therefore, it follows that
\begin{align*}
\|\hat{T}_{\omega\mathrm{d}}\|^2_{\mathcal{H}_2}&=\frac{1}{2\pi}\int_{-\infty{}}^\infty{}\kappa{}_\mathrm{p}^2 \tr{\hat{H}_\mathrm{p}(\boldsymbol{j\omega})^* \Gamma{} \hat{H}_\mathrm{p}(\boldsymbol{j\omega}) }\,\mathrm{d}\boldsymbol{\omega}\\
= \sum_{k=1}^n \frac{\kappa_\mathrm{p}^2\Gamma_{kk}}{2\pi}&\int_{-\infty}^\infty \left|\hat{h}_{\mathrm{p},k}(\boldsymbol{j\omega})\right|^2\, \mathrm{d}\boldsymbol{\omega} = \kappa_\mathrm{p}^2 \sum_{k=1}^n \Gamma_{kk}\|\hat{h}_{\mathrm{p},k}\|_{\mathcal{H}_2}^2\;.
\end{align*}
The result follows from a similar argument on $\|\hat{T}_{\omega\mathrm{n}}\|^2_{\mathcal{H}_2}$.
\end{proof}
Theorem \ref{thm:h2-sum} allows us to compute the $\mathcal{H}_2$ norm of $\hat{T}_{\omega\mathrm{dn}}$ by means of computing the norms of a set of simple scalar transfer functions.
However, for different controllers, the transfer functions $\hat{h}_{\mathrm{p},k}$ and $\hat{h}_{\omega,k}$ will change. Since in all cases these transfer functions are of fourth order or lower, the following lemma suffices for the purpose of our comparison.
\begin{lem}[$\mathcal{H}_2$ norm of a fourth-order transfer function]\label{lm:h2-4th}
Let
\[
\hat{h}(s)=\frac{b_3s^3+b_2s^2+b_1s+b_0}{s^4+a_3s^3+a_2s^2+a_1s+a_0}+b_4
\]
be a stable transfer function.
If $b_4=0$, then
\begin{equation}\label{eq:4throderh2}
\|\hat{h}\|_{\mathcal{H}_2}^2 =
\displaystyle{\frac{\zeta_0 b_0^2+\zeta_1 b_1^2+\zeta_2 b_2^2+\zeta_3 b_3^2+\zeta_4}{2 a_0 \left(a_1 a_2 a_3 -a_1^2 -a_0 a_3^2\right)}}\;,
\end{equation}
where
\begin{align}\label{eq:zeta}
\zeta_0:=&\ a_2 a_3-a_1 \,,\qquad \zeta_1:=\ a_0 a_3\,,\qquad \zeta_2:=\ a_0a_1\,,\\
\zeta_3:=&\ a_0a_1 a_2 - a_0^2 a_3\,,\qquad
\zeta_4:=-2a_0(a_1 b_1 b_3 +a_3 b_0 b_2)\,.\nonumber
\end{align}
Otherwise, $\|\hat{h}\|_{\mathcal{H}_2}^2 = \infty$.
\end{lem}
\begin{proof}
First recall that given any state-space realization of $\hat{h}(s)$, the $\mathcal{H}_2$ norm can be calculated by solving a particular Lyapunov equation. More specifically, suppose
\[
\Sigma_{\hat{h}(s)}=\left[\begin{array}{c|c}
A & B \\\hline{}
C & D
\end{array}\right],
\]
and let $X$ denote the solution to the Lyapunov equation
\begin{align}
AX+XA^T=-BB^T.\label{eq:lyp-x}
\end{align}
If $\hat{h}(s)$ is stable, then
\begin{align}
\|\hat{h}\|_{\mathcal{H}_2}^2=\begin{cases}
\infty{}&\text{if $D\neq{}0$,}\\
CXC^T&\text{otherwise}.
\end{cases}\label{eq:h2-2cases}
\end{align}
Consider the observable canonical form of $\hat{h}(s)$ given by
\begin{align}
\Sigma_{\hat{h}(s)}=
\left[\begin{array}{cccc|c}
0&0&0&-a_0&b_0\\
1&0&0&-a_1&b_1\\
0&1&0&-a_2&b_2\\
0&0&1&-a_3&b_3\\\hline
0&0&0 & 1&b_4
\end{array}\right].\label{eq:real-h}
\end{align}
Since $D=b_4$, it is trivial to see from \eqref{eq:h2-2cases} that if $b_4\neq0$ then $\|\hat{h}\|_{\mathcal{H}_2}^2 = \infty$. Hence, in the rest of the proof, we assume $b_4=0$. We will now solve the Lyapunov equation analytically for the realization \eqref{eq:real-h}. $X$ must be symmetric and thus can be parameterized as
\begin{equation} \label{eq:grammian-4th}
X=\big[x_{ij}\big]\in\real^{4\times4}\;, \quad\text{with}\quad x_{ij}=x_{ji}.
\end{equation}
Since it is easy to see that $C X C^T = x_{44}$, the problem becomes solving for $x_{44}$.
Substituting \eqref{eq:real-h} and \eqref{eq:grammian-4th} into \eqref{eq:lyp-x} yields the following equations
\begin{subequations}\label{eq:lyap-group}
\begin{align}
2 a_0 x_{14}=&\ b_0^2\;,\label{eq:lyap-1}\\
x_{12} - a_2 x_{14} - a_0 x_{34} =& -b_0 b_2\;,\label{eq:lyap-3}\\
2(x_{12} - a_1 x_{24})=&-b_1^2\;,\label{eq:lyap-5}\\
x_{23} - a_3 x_{24} + x_{14} - a_1 x_{44} =& -b_1 b_3\;,\label{eq:lyap-7}\\
2(x_{23} - a_2 x_{34})=&-b_2^2\;,\label{eq:lyap-8}\\
2(x_{34} - a_3 x_{44})=&-b_3^2\;.\label{eq:lyap-10}
\end{align}
\end{subequations}
\ifthenelse{\boolean{archive}}{
Since $\hat{h}(s)$ is stable, by the Routh-Hurwitz criterion $a_0\neq0$, and therefore \eqref{eq:lyap-1} yields
\begin{align}
x_{14}=\frac{b_0^2}{2 a_0}\;.\label{eq:x14}
\end{align}
Applying \eqref{eq:x14} to \eqref{eq:lyap-3} and \eqref{eq:lyap-7} gives
\begin{subequations}
\begin{align}
x_{12} =& a_0 x_{34} + \frac{a_2b_0^2}{2 a_0}-b_0 b_2\;,\label{eq:x12 in x34}\\
x_{23} - a_3 x_{24} =& a_1 x_{44}-\frac{b_0^2}{2 a_0}-b_1 b_3\;.\label{eq:x23-24 in x44}
\end{align}
\end{subequations}
We now parameterize unknowns in $x_{44}$.
Equation \eqref{eq:lyap-10} yields
\begin{align}
x_{34} =& a_3 x_{44}-\frac{b_3^2}{2}\;.\label{eq:x34 in x44}
\end{align}
Substituting \eqref{eq:x34 in x44} into \eqref{eq:lyap-8} and \eqref{eq:x12 in x34} gives
\begin{subequations}
\begin{align}
x_{23} =&a_2 a_3 x_{44}-\frac{a_2 b_3^2+b_2^2}{2}\;,\label{eq:x23 in x44}\\
x_{12} =& a_0 a_3 x_{44}-\frac{a_0b_3^2}{2} + \frac{a_2b_0^2}{2 a_0}-b_0 b_2\;,\label{eq:x12 in x44}
\end{align}
\end{subequations}
respectively.
Plugging \eqref{eq:x12 in x44} into \eqref{eq:lyap-5} leads to
\begin{align}
a_1 x_{24}=&a_0 a_3 x_{44}-\frac{a_0b_3^2}{2} + \frac{a_2b_0^2}{2 a_0}-b_0 b_2 +\frac{b_1^2}{2}\;.\label{eq:x24 in x44}
\end{align}
Combining \eqref{eq:x23-24 in x44}, \eqref{eq:x23 in x44}, and \eqref{eq:x24 in x44}, we can solve for $x_{44}$ as}
{Through standard algebra, we can solve for $x_{44}$ as}
\[
x_{44} =
\displaystyle{\frac{\zeta_0 b_0^2+\zeta_1 b_1^2+\zeta_2 b_2^2+\zeta_3 b_3^2+\zeta_4}{2 a_0 \left(a_1 a_2 a_3-a_1^2 -a_0 a_3^2\right)}}
\]
with $\zeta_0, \zeta_1, \zeta_2, \zeta_3$, and $\zeta_4$ defined by \eqref{eq:zeta},
which concludes the proof; the denominator is guaranteed to be nonzero by the Routh-Hurwitz criterion.
\end{proof}
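A quick numerical sanity check of \eqref{eq:4throderh2} is to compare it with the Gramian computation \eqref{eq:lyp-x}--\eqref{eq:h2-2cases} on the realization \eqref{eq:real-h}. The sketch below (arbitrary stable coefficients; scipy assumed available) does exactly this:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_closed_form(a, b):
    # Squared H2 norm via the closed-form expression; a = (a0,..,a3), b = (b0,..,b3), b4 = 0
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    z0 = a2 * a3 - a1
    z1 = a0 * a3
    z2 = a0 * a1
    z3 = a0 * a1 * a2 - a0**2 * a3
    z4 = -2 * a0 * (a1 * b1 * b3 + a3 * b0 * b2)
    return (z0 * b0**2 + z1 * b1**2 + z2 * b2**2 + z3 * b3**2 + z4) / (
        2 * a0 * (a1 * a2 * a3 - a1**2 - a0 * a3**2))

def h2_lyapunov(a, b):
    # Squared H2 norm via the observable canonical realization with b4 = 0
    a0, a1, a2, a3 = a
    A = np.array([[0, 0, 0, -a0],
                  [1, 0, 0, -a1],
                  [0, 1, 0, -a2],
                  [0, 0, 1, -a3]], dtype=float)
    B = np.array(b, dtype=float).reshape(4, 1)
    C = np.array([[0.0, 0.0, 0.0, 1.0]])
    X = solve_continuous_lyapunov(A, -B @ B.T)  # solves A X + X A^T = -B B^T
    return float(C @ X @ C.T)

a = (2.0, 6.0, 7.0, 4.0)  # s^4 + 4s^3 + 7s^2 + 6s + 2 = (s+1)^2 (s^2+2s+2), stable
b = (1.0, 0.5, 0.3, 0.2)
```

Both routes agree to machine precision, as expected from the proof.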
\begin{rem}[$\mathcal{H}_2$ norm of a transfer function lower than fourth-order]\label{rem:h2-3rd}
Although Lemma~\ref{lm:h2-4th} is stated for a fourth-order transfer function, it can also be used to find the $\mathcal{H}_2$ norm of third-, second-, and first-order transfer functions by considering appropriate limits. For example, setting $a_0=b_0=\epsilon{}$ and considering the limit $\epsilon\to 0$, \eqref{eq:4throderh2} gives the $\mathcal{H}_2$ norm of a generic third-order transfer function. This process shows that given a stable transfer function $\hat{h}(s)$, if $b_4=0$ and:
\begin{itemize}
\item (third-order transfer function) $a_0=b_0=0$, then
\[
\|\hat{h}\|_{\mathcal{H}_2}^2 = \frac{a_3 b_1^2+a_1 b_2^2+a_1 a_2b_3^2-2 a_1 b_1 b_3}{2 a_1 (a_2 a_3- a_1)};
\]
\item (second-order transfer function) $a_0=b_0=a_1=b_1=0$, then
\[
\|\hat{h}\|_{\mathcal{H}_2}^2 = \frac{b_2^2+a_2b_3^2}{2 a_2a_3};
\]
\item (first-order transfer function) $a_0=b_0=a_1=b_1=a_2=b_2=0$, then
\[
\|\hat{h}\|_{\mathcal{H}_2}^2 =\frac{b_3^2}{2 a_3};
\]
\end{itemize}
otherwise $\|\hat{h}\|_{\mathcal{H}_2}^2=\infty{}$.
\end{rem}
\begin{rem}[Well-definedness by the stability]
Note that the stability of $\hat{h}(s)$ guarantees that the denominators in all the above $\mathcal{H}_2$ norm expressions are nonzero by the Routh-Hurwitz stability criterion.
\end{rem}
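As a consistency check of the third-order expression in Remark~\ref{rem:h2-3rd}, consider $\hat{h}(s)=(s^2+2s+1)/(s+1)^3=1/(s+1)$, whose squared $\mathcal{H}_2$ norm is known to be $1/2$:

```python
# Third-order formula applied to h(s) = (s^2 + 2s + 1)/(s + 1)^3 = 1/(s + 1),
# whose squared H2 norm is 1/2.
a1, a2, a3 = 1.0, 3.0, 3.0   # denominator s^3 + a3 s^2 + a2 s + a1 = (s + 1)^3
b1, b2, b3 = 1.0, 2.0, 1.0   # numerator   b3 s^2 + b2 s + b1 = (s + 1)^2
h2_sq = (a3 * b1**2 + a1 * b2**2 + a1 * a2 * b3**2 - 2 * a1 * b1 * b3) / (
    2 * a1 * (a2 * a3 - a1))
```

The formula returns $8/16 = 1/2$, matching the known value.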
\subsubsection{Synchronization Cost}
The computation of the synchronization cost defined in \eqref{eq:sync_cost} for the system $\hat{T}_{ \omega\mathrm{p}}$ in the absence of inverter control can be found in \cite{pm2019preprint}. Building on it, we can readily obtain the corresponding results for the system under any control law.
\begin{lem}[Synchronization cost]\label{lem:syncost-generic}
Let Assumptions~\ref{ass:proportion} and \ref{ass:step} hold. Define $\tilde{u}_0 := V_{\bot}^T F^{-\frac{1}{2}} u_0$ and $\tilde{\Gamma} := V_{\bot}^T F^{-1} V_{\bot}$. Then the synchronization cost of the system $\hat{T}_{\omega\mathrm{p}}$ is given by
\begin{align*}
\|\tilde{\omega}\|_2^2 = \tilde{u}_0^T \left(\tilde{\Gamma}\circ \tilde{H}\right) \tilde{u}_0,
\end{align*}
where $\circ$ denotes the Hadamard product and $\tilde{H}\in\real^{(n-1) \times (n-1)}$ is the matrix with entries
\begin{equation*}
\tilde{H}_{kl} := \int_0^\infty h_{\mathrm{u},k}(t) h_{\mathrm{u},l}(t)\ \mathrm{d}t\,,\quad\forall k,l \in \{1,\dots, n-1\}
\end{equation*}
with $\hat{h}_{\mathrm{u},k}(s) := \hat{h}_{\mathrm{p},{k+1},\mathrm{T}}(s)/s$ and $\hat{h}_{\mathrm{p},k,\mathrm{T}}(s)$ being a specified case of the transfer function $\hat{h}_{\mathrm{p},k}(s)$ defined in \eqref{eq:hp-s}, i.e., when the turbine is triggered.
\end{lem}
\begin{proof}
This is a direct extension of \cite[Proposition 2]{pm2019preprint}.
\end{proof}
Lemma~\ref{lem:syncost-generic} shows that the computation of the synchronization cost requires knowing the inner products $\tilde{H}_{kl}$. However, the general expressions of these inner products for an arbitrary combination of $k$ and $l$ are too tedious to be useful in our analysis. Therefore, we instead investigate bounds on the synchronization cost in terms of the inner products $\tilde{H}_{kl}$ with $k=l$, which are exactly the squared $\mathcal{H}_2$ norms of the transfer functions $\hat{h}_{\mathrm{u},k}(s)$.
\begin{lem}
[Bounds for Hadamard product]\label{lem:bounds-Had}
Let $P\in\real^{n\times{}n}$ be a symmetric matrix with minimum and maximum eigenvalues given by $\lambda_{\mathrm{min}}(P)$ and $\lambda_{\mathrm{max}}(P)$, respectively. Then $\forall x, y\in\real^n$,
\[
\lambda_{\mathrm{min}}(P)\sum_{k=1}^nx_k^2y_k^2\leq{}x^T\left(P\circ\left(yy^T\right)\right)x\leq{}\lambda_{\mathrm{max}}(P) \sum_{k=1}^nx_k^2y_k^2.
\]
\end{lem}
\begin{proof}
First note that
\[
\begin{aligned}
x^T\left(P\circ\left(yy^T\right)\right)x&=\tr{P^T\left(x\circ y\right)\left(x\circ y\right)^T}\\
&=\left(x\circ y\right)^T P^T\left(x\circ y\right).
\end{aligned}
\]
Let $w:=x\circ y$. Since $P$ is symmetric, the Rayleigh quotient bounds \cite{Horn2012MA} give
\[
\lambda_{\mathrm{min}}(P) w^Tw\leq{}x^T\left(P\circ\left(yy^T\right)\right)x\leq{}\lambda_{\mathrm{max}}(P) w^Tw.
\]
Observing that $w^Tw=\sum_{k=1}^nx_k^2y_k^2$ completes the proof.
\end{proof}
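The bounds of Lemma~\ref{lem:bounds-Had} are easily confirmed numerically on random data; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
Q = rng.standard_normal((n, n))
P = Q + Q.T                          # arbitrary symmetric matrix
x = rng.standard_normal(n)
y = rng.standard_normal(n)

quad = x @ (P * np.outer(y, y)) @ x  # x^T (P o y y^T) x, with o the Hadamard product
s = np.sum(x**2 * y**2)              # sum_k x_k^2 y_k^2
lam = np.linalg.eigvalsh(P)          # eigenvalues in ascending order
lam_min, lam_max = lam[0], lam[-1]
```

The quadratic form always lands between $\lambda_{\mathrm{min}}(P)\sum_k x_k^2y_k^2$ and $\lambda_{\mathrm{max}}(P)\sum_k x_k^2y_k^2$.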
Lemma~\ref{lem:bounds-Had} implies the following bounds on the synchronization cost.
\begin{thm}[Bounds on synchronization cost]\label{thm:bound-cost}
Let Assumptions~\ref{ass:proportion} and \ref{ass:step} hold. Then the synchronization cost of the system $\hat{T}_{ \omega\mathrm{p}}$ is bounded by $\underline{\|\tilde{\omega}\|_2^2} \leq\|\tilde{\omega}\|_2^2 \leq \overline{\|\tilde{\omega}\|_2^2}$, where
\[
\underline{\|\tilde{\omega}\|_2^2}\!\!:=\!\!\frac{\sum_{k=1}^{n-1}\!\tilde{u}_{0,k}^2\| \hat{h}_{\mathrm{u},k} \|_{\mathcal{H}_2}^2}{\max_{i \in \mathcal{V}} \left(f_i \right)}\ \text{and}\ \overline{\|\tilde{\omega}\|_2^2} \!\!:=\!\!\frac{\sum_{k=1}^{n-1}\!\tilde{u}_{0,k}^2\!\| \hat{h}_{\mathrm{u},k} \|_{\mathcal{H}_2}^2}{\min_{i \in \mathcal{V}} \left(f_i \right)} .
\]
\end{thm}
\begin{proof}
By Lemma~\ref{lem:syncost-generic},
\[
\begin{aligned}
\|\tilde{\omega}\|_2^2&\!=\!\!\int_0^\infty{}\tilde{u}_0^T\left(\tilde{\Gamma}\circ{}\left(h_{\mathrm{u}}(t)h_{\mathrm{u}}(t)^T\right)\right)\tilde{u}_0\,\mathrm{d}t\\
\!&\!\!\!\geq{}\!\!\int_0^\infty{}\lambda_{\min}(\tilde{\Gamma})\sum_{k=1}^{n-1}\tilde{u}_{0,k}^2h_{\mathrm{u},k}(t)^2\,\mathrm{d}t\\
\!&\!\!\!=\!\lambda_{\min}(\tilde{\Gamma})\sum_{k=1}^{n-1}\tilde{u}_{0,k}^2\| \hat{h}_{\mathrm{u},k} \|_{\mathcal{H}_2}^2\\
\!&\!\!\!\geq{}\!\lambda_{\min}(F^{-1})\sum_{k=1}^{n-1}\tilde{u}_{0,k}^2\| \hat{h}_{\mathrm{u},k} \|_{\mathcal{H}_2}^2 =\!\frac{\sum_{k=1}^{n-1}\tilde{u}_{0,k}^2\| \hat{h}_{\mathrm{u},k} \|_{\mathcal{H}_2}^2}{\max_{i \in \mathcal{V}} \left(f_i \right)},
\end{aligned}
\]
which concludes the proof of the lower bound.
The first inequality follows from Lemma~\ref{lem:bounds-Had} by setting $P = \tilde{\Gamma}$, $x = \tilde{u}_0$, and $y = h_{\mathrm{u}}(t):=\left(h_{\mathrm{u},k}(t), k \in \{1,\dots, n-1\}\right) \in \real^{n-1}$. The second inequality follows from the interlacing theorem \cite[Theorem 4.3.17]{Horn2012MA}. The upper bound can be proved similarly.
\end{proof}
\begin{rem}[Synchronization cost in homogeneous case]\label{rem:syncost-homo}
In the system with homogeneous parameters, i.e., $F=fI_n$ for some $f>0$, the identical lower and upper bounds on the synchronization cost imply that
\[\|\tilde{\omega}\|_2^2=f^{-1}\sum_{k=1}^{n-1}\!\tilde{u}_{0,k}^2\| \hat{h}_{\mathrm{u},k} \|_{\mathcal{H}_2}^2.
\]
\end{rem}
\subsubsection{Nadir}
A deep Nadir poses a threat to the reliable operation of a power system. Hence one of the goals of inverter control laws is the reduction of Nadir. We seek to evaluate the ability of different control laws to eliminate Nadir. To this end, we provide a necessary and sufficient condition for Nadir elimination in a second-order system with a zero.
\begin{thm}[Nadir elimination for a second-order system] \label{th:nonadir-con}
Assume $K>0$, $z>0$, $\xi \geq 0$, $\omega_\mathrm{n}> 0$. The step response of a second-order system with transfer function given by
\begin{equation*}
\hat{h}(s) = \dfrac{K\left(s + z\right)}{ s^2 + 2\xi\omega_\mathrm{n} s + \omega_\mathrm{n}^2 }
\end{equation*}
has no Nadir if and only if
\begin{align}\label{eq:nonadir-con}
1 \leq \xi\leq z/\omega_\mathrm{n} \quad\text{or}\quad
\begin{cases}
\xi>z/\omega_\mathrm{n}\\
\xi \geq \left(z/\omega_\mathrm{n}+\omega_\mathrm{n}/z\right)/2
\end{cases},
\end{align}
where the conditions in braces jointly imply $\xi>1$.
\end{thm}
\begin{proof}
If Nadir exists, it must occur at some non-negative finite time instant $t_\mathrm{nadir}$ such that $\dot{p}_\mathrm{u}(t_\mathrm{nadir}) =0$ and $p_\mathrm{u}(t_\mathrm{nadir})$ is a maximum,
where $p_\mathrm{u}(t)$ denotes the unit-step response of $\hat{h}(s)$, i.e., $\hat{p}_\mathrm{u}(s) := \hat{h}(s)/s$. We consider three cases based on the value of the damping ratio $\xi$ separately:
\begin{enumerate}
\item Underdamped case ($0\leq\xi<1$): The output is
\begin{align*}
\hat{p}_\mathrm{u}(s) = \dfrac{Kz}{\omega_\mathrm{n}^2} \left[\dfrac{1}{s}- \dfrac{s + \xi\omega_\mathrm{n}}{ (s+\xi\omega_\mathrm{n})^2 + \omega_\mathrm{d}^2 }- \dfrac{\xi\omega_\mathrm{n} - \omega_\mathrm{n}^2 z^{-1}}{ (s+\xi\omega_\mathrm{n})^2 + \omega_\mathrm{d}^2 }\right]
\end{align*}
with $\omega_\mathrm{d} := \omega_\mathrm{n} \sqrt{1 - \xi^2}$, which gives the time domain response
\begin{align*}
p_\mathrm{u}(t)
= \dfrac{Kz}{\omega_\mathrm{n}^2} \left[1- e^{-\xi\omega_\mathrm{n} t} \eta_0\sin{(\omega_\mathrm{d} t+\phi)}\right]\;,
\end{align*}
where
\[
\eta_0 = \!\sqrt{1+\dfrac{\left(\xi\omega_\mathrm{n} - \omega_\mathrm{n}^2 z^{-1}\right)^2}{\omega_\mathrm{d}^2}}\ \text{and}\
\tan\phi = \dfrac{\omega_\mathrm{d}}{\xi\omega_\mathrm{n} - \omega_\mathrm{n}^2 z^{-1}}.
\]
Clearly, the above response must have oscillations. Therefore, for the case $0\leq\xi<1$, Nadir always exists.
\item Critically damped case ($\xi=1$): The output is
\begin{align*}
\hat{p}_\mathrm{u}(s)
=\dfrac{Kz}{\omega_\mathrm{n}^2} \left[\dfrac{1}{s}- \dfrac{1}{ s + \omega_\mathrm{n} }- \dfrac{ \omega_\mathrm{n} - \omega_\mathrm{n}^2 z^{-1}}{ \left(s + \omega_\mathrm{n}\right)^2 }\right]\;,
\end{align*}
which gives the time domain response
\begin{align*}
p_\mathrm{u}(t) = \dfrac{Kz}{\omega_\mathrm{n}^2} \left\{1- e^{-\omega_\mathrm{n} t}\left[1 + \left(\omega_\mathrm{n} - \omega_\mathrm{n}^2 z^{-1}\right) t\right]\right\}\;.
\end{align*}
Thus,
\begin{align*}
\dot{p}_\mathrm{u}(t) = Kz e^{-\omega_\mathrm{n} t}\left[ \left( 1- \omega_\mathrm{n} z^{-1}\right) t + z^{-1}\right]\;.
\end{align*}
Letting $\dot{p}_\mathrm{u}(t) =0$ yields
\begin{align*}
\omega_\mathrm{n} e^{-\omega_\mathrm{n} t} \left[1 + \left(\omega_\mathrm{n} - \omega_\mathrm{n}^2 z^{-1}\right) t\right] = e^{-\omega_\mathrm{n} t} \left(\omega_\mathrm{n} - \omega_\mathrm{n}^2 z^{-1}\right)\;,
\end{align*}
which has a non-negative finite solution
\begin{align*}
t_\mathrm{nadir}
=\dfrac{z^{-1}}{\omega_\mathrm{n} z^{-1} -1}
\end{align*}
whenever $\omega_\mathrm{n} z^{-1} > 1$. For any $\epsilon>0$, it holds that
\begin{align*}
\dot {p}_\mathrm{u}(t_\mathrm{nadir}-\epsilon) = \epsilon Kz e^{-\omega_\mathrm{n} \left(t_\mathrm{nadir}-\epsilon\right)} \left(\omega_\mathrm{n} z^{-1}-1\right)>0\;,\\
\dot {p}_\mathrm{u}(t_\mathrm{nadir}+\epsilon) = \epsilon Kz e^{-\omega_\mathrm{n} \left(t_\mathrm{nadir}+\epsilon\right)} \left( 1- \omega_\mathrm{n} z^{-1}\right)<0 \;.
\end{align*}
Clearly, Nadir occurs at $t_\mathrm{nadir}$.
Therefore, for the case $\xi = 1$, Nadir is eliminated if and only if $\omega_\mathrm{n} z^{-1} \leq 1$. To put it more succinctly, we combine the two conditions into
\begin{equation}\label{eq:cri-nonadir}
1=\xi\leq z/\omega_\mathrm{n}\;.
\end{equation}
\item Overdamped case ($\xi>1$): The output is
\begin{align*}
\hat{p}_\mathrm{u}(s)
=&\dfrac{Kz}{\omega_\mathrm{n}^2} \left(\dfrac{1}{s}- \dfrac{\eta_1}{ s +\sigma_1}- \dfrac{\eta_2}{ s+\sigma_2}\right)
\end{align*}
with
\begin{align*}
\sigma_{1,2} = \omega_\mathrm{n}\left(\xi\pm \sqrt{\xi^2-1}\right)\ \ \text{and}\ \ \eta_{1,2} =\dfrac{1}{2}\mp \dfrac{\xi- \omega_\mathrm{n}z^{-1}}{ 2\sqrt{\xi^2-1}}\;,
\end{align*}
which gives the time domain response
\begin{align*}
p_\mathrm{u}(t) = \dfrac{Kz}{\omega_\mathrm{n}^2} \left(1- \eta_1 e^{-\sigma_1 t}-\eta_2 e^{-\sigma_2 t}\right)\;.
\end{align*}
Thus,
\begin{align*}
\dot{p}_\mathrm{u}(t) = \dfrac{Kz}{\omega_\mathrm{n}^2} \left(\sigma_1 \eta_1 e^{-\sigma_1 t}+\sigma_2\eta_2 e^{-\sigma_2 t}\right)\;.
\end{align*}
Letting $\dot{p}_\mathrm{u}(t) =0$ yields $\sigma_1 \eta_1 e^{-\sigma_1 t} = - \sigma_2 \eta_2 e^{-\sigma_2 t}$,
which has a non-negative finite solution
\begin{align*}
t_\mathrm{nadir}
=\dfrac{1}{2\omega_\mathrm{n} \sqrt{\xi^2-1}}\ln{\dfrac{1 - \omega_\mathrm{n}z^{-1}\left(\xi+ \sqrt{\xi^2-1}\right)}{1 - \omega_\mathrm{n}z^{-1}\left(\xi- \sqrt{\xi^2-1}\right)}}
\end{align*}
whenever $1 - \omega_\mathrm{n}z^{-1}\left(\xi- \sqrt{\xi^2-1}\right)<0$. For any $\epsilon>0$, it holds that
\begin{align*}
\dot{p}_\mathrm{u}(t_\mathrm{nadir}-\epsilon)
>& \dfrac{Kz}{\omega_\mathrm{n}^2} e^{\sigma_1 \epsilon} \left(\sigma_1 \eta_1 e^{-\sigma_1 t_\mathrm{nadir}}+ \sigma_2\eta_2e^{-\sigma_2 t_\mathrm{nadir}}\right)\\
=& e^{\sigma_1 \epsilon} \dot{p}_\mathrm{u}(t_\mathrm{nadir})=0\;,\\
\dot{p}_\mathrm{u}(t_\mathrm{nadir}+\epsilon)
<& \dfrac{Kz}{\omega_\mathrm{n}^2} e^{-\sigma_1 \epsilon} \left(\sigma_1 \eta_1 e^{-\sigma_1 t_\mathrm{nadir}}+ \sigma_2\eta_2e^{-\sigma_2 t_\mathrm{nadir}}\right)\\
=& e^{-\sigma_1 \epsilon} \dot{p}_\mathrm{u}(t_\mathrm{nadir})=0\;,
\end{align*}
since $\sigma_1>\sigma_2>0$ and one can show that $\sigma_2\eta_2<0$. Clearly, Nadir occurs at $t_\mathrm{nadir}$.
Therefore, for the case $\xi > 1$, Nadir is eliminated if and only if $1 - \omega_\mathrm{n}z^{-1}\left(\xi- \sqrt{\xi^2-1}\right)\geq0$, i.e., $\sqrt{\xi^2-1} \geq \xi-z/\omega_\mathrm{n}$,
which holds if and only if
\begin{align*}
\xi\leq z/\omega_\mathrm{n} \quad\text{or}\quad
\begin{cases}
\xi> z/\omega_\mathrm{n}\\
\xi \geq \left(z/\omega_\mathrm{n}+\omega_\mathrm{n}/z\right)/2
\end{cases}.
\end{align*}
Thus we get the conditions
\begin{align}\label{eq:ov-nonadir}
1 < \xi\leq z/\omega_\mathrm{n} \quad\text{or}\quad
\begin{cases}
\xi>1\\
\xi>z/\omega_\mathrm{n}\\
\xi \geq \left(z/\omega_\mathrm{n}+\omega_\mathrm{n}/z\right)/2
\end{cases}.
\end{align}
\end{enumerate}
Finally, since $\forall a, b \geq 0$, $(a+b)/2\geq\sqrt{ab}$ with equality only when $a=b$, it follows that the second condition in \eqref{eq:ov-nonadir} can only hold when $\xi>1$.
Thus we can combine
\eqref{eq:cri-nonadir} and \eqref{eq:ov-nonadir} to yield \eqref{eq:nonadir-con}.
\end{proof}
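Condition \eqref{eq:nonadir-con} can also be checked numerically by testing the simulated step response for an interior maximum. The sketch below (parameter choices arbitrary; scipy assumed available) compares the analytic condition against the response of $\hat{h}(s) = K(s+z)/(s^2+2\xi\omega_\mathrm{n}s+\omega_\mathrm{n}^2)$:

```python
import numpy as np
from scipy.signal import TransferFunction, step

def has_nadir(K, z, xi, wn, t_end=60.0):
    """True if the unit-step response of K(s + z)/(s^2 + 2*xi*wn*s + wn^2)
    rises above its steady-state value K*z/wn^2 (i.e., Nadir exists)."""
    sys = TransferFunction([K, K * z], [1.0, 2 * xi * wn, wn**2])
    _, y = step(sys, T=np.linspace(0.0, t_end, 20000))
    return y.max() > (K * z / wn**2) * (1 + 1e-6)

def no_nadir_condition(z, xi, wn):
    """Analytic Nadir-elimination condition from the theorem."""
    r = z / wn
    return (1 <= xi <= r) or (xi > r and xi >= (r + 1 / r) / 2)
```

Sweeping underdamped, critically/overdamped monotone, and overdamped overshooting cases, the simulation agrees with the condition in every instance.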
\subsection{Steady-state Effort Share}
\begin{cor}[Synchronous frequency under DC and VI]\label{lem:syn-fre-dc}
Let Assumption~\ref{ass:step} hold. When $q_{\mathrm{r},i}$ is defined by the control law DC \eqref{eq:dy-dc} or VI \eqref{eq:dy-vi}, the steady-state frequency deviation of the system $\hat{T}_{\omega \mathrm{p,DC}}$ or $\hat{T}_{\omega \mathrm{p, VI}}$ synchronizes to the synchronous frequency, i.e., $\omega_{\mathrm{ss}} = \omega_{\mathrm{syn}} \mathbbold{1}_n$ with
\begin{equation}
\omega_{\mathrm{syn}} = \dfrac{\sum_{i=1}^n u_{0,i}}{\sum_{i=1}^n \left( d_i + r_{\mathrm{t},i}^{-1} + r_{\mathrm{r},i}^{-1} \right)}\;. \label{eq:ome-syn-dc}
\end{equation}
\end{cor}
\begin{proof}
The result follows directly from Lemma~\ref{lem:syn-fre}.
\end{proof}
Now, the corollary below gives the expression for the steady-state effort share when inverters are under the control law DC or VI.
\begin{cor}[Steady-state effort share of DC and VI]\label{thm:ss-DC}
Let Assumption~\ref{ass:step} hold. If $q_{\mathrm{r},i}$ is under the control law \eqref{eq:dy-dc} or \eqref{eq:dy-vi}, then the steady-state effort share of the system $\hat{T}_{\omega \mathrm{p, DC}}$ or $\hat{T}_{\omega \mathrm{p, VI}}$ is given by
\begin{equation}\label{eq:es-ratio}
\mathrm{ES} = \frac{\sum_{i=1}^n r_{\mathrm{r},i}^{-1}}{\sum_{i=1}^n \left( d_i + {r_{\mathrm{t},i}^{-1} + r_{\mathrm{r},i}^{-1}} \right) }\;.
\end{equation}
\end{cor}
\begin{proof}
The result follows directly from Theorem~\ref{thm:ss-es} applied to \eqref{eq:dy-dc} and \eqref{eq:dy-vi}.
\end{proof}
Corollary~\ref{thm:ss-DC} indicates that DC and VI have the same steady-state effort share, which increases as the gains $r_{\mathrm{r},i}^{-1}$ increase.
However, $r_{\mathrm{r},i}^{-1}$ are parameters that also directly affect the dynamic performance of the power system, as can be seen clearly from the dynamic performance analysis.
\subsection{Power Fluctuations and Measurement Noise}\label{ssec:VI-dy}
Using Theorem~\ref{thm:h2-sum} and Lemma~\ref{lm:h2-4th}, it is possible to get closed form expressions of $\mathcal{H}_2$ norms for systems $\hat{T}_{\omega\mathrm{dn},\mathrm{DC}}$ and $\hat{T}_{\omega\mathrm{dn},\mathrm{VI}}$.
\begin{cor}[Frequency variance under DC and VI]\label{thm:noise-VI}
Let Assumptions~\ref{ass:proportion} and \ref{ass:noise} hold. The squared $\mathcal{H}_2$ norm of $\hat{T}_{\omega\mathrm{dn},\mathrm{DC}}$ and $\hat{T}_{\omega\mathrm{dn},\mathrm{VI}}$ is given by
\begin{subequations}
\begin{equation}\label{eq:noise-DC}
\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|_{\mathcal{H}_2}^2 = \sum_{k=1}^n \Gamma_{kk} \dfrac{\kappa_\mathrm{p}^2 + r_\mathrm{r}^{-2} \kappa_\omega^2}{2m \check{d}} ,
\end{equation}
\begin{equation}\label{eq:noise-VI}
\|\hat{T}_{\omega\mathrm{dn}, \mathrm{VI}}\|_{\mathcal{H}_2}^2 = \infty \;,
\end{equation}
\end{subequations}
respectively, where $\check{d} := d + r_\mathrm{r}^{-1}$.
\end{cor}
\begin{proof}
We study the two cases separately.
We begin with $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|_{\mathcal{H}_2}^2$.
Applying \eqref{eq:go-sw} and \eqref{eq:co-dc} to \eqref{eq:hp-s} and \eqref{eq:homega-s} shows
$\hat{h}_{{\mathrm{p},k},\mathrm{DC}}(s)$ is a transfer function with $b_4=a_0=b_0=a_1=b_1=0, a_2 = \lambda_k/m, b_2 = 0, a_3 = \check{d}/m, b_3 =1/m$, while $\hat{h}_{{\omega,k},\mathrm{DC}}(s)$ is a transfer function with $b_4=a_0=b_0=a_1=b_1=0, a_2 = \lambda_k/m, b_2 = 0, a_3 = \check{d}/m, b_3 =-r_\mathrm{r}^{-1}/m$.
Thus, by Lemma~\ref{lm:h2-4th},
\begin{align*}
\|\hat{h}_{{\mathrm{p},k},\mathrm{DC}}\|_{\mathcal{H}_2}^2=\frac{1}{2m\check{d}} \quad \text{and} \quad
\|\hat{h}_{{\omega,k},\mathrm{DC}}\|_{\mathcal{H}_2}^2=\frac{r_\mathrm{r}^{-2}}{2m\check{d}}\;.
\end{align*}
Then \eqref{eq:noise-DC} follows from Theorem~\ref{thm:h2-sum}.
We now turn to show that $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{VI}}\|_{\mathcal{H}_2}^2$ is infinite. Applying \eqref{eq:go-sw} and \eqref{eq:co-vi} to \eqref{eq:homega-s} yields
\begin{align*}
\hat{h}_{{\omega,k},\mathrm{VI}}(s) =& - \frac{m_{\mathrm{v}} s^2 + r_\mathrm{r}^{-1} s}{(m + m_\mathrm{v}) s^2 + \check{d} s + \lambda_k}\;,
\end{align*}
which by Lemma~\ref{lm:h2-4th} has $b_4 = -m_\mathrm{v}/\left(m + m_\mathrm{v}\right)\neq0$ and thus $\|\hat{h}_{{\omega,k},\mathrm{VI}}\|_{\mathcal{H}_2}^2=\infty$. Then \eqref{eq:noise-VI} follows directly from Theorem~\ref{thm:h2-sum}.
\end{proof}
\begin{cor}[Optimal $r_\mathrm{r}^{-1}$ for $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|_{\mathcal{H}_2}^2$] \label{cor:optimal-h2-dc}Let Assumptions~\ref{ass:proportion} and \ref{ass:noise} hold. Then
\begin{align}\label{eq:rr-star}
r_\mathrm{r}^{-1\star}\!\!:=\!\argmin_{r_\mathrm{r}^{-1} > 0}\! \|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|_{\mathcal{H}_2}^2\!\!=\!-d +\! \sqrt{d^2 + (\kappa_{\mathrm{p}}/\kappa_{\omega})^2}\,.
\end{align}
\end{cor}
\begin{proof}
The partial derivative of $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|_{\mathcal{H}_2}^2$ with respect to $r_\mathrm{r}^{-1}$ is
\begin{align}\label{eq:h2-dc-partial}
\partial_{r_\mathrm{r}^{-1}}\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|_{\mathcal{H}_2}^2 = \sum_{k=1}^n \Gamma_{kk} \frac{\kappa_\omega^2 r_\mathrm{r}^{-2} \!+\! 2d\kappa_\omega^2 r_\mathrm{r}^{-1}\!-\!\kappa_\mathrm{p}^2}{2m\check{d}^2}\,.
\end{align}
By equating \eqref{eq:h2-dc-partial} to 0, we can solve the corresponding $r_\mathrm{r}^{-1}$ as ${r_\mathrm{r}^{-1\star}}_\pm=-d \pm \sqrt{d^2 + (\kappa_{\mathrm{p}}/\kappa_{\omega})^2}$. The only positive root is therefore $r_\mathrm{r}^{-1\star}:=-d + \sqrt{d^2 + (\kappa_{\mathrm{p}}/\kappa_{\omega})^2}$. We now show that $\Gamma_{kk} > 0$, $\forall k \in \{1,\dots, n\}$. Recall that $\Gamma := V^T F^{-1} V$. We know $\Gamma_{kk} = \sum_{j=1}^n ( v_{k,j}^2/f_j)$. Since $v_k$ is an eigenvector, $\forall k \in \{1,\dots, n\}$, there must exist at least one $j\in \mathcal{V}$ such that $v_{k,j}\neq0$. Since $f_i > 0$, $\forall i$, we have that $\Gamma_{kk} > 0$, $\forall k \in \{1,\dots, n\}$. In addition, since the denominator of \eqref{eq:h2-dc-partial} is always positive and the highest order coefficient of the numerator is positive, whenever $0 < r_\mathrm{r}^{-1} < r_\mathrm{r}^{-1\star}$, then $ \partial_{r_\mathrm{r}^{-1}}\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|_{\mathcal{H}_2}^2 < 0$, and if $r_\mathrm{r}^{-1} > r_\mathrm{r}^{-1\star}$, then $ \partial_{r_\mathrm{r}^{-1}}\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|_{\mathcal{H}_2}^2 > 0$. Therefore, $r_\mathrm{r}^{-1\star}$ is the minimizer of $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|_{\mathcal{H}_2}^2$.
\end{proof}
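The optimizer in \eqref{eq:rr-star} admits a quick numerical sanity check. The sketch below (illustrative Python, not part of the analysis; the parameter values for $d$, $m$, $\kappa_\mathrm{p}$, $\kappa_\omega$ are arbitrary) uses the per-mode DC variance factor $(\kappa_\mathrm{p}^2 + r_\mathrm{r}^{-2}\kappa_\omega^2)/(2m\check{d})$ recalled later in \eqref{eq:noise-DC}; since the positive weights $\Gamma_{kk}$ scale every mode identically, minimizing this scalar factor minimizes the full squared $\mathcal{H}_2$ norm.

```python
import math

# Per-mode DC variance factor (kappa_p^2 + x^2 kappa_w^2) / (2 m (d + x)),
# with x = r_r^{-1}; the positive weights Gamma_kk factor out of the minimization.
d, m, kp, kw = 0.02, 0.1, 1.0, 0.5   # arbitrary sample parameters

def g(x):
    return (kp**2 + x**2 * kw**2) / (2 * m * (d + x))

# Claimed optimizer from eq:rr-star
x_star = -d + math.sqrt(d**2 + (kp / kw)**2)

assert x_star > 0
# g is smaller at x_star than at nearby perturbations on either side
for eps in (1e-3, 1e-2, 1e-1):
    assert g(x_star) < g(x_star + eps)
    assert g(x_star) < g(x_star - eps)
```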
Two main observations can be made from Corollary~\ref{thm:noise-VI}. First, the control parameter $r_\mathrm{r}^{-1}$ of DC has a direct effect on the size of the frequency variance in the system, which makes it impossible to require DC to bear an assigned amount of steady-state effort share and to reduce the frequency variance at the same time. Second, VI induces unbounded frequency variance, which poses a threat to the operation of the power system. Therefore, neither DC nor VI is a good solution for improving the frequency variance without sacrificing the steady-state effort share.
\subsection{Synchronization Cost}
Theorem~\ref{thm:bound-cost} implies that the synchronization costs of $\hat{T}_{\omega \mathrm{p, DC}}$ and $\hat{T}_{\omega \mathrm{p, VI}}$ are bounded by weighted sums of $\| \hat{h}_{\mathrm{u},k,\mathrm{DC}} \|_{\mathcal{H}_2}^2$ and $\| \hat{h}_{\mathrm{u},k,\mathrm{VI}} \|_{\mathcal{H}_2}^2$, respectively. Hence, to see the limited ability of DC and VI to reduce the synchronization cost, we first need a deeper understanding of $\| \hat{h}_{\mathrm{u},k,\mathrm{DC}} \|_{\mathcal{H}_2}^2$ and $\| \hat{h}_{\mathrm{u},k,\mathrm{VI}} \|_{\mathcal{H}_2}^2$.
\begin{thm}[Bounds of $\| \hat{h}_{\mathrm{u},k,\mathrm{DC}} \|_{\mathcal{H}_2}^2$ and $\| \hat{h}_{\mathrm{u},k,\mathrm{VI}} \|_{\mathcal{H}_2}^2$]
\label{lem:bounds-VI}
Let Assumptions~\ref{ass:proportion} and \ref{ass:step} hold. Then, given $r_\mathrm{r}^{-1}>0$, $\forall m_\mathrm{v} > 0$,
\begin{align*}
\dfrac{1}{2 \lambda_{k+1} \!\left(\check{d} \!+\! r_\mathrm{t}^{-1}\right)} \!\!<\! \| \hat{h}_{\mathrm{u},k,\mathrm{VI}} \|_{\mathcal{H}_2}^2 \!\!<\! \| \hat{h}_{\mathrm{u},k,\mathrm{DC}} \|_{\mathcal{H}_2}^2 \!\!<\! \| \hat{h}_{\mathrm{u},k,\mathrm{SW}} \|_{\mathcal{H}_2}^2,
\end{align*}
where $\| \hat{h}_{\mathrm{u},k,\mathrm{SW}} \|_{\mathcal{H}_2}^2$ denotes the corresponding quantity for the open-loop system with no additional control from inverters.
\end{thm}
\begin{proof}
Considering that DC can be viewed as VI with $m_\mathrm{v} = 0$ and the open-loop system can be viewed as VI with $m_\mathrm{v}=r_\mathrm{r}^{-1}=0$, we only compute $\|\hat{h}_{\mathrm{u},k,\mathrm{VI}}\|_{\mathcal{H}_2}^2$, which straightforwardly implies the other two. Applying \eqref{eq:go-sw-tb} and \eqref{eq:co-vi} to \eqref{eq:hp-s} shows $\hat{h}_{\mathrm{u},k,\mathrm{VI}}(s)=\hat{h}_{\mathrm{p},k+1,\mathrm{T,VI}}(s)/s$ is a transfer function with $b_4=a_0=b_0=0, a_1=\lambda_{k+1}/\left(\check{m} \tau\right), b_1=1/\left(\check{m} \tau\right), a_2 = \left(\check{d} + r_{\mathrm{t}}^{-1} + \lambda_{k+1} \tau\right)/\left(\check{m} \tau\right), b_2 = 1/\check{m}, a_3 = \left(\check{m} + \check{d} \tau\right)/\left(\check{m} \tau\right), b_3 =0$.
Then it follows from Lemma~\ref{lm:h2-4th} that
\begin{align}
\|\hat{h}_{\mathrm{u},k,\mathrm{VI}}\|_{\mathcal{H}_2}^2
\!\!=\!\dfrac{\check{m} + \tau\! \left(\lambda_{k+1} \tau + \check{d} \right)}{ 2 \lambda_{k+1}\!\left[\tau \check{d} \left(\lambda_{k+1} \tau + \check{d} +\! r_\mathrm{t}^{-1}\right) \!+ \!\check{m}\!\left(\check{d} + r_\mathrm{t}^{-1}\right)\right] }. \nonumber
\end{align}
Since $\| \hat{h}_{\mathrm{u},k,\mathrm{VI}} \|_{\mathcal{H}_2}^2$ is a function of $r_\mathrm{r}^{-1}$ and $m_\mathrm{v}$, in what follows we denote it by $\rho(r_\mathrm{r}^{-1}, m_\mathrm{v})$. To gain insight into how $\| \hat{h}_{\mathrm{u},k,\mathrm{VI}} \|_{\mathcal{H}_2}^2$ changes with $r_\mathrm{r}^{-1}$ and $m_\mathrm{v}$, we take the partial derivatives of $\rho(r_\mathrm{r}^{-1}, m_\mathrm{v})$ with respect to $r_\mathrm{r}^{-1}$ and $m_\mathrm{v}$, i.e.,
\begin{align}
&\partial_{r_\mathrm{r}^{-1}} \rho (r_\mathrm{r}^{-1}, m_\mathrm{v}) \nonumber\\
=& - \!\dfrac{
\left[\check{m}+ \tau \left(\lambda_{k+1} \tau + \check{d} \right)\right]^2 + \lambda_{k+1} \tau^3 r_\mathrm{t}^{-1} }{ 2 \lambda_{k+1}\left[ \tau \check{d} \left(\lambda_{k+1} \tau + \check{d} + r_\mathrm{t}^{-1}\right) + \check{m}(\check{d} + r_\mathrm{t}^{-1}) \right]^2}\;,\nonumber\\
&\partial_{m_\mathrm{v}} \rho (r_\mathrm{r}^{-1}, m_\mathrm{v}) \nonumber\\
=& - \!\dfrac{ \tau^2 r_\mathrm{t}^{-1}}{ 2 \left[ \tau \check{d} \left(\lambda_{k+1} \tau + \check{d} + r_\mathrm{t}^{-1}\right) + \check{m}(\check{d} + r_\mathrm{t}^{-1}) \right]^2}\;. \nonumber
\end{align}
Clearly, for all $r_\mathrm{r}^{-1} \geq 0$, $\partial_{r_\mathrm{r}^{-1}} \rho (r_\mathrm{r}^{-1}, m_\mathrm{v}) < 0$, which means that $\rho (r_\mathrm{r}^{-1}, m_\mathrm{v})$ is a monotonically decreasing function of $r_\mathrm{r}^{-1}$. Similarly, for all $m_\mathrm{v} \geq 0$, $\partial_{m_\mathrm{v}} \rho (r_\mathrm{r}^{-1}, m_\mathrm{v}) < 0$, which means that $\rho (r_\mathrm{r}^{-1}, m_\mathrm{v})$ is a monotonically decreasing function of $m_\mathrm{v}$. Therefore, given $r_\mathrm{r}^{-1} > 0$, $\forall m_\mathrm{v} > 0$, it holds that
\[
\lim_{m_\mathrm{v}\to \infty} \rho(r_\mathrm{r}^{-1}, m_\mathrm{v})<\rho(r_\mathrm{r}^{-1}, m_\mathrm{v}) < \rho(r_\mathrm{r}^{-1}, 0)< \rho(0, 0)\,.
\]
Recall that $\| \hat{h}_{\mathrm{u},k,\mathrm{VI}} \|_{\mathcal{H}_2}^2 = \rho(r_\mathrm{r}^{-1}, m_\mathrm{v})$, $\| \hat{h}_{\mathrm{u},k,\mathrm{DC}} \|_{\mathcal{H}_2}^2 = \rho(r_\mathrm{r}^{-1}, 0)$, and $\| \hat{h}_{\mathrm{u},k,\mathrm{SW}} \|_{\mathcal{H}_2}^2 = \rho(0, 0)$.
The result follows.
\end{proof}
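The closed form of $\|\hat{h}_{\mathrm{u},k,\mathrm{VI}}\|_{\mathcal{H}_2}^2$ derived above can be checked numerically against the claimed ordering. A minimal illustrative sketch (arbitrary sample parameters, not values from the paper), including the $m_\mathrm{v}\to\infty$ limit that yields the lower bound:

```python
# Closed form of ||h_{u,k,VI}||_{H2}^2 from the proof, as a function of
# r_r^{-1} and m_v; all other parameters are arbitrary sample values.
m, d, tau, lam, rt_inv = 0.1, 0.02, 5.0, 2.0, 0.05

def rho(r_inv, m_v):
    mc = m + m_v          # \check{m} = m + m_v
    dc = d + r_inv        # \check{d} = d + r_r^{-1}
    num = mc + tau * (lam * tau + dc)
    den = 2 * lam * (tau * dc * (lam * tau + dc + rt_inv) + mc * (dc + rt_inv))
    return num / den

r_inv, m_v = 1.0, 0.5
lower = 1.0 / (2 * lam * (d + r_inv + rt_inv))   # 1 / (2 lambda_{k+1} (d_check + r_t^{-1}))

# Ordering claimed by the theorem (VI < DC < open loop, all above the lower bound)
assert lower < rho(r_inv, m_v) < rho(r_inv, 0.0) < rho(0.0, 0.0)
# m_v -> infinity approaches the lower bound from above
assert abs(rho(r_inv, 1e9) - lower) < 1e-6
```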
\begin{cor}[Comparison of synchronization cost in homogeneous case] \label{cor:syncost-homoe-bounds}
Denote the synchronization cost of the open-loop system by $\|\tilde{\omega}_\mathrm{SW}\|_2^2$. Then, under Assumptions~\ref{ass:proportion} and \ref{ass:step}, given $r_\mathrm{r}^{-1}>0$, $\forall m_\mathrm{v} > 0$, the synchronization costs when $F=fI_n$ can be ordered as:
\[
\frac{\sum_{k=1}^{n-1}\left(\tilde{u}_{0,k}^2/\lambda_{k+1}\right)}{2f\left(\check{d} + r_\mathrm{t}^{-1}\right)}< \|\tilde{\omega}_\mathrm{VI}\|_2^2 < \|\tilde{\omega}_\mathrm{DC}\|_2^2 < \|\tilde{\omega}_\mathrm{SW}\|_2^2\,.
\]
\end{cor}
\begin{proof}
The result follows by combining Remark~\ref{rem:syncost-homo} and Theorem~\ref{lem:bounds-VI}.
\end{proof}
\begin{cor}[Lower bound of synchronization cost under DC and VI]\label{cor:low-bound-pro}
Under Assumptions~\ref{ass:proportion} and \ref{ass:step}, the ordering of the bounds on the synchronization cost of the open-loop system, DC, and VI depends on the parameter values. Thus we cannot order $\|\tilde{\omega}_\mathrm{VI}\|_2^2$, $\|\tilde{\omega}_\mathrm{DC}\|_2^2$, and $\|\tilde{\omega}_\mathrm{SW}\|_2^2$ strictly. Instead, we highlight that, given $r_\mathrm{r}^{-1}>0$, the synchronization cost under both DC and VI is bounded below by
\[
\frac{\sum_{k=1}^{n-1}\left(\tilde{u}_{0,k}^2/\lambda_{k+1}\right)}{2\max_{i \in \mathcal{V}} \left(f_i \right)\left(\check{d} + r_\mathrm{t}^{-1}\right)}\,.
\]
\end{cor}
\begin{proof}
The result follows from Theorems~\ref{thm:bound-cost} and~\ref{lem:bounds-VI}.
\end{proof}
Corollary~\ref{cor:syncost-homoe-bounds} provides both upper and lower bounds for the synchronization cost under DC and VI in the homogeneous case. The upper bound verifies that DC and VI do reduce the synchronization cost by adding damping and inertia, while the lower bound indicates that the reduction of the synchronization cost through DC and VI is limited by a value that depends on $r_\mathrm{r}^{-1}$. Corollary~\ref{cor:low-bound-pro} implies that in the proportional case the synchronization cost under DC and VI is also bounded below by a value that depends on $r_\mathrm{r}^{-1}$. The fact that this lower bound decreases as $r_\mathrm{r}^{-1}$ increases is not satisfactory, since, from the steady-state effort share point of view, a smaller $r_\mathrm{r}^{-1}$ is preferred. However, given a small $r_\mathrm{r}^{-1}$, even if the inertia is very high, i.e., $m_\mathrm{v}\to\infty$, the synchronization cost $\|\tilde{\omega}_\mathrm{VI}\|_2^2$ can never reach zero, let alone $\|\tilde{\omega}_\mathrm{DC}\|_2^2$.
\subsection{Nadir}
Finally, with the help of Theorem~\ref{th:nonadir-con}, we can determine the conditions that the parameters of DC and VI must satisfy to eliminate Nadir of the system frequency.
\begin{thm}[Nadir elimination under DC and VI]\label{thm:no-nadir-cond-VI}
Under Assumptions~\ref{ass:proportion} and \ref{ass:step}:
\begin{itemize}
\item
for $\hat{T}_{\omega \mathrm{p, DC}}$,
the tuning region that eliminates Nadir through DC is $r_{\mathrm{r}}^{-1}$ such that
\begin{align}\label{eq:nadir-DC}
r_{\mathrm{r}}^{-1} \leq m\left(\tau^{-1} - 2\sqrt{\tau^{-1}r_{\mathrm{t}}^{-1}/m}\right)-d\;;
\end{align}
\item
for $\hat{T}_{\omega \mathrm{p, VI}}$,
the tuning region that eliminates Nadir through VI is $(r_{\mathrm{r}}^{-1}, m_{\mathrm{v}})$ such that
\begin{align}\label{eq:nadir-VI}
r_{\mathrm{r}}^{-1} \!\leq \! \left(m\!+\! m_{\mathrm{v}}\right)\!\left(\tau^{-1}\! -\! 2\sqrt{\tau^{-1}r_{\mathrm{t}}^{-1}\!/\!\left(m\!+\! m_{\mathrm{v}}\right)}\right)\!-d\;.
\end{align}
\end{itemize}
\end{thm}
\begin{proof}
We start by deriving the Nadir elimination condition for VI.
The system frequency of $\hat{T}_{\omega \mathrm{p, VI}}$ is given by \cite{p2017ccc}
\begin{equation*}
\bar{\omega}_\mathrm{VI}(t) = \dfrac{\sum_{i=1}^n u_{0,i} }{ \sum_{i=1}^n f_i } p_\mathrm{u,VI}(t)\;,
\end{equation*}
where $p_\mathrm{u,VI}(t)$ is the unit-step response of $\hat{h}_{{\mathrm{p},1},\mathrm{T, VI}}(s)$. Clearly, as long as $p_\mathrm{u,VI}(t)$ has no Nadir, neither does $\bar{\omega}_\mathrm{VI}(t)$. Thus the key is to apply Theorem~\ref{th:nonadir-con} to $\hat{h}_{{\mathrm{p},1},\mathrm{T, VI}}(s)$.
Substituting \eqref{eq:go-sw-tb} and \eqref{eq:co-vi} to \eqref{eq:hp-s} yields
\begin{align*}
\hat{h}_{\mathrm{p},1,\mathrm{T,VI}}(s)
= \frac{1}{\check{m}}\dfrac{s + \tau^{-1}}{ s^2 + 2\xi\omega_\mathrm{n} s + \omega_\mathrm{n}^2 }\;,\nonumber
\end{align*}
where $\omega_\mathrm{n} := \sqrt{\cfrac{\check{d} +r_{\mathrm{t}}^{-1}}{\check{m}\tau}}\;,\quad
\xi := \dfrac{\tau^{-1}+\check{d}/\check{m}}{2\sqrt{\left(\check{d} +r_{\mathrm{t}}^{-1}\right)/\left(\check{m}\tau\right)}}\;.$
Now we are ready to search for the Nadir-elimination tuning region by means of Theorem~\ref{th:nonadir-con}. An easy computation shows the following inequality: $2\xi\omega_\mathrm{n} - \tau^{-1} = \check{d}/\check{m} < \left(\check{d} +r_{\mathrm{t}}^{-1}\right)/\check{m}=\omega^2_\mathrm{n} \tau$.
Equivalently, it holds that $\xi < \left[1/\left(\omega_\mathrm{n}\tau\right)+ \omega_\mathrm{n} \tau\right]/2$,
which indicates that the second set of conditions in \eqref{eq:nonadir-con} cannot be satisfied. Hence, we turn to the first set of conditions in \eqref{eq:nonadir-con}, which holds if and only if $\xi \geq 1$ and $\xi\omega_\mathrm{n} \leq \tau^{-1}$. Via simple algebraic computations, this is equivalent to
\begin{align}\label{eq:cond-nonadir}
\tau \check{d}^2 /\check{m} - 2 \check{d} + \tau^{-1}\check{m} - 4r_{\mathrm{t}}^{-1} \!\geq 0 \quad\text{and}\quad
\check{d}/\check{m} \!\leq \tau^{-1}.
\end{align}
The first condition in \eqref{eq:cond-nonadir} can be viewed as a quadratic inequality with respect to $\check{d}$, which holds if and only if
\begin{align*}
\check{d} \leq \check{m}\left(\tau^{-1} - 2\sqrt{ \cfrac{r_{\mathrm{t}}^{-1}}{\check{m}\tau}}\right)
\quad\text{or}\quad
\check{d} \geq \check{m}\left(\tau^{-1} + 2\sqrt{ \cfrac{r_{\mathrm{t}}^{-1}}{\check{m}\tau}}\right)\,.
\end{align*}
However, only the former region satisfies the second condition in \eqref{eq:cond-nonadir}. This concludes the proof of the second statement. The first statement follows trivially by setting $m_{\mathrm{v}}=0$.
\end{proof}
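The boundary of the tuning region \eqref{eq:nadir-VI} can be probed numerically: at $\check{d} = \check{m}\left(\tau^{-1} - 2\sqrt{r_\mathrm{t}^{-1}/(\check{m}\tau)}\right)$ the closed loop is exactly critically damped ($\xi = 1$) with $\xi\omega_\mathrm{n}\leq\tau^{-1}$, while a slightly larger $\check{d}$ violates $\xi \geq 1$. A minimal sketch with arbitrary sample values (not from the paper):

```python
import math

# Second-order parameters (xi, omega_n) of h_{p,1,T,VI} from the proof;
# m_chk = \check{m}, rt_inv = r_t^{-1}; sample values are arbitrary.
m_chk, tau, rt_inv = 1.0, 5.0, 0.002

def damping(d_chk):
    """Return (xi, omega_n) for a given \\check{d} = d + r_r^{-1}."""
    wn = math.sqrt((d_chk + rt_inv) / (m_chk * tau))
    xi = (1.0 / tau + d_chk / m_chk) / (2.0 * wn)
    return xi, wn

# Boundary on \check{d} implied by eq:nadir-VI (right-hand side plus d)
d_bound = m_chk * (1.0 / tau - 2.0 * math.sqrt(rt_inv / (m_chk * tau)))

xi_b, wn_b = damping(d_bound)
assert abs(xi_b - 1.0) < 1e-9            # critically damped exactly at the boundary
assert xi_b * wn_b <= 1.0 / tau + 1e-12  # second condition also holds there

xi_in, wn_in = damping(0.9 * d_bound)    # inside the region: both conditions hold
assert xi_in >= 1.0 and xi_in * wn_in <= 1.0 / tau
xi_out, _ = damping(1.1 * d_bound)       # just outside: xi < 1, Nadir reappears
assert xi_out < 1.0
```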
Important inferences can be made from Theorem~\ref{thm:no-nadir-cond-VI}. The fact that a small $m$ tends to make the term on the right hand side of \eqref{eq:nadir-DC} negative implies that in a low-inertia power system it is impossible to eliminate Nadir using only DC. Undoubtedly, the addition of $m_\mathrm{v}$ makes the tuning region in \eqref{eq:nadir-VI} more accessible, which indicates that VI can help a low-inertia power system improve Nadir.
We end this section by summarizing the pros and cons of each controller.
\begin{itemize}
\item \textbf{Droop control:}~With only one parameter $r_{\mathrm{r}}^{-1}$, DC can neither reduce the frequency variance nor the synchronization cost without affecting the steady-state effort share. Moreover, for low-inertia systems, DC cannot eliminate Nadir.
\item \textbf{Virtual inertia:}~VI can use its additional dynamic parameter $m_\mathrm{v}$ to eliminate the system Nadir and to somewhat improve the synchronization cost. However, this comes at the price of introducing a large frequency variance in response to noise, and the improvement cannot be decoupled from an increase in the steady-state effort share.
\end{itemize}
\subsection{Steady-state Effort Share}
We can show that iDroop is able to preserve the steady-state behavior given by DC and VI.
\begin{cor}[Synchronous frequency under iDroop]\label{lem:syn-fre-idroop}
Let Assumption~\ref{ass:step} hold. If $q_{\mathrm{r},i}$ is under the control law \eqref{eq:dy-idroop}, then the steady-state frequency deviation of the system $\hat{T}_{\omega \mathrm{p, iDroop}}$ synchronizes to the synchronous frequency given by \eqref{eq:ome-syn-dc}.
\end{cor}
\begin{proof}
The result follows directly from Lemma~\ref{lem:syn-fre}.
\end{proof}
\begin{cor}[Steady-state effort share of iDroop]\label{thm:ss-idroop}
Let Assumption~\ref{ass:step} hold. If $q_{\mathrm{r},i}$ is under the control law \eqref{eq:dy-idroop}, then the steady-state effort share of the system $\hat{T}_{\omega \mathrm{p, iDroop}}$ is given by \eqref{eq:es-ratio}.
\end{cor}
\begin{proof}
The result follows directly from Theorem~\ref{thm:ss-es} applied to \eqref{eq:dy-idroop}.
\end{proof}
Corollaries~\ref{lem:syn-fre-idroop} and~\ref{thm:ss-idroop} show that iDroop achieves the same synchronous frequency and steady-state effort share as DC and VI, both of which depend on $r_{\mathrm{r},i}^{-1}$. Note that, besides $r_{\mathrm{r},i}^{-1}$, iDroop provides two additional degrees of freedom through $\delta_i$ and $\nu_i$.
\subsection{Power Fluctuations and Measurement Noise}
The next theorem quantifies the frequency variance under iDroop through the squared $\mathcal{H}_2$ norm of the system $\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}$.
\begin{cor}[Frequency variance under iDroop]\label{thm:noise-idroop}
Let Assumptions~\ref{ass:proportion} and \ref{ass:noise} hold. The squared $\mathcal{H}_2$ norm of $\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}$ is given by
\begin{align}
& \|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2}\label{eq:h2-idroop}\\&= \sum_{k=1}^n \Gamma_{kk} {\dfrac{(\kappa_\mathrm{p}^2 + r_\mathrm{r}^{-2} \kappa_\omega^2) m \delta^2 + (\kappa_\mathrm{p}^2+\nu^2 \kappa_\omega^2)\left(\check{d} \delta + \lambda_k \right) }{2m\left[\check{d}m\delta^2 + (d+\nu)\left(\check{d}\delta + \lambda_k \right)\right]}}.\nonumber
\end{align}
\end{cor}
\begin{proof}
The proof is based on Theorem~\ref{thm:h2-sum} and Lemma~\ref{lm:h2-4th}.
Applying \eqref{eq:go-sw} and \eqref{eq:co-idroop} to \eqref{eq:hp-s} and \eqref{eq:homega-s} shows
$\hat{h}_{{\mathrm{p},k},\mathrm{iDroop}}(s)$ is a transfer function with $b_4=a_0=b_0=0, a_1 = \left(\lambda_k \delta\right)/m, b_1 =0, a_2 = \left(\check{d} \delta + \lambda_k\right)/m, b_2 = \delta/m, a_3 = \left(m \delta + d +\nu\right)/m, b_3 =1/m$, while $\hat{h}_{{\omega,k},\mathrm{iDroop}}(s)$ is a transfer function with $b_4=a_0=b_0=0, a_1 = \left(\lambda_k \delta\right)/m, b_1 =0, a_2 = \left(\check{d} \delta + \lambda_k\right)/m, b_2 = -\left(r_\mathrm{r}^{-1}\delta\right)/m, a_3 = \left(m \delta + d +\nu\right)/m, b_3 =-\nu/m$.
Thus, by Lemma~\ref{lm:h2-4th},
\begin{align*}
\|\hat{h}_{{\mathrm{p},k},\mathrm{iDroop}}\|_{\mathcal{H}_2}^2={\dfrac{ m \delta^2 + \check{d} \delta + \lambda_k }{2m\left[\check{d}m\delta^2 + (d+\nu)\left(\check{d} \delta + \lambda_k \right)\right]}} \;,\\
\|\hat{h}_{{\omega,k},\mathrm{iDroop}}\|_{\mathcal{H}_2}^2={\dfrac{ r_\mathrm{r}^{-2} m \delta^2 + \nu^2 \left(\check{d} \delta + \lambda_k \right) }{2m\left[\check{d}m\delta^2 + (d+\nu)\left(\check{d} \delta + \lambda_k \right)\right]}}\;.
\end{align*}
Then \eqref{eq:h2-idroop} follows from Theorem~\ref{thm:h2-sum}.
\end{proof}
The explicit expression for $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2}$ given in Corollary~\ref{thm:noise-idroop} is useful for showing that iDroop can reduce the frequency variance relative to DC and VI. Given that $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{VI}}\|^2_{\mathcal{H}_2}$ is infinite, the question is whether we can find a set of values of the parameters $\delta$ and $\nu$ that ensures $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2} \leq \|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|^2_{\mathcal{H}_2}$. Fortunately, we can find not only such a set but also the optimal setting for \eqref{eq:h2-idroop}. The following three lemmas lay the foundation of this important result, which is given as Theorem~\ref{thm:h2-improves}.
\begin{lem}[Limit of $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2}$]\label{lem:h2lim}
Let Assumptions~\ref{ass:proportion} and \ref{ass:noise} hold. If $\delta \to \infty$, then $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2} = \|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|^2_{\mathcal{H}_2}$.
\end{lem}
\begin{proof}
The limit of \eqref{eq:h2-idroop} as $\delta \to \infty$ can be computed as
\[
\underset{\delta \to \infty}{\lim} \|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2}
\!=\! \sum_{k=1}^n \Gamma_{kk} {\dfrac{\kappa_\mathrm{p}^2 + r_\mathrm{r}^{-2} \kappa_\omega^2 }{2m\check{d}}} \!=\! \|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|^2_{\mathcal{H}_2}\,,
\]
where the second equality follows from \eqref{eq:noise-DC}.
\end{proof}
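Lemma~\ref{lem:h2lim} admits a direct numerical check: evaluating the per-mode term of \eqref{eq:h2-idroop} at a very large $\delta$ should reproduce the per-mode DC value $(\kappa_\mathrm{p}^2 + r_\mathrm{r}^{-2}\kappa_\omega^2)/(2m\check{d})$ for every $\lambda_k$, since the weights $\Gamma_{kk}$ are common to both sums. A sketch with arbitrary sample parameters (not from the paper):

```python
# Per-mode term of eq:h2-idroop versus the per-mode DC value from eq:noise-DC;
# parameter values are arbitrary samples.
m, d, r_inv, kp, kw = 0.1, 0.02, 1.0, 1.0, 0.5
dch = d + r_inv   # \check{d}

def idroop_mode(delta, nu, lam):
    num = (kp**2 + r_inv**2 * kw**2) * m * delta**2 \
        + (kp**2 + nu**2 * kw**2) * (dch * delta + lam)
    den = 2 * m * (dch * m * delta**2 + (d + nu) * (dch * delta + lam))
    return num / den

dc_mode = (kp**2 + r_inv**2 * kw**2) / (2 * m * dch)

# For very large delta the iDroop term approaches the DC value, for every lambda_k
for lam in (0.0, 1.0, 7.5):
    assert abs(idroop_mode(1e9, 0.3, lam) - dc_mode) < 1e-6
```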
Lemma~\ref{lem:h2lim} shows that $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|_{\mathcal{H}_2}^2$ asymptotically converges to $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|_{\mathcal{H}_2}^2$ as $\delta \to \infty$.
The next lemma shows that this convergence is monotonic, from either above or below, depending on the value of the parameter $\nu$.
\begin{lem}[$\nu$-dependent monotonicity of $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{ iDroop}}\|_{\mathcal{H}_2}^2$ with respect to $\delta$ ]\label{lem:mono-alp}
Let Assumptions~\ref{ass:proportion} and \ref{ass:noise} hold. Define
\begin{equation}
\alpha_1 (\nu)
:= \dfrac{- \check{d}\kappa_\omega^2 \nu^2 + \left(\kappa_\mathrm{p}^2 + r_\mathrm{r}^{-2}\kappa_\omega^2 \right)\nu + d r_\mathrm{r}^{-2} \kappa_\omega^2 - r_\mathrm{r}^{-1}\kappa_\mathrm{p}^2}{d+\nu} \nonumber\,.\label{eq:alpha1}
\end{equation}
Then
\begin{itemize}
\item
$\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|_{\mathcal{H}_2}^2$ is a monotonically increasing or decreasing function of $\delta > 0$ if and only if $\alpha_1 (\nu)$ is positive or negative, respectively.
\item
$\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|_{\mathcal{H}_2}^2$ is independent of $\delta>0$ if and only if $\alpha_1 (\nu)$ is zero.
\end{itemize}
\end{lem}
\begin{proof}
Since $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|_{\mathcal{H}_2}^2$ is a function of $\delta$ and $\nu$, in what follows we denote it by $\Pi(\delta, \nu)$. To make it clear how $\Pi(\delta, \nu)$ changes with $\delta$, we first rewrite it in the equivalent form
\begin{equation*}
\Pi(\delta, \nu) = \sum_{k=1}^n \Gamma_{kk} \left[\dfrac{\alpha_1(\nu) \delta^2}{\alpha_2 \delta^2+\alpha_3(\nu) \delta +\alpha_4(\nu, \lambda_k)} + \alpha_5(\nu) \right]\
\end{equation*}
with
\begin{subequations} \label{eq:alpha}
\begin{align*}
&\alpha_1(\nu) := \dfrac{- \check{d}\kappa_\omega^2 \nu^2 + \left(\kappa_\mathrm{p}^2 + r_\mathrm{r}^{-2}\kappa_\omega^2 \right)\nu + d r_\mathrm{r}^{-2} \kappa_\omega^2 - r_\mathrm{r}^{-1}\kappa_\mathrm{p}^2}{d+\nu}\;,\\
&\alpha_2 := 2m\check{d}\;,\qquad\;
\alpha_3(\nu) := 2(d+\nu)\check{d}\;, \\
&\alpha_4(\nu, \lambda_k) := 2(d+\nu)\lambda_k\;,\qquad
\alpha_5(\nu) := \dfrac{\kappa_\mathrm{p}^2+\nu^2 \kappa_\omega^2}{2m(d+\nu)}\;.
\end{align*}
\end{subequations}
We then take the partial derivative of $\Pi(\delta, \nu)$ with respect to $\delta$ as
\begin{align*}
\partial_{\delta} \Pi(\delta, \nu)
=\! \alpha_1(\nu) \sum_{k=1}^n \Gamma_{kk} \!\left[ \dfrac{ \alpha_3(\nu) \delta^2 +2 \alpha_4(\nu, \lambda_k) \delta}{(\alpha_2 \delta^2+\alpha_3(\nu) \delta +\alpha_4(\nu, \lambda_k))^2}\right].
\end{align*}
Since $m > 0$, $d > 0$, $\nu > 0$, and $r_\mathrm{r}^{-1} > 0$, $\alpha_2$ and $\alpha_3(\nu)$ are positive. Also, given that all the eigenvalues of the scaled Laplacian matrix $L_\mathrm{F}$ are non-negative, $\alpha_4(\nu, \lambda_k)$ must be non-negative. Thus, $\forall \delta > 0$, $( \alpha_3(\nu) \delta^2 +2 \alpha_4(\nu, \lambda_k) \delta)/(\alpha_2 \delta^2+\alpha_3(\nu) \delta +\alpha_4(\nu, \lambda_k))^2 > 0$.
Recall from the proof of Corollary~\ref{cor:optimal-h2-dc} that $\Gamma_{kk} > 0$, $\forall k \in \{1,\dots, n\}$. Therefore, $\forall \delta > 0$, $\mathrm{sign} \left( \partial_{\delta} \Pi(\delta, \nu) \right) = \mathrm{sign} \left( \alpha_1(\nu) \right)$.
\end{proof}
By Lemma~\ref{lem:mono-alp}, for a given $\nu$, if $\alpha_1(\nu) < 0$, then $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2}$ always decreases as $\delta$ increases. However, according to Lemma~\ref{lem:h2lim}, even if $\delta \to \infty$, we can only obtain $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2} = \|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|^2_{\mathcal{H}_2}$. Similarly, if $\alpha_1 (\nu) = 0$, then $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2}$ remains constant as $\delta$ increases, so for any $\delta$ we again obtain $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2} = \|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|^2_{\mathcal{H}_2}$. Therefore, iDroop cannot outperform DC when $\alpha_1 (\nu) \le 0$. In other words, Lemmas~\ref{lem:h2lim} and \ref{lem:mono-alp} imply that in order to improve the frequency variance through iDroop, one needs to set $\nu$ such that $\alpha_1(\nu) > 0$ and $\delta$ as small as practically possible. The following lemma characterizes the minimizer $\nu^{\star}$ of $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|_{\mathcal{H}_2}^2$ when $\delta = 0$.
\begin{lem}[Minimizer $\nu^{\star}$ of $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|_{\mathcal{H}_2}^2$ when $\delta = 0$]\label{lem:vstar}
Let Assumptions~\ref{ass:proportion} and \ref{ass:noise} hold. Then
\begin{equation}\label{eq:vstar}
\nu^{\star}\!\!:=\! \argmin_{\delta = 0, \nu > 0} \!\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|_{\mathcal{H}_2}^2 \!\!=\! -d +\! \sqrt{d^2 + (\kappa_{\mathrm{p}}/\kappa_{\omega})^2}\, .
\end{equation}
\end{lem}
\begin{proof}
Recall from the proof of Lemma~\ref{lem:mono-alp} that $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|_{\mathcal{H}_2}^2 = \Pi(\delta, \nu)$. Then we have
\begin{equation*}
\Pi(0, \nu) = \dfrac{\kappa_\mathrm{p}^2+\nu^2 \kappa_\omega^2}{2m(d + \nu)} \sum_{k=1}^n \Gamma_{kk}\;,
\end{equation*}
whose derivative with respect to $\nu$ is given by
\begin{equation}
\Pi'(0, \nu) = \dfrac{\kappa_\omega^2\nu^2 + 2 d\kappa_\omega^2 \nu - \kappa_\mathrm{p}^2}{2m (d+\nu)^2} \sum_{k=1}^n \Gamma_{kk}\;. \label{eq:g'(0,nu)}
\end{equation}
Note that \eqref{eq:g'(0,nu)} and \eqref{eq:h2-dc-partial} are in the same form. Thus, $\nu^{\star}$ is determined in the same way as in the proof of Corollary~\ref{cor:optimal-h2-dc}.
\end{proof}
We are now ready to prove the next theorem.
\begin{thm}[$\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|_{\mathcal{H}_2}^2$ optimal tuning]\label{thm:h2-improves}
Let Assumptions~\ref{ass:proportion} and \ref{ass:noise} hold. Define $\nu^{\star}$ as in \eqref{eq:vstar}. Then
\begin{itemize}
\item whenever $(\kappa_\mathrm{p}/\kappa_\omega)^2 \neq 2r_\mathrm{r}^{-1}d + r_\mathrm{r}^{-2}$, for any $\delta > 0$ and $\nu$ such that
\begin{equation}\label{eq:condition}
\nu \in [\nu^{\star},r_\mathrm{r}^{-1}) \quad \text{or}\quad \nu \in (r_\mathrm{r}^{-1},\nu^\star]\;,
\end{equation}
iDroop outperforms DC in terms of frequency variance, i.e.,
\begin{equation*}
\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2}<\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|^2_{\mathcal{H}_2}\;.
\end{equation*}
Moreover, the global minimum of $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2}$ is obtained by setting $\delta \to 0$ and $\nu \to \nu^{\star}$.
\item if $(\kappa_\mathrm{p}/\kappa_\omega)^2 = 2r_\mathrm{r}^{-1}d + r_\mathrm{r}^{-2}$, then for any $\delta > 0$, by setting $\nu \to \nu^{\star} = r_\mathrm{r}^{-1}$, iDroop matches DC in terms of frequency variance, i.e.,
\begin{equation*}
\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2}=\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|^2_{\mathcal{H}_2}\;.
\end{equation*}
\end{itemize}
\end{thm}
\begin{proof}
As discussed before, to guarantee $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2} < \|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|^2_{\mathcal{H}_2}$, one must set $\nu$ such that $\alpha_1 (\nu) > 0$. In this case, $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2}$ always increases as $\delta$ increases, so choosing $\delta$ arbitrarily small is optimal for any fixed $\nu$.
We now look for the values of $\nu$ that satisfy the requirement $\alpha_1 (\nu) > 0$. Since the denominator of $\alpha_1 (\nu) $ is always positive, the sign of $\alpha_1 (\nu)$ only depends on its numerator. Denote the numerator of $\alpha_1 (\nu)$ as $N_{\alpha_1}(\nu)$. Clearly, $N_{\alpha_1}(\nu)$ is a univariate quadratic function in $\nu$, whose roots are: $\nu_1 = r_\mathrm{r}^{-1}$ and $\nu_2 = \left[(\kappa_\mathrm{p}/\kappa_\omega)^2 - r_\mathrm{r}^{-1} d \right]/\check{d}$.
Since the leading coefficient of $N_{\alpha_1}(\nu)$ is negative, the graph of $N_{\alpha_1}(\nu)$ is a parabola that opens downwards. Therefore, if $\nu_1 < \nu_2$, then $\nu \in (\nu_1, \nu_2)$ guarantees $\alpha_1 (\nu) > 0$; if $\nu_1 > \nu_2$, then $\nu \in (\nu_2, \nu_1)\cap(0, \infty)$ guarantees $\alpha_1 (\nu) > 0$. Notably, if $\nu_1 = \nu_2$, there exists no feasible value of $\nu$ that makes $\alpha_1 (\nu) > 0$.
The condition $\nu_1 = \nu_2$ happens only if $(\kappa_\mathrm{p}/\kappa_\omega)^2 = 2r_\mathrm{r}^{-1} d + r_\mathrm{r}^{-2}$, from which it follows that $\nu^{\star} = r_\mathrm{r}^{-1} = \nu_1 = \nu_2$. Then $\alpha_1 (\nu^{\star}) = \alpha_1 (r_\mathrm{r}^{-1}) = 0$. Therefore, by setting $\nu \to \nu^{\star} = r_\mathrm{r}^{-1}$, we get $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|^2_{\mathcal{H}_2}=\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|^2_{\mathcal{H}_2}$. This concludes the proof of the second part.
We now focus on the case where the set $S=(\nu_1, \nu_2)\cup\left[(\nu_2, \nu_1)\cap(0, \infty)\right]$
is nonempty. Recall from the proof of Lemma~\ref{lem:mono-alp} that $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{iDroop}}\|_{\mathcal{H}_2}^2 = \Pi(\delta, \nu)$. For any fixed $\nu \in S$, it holds that $\alpha_1 (\nu) > 0$ and thus $\Pi(\delta, \nu) > \Pi(0, \nu)$ for any $\delta >0$. Recall from the proof of Lemma~\ref{lem:vstar} that $\nu^{\star}$ is the minimizer of $\Pi(0, \nu)$. Hence, $(0, \nu^{\star})$ globally minimizes $\Pi(\delta, \nu)$ as long as $\nu^{\star}\in S$. In fact, we will show next that $\nu^{\star}$ is always within $S$ whenever $S\neq\emptyset$.
First, we consider the case when $\nu_1 < \nu_2$, which implies that $(\kappa_\mathrm{p}/\kappa_\omega)^2 > 2r_\mathrm{r}^{-1} d + r_\mathrm{r}^{-2}$. Then we have $\nu^{\star} >-d+\sqrt{d^2 + 2r_\mathrm{r}^{-1}d + r_\mathrm{r}^{-2}} = r_\mathrm{r}^{-1} = \nu_1$.
We also want to show $\nu^{\star} < \nu_2$ which holds if and only if
\begin{align*}
\sqrt{d^2 + (\kappa_\mathrm{p}/\kappa_\omega)^2} \!<\! \dfrac{(\kappa_\mathrm{p}/\kappa_\omega)^2 - r_\mathrm{r}^{-1} d }{\check{d}} + d = \dfrac{(\kappa_\mathrm{p}/\kappa_\omega)^2 + d^2 }{\check{d}}
\end{align*}
which is equivalent to $1 < \sqrt{d^2 + (\kappa_\mathrm{p}/\kappa_\omega)^2}/\check{d}$.
This always holds since $(\kappa_\mathrm{p}/\kappa_\omega)^2 > 2r_\mathrm{r}^{-1} d + r_\mathrm{r}^{-2}$. Thus, $\nu_1 < \nu^{\star} < \nu_2$.
Similarly, we can prove that in the case when $\nu_1 > \nu_2$, $\nu_2 < \nu^{\star} < \nu_1$ holds and thus $\nu^{\star} \in (\nu_2, \nu_1)\cap(0, \infty)$.
It follows that $(0, \nu^{\star})$ is the global minimizer of $\Pi(\delta, \nu)$.
Finally, by Lemma~\ref{lem:h2lim}, $\lim_{\delta\to\infty}\Pi(\delta, \nu) = \|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|^{2}_{\mathcal{H}_2}$. The condition \eqref{eq:condition} guarantees $\nu \in S$ and thus $\alpha_1(\nu) > 0$. Then, by Lemma~\ref{lem:mono-alp}, we have $\|\hat{T}_{\omega\mathrm{dn}, \mathrm{DC}}\|^{2}_{\mathcal{H}_2} > \Pi(\delta, \nu)$ for any finite $\delta > 0$.
This concludes the proof of the first part.
\end{proof}
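Theorem~\ref{thm:h2-improves} can likewise be illustrated numerically. The sketch below (arbitrary sample parameters, chosen so that $(\kappa_\mathrm{p}/\kappa_\omega)^2 > 2r_\mathrm{r}^{-1}d + r_\mathrm{r}^{-2}$, i.e., the $\nu_1 < \nu_2$ case) checks that $\nu^{\star}$ falls strictly between $\nu_1$ and $\nu_2$ and that the per-mode iDroop variance with $\nu = \nu^{\star}$ stays below the DC value over a range of $\delta$ and $\lambda_k$:

```python
import math

# Arbitrary sample parameters in the non-degenerate case (nu_1 < nu_2)
d, m, r_inv, kp, kw = 0.02, 0.1, 1.0, 1.0, 0.5
dch = d + r_inv   # \check{d}

def pi_mode(delta, nu, lam):
    # Per-mode term of eq:h2-idroop; the positive weights Gamma_kk factor out.
    num = (kp**2 + r_inv**2 * kw**2) * m * delta**2 \
        + (kp**2 + nu**2 * kw**2) * (dch * delta + lam)
    den = 2 * m * (dch * m * delta**2 + (d + nu) * (dch * delta + lam))
    return num / den

dc_val = (kp**2 + r_inv**2 * kw**2) / (2 * m * dch)    # per-mode DC value
nu_star = -d + math.sqrt(d**2 + (kp / kw)**2)          # eq:vstar
nu1, nu2 = r_inv, ((kp / kw)**2 - r_inv * d) / dch     # roots of N_{alpha_1}

assert (kp / kw)**2 != 2 * r_inv * d + r_inv**2   # non-degenerate case
assert nu1 < nu_star < nu2                        # nu* lies inside S
for delta in (0.01, 0.1, 1.0, 10.0):
    for lam in (0.0, 1.0, 7.5):
        assert pi_mode(delta, nu_star, lam) < dc_val   # iDroop beats DC
```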
Theorem~\ref{thm:h2-improves} shows that, to optimally improve the frequency variance, iDroop needs to first set $\delta$ arbitrarily close to zero. Interestingly, this implies that the transfer function $\hat{c}_\mathrm{o}(s)\approx-\nu$ except for $\hat{c}_\mathrm{o}(0)=-r_\mathrm{r}^{-1}$. In other words, iDroop uses its first-order lead/lag property to effectively decouple the dc gain $\hat{c}_\mathrm{o}(0)$ from the gain at all other frequencies, so that $\hat{c}_\mathrm{o}(\boldsymbol{j\omega})\approx -\nu$. This decoupling is particularly easy to understand in two special regimes: (i) if $\kappa_\mathrm{p}\ll \kappa_\omega$, the system is dominated by measurement noise and therefore $\nu^{\star} \approx 0 <r_\mathrm{r}^{-1}$, which makes iDroop a lag compensator; thus, by using lag compensation (setting $\nu<r_\mathrm{r}^{-1}$), iDroop can attenuate frequency noise;
(ii) if $\kappa_\mathrm{p} \gg \kappa_\omega$, the system is dominated by power fluctuations and therefore $\nu^\star \approx \kappa_\mathrm{p}/\kappa_\omega >r_\mathrm{r}^{-1}$, which makes iDroop a lead compensator; thus, by using lead compensation (setting $\nu>r_\mathrm{r}^{-1}$), iDroop can mitigate power fluctuations.
\subsection{Synchronization Cost}
Theorem~\ref{thm:bound-cost} implies that the bounds on the synchronization cost of $\hat{T}_{\omega \mathrm{p, iDroop}}$ are closely related to $\| \hat{h}_{\mathrm{u},k,\mathrm{iDroop}} \|_{\mathcal{H}_2}^2$. If we can find a tuning that forces $\| \hat{h}_{\mathrm{u},k,\mathrm{iDroop}}\|_{\mathcal{H}_2}^2$ to be zero, then both the lower and upper bounds on the synchronization cost converge to zero, and zero synchronization cost is achieved. The next theorem addresses this problem.
\begin{thm}[Zero synchronization cost tuning of iDroop]
\label{th:zero-syn-idroop}
Let Assumptions~\ref{ass:proportion} and \ref{ass:step} hold. Then a zero synchronization cost of the system $\hat{T}_{\omega \mathrm{p, iDroop}}$, i.e., $\|\tilde{\omega}_\mathrm{iDroop}\|_2^2 = 0$, can be achieved by setting $\delta \to 0$ and $\nu \to \infty$.
\end{thm}
\begin{proof}
The key is to show that $\| \hat{h}_{\mathrm{u},k,\mathrm{iDroop}} \|_{\mathcal{H}_2}^2 \to 0$ as $\delta \to 0$ and $\nu \to \infty$, for which we use Lemma~\ref{lm:h2-4th}. Applying \eqref{eq:go-sw-tb} and \eqref{eq:co-idroop} to \eqref{eq:hp-s} shows that $\hat{h}_{\mathrm{u},k,\mathrm{iDroop}}(s)=\hat{h}_{\mathrm{p},k+1,\mathrm{T,iDroop}}(s)/s$ is a transfer function with
\begin{subequations}
\begin{align*}
a_0 =& \frac{\lambda_{k+1}\delta}{m \tau}\;,
\qquad b_0 = \frac{\delta}{m \tau}\;,\\
a_1 =& \frac{ \delta(\check{d}+ r_{\mathrm{t}}^{-1}+\lambda_{k+1}\tau) + \lambda_{k+1}}{m \tau}\;,
\qquad b_1 = \frac{\delta \tau + 1}{m \tau}\;,\\
a_2 =& \frac{\delta(m+ \check{d} \tau ) + d + r_{\mathrm{t}}^{-1} + \lambda_{k+1} \tau + \nu }{m \tau},\qquad b_2 =\! \frac{1}{m}\;,\\
a_3 =& \frac{m \delta \tau + m + d \tau + \nu \tau}{m \tau},
\qquad b_3 = 0\;,\qquad b_4 = 0\;.
\end{align*}
\end{subequations}
Considering that $a_0\to0$ and $b_0\to0$ as $\delta \to 0$ and $\nu \to \infty$,
we can employ the $\mathcal{H}_2$ norm computation formula for the third-order transfer function in Remark~\ref{rem:h2-3rd}. Then
\begin{align*}
\underset{\delta \to 0, \nu \to \infty}{\lim} \|\hat{h}_{\mathrm{u},k,\mathrm{iDroop}}\|_{\mathcal{H}_2}^2
=&\underset{\delta \to 0, \nu \to \infty}{\lim} \frac{\frac{\nu}{m} \left(\frac{1}{m \tau}\right)^2+\frac{\lambda_{k+1} }{m \tau} \left(\frac{1}{m}\right)^2}{2 \frac{\lambda_{k+1} }{m \tau} \left(\frac{\nu}{m \tau} \frac{\nu}{m}- \frac{\lambda_{k+1} }{m \tau}\right)}
=0\,.
\end{align*}
Thus by Theorem~\ref{thm:bound-cost}, $\underline{\|\tilde{\omega}_\mathrm{iDroop}\|_2^2}=\overline{\|\tilde{\omega}_\mathrm{iDroop}\|_2^2}=0$, which forces $\|\tilde{\omega}_\mathrm{iDroop}\|_2^2 = 0$.
\end{proof}
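The limit in the proof can also be checked numerically by evaluating the displayed $\mathcal{H}_2$-norm expression directly; the sketch below uses unit placeholder values for $m$, $\tau$ and $\lambda_{k+1}$, which are not tied to any particular system:

```python
# Sketch: the H2-norm expression from the proof (after delta -> 0)
# decays to zero as nu grows. m, tau, lam are unit placeholders.
def h2_sq(nu, m=1.0, tau=1.0, lam=1.0):
    num = (nu / m) * (1.0 / (m * tau))**2 + (lam / (m * tau)) * (1.0 / m)**2
    den = 2.0 * (lam / (m * tau)) * ((nu / (m * tau)) * (nu / m) - lam / (m * tau))
    return num / den

print([h2_sq(nu) for nu in (1e1, 1e3, 1e6)])  # decreasing towards zero
```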
Theorem~\ref{th:zero-syn-idroop} shows that, unlike DC and VI, which require changes to $r_\mathrm{r}^{-1}$ to arbitrarily reduce the synchronization cost, iDroop can achieve zero synchronization cost without affecting the steady-state effort share. Naturally, $\delta\approx0$ may lead to a slow response and $\nu\rightarrow\infty$ may hinder robustness. Thus this result should be appreciated from the viewpoint of the additional tuning flexibility that iDroop provides.
\subsection{Nadir}
Finally, we show that with $\delta$ and $\nu$ tuned appropriately, iDroop enables the system frequency of $\hat{T}_{\omega \mathrm{p, iDroop}}$ to evolve as a first-order response to step power disturbances, which effectively makes Nadir disappear. The following theorem summarizes this idea.
\begin{thm}[Nadir elimination with iDroop]\label{thm:no nadir}
Let Assumptions~\ref{ass:proportion} and \ref{ass:step} hold. By setting $\delta = \tau^{-1}$ and $\nu = r_\mathrm{r}^{-1} + r_\mathrm{t}^{-1}$, Nadir \eqref{eq:Nadir} of $\hat{T}_{\omega \mathrm{p, iDroop}}$ disappears.
\end{thm}
\begin{proof}
The system frequency of $\hat{T}_{\omega \mathrm{p, iDroop}}$ is given by \cite{p2017ccc}
\begin{equation}\label{eq:nadir}
\bar{\omega}_\mathrm{iDroop}(t) = \dfrac{\sum_{i=1}^n u_{0,i} }{ \sum_{i=1}^n f_i } p_\mathrm{u,iDroop}(t)\;,
\end{equation}
where $p_\mathrm{u,iDroop}(t)$ is the unit-step response of $\hat{h}_{{\mathrm{p},1},\mathrm{T, iDroop}}(s)$. If we set $\delta = \tau^{-1}$ and $\nu = r_\mathrm{r}^{-1} + r_\mathrm{t}^{-1}$, then \eqref{eq:co-idroop} becomes
\begin{equation}\label{eq:co-idroop-nonadir}
\hat{c}_\mathrm{o}(s) =\frac{ r_\mathrm{t}^{-1} }{\tau s+1}-\left(r_\mathrm{r}^{-1} + r_\mathrm{t}^{-1}\right)\;.
\end{equation}
Applying \eqref{eq:go-sw-tb} and \eqref{eq:co-idroop-nonadir} to \eqref{eq:hp-s} yields
\begin{align*}
\hat{h}_{{\mathrm{p},1},\mathrm{T, iDroop}}(s)
=& \dfrac{1}{m s + \check{d} + r_{\mathrm{t}}^{-1}}\;,
\end{align*}
whose unit-step response $p_\mathrm{u,iDroop}(t)$ is a first-order evolution. Thus, \eqref{eq:nadir} indicates that Nadir of the system frequency disappears.
\end{proof}
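The absence of Nadir can be verified with a direct simulation of the first-order response derived in the proof; the parameter values below are hypothetical placeholders, not the test-system values:

```python
import math

# Sketch: unit-step response of 1/(m s + d_check + rt_inv) is monotone,
# so the extreme frequency deviation (Nadir) coincides with the steady state.
# m, d_check, rt_inv are hypothetical placeholder values.
m, d_check, rt_inv = 0.1, 0.02, 1.0
a = (d_check + rt_inv) / m
ts = [0.01 * k for k in range(1001)]
p = [(1.0 - math.exp(-a * t)) / (d_check + rt_inv) for t in ts]

assert all(p[k + 1] >= p[k] for k in range(len(p) - 1))  # monotone, no overshoot
```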
\subsection{Comparison in Step Input Scenario}\label{sub:step-simulation}
Fig.~\ref{fig:step simulation 1} shows how different controllers perform when the system suffers a step drop of $-0.3$ p.u. in power injection at bus number $2$ at time $t = \SI{1}{\second}$. For the representative inverter, we tune $\delta = \tau^{-1} = \SI{0.218}{\per\second}$ and $\nu = r_\mathrm{r}^{-1} + r_\mathrm{t}^{-1} = \SI{0.004}{\second\per\radian}$ in iDroop such that Nadir of the system frequency disappears, as suggested by Theorem~\ref{thm:no nadir}, and we tune $m_\mathrm{v} = \SI{0.022}{\square\second\per\radian}$ in VI such that the system frequency is critically damped.\footnotemark[7] The inverter parameters on each bus $i$ are defined as follows: $\delta_i := \delta$, $\nu_i := f_i \nu$, and $m_{\mathrm{v},i} := f_i m_{\mathrm{v}}$.
\footnotetext[7]{In the rest of this section, we keep tuning $m_\mathrm{v} = \SI{0.022}{\square\second\per\radian}$.}
The results are shown in Fig.~\ref{fig:step simulation 1}. One observation is that all three controllers lead to the same synchronous frequency, as predicted by Corollaries~\ref{lem:syn-fre-dc} and~\ref{lem:syn-fre-idroop}. Another observation is that, although both VI and iDroop succeed in eliminating Nadir of the system frequency --which is better than what DC achieves-- the system synchronizes at a much faster rate and lower cost under iDroop than under VI. Interestingly, the synchronization cost under VI is even slightly higher than that under DC, which indicates that the benefit of eliminating Nadir by increasing $m_\mathrm{v}$ in VI is significantly diluted by the sluggishness it introduces into the synchronization process. Finally, we highlight the huge control effort required by VI compared with DC and iDroop.
\begin{figure}[t!]
\centering
\subfigure[Frequency deviations]
{\includegraphics[width=\columnwidth]{frequency_step_sde_db.eps}}
\hfil
\subfigure[Control effort]
{\includegraphics[width=\columnwidth]{control_step_sde_db.eps}}
\hfil
\subfigure[System frequency and synchronization cost]
{\includegraphics[width=\columnwidth]{COI_cost_step_sde_db.eps}}
\caption{Comparison between controllers when a $-0.3$ p.u. step change in power injection is introduced to bus number $2$.}
\label{fig:step simulation 1}
\end{figure}
\subsection{Comparison in Noise Scenario}
\ifthenelse{\boolean{archive}}{Fig.~\ref{fig:noise simulation pdom} and Fig.~\ref{fig:noise simulation ndom} show how different controllers perform when the system encounters power fluctuations and measurement noise. Fig.~\ref{fig:noise simulation pdom} focuses on the case dominated by power fluctuations where $\kappa_\mathrm{p} = 10^{-4}$ and $\kappa_\omega = 10^{-5}$, while Fig.~\ref{fig:noise simulation ndom} corresponds to the case dominated by measurement noise where $\kappa_\mathrm{p} = 10^{-4}$ and $\kappa_\omega = 10^{-3}$. As required by Theorem~\ref{thm:h2-improves}, we tune $\delta$ to be a small value $\SI{0.1}{\per\second}$ and $\nu$ to be the optimal value $\nu^{\star}$ which is either $\SI{9.9986}{\second\per\radian}$ for $\kappa_\mathrm{p} \gg \kappa_\omega$ or $\SI{0.0986}{\second\per\radian}$ for $\kappa_\mathrm{p} \ll \kappa_\omega$.
Observe from Fig.~\ref{fig:noise simulation pdom-fre} and Fig.~\ref{fig:noise simulation ndom-fre} that setting $\delta$ small enough and $\nu$ optimally guarantees that iDroop outperforms DC in terms of noise variance, in agreement with Theorem~\ref{thm:h2-improves}. Note that, as expected from Corollary~\ref{thm:noise-VI}, the system under VI performs badly whichever type of noise dominates; therefore, whenever noise appears, the simulation results for VI are omitted throughout this section.}
{Fig.~\ref{fig:noise simulation pdom} shows how different controllers perform when the system encounters power fluctuations and measurement noise. Since in reality power fluctuations are larger than measurement noise, we focus on the case dominated by power fluctuations, where $\kappa_\mathrm{p} = 10^{-4}$ and $\kappa_\omega = 10^{-5}$. As required by Theorem~\ref{thm:h2-improves}, we tune $\delta$ to be a small value $\SI{0.1}{\per\second}$ and $\nu$ to be the optimal value $\nu^{\star}$ which is $\SI{9.9986}{\second\per\radian}$ here.
Observe from Fig.~\ref{fig:noise simulation pdom-fre} that setting $\delta$ small enough and $\nu=\nu^\star$ ensures that iDroop outperforms DC in terms of frequency variance, as predicted by Theorem~\ref{thm:h2-improves}. Note that, since VI performs badly by Corollary~\ref{thm:noise-VI}, we do not evaluate VI in the presence of stochastic disturbances.}
\begin{figure}[t!]
\centering
\subfigure[Frequency deviations]
{\includegraphics[width=\columnwidth]{frequency_noise_pdom_sde_db.eps}\label{fig:noise simulation pdom-fre}}
\hfil
\subfigure[Control effort]
{\includegraphics[width=\columnwidth]{control_noise_pdom_sde_db.eps}}
\caption{Comparison between controllers when power fluctuations and measurement noise are introduced with $\kappa_\mathrm{p} = 10^{-4}$ and $\kappa_\omega = 10^{-5}$.}
\label{fig:noise simulation pdom}
\end{figure}
\ifthenelse{\boolean{archive}}{
\begin{figure}[t!]
\centering
\subfigure[Frequency deviations]
{\includegraphics[width=\columnwidth]{frequency_noise_ndom_sde_db.eps}\label{fig:noise simulation ndom-fre}}
\hfil
\subfigure[Control effort]
{\includegraphics[width=\columnwidth]{control_noise_ndom_sde_db.eps}}
\hfil
\subfigure[Empirical PDF of frequency deviations]
{\includegraphics[width=\columnwidth]{pdf_noise_ndom_sde_db.eps}}
\caption{Comparison between controllers when power fluctuations and measurement noises are introduced with $\kappa_\mathrm{p} = 10^{-4}$ and $\kappa_\omega = 10^{-3}$.}
\label{fig:noise simulation ndom}
\end{figure}}
\subsection{Tuning for Combined Noise and Step Disturbances}\label{sub:comb-simulation}
Although our current study does not contemplate step and stochastic disturbances jointly, we illustrate here that the Nadir-eliminating tuning of iDroop from Theorem~\ref{thm:no nadir} can perform quite well in more realistic scenarios with combined step and stochastic disturbances.
In Fig.~\ref{fig:combine simulation pdom np}, we show how different controllers perform when the system is subject to a step drop of $-0.3$ p.u. in power injection at bus number $2$ at time $t = \SI{1}{\second}$, as well as power fluctuations and measurement noise. \ifthenelse{\boolean{archive}}{Since in reality power disturbances are usually an order of magnitude larger than measurement noise}{Again}, we consider the case with $\kappa_\mathrm{p} = 10^{-4}$ and $\kappa_\omega = 10^{-5}$. Here we employ the same inverter parameter setting as in the step input scenario. More precisely, we tune the inverter parameters in iDroop on each bus $i$ as follows: $\delta_i := \delta$, $\nu_i := f_i \nu$, where $\delta = \tau^{-1} = \SI{0.218}{\per\second}$ and $\nu = r_\mathrm{r}^{-1} + r_\mathrm{t}^{-1} = \SI{0.004}{\second\per\radian}$.
Some observations are in order. First, although the result is not shown here, it is no surprise that the system under VI performs badly, owing to its inability to reject noise.
Second, the performance of the system under DC and iDroop is similar to that in the step input scenario, apart from the additional noise. Last but not least, a bonus of the Nadir-eliminating tuning is that iDroop outperforms DC in frequency variance as well. This can be understood through Theorem~\ref{thm:h2-improves}. Provided that $\kappa_\mathrm{p} \gg \kappa_\omega$, we know from the definition in Lemma~\ref{lem:vstar} that $\nu^{\star} \approx \kappa_{\mathrm{p}}/\kappa_{\omega}$. Thus, for realistic values of system parameters, $\nu^{\star} \gg r_\mathrm{r}^{-1}$ always holds. It follows directly that $\nu = r_\mathrm{r}^{-1} + r_\mathrm{t}^{-1} \in (r_\mathrm{r}^{-1},\nu^{\star}]$, so by Theorem~\ref{thm:h2-improves} iDroop performs better than DC in terms of frequency variance. Further, the preceding simulation results suggest that the Nadir-eliminating tuning of iDroop, designed under the proportional-parameters assumption, works relatively well even when parameters are non-proportional.
\begin{figure}[t!]
\centering
\subfigure[Frequency deviations]
{\includegraphics[width=\columnwidth]{frequency_combine_sde_db.eps}}
\hfil
\subfigure[Empirical PDF of frequency deviations and system frequency]
{\includegraphics[width=\columnwidth]{pdf_COI_combine_sde_db.eps}}
\caption{Comparison between controllers when a $-0.3$ p.u. step change in power injection is introduced to bus number $2$ and power fluctuations and measurement noise are introduced with $\kappa_\mathrm{p} = 10^{-4}$ and $\kappa_\omega = 10^{-5}$.}
\label{fig:combine simulation pdom np}
\end{figure}
\section{Introduction}\label{sec:intro}
\input{01-intro.tex}
\section{Preliminaries} \label{sec:prelim}
\input{02-preliminaries.tex}
\section{Results}
\label{sec:result}
\input{03-results.tex}
\section{The Need for a Better Solution}\label{sec:need}
\input{04-the_need_of_a_better_solution.tex}
\section{Dynam-i-c Droop Control (iDroop)}\label{sec:idroop}
\input{05-idroop.tex}
\section{Numerical Illustrations}\label{sec:simulation}
\input{06-numerical_illustration.tex}
\section{Conclusions}\label{sec:conclusion}
\input{07-conclusion.tex}
\section{Acknowledgements}
The authors would like to acknowledge and thank Fernando Paganini, Petr Vorobev, and Janusz Bialek for their insightful comments that helped improve earlier versions of this manuscript.
\bibliographystyle{IEEEtran}
\section{Introduction}
The first detections of quasar clustering date back more than a decade
(Shaver 1984). Until recently, however, more detailed studies of the dependence
of clustering on physical parameters such as absolute magnitude and redshift
were hampered by the small number of quasars in statistically well-defined
samples. In recent times complete samples totaling about 2000 QSOs have been
used, resulting in a $4-5 \sigma$ detection of the clustering on scales of the
order of $6 h^{-1}$ comoving Mpc (Andreani \& Cristiani 1992, Mo \& Fang 1993,
Croom \& Shanks 1996). The evolution of this clustering is not clear. An
amplitude constant in comoving coordinates, or marginally decreasing with
increasing redshift, has been suggested; this amplitude appears to be
consistent with, or slightly larger than, what is observed for present-day
galaxies, and definitely less than the clustering of clusters.
\begin{figure}[t]
\epsfysize 6truecm
\epsffile{sydney1.eps}
\caption{
The integral correlation function $\bar\xi(r)$ (defined as $\bar\xi(r)
= {3\over r^3} \int_0^r x^2\xi(x)dx$) for the quasars in the SGP
sample in two redshift ranges $0.3<z\leq 1.4$, and $1.4< z\leq 2.2$.
\label{clustfig1}
}
\end{figure}
\section{Methods and results}
In an attempt to improve the situation, while waiting for the 2dF QSO redshift
survey, we have carried out a survey in the South Galactic Pole (SGP) over a
{\it connected} area of 25 square degrees down to $B_j = 20.5$ (La Franca et
al. 1998). Stacked UKSTU plates were used to select UVx candidates and the
multi-fiber spectrograph MEFOS at ESO to take spectra of them. The final
sample is made up of 388 QSOs with $0.3<z<2.2$. The data set was divided into
several luminosity, redshift and spatial sub-samples in order to study the
autocorrelation function $\xi(r)$ and the integral autocorrelation function
$\bar\xi(r)$ as a function of the comoving distance, assuming a fixed value of
$\gamma=1.8$. The two point correlation function (TPCF) analysis gives an
amplitude $r_o = (6.2\pm1.6) ~h^{-1}$ Mpc at an average redshift 1.34. While
$\bar\xi(25)=0.21\pm0.16$ is found, in agreement with estimate of Croom and
Shanks (1996) of $\bar \xi(25)=0.16\pm0.08$. However, when the evolution of
the clustering with redshift is analyzed, evidence is found for an {\it
increase} of the clustering with increasing redshift (La Franca, Andreani \&
Cristiani 1998). The sample was split into the two redshift ranges
$0.3<z\leq1.4$, and $1.4<z\leq2.2$ (Fig. 1). These were fitted by $\gamma=1.8$
power laws with $r_0$ as a free parameter. At low redshift ($z=0.97$), $r_0=
4.2$ $h^{-1}$ Mpc was found, corresponding to $\bar \xi(15) = 0.26\pm 0.27$;
while at high redshift ($z=1.82$), $r_0=9.1$ $h^{-1}$ Mpc, which corresponds to
$\bar\xi(15)=1.03\pm0.36$. The effect is small, a $2\sigma$ significant
discrepancy, but it is interestingly corroborated by other results (at lower
and higher redshift) in the literature.
\begin{figure}[t]
\epsfysize 6truecm
\epsffile{sydney2.eps}
\caption{ The amplitude of the $\bar\xi(15~h^{-1}$ Mpc) as a function of z.
Filled circles: the low- and high-$z$ SGP subsamples (filled circles); open
circle: the SGP sample plus the Boyle et al. (1990), La Franca, Cristiani and
Barbieri (1992), and Zitelli et al. (1992) samples; open triangles: same as
open circle but divided in two redshift slices; filled triangle: low-$z$ AGNs
from Boyle and Mo (1993) and Georgantopoulos and Shanks (1994); open square:
the high-$z$ sample from Kundi\'{c} 1997. The dotted line is the $\epsilon=-2.5$
clustering evolution fitted to the open triangles and the filled triangle data.
The dashed lines are the $10^{12}$ and $10^{13}$ $M_{\odot}$ $h^{-1}$ minimum
halo masses clustering evolution according to the transient model of Matarrese
et al. (1997).
}
\label{clustfig}
\end{figure}
At low redshift Boyle and Mo (1993) measured the clustering of low-$z$ QSOs in
the EMSS, while Georgantopoulos and Shanks (1994) used the IRAS point source
catalog to measure the clustering of Seyferts. Altogether a low value of the
TPCF at $15~h^{-1}$ Mpc and $z=0.05$ is obtained, $\bar\xi = 0.24 \pm 0.25$. In addition,
the data of the Palomar Transit Grism Survey (Kundi\'{c} 1997, Stephens et al.
1997) allow measuring the amplitude of the TPCF at redshifts higher than 2.7
and the result, $r_o = (18\pm8) h^{-1}$ Mpc, suggests that the trend of
increasing clustering persists. It may be argued that these surveys tend to
select objects with different luminosities, so that the comparison with the SGP
data might not be entirely significant, but an analysis of restricted
absolute-magnitude slices of the SGP sample shows no correlation of the
clustering with the QSO absolute luminosity. If we describe the evolving correlation function
in a standard way: $\xi(r,z) =
({r/{r_0}})^{-\gamma}(1+z)^{-(3-\gamma+\epsilon)}$, where $\epsilon$ is an
arbitrary (and not very physical, see Matarrese et al. 1997) fitting
parameter, we obtain $\epsilon = - 2.5\pm 1.0$ (Fig. 2).
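The sign of the effect is easy to read off this parametrization: at fixed comoving separation the amplitude scales as $(1+z)^{-(3-\gamma+\epsilon)}$, so with $\gamma=1.8$ any $\epsilon<-1.2$ gives clustering that grows with redshift. A minimal sketch:

```python
# Sketch: comoving-amplitude evolution at fixed separation,
# xi ~ (1+z)**(-(3 - gamma + eps)), normalized to xi = 1 at z = 0.
gamma = 1.8

def xi_amp(z, eps):
    return (1.0 + z)**(-(3.0 - gamma + eps))

assert xi_amp(2.0, -2.5) > xi_amp(0.0, -2.5)  # this work: growth with z
assert xi_amp(2.0, 0.8) < xi_amp(0.0, 0.8)    # faint galaxies: decay with z
```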
In spite of the statistical uncertainties the measured QSO clustering is able
to put interesting constraints on the allowed evolution, being inconsistent
with values $\epsilon > 0.0$, such as $\epsilon \simeq 0.8$ observed for
faint galaxies at lower redshifts (Le F\`{e}vre et al. 1996, Carlberg et al.
1997, Villumsen et al. 1997). Great care should be exercised however when
carrying out this comparison. Are the faint lower-redshift galaxies
representative of the same population of galaxies for which recent
observations by Steidel et al (1998) show substantial clustering at $z \simeq
3.1$? Are the Lyman-break galaxies progenitors of massive galaxies at the
present epoch or precursors of present day cluster galaxies (Governato et al.
1998)?
We already know from energetic arguments that QSOs cannot shine continuously
from high redshifts to the present epoch (Cavaliere and Padovani 1989).
However, the existing models do not exclude that a single population
exists which, after having formed at a certain epoch, has undergone
recurrent activity with a sequence of active and quiescent periods. Yet --
following Matarrese et al. (1997) and Moscardini et al. (1998) -- this scenario
would correspond to an object-conserving model in which a decrease of the
clustering amplitude with redshift is expected. {\it Thus we can come to the
conclusion that the observed increase of the clustering amplitude with
redshift is able to rule out a single population model for QSOs.}
If we go back to the model in which quasars are associated with interactions,
then we may think in terms of clustering of transient objects, which is
definitely different from the case of galaxies which, depending on the
physical scenario, can be assimilated to the merging model or the
object-conserving paradigm of long-lived objects. In this way the observed
clustering is the result of the convolution of the true clustering of the mass
with the bias and redshift distribution of the objects. If we think of QSOs as
objects sparsely sampling halos with $M > M_{\rm min}$ we may ask what are
the typical masses which allow reproducing the observed clustering. In this
perspective an increase of the QSO clustering is expected because they are
sampling rarer and rarer overdensities with increasing redshift. As we can see
from Fig.~\ref{clustfig} an $M_{\rm min}= 10^{12} - 10^{13}~ M_{\odot}$ would
provide the desired amount of clustering and evolution. Similar theoretical
results have also been obtained by Bagla (1997).
\section{Introduction}
The equivalence principle (EP) \cite{misner73} states the equivalence between inertial and gravitational mass.
This fact is a mere coincidence in classical physics, but it has some important consequences, for example:
\begin{itemize}
\item the free fall of {\em any} object in the same gravity field depends only
on their initial status and not on their composition or structure;
\item it is impossible to detect the difference between a uniform static
gravitational field and a uniform acceleration: free-fall and inertial motion are physically equivalent.
\end{itemize}
As a consequence, the EP allows the geometrical description of spacetime, which is at the basis of General Relativity (GR).
The weak form of the EP (WEP) is limited to strong and electroweak interactions. It can be verified by measuring the free fall of test masses with different
chemical compositions. Tests are performed on ground
with, for instance, torsion balances \cite{adelberger2009}, or in space in low Earth
orbit (e.g.\ with the MICROSCOPE mission \cite{touboul2012}).\\
The strong form (SEP) extends the validity of the weak principle to self-gravitating bodies.
The EP violation for the body $i$ can be
parametrized as follows \cite{milani2002,damour1996}
\begin{equation}
m_i^G = m_i^I (1+ \delta_i + \eta \, \Omega_i),
\end{equation}
where $m_i^I$ ($m_i^G$) is the inertial (gravitational) mass, and
\begin{equation}
\Omega_i=\frac{E_{g}}{m^{I}_i c^2}=-\frac{G}{2 m^{I}_i c^2}\iint \frac{ {d {m'}^{G}_i} {d {m''}^{G}_i}} {\vert \vert \mathbf r'-\mathbf r'' \vert \vert},
\end{equation}
where $c$ is the speed of light, and $E_g$ is the self-gravity energy, which is obtained by double-integrating over the mass of the body.
The WEP involves only the case $\Omega_i=0$ and corresponds to $\delta_i = 0$, while the SEP is valid, for each $\Omega_i$, when both $\delta_i$ and $\eta$ are equal to zero.
With experiments on ground, the typical $\Omega_i$ can be so small (see \tref{tab1}) that only the WEP can effectively be tested.
The SEP can evidently be constrained only in space, with experiments involving celestial bodies.
\begin{table}[!h]
\tbl{Self-gravity coefficients $\Omega_i$ for some celestial bodies and a reference test mass.}
{\begin{tabular}{ll}
\toprule
Sun & \hphantom{0} $-3.52 \times 10^{-6}$\\
Jupiter & \hphantom{0} $-1.21 \times 10^{-8}$\\
Earth & \hphantom{0} $-4.64 \times 10^{-10}$\\
Moon & \hphantom{0} $-1.88 \times 10^{-11}$\\
test mass (1~kg, size 5~cm) &$\approx -8.90 \times 10^{-27}$\\%$\Omega$ &
\botrule
\end{tabular} \label{tab1}
}
\end{table}
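The orders of magnitude in \tref{tab1} can be reproduced with the uniform-density sphere approximation, for which $E_g=-\tfrac{3}{5}Gm^2/R$ and hence $\Omega=-3Gm/(5Rc^2)$. The sketch below assumes the quoted test mass is a sphere of radius 5~cm; for centrally condensed bodies such as the Sun the uniform value underestimates $|\Omega|$:

```python
# Sketch: Omega = E_g / (m c^2) with E_g = -(3/5) G m^2 / R (uniform sphere).
G = 6.674e-11   # m^3 kg^-1 s^-2
c = 2.998e8     # m s^-1

def omega_uniform(m, R):
    return -3.0 * G * m / (5.0 * R * c**2)

# 1 kg sphere of radius 0.05 m (assumed geometry of the table's test mass):
print(omega_uniform(1.0, 0.05))          # about -8.9e-27, as in the table
# Sun: the uniform-density value is a few times smaller in magnitude than
# the tabulated -3.52e-6 because the real Sun is centrally condensed.
print(omega_uniform(1.989e30, 6.957e8))
```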
Thanks to retroreflectors placed on the near side of the Moon it is possible to measure the Earth-Moon distance and detect a possible SEP violation signal.
This experiment was proposed by Nordtvedt \cite{nordtvedt1968}. In this case, a violation of the SEP will introduce a signal in the Earth-Moon range, its amplitude being proportional to $\Omega_\text{Earth}-\Omega_\text{Moon}$.
Over the last 46 years, the Lunar Laser Ranging (LLR) project has carried out a long sequence of range measurements, and the precision on the Earth-Moon relative differential accelerations is currently\cite{williams2009}
$\sigma[\delta a/a_\text{sun}] = 1.3 \times 10^{-13}$, but this result includes possible violations of both SEP and WEP. Since ground experiments can test only the weak form of the EP, the parameter $\eta$ can be measured only by combining the results of both experiments, ground and LLR. No violation of the SEP has been found so far, and the error associated with $\eta$ is currently\cite{williams2009} $\sigma[\eta]=4.4 \times 10^{-4}$. The BepiColombo mission is expected to improve this result by about an order of magnitude\cite{demarchi2016} -- a prediction is given in the first part of this paper. Instead, an alternative ranging experiment towards the Sun-Earth Lagrangian points -- recently proposed in Ref.\;\refcite{congedo2016} -- could easily reach the LLR's performance in a very short time span, which is investigated in the second part of the paper.
Therefore we will describe two experiments for the estimation of $\eta$. The first one in Section \ref{sect2} is the well-known Relativity experiment of the BepiColombo mission, while in Section \ref{sect3} we will study the same measurement performed on range data between the Earth and a spacecraft (SC) orbiting around a Sun-Earth Lagrangian point.
\section{MORE with BepiColombo}\label{sect2}
BepiColombo (BC) is a joint ESA/JAXA mission to Mercury with challenging objectives regarding geophysics, geodesy, and fundamental physics \cite{benkoff2010}. Currently, the launch is scheduled for the end of 2018, with a nominal duration of one year plus a possible one-year extension.
The Mercury Orbiter Radioscience Experiment (MORE) is one of the on-board experiments that focus on gravimetry, rotation and Relativity \cite{milani2002, sanchez2006, cicalo2012}.
The goal is the measurement of key parameters by means of orbit determination techniques using the Earth-MPO \footnote{Mercury Planetary Orbiter.} radio link observables, i.e.\ range and range rate. The parameters for gravitation and rotation experiments are the Mercury gravity field coefficients, Love numbers, obliquity and libration. Instead the Relativity experiment consists in the measurement of the Parametrized Post-Newtonian (PPN) parameters, which account for possible small deviations from GR -- $\eta$ is one of them.
All parameters will be estimated by a global nonlinear least-squares fitting of all the {\em observed} signals (range, range-rate, accelerometer readings, etc.) along with the {\em computed} signals that are calculated by using mathematical models as accurate as possible.
The main characteristics of the Radioscience experiment are summarized in \tref{tab2}. For further details see Refs\;\refcite{milani2002,ashby2007}.
The observed data of gravity and rotation experiments are primarily range-rate signals, which are poorly correlated with those of the Relativity experiment, i.e.\ Earth-MPO range only, because the frequency domains are very different. Since we are interested in the Relativity experiment, we can neglect the motion of the MPO around Mercury (the orbital period is approximately 2~hrs) and consider only the Mercury-Earth range.
\begin{table}[!h]
\tbl{Summary of the main characteristics of the radioscience experiments on-board BepiColombo.}
{\begin{tabular}{l l l l}
\toprule
& gravimetry & rotation & Relativity\\
\hline
parameters & - gravity field coeffs & - longitude libration &- $\gamma, \beta, \alpha_1, \alpha_2,\eta$ \\
& (up to the 25th deg.) & - obliquity & - $\mu_0, \dot \mu_0 / \mu_0, J_{2 \odot}$ \\
& - $k_2$ & & - initial cond.\ of \\
& & & Earth and Mercury \\
observables & range-rate & range-rate & range\\
precision & $3.0\times 10^{-4}$ cm/s & $3.0\times 10^{-4}$ cm/s & $30$ cm @ $300$ s\\
& @ $1000$ s & @ $1000$ s &\\
freq. domain & $\gtrsim 1.2\times 10^{-4}$ Hz & $\gtrsim 1.2\times 10^{-4} $ Hz & $\approx 10^{-7}$ Hz\\
& (MPO mean motion) & (MPO mean motion) & (planetary mean motions)\\
\botrule
\end{tabular} \label{tab2}}
\end{table}
\subsection{Analytical model and sources of uncertainties}
We aim at calculating the expected root-mean-square (RMS) error of $\eta$ after the whole duration of the BC mission. Since the data are obviously not yet available, we need to simulate them.
To this end, we present a simplified heliocentric analytical model that yields the perturbations on the Earth-Mercury range due to $\eta$ and to all the parameters that are expected to correlate with it. This is a typical Fisher/covariance analysis: the RMS errors of the parameters are given by the square roots of the diagonal elements of the covariance matrix.
We adopt the notation of Ref.\;\refcite{moyer2003}: we define $\ve r_{ij} =\ve r_j - \ve r_i$
and $r_{ij} = \vert \vert \ve r_{ij} \vert \vert $, where $\ve r_i$ is the coordinate of the $i$th-body in an inertial reference frame.
Planets are numbered from 1 (Mercury) to 8 (Neptune), while 0 refers to the Sun.
We also define the gravitational parameters for all bodies in the same way: $\mu_i = G m_i^G$.
The equations of motion for the $i$th planet, in the case $\eta \neq 0$, are
\cite{anderson1996,milani2002,turyshev2004,ashby2007, demarchi2016, congedo2016}
\begin{equation}
\label{eqr0i}
\ddot { \ve r}_{0i} = -\dfrac{\mu^\star}{r_{0i}^3}\ve r_{0i} + \displaystyle \sum_{j \neq i \neq 0} \mu_j \left[(1+ \eta \, \Omega_i)\dfrac{\ve r_{ij}}{r_{ij}^3} - (1+\eta\, \Omega_0) \dfrac{\ve r_{0j}}{r_{0j}^3}\right] ,
\end{equation}
where the summation includes all solar system bodies (planets, dwarf planets, asteroids, etc.), and $\mu^\star= \mu_0+\mu_i+\eta(\mu_i \Omega_0+\mu_0 \Omega_i)$.
We can write a similar equation for body $k$ and afterwards calculate the range $\rho_{ik}=\vert \vert \ve r_{0i}-\ve r_{0k}\vert \vert$ where $i$ and $k$ are Earth and Mercury.
Since $\Omega_i \ll \Omega_0$ for all $i$, the leading term is the last one, which is proportional to $\Omega_0$. It is an apparent term, essentially a perturbation on the acceleration of the Sun with respect to the Solar System Barycenter (SSB).
Note that there is a non-zero signal even if $\Omega_i=0$, which means that the experiment can be done also if the body $i$ is a drag-free test mass, e.g.\ a SC with an onboard accelerometer (see Section \ref{sect3}).
It is worth mentioning that the signals due to other PPN parameters, such as $\beta, \gamma, \alpha_1,\alpha_2$, along with the effect due to $\zeta$ (the rate of change of $\mu_0$), $J_{2 \odot}$ (gravitational ``flattening'' of the Sun) and the initial conditions of Earth and Mercury (see Ref.\;\refcite{demarchi2016} for details), must all be calculated and included in the global fit.
Also from \eref{eqr0i}, a high correlation among planetary perturbations (proportional to $\mu_j$s) and SEP violation is evident.
In order to avoid systematic effects, the $\mu_j$s must be added to the set of parameters to be estimated, and their errors must be taken into account in terms of prior constraints in the global covariance analysis.
Current uncertainties of planetary $\mu_j$s range from $2.8\times 10^{-4}$ (Mars) to $10.5$~km$^3$/s$^2$ (Neptune) \cite{luzum2011}.
Regarding asteroids, their relative errors can be very large (50\% or more).
To summarize, we will calculate the signatures on the Earth-Mercury range due to all the following effects:
\begin{enumerate}
\item initial conditions of Earth and Mercury;
\item SEP violation -- free parameter: $\eta$;
\item planets/dwarf planets/asteroids -- free parameters: $\mu_j$;
\item secular variation of the Sun's gravitational parameter $\mu_0$ -- free parameters: $\delta_{\mu_0}$ (bias of the measured $\mu_0$ from the true value at the starting epoch), and its rate of change in time $\zeta=\dot \mu_0/\mu_0$,
\item PPN -- free parameter: $\bar \beta=\beta-1$,
\item Sun's quadrupole coefficient: free parameter $J_{2 \odot}$, whereas higher order terms are negligible.
\end{enumerate}
The PPN parameter $\gamma$, which is related to the curvature produced
by unit rest mass, has not been considered here for simplicity.
However, this is not reductive since the best estimate of $\gamma$ ($\sigma[\gamma]=2.0\times 10^{-6}$)
is expected to be given right after the dedicated superior conjunction experiment
(SCE) during the cruise phase of BC.
The value of the Nordtvedt parameter can be derived from the Nordtvedt equation
\begin{equation}
\label{eq:nordt}
\eta=4 \beta - \gamma -3,
\end{equation}
which will be used as a prior.
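A small numerical sketch of how \eref{eq:nordt} is used: in general relativity $\beta=\gamma=1$ and $\eta=0$, and if the errors on $\beta$ and $\gamma$ are treated as uncorrelated the propagated uncertainty is $\sigma[\eta]=\sqrt{16\,\sigma[\beta]^2+\sigma[\gamma]^2}$, so with $\sigma[\gamma]=2\times10^{-6}$ from the SCE the budget is dominated by $\beta$ (the $\sigma[\beta]$ below is a hypothetical illustration value):

```python
import math

# Sketch: Nordtvedt relation eta = 4*beta - gamma - 3 and its uncorrelated
# error propagation. sigma_beta is a hypothetical illustration value.
def eta(beta, gamma):
    return 4.0 * beta - gamma - 3.0

assert eta(1.0, 1.0) == 0.0   # general relativity

sigma_beta, sigma_gamma = 1e-5, 2e-6
sigma_eta = math.sqrt(16.0 * sigma_beta**2 + sigma_gamma**2)
print(sigma_eta)   # close to 4 * sigma_beta
```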
We also neglect the preferred frame parameters $\alpha_1$ and $\alpha_2$ since they are poorly correlated with the other parameters of the Relativity experiment, in particular $\eta$. For more details compare the results of experiments A, B, C and D in Ref.\;\refcite{milani2002}.
Finally, we assume that the unperturbed orbits of planets and asteroids are circular with radius $R_{0i}$, and co-planar.
We define $\ve q$ as the vector of all $N_p$ parameters; $q_m \delta \ve r_{i,m}$ is the displacement from the circular reference orbit $\ve R_i=R_{0i}\ve u_r^i$ of the $i$th body due to the (linearized) force $q_m \delta \ve f_{i,m}$ associated with the (small) parameter $q_m$.
The procedure is as follows:
\begin{enumerate}
\item write the heliocentric position of the $i$th-body as
\begin{equation}
\ve r_i= \ve R_i+\sum_{n=1}^{N_p} q_n \delta \ve r_{i,n};
\end{equation}
\item for each $q_m$, decompose $\delta \ve r_{i,m}$ and the perturbative force $\delta \ve f_{i,m}$ into radial, along-track and out-of-plane components
\begin{equation}
\begin{split}
\delta \ve r_{i,m}&=x_i \ve u_r^i+y_i \ve u_t^i+z_i \ve u_w^i, \\
\delta \ve f_{i,m}&=R_m^i \ve u_r^i+T_m^i \ve u_t^i+W_m^i \ve u_w^i ;
\end{split}
\end{equation}
\item solve Hill's equations for $i=1$ and $i=3$
\begin{equation}
\label{eq:hill}
\begin{split}
\ddot x_i - 2\,n_i \dot y_i -3 \,n_i^2 x_i &=R_m^i ,\\
\ddot y_i+2 \,n_i \dot x_i &= T_m^i ,\\
\ddot z_i+ n_i^2 z_i &= W_m^i ,
\end{split}
\end{equation}
where $n_i$ is the mean motion of the $i$th-body;
\item finally calculate the Earth-Mercury range as
\begin{equation}
\rho_{13}(t,\ve q)= \vert \vert \ve r_{13} \vert \vert \approx R_{13}+ \sum_n q_n \frac{\delta \ve r_{13,n} \cdot \ve R_{13}}{R_{13}}
\label{eq:pert1}
\end{equation}
where $\delta \ve r_{13,n}=\delta \ve r_{3,n}-\delta \ve r_{1,n}$ and
the factor $1/R_{13}$ can be expanded in Legendre polynomials $P_l$
\begin{equation}
\frac{1}{R_{13}}= \frac{1}{R_{03}}\sum_{l=0}^\infty \left( \frac{R_{01}}{R_{03}}\right)^l P_l (\cos \Phi_{13}),
\end{equation}
where $\Phi_{ij}=(n_j-n_i)\,t+\varphi_j-\varphi_i$.
\end{enumerate}
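As a concrete check of the Hill's-equations step in the procedure above, the equations can be integrated numerically and compared against a closed-form solution. The sketch below is ours, not part of the text: for a constant radial acceleration $R$ (a crude stand-in for a SEP-like perturbation) and zero initial conditions, substituting $\dot y=-2nx$ into the first equation gives $\ddot x + n^2 x = R$, hence $x(t)=(R/n^2)(1-\cos nt)$; the mean motion and force values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hill_rhs(t, s, n, R, T, W):
    # s = [x, y, z, vx, vy, vz]: radial, along-track, out-of-plane
    x, y, z, vx, vy, vz = s
    ax = 2.0 * n * vy + 3.0 * n**2 * x + R   # radial equation
    ay = -2.0 * n * vx + T                   # along-track equation
    az = -n**2 * z + W                       # out-of-plane equation
    return [vx, vy, vz, ax, ay, az]

n = 2.0 * np.pi          # mean motion: one revolution per unit time (illustrative)
R = 1.0                  # constant radial acceleration (toy SEP-like term)
t_eval = np.linspace(0.0, 2.0, 201)
sol = solve_ivp(hill_rhs, (0.0, 2.0), [0.0] * 6, args=(n, R, 0.0, 0.0),
                t_eval=t_eval, rtol=1e-10, atol=1e-12)

# Closed-form radial solution for constant R and zero initial conditions
x_analytic = (R / n**2) * (1.0 - np.cos(n * t_eval))
```

The along-track displacement then follows by quadrature of $\dot y=-2nx$, and the range signature is obtained by projecting the displacement onto the line of sight as in the last step of the procedure.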
Due to visibility windows, range and range-rate data contain several gaps. A gap
occurs approximately every day and lasts about 9.3 h.
A low-frequency sampling ($f_s =10^{-4}$ Hz) is therefore sufficient for our purposes, since the
signals involved have frequencies of the same order as the
planetary mean motions. We can then calculate the range at epochs $t_i$ and obtain the desired $N_p\times N_p$
Fisher matrix, or \textit{normal matrix}. Including all prior information, it is given by
\begin{equation}
F_{jk} = \sum_{i=1}^{N} \frac{1}{\sigma_i^2} \frac{\partial \rho_{13} (t_i,\ve q_0)}{\partial q_j} \frac{\partial \rho_{13}
(t_i,\ve q_0)}{\partial q_k} + \frac{1}{2}\frac{\partial^2 P(\ve q)}{\partial q_j \partial q_k} ,
\label{eq:fisher}
\end{equation}
where $N$ is the number of range measurements;
$\sigma_i$ is the RMS error on each data point\footnote{For the Ka-band we adopted $\sigma_i = 15 \sqrt{300 f_s} \mbox{ cm }= 2.6$ cm \cite{milani2010}.};
$P(\ve q)$ is a function that contains all prior information (the Nordtvedt equation \eref{eq:nordt} and the uncertainties on all the $\mu_m$s) and is given by
\begin{equation}
P(\ve q) =\frac{(\eta-4 \bar \beta)^2}{\sigma_N^2}+\sum_m \frac{(\mu_m- \mu_m^P)^2}{\sigma_{\mu_m}^2};
\end{equation}
$\mu_m^P$ are the measured values of $\mu_m$ and $\sigma_{\mu_m}$ are the corresponding errors;
the summation over $m$ is extended to all $GM$s;
$\sigma_N=2.0 \times 10^{-6}$ is the expected RMS error on $\gamma$ based on the expected performance of the SCE.
The inverse of $F_{jk}$ yields the covariance matrix, whose diagonal elements give us the expected RMS errors, and correlations, of all the parameters.
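The assembly of \eref{eq:fisher} can be sketched numerically. In the toy below (ours, not the actual BepiColombo design matrix), the partials of the range with respect to two parameters are taken as sinusoids at different synodic-like periods, a Gaussian prior is added on the second parameter, and the normal matrix is inverted to obtain formal errors and the correlation; adding the prior can only shrink the corresponding formal error.

```python
import numpy as np

t = np.arange(0.0, 365.0, 1.0)     # daily range points over one year (toy)
sigma = 0.026                      # 2.6 cm Ka-band point error, in metres

# Toy partials d(rho)/dq_j, one column per parameter
partials = np.column_stack([
    np.cos(2.0 * np.pi * t / 116.0),   # Mercury synodic-like term
    np.cos(2.0 * np.pi * t / 365.0),   # annual term
])

F = partials.T @ partials / sigma**2   # data part of the normal matrix

sigma_prior = 0.5                      # Gaussian prior on parameter 2 (illustrative)
F[1, 1] += 1.0 / sigma_prior**2        # prior contribution, as in the Fisher matrix

C = np.linalg.inv(F)                   # covariance matrix
rms = np.sqrt(np.diag(C))              # formal (RMS) errors
corr = C[0, 1] / (rms[0] * rms[1])     # correlation coefficient
```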
\subsection{Results}
In addition to the standard parameters, we include the $\mu_j$s of all the
planets and of the 343 most massive asteroids (for a total
of 362 parameters).
Since some of the $\mu_j$s are expected to be improved by GAIA \cite{mouret2009} and JUICE, we calculate the global covariance by
using the expected RMS errors of $\mu_j$ at the epoch of the mission.
The RMS errors of all parameters, including the initial conditions of Mercury and the Earth, are reported in \tref{tab:relnom}.
Regarding the SEP violation, we found $\sigma[\eta]=3.13 \times 10^{-5}$. If we were to compare this result with the ``idealistic case'' where the $\mu_j$s have all zero errors \cite{milani2002,schettino2015,cicalo2016}, we would
find that the uncertainties
degrade the precision of most of the PPN parameters by about an order of
magnitude. However, since the current RMS error of $\eta$, from
LLR measurements, is $\sigma[\eta]=4.4 \times 10^{-4}$, we can conclude that
the BC Relativity experiment will improve the
current constraint on $\eta$ by a factor of 10 at least, having included uncertainties on the planetary masses.
\begin{table}[!h]
\tbl{Expected formal errors for the Relativity experiment on-board BepiColombo.}
{\begin{tabular}{lll}
\toprule
parameter & units & RMS error \\%& Best fit value
\hline
$\beta$ & - & $ 7.81 \times 10^{-6 }$ \\
$\eta$ & - & $ {\bf 3.13 \times 10^{-5 }}$ \\
$\mu_0$ & [cm$^3$s$^{-2}$] & $ 5.50 \times 10^{13 }$ \\
$J_{2\odot}$ & - & $ 8.03 \times 10^{-10}$ \\
$\zeta=\dot \mu_0/\mu_0$ & [yr$^{-1}$] & $ 1.78 \times 10^{-14}$ \\
$X_{1}$ & [cm] & $ 2.49 \times10^3$ \\
$Y_{1}$ & [cm] & $ 1.18 \times10^4$ \\
$Z_{1}$ & [cm] & $ 5.15 $ \\
$\dot X_{1}$ & [cm s$^{-1}$] & $ 2.36 \times 10^{-3 }$ \\
$\dot Y_{1}$ & [cm s$^{-1}$] & $ 1.68 \times 10^{-3 }$ \\
$\dot Z_{1}$ & [cm s$^{-1}$] & $ 4.72 \times 10^{-6 }$ \\
$\dot X_{3}$ & [cm s$^{-1}$] & $ 1.77 \times 10^{-3 }$ \\
$\dot Y_{3}$ & [cm s$^{-1}$] & $ 9.41 \times 10^{-5 }$ \\ \botrule
\end{tabular}
}
\label{tab:relnom}
\end{table}
\section{An opportunity with the Lagrangian points}\label{sect3}
When testing for a SEP violation, the advantage of ranging between
two planets over ranging between the Earth and the Moon is twofold: a longer
baseline ($\approx 1$ vs $\approx 3\times10^{-3}$ AU) and
$\delta a/a_\text{sun} \propto \Omega_0$ instead of $\Omega_\text{earth}-\Omega_\text{moon}$.
This in turn implies a much larger ranging-signal amplitude (about three orders
of magnitude larger than the Nordtvedt effect \cite{turyshev2004,milani2009}).
In fact, even though the time span and the precision of the data are worse,
a larger self-energy and a stronger signal certainly allow better
measurements of $\eta$. For example, consider the BC experiment: the expected measurement precision on the SEP is $\sigma[\delta a/a_\text{sun}] \approx 10^{-11}$,
which is roughly two orders of magnitude worse than the WEP measurements achieved by LLR
and torsion-balance experiments \cite{adelberger2009}.
However, since the signal is $\propto \Omega_0$, the parameter $\eta$ will be constrained with an accuracy of $10^{-5}\text{--}10^{-6}$ (see Section \ref{sect2} and also Ref.\;\refcite{milani2002}), which is of course better than LLR. This is also the case for Lagrangian-point ranging, with the only difference that the smaller baseline yields an RMS error similar in magnitude to that of LLR.
\begin{figure}[h!]
\centerline{\psfig{file=fig1_diagram.pdf,width=0.59\columnwidth}}
\caption{Spacecraft ranging towards $L_1$ or $L_2$ as a means by which to test the SEP (not to scale). We calculate the SEP signature as a perturbation on the Earth's orbit around the Sun (${\bf r}_{03}$) as well as on the SC ranging (${\bf r}_{3p}$). We also include perturbations from other planets. \label{f1}}
\end{figure}
\subsection{Detailed calculations}
In the Earth's reference frame, the positions of the collinear Lagrangian points are the solutions of the following equation
\begin{equation}\label{eq:L1}
-\frac{\mu_0}{ \vert R-X \vert^3} (R - X) + \mu_3 \left(\frac{X}{ \vert X \vert ^3} -\frac{1}
{R^2}\right)+ n_3 ^2 (R-X) =0,
\end{equation}
where $R$ is the Earth-Sun distance, and $n_3$ is the mean motion of the Earth.
\eref{eq:L1} has three solutions: $X_{1,2}\approx\pm 0.01$ AU that correspond to $L_1$ and $L_2$,
and $X_3\approx 2$ AU that corresponds to $L_3$.
We will consider only the case of $L_1$ and $L_2$, as these are the locations around which many missions operate.
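Equation \eref{eq:L1} is straightforward to solve numerically. The sketch below (ours; units with $R=1$ AU and $\mu_0=1$, and the root brackets are our choice) recovers $X_{1,2}\approx\pm 0.01$ AU, in agreement with the Hill-radius estimate $(\mu_3/3\mu_0)^{1/3}$.

```python
from scipy.optimize import brentq

mu0 = 1.0                    # GM_sun, in units with R = 1 AU
mu3 = 3.003e-6               # GM_earth / GM_sun
R = 1.0
n3_sq = (mu0 + mu3) / R**3   # Earth's mean motion squared (two-body value)

def f(X):
    # Signed scalar version of eq. (L1); X measured from the Earth,
    # X > 0 towards the Sun
    return (-mu0 * (R - X) / abs(R - X)**3
            + mu3 * (X / abs(X)**3 - 1.0 / R**2)
            + n3_sq * (R - X))

X_L1 = brentq(f, 0.005, 0.02)             # root near +0.01 AU
X_L2 = brentq(f, -0.02, -0.005)           # root near -0.01 AU
hill = (mu3 / (3.0 * mu0))**(1.0 / 3.0)   # Hill-radius estimate, ~0.01 AU
```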
Consider a SC, hereafter identified with the index $p$, near $L_1$ (or $L_2$). Its mass and self-gravity
energy are negligible with respect to those of the Sun and all planets.
The SC's equation of motion relative to the Sun can be obtained from \eref{eqr0i} with the substitution $(\Omega_3,\mu_3,\ve r_{03},\ve r_{3j}) \rightarrow (0,0,\ve r_{0p},\ve r_{pj})$.
We subtract the SC's equation of motion
from \eref{eqr0i} to finally derive the
relative motion, $\ve r_{3p}$, between the SC and Earth, which is given by
\begin{equation}\label{eq:L1_rel}
\ddot {\ve r}_{3p} = - \mu_0 \left( \dfrac{\ve r_{0p}}{r_{0p}^3}- \dfrac{\ve r_{03}}{r_{03}^3}\right)
-\mu_3 \dfrac{\ve r_{3p}}{r_{3p}^3} + \sum_{j\neq 0,3} \mu_j \left(\dfrac{\ve r_{pj}}{r_{pj}^3}-\dfrac{\ve r_{3j}}{r_{3j}^3}\right)+ \eta\,
\Omega_3 \sum_{j \neq 3} \mu_j \dfrac{\ve r_{j3}}{r_{j3}^3},\\
\end{equation}
where $\ve r_{0p}=\ve r_{03}+\ve r_{3p}$. It is worth noting that we are in fact solving the equation of motion for the observed SC ranging, $\ve r_{3p}$.
As was done for the Earth-Mercury range in the previous section, we decompose $\ve r_{3p}=\{\delta x, \delta y\}$ into radial and along-track components (but now only $\delta x$ can be measured).
For simplicity we assume that the SC is very near the Lagrangian point, so that the gravity field can be linearized and all trajectories are Lissajous orbits.
Details of the calculation can be found in Ref.\;\refcite{congedo2016}. In \fref{f3} we plot $\delta x$ (normalised to $\eta=1$) for the two scenarios of a SC orbiting around either $L_1$ or $L_2$.
\begin{figure}[h!]
\centerline{\psfig{file=fig2_SC_ranging.pdf,width=0.59\columnwidth}}
\caption{Range perturbations (normalised to $\eta=1$) for a SC orbiting either $L_1$ or $L_2$. \label{f3}}
\end{figure}
In order to compute our prediction for a measurement of the SEP around the Lagrangian point, we assume we have $N$ equally spaced observations of the SC's range, over a total observation time of $T=5$ yr with sampling interval $\delta t= 1$ h\footnote{Hereafter we assume a one-hour integration time for all range measurements.}.
We can then calculate the Fisher matrix from \eref{eq:fisher}. The free parameters considered in our analysis are: $\eta$, the initial position and velocity of the Earth and the initial position and velocity of the SC.
We distinguish between two possible scenarios.
In the \textit{realistic scenario} (A) we use a nominal range error typical for two-way ranging in the X-band, $\sigma_i=0.1$ m
\footnote{As obtained by degrading the Ka-band range error
$\sigma_i=0.15 \sqrt{300/\delta t}\approx0.04$ m \cite{iess2001,schettino2015,cicalo2016} by a conservative factor of 2.5, owing to the lower frequencies typical of the X-band.}.
Additionally, we assume the following prior uncertainties on the orbital initial conditions:
\begin{enumerate}
\item 2 m and $3\times10^{-5}$ m/s for the Earth's heliocentric radial position and velocity,
from a great abundance of radio tracking data
\cite{kaplan2015};
\item 145 m for the Earth's heliocentric along-track position as this is less well constrained \cite{kaplan2015};
\item no prior on either the Earth's heliocentric along-track velocity, which is very weakly constrained by current data, or the parameters of the SC's orbit relative to the Earth.
\end{enumerate}
In the \textit{optimistic scenario} (B) we use the range error
typical of the Ka-band, $\sigma_i=0.04$ m,
as well as a factor 10 improvement in the knowledge of the Earth's initial position and velocity, 0.2 m and $3\times10^{-6}$ m/s, which is likely to be achieved in the near future.
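The point errors quoted for the two scenarios follow directly from the integration-time scalings given in the footnotes; a quick check (ours) of the numbers:

```python
import math

def sigma_ka(delta_t):
    # Ka-band range noise: sigma = 0.15 * sqrt(300 s / delta_t) metres
    return 0.15 * math.sqrt(300.0 / delta_t)

s_ka = sigma_ka(3600.0)    # one-hour integration: ~0.043 m (scenario B)
s_x = 2.5 * s_ka           # X-band, degraded by the conservative factor 2.5: ~0.11 m
s_bc = 15.0 * math.sqrt(300.0 * 1e-4)   # BepiColombo: 15*sqrt(300 f_s) cm ~ 2.6 cm
```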
\subsection{Results}
Neglecting errors in planetary masses and ephemerides, we forecast $\sigma[\eta]= 6.4(2.0) \times 10^{-4}$ (5 yr integration time) via Earth-$L_1$ ranging in a realistic (optimistic) scenario, depending on current (future)
range capabilities and knowledge of the Earth's ephemerides. A combined measurement, $L_1 + L_2$, instead gives an improved constraint of
$4.8(1.7) \times 10^{-4}$,
which would be comparable with those already achieved by LLR.
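If the $L_1$ and $L_2$ measurements were statistically independent, they would combine by inverse variances; the quoted $4.8\times10^{-4}$ is slightly weaker than this idealized limit, reflecting correlations in the joint fit. A minimal sketch (ours; the assumption that an $L_2$-alone forecast equals the $L_1$ one is purely illustrative):

```python
import math

def combine(sigmas):
    # Inverse-variance combination of independent measurements
    return 1.0 / math.sqrt(sum(1.0 / s**2 for s in sigmas))

sigma_L1 = 6.4e-4   # realistic Earth-L1 forecast from the text
sigma_L2 = 6.4e-4   # hypothetical: assume L2 alone performs like L1
naive = combine([sigma_L1, sigma_L2])   # ~4.5e-4, close to the quoted 4.8e-4
```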
It is worth noting that the performance could be much improved if data were integrated over time and over the number of satellites
flying around either of the two Lagrangian points. We point out that some systematics (gravitational
perturbations of other planets or figure effects) are much better controlled than in other experiments.
This SC ranging would be a new and complementary probe to constrain
the strong equivalence principle in space.
\subsection{Conclusions}
In this work we described two experiments devoted to testing the SEP in space. In both cases we performed a global covariance analysis based on simulated data.
The first test is the BC Relativity experiment: we calculated the effect of the uncertainties on the masses of the Solar System's bodies on the estimation of PPN parameters.
We forecast a degradation of the RMS errors of all parameters, including $\eta$ for the strong equivalence principle, of about an order of magnitude with respect to the nominal case in which uncertainties are not taken into account. Nonetheless this result, in terms of $\eta$, represents an improvement of a factor of 10 over the current precision achieved by LLR.
In the second part of the paper we calculated the signal due to SEP violation on the ranging between a ground station and a SC orbiting near an Earth-Sun collinear Lagrangian point.
With a covariance analysis based on a 5-year mission, we forecast an RMS error for $\eta$ at around the same level as current measurements by LLR and ground experiments.
We conclude that this recently proposed experiment would serve as a direct test of the SEP that is both independent of other experiments and at least comparable in performance, achieved over a relatively short time span.
\section*{Acknowledgments}
FDM acknowledges the advice and support of the Celestial Mechanics group of Pisa. GC acknowledges support from Hertford College, Harding Fund,
the Beecroft Institute for Particle Astrophysics and Cosmology, and Oxford Martin School.
The results of the research presented in the first part of this work
have been performed within the scope of Contract No. ASI/
2007/I/082/06/0 with the Italian Space Agency.
\section{Introduction}
The super-extensions of classical integrable systems lead to
super integrable systems, which have undergone extensive
development in the past years. Many super integrable
systems exist in the literature, such as the super AKNS system
\cite{Gurses1}-\cite{Lizhang2}, the super KdV equation
\cite{Kupershmidt}-\cite{Shaw1}, the super KP hierarchy
\cite{Manin}-\cite{Shaw2}, etc. It is known that super systems
contain odd variables, which provide a rich field of study
for both mathematicians and physicists. Darboux
transformations \cite{Liu1}-\cite{Siddiq}, bi-Hamiltonian structures
\cite{Popowicz}-\cite{Kersten}, Painlev\'{e} analysis
\cite{Mathieu} and so on have been widely studied. Very recently,
nonlinearization of the super AKNS system and the super Dirac system
has been investigated in Refs.
\cite{HeYuZhouCheng}-\cite{YuHeMaCheng}.
It is well known that the mono-nonlinearization technique was first
proposed by Cao in Ref. \cite{Cao1}, and the binary-nonlinearization
technique was proposed by Ma in Ref. \cite{Ma1}. Both
mono-nonlinearization and binary nonlinearization share the following
characteristics. Firstly, the advantage of the nonlinearization method is
that it decomposes infinite-dimensional systems into finite-dimensional ones.
Secondly, one of the essential steps of the nonlinearization method is
the calculation of the variational derivative. Lastly, the key of the
nonlinearization method is to find symmetry constraints between the
potential and the eigenfunctions by means of the variational derivative.
On the one hand, nonlinearization of Lax pairs is valid for many
classical integrable systems \cite{Lizeng}-\cite{Zhouruguang}. On
the other hand, binary nonlinearization has been applied to the
super AKNS system and the super Dirac system in
Refs. \cite{HeYuZhouCheng}-\cite{YuHeMaCheng}. However, is the
nonlinearization method valid for other super integrable
systems? For the cKdV system, this paper answers in the
affirmative. The cKdV system, first proposed by Hirota and Satsuma in
Ref. \cite{HirotaSatsuma}, is very important among the classical
integrable systems. Its mono-nonlinearization and Darboux
transformation were studied in Refs. \cite{Cao1990, Qin2004}.
The paper is organized as follows. In the next section, the cKdV
system is to be extended into the super one, and the super
Hamiltonian structure will be obtained for new system by means of
the supertrace identity. In section 3, variational derivative of the
spectral parameter with respect to the potential is calculated by
Lemma 2.1 in Ref. \cite{YuHeMaCheng}, and a symmetry constraint
between the potential and the eigenfunction can be found. The
symmetry constraint is an interesting constraint, and it is explicit
for even elements, but it is implicit for odd elements. Then in
section 4, after introduction of two new odd variables, the novel
symmetry constraint is substituted into the Lax pairs and the
adjoint Lax pairs of the super cKdV system while considering the two
new variables. And we find that the constrained Lax pairs and the
adjoint Lax pairs of the super cKdV system are super Hamiltonian
systems, and are completely integrable systems in the Liouville
sense. Integrals of motion with odd eigenfunctions are given
explicitly. The conclusions and discussions are given in section 5.
\small \baselineskip 13pt
\section{The super cKdV soliton hierarchy}
Let's begin with the following spectral problem
\begin{equation}\label{c1}
\phi_x=U(u, \lambda)\phi,\quad U(u,
\lambda)=\left(\begin{array}{ccc}
-\frac{1}{2}\lambda+\frac{1}{2}q&-r&\alpha\\
1&\frac{1}{2}\lambda-\frac{1}{2}q&\beta\\
\beta&-\alpha&0\end{array}\right),\quad
u=\left(\begin{array}{c}q\\r\\\alpha\\\beta\end{array}\right),\quad
\phi=\left(\begin{array}{c}\phi_1\\\phi_2\\\phi_3\end{array}\right),
\end{equation}
where $u$ is a potential, and $\lambda$ is a spectral parameter. Set
$p(q)=p(r)=p(\lambda)=0$, and $p(\alpha)=p(\beta)=1$. Here $p(f)$
means the parity of arbitrary function $f$. Note that $U \in
\mathbf{B}(0,1)$, where $\mathbf{B}(0,1)$ is a Lie superalgebra.
Set
$$V=\left(\begin{array}{ccc}
A&B&\rho\\C&-A&\delta\\\delta&-\rho&0\end{array}\right)
$$
where $p(A)=p(B)=p(C)=0$, $p(\rho)=p(\delta)=1$. Noting that
$$UV-VU=\left(\begin{array}{ccc}
-B-rC+\alpha\delta+\beta\rho & -\lambda B+2rA+qB-2\alpha\rho & -\frac{1}{2}\lambda\rho-\alpha A-\beta B+\frac{1}{2}q\rho-r\delta\\
\lambda C+2A-qC+2\beta\delta & B+rC-\alpha\delta-\beta\rho & \frac{1}{2}\lambda\delta+\beta A-\alpha C+\rho-\frac{1}{2}q\delta\\
\frac{1}{2}\lambda\delta+\beta A-\alpha C+\rho-\frac{1}{2}q\delta & \frac{1}{2}\lambda\rho+\alpha A+\beta B-\frac{1}{2}q\rho+r\delta & 0 \end{array}\right),$$
the co-adjoint representation equation
\begin{equation}
V_x=[U, V]=UV-VU,
\end{equation}
becomes
\begin{equation}\label{c2}\left\{\begin{array}{l}
A_x=-B-rC+\alpha\delta+\beta\rho,\\
B_x=-\lambda B+2rA+qB-2\alpha\rho,\\
C_x=\lambda C+2A-qC+2\beta\delta,\\
\rho_x=-\frac{1}{2}\lambda\rho-\alpha A-\beta
B+\frac{1}{2}q\rho-r\delta,\\
\delta_x=\frac{1}{2}\lambda\delta+\beta A-\alpha C+\rho-\frac{1}{2}q\delta.
\end{array}\right.\end{equation}
On setting $A=\sum\limits_{j\geq0}A_j\lambda^{-j}$,
$B=\sum\limits_{j\geq0}B_j\lambda^{-j}$,
$C=\sum\limits_{j\geq0}C_j\lambda^{-j}$,
$\rho=\sum\limits_{j\geq0}\rho_j\lambda^{-j}$,
$\delta=\sum\limits_{j\geq0}\delta_j\lambda^{-j}$, then
equation (\ref{c2}) is equivalent to
\begin{equation}\label{c3}\left\{\begin{array}{l}
B_0=C_0=\rho_0=\delta_0=0,\\
A_{j, x}=-B_j-rC_j+\beta\rho_j+\alpha\delta_j,\quad j\geq0,\\
B_{j, x}=-B_{j+1}+2rA_j+qB_j-2\alpha\rho_j,\quad j\geq0,\\
C_{j, x}=C_{j+1}+2A_j-qC_j+2\beta\delta_j,\quad j\geq0,\\
\rho_{j, x}=-\frac{1}{2}\rho_{j+1}-\alpha A_j-\beta
B_j+\frac{1}{2}q\rho_j-r\delta_j,\quad j\geq0,\\
\delta_{j, x}=\frac{1}{2}\delta_{j+1}+\beta A_j-\alpha
C_j+\rho_j-\frac{1}{2}q\delta_j,\quad j\geq0.\end{array}\right.
\end{equation}
It can be written as the following recurrence relation
\begin{equation}\label{c6}\left(\begin{array}{c}
A_{n+1}\\-C_{n+1}\\2\delta_{n+1}\\-2\rho_{n+1}\end{array}\right)
={\cal
L}\left(\begin{array}{c}A_n\\-C_n\\2\delta_n\\-2\rho_n\end{array}\right),\end{equation}
where the recursion operator is given by
$${\cal L}=\left(\begin{array}{cccc}
-\partial+\partial^{-1}q\partial&r+\partial^{-1}r\partial&
\frac{1}{2}\alpha+\partial^{-1}\alpha\partial&-\frac{1}{2}\beta+\partial^{-1}\beta\partial\\
2&\partial+q&\beta&0\\
-4\beta&-4\alpha&2\partial+q&2\\
-4\beta\partial+4\alpha&4r\beta&2r-2\alpha\beta&-2\partial+q\end{array}\right),$$
with $\partial=d/dx$ and
$\partial\partial^{-1}=\partial^{-1}\partial=1$.
Owing to $B_0=C_0=\rho_0=\delta_0=0$, we find that $A_{0, x}=0$, so
we choose the initial value $A_0=-\frac{1}{2}$. If we set all
constants of integration to zero, all $A_j, B_j, C_j, \rho_j,
\delta_j$ ($j>0$) are uniquely determined by (\ref{c6}). For instance,
$$A_1=0, B_1=-r, C_1=1, \rho_1=\alpha, \delta_1=\beta,$$
$$A_2=-r+2\alpha\beta, B_2=r_x-q r, C_2=q, \rho_2=-2\alpha_x+q\alpha,
\delta_2=2\beta_x+q\beta.$$
Then, consider the auxiliary spectral problem associated with the spectral
problem (\ref{c1})
\begin{equation}\label{c4}
\phi_{t_n}=V^{(n)}\phi,\end{equation} where
$$V^{(n)}=(\lambda^{n}V)_++\Delta_n=\sum_{j=0}^{n}\left(\begin{array}{ccc}
A_j&B_j&\rho_j\\C_j&-A_j&\delta_j\\\delta_j&-\rho_j&0\end{array}\right)\lambda^{n-j}
+\left(\begin{array}{ccc}
\frac{1}{2}C_{n+1}&0&0\\0&-\frac{1}{2}C_{n+1}&0\\0&0&0\end{array}\right),$$
and $(\lambda^{n}V)_+$ denotes the part of $\lambda^{n}V$ with non-negative powers of $\lambda$.
The compatibility conditions of Lax pairs
\begin{equation}\label{c27}
\phi_x=U\phi,\quad \phi_{t_n}=V^{(n)}\phi,
\end{equation}
determine a hierarchy of super cKdV system
\begin{equation}\label{c5}\left\{\begin{array}{l}
q_{t_n}=C_{n+1, x},\\
r_{t_n}=B_{n+1}+rC_{n+1},\\
\alpha_{t_n}=\frac{1}{2}\alpha C_{n+1}-\frac{1}{2}\rho_{n+1},\\
\beta_{t_n}=\frac{1}{2}\delta_{n+1}-\frac{1}{2}\beta
C_{n+1}.\end{array}\right.\end{equation} The first nonlinear super cKdV
system in the hierarchy (\ref{c5}) reads
\begin{equation}\left\{\begin{array}{l}
q_{t_2}=q_{xx}+2qq_x+2r_x-4\alpha_x\beta-4\alpha\beta_x-4\beta\beta_{xx},\\
r_{t_2}=-r_{xx}+2q_xr+2qr_x+4\alpha\alpha_x-4r\beta\beta_x,\\
\alpha_{t_2}=-2\alpha_{xx}+\frac{3}{2}q_x\alpha+2q\alpha_x+r_x\beta+2r\beta_x
-2\alpha\beta\beta_x,\\
\beta_{t_2}=2\beta_{xx}+\frac{1}{2}q_x\beta+2q\beta_x+2\alpha_x,
\end{array}\right.\end{equation}
whose Lax pairs are $U$ and
$$V^{(2)}=\left(\begin{array}{ccc}
-\frac{1}{2}\lambda^{2}+\frac{1}{2}q_x+\frac{1}{2}q^{2}-2\beta\beta_x&
-r\lambda+r_x-qr&\alpha\lambda-2\alpha_x+q\alpha\\ \lambda+q&
\frac{1}{2}\lambda^{2}-\frac{1}{2}q_x-\frac{1}{2}q^{2}+2\beta\beta_x&
\beta\lambda+2\beta_x+q\beta\\ \beta\lambda+2\beta_x+q\beta&
-\alpha\lambda+2\alpha_x-q\alpha&0 \end{array}\right).$$
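In the bosonic reduction $\alpha=\beta=0$, the recursion (\ref{c3}) and the hierarchy (\ref{c5}) can be checked symbolically. The sketch below (ours, using SymPy; it verifies only the even sector, since Grassmann-odd variables are not treated) reproduces the first two equations of the $t_2$ system from $A_2$, $B_2$, $C_2$.

```python
import sympy as sp

x = sp.symbols('x')
q = sp.Function('q')(x)
r = sp.Function('r')(x)

# Values from the text with alpha = beta = 0 (bosonic reduction)
A2 = -r
B2 = sp.diff(r, x) - q * r
C2 = q

# Next step of the recursion (c3):
C3 = sp.diff(C2, x) - 2 * A2 + q * C2        # from C_{2,x} = C_3 + 2 A_2 - q C_2
B3 = -sp.diff(B2, x) + 2 * r * A2 + q * B2   # from B_{2,x} = -B_3 + 2 r A_2 + q B_2

qt2 = sp.diff(C3, x)      # q_{t_2} = C_{3,x}
rt2 = B3 + r * C3         # r_{t_2} = B_3 + r C_3

# Bosonic part of the first nonlinear flow stated in the text
qt2_text = sp.diff(q, x, 2) + 2 * q * sp.diff(q, x) + 2 * sp.diff(r, x)
rt2_text = -sp.diff(r, x, 2) + 2 * sp.diff(q, x) * r + 2 * q * sp.diff(r, x)
```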
In what follows, the super Hamiltonian structure of the super cKdV system (\ref{c5})
is derived. Using the
supertrace identity \cite{Hu, Maqin}
\begin{equation}\label{c28}
\frac{\delta}{\delta u}\int Str(V\frac{\partial
U}{\partial\lambda})dx=(\lambda^{-\gamma}\frac{\partial}{\partial\lambda}\lambda^{\gamma})
Str(\frac{\partial U}{\partial u}V),\end{equation} where Str means
super trace, we have
$$\left(\begin{array}{c}
\frac{\delta}{\delta q}\\\frac{\delta}{\delta
r}\\\frac{\delta}{\delta \alpha}\\\frac{\delta}{\delta
\beta}\end{array}\right)\int-A_{n+1}dx=(\gamma-n)
\left(\begin{array}{c}A_n\\-C_n\\2\delta_n\\-2\rho_n\end{array}\right),$$
where $\gamma$ is an arbitrary constant. Setting $n=1$ in the above
equality, we obtain $\gamma=0$. Therefore, we get the following
identity
$$\left(\begin{array}{c}A_{n+1}\\-C_{n+1}\\2\delta_{n+1}\\-2\rho_{n+1}\end{array}\right)
=\frac{\delta}{\delta u}H_n,\quad H_n=\int\frac{1}{n+1}A_{n+2}dx.$$
Thus, the super cKdV hierarchy can be written as the following super
Hamiltonian form
\begin{equation}\label{c7}
u_{t_n}=\left(\begin{array}{c}q\\r\\\alpha\\\beta\end{array}\right)_{t_n}
=K_n=J\left(\begin{array}{c}A_{n+1}\\-C_{n+1}\\2\delta_{n+1}\\-2\rho_{n+1}\end{array}\right)
=J\frac{\delta H_n}{\delta u},\end{equation} where the super
symplectic operator is given by
$$J=\left(\begin{array}{cccc}0&-\partial&0&0\\
-\partial&0&\frac{1}{2}\alpha&-\frac{1}{2}\beta\\
0&-\frac{1}{2}\alpha&0&\frac{1}{4}\\
0&\frac{1}{2}\beta&\frac{1}{4}&0\end{array}\right).$$
\section{A novel symmetry constraint}
In this section, a symmetry constraint between the potential and the eigenfunction can be obtained.
To this end, consider the
adjoint spectral problem associated with spectral problem (\ref{c1})
\begin{equation}\label{c8}
\psi_x=-(U(u, \lambda))^{St}\psi=\left(\begin{array}{ccc}
\frac{1}{2}\lambda-\frac{1}{2}q&-1&\beta\\
r&-\frac{1}{2}\lambda+\frac{1}{2}q&-\alpha\\
-\alpha&-\beta&0\end{array}\right)\psi,\quad
\psi=\left(\begin{array}{c}\psi_1\\\psi_2\\\psi_3\end{array}\right),
\end{equation}
where $St$ means super-transposition.
Using Lemma 2.1 in Ref. \cite{YuHeMaCheng}, we can easily obtain the
variational derivative of the spectral parameter $\lambda$ with
respect to the potential $u$:
\begin{equation}\label{c9}
\frac{\delta\lambda}{\delta u}=\frac{1}{E}\left(\begin{array}{c}
\frac{1}{2}(\psi_1\phi_1-\psi_2\phi_2)\\-\psi_1\phi_2\\\psi_1\phi_3+\psi_3\phi_2\\
\psi_2\phi_3-\psi_3\phi_1\end{array}\right),\end{equation} where
$E=\int\frac{1}{2}(\psi_1\phi_1-\psi_2\phi_2)dx$. When the zero boundary
conditions
$\lim_{|x|\rightarrow\infty}\phi=\lim_{|x|\rightarrow\infty}\psi=0$
are imposed, it satisfies the following equation
\begin{equation}\label{c10}
{\cal L}\frac{\delta\lambda}{\delta
u}=\lambda\frac{\delta\lambda}{\delta u},\end{equation} where ${\cal
L}$ is defined as in (\ref{c6}). The above variational derivative
will serve as a conserved covariant yielding a specific symmetry
used in symmetry constraints.
For Lax pairs (\ref{c27}), we choose the following symmetry
constraint
\begin{equation}\label{c10`}\left(\begin{array}{c}
-r+2\alpha\beta\\-q\\4\beta_x+2q\beta\\4\alpha_x-2q\alpha\end{array}\right)
=\left(\begin{array}{c} \frac{1}{2}(<\Psi_1, \Phi_1>-<\Psi_2,
\Phi_2>)\\-<\Psi_1, \Phi_2>\\<\Psi_1, \Phi_3>+<\Psi_3, \Phi_2>\\
<\Psi_2, \Phi_3>-<\Psi_3, \Phi_1>\end{array}\right),\end{equation}
where $\Phi_i=(\phi_{i1}, \cdots, \phi_{iN})^{T}$,
$\Psi_i=(\psi_{i1}, \cdots, \psi_{iN})^{T} (i=1, 2, 3)$, and $<.,.>$
denotes the standard inner product in $R^{N}$. We find that the odd
potentials $\alpha$ and $\beta$ cannot be expressed explicitly in
terms of the eigenfunctions, whereas the even potentials $q$ and $r$
can. Therefore, the symmetry
constraint (\ref{c10`}) is a novel constraint.
\begin{Remark}
In classical integrable systems, the symmetry constraint between the
potential and the eigenfunctions is either explicit or implicit. To
date, no example has been found whose symmetry constraint combines an
explicit part with an implicit part, not even among super integrable
systems. Therefore, eq. (\ref{c10`}) is indeed a novel
symmetry constraint.
\end{Remark}
Then denote the expression of $P(u)$ under the symmetry constraint
(\ref{c10`}) by $\tilde{P}$. From the property (\ref{c10}) and the
recurrence relation (\ref{c6}), we obtain
\begin{equation}\label{c13}\left\{\begin{array}{l}
\tilde{A}_{n+1}=\frac{1}{2}(<\Lambda^{n-1}\Psi_1,
\Phi_1>-<\Lambda^{n-1}\Psi_2, \Phi_2>),\quad n\geq1,\\
\tilde{B}_{n+1}=<\Lambda^{n-1}\Psi_2, \Phi_1>,\quad n\geq1,\\
\tilde{C}_{n+1}=<\Lambda^{n-1}\Psi_1, \Phi_2>,\quad n\geq1,\\
\tilde{\rho}_{n+1}=-\frac{1}{2}(<\Lambda^{n-1}\Psi_2,
\Phi_3>-<\Lambda^{n-1}\Psi_3, \Phi_1>),\quad n\geq1,\\
\tilde{\delta}_{n+1}=\frac{1}{2}(<\Lambda^{n-1}\Psi_1,
\Phi_3>+<\Lambda^{n-1}\Psi_3, \Phi_2>),\quad
n\geq1,\end{array}\right.\end{equation} where
$\Lambda=diag(\lambda_1, \lambda_2, \cdots, \lambda_N).$
\section{Binary nonlinearization}
In the last section, we found the novel symmetry constraint
(\ref{c10`}). Because the odd potentials $\alpha$ and $\beta$ cannot
be explicitly expressed by eigenfunctions, we introduce the
following new independent odd variables
\begin{equation}\label{c23}
\phi_{N+1}=\alpha,\quad\psi_{N+1}=4\beta.
\end{equation}
Choosing $N$ distinct spectral parameters $\lambda_1, \cdots, \lambda_N$, we
obtain the following spatial and temporal systems
\begin{equation}\label{cspatial}\left\{\begin{array}{l}\left(\begin{array}{c}
\phi_{1j}\\\phi_{2j}\\\phi_{3j}\end{array}\right)_x=U(u, \lambda_j)
\left(\begin{array}{c}
\phi_{1j}\\\phi_{2j}\\\phi_{3j}\end{array}\right),\quad j=1, 2,
\cdots, N,\\
\left(\begin{array}{c}
\psi_{1j}\\\psi_{2j}\\\psi_{3j}\end{array}\right)_x=-U^{St}(u,
\lambda_j) \left(\begin{array}{c}
\psi_{1j}\\\psi_{2j}\\\psi_{3j}\end{array}\right),\quad j=1, 2,
\cdots, N,\end{array}\right.\end{equation}
\begin{equation}\label{ctime}\left\{\begin{array}{l}\left(\begin{array}{c}
\phi_{1j}\\\phi_{2j}\\\phi_{3j}\end{array}\right)_{t_n}=V^{(n)}(u,
\lambda_j) \left(\begin{array}{c}
\phi_{1j}\\\phi_{2j}\\\phi_{3j}\end{array}\right),\quad j=1, 2,
\cdots, N,\\
\left(\begin{array}{c}
\psi_{1j}\\\psi_{2j}\\\psi_{3j}\end{array}\right)_{t_n}=-(V^{(n)})^{St}(u,
\lambda_j) \left(\begin{array}{c}
\psi_{1j}\\\psi_{2j}\\\psi_{3j}\end{array}\right),\quad j=1, 2,
\cdots, N.\end{array}\right.\end{equation} It is easy to verify that
the compatibility condition of (\ref{cspatial}) and (\ref{ctime}) is
still the $n$th super cKdV system $u_{t_n}=K_n$. When the symmetry
constraint (\ref{c10`}) and new independent variables (\ref{c23})
are considered, systems (\ref{cspatial}) and (\ref{ctime}) become
the following finite-dimensional system
\begin{equation}\label{c11}\left\{\begin{array}{l}
\phi_{1j, x}=\frac{1}{2}(-\lambda_j+<\Psi_1,
\Phi_2>)\phi_{1j}+\frac{1}{2}(<\Psi_1, \Phi_1>-<\Psi_2,
\Phi_2>-\phi_{N+1}\psi_{N+1})\phi_{2j}\\\qquad\quad+\phi_{N+1}\phi_{3j},\\
\phi_{2j, x}=\phi_{1j}+\frac{1}{2}(\lambda_j-<\Psi_1,
\Phi_2>)\phi_{2j}+\frac{1}{4}\psi_{N+1}\phi_{3j},\\
\phi_{3j, x}=\frac{1}{4}\psi_{N+1}\phi_{1j}-\phi_{N+1}\phi_{2j},\\
\phi_{N+1, x}=\frac{1}{4}(<\Psi_2, \Phi_3>-<\Psi_3,
\Phi_1>)+\frac{1}{2}<\Psi_1, \Phi_2>\phi_{N+1},\\
\psi_{1j, x}=\frac{1}{2}(\lambda_j-<\Psi_1,
\Phi_2>)\psi_{1j}-\psi_{2j}+\frac{1}{4}\psi_{N+1}\psi_{3j},\\
\psi_{2j, x}=\frac{1}{2}(-<\Psi_1, \Phi_1>+<\Psi_2,
\Phi_2>+\phi_{N+1}\psi_{N+1})\psi_{1j}+\frac{1}{2}(-\lambda_j+<\Psi_1,
\Phi_2>)\psi_{2j}\\\qquad\quad-\phi_{N+1}\psi_{3j},\\
\psi_{3j, x}=-\phi_{N+1}\psi_{1j}-\frac{1}{4}\psi_{N+1}\psi_{2j},\\
\psi_{N+1, x}=<\Psi_1, \Phi_3>+<\Psi_3, \Phi_2>-\frac{1}{2}<\Psi_1,
\Phi_2>\psi_{N+1},\end{array}\right.\end{equation} where $1\leq
j\leq N$. Then system (\ref{c11}) can be written as follows
\begin{equation}\label{c12}\left\{\begin{array}{l}
\Phi_{1, x}=\frac{1}{2}(-\Lambda+<\Psi_1,
\Phi_2>)\Phi_1+\frac{1}{2}(<\Psi_1, \Phi_1>-<\Psi_2,
\Phi_2>-\phi_{N+1}\psi_{N+1})\Phi_2\\\qquad\quad+\phi_{N+1}\Phi_3=\frac{\partial
H_1}{\partial\Psi_1},\\
\Phi_{2, x}=\Phi_1+\frac{1}{2}(\Lambda-<\Psi_1,
\Phi_2>)\Phi_2+\frac{1}{4}\psi_{N+1}\Phi_3=\frac{\partial
H_1}{\partial\Psi_2},\\
\Phi_{3,
x}=\frac{1}{4}\psi_{N+1}\Phi_1-\phi_{N+1}\Phi_2=\frac{\partial
H_1}{\partial\Psi_3},\\
\phi_{N+1, x}=\frac{1}{4}(<\Psi_2, \Phi_3>-<\Psi_3,
\Phi_1>)+\frac{1}{2}<\Psi_1, \Phi_2>\phi_{N+1}=\frac{\partial
H_1}{\partial\psi_{N+1}},\\
\Psi_{1, x}=\frac{1}{2}(\Lambda-<\Psi_1,
\Phi_2>)\Psi_1-\Psi_2+\frac{1}{4}\psi_{N+1}\Psi_3=-\frac{\partial
H_1}{\partial\Phi_1},\\
\Psi_{2, x}=\frac{1}{2}(-<\Psi_1, \Phi_1>+<\Psi_2,
\Phi_2>+\phi_{N+1}\psi_{N+1})\Psi_1+\frac{1}{2}(-\Lambda+<\Psi_1,
\Phi_2>)\Psi_2\\\qquad\quad-\phi_{N+1}\Psi_3=-\frac{\partial
H_1}{\partial\Phi_2},\\
\Psi_{3,
x}=-\phi_{N+1}\Psi_1-\frac{1}{4}\psi_{N+1}\Psi_2=\frac{\partial
H_1}{\partial\Phi_3},\\
\psi_{N+1, x}=<\Psi_1, \Phi_3>+<\Psi_3, \Phi_2>-\frac{1}{2}<\Psi_1,
\Phi_2>\psi_{N+1}=\frac{\partial
H_1}{\partial\phi_{N+1}},\end{array}\right.\end{equation} where the
Hamiltonian function is
\begin{eqnarray*}
H_1&=&-\frac{1}{2}<\Lambda\Psi_1, \Phi_1>+\frac{1}{2}<\Lambda\Psi_2,
\Phi_2>+\frac{1}{2} <\Psi_1, \Phi_2>(<\Psi_1, \Phi_1>-<\Psi_2,
\Phi_2>)\\&&+<\Psi_2,
\Phi_1>-\frac{1}{2}\phi_{N+1}\psi_{N+1}<\Psi_1,
\Phi_2>+\phi_{N+1}(<\Psi_1, \Phi_3>+<\Psi_3,
\Phi_2>)\\&&+\frac{1}{4}\psi_{N+1}(<\Psi_2, \Phi_3>-<\Psi_3,
\Phi_1>).\end{eqnarray*}
For the $t_2$-part, we have the following spectral problem
\begin{equation}\label{t2-1}
\phi_{t_2}=V^{(2)}\phi=\left(\begin{array}{ccc}
-\frac{1}{2}\lambda^{2}+\frac{1}{2}q_x+\frac{1}{2}q^{2}-2\beta\beta_x&
-r\lambda+r_x-qr&\alpha\lambda-2\alpha_x+q\alpha\\ \lambda+q&
\frac{1}{2}\lambda^{2}-\frac{1}{2}q_x-\frac{1}{2}q^{2}+2\beta\beta_x&
\beta\lambda+2\beta_x+q\beta\\ \beta\lambda+2\beta_x+q\beta&
-\alpha\lambda+2\alpha_x-q\alpha&0
\end{array}\right)\phi,\end{equation}
and its adjoint spectral problem
\begin{equation}\label{t2-2}
\psi_{t_2}=-(V^{(2)})^{St}\psi=\left(\begin{array}{ccc}
\frac{1}{2}\lambda^{2}-\frac{1}{2}q_x-\frac{1}{2}q^{2}+2\beta\beta_x&
-\lambda-q&\beta\lambda+2\beta_x+q\beta\\
r\lambda-r_x+qr&-\frac{1}{2}\lambda^{2}+\frac{1}{2}q_x+\frac{1}{2}q^{2}-2\beta\beta_x&
-\alpha\lambda+2\alpha_x-q\alpha\\
-\alpha\lambda+2\alpha_x-q\alpha& -\beta\lambda-2\beta_x-q\beta&0
\end{array}\right)\psi.\end{equation}
Considering $N$ copies of (\ref{t2-1}) and (\ref{t2-2}) under the
symmetry constraint (\ref{c10`}), we obtain the following
finite-dimensional system
\begin{equation}\label{t2-3}\left\{\begin{array}{l}
\phi_{1j,
t_2}=(-\frac{1}{2}\lambda_j^{2}+\frac{1}{2}\tilde{q}_x+\frac{1}{2}\tilde{q}^{2}
-2\tilde{\beta}\tilde{\beta}_x)\phi_{1j}+
(-\tilde{r}\lambda_j+\tilde{r}_x-\tilde{q}\tilde{r})\phi_{2j}
+(\tilde{\alpha}\lambda_j-2\tilde{\alpha}_x+\tilde{q}\tilde{\alpha})\phi_{3j},\\
\phi_{2j, t_2}=(\lambda_j+\tilde{q})\phi_{1j}+
(\frac{1}{2}\lambda_j^{2}-\frac{1}{2}\tilde{q}_x-\frac{1}{2}\tilde{q}^{2}
+2\tilde{\beta}\tilde{\beta}_x)\phi_{2j}
+(\tilde{\beta}\lambda_j+2\tilde{\beta}_x+\tilde{q}\tilde{\beta})\phi_{3j},\\
\phi_{3j,
t_2}=(\tilde{\beta}\lambda_j+2\tilde{\beta}_x+\tilde{q}\tilde{\beta})\phi_{1j}
+(-\tilde{\alpha}\lambda_j+2\tilde{\alpha}_x-\tilde{q}\tilde{\alpha})\phi_{2j},\\
\psi_{1j,
t_2}=(\frac{1}{2}\lambda_j^{2}-\frac{1}{2}\tilde{q}_x-\frac{1}{2}\tilde{q}^{2}
+2\tilde{\beta}\tilde{\beta}_x)\psi_{1j}-(\lambda_j+\tilde{q})\psi_{2j}+
(\tilde{\beta}\lambda_j+2\tilde{\beta}_x+\tilde{q}\tilde{\beta})\psi_{3j},\\
\psi_{2j,
t_2}=(\tilde{r}\lambda_j-\tilde{r}_x+\tilde{q}\tilde{r})\psi_{1j}
+(-\frac{1}{2}\lambda_j^{2}+\frac{1}{2}\tilde{q}_x+\frac{1}{2}\tilde{q}^{2}
-2\tilde{\beta}\tilde{\beta}_x)\psi_{2j}
+(-\tilde{\alpha}\lambda_j+2\tilde{\alpha}_x-\tilde{q}\tilde{\alpha})\psi_{3j},\\
\psi_{3j,
t_2}=(-\tilde{\alpha}\lambda_j+2\tilde{\alpha}_x-\tilde{q}\tilde{\alpha})\psi_{1j}
-(\tilde{\beta}\lambda_j+2\tilde{\beta}_x+\tilde{q}\tilde{\beta})\psi_{2j},
\end{array}\right.\end{equation}
where $1\leq j\leq N$; here $\tilde{q}$, $\tilde{r}$, $\tilde{\alpha}$ and
$\tilde{\beta}$ denote $q$, $r$, $\alpha$ and $\beta$, respectively,
under the symmetry constraint (\ref{c10`}), and $\tilde{q}_x$,
$\tilde{r}_x$, $\tilde{\alpha}_x$, $\tilde{\beta}_x$ are given by
the following identities
$$\left\{\begin{array}{l}
\tilde{q}_x=<\Lambda\Psi_1, \Phi_2>-<\Psi_1, \Phi_2>^{2}+<\Psi_1,
\Phi_1>-<\Psi_2, \Phi_2>+\frac{1}{4}\psi_{N+1}(<\Psi_1,
\Phi_3>+<\Psi_3, \Phi_2>),\\
\tilde{r}_x=<\Psi_2, \Phi_1>-\frac{1}{2}<\Psi_1, \Phi_2>(<\Psi_1,
\Phi_1>-<\Psi_2, \Phi_2>)+\frac{1}{2}<\Psi_1,
\Phi_2>\phi_{N+1}\psi_{N+1},\\
\tilde{\alpha}_x=\frac{1}{4}(<\Psi_2, \Phi_3>-<\Psi_3,
\Phi_1>)+\frac{1}{2}<\Psi_1, \Phi_2>\phi_{N+1},\\
\tilde{\beta}_x=\frac{1}{4}(<\Psi_1, \Phi_3>+<\Psi_3,
\Phi_2>)-\frac{1}{8}<\Psi_1, \Phi_2>\psi_{N+1}.\end{array}\right.$$
Thus, the constrained system (\ref{t2-3}) becomes
\begin{equation}\left\{\begin{array}{l}
\Phi_{1, t_2}=\frac{1}{2}(-\Lambda^{2}+<\Lambda\Psi_1,
\Phi_2>+<\Psi_1, \Phi_1>-<\Psi_2, \Phi_2>)\Phi_1+\frac{1}{2}
[(<\Psi_1, \Phi_1>-<\Psi_2,
\Phi_2>)\Lambda\\\qquad\qquad-\phi_{N+1}\psi_{N+1}\Lambda+2<\Psi_2,
\Phi_1>]\Phi_2+\frac{1}{2}(2\phi_{N+1}\Lambda-<\Psi_2,
\Phi_3>+<\Psi_3, \Phi_1>)\Phi_3=\frac{\partial
H_2}{\partial\Psi_1},\\
\Phi_{2, t_2}=(\Lambda+<\Psi_1,
\Phi_2>)\Phi_1+\frac{1}{2}(\Lambda^{2}-<\Lambda\Psi_1,
\Phi_2>-<\Psi_1, \Phi_1>+<\Psi_2, \Phi_2>)\Phi_2+\frac{1}{4}
(\psi_{N+1}\Lambda\\\qquad\qquad+2<\Psi_1, \Phi_3>+2<\Psi_3,
\Phi_2>)\Phi_3=\frac{\partial H_2}{\partial\Psi_2},\\
\Phi_{3, t_2}=\frac{1}{4}(\psi_{N+1}\Lambda+2<\Psi_1,
\Phi_3>+2<\Psi_3,
\Phi_2>)\Phi_1-\frac{1}{2}(2\phi_{N+1}\Lambda-<\Psi_2,
\Phi_3>+<\Psi_3, \Phi_1>)\Phi_2\\\qquad\qquad=\frac{\partial
H_2}{\partial\Psi_3},\\
\phi_{N+1, t_2}=\frac{1}{2}\phi_{N+1}<\Lambda\Psi_1,
\Phi_2>+\frac{1}{4}(<\Lambda\Psi_2, \Phi_3>-<\Lambda\Psi_3,
\Phi_1>)=\frac{\partial H_2}{\partial\Psi_{N+1}},\\
\Psi_{1, t_2}=\frac{1}{2}(\Lambda^{2}-<\Lambda\Psi_1,
\Phi_2>-<\Psi_1, \Phi_1>+<\Psi_2, \Phi_2>)\Psi_1-(\Lambda+<\Psi_1,
\Phi_2>)\Psi_2+\frac{1}{4}(\psi_{N+1}\Lambda\\\qquad\qquad+2<\Psi_1,
\Phi_3>+2<\Psi_3, \Phi_2>)\Psi_3=-\frac{\partial
H_2}{\partial\Phi_1},\\
\Psi_{2, t_2}=-\frac{1}{2} [(<\Psi_1, \Phi_1>-<\Psi_2,
\Phi_2>)\Lambda-\phi_{N+1}\psi_{N+1}\Lambda+2<\Psi_2,
\Phi_1>]\Psi_1+\frac{1}{2}(-\Lambda^{2}+<\Lambda\Psi_1,
\Phi_2>\\\qquad\qquad+<\Psi_1, \Phi_1>-<\Psi_2,
\Phi_2>)\Psi_2-\frac{1}{2}(2\phi_{N+1}\Lambda-<\Psi_2,
\Phi_3>+<\Psi_3, \Phi_1>)\Psi_3=-\frac{\partial
H_2}{\partial\Phi_2},\\
\Psi_{3, t_2}=-\frac{1}{2}(2\phi_{N+1}\Lambda-<\Psi_2,
\Phi_3>+<\Psi_3,
\Phi_1>)\Psi_1-\frac{1}{4}(\psi_{N+1}\Lambda+2<\Psi_1,
\Phi_3>+2<\Psi_3, \Phi_2>)\Psi_2\\\qquad\qquad=\frac{\partial
H_2}{\partial\Phi_3},\\
\psi_{N+1, t_2}=<\Lambda\Psi_1, \Phi_3>+<\Lambda\Psi_3,
\Phi_2>-\frac{1}{2}\psi_{N+1}<\Lambda\Psi_1, \Phi_2>=\frac{\partial
H_2}{\partial\phi_{N+1}},\end{array}\right.\end{equation} where the
Hamiltonian function is
\begin{eqnarray*}
H_2&=&-\frac{1}{2}(<\Lambda^{2}\Psi_1, \Phi_1>-<\Lambda^{2}\Psi_2,
\Phi_2>)+\frac{1}{2}<\Lambda\Psi_1, \Phi_2>(<\Psi_1,
\Phi_1>-<\Psi_2, \Phi_2>)\\&& +<\Lambda\Psi_2,
\Phi_1>-\frac{1}{2}\phi_{N+1}\psi_{N+1}<\Lambda\Psi_1,
\Phi_2>+\frac{1}{4}\psi_{N+1}(<\Lambda\Psi_2,
\Phi_3>-<\Lambda\Psi_3, \Phi_1>)\\&& +<\Psi_2, \Phi_1><\Psi_1,
\Phi_2>-\frac{1}{2}(<\Psi_2, \Phi_3>-<\Psi_3, \Phi_1>)(<\Psi_1,
\Phi_3>+<\Psi_3, \Phi_2>)\\&&
+\phi_{N+1}(<\Lambda\Psi_1,
\Phi_3>+<\Lambda\Psi_3, \Phi_2>)+\frac{1}{4}(<\Psi_1,
\Phi_1>-<\Psi_2, \Phi_2>)^{2}.\end{eqnarray*}
Let us construct integrals of motion for (\ref{c12}). The obvious
equality $(\tilde{V}^{2})_x=[\tilde{U}, \tilde{V}^{2}]$ leads to
\begin{equation}\label{cF}
F_x=(\frac{1}{2}Str\tilde{V}^{2})_x=
\frac{d}{dx}(\tilde{A}^{2}+\tilde{B}\tilde{C}+2\tilde{\rho}\tilde{\delta})=0,
\end{equation}
that is to say, $F$ is a generating function of integrals of motion
for the constrained spatial system (\ref{c12}). Since
$F=\sum\limits_{n\geq0}F_n\lambda^{-n}$, we obtain the following
expressions
$$F_n=\sum\limits_{i=0}^{n}(\tilde{A}_i\tilde{A}_{n-i}+\tilde{B}_i\tilde{C}_{n-i}
+2\tilde{\rho}_i\tilde{\delta}_{n-i}).$$ Using (\ref{c13}), we get
\begin{eqnarray}\label{c15}
F_0&=&\frac{1}{4}, \quad F_1=F_2=0,\nonumber\\
F_3&=&-\frac{1}{2}(<\Lambda\Psi_1, \Phi_1>-<\Lambda\Psi_2,
\Phi_2>)-\frac{1}{4}(<\Psi_2, \Phi_3>-<\Psi_3,
\Phi_1>)\psi_{N+1}\nonumber\\&&+\frac{1}{2}(<\Psi_1,
\Phi_1>-<\Psi_2, \Phi_2>-\phi_{N+1}\psi_{N+1})<\Psi_1,
\Phi_2>+<\Psi_2, \Phi_1>\nonumber\\&&+\phi_{N+1}(<\Psi_1,
\Phi_3>+<\Psi_3, \Phi_2>)=H_1,\nonumber\\
F_4&=&-\frac{1}{2}(<\Lambda^{2}\Psi_1, \Phi_1>-<\Lambda^{2}\Psi_2,
\Phi_2>)+\phi_{N+1}(<\Lambda\Psi_1, \Phi_3>+<\Lambda\Psi_3,
\Phi_2>)\nonumber\\&&+\frac{1}{2}(<\Psi_1, \Phi_1>-<\Psi_2,
\Phi_2>-\phi_{N+1}\psi_{N+1})<\Lambda\Psi_1, \Phi_2>+<\Lambda\Psi_2,
\Phi_1>\nonumber\\&&-\frac{1}{4}(<\Lambda\Psi_2,
\Phi_3>-<\Lambda\Psi_3, \Phi_1>)\psi_{N+1}+\frac{1}{4}(<\Psi_1,
\Phi_1>-<\Psi_2, \Phi_2>)^{2}\nonumber\\&&-\frac{1}{2}(<\Psi_2,
\Phi_3>-<\Psi_3, \Phi_1>)(<\Psi_1, \Phi_3>+<\Psi_3,
\Phi_2>)+<\Psi_2, \Phi_1><\Psi_1,
\Phi_2>,\nonumber\\
F_n&=&-\frac{1}{2}(<\Lambda^{n-2}\Psi_1,
\Phi_1>-<\Lambda^{n-2}\Psi_2,
\Phi_2>)+\phi_{N+1}(<\Lambda^{n-3}\Psi_1,
\Phi_3>+<\Lambda^{n-3}\Psi_3,
\Phi_2>)\nonumber\\&&+\frac{1}{2}(<\Psi_1, \Phi_1>-<\Psi_2,
\Phi_2>-\phi_{N+1}\psi_{N+1})<\Lambda^{n-3}\Psi_1,
\Phi_2>+<\Lambda^{n-3}\Psi_2, \Phi_1>\nonumber\\&&
-\frac{1}{4}(<\Lambda^{n-3}\Psi_2, \Phi_3>-<\Lambda^{n-3}\Psi_3,
\Phi_1>)\psi_{N+1}+\sum_{i=2}^{n-2}[\frac{1}{4}(<\Lambda^{i-2}\Psi_1,
\Phi_1>\nonumber\\&&-<\Lambda^{i-2}\Psi_2,
\Phi_2>)(<\Lambda^{n-i-2}\Psi_1, \Phi_1>-<\Lambda^{n-i-2}\Psi_2,
\Phi_2>)+<\Lambda^{i-2}\Psi_2,
\Phi_1>\nonumber\\&&<\Lambda^{n-i-2}\Psi_1,
\Phi_2>-\frac{1}{2}(<\Lambda^{i-2}\Psi_2,
\Phi_3>-<\Lambda^{i-2}\Psi_3, \Phi_1>)(<\Lambda^{n-i-2}\Psi_1,
\Phi_3>\nonumber\\&&+<\Lambda^{n-i-2}\Psi_3, \Phi_2>)],\quad
n\geq5.\end{eqnarray} Here the $F_n$ $(n\geq0)$ are all polynomials in the
$6N+2$ dependent variables $\phi_{ij}$, $\psi_{ij}$, $\phi_{N+1}$ and
$\psi_{N+1}$, with $i=1, 2, 3$ and $j=1, \cdots, N$. Note that for the
temporal part, $V_{t_n}=[V^{(n)}, V]$ holds. By a similar
argument, we find that $F=\frac{1}{2}Str\tilde{V}^{2}$ is also a
generating function of integrals of motion for (\ref{ctime}).
Moreover, when the symmetry constraint (\ref{c10`}) and new
independent variables (\ref{c23}) are considered, system
(\ref{ctime}) is constrained as follows
\begin{equation}\label{c18}\left\{\begin{array}{l}
\phi_{1j,
t_n}=(\sum\limits_{m=0}^{n}\tilde{A}_m\lambda_j^{n-m}+\frac{1}{2}\tilde{C}_{n+1})\phi_{1j}
+\sum\limits_{m=0}^{n}\tilde{B}_m\lambda_j^{n-m}\phi_{2j}+
\sum\limits_{m=0}^{n}\tilde{\rho}_m\lambda_j^{n-m}\phi_{3j},\quad 1\leq j\leq N,\\
\phi_{2j,
t_n}=\sum\limits_{m=0}^{n}\tilde{C}_m\lambda_j^{n-m}\phi_{1j}
-(\sum\limits_{m=0}^{n}\tilde{A}_m\lambda_j^{n-m}+\frac{1}{2}\tilde{C}_{n+1})\phi_{2j}
+\sum\limits_{m=0}^{n}\tilde{\delta}_m\lambda_j^{n-m}\phi_{3j},\quad 1\leq j\leq N,\\
\phi_{3j,
t_n}=\sum\limits_{m=0}^{n}\tilde{\delta}_m\lambda_j^{n-m}\phi_{1j}
-\sum\limits_{m=0}^{n}\tilde{\rho}_m\lambda_j^{n-m}\phi_{2j},\quad 1\leq j\leq N,\\
\phi_{N+1, t_n}=\frac{1}{2}\phi_{N+1}<\Lambda^{n-1}\Psi_1, \Phi_2>
+\frac{1}{4}(<\Lambda^{n-1}\Psi_2, \Phi_3>-<\Lambda^{n-1}\Psi_3,
\Phi_2>),\\
\psi_{1j,
t_n}=-(\sum\limits_{m=0}^{n}\tilde{A}_m\lambda_j^{n-m}+\frac{1}{2}\tilde{C}_{n+1})\psi_{1j}
-\sum\limits_{m=0}^{n}\tilde{C}_m\lambda_j^{n-m}\psi_{2j}
+\sum\limits_{m=0}^{n}\tilde{\delta}_m\lambda_j^{n-m}\psi_{3j},\quad 1\leq j\leq N,\\
\psi_{2j,
t_n}=-\sum\limits_{m=0}^{n}\tilde{B}_m\lambda_j^{n-m}\psi_{1j}
+(\sum\limits_{m=0}^{n}\tilde{A}_m\lambda_j^{n-m}+\frac{1}{2}\tilde{C}_{n+1})\psi_{2j}
-\sum\limits_{m=0}^{n}\tilde{\rho}_m\lambda_j^{n-m}\psi_{3j},\quad 1\leq j\leq N,\\
\psi_{3j,
t_n}=-\sum\limits_{m=0}^{n}\tilde{\rho}_m\lambda_j^{n-m}\psi_{1j}
-\sum\limits_{m=0}^{n}\tilde{\delta}_m\lambda_j^{n-m}\psi_{2j},\quad
1\leq j\leq N,\\
\psi_{N+1, t_n}=<\Lambda^{n-1}\Psi_1, \Phi_3>+<\Lambda^{n-1}\Psi_3,
\Phi_2>-\frac{1}{2}\psi_{N+1}<\Lambda^{n-1}\Psi_1, \Phi_2>.
\end{array}\right.\end{equation}
After a direct calculation, we have
\begin{equation}\label{c19}\left\{\begin{array}{l}
\Phi_{1, t_n}=\frac{\partial F_{n+2}}{\partial\Psi_1},\quad \Phi_{2,
t_n}=\frac{\partial F_{n+2}}{\partial\Psi_2},\quad \Phi_{3,
t_n}=\frac{\partial F_{n+2}}{\partial\Psi_3},\quad \phi_{N+1,
t_n}=\frac{\partial F_{n+2}}{\partial\Psi_{N+1}},\\
\Psi_{1, t_n}=-\frac{\partial F_{n+2}}{\partial\Phi_1},\quad
\Psi_{2, t_n}=-\frac{\partial F_{n+2}}{\partial\Phi_2},\quad
\Psi_{3, t_n}=\frac{\partial F_{n+2}}{\partial\Phi_3},\quad
\psi_{N+1, t_n}=\frac{\partial
F_{n+2}}{\partial\Phi_{N+1}},\end{array}\right.\end{equation} which
shows that the constrained system (\ref{c18}) is a super Hamiltonian
system.
In what follows, for the $(6N+2)$-dimensional super Hamiltonian systems
(\ref{c12}) and (\ref{c19}), we find $3N+1$ integrals of
motion. It is straightforward to verify that
\begin{equation}\label{c22}
f_k=\psi_{1k}\phi_{1k}+\psi_{2k}\phi_{2k}+\psi_{3k}\phi_{3k},\quad
1\leq k\leq N,\end{equation} are integrals of motion for the constrained
systems (\ref{c12}) and (\ref{c19}). Therefore, for these constrained
systems we choose the following $3N+1$ integrals of
motion
\begin{equation}\label{c3N+1}
f_1, \cdots, f_N, F_3, F_4, \cdots, F_{2N+3}.\end{equation} After a
simple calculation, we get
\begin{equation}\label{c21}
\{F_m, F_{n+2}\}=\frac{\partial}{\partial t_n}F_m=0,\end{equation}
where the Poisson bracket is defined by
\begin{equation}\label{c20}
\{F, G\}=\sum\limits_{i=1}^{3}\sum\limits_{j=1}^{N}(\frac{\partial
F}{\partial\phi_{ij}}\frac{\partial
G}{\partial\psi_{ij}}-(-1)^{p(\phi_{ij})p(\psi_{ij})}\frac{\partial
F}{\partial\psi_{ij}}\frac{\partial
G}{\partial\phi_{ij}})+\frac{\partial
F}{\partial\phi_{N+1}}\frac{\partial
G}{\partial\psi_{N+1}}+\frac{\partial
F}{\partial\psi_{N+1}}\frac{\partial
G}{\partial\phi_{N+1}}.\end{equation} The identity (\ref{c21}) means
that the $\{F_m\}_{m\geq0}$ are in involution. The property of
involution among $\{f_k\}_{k=1}^{N}$ is also immediate.
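As a sketch of why the $f_k$ are in involution with one another, note that $f_j$ involves only the variables with second index $j$; hence for $j\neq k$ every term of the bracket (\ref{c20}) contains a vanishing derivative:
```latex
\{f_j, f_k\}=\sum_{i=1}^{3}\sum_{m=1}^{N}\Big(
\frac{\partial f_j}{\partial\phi_{im}}\frac{\partial f_k}{\partial\psi_{im}}
-(-1)^{p(\phi_{im})p(\psi_{im})}
\frac{\partial f_j}{\partial\psi_{im}}\frac{\partial f_k}{\partial\phi_{im}}
\Big)=0,\qquad j\neq k,
```
since $\partial f_j/\partial\phi_{im}$ and $\partial f_j/\partial\psi_{im}$ vanish unless $m=j$, while the paired factors coming from $f_k$ vanish at $m=j$; the $\phi_{N+1}$, $\psi_{N+1}$ terms drop because neither $f_j$ nor $f_k$ involves them.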
For the independence of
$\{f_k\}_{k=1}^{N}$ and $\{F_m\}_{m=3}^{2N+3}$, we refer to the
proof of Proposition 1 in \cite{HeYuZhouCheng}. Thus we obtain the
following theorem.
\begin{Theorem}
The constrained systems (\ref{c12}) and (\ref{c19}) are Liouville
integrable super Hamiltonian systems, whose integrals of motion are
given by (\ref{c3N+1}).\end{Theorem}
\section{Conclusions and Discussions}
In this paper, the cKdV system is successfully extended to a super cKdV
system. For the new system, the super Hamiltonian structure is expressed
in the form of (\ref{c7}). In our previous papers
\cite{HeYuZhouCheng}-\cite{YuHeMaCheng}, binary
nonlinearization has been applied to the super AKNS system and the super
Dirac system. For the super AKNS system, two kinds of nonlinearization
of Lax pairs have been considered: nonlinearization under an explicit
symmetry constraint \cite{HeYuZhouCheng} and nonlinearization under an
implicit symmetry constraint \cite{YuHanHe}. For the super Dirac system,
only binary nonlinearization under an explicit symmetry constraint
\cite{YuHeMaCheng} has been considered. In each of these three cases,
the symmetry constraint is either purely implicit or purely explicit.
The novelty of the constraint (\ref{c10`}) for the super cKdV system
lies in the combination of an explicit constraint for the even
potentials $(q,r)$ with an implicit constraint for the odd potentials
$(\alpha,\beta)$. Such a combination makes the process of binary
nonlinearization more involved. It is highly non-trivial to solve for
$(\alpha,\beta)$ from the implicit constraints (\ref{c10`}), because
this amounts to a system of coupled differential equations with variable
coefficients. We therefore introduce two new odd variables (\ref{c23}),
following the implicit-constraint technique of \cite{mali}. Thus, the
spatial and temporal parts of the super cKdV system are nonlinearized to
the constrained spatial system (\ref{c12}) and the constrained temporal
system (\ref{c19}), respectively. Both systems are super Hamiltonian
systems and, furthermore, are integrable in the Liouville sense.
However, we are not able to do this for the supersymmetric cKdV system,
because its spectral matrix cannot be described by a Lie superalgebra.
How to carry out nonlinearization for the supersymmetric cKdV system
thus remains an interesting open problem. It is also an interesting
problem to find explicit solutions of the super finite-dimensional
integrable systems. We shall consider these problems in the future.
{\bf Acknowledgments}
This work is supported by the Hangdian
Foundation under grants KYS075608072 and KYS075608077, by the NSF of
China under grant numbers 10971109 and 11001069,
and by the Program for NCET under Grant
No. NCET-08-0515. We thank the anonymous referees for their valuable
suggestions and pertinent criticisms.
\label{sec:intro}
\noindent Overparameterized deep neural networks (DNNs) are known to generalize well on the test data~\cite{arora2019fine,allen2019learning}. However, overparameterization increases the network size, making DNNs resource-hungry and leading to extended training and inference time. This hinders the training and deployment of DNNs on low-power devices and limits the application of DNNs in systems with strict latency requirements. Several efforts have been made to reduce the storage and computational complexity of DNNs using model compression \cite{han2015learning,yu2017compressing,verma2020network,singh2019hetconv,louizos2017bayesian,alvarez2017compression,luo2017thinet,chen2018constraint,singh2020leveraging}. Network pruning is the most popular approach for model compression. In network pruning, we compress a large neural network by pruning redundant parameters while maintaining the model performance. Pruning approaches can be divided into two categories: unstructured and structured. Unstructured pruning removes redundant connections in the kernel, leading to sparse tensors~\cite{lee2018snip,han2015deep,zhang2018systematic}. Unstructured sparsity produces sporadic connectivity in the neural architecture, causing irregular memory access~\cite{wen2016learning} that adversely impacts the acceleration achievable on hardware platforms. On the other hand, structured pruning involves pruning parameters that follow a high-level structure ($e.g.$, pruning parameters at the filter level~\cite{li2016pruning,luo2017thinet,ding2018auto}). Typically, structured pruning leads to practical acceleration, as the parameters are reduced while the memory access remains contiguous. Existing pruning methods typically involve a three-stage pipeline: pretraining, pruning and finetuning, where the latter two stages are carried out in multiple rounds until a desired pruning ratio is achieved.
While the final pruned model leads to a low inference cost, the cost to achieve the pruned architecture remains high.
The lottery ticket hypothesis (LTH) \cite{frankle2018lottery,frankle2019stabilizing} showed that a randomly initialized overparametrized neural network contains a sub-network, referred to as the ``winning ticket,'' that when trained in isolation achieves the same test accuracy as the original network. Similar to LTH, there is compelling evidence~\cite{neyshabur2018role,neyshabur2014search,du2018gradient,du2018power,allen2019convergence,allen2019learning,singh2019hetconv} suggesting that overparameterization is not essential for high test accuracy, but is helpful for finding a good initialization for the network \cite{li2018learning,zou2020gradient}. However, the procedure to find such sub-networks involves iterative pruning~\cite{frankle2018lottery}, making it computationally intensive. If we knew the sub-network beforehand, we could train a much smaller and more efficient model with only 1-10\% of the parameters of the original network, reducing the computational cost involved during training.
An open research question concerns how to design a sub-network without undergoing the expensive multi-stage process of training, pruning and finetuning. There have been recent attempts~\cite{lee2018snip,wang2020picking} to alleviate this issue, involving a one-time neural network pruning at initialization by solving an optimization problem for detecting and removing unimportant connections. Once the sub-network is identified, the model is trained without carrying out further pruning. This procedure of pruning only once is referred to as pruning at initialization or foresight pruning~\cite{wang2020picking}. While these methods can find an approximation to the winning ticket, they have the following limitations hindering their practical applicability: $(1)$ The initial optimization procedure still requires large memory, since the optimization process is carried out over the original overparameterized model. $(2)$ The obtained winning ticket is specific to a particular dataset on which it is approximated, $i.e.$, a network pruned using a particular dataset may not perform optimally on a different dataset. $(3)$ These pruning based methods lead to unstructured sparsity in the model. Due to common hardware limitations, it is very difficult to get a practical speedup from unstructured compression.
In this paper, we design a novel structured sparse convolution (SSC) filter for convolutional layers, requiring significantly fewer parameters compared to standard convolution. The proposed filter leverages the inherent spatial properties in the images. The commonly used deep convolutional architectures, when coupled with SSC, outperform other state-of-the-art methods that do pruning at initialization. Unlike typical pruning approaches, the proposed architecture is sparse by design and does not require multiple stages of pruning. The sparsity of the architecture is dataset agnostic and leads to better transfer ability of the model when compared to existing state-of-the-art methods that do pruning at initialization. We also show that the proposed filter has implicit orthogonality that ensures minimum filter redundancy at each layer. Additionally, we show that the proposed filter can be viewed as a generalization of existing efficient convolutional filters used in group-wise convolution (GWC)~\cite{xie2017aggregated}, point-wise convolution (PWC)~\cite{Szegedy2015deeper}, and depth-wise convolution (DWC)~\cite{Vanhoucke2014talk}. Extensive experiments and ablation studies on standard benchmarks demonstrate the efficacy of the proposed filter. Moreover, we further compress existing efficient models such as MobileNetv2~\cite{sandler2018mobilenetv2} and ShuffleNetv2~\cite{ma2018shufflenet} while achieving performance comparable to the original models.
\section{Methods}
\label{sec:proposed_approach}
\noindent We propose the Structured Sparse Convolution (SSC) filter, which is composed of layered, spatially sparse $K \times K$ and $1\times 1$ kernels. Unlike typical CNN filters that have a kernel of fixed size, the SSC filter has three types of kernels, as shown in Figure~\ref{fig:basic}. The heterogeneous kernels are designed to have varying receptive fields that can capture different features in the input. As shown in Section~\ref{sec:orthogonal}, heterogeneity in kernels allows the neural network layer to accumulate information from different spatial locations in the feature map while significantly reducing redundancy in the number of parameters.
\begin{figure}[!t]
\centering
\includegraphics[scale=1.2]{image/basis_filter.png}
\caption{The three basic components that are used in the proposed SSC filter. Blue blocks indicate zero-weight locations. The red, orange and green blocks show the active weight locations in the three different types of kernels.}
\label{fig:basic}
\end{figure}
Consider layer $l$ of a model with an input ($h_{l-1}$) of size $i_{l-1} \times i_{l-1} \times M$, where $i_{l-1}$ corresponds to the spatial dimension (width and height) and $M$ denotes the number of channels of the input. Assume that layer $l$ has $N$ filters, resulting in an output feature map $h_{l}$ of size $i_l \times i_l \times N$.
We represent the computational and memory cost at the $l^{th}$ layer using the number of floating-point operations ($F_l$) and the number of parameters ($P_l$), respectively. The computational and memory cost associated with a standard convolutional layer with a $K \times K$ kernel is the following:
\begin{align}
\small
F_l &= i_l^2 \times N \times (K^2M) , \label{eq:nf_std}\\
P_l &= N \times (K^2M ) , \label{eq:np_std}
\end{align}
where $(K^2M)$ represents the number of total parameters from all $M$ channel-specific kernels. As is evident from (\ref{eq:nf_std}) and (\ref{eq:np_std}), reducing $(K^2M)$ directly reduces both the number of parameters and the computational cost of the model. This is indeed what our proposed method (SSC) achieves -- we design two types of sparse kernels, which form the basic components of SSC.
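As a quick numerical check of (\ref{eq:nf_std}) and (\ref{eq:np_std}), the two costs can be computed directly; the function name and example sizes below are illustrative:

```python
# Cost of a standard convolutional layer: output map i_l x i_l x N,
# kernel K x K, M input channels (the F_l and P_l expressions above).
def conv_layer_cost(i_l, K, M, N):
    flops = i_l ** 2 * N * (K * K * M)   # one (K^2 M)-length dot product per output value
    params = N * (K * K * M)             # N filters, each with M K x K kernels
    return flops, params

# Example: 3x3 kernels, 16 -> 32 channels, 8x8 output map.
flops, params = conv_layer_cost(8, 3, 16, 32)
```

Reducing the $(K^2M)$ factor, as SSC does, scales both numbers down proportionally.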
\begin{figure}[!t]
\centering
\includegraphics[scale=0.4]{image/filter.png}
\caption{The proposed convolutional layer with $N$ SSC filters. Blue blocks denote the zero-weight locations in $3\times 3$ and $1\times1$ kernels, while other colors show active weights.}
\label{fig:filter}
\end{figure}
\par
\textbf{Odd/Even $\mathbf{K \times K}$ kernel}: The two types of $K \times K$ kernels differ in terms of the location of the enforced sparsity. Considering $S \in R^{K^2}$ to be the flattened version of the $K \times K$ 2D kernel, we define the odd kernel as:
\begin{equation}
\small
\begin{cases}
S[i] = 0 & i \in \{2p \enspace | \enspace 0 \leq 2p < K^2, \enspace p\in \mathbb{N}\cup\{0\} \} \\
S[i] = w_i & i \in \{2p+1 \enspace | \enspace 0 < 2p+1 < K^2, \enspace p\in \mathbb{N}\cup\{0\}\}
\end{cases} .
\end{equation}
The even kernel is defined in a similar fashion, where the kernel is zero at odd coordinates and non-zero at even coordinates of the filter. Figure~\ref{fig:basic} (a-b) illustrates the odd and even kernel, respectively, when $K=3$. These kernels replace the standard $K \times K$ kernels used in a convolutional layer.
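A minimal sketch of the two kernel masks (treating the flattened index $0$ as even, so the odd kernel zeroes it out, consistent with the per-kernel zero counts used in the cost formulas below; names are illustrative):

```python
# Binary active-weight mask for the odd/even K x K kernels.
# Convention (an assumption): flat index 0 counts as even, so the odd
# kernel keeps weights only at odd flat indices.
def parity_kernel_mask(K, kind):
    want_odd = (kind == "odd")
    return [[1 if (((r * K + c) % 2 == 1) == want_odd) else 0
             for c in range(K)]
            for r in range(K)]

odd_mask = parity_kernel_mask(3, "odd")    # 4 active weights, 5 zeros
even_mask = parity_kernel_mask(3, "even")  # 5 active weights, 4 zeros
```

The two masks are complementary, so an odd/even filter pair jointly covers every spatial position of the kernel.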
\par
\textbf{SSC Filter}: A convolutional layer having $N$ SSC filters is shown in Figure~\ref{fig:filter}. An SSC filter is referred to as an odd (or even) filter if it contains only odd (or even) kernels. For each convolutional layer, an equal number of odd and even filters is used. We make the following modifications to a standard convolution filter having $M$, $K\times K$ kernels:
\begin{enumerate}[leftmargin=4mm]
\setlength{\itemsep}{0.0em}
\item Among the $M$ different kernels, we replace each kernel at the $k*g$ location with an odd/even kernel, where $g$ is a hyperparameter such that $0 < g<M$ and $k \in \{n \in {\mathbb{N}} \enspace | \enspace 0 < ng \leq M\}$. Note that each filter has only one type (odd/even) of kernel. Each of the $N$ filters has $M/g$ such kernels. The computational cost ($F_{sg}$) and the memory cost ($P_{sg}$) for all the odd/even kernels in a filter are:
\begin{align}
F_{sg} &= i_l^2 \times N \times \frac{(K^2 - c)M}{g}, \label{eq:33filter} \\
P_{sg} &= N \times \frac{(K^2-c)M}{g}, \label{eq:33filterpara} \\
c &= \Bigg\{\begin{array}{ll}
\ceil{\frac{K^2}{2}} & \text{Odd kernel}\\
K^2 - \ceil{\frac{K^2}{2}} & \text{Even kernel}
\end{array} ,
\end{align}
where $c$ represents the number of zeros in the kernel and $\ceil{.}$ denotes the ceiling function.
\item Out of the remaining $M(1 - 1/g)$ kernel locations in the filter, we place a $1 \times 1$ kernel at a fixed interval of $p$ as shown in Figure~\ref{fig:basic} (c). Each of the $N$ filters has $M(1-1/g)/p$ $1 \times 1$ kernels. The computational and memory cost of these $1 \times 1$ kernels can be defined as:
\begin{align}
F_{sp} &= i_l^2 \times N \times \frac{M(1-1/g)}{p} \label{eq:11filter} , \\
P_{sp} &= N \times \frac{M(1-1/g)}{p} \label{eq:11filterpara} .
\end{align}
\item The SSC filter is empty at the remaining $M(1-1/p)(1-1/g)$ locations, causing the filter to ignore the corresponding feature maps (input channels). Note that while a particular filter may not act on certain input features, other SSC filters of the convolutional layer will. This is enforced by the shifting procedure introduced below.
\end{enumerate}
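The three placement rules above can be sketched as follows (0-indexing the channels and reading ``the $k*g$ locations'' as channels $0, g, 2g, \dots$ is our illustrative convention):

```python
# Channel layout inside a single SSC filter.
def ssc_filter_layout(M, g, p):
    kxk = [c for c in range(M) if c % g == 0]   # channels holding odd/even K x K kernels
    rest = [c for c in range(M) if c % g != 0]
    one = rest[::p]                             # every p-th remaining channel gets a 1x1 kernel
    empty = [c for c in rest if c not in one]   # the filter ignores these input channels
    return kxk, one, empty

# M = 8 channels, g = p = 2: 4 K x K kernels, 2 one-by-one kernels, 2 empty slots.
kxk, one, empty = ssc_filter_layout(8, 2, 2)
```

The three list lengths match the counts $M/g$, $M(1-1/g)/p$ and $M(1-1/p)(1-1/g)$ used in the cost expressions above.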
\textbf{Shift operation:} If we naively use SSC filters in a convolutional layer, there will be a loss of information, as all $N$ filters will ignore the same input feature maps. To ensure that each SSC filter attends to a different set of feature maps, we shift the location of all kernels ($K \times K$ and $1\times1$) by $(n\text{ mod }q)$ at initialization\footnote{The shift operation is applied only once before training begins.}, where $n \in \{1,..., N\}$ denotes the index of the filter and $q := \max(g,p)$. The shift operation across $N$ filters can be visualized in Figure~\ref{fig:filter}. We can divide the $N$ filters into sets of disjoint filters such that all the filters in a particular set attend to distinct input feature maps. Formally, let the collection of sets be defined as:
\begin{align}
\mathcal{Q} := \{ (0, q), [q, 2q), \dots, [N - (N\text{ mod }q), N)\} , \label{eq:Q-set}
\end{align}
where $[a,a+q)$ denotes the set of filters $a$ through $a+q-1$. Then $\forall f, f' \in [a,a+q)$, $f$ and $f'$ attend to disjoint input feature maps if $f \neq f'$. Moreover, $f$ and $f'$ are ``near-orthogonal'' ($f^Tf' \approx 0$), since they attend to non-overlapping regions of the input feature maps. As discussed in Section~\ref{sec:orthogonal}, the orthogonality property of a layer is of independent interest and allows the network to learn uncorrelated filters. Note that the design of the SSC filter induces structural sparsity, as the sparse region is predetermined and fixed, in contrast to the unstructured pruning methods \cite{frankle2018lottery, lee2018snip, wang2020picking}.
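A small sketch of the shift (again using our illustrative 0-indexed placement of the $K\times K$ kernels at multiples of $g$): rotating the pattern of filter $n$ by $n \bmod q$ channels makes the $K\times K$ kernels of the filters in one group land on distinct residue classes when $q=g$:

```python
# K x K kernel channels of filter n after the one-time shift by (n mod q).
def shifted_kxk_channels(M, g, n, q):
    return {(c + n % q) % M for c in range(M) if c % g == 0}

# With g = p = 2 (so q = 2), the two filters of a group place their
# K x K kernels on disjoint channel sets.
s0 = shifted_kxk_channels(8, 2, 0, 2)
s1 = shifted_kxk_channels(8, 2, 1, 2)
```

Together the group covers all input channels with $K\times K$ kernels, which is what prevents the loss of information described above.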
We can quantify the total reduction in the number of floating-point operations ($R_F$) and the number of parameters ($R_p$) with respect to the standard convolutional layer:
\begin{align}
R_F =& \left(1-\frac{F_{sg}+F_{sp}}{F_l}\right)\times100\% \\
=& \left(1-\frac{(1-c/K^2)}{g}-\frac{(1-1/g)}{K^2p}\right)\times100\% ,
\label{eq:flopreduction}\\
R_p =& \left(1-\frac{P_{sg}+P_{sp}}{P_l}\right)\times100\%\\
= & \left(1-\frac{(1-c/K^2)}{g}-\frac{(1-1/g)}{K^2p}\right)\times100\% ,
\label{eq:parareduction}
\end{align}
where $0 < g, p < M$. The hyperparameters $p$ and $g$ are set to achieve the desired sparsity in the architectures; we use $R_p$ as the guiding principle behind choosing $p$ and $g$. One can also target a desired reduction in floating-point operations ($R_F$) to determine the corresponding hyperparameters. However, in our experiments we consider sparsity constraints only.
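For concreteness, (\ref{eq:flopreduction})-(\ref{eq:parareduction}) can be evaluated directly; for example, a $3\times3$ layer built from odd kernels with $g=p=2$ drops $75\%$ of the parameters and floating-point operations:

```python
from math import ceil

# R_F = R_p from the reduction formulas above; c is the zero count per kernel.
def reduction_percent(K, g, p, kind="odd"):
    c = ceil(K * K / 2) if kind == "odd" else K * K - ceil(K * K / 2)
    return 100.0 * (1 - (1 - c / (K * K)) / g - (1 - 1 / g) / (K * K * p))
```
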
\subsection{Implicit Orthogonality}
\noindent
\label{sec:orthogonal}
Recent work~\cite{shang2016understanding, xie2017all} shows that deep convolutional networks learn correlated filters in overparameterized regimes. This implies filter redundancy and correlated feature maps when working with deep architectures. The issue of correlation across multiple filters in the convolutional layer has been addressed by incorporating an explicit orthogonality constraint to the filter of each layer~\cite{bansal2018can,wang2020orthogonal}. Consider a 2D matrix $W \in \mathbb{R}^{J \times N}$ containing all the filters: $W=[f_1,f_2,\dots, f_N]$, where $f_n\in \mathbb{R}^{J}$ is the vector containing all the parameters in the $n^{th}$ filter, and $J = K^2M$ for a standard convolutional layer. The soft-orthogonality (SO) constraint on a layer $l$ with the corresponding 2D matrix $W_l$ is defined as:
\begin{equation}
L_{SO}=\lambda||W_l^TW_l-I||_F^2 , \label{eq:softorth}
\end{equation}
\noindent where $I\in \mathbb{R}^{N\times N}$ is the identity matrix, $\lambda$ controls the degree of orthogonality and $||.||_F^2$ is the Frobenius norm. However, note that the columns of $W$ can only be mutually orthogonal if $W_l$ is an undercomplete or a square matrix ($J \geq N$), which may not be the case in practice. For overcomplete settings ($J<N$), $W_l^TW_l$ can be far from the identity, since the rank of $W_l$ will be upper-bounded by $J$. This makes the optimization in (\ref{eq:softorth}) a biased objective. To alleviate this issue, double soft orthogonality (DSO) regularization has been adopted~\cite{bansal2018can}, which covers both overcomplete and undercomplete cases:
\begin{equation}
L_{DSO}=\lambda\left(||W_l^TW_l-I||_F^2+||W_lW_l^T-I||_F^2\right) .
\label{eq:dualsoftorth}
\end{equation}
Both the constraints $L_{SO}$ and $L_{DSO}$ are commonly used regularization techniques to encourage filter diversity. While the above regularization methods provide a reasonable result by decreasing the correlation across filters, they have several limitations: ($1$) The objectives in (\ref{eq:softorth}-\ref{eq:dualsoftorth}) are computationally expensive and have to be computed over all the layers of the convolutional network. ($2$) The above objectives do not enforce uncorrelated filters but only encourage filter diversity, making such regularization dependent on the dataset complexity. For example, a dataset with fewer training examples may lead to a large number of redundant filters compared to a complex dataset having many training examples.
In contrast, the SSC filters induce group-wise ``near-orthogonality''. In particular, for each set $[a, a+q)$ as defined in (\ref{eq:Q-set}), the $q$ filters are implicitly pairwise orthogonal as they operate on non-overlapping regions of the input feature maps. Specifically, the odd/even kernels combined with the shift operation lead to implicit orthogonality, which can be readily visualized in Figure~\ref{fig:filter}. The only source of potential redundancy in SSC filters could arise from the $1\times1$ kernels, whose receptive field may overlap with the odd/even kernels. However, as we show in Section~\ref{sec:parameter_redundancy}, the average pairwise correlation is significantly lower than the correlation in a standard convolutional layer. Moreover, a key advantage of SSC filters is that we do not require the expensive explicit layer-wise regularization objectives (\ref{eq:softorth}) and (\ref{eq:dualsoftorth}). The reduction in the correlation among filters is a byproduct of the sparsity enforced in the SSC filters, which is complemented by low computational and memory costs as shown in (\ref{eq:flopreduction}) and (\ref{eq:parareduction}). In the experiments, we compare the correlation obtained with the proposed filter to the correlation obtained in a standard model trained with the SO and DSO constraints.
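The disjoint-support argument can be checked in two lines: vectors whose nonzero entries occupy non-overlapping positions have a zero inner product, hence are exactly orthogonal, whatever values they learn (a toy one-dimensional analogue of the odd/even layout, not the actual SSC implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy analogue of odd/even kernels: same length, disjoint supports.
f_odd = np.zeros(9)
f_even = np.zeros(9)
f_odd[1::2] = rng.normal(size=4)    # nonzeros only at odd positions
f_even[0::2] = rng.normal(size=5)   # nonzeros only at even positions

inner = float(np.dot(f_odd, f_even))  # disjoint supports force this to zero
```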
\subsection{Connection to other efficient filters}
\noindent
\label{sec:connections}
The existing efficient filters used for groupwise convolution (GWC), depthwise convolution (DWC) and pointwise convolution (PWC) can be seen as special cases of the proposed SSC filters. GWC, DWC and PWC have proven to be essential components of state-of-the-art architectures designed for low-end computational devices, such as mobile devices~\cite{sandler2018mobilenetv2, ma2018shufflenet}.
For convenience, we use the hyperparameter setting $g=0$ to denote the case where no odd/even kernel is included in the convolutional layer, i.e., there is no sparsity within the kernel. Similarly, $p=0$ denotes the setting where no $1\times1$ kernel is included. If we set $p=0$, the SSC filters can be used for GWC, where each filter is of size $K \times K \times M/g$ and the kernels operate on a group of feature maps separated by an interval of $g$. The DWC operation can be achieved by using an SSC filter with $p=0$ and $g=M$, where each filter acts on a single input feature map. The PWC operation can be achieved with the SSC filter by setting $g=0$ and $p=1$. While the SSC operation provides a generic framework that can be reduced to GWC, DWC and PWC, the SSC filters are inherently sparse. One can obtain the standard (non-sparse) version of these efficient filters by using standard $K \times K$ kernels instead of the odd/even kernels defined in Section~\ref{sec:proposed_approach}.
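A per-filter parameter count for these (non-sparse) special cases can be sketched as follows, assuming $g$ divides $M$ and interpreting $p$ as the channel interval of the $1\times1$ kernel, so that $p=1$ recovers standard PWC (the function name and the exact counting convention are illustrative, not the paper's implementation):

```python
def filter_params(K, M, g, p):
    """Parameters per filter for the dense special cases of SSC.

    g = 0: no K x K kernel; g >= 1: the K x K kernel spans M // g input maps.
    p = 0: no 1 x 1 kernel; p >= 1: the 1 x 1 kernel uses M // p parameters.
    """
    kxk = 0 if g == 0 else K * K * (M // g)
    one = 0 if p == 0 else M // p
    return kxk + one

standard = filter_params(3, 64, g=1, p=0)   # dense 3x3 filter over 64 maps
gwc      = filter_params(3, 64, g=4, p=0)   # group-wise convolution
dwc      = filter_params(3, 64, g=64, p=0)  # depth-wise convolution
pwc      = filter_params(3, 64, g=0, p=1)   # point-wise convolution
```

With $p=2$ the $1\times1$ part drops to $M/2$ parameters, consistent with the 50\% PWC reduction used in Section~\ref{sec:compress_efficient}.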
Current state-of-the-art efficient architectures MobileNetV2~\cite{sandler2018mobilenetv2} and ShuffleNetV2~\cite{ma2018shufflenet} reduce the number of parameters by using a two-layer sequential step: a DWC followed by a PWC operation. While these operations are less computationally expensive than the standard convolutional layer, the latency of the model increases due to the two-layer sequential step. In contrast, the proposed SSC filter can achieve the two operations (GWC/DWC followed by PWC) in a single layer without increasing the number of parameters. We can perform a composition of GWC and PWC by using $p=1$, $g > 0$. Similarly, a composition of DWC and PWC can be obtained using $p=1$ and $g=M$. In Table~\ref{table:efficient_arch} we show the results of the above compositions.
\section{Related Work}
\noindent The area of model compression has seen immense progress in recent years. Below, we highlight some recently proposed techniques for learning sparse neural networks. We divide the model compression and pruning techniques into three categories based on the computational cost involved.
\textbf{Pruning After Training} (PAT)~\cite{srinivas2015data,han2015deep,han2015learning,frankle2018lottery,frankle2019lottery,wen2016learning, yoon2017combined,singh2019play,singh2020leveraging} is the most widely used pruning method. \cite{han2015learning} proposed pruning the weight parameters using iterative thresholding. Knowledge distillation based methods~\cite{sanh2019distilbert,jiao2019tinybert,chen2020distilling} attempt to train a compressed student model that mimics the behaviour of the full-sized original model. \cite{jaderberg2014} and \cite{denton2014exploiting} proposed low-rank approximation of the weight tensors to reduce the memory and time complexity at training and testing time. These approaches tend to accumulate errors in the prediction when multiple layers are compressed.
PAT-based pruning methods are typically costly and time-consuming, since they require pretraining of the original overparameterized model.
\textbf{Dynamic Pruning} (DP)~\cite{lin2020dynamic,wang2020dynamic,mostafa2019parameter,mocanu2018scalable} involves pruning and training the model simultaneously; as the training progresses the model size decreases. Soft Filter Pruning (SFP)~\cite{he2018soft} prunes the filter after each epoch, but updates the pruned filters when training the model. Deep Rewiring (DeepR)~\cite{bellec2018deep} and Dynamic Sparse~\cite{mostafa2019parameter} prune and re-grow the architecture periodically, but are more computationally expensive when pruning large networks. Sparse momentum (SM)~\cite{dettmers2019sparse} also follows the prune and re-grow approach, however, it uses smoothed gradients to accelerate the training. These approaches can be trained more efficiently compared to PAT-based methods, since a model trained with DP shrinks with training.
\textbf{Pruning Before Training} (PBT) is the most challenging and yet the most practically useful setting among the three categories. Despite its importance, there have been only a few attempts that explore this setting. These approaches find a small sub-network before the training begins and thus require less computational resources and training time. While there have been some attempts to prune the deep neural network before training~\cite{lee2018snip, Lee2020A, wang2020picking,wimmer2022interspace,singh2019hetconv}, they still require multiple forward/backward passes through the full model to detect unimportant connections in the network. Single-Shot Network Pruning (SNIP)~\cite{lee2018snip} tries to identify a sparse sub-network by solving a minimization problem that preserves the loss of the pruned network at initialization. \cite{Lee2020A} studied the pruning problem from a signal propagation perspective and proposed using an orthogonal initialization to ensure faithful signal propagation. Recently, \cite{wang2020picking} proposed Gradient Signal Preservation (GraSP) to prune the network at initialization by preserving gradient flow. Interspace Pruning (IP)~\cite{wimmer2022interspace} was recently proposed to overcome the bias introduced in PBT methods, which as a result improves the generalization of existing unstructured pruning methods (including SNIP and GraSP). Although these methods are designed to find a sub-network at initialization, they still require optimizing the original overparameterized model on the training dataset, which can be expensive for low-end devices. Moreover, the sub-networks found are specific to a particular dataset, hindering knowledge transfer across multiple tasks. Synaptic Flow (SynFlow)~\cite{tanaka2020pruning} prunes weights using an information throughput criterion to find a sparse network. Our proposed method also falls into the PBT category, where the sparse network is identified at initialization.
In contrast to previous methods, the proposed method is sparse by design and does not require solving an optimization problem to find a task-specific sub-network.
\section{Experiments}
\input{cifar100_results}
\label{sec:experiments}
\noindent We demonstrate the efficacy and efficiency of our proposed filter with an extensive set of experiments. First, we evaluate the performance of commonly used deep convolutional networks (ResNet-32/50 and VGG-19~\cite{He2016, simonyan2014very}) with SSC filters on four classification benchmarks: CIFAR-10/100~\cite{krizhevsky2009learning}, Tiny-ImageNet~\cite{wu2017tiny} and ImageNet~\cite{deng2009imagenet}. We show that the proposed SSC filters achieve state-of-the-art accuracy in most settings. We also apply SSC filters to existing state-of-the-art ``efficient'' architectures: MobileNetV2~\cite{sandler2018mobilenetv2} and ShuffleNetV2~\cite{ma2018shufflenet}. In Section~\ref{sec:compress_efficient}, we show that these architectures can be further compressed by 47-48\% with SSC, while achieving high accuracy on the CIFAR-10 benchmark.
\par
Next, we conduct experiments to analyze the properties of SSC filters. Specifically, we find that SSC leads to significantly lower layer-wise filter correlation, implying fewer redundant filters compared to alternative methods that use explicit regularizers. We also test the robustness of SSC to overfitting when trained with limited training data. Finally, we evaluate the ability of SSC filters in transfer learning.
\input{tinyImageNet_results}
For all the experiments, we follow the training setup (optimizer, learning rate, training epochs) used in \cite{wang2020picking}. We include all hyperparameters in the supplementary material. In addition to the aforementioned experiments, we carry out extensive ablations on choosing the hyperparameters $g$ and $p$. In general, $p$ and $g$ are set to achieve the desired sparsity in the architectures; we use $R_p$ (as defined in (\ref{eq:parareduction})) as our guiding principle for choosing $p$ and $g$. For brevity, we defer the ablation studies and further discussion to the supplementary.\\
\noindent \textbf{Baselines:} We include a number of baselines to compare the performance of the proposed approach. The baselines include methods that prune the architecture after complete training, such as OBD~\cite{hassibi1993optimal}, MLPrune~\cite{zeng2019mlprune} and LT~\cite{frankle2018lottery}. We also consider methods that perform dynamic pruning: DSR~\cite{mostafa2019parameter}, SET~\cite{mocanu2018scalable} and Deep-R~\cite{bellec2018deep}. From the class of methods that prune before training (PBT), we consider SNIP~\cite{lee2018snip}, GraSP~\cite{wang2020picking}, and SynFlow~\cite{tanaka2020pruning}. All the above baselines, except the PBT methods, have the added advantage of training a large overparameterized network, in contrast to our proposed method, which trains a highly-sparse architecture from initialization. While we report the performance of all the above baselines, a fair comparison of our method can only be made with SynFlow, SNIP and GraSP. We also consider the improvements introduced with IP~\cite{wimmer2022interspace} for PBT methods.
\subsection{Performance on Classification benchmarks}
\noindent We evaluate the proposed method on the CIFAR-10 and CIFAR-100 classification benchmarks by training commonly used architectures (VGG-19 and ResNet-32) with the standard convolution filters replaced by the proposed SSC filters. We report the test accuracy using VGG-19 and ResNet-32 under three pruning ratios, namely 90\%, 95\% and 98\%, in Table~\ref{table:classification_cifar}. We observe that SSC performs better than SNIP and GraSP in 8 of the 12 settings considered, especially with ResNet-32 (outperforming in 5 of 6 settings). Note that even in the settings where SSC is inferior to SNIP or GraSP, its performance is highly competitive with the best method. Moreover, in the extreme setting of 98\% sparsity on the CIFAR-100 dataset, SSC outperforms the next best method using ResNet-32 with a 7.8\% relative improvement.
Tiny-ImageNet is a medium-scale dataset containing images of 200 classes from ImageNet. Again, we choose VGG-19 and ResNet-32 as the base architectures with varying pruning ratios. The results are reported in Table~\ref{table:classification_tiny_imagenet}. Once again, we observe that our approach outperforms the baseline methods SNIP and GraSP in 5 of the 6 settings. In the extreme setting of 95\% sparsity with ResNet-32, the proposed method shows a relative improvement of 4.6\% (absolute improvement of 2.36\%) over GraSP.
We also conduct a large-scale experiment on the ImageNet dataset using the ResNet-50 architecture. Results are shown in Table~\ref{table:ResNet-50_imagenet_classification}. The model is trained under two pruning ratios: 60\% and 80\%. We report the top-1 and top-5 accuracy using SNIP, GraSP and our proposed method. Despite the added advantage of SNIP and GraSP, which use memory-intensive ``foresight pruning'' on the large ResNet-50 architecture, our proposed SSC performs comparably. For the 60\% pruning ratio, the top-1 accuracy of SSC lags behind by only 0.26\% and the top-5 by 0.14\% in absolute difference. This makes SSC-based deep convolutional models appealing for devices that cannot train large models, as SSC does not require iterative pruning on a large model. We also conducted the ImageNet experiment on the ResNet-18 architecture; the results are shown in Table~\ref{table:ResNet-18_imagenet_classification}. In spite of the additional overhead used in the baseline methods, we find SSC to be better than or comparable to the baselines.
\input{imageNet_results}
\subsection{Compression of Efficient Architectures}
\input{compression_results}
\label{sec:compress_efficient}
\noindent We apply the proposed SSC filter to the MobileNetV2~\cite{sandler2018mobilenetv2} and ShuffleNetV2~\cite{ma2018shufflenet} models, which are state-of-the-art architectures for low-end devices. As discussed in Section~\ref{sec:connections}, we replace the DWC and PWC operations with the proposed SSC operation, allowing us to further compress these efficient architectures~\cite{sandler2018mobilenetv2, ma2018shufflenet}. We set the hyperparameters as follows: $(1)$ for all PWC filters, we fix $p=2$ and $g=0$, which reduces the total number of PWC parameters in the standard architecture by 50\%; $(2)$ for the DWC filters, we set $p=0$ and $g=M$. We do not use the odd/even kernel in the SSC filter as there is only a single kernel in DWC filters. We report the results in Table~\ref{table:efficient_arch}. We consider $L_1P$~\cite{li2016pruning}, Slimming~\cite{liu2017learning} and AutoSlim~\cite{yu2019autoslim} as the baseline models. Unlike the proposed method, all three baselines require a pretrained model for pruning. Despite this advantage over SSC, the SSC filter shows significant improvement over the baselines for the MobileNetV2 and ShuffleNetV2 architectures.
\begin{figure*}[!th]
\centering
\includegraphics[width=\textwidth]{image/corrdual.png}
\caption{Layer-wise filter correlation (Y-axis) across the layers (X-axis) of ResNet-32 for the proposed SSC (10\% parameters) and baselines.}
\label{fig:corr}
\end{figure*}
\subsection{Parameter Redundancy}
\label{sec:parameter_redundancy}
\noindent In Section~\ref{sec:orthogonal}, we discussed the implicit orthogonal nature of the SSC filters. We compare the average absolute correlation at each layer of the ResNet-32 architecture trained with and without the SSC filters. The pairwise correlation ($\rho$) between the $i^{th}$ and $j^{th}$ filter is calculated as $\rho(f_i,f_j)=\mathbb{E}[(f_i-\mu_i)(f_j-\mu_j)]/(\sigma_i\sigma_j)$, where $f_i \in \mathbb{R}^{K^2M}$ represents the 1D vector of all the parameters in the $i^{th}$ filter, $\mu_i$ is the mean of the entries of $f_i$ and $\sigma_i=\left(\frac{1}{K^2M}\sum_{k=1}^{K^2M}(f_{i,k}-\mu_i)^2\right)^{1/2}$ is their standard deviation. We define the average absolute correlation at layer ${\ell}$ as $C^{\ell}=\frac{1}{N^2}\sum_{i=1}^N\sum_{j=1}^N |\rho(f^{\ell}_i,f^{\ell}_j)|$. In Figure~\ref{fig:corr}, we report the correlation measure $C^{\ell}$ at each layer of the ResNet-32 architecture trained on the CIFAR-100 benchmark. We also compare with the correlation measures obtained when a standard ResNet-32 model is trained with the explicit orthogonality constraints described in (\ref{eq:softorth}) and (\ref{eq:dualsoftorth}).
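The measure $C^{\ell}$ is straightforward to compute; the following NumPy sketch takes the matrix of flattened filters of one layer, one filter per row (names are illustrative):

```python
import numpy as np

def avg_abs_correlation(filters, eps=1e-12):
    """Average absolute pairwise Pearson correlation over all filter pairs.

    filters: (N, D) array, one flattened filter per row.
    """
    F = filters - filters.mean(axis=1, keepdims=True)        # center each filter
    F = F / (np.linalg.norm(F, axis=1, keepdims=True) + eps)  # normalize rows
    C = np.abs(F @ F.T)                                       # |rho(f_i, f_j)|
    return float(C.mean())                                    # average over all i, j
```

Note that the diagonal terms $\rho(f_i,f_i)=1$ are included, as in the double sum defining $C^{\ell}$.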
As seen in Figure~\ref{fig:corr}, the correlation measure of the standard convolutional filters is higher than that of the other methods, implying filter redundancy in the standard ResNet-32 architecture. We observe that for 31 of the total 33 layers, the SSC filters have a lower correlation measure than the standard filters. In comparison to the methods with explicit regularization constraints, the SSC filters have a lower correlation measure for 21 layers. This shows that the SSC-based ResNet architecture can utilize parameters more effectively by learning diverse filters compared to the baselines. Moreover, the SSC filters circumvent the under/over-complete issues (Section~\ref{sec:orthogonal}), while avoiding the expensive computations (\ref{eq:softorth}-\ref{eq:dualsoftorth}).
\vspace{-0.7em}
\subsection{Robustness to overfitting}
\noindent An overparameterized deep learning model is known to overfit in a low-data regime. As shown empirically in Section~\ref{sec:parameter_redundancy}, the proposed SSC filter can learn diverse filters. In this section, we analyze the robustness of the SSC filters when training data is scarce. To test whether the SSC filters are robust to overfitting, we train the ResNet-32 architecture on partial training data. We consider three scenarios that involve learning the model using only 20\%, 40\% and 50\% of the images in the training set. The performance in the three scenarios is reported in Figure~\ref{fig:overfitting} (Left). The hyperparameters $g$ and $p$ are chosen such that the neural network has only 10\% of the total parameters of a standard ResNet-32 architecture. As expected, we observe that the model with the SSC filters is more robust to overfitting than the model with the standard convolutional filters. In our experiments, we also observed that the difference in performance becomes more significant as the amount of training data decreases.
\begin{figure}
\centering
\includegraphics[height=3cm,width=3.8cm]{image/overfitting.png}\quad
\includegraphics[height=3cm,width=3.8cm]{image/data_agnostic.png}
\vspace{0.5em}
\caption{\textbf{Left:} Accuracy with ResNet-32 on the CIFAR-10 dataset when only 20\%, 40\% and 50\% samples are used. \textbf{Right:} Accuracy of sparse networks when transferred to a different task.}
\label{fig:overfitting}
\end{figure}
\subsection{Transfer Learning}
\label{sec:data_agnostic_model}
\noindent In this section, we study the generalization of the sparse architectures across two different tasks. Recall from Section~\ref{sec:proposed_approach} that the pruning in SSC is governed by the choice of the hyperparameters $g$ and $p$; thus, the sparsity obtained in SSC is independent of the training data. In contrast, for the baseline methods the sparse network is found by iterative procedures performed on the training data. We compare the transfer ability of the SSC architecture with the best performing baseline (i.e., GraSP).
To study the transfer ability across the two tasks, we first train GraSP on the CIFAR-100 benchmark to prune the original architecture and then transfer (fine-tune) the obtained sparse network to the Tiny-ImageNet dataset. Similarly, for the proposed SSC-based architecture, we first find the sparse network by tuning the hyperparameters $g$ and $p$ on the CIFAR-100 benchmark, and then use the obtained sparse network for training on the Tiny-ImageNet dataset. To keep the complexity of the Tiny-ImageNet task the same as that of CIFAR-100, we only use 100 classes from the Tiny-ImageNet dataset. The performance of the two methods (GraSP and ours) under pruning ratios of $90\%$ and $95\%$ is shown in Figure~\ref{fig:overfitting} (right). We observe that the SSC-based architecture performs better than GraSP, indicating better transfer ability.
\section{Conclusions}
\noindent We have proposed structured sparse convolutions (SSC) for deep convolutional neural networks. SSC is based on efficient filters composed of novel odd/even kernels and $1 \times 1$ kernels. The proposed kernels leverage the spatial dependencies in the input features to reduce the floating-point operations and the number of parameters in the deep neural network from initialization. Through a series of experiments, we demonstrate the efficacy of the proposed SSC when applied to commonly used deep convolutional networks. A key attribute of SSC filters is that, unlike existing approaches, SSC requires no additional pruning during or after training. We also show that the SSC filters generalize other efficient filters (GWC, DWC, and PWC) and demonstrate the applicability of SSC filters to existing efficient architectures like MobileNet and ShuffleNet.
While this work has demonstrated potential in designing sparse filters without additional pruning steps, our proposed filter currently offers limited practical speedup, as existing deep learning libraries do not support efficient operations on structured tensors. An efficient CUDA implementation of such structured-sparsity operations could address this issue. We believe that the proposed method could be highly beneficial to the broader community in training powerful deep systems with only a fraction of the computational resources. Nonetheless, much remains to be understood about identifying good sparse models that generalize. We hope that this work will motivate the research community to identify efficient filters that naturally lead to highly-sparse and effective deep learning models.
{\small
\bibliographystyle{ieee_fullname}
\label{introduction}
Any (boundary continuous) hyperbolic space induces on the boundary at infinity
a M\"obius structure which reflects most essential asymptotic properties of the space.
A {\em M\"obius structure}
$M$
on a set
$X$
is a class of M\"obius equivalent semi-metrics on
$X$,
where two semi-metrics are equivalent if and only if they have
the same cross-ratios on every 4-tuple of points in
$X$.
In other words, a M\"obius structure is given by cross-ratios.
The {\em inverse problem} of M\"obius geometry asks to describe M\"obius structures which are
induced by hyperbolic spaces. In this paper, we give a solution to the inverse problem
in a simplest case when the space
$X$
is the circle,
$X=S^1$.
The paper is a continuation of \cite{Bu18}, \cite{Bu19}, where the inverse problem is formulated,
and important steps toward its solution are taken.
Various hyperbolic cone constructions (see \cite{BoS}, \cite{BS07}) give a hyperbolic metric space with
prescribed metric at infinity. However, no one of them is equivariant with respect to M\"obius
transformations of the metric. Thus one can consider the inverse problem as the existence problem
of an equivariant hyperbolic cone over a given metric.
We introduce a set of axioms describing M\"obius structures on the circle, which are
induced by hyperbolic spaces. We always consider {\em ptolemaic} M\"obius structures,
that is, for which every semi-metric with infinitely remote point is a metric. Our
{\em monotonicity} axiom is somewhat stronger than that in \cite{Bu18}.
Thus a M\"obius structure, which satisfies it, is called {\em strictly monotone}.
As in \cite{Bu18}, we also use a key {\em Increment} axiom. For the definition and details see
sect.~\ref{subsect:increment_axiom}.
The main result of the paper is the following
\begin{thm}\label{thm:main} Given a strictly monotone M\"obius structure
$M$
on the circle satisfying Increment axiom, there is a complete,
proper and geodesic hyperbolic metric space
$Y$
with boundary at infinity
$\d_{\infty} Y=S^1$,
for which the induced M\"obius structure
$M_Y$
on
$\d_{\infty} Y$
is isomorphic to
$M$, $M_Y=M$.
\end{thm}
\begin{rem}\label{rem:fine_topology} The class
$\mathcal{I}$
of strictly monotone M\"obius structures on the circle which satisfy Increment
axiom contains a neighborhood, open in a fine topology, of the canonical
M\"obius structure
$M_0$,
see sect.~\ref{subsect:increment_axiom}.
\end{rem}
{\em Structure of the paper}. In section~\ref{sect:moebius_structures}, we give
a brief introduction to M\"obius structures, formulate basic axioms, including
Increment axiom, and discuss a fine topology on the set
$\mathcal{M}$
M\"obius structures satisfying our axioms.
In section~\ref{sect:lines_zzpath} we recall the notions of lines and zz-paths
associated with a given M\"obius structure
$M\in\mathcal{M}$.
After a brief discussion in sect.~\ref{sect:metric}
of the metric on the set
$\operatorname{Harm}$
of harmonic 4-tuples, we consider in
sect.~\ref{sect:involutions_without_fixed_points} an important notion of involutions
without fixed points and the associated notion of elliptic quasi-lines. Given
$\omega\in X=S^1$,
we consider here the set
$\operatorname{Harm}_\omega\subset\operatorname{Harm}$
of harmonic 4-tuples containing
$\omega$.
Such sets play an important role in the proof of the main theorem.
A key technical part of the paper is section~\ref{sect:diameter_quasi_lines},
where we give a universal upper bound for the diameter of elliptic
quasi-lines. Such an estimate allows us to reduce the study of the geometry on
the space
$\operatorname{Harm}$
to the study of its much simpler subspaces
$\operatorname{Harm}_\omega$.
In section~\ref{sect:hyperbolic_approximations}, we discuss properties of
a hyperbolic cone construction over
$X_\omega$
called the hyperbolic approximation
$Z$
of
$X_\omega$.
We show here that
$Z$
is a hyperbolic geodesic metric space. This section is based on the book \cite[Chapter~6]{BS07}.
Finally, in sect.~\ref{sect:Xquasi-isometricZ} we show that the spaces
$\operatorname{Harm}_\omega$
and
$Z$
are quasi-isometric. As a corollary, we obtain that the required filling
$Y=\operatorname{Harm}$
of a given M\"obius structure
$M\in\mathcal{M}$
on the circle is hyperbolic. The proof essentially uses Increment axiom and results
of \cite{Bu18}.
\section{M\"obius structures}
\label{sect:moebius_structures}
\subsection{Basic notions}
\label{subsect:basics}
Let
$X$
be a set. A 4-tuple
$q=(x,y,z,u)\in X^4$
is said to be {\em admissible} if no entry occurs three or
four times in
$q$.
A 4-tuple
$q$
is {\em nondegenerate}, if all its entries are pairwise
distinct. Let
$\mathcal{P}_4=\mathcal{P}_4(X)$
be the set of all ordered admissible 4-tuples of
$X$, $\operatorname{reg}\mathcal{P}_4\subset\mathcal{P}_4$
the set of nondegenerate 4-tuples.
A function
$d:X^2\to\widehat\mathbb{R}=\mathbb{R}\cup\{\infty\}$
is said to be a {\em semi-metric}, if it is symmetric,
$d(x,y)=d(y,x)$
for each
$x$, $y\in X$,
positive outside the diagonal, vanishes on the diagonal
and there is at most one infinitely remote point
$\omega\in X$
for
$d$,
i.e. such that
$d(x,\omega)=\infty$
for some
$x\in X\setminus\{\omega\}$.
Moreover, we require that if
$\omega\in X$
is such a point, then
$d(x,\omega)=\infty$
for all
$x\in X$, $x\neq\omega$.
A metric is a semi-metric that satisfies the triangle inequality.
A {\em M\"obius structure}
$M$
on
$X$
is a class of M\"obius equivalent semi-metrics on
$X$,
where two semi-metrics are equivalent if and only if they have
the same cross-ratios on every
$q\in\operatorname{reg}\mathcal{P}_4$.
Given
$\omega\in X$,
there is a semi-metric
$d_\omega\in M$
with infinitely remote point
$\omega$.
It can be obtained from any semi-metric
$d\in M$
for which
$\omega$
is not infinitely remote by a {\em metric inversion},
\begin{equation}\label{eq:metric_inversion}
d_\omega(x,y)=\frac{d(x,y)}{d(x,\omega)d(y,\omega)}.
\end{equation}
Such a semi-metric is unique up to a homothety, see \cite{FS13},
and we use notation
$|xy|_\omega=d_\omega(x,y)$
for the distance between
$x$, $y\in X$
in that semi-metric. We also use notation
$X_\omega=X\setminus\{\omega\}$.
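The M\"obius invariance behind (\ref{eq:metric_inversion}) is easy to verify numerically: the inversion changes distances but not cross-ratios, so $d$ and $d_\omega$ define the same M\"obius structure. A small computational sketch, for the standard metric on $\mathbb{R}$ (all weight factors $d(\cdot,\omega)$ cancel in every cross-ratio):

```python
import itertools

def cross_ratio(d, x, y, z, u):
    """crt(x, y, z, u) = d(x,z) d(y,u) / (d(x,u) d(y,z))."""
    return (d(x, z) * d(y, u)) / (d(x, u) * d(y, z))

def inverted(d, w):
    """Metric inversion of d with infinitely remote point w."""
    return lambda x, y: d(x, y) / (d(x, w) * d(y, w))

d = lambda x, y: abs(x - y)   # standard metric on the real line
dw = inverted(d, w=-2.0)      # semi-metric with -2 as the infinitely remote point
```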
Every M\"obius structure
$M$
on
$X$
determines the
$M$-{\em topology}
whose subbase is given by all open balls centered at finite points
of all semi-metrics from
$M$
having infinitely remote points.
\begin{exa}\label{exa:canonical_moebius_circle} Our basic example is the
{\em canonical} M\"obius structure
$M_0$
on the circle
$X=S^1$.
We think of
$S^1$
as the unit circle in the plane,
$S^1=\set{(x,y)\in\mathbb{R}^2}{$x^2+y^2=1$}$.
For
$\omega=(0,1)\in X$
the stereographic projection
$X_\omega\to\mathbb{R}$
identifies
$X_\omega$
with real numbers
$\mathbb{R}$.
We let
$d_\omega$
be the standard metric on
$\mathbb{R}$,
that is,
$d_\omega(x,y)=|x-y|$
for any
$x,y\in\mathbb{R}$.
This generates a M\"obius structure on
$X$
which is called {\em canonical}. The basic feature of the canonical M\"obius
structure on
$X=S^1$
is that for any 4-tuple
$(\sigma,x,y,z)\subset X$
with the cyclic order
$\sigma xyz$
we have
$d_\sigma(x,y)+d_\sigma(y,z)=d_\sigma(x,z)$.
\end{exa}
\subsection{Harmonic pairs}
\label{subsect:harm_pairs}
From now on, we assume that
$X$
is the circle,
$X=S^1$.
It is convenient to use unordered pairs
$(x,y)\sim(y,x)$
of distinct points on
$X$,
and we denote their set by
$\operatorname{aY}=S^1\times S^1\setminus\Delta/\sim$,
where
$\Delta=\set{(x,x)}{$x\in S^1$}$
is the diagonal. A pair
$q=(a,b)\in\operatorname{aY}\times\operatorname{aY}$
is harmonic if
\begin{equation}\label{eq:harmonic}
|xz|\cdot|yu|=|xu|\cdot|yz|
\end{equation}
for some and hence any semi-metric of the M\"obius structure, where
$a=(x,y)$, $b=(z,u)$.
The pair
$a$
is called the {\em left} axis of
$q$,
while
$b$
the {\em right} axis. We denote by
$\operatorname{Harm}$
the set of harmonic pairs,
$\operatorname{Harm}\subset\operatorname{aY}\times\operatorname{aY}$,
of the given M\"obius structure. There is a canonical involution
$j:\operatorname{Harm}\to\operatorname{Harm}$
without fixed points given by
$j(a,b)=(b,a)$.
Note that
$j$
permutes left and right axes. The quotient space we denote by
$\operatorname{Hm}:=\operatorname{Harm}/j$.
In other words,
$\operatorname{Hm}$
is the set of unordered harmonic pairs of unordered pairs
of points in
$X$,
and
$\operatorname{Harm}$
is its 2-sheeted covering.
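For a concrete example in the canonical structure of Example~\ref{exa:canonical_moebius_circle}: in the metric on $X_\omega=\mathbb{R}$ the pair $a=(-2,2)$, $b=(1,4)$ is harmonic, since $|xz|\cdot|yu|=3\cdot 2=6=6\cdot 1=|xu|\cdot|yz|$, and the axes $(-2,2)$ and $(1,4)$ indeed separate each other on the circle. A one-line numerical check of (\ref{eq:harmonic}):

```python
def is_harmonic(d, a, b, tol=1e-12):
    """Check the harmonicity condition |xz| |yu| = |xu| |yz| for a=(x,y), b=(z,u)."""
    (x, y), (z, u) = a, b
    return abs(d(x, z) * d(y, u) - d(x, u) * d(y, z)) < tol

d = lambda p, q: abs(p - q)   # canonical metric on X_omega = R
```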
\begin{rem}\label{rem:2-covering_harm} Sometimes, we need a 2-sheeted
covering
$\widetilde\operatorname{Harm}$
of
$\operatorname{Harm}$,
which consists of harmonic pairs
$q=(a,b)$
with
$a=(x,y)\in S^1\times S^1\setminus\Delta$, $b\in\operatorname{aY}$.
Note that
$\widetilde\operatorname{Harm}$
is homeomorphic to the tangent bundle of
$\operatorname{H}^2$.
\end{rem}
\subsection{Axioms}
\label{subsect:axioms}
We list a set of axioms for a M\"obius structure
$M$
on the circle
$X=S^1$,
which are needed for Theorem~\ref{thm:main}.
\begin{itemize}
\item [(T)] Topology: $M$-topology
on
$X$
is that of
$S^1$.
\item[(M($\alpha$))] Monotonicity: Fix
$1>\alpha\ge\sqrt{2}-1$.
Given a 4-tuple
$q=(x,y,z,u)\in X^4$
such that the pairs
$(x,y)$, $(z,u)$
separate each other, we have
$$|xy|\cdot|zu|\ge\max\{|xz|\cdot|yu|+\alpha|xu|\cdot|yz|,\alpha|xz|\cdot|yu|+|xu|\cdot|yz|\}$$
for some and hence any semi-metric from
$M$.
\item[(P)] Ptolemy: for every 4-tuple
$q=(x,y,z,u)\in X^4$
we have
$$|xy|\cdot|zu|\le |xz|\cdot|yu|+|xu|\cdot|yz|$$
for some and hence any semi-metric from
$M$.
\end{itemize}
A M\"obius structure
$M$
on the circle
$X$
that satisfies axioms T, M($\alpha$), P is said to be {\em strictly monotone}.
We denote by
$\mathcal{M}$
the class of strictly monotone M\"obius structures on
$X$.
\begin{rem}\label{rem:zolotov} Axiom~M($\alpha$) is motivated by the work
\cite{Zo18} of V.~Zolotov. It is stronger than that in
\cite{Bu19}. The lower bound for
$\alpha$
is used in sect.~\ref{subsect:diam_quai-lines}.
\end{rem}
\begin{rem}\label{rem:axiom_Q} Axiom~P is satisfied, for example,
for the M\"obius structure on the boundary at infinity of any
$\operatorname{CAT}(-1)$
space, see \cite{FS12}.
\end{rem}
\begin{rem}\label{rem:canonical_axioms} The canonical M\"obius structure
$M_0$
on
$X=S^1$
clearly satisfies Axioms~T, M($\alpha$), P.
\end{rem}
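These axioms are easy to test numerically for $M_0$: for collinear points with interleaved order $x<z<y<u$ (so that the pairs $(x,y)$, $(z,u)$ separate each other in $X$) the Ptolemy inequality degenerates to the equality $|xy|\cdot|zu|=|xz|\cdot|yu|+|xu|\cdot|yz|$, which makes M($\alpha$) immediate for every $\alpha\le 1$. A small sketch:

```python
d = lambda p, q: abs(p - q)   # canonical metric on X_omega = R

def ptolemy_defect(x, y, z, u):
    """|xz| |yu| + |xu| |yz| - |xy| |zu|; nonnegative by axiom P."""
    return d(x, z) * d(y, u) + d(x, u) * d(y, z) - d(x, y) * d(z, u)

def monotone_ok(x, y, z, u, alpha):
    """Axiom M(alpha) for separating pairs (x, y), (z, u) with x < z < y < u."""
    lhs = d(x, y) * d(z, u)
    return lhs >= max(d(x, z) * d(y, u) + alpha * d(x, u) * d(y, z),
                      alpha * d(x, z) * d(y, u) + d(x, u) * d(y, z))
```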
We recall some immediate corollaries from the axioms, see \cite{Bu19}. It follows from axiom~(P)
that any semi-metric from
$M$
with an infinitely remote point is a metric, i.e., it satisfies the triangle inequality.
A choice of
$\omega\in X$
uniquely determines the interval
$xy\subset X_\omega$
for any distinct
$x$, $y\in X$
different from
$\omega$
as the arc in
$X$
with the end points
$x$, $y$
that does not contain
$\omega$.
We have \cite[Corollary~2.6, Corollary~2.7]{Bu19}.
\begin{cor}\label{cor:interval_monotone}
Axiom M($\alpha$) implies the following. Assume for a nondegenerate 4-tuple
$q=(x,y,z,u)\in\operatorname{reg}\mathcal{P}_4$
the interval
$xz\subset X_u$
is contained in
$xy$, $xz\subset xy\subset X_u$.
Then
$|xz|_u<|xy|_u$.
\end{cor}
\begin{cor}\label{cor:harm_separate} For any harmonic pair
$((x,y),(z,u))\in\operatorname{Harm}$
the pairs
$(x,y)$, $(z,u)\in\operatorname{aY}$
separate each other.
\end{cor}
\subsection{Increment axiom and a fine topology on $\mathcal{M}$}
\label{subsect:increment_axiom}
The Increment axiom is not used explicitly in this paper.
However, it is essential in proving that lines with respect to
a M\"obius structure are geodesics, see \cite{Bu18}. We recall it here
for the convenience of the reader. For more details see \cite{Bu17}, where
it was introduced.
The following is an alternative description of a M\"obius structure which
is convenient in many cases. For any semi-metric
$d$
on
$X$
we have three cross-ratios
$$q\mapsto \operatorname{cr}_1(q)=\frac{|x_1x_3||x_2x_4|}{|x_1x_4||x_2x_3|};
\operatorname{cr}_2(q)=\frac{|x_1x_4||x_2x_3|}{|x_1x_2||x_3x_4|};
\operatorname{cr}_3(q)=\frac{|x_1x_2||x_3x_4|}{|x_2x_4||x_1x_3|}$$
for
$q=(x_1,x_2,x_3,x_4)\in\operatorname{reg}\mathcal{P}_4$,
whose product equals 1, where
$|x_ix_j|=d(x_i,x_j)$.
We associate with
$d$
a map
$M_d:\operatorname{reg}\mathcal{P}_4\to L_4$
defined by
\begin{equation}\label{eq:moeb_map}
M_d(q)=(\ln\operatorname{cr}_1(q),\ln\operatorname{cr}_2(q),\ln\operatorname{cr}_3(q)),
\end{equation}
where
$L_4\subset\mathbb{R}^3$
is the 2-plane given by the equation
$a+b+c=0$.
Two semi-metrics
$d$, $d'$
on
$X$
are M\"obius equivalent if and only
$M_d=M_{d'}$.
Thus a M\"obius structure on
$X$
is completely determined by a map
$M=M_d$
for any semi-metric
$d$
of the M\"obius structure, and we often identify a M\"obius structure
with the respective map
$M$.
In this description, axioms~(M($\alpha$)) and (P) read as follows:
(M($\alpha$)) Fix
$1>\alpha\ge\sqrt{2}-1$.
Given a 4-tuple
$q=(x,y,z,u)\in X^4$
such that the pairs
$(x,y)$, $(z,u)$
separate each other, we have
$$\operatorname{cr}_3(q)\ge\max\left\{1+\frac{\alpha}{\operatorname{cr}_1(q)},\alpha+\frac{1}{\operatorname{cr}_1(q)}\right\}.$$
(P) for every 4-tuple
$q=(x,y,z,u)\in X^4$
we have
$$\operatorname{cr}_3(q)\le 1+\frac{1}{\operatorname{cr}_1(q)}.$$
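The identities behind this reformulation can be tested numerically for $M_0$. A sketch, again assuming the chordal metric as a representative semi-metric: the three cross-ratios multiply to $1$, and (P) holds with equality for separating concyclic 4-tuples.

```python
import math, random

def chord(s, t):
    """Chordal distance between the points e^{is}, e^{it} on the unit circle."""
    return abs(2 * math.sin((s - t) / 2))

def cross_ratios(a1, a2, a3, a4):
    """cr_1, cr_2, cr_3 of the 4-tuple (x1,...,x4) given by angles a1..a4."""
    d = chord
    cr1 = d(a1, a3) * d(a2, a4) / (d(a1, a4) * d(a2, a3))
    cr2 = d(a1, a4) * d(a2, a3) / (d(a1, a2) * d(a3, a4))
    cr3 = d(a1, a2) * d(a3, a4) / (d(a2, a4) * d(a1, a3))
    return cr1, cr2, cr3

random.seed(1)
for _ in range(1000):
    # angles in cyclic order x, z, y, u; the tuple is q = (x, y, z, u)
    tx, tz, ty, tu = sorted(random.uniform(0, 2 * math.pi) for _ in range(4))
    cr1, cr2, cr3 = cross_ratios(tx, ty, tz, tu)
    assert abs(cr1 * cr2 * cr3 - 1) < 1e-8               # the product equals 1
    assert abs(cr3 - (1 + 1 / cr1)) < 1e-8 * (1 + cr3)   # (P) with equality for M_0
```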
We use notation
$\operatorname{reg}\mathcal{P}_n$
for the set of ordered nondegenerate
$n$-tuples
of points in
$X=S^1$, $n\in\mathbb{N}$.
For
$q\in\operatorname{reg}\mathcal{P}_n$
and a proper subset
$I\subset\{1,\dots,n\}$
we denote by
$q_I\in\operatorname{reg}\mathcal{P}_k$, $k=n-|I|$,
the
$k$-tuple
obtained from
$q$
(with the induced order) by crossing out all entries which correspond to elements of
$I$.
(I) Increment Axiom: for any
$q\in\operatorname{reg}\mathcal{P}_7$
with cyclic order
$\operatorname{co}(q)=1234567$
such that
$q_{247}$
and
$q_{157}$
are harmonic, we have
$$\operatorname{cr}_1(q_{345})>\operatorname{cr}_1(q_{123}).$$
It is proved in \cite[Proposition~7.10]{Bu17} that the canonical M\"obius
structure
$M_0$
on the circle
$X=S^1$
satisfies Increment Axiom.
We define a fine topology on
$\mathcal{M}$
as follows. Let
$\operatorname{reg}^+\mathcal{P}_7\subset X^7$
be the subset of
$\operatorname{reg}\mathcal{P}_7$
which consists of all
$q\in\operatorname{reg}\mathcal{P}_7$
with the cyclic order $\operatorname{co}(q)=1234567$. We take on
$\operatorname{reg}^+\mathcal{P}_7$
the topology induced from the standard topology of the 7-torus
$X^7$.
We associate with a M\"obius structure
$M\in\mathcal{M}$
a section of the trivial bundle
$\operatorname{reg}^+\mathcal{P}_7\times\mathbb{R}^4\to\operatorname{reg}^+\mathcal{P}_7$
given by
$$M(q)=(q,\operatorname{cr}_2(q_{247}),\operatorname{cr}_2(q_{157}),\operatorname{cr}_1(q_{345}),\operatorname{cr}_1(q_{123}))$$
for
$q=1234567\in\operatorname{reg}^+\mathcal{P}_7$.
Taking the product topology on
$\operatorname{reg}^+\mathcal{P}_7\times\mathbb{R}^4$,
we define the {\em fine} topology on
$\mathcal{M}$
with base given by sets
$$U_V=\set{M\in\mathcal{M}}{$M(\operatorname{reg}^+\mathcal{P}_7)\subset V$},$$
where
$V$
runs over open subsets of
$\operatorname{reg}^+\mathcal{P}_7\times\mathbb{R}^4$.
The class
$\mathcal{I}$
of (strictly) monotone M\"obius structures on the circle which satisfy Axiom~(I)
contains a neighborhood of
$M_0$
that is open in the fine topology,
see \cite[Proposition~7.14]{Bu17}.
\section{Lines and zigzag paths}
\label{sect:lines_zzpath}
Here we briefly recall definitions and some properties of lines and zigzag paths
from \cite{Bu18}, \cite{Bu19}.
\subsection{Lines}
\label{subsect:lines}
\begin{lem}\label{lem:project_point_line}\cite[Lemma~3.1]{Bu19} Given
$a\in\operatorname{aY}$
and
$x\in X$, $x\notin a$,
there is a uniquely determined
$y\in X$
such that the pair
$(a,b)$
is harmonic,
$(a,b)\in\operatorname{Hm}$,
where
$b=(x,y)$.
\end{lem}
We denote by
$\rho_a(x)=y$
the point
$y$
from Lemma~\ref{lem:project_point_line}. The {\em line} with axis
$a\in\operatorname{aY}$
is defined as the set
$\operatorname{h}_a\subset\operatorname{Hm}$
which consists of all pairs
$q=(a,b)$
with
$b=(x,\rho_a(x))$
where
$x$
runs over an arc in
$X$
determined by
$a$.
This is well defined because
$\rho_a:X\to X$
is involutive,
$\rho_a^2=\operatorname{id}$
(we extend
$\rho_a$
to
$a=(z,u)$
by
$\rho_a(z)=z$, $\rho_a(u)=u$). In this case, we use the notation
$x_a:=b$
and say that
$x_a\in\operatorname{h}_a$
is the projection of
$x$
to the line
$\operatorname{h}_a$.
For more about lines see \cite{Bu18}. In particular, every line
is homeomorphic to the real line
$\mathbb{R}$,
different points on a line are in {\em strong causal relation}, that is,
either of them lies on an open arc in
$X$
determined by the other one, and vice versa, given
$b$, $b'\in\operatorname{aY}$
in strong causal relation, there exists a unique line
$\operatorname{h}_a$
through
$b$, $b'$,
see \cite[Lemma~3.2, Lemma~4.2]{Bu18}. In this case, the pair
$a\in\operatorname{aY}$
(or the line
$\operatorname{h}_a$)
is called the {\em common perpendicular} to
$b$, $b'$.
The {\em segment}
$qq'$
of a line
$\operatorname{h}_a$
with
$q=(a,b)$, $q'=(a,b')\in\operatorname{h}_a$
is defined as the union of
$q$, $q'$
and all
$q''=(a,b'')\in\operatorname{h}_a$
such that
$b''$
separates
$b$, $b'$.
The last means that
$b$
and
$b'$
lie on different open arcs in
$X$
determined by
$b''$.
The points
$q$, $q'$
are the {\em ends} of
$qq'$.
The segment
$qq'\subset\operatorname{h}_a$
is homeomorphic to the standard segment
$[0,1]$.
\subsection{Distance between harmonic pairs with common axis}
\label{subsect:distance_harmonic_pairs}
Given two harmonic pairs
$q$, $q'\in\operatorname{Hm}$
with a common axis, say
$q=(a,b)$
and
$q'=(a,b')$,
we define {\em the distance}
$|qq'|$
between them as
\begin{equation}\label{eq:distance}
|qq'|=\left|\ln\frac{|xz'|\cdot|yz|}{|xz|\cdot|yz'|}\right|
\end{equation}
for some and hence any semi-metric on
$X$
from
$M$,
where
$a=(x,y)$, $b=(z,u)$, $b'=(z',u')\in\operatorname{aY}$.
One easily checks that every line
$\operatorname{h}_a\subset\operatorname{Hm}$
with this distance is isometric to the real line
$\mathbb{R}$
with the standard distance.
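For the canonical structure $M_0$ this isometry can be observed numerically. A hedged sketch: we take the axis $a=(1,-1)\subset S^1\subset\mathbb{C}$, for which the involution $\rho_a$ is complex conjugation (an assumption one checks directly from the harmonicity condition), and verify that the distance (\ref{eq:distance}) is additive along $\operatorname{h}_a$, with explicit isometry $\theta\mapsto\ln\tan(\theta/2)$.

```python
import math, random

def chord(s, t):
    """Chordal distance between the points e^{is}, e^{it} on the unit circle."""
    return abs(2 * math.sin((s - t) / 2))

# axis a = (x, y) with x = e^{i*0}, y = e^{i*pi}; points of the line h_a are
# the harmonic pairs (a, (z, zbar)) with z = e^{i*theta}, theta in (0, pi)
AX, AY = 0.0, math.pi

def line_dist(t1, t2):
    """|qq'| from the distance formula, for b, b' given by z = e^{i t1}, z' = e^{i t2}."""
    num = chord(AX, t2) * chord(AY, t1)
    den = chord(AX, t1) * chord(AY, t2)
    return abs(math.log(num / den))

random.seed(2)
for _ in range(200):
    t1, t2, t3 = sorted(random.uniform(0.01, math.pi - 0.01) for _ in range(3))
    # monotone points on the line: distances add up, as on the real line
    assert abs(line_dist(t1, t3) - (line_dist(t1, t2) + line_dist(t2, t3))) < 1e-9
    # explicit isometry onto R: theta -> ln tan(theta/2)
    expected = abs(math.log(math.tan(t2 / 2) / math.tan(t1 / 2)))
    assert abs(line_dist(t1, t2) - expected) < 1e-9
```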
\subsection{Zigzag paths}
\label{subsect:zigzag_paths}
Every harmonic pair
$q=(a,b)\in\operatorname{Hm}$
has two axes. Thus, moving along a line, we may at any moment
switch to the other axis and continue along the line it determines.
This leads to the notion of a zigzag path.
A {\em zig-zag} path, or zz-path,
$S\subset\operatorname{Hm}$
is defined as a finite (possibly empty) sequence of segments
$\sigma_i$
in
$\operatorname{Hm}$,
where consecutive segments
$\sigma_i$, $\sigma_{i+1}$
have a common end
$q=\sigma_i\cap\sigma_{i+1}\in\operatorname{Hm}$
with axes determined by
$\sigma_i$, $\sigma_{i+1}$.
Segments
$\sigma_i$
are also called {\em sides} of
$S$,
while a {\em vertex} of
$S$
is an end of a side. Given
$q$, $q'\in\operatorname{Hm}$,
there is a zz-path
$S$
in
$\operatorname{Hm}$
with at most five sides that connects
$q$
and
$q'$
(see \cite[Lemma~3.3]{Bu18}). This notion is easily lifted to
$\operatorname{Harm}$.
\section{Metric on $\operatorname{Hm}$ and filling of $M$}
\label{sect:metric}
\subsection{Distance $\delta$ on $\operatorname{Hm}$}
\label{subsect:dist_de}
Let
$S=\{\sigma_i\}$
be a zz-path in
$\operatorname{Hm}$.
We define the length of
$S$
as the sum
$|S|=\sum_i|\sigma_i|$
of the lengths of its sides. Now, we define a distance
$\delta$
on
$\operatorname{Hm}$
by
$$\delta(q,q')=\inf_S|S|,$$
where the infimum is taken over all zz-paths
$S\subset\operatorname{Hm}$
from
$q$
to
$q'$.
One easily sees that
$\delta$
is a finite pseudometric on
$\operatorname{Hm}$,
see \cite[Proposition~6.2]{Bu18}.
The following result is obtained in \cite{Bu18}, \cite{Bu19}.
\begin{thm}\label{thm:de_metric_space} Assume that a M\"obius structure
$M$
on
$X=S^1$
is strictly monotone, i.e., it satisfies axioms~(T), (M($\alpha$)), (P). Then
$(\operatorname{Hm},\delta)$
is a complete, proper, geodesic metric space with
$\delta$-metric
topology coinciding with that induced from
$X^4$. If, in addition,
$M$
satisfies Increment axiom, then every line in
$\operatorname{Hm}$
is a geodesic.
\end{thm}
\begin{rem}\label{rem:hm_vs_harm} Since
$\operatorname{Harm}$
is a 2-sheeted covering of
$\operatorname{Hm}$,
all of the conclusions of Theorem~\ref{thm:de_metric_space} hold for the space
$\operatorname{Harm}$.
\end{rem}
\subsection{Filling}
\label{subsect:filling}
Now we define a filling
$Y$
of a strictly monotone M\"obius structure
$M$
on
$X$
as the space
$(\operatorname{Hm},\delta)$
of harmonic pairs in
$M$
with the distance
$\delta$, $Y=(\operatorname{Hm},\delta)$.
Our aim is to show that if
$M$
in addition satisfies the Increment axiom, then
$Y$
is a hyperbolic space as required in Theorem~\ref{thm:main}. Sometimes, we pass to
its 2-sheeted covering
$\operatorname{Harm}$
and use the same notation
$Y=(\operatorname{Harm},\delta)$.
\section{Involutions of $X$ without fixed points}
\label{sect:involutions_without_fixed_points}
\subsection{Some properties}
\label{subsect:properties_fixed_points}
An involution
$\rho:X\to X$
of
$X=S^1$
is an involutive,
$\rho^2=\operatorname{id}$,
homeomorphism.
\begin{lem}\label{lem:separate} Let
$\rho:X\to X$
be an involution without fixed points. Then for any distinct
$x$, $y\in X$
the pairs
$a=(x,\rho(x))$, $b=(y,\rho(y))$
separate each other.
\end{lem}
\begin{proof} Assume to the contrary that there are distinct
$x$, $y\in X$
such that the respective
$a$, $b\in\operatorname{aY}$
do not separate each other. Let
$X=a^+\cup a^-$
be the decomposition of
$X$
into (closed) arcs determined by
$a$.
By the assumption,
$b$
lies on one of these arcs, say
$b\subset a^+$.
Since
$\rho$
is an involution, we have
$\rho(a)=a$
and
$\rho(b)=b$.
Therefore,
$\rho$
preserves
$a^+$
permuting its ends
$x$, $\rho(x)$.
But in this case we observe a fixed point of
$\rho$
inside
$a^+$.
This is a contradiction because
$\rho$
has no fixed points.
\end{proof}
Let
$\rho:X\to X$
be an involution without fixed points. The factor
$X/\rho$
can be identified with the subset
$$e_\rho=\set{(x,\rho(x))\in\operatorname{aY}}{$x\in X$}\subset\operatorname{aY},$$
which is called an {\em elliptic quasi-line.}
\begin{lem}\label{lem:harmonic_pairs_ellitic} Let
$e=e_\rho$
be an elliptic quasi-line in
$\operatorname{aY}$.
Then for every
$s\in\operatorname{aY}$
there is a unique
$t\in e$
such that the 4-tuple
$(s,t)$
is harmonic.
\end{lem}
\begin{proof} First, we show that the image under the involution
$\rho$
of at least one of the open arcs
$s^+$, $s^-$,
in which
$s=(x,y)$
separates
$X$,
misses that arc. Indeed, if
$\rho(x)=y$,
then
$\rho(y)=x$.
In that case,
$\rho$
permutes the arcs
$s^+$, $s^-$
since otherwise,
$\rho(s^\pm)=s^\pm$,
and thus
$\rho$
has a fixed point.
Assume now that $\rho(x)\neq y$. By Lemma~\ref{lem:separate} we know that the pairs
$(x,\rho(x))$
and
$(y,\rho(y))$
separate each other. Hence,
$\rho(s)$
and
$s$
do not separate each other, and we can assume without loss of generality, that
$\rho(s)\subset s^-$.
Then
$\rho(s^+)$
misses
$s^+$
since otherwise
$\rho(s^+)\supset s^+$,
and thus
$\rho$
has a fixed point.
We denote that arc by
$s^+$
and define a function
$f:s^+\to\mathbb{R}$
by
$$f(z)=\frac{|zy|_x}{|\rho(z)y|_x},$$
where, recall,
$x$
is the infinitely remote point for the semi-metric
$|zu|_x$.
By the choice of
$s^+$,
we have
$\rho(z)\neq y$
for every
$z\in s^+$.
Thus
$f$
is continuous,
$f(z)\to\infty$
as
$z\to x$
and
$f(z)\to 0$
as
$z\to y$.
By continuity,
$f(z)=1$
for some
$z\in s^+$.
Then the 4-tuple
$(s,t)$
is harmonic for
$t=(z,\rho(z))\in e$.
If
$t'\in e$
is another element with harmonic
$(s,t')$,
then
$s$
is the common perpendicular to
$t$, $t'$
and thus
$t$, $t'$
are in the strong causal relation, see sect.~\ref{subsect:lines}; in particular, they
do not separate each other. This contradicts the conclusion of
Lemma~\ref{lem:separate}.
\end{proof}
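The intermediate-value argument in the proof is effective and can be illustrated numerically: for the antipodal involution $\rho(\theta)=\theta+\pi$ (which has no fixed points) and the chordal metric representing $M_0$, bisection on the arc $s^+$ locates the parameter of $t\in e_\rho$ with $(s,t)$ harmonic. The concrete endpoints and tolerances below are illustrative choices, not taken from the text.

```python
import math

def chord(s, t):
    """Chordal distance between the points e^{is}, e^{it} on the unit circle."""
    return abs(2 * math.sin((s - t) / 2))

def rho(t):
    """A fixed-point-free involution of S^1: the antipodal map."""
    return t + math.pi

def harm_residual(x, y, t):
    """|xz||yu| - |xu||yz| for z = e^{it}, u = e^{i rho(t)}; zero iff ((x,y),(z,rho(z))) is harmonic."""
    u = rho(t)
    return chord(x, t) * chord(y, u) - chord(x, u) * chord(y, t)

# s = (x, y); on the arc s^+ between x and y whose rho-image misses it,
# the residual changes sign, so bisection finds the t of the lemma
x, y = 0.3, 2.1
lo, hi = x + 1e-9, y - 1e-9
assert harm_residual(x, y, lo) < 0 < harm_residual(x, y, hi)
for _ in range(80):
    mid = (lo + hi) / 2
    if harm_residual(x, y, mid) < 0:
        lo = mid
    else:
        hi = mid
t = (lo + hi) / 2
assert abs(harm_residual(x, y, t)) < 1e-9
```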
\begin{rem}\label{rem:lift_quasi-line}
Let
$\rho:X\to X$
be an involution without fixed points. Applying Lemma~\ref{lem:harmonic_pairs_ellitic}
to any
$s\in e_\rho$
we obtain a harmonic pair
$(s,t(s))\in\operatorname{Harm}$
with both
$s$, $t(s)\in e_\rho$.
The set
$\widehat e_\rho=\set{(s,t(s))}{$s\in e_\rho$}\subset\operatorname{Harm}$
is also called the {\em elliptic quasi-line} in
$\operatorname{Harm}$
associated with the involution
$\rho$.
In this sense, we can lift any elliptic quasi-line
$e_\rho\subset\operatorname{aY}$
to the uniquely determined elliptic quasi-line
$\widehat e_\rho\subset\operatorname{Harm}$.
It follows from Lemma~\ref{lem:harmonic_pairs_ellitic} and Lemma~\ref{lem:separate}
that
$\widehat e_\rho$
is invariant under the involution
$j:\operatorname{Harm}\to\operatorname{Harm}$.
Thus we can speak about elliptic quasi-lines in
$\operatorname{Hm}$.
\end{rem}
\subsection{Involutions associated with a harmonic 4-tuple}
\label{subsect:involution_harm}
Every harmonic 4-tuple
$q=(a,b)\in\operatorname{Harm}$
generates a pair of involutions
$\rho_q^\pm:X\to X$
without fixed points as follows. We fix a decomposition of
$X\setminus a$
into open arcs
$a^\pm$
with the common ends
$a$, $X=a^+\cup a^-\cup a$,
and define maps
$\rho_q^\pm:X\to X$
by
$$\rho_q^\pm(x)=\begin{cases}
\rho_b\circ\rho_a(x),\ x\in\overline a^\pm\\
\rho_a\circ\rho_b(x),\ x\in\overline a^\mp,
\end{cases}
$$
where
$\overline a^\pm$
are respective closed arcs. Since
$\rho_b\circ\rho_a(x)=\rho_a\circ\rho_b(x)$
for
$x\in a$,
the maps
$\rho_q^\pm$
are well defined and they are continuous involutions of
$X$
without fixed points. Since
$\rho_a(b)=b$
and
$\rho_b(a)=a$,
it follows from Lemma~\ref{lem:harmonic_pairs_ellitic} that
$q\in\widehat e_\rho$
for
$\rho=\rho_q^\pm$.
\begin{rem}\label{rem:noncommuting} The maps
$\rho_a$, $\rho_b$
may not be commuting, thus
$\rho^+\neq\rho^-$
in general, and to define an involution
$\rho$
we are forced to make a choice of one of the arcs, in which
$a$
(or
$b$)
separates
$X$.
\end{rem}
\subsection{Canonical decomposition of $\operatorname{Harm}$ over $X$}
\label{subsect:canonical_decomposition}
For every
$\omega\in X$
consider the set
$\operatorname{Harm}_\omega$
which consists of all pairs
$q=(a,b)\in\operatorname{Harm}$
with
$\omega\in a$.
Clearly,
$\operatorname{Harm}=\cup_{\omega\in X}\operatorname{Harm}_\omega$,
and for different
$\omega$, $\omega'\in X$
the sets
$\operatorname{Harm}_\omega$, $\operatorname{Harm}_{\omega'}$
intersect over the line
$h_{(\omega,\omega')}$, $\operatorname{Harm}_\omega\cap\operatorname{Harm}_{\omega'}=h_{(\omega,\omega')}$.
Our aim in this section is to show that every
$\operatorname{Harm}_\omega$
is cobounded in
$\operatorname{Harm}$
uniformly in
$\omega\in X$,
see Corollary~\ref{cor:uniform_cobounded}.
\subsection{Virtual projection $\operatorname{Harm}\to\operatorname{Harm}_\omega$}
\label{subsect:construction_harm_to_harm_omega}
Involutions associated with
$q=(a,b)\in\operatorname{Harm}$
depend on the choice of arcs
$a^+$, $a^-$,
see sect.~\ref{subsect:involution_harm}. To make that choice canonical,
we fix an orientation of the circle
$X=S^1$
and pass to the 2-sheeted covering
$\widetilde\operatorname{Harm}$
of
$\operatorname{Harm}$,
see Remark~\ref{rem:2-covering_harm}. Then for every
$q=(a,b)\in\widetilde\operatorname{Harm}$, $a=(x,y)\in X^2$,
the arc
$a^+$
is defined as the oriented arc from
$x$
to
$y$
with the orientation induced by the orientation of
$X$.
Now, we define
$\rho_q=\rho^+_q$.
\begin{lem}\label{lem:omega_projection} For every
$\omega\in X$
there is a well defined retraction
$h_\omega:\widetilde\operatorname{Harm}\to\operatorname{Harm}_\omega$.
\end{lem}
\begin{proof} Given
$q=(a,b)\in\widetilde\operatorname{Harm}$
we consider the elliptic quasi-line
$e=e_\rho$
associated with the involution
$\rho=\rho^+_q:X\to X$.
Then the line
$h_s\subset\operatorname{Harm}$
with
$s=(\omega,\rho(\omega))\in\operatorname{aY}$
lies in fact in
$\operatorname{Harm}_\omega$
by the definition,
$h_s\subset\operatorname{Harm}_\omega$.
By Lemma~\ref{lem:harmonic_pairs_ellitic}, there is a uniquely determined
$t\in e$
with
$(s,t)$
harmonic, that is,
$(s,t)\in h_s$.
Now, we put
$h_\omega(q)=(s,t)$.
This canonically defines a retraction
$h_\omega:\widetilde\operatorname{Harm}\to\operatorname{Harm}_\omega$
which we call a {\em virtual} projection of
$\operatorname{Harm}$
to
$\operatorname{Harm}_\omega$.
\end{proof}
\section{Diameter of elliptic quasi-lines}
\label{sect:diameter_quasi_lines}
In this section, we show that the diameter of any elliptic quasi-line in
$\operatorname{Harm}$
is uniformly bounded above.
\subsection{Width of a strip}
\label{subsect:length_segments}
Recall, see \cite[sect.~3.3]{Bu19}, that a 4-tuple
$p=(a,b)\in X^4$
with
$a=(x,y)$, $b=(u,z)$
is a {\em strip} if
$a$, $b$
are in the strong causal relation and the pairs
$(x,z)$, $(u,y)$
separate each other. Note that
$p'=(b,c)\in X^4$
with
$b=(x,u)$, $c=(y,z)$
is also a strip based on the same 4-tuple
$(x,y,u,z)\in X^4$.
Since the pairs
$a$, $b$
are in the strong causal relation, there is uniquely determined
common perpendicular
$s=(v,w)$
to
$a$, $b$.
We use notation
$p=(a,b,s)$
for a strip with common perpendicular
$s$.
Note that
$s$
is uniquely determined by
$(a,b)$,
and we add
$s$
to fix notation.
We define the width of the strip
$p$
as the length
$l=\operatorname{width}(p)$
of the segment
$x_su_s=y_sz_s\subset\operatorname{h}_s$
on the line
$\operatorname{h}_s$.
The following estimate has been obtained in \cite[Lemma~3.2]{Bu19}.
\begin{lem}\label{lem:length_preestimate} For any strip
$p=(a,b,s)$
we have
$$\operatorname{width}(p)\le 2\sqrt{\frac{|xu||yz|}{|xy||zu|}},$$
where
$a=(x,y)$, $b=(u,z)$. A similar estimate holds for the associated strip
$p'=(b,c,t)$,
where
$t$
is common perpendicular to
$b=(x,u)$, $c=(y,z)$
$$\operatorname{width}(p')\le 2\sqrt{\frac{|xy||zu|}{|xu||yz|}},$$
in particular,
$\operatorname{width}(p)\cdot\operatorname{width}(p')\le 4$.
\end{lem}
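The final claim of Lemma~\ref{lem:length_preestimate} is pure arithmetic: writing $r=\frac{|xu||yz|}{|xy||zu|}$, the two bounds are $2\sqrt{r}$ and $2\sqrt{1/r}$, whose product is exactly $4$. A one-line numerical check over a wide range of cross-ratio values $r$:

```python
import math, random

random.seed(3)
for _ in range(1000):
    r = math.exp(random.uniform(-10, 10))   # r = |xu||yz| / (|xy||zu|)
    bp = 2 * math.sqrt(r)                   # bound for width(p)
    bpp = 2 * math.sqrt(1 / r)              # bound for width(p')
    assert abs(bp * bpp - 4) < 1e-9         # hence width(p) * width(p') <= 4
```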
\subsection{Diameter of elliptic quasi-lines in $\operatorname{Harm}$}
\label{subsect:diam_quai-lines}
\begin{pro}\label{pro:diam_quasi-lines} There is a constant
$D>0$
such that for any involution
$\rho:X\to X$
without fixed points we have
$$\operatorname{diam}\widehat e_\rho\le D,$$
where
$\widehat e_\rho\subset\operatorname{Harm}$
is the elliptic quasi-line associated with
$\rho$,
see Remark~\ref{rem:lift_quasi-line}, and
$\operatorname{diam}=\operatorname{diam}_\delta$
is taken with respect to the distance
$\delta$
in
$\operatorname{Harm}$,
see sect.~\ref{subsect:dist_de}.
\end{pro}
In the proof, we use the construction from \cite[Lemma~3.3]{Bu18}, see sect.~\ref{subsect:zigzag_paths},
which gives a zz-path in
$\operatorname{Harm}$
between given
$p$, $q\in\widehat e_\rho$
consisting of 5 sides. We estimate the length of sides separately in
Lemmas~\ref{lem:length_al_be_above}, \ref{lem:gamma_length}, \ref{lem:mu_estimates},
\ref{lem:nu_estimates}.
\begin{figure}[htbp]
\centering
\psfrag{t}{$t$}
\psfrag{s}{$s$}
\psfrag{u}{$u$}
\psfrag{z}{$z$}
\psfrag{x}{$x$}
\psfrag{y}{$y$}
\psfrag{c}{$c$}
\psfrag{d}{$d$}
\psfrag{e}{$e$}
\psfrag{f}{$f$}
\psfrag{v}{$v$}
\psfrag{al}{$\alpha$}
\psfrag{be}{$\beta$}
\psfrag{ga}{$\gamma$}
\includegraphics[width=0.6\columnwidth]{diam.eps}
\caption{}\label{fi:diam}
\end{figure}
Let
$(z,u)$, $(s,t)\in\operatorname{aY}$
be pairs which separate each other. They separate
$X$
into four open arcs. We choose one of them as follows.
Assume (without loss of generality) that
$|us||zt|\ge|zs||ut|$
(this does not depend on the choice of the metric from our M\"obius structure
$M$,
in particular,
$|us|_t\ge|zs|_t$
in any metric
$|\ |_t$
from
$M$
with infinitely remote point
$t$).
Then we take the arc
$us\subset X$
between
$u$, $s$
that does not contain
$z$, $t$.
Next, we take a metric
$|\ |_t$
from
$M$
with infinitely remote point
$t$,
and take points
$x$, $y\in us$
(in the order
$uxys$)
such that
$|ux|_t=|xy|_t=|ys|_t=:h$.
It follows from continuity and monotonicity of the metric that
such points exist and they are uniquely determined.
Then the pairs
$(x,y)$, $(s,t)$
as well as the pairs
$(x,y)$, $(z,u)$
are in the strong causal relation, see sect.~\ref{subsect:lines}.
There are common perpendiculars
$(c,d)$
to the pairs
$(x,y)$, $(s,t)$,
and
$(e,f)$
to the pairs
$(x,y)$, $(z,u)$,
see Figure~\ref{fi:diam}.
These common perpendiculars
are uniquely determined, see sect.~\ref{subsect:lines}.
We estimate from above the length of the segments
$\alpha=x_{(c,d)}t_{(c,d)}=y_{(c,d)}s_{(c,d)}\subset\operatorname{h}_{(c,d)}$
and
$\beta=x_{(e,f)}u_{(e,f)}=y_{(e,f)}z_{(e,f)}\subset\operatorname{h}_{(e,f)}$.
\begin{lem}\label{lem:length_al_be_above} In notations above we have
$|\alpha|\le 2$, $|\beta|\le 4$.
\end{lem}
\begin{proof} For the strip
$p=(a,b,s)$,
where
$a=(x,y)$, $b=(s,t)$, $s=(c,d)$,
we have
$|\alpha|=\operatorname{width}(p)$.
Lemma~\ref{lem:length_preestimate} gives
$|\alpha|\le 2\sqrt{\frac{|ys||xt|}{|xy||st|}}=2\sqrt{\frac{|ys|_t}{|xy|_t}}=2$.
Similarly, for the strip
$p'=(a,b',s')$,
where
$b'=(z,u)$, $s'=(e,f)$,
we have
$|\beta|=\operatorname{width}(p')$.
Lemma~\ref{lem:length_preestimate} gives
$|\beta|\le 2\sqrt{\frac{|xu||yz|}{|xy||uz|}}=2\sqrt{\frac{|yz|_t}{|uz|_t}}$,
because
$|xu|_t=|xy|_t$.
Let
$v\in X$
be the point opposite to
$u$
with respect to the reflection
$X\to X$
determined by the line
$\operatorname{h}_{(s,t)}$,
i.e.
$u_{(s,t)}=v_{(s,t)}\in\operatorname{h}_{(s,t)}$.
Then
$|sv|_t=|us|_t\le 3h$
and
$v\not\in uz$
for the open arc
$uz\subset X$,
that includes
$us$,
by the choice of the open arc
$us\subset X$.
By the triangle inequality and monotonicity,
$|yz|_t\le|ys|_t+|sz|_t<h+|sv|_t\le 4h$, $|zu|_t>|xu|_t=h$.
Hence,
$|\beta|\le 2\sqrt{\frac{4h}{h}}=4$.
\end{proof}
Next, we estimate from above the length
$|\gamma|$
of the segment
$\gamma=c_{(x,y)}e_{(x,y)}=d_{(x,y)}f_{(x,y)}\subset\operatorname{h}_{(x,y)}$
on the line
$\operatorname{h}_{(x,y)}$.
\begin{lem}\label{lem:gamma_length} In notation above, we have
$|\gamma|\le 6$.
\end{lem}
\begin{proof} Using notations above, we assume that the points
$d$, $f$
lie on the segment
$xy\subset X_t$.
We consider, first, the case when
$e\le t$,
that is,
$e=t$,
or
$e$
lies on the ray
$ut\subset X_t$.
In this case, the points
$d$, $f$
lie in the order
$xfdy$
on the segment
$xy\subset X_t$.
Indeed, the pairs
$(c,d)$, $(e,f)$
are in the strong causal relation being the perpendiculars to
$(x,y)$.
Thus, the opposite assumption
$xdfy$
leads to the conclusion that the pairs
$(c,d)$
and
$(s,t)$
are in the strong causal relation. This contradicts the fact that
$(c,d)$
is a perpendicular to
$(s,t)$.
Now, we have
$$|\gamma|=\ln\frac{|xd||yf|}{|xf||yd|}.$$
Note that
$|xd|_t<|xy|_t=h$
by monotonicity, because
$d$
lies in the interior of the segment
$xy\subset X_t$.
We have
$$|\alpha|=\ln\frac{|ds||cy|}{|dy||cs|}=\ln\frac{|cy|_t}{|dy|_t}$$
because
$|ds|_t=|cs|_t$.
By Lemma~\ref{lem:length_al_be_above} we have
$|\alpha|\le 2$.
Thus
$|dy|_t\ge|cy|_te^{-2}\ge|ys|_te^{-2}=he^{-2}$.
It follows that
$|xd|_t/|dy|_t\le h/(he^{-2})=e^2$.
Next, we estimate
$|yf|_t/|xf|_t$
from above. Since
$yf\subset xy\subset X_t$,
we have
$|yf|_t<|xy|_t=h$
by monotonicity.
By Lemma~\ref{lem:length_al_be_above}, we have
$$e^{|\beta|}=\frac{|uf||ex|}{|xf||eu|}\le e^4.$$
Hence
$|xf|\ge\frac{|uf||ex|}{e^4|eu|}$.
By monotonicity, we have
$|uf|_t>|ux|_t=h$
and
$|ex|_t>|eu|_t$,
where the last inequality uses the assumption
$e\le t$,
see beginning of the proof. Therefore,
$|xf|_t\ge h/e^4$,
and we conclude that
$|yf|_t/|xf|_t\le e^4$.
Hence,
$|\gamma|\le\ln(e^2\cdot e^4)=6$.
Now, we consider the case
$e>t$,
that is,
$e$
lies on the ray
$zt\subset X_t$.
In this case, we cannot guarantee that the points
$d$, $f$
lie in the order
$xfdy$
on the segment
$xy\subset X_t$.
Thus we consider two subcases.
(1) The points
$d$, $f$
lie in the order
$xfdy$
on the segment
$xy$.
We represent the length
$|\gamma|$
as
$$e^{|\gamma|}=\frac{|xc||ye|}{|xe||yc|},$$
and take a metric from
$M$
with the infinitely remote point
$u$.
We have
$xc\subset xe\subset X_u$,
thus
$|xc|_u/|xe|_u<1$,
and hence
$e^{|\gamma|}\le |ye|_u/|yc|_u$.
Next, we use that
$$e^{|\beta|}=\frac{|fz||ye|}{|fy||ze|}\le e^4$$
by Lemma~\ref{lem:length_al_be_above}.
Since
$|fz|_u=|ze|_u$,
we obtain
$|ye|_u\le e^4|fy|_u$.
Since
$ys\subset yc\subset X_u$,
we have
$|ys|_u<|yc|_u$,
which gives
$e^{|\gamma|}\le e^4|fy|_u/|ys|_u$.
Using the metric inversion, see (\ref{eq:metric_inversion}), we pass
to the metric with infinitely remote point
$t$
and use that
$|ys|_t=h$, $|fy|_t<|xy|_t=h$:
$$|fy|_u=\frac{|fy|_t}{|fu|_t|yu|_t}\le\frac{h}{|fu|_t|yu|_t}.$$
$$|ys|_u=\frac{|ys|_t}{|yu|_t|su|_t}=\frac{h}{|yu|_t|su|_t}.$$
Using that
$|fu|_t>|ux|_t=h$
by monotonicity and
$|su|_t\le 3h$
by the triangle inequality, we finally obtain
$e^{|\gamma|}\le e^4|fy|_u/|ys|_u\le e^4|su|_t/|fu|_t\le e^4\cdot3h/h=e^4\cdot 3$.
Hence,
$|\gamma|\le 4+\ln 3$.
(2) The points
$d$, $f$
lie in the order
$xdfy$
on the segment
$xy$.
Recall that the pairs
$(c,d)$
and
$(e,f)$
are in the strong causal relation, and the pairs
$(c,d)$, $(s,t)$
separate each other. Thus
$c$
lies on the ray
$et\subset X_t$
which does not contain
$d$.
Hence, this time we have
$xe\subset xc\subset X_t$
and
$$e^{|\gamma|}=\frac{|xe||yc|}{|xc||ye|}.$$
By monotonicity,
$|xe|_t<|xc|_t$
and we conclude that
$e^{|\gamma|}<|yc|_t/|ye|_t$.
To estimate
$|yc|_t$
from above, we use that
$$e^{|\alpha|}=\frac{|ds||yc|}{|dy||cs|}\le e^2$$
by Lemma~\ref{lem:length_al_be_above}. Since
$|ds|_t=|cs|_t$
and
$dy\subset xy\subset X_t$,
we obtain
$|yc|_t\le e^2|dy|_t\le e^2|xy|_t=e^2h$.
On the other hand,
$ys\subset ye\subset X_t$.
Thus
$|ye|_t>|ys|_t=h$
by monotonicity. Therefore
$e^{|\gamma|}\le e^2$
and
$|\gamma|\le 2$.
\end{proof}
Let
$p=((z,u),(z',u'))$, $q=((s,t),(s',t'))\in\widehat e_\rho$
be given distinct harmonic pairs of pairs from
$\operatorname{aY}$.
Then the pairs
$(z,u)$, $(s,t)\in\operatorname{aY}$
separate each other being different members of the elliptic quasi-line in
$e_\rho\subset\operatorname{aY}$.
Assume as above (without loss of generality) that
$|us||zt|\ge|zs||ut|$.
Then we take the arc
$us\subset X$
between
$u$, $s$
that does not contain
$z$, $t$.
We also assume that
$t'$, $z'$
lie on the arc in
$X$
between
$s$, $t$
that contains
$su$.
\begin{rem}\label{rem:t_prime_in_su} In this case,
$sz'\subset st'\subset su\subset X_t$.
Indeed, since
$z=\rho(u)$, $s'=\rho(t')$,
the pairs of points
$(z,u)$, $(s',t')$
separate each other. Thus the opposite assumption
$u\in st'$
would imply
$|su|_t<|st'|_t=|ss'|_t<|sz|_t$,
a contradiction with our assumption
$|us||zt|\ge|zs||ut|$.
To show that
$z'\in st'$,
we fix
$q=((s,t),(s',t'))$
and move
$u$
from
$t'$
to
$t$
along the arc
$t't\subset st$.
Then
$z'$
moves from
$s$
to
$t'$
along the arc
$st'\subset st$.
Since
$u\in t't$
by the first part of the argument, we see that
$z'\in st'$.
\end{rem}
\begin{lem}\label{lem:mu_estimates} In notations above, assume that
$sy\subset st'\subset X_t$
(recall that
$st'\subset su$,
see Remark~\ref{rem:t_prime_in_su}). Then
$|\mu|\le\ln 3$,
where the segment
$\mu=d_{(s,t)}t'_{(s,t)}=c_{(s,t)}s'_{(s,t)}$
lies on the line
$h_{(s,t)}$.
\end{lem}
\begin{proof} We have
$$|\mu|=\left|\ln\frac{|sd||tt'|}{|st'||td|}\right|=\left|\ln\frac{|sd|_t}{|st'|_t}\right|.$$
Since
$sy\subset sd\subset su\subset X_t$,
we estimate
$h=|sy|_t\le|sd|_t\le|su|_t\le 3h$.
Since
$sy\subset st'\subset su$,
we estimate
$h=|sy|_t\le|st'|_t\le|su|_t\le 3h$.
Thus
$|\mu|\le\ln 3$.
\end{proof}
\begin{lem}\label{lem:zz_prime_estimate} In notations above, we have
$|zz'|_t\ge h$.
\end{lem}
\begin{proof} If
$sy\subset sz'\subset X_t$,
then
$|zz'|_t\ge|sz'|_t\ge|sy|_t=h$.
Thus we assume that
$sz'\subset sy$.
Then
$ux\subset uz'$
and hence
$|uz'|_t\ge|ux|_t=h$.
Since the pair of pairs
$p=((z,u),(z',u'))$
is harmonic, we have
$|zz'||uu'|=|zu'||z'u|$
in any metric of the M\"obius structure
$M$.
Note that
$t$
lies on the arc in
$X$
between
$u$, $u'$
that does not contain
$z$, $z'$.
Thus we have
$|uu'|_t\le|uz'|_t+|z'z|_t+|zu'|_t$
by the triangle inequality in the metric
$|\ |_t$
with infinitely remote point
$t$.
Using notations
$|zu'|_t=:a$, $|z'u|_t=:b$, $|zz'|_t=\varepsilon$,
we conclude that
$ab\le\varepsilon(a+b+\varepsilon)$.
Therefore,
$$\varepsilon\ge\frac{a+b+\sqrt{(a+b)^2+4ab}}{2}\ge a+b>b.$$
But
$b=|z'u|_t\ge h$.
Hence
$|zz'|_t\ge h$
also in this case.
\end{proof}
\begin{lem}\label{lem:nu_estimates} In notations above, we have
$|\nu|\le\ln 18$,
where the segment
$\nu=f_{(u,z)}z'_{(u,z)}=e_{(u,z)}u'_{(u,z)}$
lies on the line
$\operatorname{h}_{(u,z)}$.
\end{lem}
\begin{proof} We first show that
$|uz'|_t\ge h$.
If
$sz'\subset sy$,
then
$xy\subset uz'$
and hence
$h=|xy|_t\le |uz'|_t$.
Thus we assume that
$sy\subset sz'$.
Then
$|sy|_{s'}\le|sz'|_{s'}<|zz'|_{s'}$.
As in Lemma~\ref{lem:zz_prime_estimate} applied to a metric
$|\ |_{s'}$
with infinitely remote point
$s'$,
we obtain
$|uz'|_{s'}>|zz'|_{s'}>|sy|_{s'}$.
The metric inversion with respect to
$t$
gives
$$|uz'|_{s'}=\frac{|uz'|_t}{|us'|_t|z's'|_t};\
|sy|_{s'}=\frac{|sy|_t}{|ss'|_t|ys'|_t}.$$
Using that
$|sy|_t=h$
and by monotonicity
$|ss'|_t<|z's'|_t$, $|ys'|_t<|us'|_t$,
we obtain
$|uz'|_t>h$.
Now, using monotonicity, Lemma~\ref{lem:zz_prime_estimate} and the first part of
the proof, we have the following two-sided estimates
for
$|uz'|_t$, $|zf|_t$, $|uf|_t$
and
$|zz'|_t$:
$h\le|uz'|_t\le|us|_t\le 3h$,
$h=|sy|_t\le|zf|_t\le|zu|_t\le|uv|_t\le 6h$,
$h=|ux|_t\le|uf|_t\le|uy|_t\le 2h$,
$h\le|zz'|_t\le|uz|_t\le 6h$,
where the point
$v\in X_t$
is determined in Lemma~\ref{lem:length_al_be_above}.
Since
$$|\nu|=\left|\ln\frac{|uz'||zf|}{|uf||zz'|}\right|$$
for any metric from the M\"obius structure
$M$,
this gives
$|\nu|\le\ln 18$.
\end{proof}
Now, we estimate the length
of the zz-path
$\sigma=\mu\alpha\gamma\beta\nu$
in a particular case, when
$sy\subset st'\subset X_t$.
\begin{lem}\label{lem:zz_path_particular} In notations at the beginning of
the section, assume that
$sy\subset st'\subset X_t$
for the zz-path
$\sigma=\mu\alpha\gamma\beta\nu$
between
$p=((z,u),(z',u'))$
and
$q=((s,t),(s',t'))\in\widehat e_\rho$.
Then
$|\sigma|\le D$
with
$D=12+\ln 54<16$.
\end{lem}
\begin{proof} We have
$|\alpha|\le 2$, $|\beta|\le 4$
by Lemma~\ref{lem:length_al_be_above},
$|\gamma|\le 6$
by Lemma~\ref{lem:gamma_length},
$|\mu|\le\ln 3$
by Lemma~\ref{lem:mu_estimates}
and
$|\nu|\le\ln 18$
by Lemma~\ref{lem:nu_estimates}. Note that the assumption
$sy\subset st'$
is only used in the estimate for
$|\mu|$.
Thus
$|\sigma|\le|\mu|+|\alpha|+|\gamma|+|\beta|+|\nu|\le D$.
\end{proof}
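The arithmetic behind the constant $D$ in Lemma~\ref{lem:zz_path_particular} can be checked directly:

```python
import math

# bounds on the five sides of sigma = mu alpha gamma beta nu:
# |mu| <= ln 3, |alpha| <= 2, |gamma| <= 6, |beta| <= 4, |nu| <= ln 18
D = math.log(3) + 2 + 6 + 4 + math.log(18)
assert abs(D - (12 + math.log(54))) < 1e-12  # ln 3 + ln 18 = ln 54
assert D < 16
```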
In notations above, assume that
the harmonic pair
$q=((s,t),(s',t'))\in\widehat e_\rho$
is fixed. Then the harmonic pair
$p=((z,u),(z',u'))\in\widehat e_\rho$
is uniquely determined by the point
$u$
on the arc
$tt'\subset X$
between
$t$, $t'$
that does not contain
$s$, $s'$
because
$z=\rho(u)$
and
$(z',u')\in\operatorname{aY}$
is determined by
$(z,u)$,
see Lemma~\ref{lem:harmonic_pairs_ellitic}. The point
$u$
in its own turn determines
$x$, $y\in us$.
The conclusion of Lemma~\ref{lem:zz_path_particular} holds
for
$u\in tt'$
such that
$sy\subset st'$.
This gives an upper bound for the distance
$|us|_t$,
in particular,
$u$
is separated from
$t$.
Let
$u_0\in tt'$
be maximal with this property, i.e.
$y=t'$
for
$y=y(u_0)$.
At the moment, we do not have a required estimate of
$|\sigma|$
for
$u$
on the (open) arc
$tu_0\subset tt'$.
To fill in this gap, we apply the same construction for
$p=((z,u),(z',u'))$
and
$q'=j(q)=((s',t'),(s,t))$
assuming without loss of generality that
$|us'||zt'|\ge|zs'||ut'|$
and choosing the arc
$s'u\subset X$
between
$s'$, $u$
that does not contain
$z$, $t'$.
Then
$u$
determines, as above,
$x'$, $y'\in s'u$
with
$|ux'|_{t'}=|x'y'|_{t'}=|y's'|_{t'}=:h'$.
Now, the conclusion of Lemma~\ref{lem:zz_path_particular} holds for
$u\in tt'$
such that
$s'y'\subset s't\subset X_{t'}$.
Let
$u_1\in tt'$
be maximal with this property, i.e.
$y'=t$
for
$y'=y'(u_1)$.
We show that the subarcs
$u_0t'$
and
$u_1t$
in
$tt'$
overlap. At this point, we need the condition
$\alpha\ge\sqrt{2}-1$
in Axiom~M($\alpha$).
\begin{lem}\label{lem:overlap} In notations above, the arcs
$u_0t'$, $u_1t\subset X$
overlap,
$u_0t'\cap u_1t\not=\emptyset$.
\end{lem}
\begin{proof} By the assumption on
$u_0$,
we have
$h=|u_0x|_t=|xy|_t=|xt'|_t$.
Thus the pair
$((x,t),(u_0,t'))$
is harmonic. Then by Axiom~M($\alpha$)
$|u_0t'|_t\ge\sqrt{2}h$.
Taking the metric inversion, we obtain
$$|u_0t|_{t'}=\frac{|u_0t|_t}{|u_0t'|_t|tt'|_t}=\frac{1}{|u_0t'|_t}\le\frac{1}{\sqrt{2}h}.$$
Again, since
$|u_1x'|_{t'}=|x't|_{t'}=h'$,
the pair
$((x',t'),(u_1,t))$
is harmonic. By Axiom~M($\alpha$),
$|u_1t|_{t'}\ge\sqrt{2}h'$.
We show that
$2hh'\ge 1$.
Note that
$h=|st'|_t=|ss'|_t$
by harmonicity of
$q$,
and
$h'=|s't|_{t'}=|ss'|_{t'}$
by harmonicity of
$q'=j(q)$.
Taking the metric inversion, we have
$$|ss'|_{t'}=\frac{|ss'|_t}{|st'|_t|s't'|_t}=\frac{1}{|s't'|_t}.$$
Since
$|s't'|_t\le|s's|_t+|st'|_t$
by the triangle inequality, we see that
$|s't'|_t\le 2h$.
Then
$$hh'=\frac{|st'|_t}{|s't'|_t}\ge 1/2.$$
Therefore,
$2hh'\ge 1$.
Now,
$|u_1t|_{t'}\ge\sqrt{2}h'\ge\frac{1}{\sqrt{2}h}\ge|u_0t|_{t'}$.
Hence
$u_0t'\cap u_1t\not=\emptyset$
by monotonicity.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{pro:diam_quasi-lines}] We use notations
introduced above. For
$p$, $q\in\widehat e_\rho$,
$p=((z,u),(z',u'))$, $q=((s,t),(s',t'))$,
and
$x$, $y\in su\subset X_t$
with
$|ux|_t=|xy|_t=|ys|_t$,
if
$|ut'|_t\le|u_0t'|_t$,
then
$sy\subset st'$
and
$\delta(p,q)\le D$
by Lemma~\ref{lem:zz_path_particular}. In particular,
this condition is fulfilled for
$p=q'=j(q)=((s',t'),(s,t))$
because then
$u=t'$.
Thus
$\delta(q',q)\le D$.
In the opposite case,
$|ut'|_t>|u_0t'|_t$,
we have
$|ut|_{t'}\le|u_1t|_{t'}$
by Lemma~\ref{lem:overlap}. Hence
$\delta(p,q')\le D$.
In this case,
$\delta(p,q)\le\delta(p,q')+\delta(q',q)\le 2D$
by the triangle inequality. Therefore,
$\operatorname{diam}\widehat e_\rho\le 2D$
with
$D<16$.
\end{proof}
\begin{cor}\label{cor:uniform_cobounded} The subspace
$\operatorname{Harm}_\omega\subset\operatorname{Harm}$
is cobounded in
$\operatorname{Harm}$
uniformly in
$\omega\in X$,
that is, for any
$q\in\operatorname{Harm}$, $\omega\in X$
we have
$\operatorname{dist}_\delta(q,\operatorname{Harm}_\omega)\le D$
for some universal constant
$D>0$.
\end{cor}
\begin{proof} We take one of two involutions associated with
$q\in\operatorname{Harm}$,
see sect.~\ref{subsect:construction_harm_to_harm_omega}, and denote it by
$\rho$.
Let
$\widehat e_\rho\subset\operatorname{Harm}$
be the elliptic quasi-line associated with the involution
$\rho:X\to X$.
Then
$q\in\widehat e_\rho$,
see sect.~\ref{subsect:involution_harm}, and by Lemma~\ref{lem:omega_projection},
$h_\omega(q)\in\widehat e_\rho\cap\operatorname{Harm}_\omega$.
Thus
$\operatorname{dist}_\delta(q,\operatorname{Harm}_\omega)\le\delta(q,h_\omega(q))\le\operatorname{diam}_\delta(\widehat e_\rho)\le D$
by Proposition~\ref{pro:diam_quasi-lines}.
\end{proof}
\section{Hyperbolic approximation of $X_\omega$}
\label{sect:hyperbolic_approximations}
A hyperbolic approximation is a kind of hyperbolic cone over a metric
space, see \cite{BS07}. A specific feature of a hyperbolic approximation
of a metric space is that it is defined via families of metric balls in the space
in such a way as to reflect their combinatorics.
\subsection{Definition}
\label{subsect:definition}
The set
$\operatorname{Harm}_\omega$
of harmonic 4-tuples with common entry
$\omega$
can be identified with the set of metric balls in
$X_\omega$.
Indeed, every
$q=((a,b),(o,\omega))\in\operatorname{Harm}_\omega$
determines the sphere
$S_r(o)=(a,b)$
because
$o$
is the midpoint between
$a$, $b$, $|ao|_\omega=|ob|_\omega=:r$,
and hence the ball
$B_r(o)=\set{x\in X_\omega}{$|ox|_\omega\le r$}\subset X_\omega$
with
$\d B_r(o)=S_r(o)$.
Vice versa, given a ball
$B_r(o)\subset X_\omega$
of radius
$r>0$
centered at
$o$,
we have a 4-tuple
$q=((a,b),(o,\omega))$,
where
$(a,b)=\d B_r(o)$,
which is harmonic,
$q\in\operatorname{Harm}_\omega$,
because
$o$
is the midpoint between
$a$, $b$.
A (finite or infinite) sequence of spheres
$S_r(o_i)=(a_i,b_i)\subset X_\omega$
is said to be a {\em harmonic chain of radius}
$r$
if the pair
$((a_i,b_i),(a_{i+1},b_{i+1}))$
is harmonic for every
$i$.
Assuming that an orientation of (and hence an order on)
$X_\omega$
is fixed, and that
$a_i<b_i$, $a_{i+1}<b_{i+1}$, $a_i<a_{i+1}<b_i$,
we observe that
$b_{i+1}>b_i$
because the pairs
$(a_i,b_i)$, $(a_{i+1},b_{i+1})$
separate each other and
$a_{i+1}<b_{i+1}$.
Moreover,
$o_i<a_{i+1}$
since otherwise
$b_{i+1}=\omega$
or
$b_{i+1}<a_{i+1}$.
Similarly,
$b_i<o_{i+1}$.
Speaking about harmonic chains of spheres, we mean that these
assumptions are always satisfied. Note that then the pairs
$(a_i,b_i)$, $(a_{i+2},b_{i+2})$
are in strong causal relation. Indeed, this is equivalent to
$b_i<a_{i+2}$,
which is fulfilled because otherwise
$a_{i+2}\le b_i$
and hence
$a_{i+2}<o_{i+1}$.
But this contradicts the inequality
$o_{i+1}<a_{i+2}$.
We fix
$0<\sigma\le 1/24$
and for every
$k\in\mathbb{Z}$
let
$V_k\subset\operatorname{Harm}_\omega$
be a harmonic chain, infinite in both directions, of radius
$r=\sigma^k$.
We put
$V=\cup_{k\in\mathbb{Z}}V_k\subset\operatorname{Harm}_\omega$
and define a {\em harmonic hyperbolic} approximation
$Z=Z(\sigma)$
of
$X_\omega$
with parameter
$\sigma$
as a graph with the vertex set
$V$.
We consider vertices in
$V$
as spheres (balls) of respective harmonic chains. For any
$v\in V$
we denote by
$B(v)$
the respective ball in
$X_\omega$.
Two vertices
$v$, $v'\in V$
are connected by an edge if and only if they lie on one and the same level
$V_k$
and are in this case neighboring spheres,
$v=S_r(o_i)$, $v'=S_r(o_j)$
with
$|i-j|=1$
and
$r=\sigma^k$,
or
$v\in V_k$, $v'\in V_l$
with
$|k-l|=1$
and in this case the respective ball of the larger level is contained in the respective
ball of the smaller level, i.e.
$B_r(o_i)\subset B_{r'}(o_j)$
if
$r=\sigma^{k+1}$, $r'=\sigma^k$.
An edge
$vv'\subset Z$
is called {\em horizontal} if its vertices lie on one and the same level,
$v$, $v'\in V_k$
for some
$k\in\mathbb{Z}$. Other edges are called {\em radial}. The level function
$\ell:V\to\mathbb{Z}$
is defined by
$\ell(v)=k$
for
$v\in V_k$.
Since every level
$Z_k\subset Z$, $k\in\mathbb{Z}$,
is connected, the graph
$Z$
is connected. We endow
$Z$
with the path metric in which every edge has length 1. We denote
by
$|vv'|$
the distance between points
$v$, $v'\in V$
in
$Z$.
Note that
$Z$
is geodesic because it is connected and distances between vertices take
integer values.
\subsection{Geodesics in $Z$}
\label{subsect:geodesics_z}
The construction of the (harmonic) hyperbolic approximation
$Z$
here is slightly different from that in \cite{BS07}. Thus
we basically follow \cite[sect.~6.2]{BS07} with appropriate adaptation
of the arguments.
\begin{lem}\label{lem:ancestor} For every
$v\in V$
there is a vertex
$w\in V$
with
$\ell(w)=\ell(v)-1$
connected with any
$v'\in V$, $\ell(v')=\ell(v)$, $|vv'|\le 1$,
by a radial edge.
\end{lem}
\begin{proof} There are two neighbors
$v'$, $v''$
of
$v$
in
$Z$,
sitting on the same level as
$v$,
$|vv'|$, $|vv''|\le 1$.
One of them,
$v'$,
is to the left of
$v$,
the other one,
$v''$,
is to the right of
$v$.
Let
$v'=(a',b')$, $v''=(a'',b'')$.
Then
$|a'b''|_\omega\le 6r'$,
where
$r'=\sigma^{k+1}$
for
$k+1=\ell(v)$.
On the other hand, for every neighboring
$w$, $w'\in V_k$,
$w=(c,d)$, $w'=(c',d')$,
the pair
$((c,d),(c',d'))$
is harmonic. Thus
$|c'd|_\omega|cd'|_\omega=|cc'|_\omega|dd'|_\omega$.
Hence
\begin{equation}\label{eq:harmonic_below}
|c'd|_\omega=\frac{|cc'|_\omega|dd'|_\omega}{|cd'|_\omega}\ge\frac{r}{4}
\end{equation}
for
$r=\sigma^k$
because
$|cc'|_\omega\ge r$, $|dd'|_\omega\ge r$
and
$|cd'|_\omega\le 4r$.
For the neighbors
$v'$, $v''$
of
$v$
we have
$v'\cup v\cup v''=a'b''\subset X_\omega$.
Since
$\sigma=r'/r\le 1/24$,
we have
$|a'b''|_\omega\le 6r'\le r/4$.
The balls
$\{w\in V_k\}$
cover
$X_\omega$.
Assume that there is
$w\in V_k$
such that
$(v'\cup v\cup v'')\subset w$.
Then the vertices
$v$, $v'$, $v''\in Z$
are connected with
$w$
by radial edges.
Otherwise
$a'b''$
is covered by no single
$w\in V_k$.
Then there are two neighboring
$w=(c,d)$, $w'=(c',d')\in V_k$
which cover
$a'b''$, $a'b''\subset cd\cup c'd'$.
Assuming that
$w$
is to the left of
$w'$,
we observe that the intersection
$w\cap w'=c'd$.
Since
$|a'b''|_\omega\le |c'd|_\omega$
by the estimate above, we see
that
$a'b''$
is contained in one of
$w$, $w'$
in contradiction with our assumption.
\end{proof}
\begin{lem}\label{lem:geodesics_in_z} Any vertices
$v$, $v'\in V$
can be connected in
$Z$
by a geodesic
$\gamma$
which consists of at most two radial subsegments
$\gamma'$, $\gamma''\subset\gamma$
and at most one horizontal edge between them. If there is such an edge,
then it lies on the lowest level of the geodesic. Otherwise the unique
common vertex
$w$
of
$\gamma'$, $\gamma''$
is the lowest level vertex of
$\gamma$.
\end{lem}
The proof proceeds exactly as in \cite[Lemma~6.2.6]{BS07}
using Lemma~\ref{lem:ancestor} and the fact that for any harmonic chain
$V_k$, $k\in\mathbb{Z}$
two balls
$v$, $v'\in V_k$
intersect if and only if they are neighboring in
$V_k$.
Thus we omit it.
\subsection{Hyperbolicity of $Z$}
\label{subsect:hypbolicity_z}
The Gromov product of
$v$, $v'$
with respect to
$u$
in a metric space
$Z$
is defined by
$$(v|v')_u=\frac{1}{2}(|vu|+|v'u|-|vv'|).$$
A metric space
$Z$
is said to be
$\delta$-{\em hyperbolic}, $\delta\ge 0$,
if for any
$v$, $v'$, $v''\in Z$
and a base point
$u\in Z$,
we have
$$(v|v')_u\ge\min\{(v|v'')_u,(v'|v'')_u\}-\delta.$$
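For orientation, we recall a standard example, which is not used in the sequel: in a metric tree the Gromov product
$(v|v')_u$
equals the distance from
$u$
to the geodesic between
$v$, $v'$,
so that
$$(v|v')_u\ge\min\{(v|v'')_u,(v'|v'')_u\}$$
for any
$v$, $v'$, $v''$,
i.e., every metric tree is
$0$-hyperbolic.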
Now, we come back to our harmonic hyperbolic approximation
$Z$.
\begin{lem}\label{lem:adjacent_vertices} Assume that
$|vv'|\le 1$
for vertices
$v$, $v'\in Z$
of one and the same level,
$\ell(v)=\ell(v')$.
Then
$|ww'|\le 1$
for any vertices
$w$, $w'\in Z$
adjacent to
$v$, $v'$
respectively and sitting one level below.
\end{lem}
\begin{proof} The balls
$B(w)$, $B(w')$
intersect because
$B(v)\subset B(w)$, $B(v')\subset B(w')$
and the balls
$B(v)$, $B(v')$
intersect.
Since
$w$, $w'$
are members of a harmonic chain, they are adjacent in
$Z$, $|ww'|\le 1$.
\end{proof}
From this we immediately obtain the following.
\begin{cor}\label{cor:radial_geodesics_common_ends} For any two radial geodesics
$\gamma$, $\gamma'\subset Z$
with common ends, the distance in
$Z$
between vertices of
$\gamma$
and
$\gamma'$
of the same level is at most 1.
\end{cor}
It is convenient to use the following terminology. Let
$V'\subset V$
be a subset. A point
$u\in V$
is called a {\em cone point} for
$V'$
if
$\ell(u)\le\inf_{v\in V'}\ell(v)$
and every
$v\in V'$
is connected to
$u$
by a radial geodesic. A cone point of maximal level is called
a {\em branch point} of
$V'$.
\begin{lem}\label{lem:cone_point} For any two points
$v$, $v'\in V$
there is a cone point and, hence, a branch point.
\end{lem}
\begin{proof} By Lemma~\ref{lem:geodesics_in_z},
$v$, $v'$
can be connected in
$Z$
by a geodesic
$\gamma$
which contains at most one horizontal edge. If there is no horizontal edge in
$\gamma$,
then the lowest level point
$w$
of
$\gamma$
is a branch point of
$v$, $v'$.
Otherwise, let
$uu'\subset\gamma$
be the horizontal edge. It lies on the lowest level of
$\gamma$.
Without loss of generality, we assume that
$vu$, $v'u'$
are radial geodesics.
By Lemma~\ref{lem:ancestor}, there is
$w\in V$
with
$\ell(w)=\ell(\gamma)-1$
which is connected to
$u$, $u'$
by radial edges. Taking concatenation
$vuw$, $v'u'w$
we see that
$w$
is connected to
$v$, $v'$
by radial geodesics. Hence,
$w$
is a cone point of
$v$, $v'$.
\end{proof}
Note that if
$u$
is a cone point of
$v$, $v'$
and
$w$
is their branch point, then
$(v|v')_u=|uw|$
in the case the geodesic
$vv'$
has no horizontal edge, and
$(v|v')_u=|uw|+1/2$
otherwise. In particular,
$|uw|\ge (v|v')_u-1/2$
in either case.
\begin{lem}\label{lem:cone_point_estimate} Let
$u\in V$
be a cone point of
$v$, $v'\in V$,
$\gamma=uv$, $\gamma'=uv'$
radial geodesics. Then for any
$y\in\gamma$, $y'\in\gamma'$
sitting on the same level
$\ell(y)=\ell(y')\le\ell(w)$,
where
$w$
is a branch point of
$v$, $v'$,
we have
$|yy'|\le 2$.
\end{lem}
\begin{proof} Concatenations
$vwu$, $v'wu$
are radial geodesics in
$Z$.
By Corollary~\ref{cor:radial_geodesics_common_ends}, we have
$|yy''|\le 1$
for
$y\in\gamma$, $y''\in vwu$
sitting on the same level,
$\ell(y)=\ell(y'')$,
and similarly
$|y'y''|\le 1$
for
$y'\in\gamma'$, $y''\in v'wu$
with
$\ell(y')=\ell(y'')$.
For
$\ell(y)=\ell(y')\le\ell(w)$
we can choose
$y''\in wu$
with
$\ell(y'')=\ell(y)=\ell(y')$,
and thus
$|yy'|\le|yy''|+|y''y'|\le 2$.
\end{proof}
We need the following statement, which is \cite[Proposition~6.2.9]{BS07},
and for which we give a different proof.
\begin{lem}\label{lem:de_inequality_cone_point} Let
$v$, $v'$, $v''\in V$
and let
$w$, $w'$, $w''$
be branch points for the pairs of vertices
$\{v',v''\}$, $\{v,v''\}$
and
$\{v,v'\}$
respectively. Let
$u$
be a cone point of
$\{w,w',w''\}$.
Then
$$(v|v')_u\ge\min\{(v|v'')_u,(v'|v'')_u\}-\delta$$
with
$\delta=5/2$.
\end{lem}
\begin{proof} We put
$t_0=\min\{|uw|,|uw'|\}$
and let
$\gamma$, $\gamma'$, $\gamma''$
be radial geodesics between
$u$
and
$v$, $v'$, $v''$
respectively. Assume that
$y\in\gamma$, $y'\in\gamma'$, $y''\in\gamma''$
satisfy
$|uy|=|uy'|=|uy''|=t_0$.
By Lemma~\ref{lem:cone_point_estimate} we have
$|yy''|$, $|y'y''|\le 2$.
Thus by the triangle inequality
$|yy'|\le 4$.
By monotonicity of the Gromov product
$$(v|v')_u\ge (y|y')_u=t_0-\frac{1}{2}|yy'|\ge t_0-2.$$
By the remark above
$t_0\ge\min\{(v|v'')_u,(v'|v'')_u\}-1/2$.
Hence, the claim.
\end{proof}
Using the argument of \cite[Proposition~6.2.10]{BS07}, we obtain the following.
\begin{pro}\label{pro:hyperbolic_harmonic_approximation} Any hyperbolic harmonic
approximation
$Z$
of
$X_\omega$
is a geodesic
$\delta$-hyperbolic
space with
$\delta=5$.
\qed
\end{pro}
\section{$X_\omega$ and $Z$ are quasi-isometric}
\label{sect:Xquasi-isometricZ}
Our aim is to show that for every
$\omega\in X$
the space
$X_\omega$
and its hyperbolic harmonic approximation
$Z=Z(\sigma)$
are quasi-isometric. Let
$V$
be the vertex set of
$Z$.
By definition, we have an inclusion
$f:V\hookrightarrow\operatorname{Harm}_\omega$.
We show that
$f$
is a quasi-isometry with respect to the metric on
$Z$
and the
$\delta$-metric
on
$\operatorname{Harm}_\omega$.
\subsection{Estimates from above}
\label{subsect:estimates_above}
In this section we establish estimates from above, that is, we show that
there is a constant
$D=D(\sigma)$
depending only on
$\sigma$
such that for every edge
$vv'$
of
$Z$
we have
$\delta(v,v')\le D$.
For horizontal edges this is proven in Lemma~\ref{lem:equal_radius_harm},
and for vertical edges in Lemma~\ref{lem:different_levels_radius_unfixed}.
Fix
$\omega\in X$, $r>0$.
Then the sphere
$S_r(o)\subset X_\omega$
of radius
$r$
centered at
$o\in X_\omega$
determines the harmonic pair
$((a,b),(o,\omega))\in\operatorname{Harm}$,
where
$S_r(o)=(a,b)$.
\begin{lem}\label{lem:equal_radius_harm} Fix
$\omega\in X$, $r>0$,
and consider two spheres
$S_r(o)=(a,b)$, $S_r(o')=(a',b')$
in
$X_\omega$
such that the pair of pairs
$((a,b),(a',b'))$
is harmonic. Then the
$\delta$-distance
between harmonic
$q=((a,b),(o,\omega))$
and
$q'=((a',b'),(o',\omega))$
is at most
$2\ln 4$,
$\delta(q,q')\le 2\ln 4$.
\end{lem}
\begin{proof} We fix an orientation of
$X_\omega$
and assume without loss of generality that the ordered pairs
$(a,b)$, $(a',b')$
agree with the orientation, and
$a$
precedes
$b'$.
Note that
$b$
is not on the segment
$o'b'\subset X_\omega$, $b\not\in o'b'$,
see sect.~\ref{subsect:definition}.
The harmonic pairs
$q=((a,b),(o,\omega))$
and
$\widehat q=((a,b),(a',b'))$
have the common axis
$(a,b)$.
Thus the distance
$l$
between
$q$, $\widehat q$
along
$h_{(a,b)}$
is computed as
$$e^l=\frac{|aa'||ob|}{|ao||a'b|}=\frac{|aa'|_\omega}{|a'b|_\omega}$$
because
$|ao|_\omega=r=|ob|_\omega$.
Since
$\widehat q$
is harmonic, we have
$|aa'||bb'|=|a'b||ab'|$.
Thus
$e^l=\frac{|ab'|_\omega}{|bb'|_\omega}$.
By the triangle inequality and monotonicity,
$|ab'|_\omega\le|ab|_\omega+|bb'|_\omega\le|ab|_\omega+|a'b'|_\omega\le 4r$.
By the remark above,
$|bb'|_\omega\ge|o'b'|_\omega=r$.
Therefore,
$l\le\ln 4$.
Similarly,
$\widehat q$
and
$q'$
have the common axis
$(a',b')$,
and
$l'=|q'\widehat q|\le\ln 4$.
Hence,
$\delta(q,q')\le |q\widehat q|+|q'\widehat q|\le 2\ln 4$.
\end{proof}
\begin{cor}\label{cor:horizont_edge_above} For every horizontal edge
$vv'\subset Z$
we have
$\delta(v,v')\le C$
with
$C\le 2\ln 4$.
\end{cor}
\begin{proof} Indeed, the vertices
$v$, $v'$
of any horizontal edge in
$Z$
satisfy the condition of Lemma~\ref{lem:equal_radius_harm}.
\end{proof}
\begin{lem}\label{lem:containing_spheres} Fix
$\omega\in X$, $0<\sigma\le 1/24$,
and consider two spheres
$S_r(o)=(a,b)$, $S_{r'}(o')=(a',b')$
in
$X_\omega$,
where
$r=\sigma^k$, $r'=\sigma^{k+1}$
for some
$k\in\mathbb{Z}$,
such that
$o$
lies in the open interval
$(a'b')\subset X_\omega$, $o\in (a'b')$.
Then the spheres
$(a,b)$, $(a',b')$
do not separate each other in
$X$.
Let
$h\subset\operatorname{Harm}$
be the unique line that contains
$(a,b)$
and
$(a',b')$.
Then the distance
$l$
between
$(a,b)$
and
$(a',b')$
along
$h$
is estimated above as
$l\le\sqrt{2/\sigma}$,
independent of
$k$.
\end{lem}
\begin{proof} To estimate
$l$
we use Lemma~\ref{lem:length_preestimate}. We assume as in the proof of
Lemma~\ref{lem:equal_radius_harm} that the ordered pairs
$(a,b)$, $(a',b')$
agree with a fixed orientation of
$X_\omega$.
Since both
$o'$, $o$
lie in the interval
$(a'b')\subset X_\omega$,
we have
$|o'o|\le|a'b'|\le 2r'$.
Then
$|a'o|\le|a'o'|+|o'o|\le 3r'<r$
because
$\sigma\le 1/24$.
Hence
$a<a'$,
similarly
$b'<b$,
and the pairs
$(a,b)$, $(a',b')\subset X$
do not separate each other. Thus
$p=((a,b),(a',b'))$
is a strip. By Lemma~\ref{lem:length_preestimate} we have
$$l=\operatorname{width}(p)\le 2\sqrt{\frac{|aa'||bb'|}{|ab||a'b'|}}.$$
Since
$o\in(a'b')$,
we have
$|aa'|_\omega$, $|bb'|_\omega\le r$.
Axiom~M($\alpha$) gives
$|ab|_\omega\ge\sqrt{2}r$, $|a'b'|_\omega\ge\sqrt{2}r'$.
Thus
$l\le 2\sqrt{r^2/(2rr')}=\sqrt{2/\sigma}$.
\end{proof}
\begin{lem}\label{lem:different_levels_radius} Fix
$\omega\in X$, $0<\sigma\le 1/24$,
and consider two spheres
$S_r(o)=(a,b)$, $S_{r'}(o')=(a',b')$
in
$X_\omega$,
where
$r=\sigma^k$, $r'=\sigma^{k+1}$
for some
$k\in\mathbb{Z}$,
such that
$o$
lies in the open interval
$(a'b')\subset X_\omega$, $o\in (a'b')$.
Then the
$\delta$-distance
between harmonic
$q=((a,b),(o,\omega))$
and
$q'=((a',b'),(o',\omega))$
is estimated above as
$\delta(q,q')\le\sqrt{2/\sigma}+2\ln 3$
independent of
$k$.
\end{lem}
\begin{proof} We fix an orientation and hence the respective order on
$X_\omega$.
If
$o'=o$,
then
$q$, $q'$
lie on the line
$h_{(o,\omega)}$,
and in this case
$\delta(q,q')=|qq'|=\ln(r/r')=\ln(1/\sigma)<\sqrt{1/\sigma}$.
Thus we assume that
$o'\neq o$.
Without loss of generality, we assume that
$o'<o$
with respect to the order on
$X_\omega$.
We also assume that
$a<b$, $a'<b'$.
As in Lemma~\ref{lem:containing_spheres}, the pairs
$(a,b)$
and
$(a',b')$
do not separate each other. Let
$(c,d)$
be the common perpendicular to
$(a,b)$
and
$(a',b')$,
$h=h_{(c,d)}\subset\operatorname{Harm}$
the unique line containing
$(a,b)$
and
$(a',b')$.
Then we have a zz-path in
$\operatorname{Harm}$
between
$q$, $q'$
which consists of 3 sides.
First, one goes from
$q$
to
$\widehat q=h_{(a,b)}\cap h$
along
$h_{(a,b)}$.
We denote the respective distance by
$m$.
Then one goes along
$h$
from
$\widehat q$
to
$\widehat q'=h\cap h_{(a',b')}$.
By Lemma~\ref{lem:containing_spheres}, the respective distance
$l$
is estimated above as
$l\le\sqrt{2/\sigma}$.
Finally, one goes from
$\widehat q'$
along
$h_{(a',b')}$
to
$q'$.
We denote the respective distance by
$t$.
Thus we need to estimate above
$m$
and
$t$.
We assume without loss of generality that
$c\in(a'b')$.
Note that
$c\not\in o'o$,
since otherwise, as
$o'\neq o$,
the point
$c$
equals neither
$o$
nor
$o'$,
and
$d$
would have to lie simultaneously to the left of
$a$
and to the right of
$b'$,
which is impossible.
We consider two cases:
(1) $c<o'$
and
(2) $o<c$.
Case~(1). We have
$$e^m=\frac{|ao||bc|}{|ac||bo|}=\frac{|bc|_\omega}{|ac|_\omega}.$$
Using that
$|bc|_\omega\le|ab|_\omega\le 2r$
and
$|a'b'|_\omega\le 2r'$,
we have
$|ac|_\omega\ge|aa'|_\omega\ge r-|a'b'|_\omega\ge r-2r'$,
and obtain
$$e^m\le\frac{2r}{r-2r'}\le\frac{2}{1-2\sigma}\le 3.$$
On the other hand,
$$e^m=\frac{|a\omega||bd|}{|ad||b\omega|}=\frac{|bd|_\omega}{|ad|_\omega},$$
thus
$|bd|_\omega/|ad|_\omega\le 3$.
Now we compute
$t$.
By the assumption
$c<o'<o$
we have
$d<a$.
Thus
$|b'd|_\omega\le |bd|_\omega$, $|a'd|_\omega\ge|ad|_\omega$
and we obtain
$$e^t=\frac{|a'\omega||b'd|}{|a'd||b'\omega|}=\frac{|b'd|_\omega}{|a'd|_\omega}\le\frac{|bd|_\omega}{|ad|_\omega}\le 3.$$
Thus
$t\le\ln 3$.
Case~(2). This is obtained similarly to case~(1) by interchanging
$a$, $b$
and
$a'$, $b'$.
Finally,
$\delta(q,q')\le m+l+t\le\sqrt{2/\sigma}+2\ln 3$.
\end{proof}
\begin{lem}\label{lem:different_levels_radius_unfixed} Fix
$\omega\in X$, $0<\sigma\le 1/24$,
and for a sphere
$S_r(o)=(a,b)\subset X_\omega$
consider a maximal harmonic chain of spheres
$S_{r'}(o_i)=(a_i',b_i')\subset X_\omega$, $i=1,\dots,n$,
that is contained in
$(a,b)$,
where
$r=\sigma^k$, $r'=\sigma^{k+1}$
for some
$k\in\mathbb{Z}$.
Then the
$\delta$-distance
between harmonic
$q=((a,b),(o,\omega))$
and
$q_i'=((a_i',b_i'),(o_i',\omega))$,
is estimated above as
$\delta(q,q_i')\le c_1/\sigma+c_2$
for every
$i=1,\dots,n$
independent of
$k$,
where
$c_1\le\sqrt{2}+4\ln 4$, $c_2=2\ln 3$.
\end{lem}
\begin{proof} The segments
$a_i'a_{i+1}'$, $i=1,\dots,n$
have disjoint interiors, and their union covers the union of the spheres
$S_{r'}(o_i)$.
Thus
$$\sum_i|a_i'a_{i+1}'|\le|ab|\le 2r.$$
On the other hand,
$|a_i'a_{i+1}'|\ge|a_i'o_i'|=r'$
because
$o_i'$
lies in the interval
$a_i'a_{i+1}'$,
see sect.~\ref{subsect:definition}. Thus
$n\le 2r/r'=2/\sigma$.
There is
$j\in\{1,\dots,n\}$
such that
$o\in(a_j',b_j')$.
By Lemma~\ref{lem:different_levels_radius}, we have
$\delta(q,q_j')\le\sqrt{2/\sigma}+2\ln 3$.
Using Lemma~\ref{lem:equal_radius_harm}, we obtain
$\delta(q,q_i')\le\delta(q,q_j')+\delta(q_j',q_i')\le\sqrt{2/\sigma}+2\ln 3+2n\ln 4$
for every
$i=1,\dots,n$.
Therefore, using
$n\le 2/\sigma$
and
$\sqrt{2/\sigma}\le\sqrt{2}/\sigma$,
we get
$\delta(q,q_i')\le c_1/\sigma+c_2$,
where
$c_1\le \sqrt{2}+4\ln 4$, $c_2=2\ln 3$.
\end{proof}
\begin{cor}\label{cor:vert_edge_above} For every vertical edge
$vv'\subset Z$
we have
$\delta(v,v')\le C$
with
$C\le(\sqrt{2}+4\ln 4)/\sigma+2\ln 3$.
\end{cor}
\begin{proof} Indeed, vertices
$v$, $v'$
of any vertical edge in
$Z$
satisfy the condition of Lemma~\ref{lem:different_levels_radius_unfixed}.
\end{proof}
\begin{cor}\label{cor:qi_above} For each pair of vertices
$v$, $v'\in V$
we have
$\delta(v,v')\le C|vv'|_Z$
with
$C\le(\sqrt{2}+4\ln 4)/\sigma+2\ln 3$.
\end{cor}
\begin{proof} Let
$\gamma\subset Z$
be a geodesic between
$v$, $v'$, $\gamma=v_0\dots v_n$,
$v_0=v$, $v_n=v'$,
with edges
$v_iv_{i+1}$, $i=0,\dots,n-1$.
By definition, the length of
$\gamma$
is the number of edges it consists,
$|vv'|_Z=|\gamma|_Z=n$.
By Corollaries~\ref{cor:horizont_edge_above}, \ref{cor:vert_edge_above} we have
$\delta(v_i,v_{i+1})\le C|v_iv_{i+1}|_Z=C$.
Thus
$\delta(v,v')\le C|vv'|_Z$.
\end{proof}
\subsection{Estimates from below}
\label{subsect:estimates_below}
We fix an orientation of
$X$.
Then we have a respective order on each
$X_x$, $x\in X$,
induced by the orientation.
\begin{lem}\label{lem:distance_separated_spheres} Fix
$\omega\in X$, $r>0$,
and let
$S_r(o)=(a,b)$, $S_r(o')=(a',b')\subset X_\omega$
be separated spheres with the order
$aba'b'$.
Then the
$\delta$-distance
between harmonic pairs
$q=((a,b),(o,\omega))$, $q'=((a',b'),(o',\omega))\in\operatorname{Harm}$,
is estimated above as
$\delta(q,q')\le C(r,|ba'|_\omega)$,
with
$C(r,|ba'|_\omega)\le 4\ln\left(3\sqrt{\frac{r}{|ba'|_\omega}}+\sqrt{\frac{|ba'|_\omega}{r}}\right)$.
\end{lem}
\begin{proof} Since pairs
$(a,b)$, $(a',b')$
are in strong causal relation, there is a common perpendicular
$h=h_{(x,y)}$
to them. We assume without loss of generality that
$x<y$
with respect to our order on
$X_\omega$.
This implies that
$o<x$
and
$y<o'$.
We have two harmonic
$p=((a,b),(x,y))$, $p'=((x,y),(a'b'))\in\operatorname{Harm}$,
and we denote by
$\alpha$
the segment of
$h_{(a,b)}$
between
$q$
and
$p$,
by
$\gamma$
the segment of
$h_{(x,y)}$
between
$p$
and
$p'$,
and by
$\beta$
the segment of
$h_{(a',b')}$
between
$p'$
and
$q'$.
Then
$\sigma=\alpha\gamma\beta$
is a zz-path between
$q$, $q'$
which consists of three sides
$\alpha$, $\gamma$, $\beta$.
Since
$\delta(q,q')\le|\sigma|$,
we estimate above
$|\sigma|=|\alpha|+|\beta|+|\gamma|$.
We have
$$e^{|\alpha|}=\frac{|ax|_\omega|bo|_\omega}{|ao|_\omega|bx|_\omega}=\frac{|ax|_\omega}{|bx|_\omega},$$
because
$|ao|_\omega=r=|bo|_\omega$.
Similarly,
$$e^{|\beta|}=\frac{|a'o'|_\omega|b'y|_\omega}{|a'y|_\omega|b'o'|_\omega}=\frac{|b'y|_\omega}{|a'y|_\omega},$$
because
$|a'o'|_\omega=r=|b'o'|_\omega$.
Next
$$e^{|\gamma|}=\frac{|xa'|_\omega|by|_\omega}{|xb|_\omega|a'y|_\omega}.$$
Harmonicity of
$p$
means that
$|bx||ay|=|ax||by|$,
and harmonicity of
$p'$
means that
$|a'y||b'x|=|a'x||b'y|$.
Using this, we obtain
$$L:=e^{|\alpha|+|\beta|+|\gamma|}=\frac{|b'x|_\omega^2|ay|_\omega^2}{|a'x|_\omega|b'y|_\omega|ax|_\omega|by|_\omega}.$$
Since
$bx\subset ob\subset X_\omega$
and
$a'y\subset a'o'\subset X_\omega$,
we have
$|bx|_\omega\le r$, $|a'y|_\omega\le r$
by monotonicity. Thus by the triangle inequality
$|ay|_\omega\le |ab|_\omega+|ba'|_\omega+|a'y|_\omega\le 3r+|ba'|_\omega$.
Similarly,
$|b'x|_\omega\le 3r+|ba'|_\omega$.
By monotonicity
$|xa'|_\omega\ge|ba'|_\omega$, $|by|_\omega\ge|ba'|_\omega$,
$|b'y|_\omega\ge|o'b'|_\omega=r$, $|ax|_\omega\ge|ao|_\omega=r$.
Therefore,
$$L\le\frac{(3r+|ba'|_\omega)^4}{r^2|ba'|_\omega^2},$$
and the required estimate follows.
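In more detail, putting
$s=|ba'|_\omega$,
the last estimate reads
$$e^{|\alpha|+|\beta|+|\gamma|}=L\le\left(\frac{3r+s}{\sqrt{rs}}\right)^4=\left(3\sqrt{\frac{r}{s}}+\sqrt{\frac{s}{r}}\right)^4,$$
and taking logarithms gives
$\delta(q,q')\le|\sigma|\le 4\ln\left(3\sqrt{r/s}+\sqrt{s/r}\right)=C(r,s)$.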
\end{proof}
\begin{lem}\label{lem:de_estimate_harmonic_chain} Fix
$\omega\in X$, $r>0$,
and let
$S_r(o_i)=(a_i,b_i)$, $i\in\mathbb{Z}$,
be a harmonic chain in
$X_\omega$.
Then for every sphere
$S_r(o)=(a,b)\subset X_\omega$
we have
$\delta(q,q_i)\le D=4\ln 160$,
where
$q=((a,b),(o,\omega))$, $q_i=((a_i,b_i),(o_i,\omega))\in\operatorname{Harm}_\omega$
with
$i\in\mathbb{Z}$
such that
$ab\cap a_ib_i\neq\emptyset$.
\end{lem}
\begin{proof} If
$o=o_i$
for some
$i\in\mathbb{Z}$,
then
$q=q_i$,
and there is nothing to prove. Thus we assume that
$o\neq o_i$
for every
$i\in\mathbb{Z}$,
and furthermore we assume without loss of generality that
$i=0$,
and we have the following order
$aa_0b$
of points on
$X_\omega$.
Since
$a_0b_0\cap a_kb_k=\emptyset$
for
$|k|\ge 2$,
spheres
$S_r(o)$, $S_r(o_k)$
are separated. For
$k\ge 4$,
the spheres
$S_r(o)$
and
$S_r(o_k)$
are separated by at least the sphere
$S_r(o_2)$.
Thus
$|ba_k|_\omega\ge r$
in this case. On the other hand
$|ba_k|_\omega\le |a_0a_k|_\omega\le 2kr$
by the triangle inequality.
We let
$q_k=((a_k,b_k),(o_k,\omega))$
be the respective harmonic pair. By Lemma~\ref{lem:distance_separated_spheres}, we have
$\delta(q,q_k)\le C(r,|ba_k|_\omega)$,
where
$$C(r,|ba_k|_\omega)\le 4\ln\left(3\sqrt{\frac{r}{|ba_k|_\omega}}+\sqrt{\frac{|ba_k|_\omega}{r}}\right).$$
Thus
$\delta(q,q_4)\le 4\ln(3+\sqrt{8})\le 4\ln 10$.
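Here we use that
$r\le|ba_4|_\omega\le 8r$
by the estimates above, whence
$$3\sqrt{\frac{r}{|ba_4|_\omega}}+\sqrt{\frac{|ba_4|_\omega}{r}}\le 3+\sqrt{8}<10.$$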
By Lemma~\ref{lem:equal_radius_harm},
$\delta(q_k,q_0)\le 2|k|\ln 4$
for every
$k\in\mathbb{Z}$.
Therefore,
$\delta(q,q_0)\le 4\ln 10+8\ln 4=4\ln 160=D$.
\end{proof}
\begin{lem}\label{lem:cobouded_vertex} The set
$V=V(\omega,\sigma)$
is cobounded in
$\operatorname{Harm}_\omega$
uniformly in
$\omega\in X$
with respect to the metric
$\delta$,
that is,
$\delta(p,V)\le D$
for every
$p\in\operatorname{Harm}_\omega$,
where
$D$
depends only on
$\sigma$.
\end{lem}
\begin{proof} Given
$p\in\operatorname{Harm}_\omega$, $p=((a,b),(o,\omega))$, $(a,b)=S_r(o)$,
there is
$k\in\mathbb{Z}$
such that
$\sigma^{k+1}<r\le\sigma^k$.
We take
$q\in\operatorname{Harm}_\omega$, $q=((a',b'),(o,\omega))$
with
$(a',b')=S_{\sigma^k}(o)$.
Then
$p$, $q$
lie on the line
$\operatorname{h}_{(o,\omega)}$
and hence
$\delta(p,q)\le|pq|=\ln\frac{\sigma^k}{r}\le\ln\frac{1}{\sigma}$
(in fact
$\delta(p,q)=|pq|$
by Theorem~\ref{thm:de_metric_space}). By Lemma~\ref{lem:de_estimate_harmonic_chain},
there is
$q'\in V_k$
such that
$\delta(q,q')\le D_1$
with
$D_1=4\ln 160$.
Thus
$\delta(p,V)\le\delta(p,q')\le\ln\frac{1}{\sigma}+D_1=:D$.
\end{proof}
Recall that by Lemma~\ref{lem:geodesics_in_z} any two vertices
$p$, $p'\in V$
are connected by a geodesic
$\gamma$
in
$Z$
which consists of at most two radial subsegments
$\gamma'$, $\gamma''\subset\gamma$
and at most one horizontal edge
$h=qq'$
between them, possibly degenerated,
$q=q'$,
which lies on the lowest level of
$\gamma$, $\gamma=\gamma'\cup h\cup\gamma''$.
We assume that
$|\gamma''|\le|\gamma'|$
and consider two cases, the first is Lemma~\ref{lem:length_below_geod_hyp_approx},
the second one is Lemma~\ref{lem:dedistance_below}.
\begin{lem}\label{lem:length_below_geod_hyp_approx}
Given vertices
$p$, $p'\in V$,
assume that
$|\gamma''|\le 1$
for a geodesic
$\gamma=\gamma'\cup h\cup\gamma''$
between
$p$, $p'$.
Then
$\delta(p,p')\ge C|pp'|_Z-D$
for
$C=\ln\frac{1}{\sigma}$
and a constant
$D\ge 0$
depending only on
$\sigma$.
\end{lem}
\begin{proof}
By our assumption,
$|\gamma'|\ge|\gamma|-2=|pp'|_Z-2$,
and
$\gamma'\subset Z$
is a radial geodesic between harmonic
$p$
and
$q$
in
$X_\omega$, $|\gamma'|=|pq|_Z$,
where
$p=((a,b),(o,\omega))$, $q=((c,d),(o',\omega))$, $(a,b)=S_r(o)$, $(c,d)=S_{r'}(o')$,
$r=\sigma^l$, $r'=\sigma^k$.
For the levels
$l=\ell(p)$
and
$k=\ell(q)$
we have
$l>k$
and
$|\gamma'|=l-k$.
The part
$h\cup\gamma''$
of
$\gamma$
consists of at most two edges between
$q$
and
$p'$,
one horizontal and one radial, thus
$\delta(q,p')\le D_1$
by Corollaries~\ref{cor:horizont_edge_above}, \ref{cor:vert_edge_above},
with
$D_1\le(\sqrt{2}+4\ln 4)/\sigma+2\ln 12$.
We take the sphere
$S_{r'}(o)=(a',b')\subset X_\omega$,
and consider the harmonic
$\widehat p=((a',b'),(o,\omega))\in\operatorname{Harm}_\omega$.
Then by the triangle inequality we have
$|\delta(p,q)-\delta(p,\widehat p)|\le\delta(q,\widehat p)$.
Since
$q$
is a vertex of the hyperbolic approximation
$Z$,
the sphere
$S_{r'}(o)$
is a member of a harmonic chain. Since
$pq\subset Z$
is a radial geodesic segment,
$ab\subset cd\subset X_\omega$.
By the choice of
$S_{r'}(o)$,
we have
$ab\subset a'b'$,
whence
$cd\cap a'b'\neq\emptyset$.
Thus we can apply Lemma~\ref{lem:de_estimate_harmonic_chain} to
$q$, $\widehat p$,
and obtain
$\delta(q,\widehat p)\le D_2=4\ln 160$.
Therefore,
$\delta(p,q)\ge\delta(p,\widehat p)-D_2$.
On the other hand,
$p$, $\widehat p$
lie on a line in
$\operatorname{Harm}_\omega$,
thus
$|p\widehat p|=\ln(r'/r)=(l-k)\ln\frac{1}{\sigma}$
because
$r'/r=1/\sigma^{l-k}$.
By Theorem~\ref{thm:de_metric_space},
$\delta(p,\widehat p)=|p\widehat p|$.
Furthermore,
$|pq|_Z=l-k$
because
$pq\subset Z$
is a radial geodesic segment. Therefore,
$\delta(p,q)\ge C|pq|_Z-D_2$
with
$C=\ln(1/\sigma)$.
Finally,
$\delta(p,p')\ge\delta(p,q)-\delta(q,p')\ge C|pq|_Z-(D_1+D_2)\ge C(|pp'|_Z-2)-(D_1+D_2)=
C|pp'|_Z-D$
with
$D=2C+D_1+D_2$.
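Unwinding the bounds quoted above gives, for the reader's convenience, the admissible explicit choice
$$D=2\ln\frac{1}{\sigma}+\sqrt{2/\sigma}+2\ln 12+4\ln 160;$$
any larger constant of course also works.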
\end{proof}
\begin{lem}\label{lem:dedistance_below} Given vertices
$p$, $p'\in V$,
assume that
$|\gamma''|\ge 2$
for a geodesic
$\gamma=\gamma'\cup h\cup\gamma''$
between
$p$, $p'$.
Then
$\delta(p,p')\ge C|pp'|_Z-D$
with
$C=\frac{1}{2}\ln\frac{1}{\sigma}$
and
$D$
depending only on
$\sigma$.
\end{lem}
\begin{proof} As in Lemma~\ref{lem:length_below_geod_hyp_approx},
$\gamma'\subset Z$
is a radial geodesic between the harmonics
$p$
and
$q$
in
$X_\omega$, $|\gamma'|=|pq|_Z=l-k$,
where
$\ell(p)=l$, $\ell(q)=k$.
Then
$k$
is the level of the horizontal edge
$h=qq'$, that is $k=\ell(q)=\ell(q')$,
and
$\gamma''\subset Z$
is a radial geodesic between the harmonics
$q'$
and
$p'$
in
$X_\omega$, $|\gamma''|=|p'q'|_Z=l'-k$
where
$\ell(p')=l'$.
We may assume without loss of generality that
$|\gamma'|\ge|\gamma''|$.
Thus
$l\ge l'$,
and
$|pp'|_Z=|\gamma|\le|\gamma'|+|\gamma''|+1\le 2|\gamma'|+1=2(l-k)+1$.
Let
$S$
be a zz-path in
$\operatorname{Harm}$
between
$p$, $p'$
that approximates the distance
$\delta(p,p')$, $\delta(p,p')\ge |S|-\varepsilon$
for some
$\varepsilon>0$.
We fix an involution
$\rho:X\to X$
associated with
$p'=((a',b'),t')$, $t'=(o',\omega)$,
see sect.~\ref{subsect:involution_harm}, and let
$e=e_\rho$
be the respective elliptic quasi-line. By Lemma~\ref{lem:harmonic_pairs_ellitic},
there is a unique
$s\in e$
such that the pair
$\widehat q=(s,t)$
is harmonic, where
$p=((a,b),t)$, $t=(o,\omega)$.
Again, by Lemma~\ref{lem:harmonic_pairs_ellitic}, there is a unique
$t''\in e$
such that the pair
$(s,t'')$
is harmonic. Thus
$q''=(s,t'')\in\widehat e$
as well as
$p'\in\widehat e$
by definition of
$e$.
By Proposition~\ref{pro:diam_quasi-lines},
$\delta(p',q'')\le D_0$
for some universal constant
$D_0<16$.
Hence, there is a zz-path
$S'$
between
$p'$
and
$q''$
with
$|S'|\le D_0+\varepsilon$.
Note that
$t$, $t''$
lie on the line
$\operatorname{h}_s$.
Let
$S''$
be a zz-path between
$q''=(s,t'')$
and
$\widehat q=(s,t)$
which consists of one side,
$S''\subset\operatorname{h}_s$.
Then
$p=((a,b),t)$
and
$\widehat q$
lie on the line
$\operatorname{h}_t$.
Thus the concatenation
$\widehat S:=S\ast S'\ast S''\ast \widehat qp$
is a closed zz-path in
$\operatorname{Harm}$.
We apply \cite[Proposition~6.1]{Bu18} to conclude
$|S|+|S'|+|S''|>|p\widehat q|$.
The projection
$\operatorname{pr}_t:S\ast S'\ast S''\to\operatorname{h}_t$
does not increase distances, see \cite[Lemma~5.5 and Proposition~6.1]{Bu18},
and
$|\operatorname{pr}_t(S'')|=0$
because
$\operatorname{pr}_t(S'')=\widehat q$.
Therefore,
$|S|\ge |p\widehat q|-(D_0+\varepsilon)$.
By definition of
$e$,
we have
$t'=(o',\omega)\in e$.
We denote
$s=(z,u)$.
Since
$(s,t)$
is harmonic, we have
$|zo|_\omega=|ou|_\omega$.
We assume without loss of generality that
$o<o'$, $z<o<u$
with respect to our fixed order on
$X_\omega$.
Since
$s$, $t'\in e$,
the pairs
$s=(z,u)$
and
$t'=(o',\omega)$
separate each other, see Lemma~\ref{lem:separate}. Hence,
$|ou|_\omega>|oo'|_\omega$.
We denote by
$p_n$
the vertex of
$\gamma'$
on the level
$n$, $\ell(p_n)=n$, $k\le n\le l$,
and similarly by
$p_n'$
the vertex of
$\gamma''$
on the level
$n$, $\ell(p_n')=n$, $k\le n\le l'$.
Denote by
$\alpha_n$
the curve in
$Z$
between
$p_n$
and
$p_n'$
consisting of horizontal edges. Since by assumption
$|\gamma''|\ge 2$,
there is a vertex
$p_n'\in\gamma''$
with
$n=k+2$.
Note that
$|\alpha_{k+2}|\ge 4$
because otherwise we can shorten the geodesic
$\gamma$
between
$p$
and
$p'$.
Therefore, there is an edge
$vv'\subset\alpha_{k+2}$
with vertices
$v$, $v'$
different from the ends
$p_{k+2}$, $p_{k+2}'$
of
$\alpha_{k+2}$.
Thus the intersection
$B_v\cap B_{v'}$
misses the balls
$B_{p_{k+2}}$
and
$B_{p_{k+2}'}$
by properties of harmonic chains. Here
$B_v\subset X_\omega$
is the ball corresponding to the vertex
$v\in V$.
Since
$\gamma'$, $\gamma''\subset Z$
are radial geodesics, we have
$B_p\subset B_{p_{k+2}}$, $B_{p'}\subset B_{p_{k+2}'}$
for respective balls in
$X_\omega$.
Recall that
$o$
is the center of
$B_p$,
and
$o'$
the center of
$B_{p'}$.
It follows that the intersection
$B_v\cap B_{v'}$
is a segment on
$X_\omega$
lying inside of the segment
$oo'\subset X_\omega$.
By inequality~(\ref{eq:harmonic_below}),
$|B_v\cap B_{v'}|\ge r/4$
for
$r=\sigma^{k+2}$,
and we obtain
$|oo'|_\omega\ge\sigma^{k+2}/4$.
Thus
$$|p\widehat q|=\ln\frac{|ou|_\omega}{\sigma^l}\ge\ln\frac{|oo'|_\omega}{\sigma^l}\ge\ln\frac{\sigma^{k+2}}{4\sigma^l}=
(l-k-2)\ln\frac{1}{\sigma}-\ln 4.$$
Since
$|\gamma|\le 2(l-k)+1$,
that is $l-k\ge(|\gamma|-1)/2$,
we have
$|p\widehat q|\ge|\gamma|/2\cdot\ln\frac{1}{\sigma}-D_1$
with
$D_1=\frac{5}{2}\ln\frac{1}{\sigma}+\ln 4$.
Therefore,
$$|S|\ge|p\widehat q|-(D_0+\varepsilon)\ge C|\gamma|-(D_0+D_1+\varepsilon),$$
where
$C=\frac{1}{2}\ln\frac{1}{\sigma}$.
Finally, we conclude
$\delta(p,p')\ge C|pp'|_Z-D$,
where
$D=D_0+D_1$.
\end{proof}
\begin{pro}\label{pro:inclusion_quasi-isometry} The inclusion
$f:V\hookrightarrow\operatorname{Harm}_\omega$
is a quasi-isometry with respect to the metric on
$Z$
and
$\delta$-metric
on
$\operatorname{Harm}_\omega$.
\end{pro}
\begin{proof} By Corollary~\ref{cor:qi_above} we have
$\delta(v,v')\le C|vv'|_Z$
for every pair of vertices
$v$, $v'\in V$,
where the constant
$C$
depends only on
$\sigma$.
By Lemmas~\ref{lem:length_below_geod_hyp_approx} and \ref{lem:dedistance_below}
we have
$\delta(v,v')\ge C|vv'|_Z-D$
for every pair of vertices
$v$, $v'\in V$,
where the constants
$C$, $D$
depend only on
$\sigma$.
Thus the map
$f$
is quasi-isometric. By Lemma~\ref{lem:cobouded_vertex}, the set
$V$
is cobounded in
$\operatorname{Harm}_\omega$.
Thus
$f$
is a quasi-isometry.
\end{proof}
\begin{pro}\label{pro:de_metric_space_hyp} Assume that a M\"obius structure
$M$
on
$X=S^1$
is strictly monotone, i.e., it satisfies axioms~(T), (M($\alpha$)), (P),
and satisfies the Increment axiom. Then
$(\operatorname{Harm},\delta)$
is a complete, proper, hyperbolic geodesic metric space with
$\delta$-metric
topology coinciding with that induced from
$X^4$.
\end{pro}
\begin{proof} By Theorem~\ref{thm:de_metric_space},
$(\operatorname{Harm},\delta)$
is a complete, proper, geodesic metric space with
$\delta$-metric
topology coinciding with that induced from
$X^4$.
By Corollary~\ref{cor:uniform_cobounded}, each of its subsets
$\operatorname{Harm}_\omega$, $\omega\in X$,
is quasi-isometric to
$(\operatorname{Harm},\delta)$.
Using Proposition~\ref{pro:inclusion_quasi-isometry}, we see that
$(\operatorname{Harm}_\omega,\delta)$
is quasi-isometric to its hyperbolic approximation
$Z=Z(\omega,\sigma)$. Thus
$(\operatorname{Harm},\delta)$
is quasi-isometric to
$Z$.
By Proposition~\ref{pro:hyperbolic_harmonic_approximation},
$Z$
is hyperbolic. Since both spaces
$(\operatorname{Harm},\delta)$
and
$Z$
are geodesic, the space
$(\operatorname{Harm},\delta)$
is hyperbolic.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main}] We define
$Y=(\operatorname{Harm},\delta)$.
By Proposition~\ref{pro:de_metric_space_hyp},
$Y$
is a complete, proper, hyperbolic geodesic metric space. We clearly
have
$\d_{\infty}\operatorname{Harm}_\omega=X_\omega$
for every
$\omega\in X$.
Since
$\operatorname{Harm}_\omega$
is cobounded in
$Y$,
we have
$\d_{\infty} Y=X_\omega\cup\{\omega\}=X=S^1$.
The fact that the induced M\"obius structure
$M_Y$
on
$X$
is isomorphic to
$M$
is tautological because all of the geometry of
$Y$
including
$Y$
itself is determined via
$M$.
In particular, given two points
$x$, $x'\in\d_{\infty} Y$,
we take
$\omega\in\d_{\infty} Y$
different from
$x$, $x'$.
Then
$x$, $x'\in X_\omega$,
and we consider the line
$\operatorname{h}=\operatorname{h}_{(x,\omega)}\subset\operatorname{Harm}_\omega\subset Y$.
Furthermore, we fix
$y\in X_\omega$, $y\neq x$,
and observe that there are points
$p$, $q\in\operatorname{h}$
such that
$x\in p$, $y\in q$.
Then
$|xx'|_\omega=\beta e^{\pm|pq|}$
for some fixed constant
$\beta(=|xy|_\omega)$.
In other words, the metric of
$X_\omega$
is recovered from the geometry of
$Y$.
\end{proof}
\section{Introduction}
A family $\mathcal I \subseteq [\omega]^\omega$ is said to be {\em independent} if for any finitely many distinct $X_0, ..., X_{n-1} \in \mathcal I$ and all $g:n \to 2$ we have that $X^{g(0)}_0 \cap ... \cap X^{g(n-1)}_{n-1}$ is infinite, where $X_i^{g(i)}$ means $X_i$ if $g(i) = 1$ and $\omega \setminus X_i$ if $g(i) = 0$. Such a family is said to be {\em maximal} if it is not properly contained in any other independent family. The {\em independence number} $\mfi$ is the least size of a maximal independent family. Despite being one of the classical cardinal characteristics, $\mfi$ is notoriously difficult to manipulate. Indeed many relatively simple open questions remain surrounding $\mfi$, most notably the consistency of $\mfi < \mfa$ where $\mfa$ is the almost disjointness number\footnote{See the appendix of \cite{CFGS21} for a discussion of this problem.}. Part of the issue is that $\mfi$ has no known upper bound besides the trivial $2^{\aleph_0}$, while it has several lower bounds; thus keeping $\mfi$ small requires preserving the smallness of several other cardinal characteristics simultaneously.
One of the first breakthroughs in studying $\mfi$ came in \cite{Sh92} where the consistency of $\mfi < \mfu$ was established\footnote{The cardinal $\mfu$, the {\em ultrafilter number} is the least size of an ultrafilter base on $\omega$.}. There, a special independent family, now known as a {\em selective} independent family was constructed under $\CH$ and it was shown that under somewhat delicate conditions such a family's maximality could be preserved over a countable support iteration of length $\omega_2$ of certain proper forcing notions. Since then selective independence has become one of the main tools in providing models of $\mfi < 2^{\aleph_0}$ with interesting properties. See e.g \cite{DefMIF, FM19, CFGS21}. In particular selective independent families are Sacks indestructible.
All the published examples in the literature\footnote{At least all examples the author is aware of.} of countable support iterations of proper forcing notions which are shown to preserve selective independent families are such that the iterands all have the Sacks property. The Sacks property is used throughout these proofs\footnote{Though note that the Sacks property is not enough to ensure that selective independence is preserved: Silver forcing has the Sacks property but will kill the maximality of any ground model independent family and, if iterated $\omega_2$ many times will result in a model of $\mfi = 2^{\aleph_0} > \aleph_1$.}. However the Sacks property is not needed and is overkill. In this article we show that the $h$-perfect tree forcing notions\footnote{See Definition \ref{hperfecttreedef} for the definition of these posets.} introduced by Goldstern, Judah and Shelah in \cite{SMZnoCohen} also preserve selective independence, though they may, and often do depending on $h$, fail to have the Sacks property.
\begin{maintheorem}[$\CH$]
\begin{enumerate}
\item
Let $h:\omega \to \omega$ be any function so that for all $n < \omega$ we have $1 < h(n) < \omega$. Then the $h$-perfect tree forcing $\PT_h$ preserves any ground model selective independent family.
\item
Let $\delta$ be an ordinal and $\langle \P_\alpha, \dot{\Q}_\alpha \; | \; \alpha < \delta\rangle$ be a countable support iteration so that for all $\alpha < \delta$ we have \begin{center}$\forces_\alpha$``\, $\dot{\Q}_\alpha$ is the $h$-perfect tree forcing for some $h \in \baire$ with $1 < h(n) < \omega$ for all $n < \omega$''.\end{center}
\noindent Then $\P_\delta$ preserves all ground model selective independent families.
\end{enumerate}
\label{mainthm1}
\end{maintheorem}
This allows us to show that in the models obtained by iterating such forcing notions there is a selective independent family of size $\aleph_1$ and, in particular, $\mfi = \aleph_1$. As a result we obtain the following consistency results.
\begin{maintheorem}
The following are consistent.
\begin{enumerate}
\item
$\mfi = \mfu < \non(\N) = \cof(\N) = 2^{\aleph_0}$
\item
$\mfi < \mfu = \non(\N) = \cof(\N) = 2^{\aleph_0}$
\end{enumerate}
\label{mainthm2}
\end{maintheorem}
Finally, riffing off work of Brendle, Fischer and Khomskii \cite{DefMIF}, Schilhan \cite{Schilhanultrafilter} and Bergfalk, Fischer and the author \cite{BFS21}, we can obtain that the cardinal characteristic inequalities above are consistent with $\Pi^1_1$-definable witnesses and a $\Delta^1_3$-definable well-order of the reals.
\begin{maintheorem}
The cardinal characteristic inequalities featured in Main Theorem \ref{mainthm2} are consistent with a $\Delta^1_3$ well-order of the reals, a $\Pi^1_1$ witness to $\mfi = \aleph_1$ and, in the case of the first inequality, a $\Pi^1_1$ witness to $\mfu = \aleph_1$.
\label{mainthm3}
\end{maintheorem}
The rest of this paper is organized as follows. In the next section we review the basics of maximal independent families and selective independence. In the following section we introduce the $h$-perfect tree forcing of \cite{SMZnoCohen} and prove Main Theorem \ref{mainthm1}. In Section 4 we move on to applications and prove in particular Main Theorems \ref{mainthm2} and \ref{mainthm3}. We also discuss the relation between independent families and strong measure zero sets. Section 5 concludes with a discussion and some open questions. Throughout, our terminology is mostly standard and conforms e.g. to that of \cite{JechST} and \cite{Hal17}. For combinatorial cardinal characteristics of the continuum we follow \cite{BlassHB}.
Let us finally stress that most of the results in this paper, in particular Main Theorem \ref{mainthm1}, are probably not new and indeed were suggested to the author by both J\"{o}rg Brendle and Vera Fischer\footnote{Private communication.}. However, they do not seem to ever have been written down, at least not explicitly, and this seemed worthwhile to do. In particular, while the consistency of $\mfi < \non(\N)$ was shown in \cite[Theorem 3.8]{BHHH04}, the proof uses a short finite support iteration of ccc forcing notions over a model of $\MA$ (a ``dual iteration'') and hence is very different from the model constructed here. Moreover, in the model in \cite{BHHH04} we do not know the value of $\mfu$.
\bigskip
\noindent {\em Acknowledgments}. The author thanks J\"{o}rg Brendle for pointing out \cite{SMZnoCohen} to him and suggesting that $h$-perfect forcing may preserve selective independent families. The author thanks Vera Fischer for many very helpful conversations on this material and sharing her wealth of knowledge on selective independence and the cardinal $\mfi$.
\section{Selective Independence}
In this section we introduce the notion of a {\em selective independent family}. The reader familiar with this idea, for example as presented in \cite{DefMIF} or \cite{FM19}, can comfortably skip this section as nothing is new. Selective independent families were introduced implicitly in Shelah's proof of the consistency of $\mfi < \mfu$ in \cite{Sh92}. To facilitate the discussion we utilize the following notation.
\begin{notation} For $\mathcal{I}\subseteq [\omega]^{\omega}$,\begin{itemize}
\item let $\mathrm{FF}(\mathcal{I})$ denote the set of finite partial functions $g$ from $\mathcal{I}$ to $\{0,1\}$, and
\item for $g\in\mathrm{FF}(\mathcal{I})$ write $\mathcal{I}^g$ for $$\bigcap\{A\mid A\in\mathrm{dom}(g)\textnormal{ and }g(A)=1\}\cap\bigcap\{\omega\backslash A\mid A\in\mathrm{dom}(g)\textnormal{ and }g(A)=0\}.$$
\end{itemize}
\end{notation}
In this notation, a family $\mathcal{I}\subseteq [\omega]^\omega$ is \emph{independent} if $\mathcal{I}^g$ is infinite for all $g\in\mathrm{FF}(\mathcal{I})$. An independent family $\mathcal{I}$ is \emph{maximal} if
$$\forall X\in [\omega]^\omega\;\exists g\in\mathrm{FF}(\mathcal{I})\text{ such that }\mathcal{I}^g\cap X\text{ or }\mathcal{I}^g\backslash X\text{ is finite.}$$
We will need a slight strengthening of maximality.
\begin{definition}
An independent family $\mathcal{I}$ is \emph{densely maximal} if $$\forall X\in [\omega]^\omega\text{ and }g'\in\mathrm{FF}(\mathcal{I})\;\exists g\supseteq g'\text{ in }\mathrm{FF}(\mathcal{I})\text{ such that }\mathcal{I}^g\cap X\text{ or }\mathcal{I}^g\backslash X\text{ is finite.}$$
\end{definition}
In other words, an independent family $\mathcal I$ is {\em densely maximal} if for each $X \in [\omega]^\omega$ the collection of $g$'s witnessing that $\mathcal I \cup \{X\}$ is not a larger independent family is dense in $(\mathrm{FF}(\mathcal I), \supseteq)$.
\begin{definition}
Let $\mathcal I$ be an independent family. The \emph{density ideal of $\mathcal{I}$}, denoted $\mathrm{id}(\mathcal{I})$, is $$\{X\subseteq\omega\mid \forall g'\in\mathrm{FF}(\mathcal{I})\;\exists g\supseteq g'\text{ in }\mathrm{FF}(\mathcal{I})\text{ such that }\mathcal{I}^g\cap X\text{ is finite}\}.$$
Dual to the density ideal of $\mathcal{I}$ is the \emph{density filter of $\mathcal{I}$}, denoted $\mathrm{fil}(\mathcal{I})$ and defined as $$\{X\subseteq\omega\mid \forall g'\in\mathrm{FF}(\mathcal{I})\;\exists g\supseteq g'\text{ in }\mathrm{FF}(\mathcal{I})\text{ such that }\mathcal{I}^g\backslash X\text{ is finite}\}.$$
\end{definition}
Observe also that for an infinite independent family $\mathcal{I}$, none of the above definitions' meanings change if we replace the word ``finite'' with ``empty''. We have as well the following from \cite[Lemma 5.4]{BFS21}.
\begin{lemma}\label{lemma0} A family $\mathcal{I}\subseteq [\omega]^\omega$ is densely maximal if and only if $$P(\omega)=\mathrm{fil}(\mathcal{I})\cup\langle\omega\backslash\mathcal{I}^g\mid g\in\mathrm{FF}(\mathcal{I})\rangle_{\mathrm{dn}}.$$
\end{lemma}
Here, for a set $\mathcal X \subseteq [\omega]^\omega$ the set $\langle \mathcal X \rangle_{\mathrm{dn}}$ denotes the downward closure of $\mathcal X$ under $\subseteq^*$ i.e. $A \in \langle \mathcal X \rangle_{\mathrm{dn}}$ if and only if there is an $X \in \mathcal X$ with $A \subseteq^* X$. Later on we will similarly denote by $\langle \mathcal X\rangle_{\mathrm{up}}$ the upward closure of $\mathcal X$ under $\subseteq^*$.
The following are easily verified, see \cite[Lemma 5.5]{BFS21}.
\begin{lemma}\label{lemma1}
\begin{enumerate}
\item
If $\mathcal I'$ is an independent family and $\mathcal{I}\subseteq\mathcal{I}'$ then $\mathrm{fil}(\mathcal{I})\subseteq\mathrm{fil}(\mathcal{I}')$;
\item if $\kappa$ is a regular uncountable cardinal and $\langle\mathcal{I}_\alpha\mid\alpha<\kappa\rangle$ is a continuous increasing chain of independent families then $\mathrm{fil}(\bigcup_{\alpha<\kappa}\mathcal{I}_\alpha)=\bigcup_{\alpha<\kappa}\mathrm{fil}(\mathcal{I}_\alpha)$;
\item If $\mathcal I$ is an independent family then $\mathrm{fil}(\mathcal{I})=\bigcup\{\,\mathrm{fil}(\mathcal{J})\mid \mathcal{J}\in [\mathcal{I}]^{\leq\omega}\}$.
\end{enumerate}
\end{lemma}
Recall that given a family $\mathcal F$ of subsets of $\omega$ we say that
\begin{enumerate}
\item
$\mathcal F$ is a $P$-{\em set} if every countable family $\{A_n \; | \; n < \omega\} \subseteq \mathcal F$ has a pseudointersection $B \in \mathcal F$,
\item
$\mathcal F$ is a $Q$-{\em set} if for every partition of $\omega$ into finite sets $\{I_n \; |\; n < \omega\}$ there is a {\em semiselector} $A \in \mathcal F$, i.e. $|A \cap I_n| \leq 1$ for all $n < \omega$,
\item
$\mathcal F$ is {\em Ramsey} if it is both a $P$-set and a $Q$-set.
\end{enumerate}
If $\mathcal F$ is a filter and a $P$-set (respectively a $Q$-set, Ramsey set) we call $\mathcal F$ a $P$-filter (respectively a $Q$-filter, Ramsey filter).
\begin{definition}
An independent family $\mathcal{I}$ is \emph{selective} if it is densely maximal and $\mathrm{fil}(\mathcal{I})$ is Ramsey.
\end{definition}
\begin{fact}[Shelah, see \cite{Sh92}] $\CH$ implies the existence of a selective independent family.
\end{fact}
Under certain circumstances countable support iterations of proper forcing notions over a model of $\CH$ will preserve that a given ground model selective independent family is maximal. Towards clarifying the meaning of ``certain conditions'' recall the following preservation result, due to Shelah, see \cite[Lemma 3.2]{Sh92}.
\begin{theorem}\label{Shelah_preservation1}
Assume \textsf{CH}. Let $\delta$ be a limit ordinal and let $\langle\mathbb{P}_\alpha,\dot{\mathbb{Q}}_\alpha\mid\alpha<\delta\rangle$ be a countable support iteration of ${^\omega}\omega$-bounding proper posets. Let $\mathcal{F}\subseteq P(\omega)$ be a Ramsey set and let $\mathcal{H}$ be a subset of $P(\omega)\backslash\langle\mathcal{F}\rangle_{\mathrm{up}}$. If $V^{\mathbb{P}_\alpha}\vDash P(\omega)=\langle\mathcal{F}\rangle_{\mathrm{up}}\cup\langle\mathcal{H}\rangle_{\mathrm{dn}}$ for all $\alpha<\delta$ then $V^{\mathbb{P}_\delta}\vDash P(\omega)=\langle\mathcal{F}\rangle_{\mathrm{up}}\cup\langle\mathcal{H}\rangle_{\mathrm{dn}}$ as well.
\end{theorem}
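Let us record the instance of this theorem that will be relevant below. By Lemma~\ref{lemma0}, an independent family $\mathcal{I}$ is densely maximal exactly when
$$P(\omega)=\langle\mathrm{fil}(\mathcal{I})\rangle_{\mathrm{up}}\cup\langle\{\omega\setminus\mathcal{I}^g\mid g\in\mathrm{FF}(\mathcal{I})\}\rangle_{\mathrm{dn}},$$
so for a selective independent family Theorem~\ref{Shelah_preservation1} may be applied with $\mathcal{F}=\mathrm{fil}(\mathcal{I})$ (which is Ramsey) and $\mathcal{H}=\{\omega\setminus\mathcal{I}^g\mid g\in\mathrm{FF}(\mathcal{I})\}$; note that $\mathcal{H}$ is disjoint from $\langle\mathcal{F}\rangle_{\mathrm{up}}$ since $\mathcal{I}$ is independent.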
We will also need the notion of {\em Cohen preserving}.
\begin{definition}
Let $\P$ be a forcing notion. We say that $\P$ is {\em Cohen preserving} if every new dense open subset of $2^{{<}\omega}$ (or, equivalently $\omega^{{<}\omega}$, ...) contains an old dense subset. More formally, $\P$ is Cohen preserving if for all $p \in \P$ and all $\P$-names $\dot{D}$ so that $p \forces$ ``$\dot{D} \subseteq 2^{{<}\omega}$ is dense open'' there is a dense $E \subseteq 2^{{<}\omega}$ in the ground model and a $q \leq_\P p$ so that $q \forces \check{E} \subseteq \dot{D}$.
\end{definition}
Being Cohen preserving is preserved by countable support iterations of proper forcing notions.
\begin{theorem}[Shelah, see Conclusion 2.15D, pg. 305 of \cite{PIP}; see also \cite{FM19}, Theorem 27]
If $\delta$ is an ordinal and $\langle \Q_\alpha, \dot{\mathbb R}_\alpha \; | \; \alpha < \delta\rangle$ is a countable support iteration of forcing notions so that for each $\alpha < \delta$ we have $\forces_\alpha$``$\dot{\mathbb R}_\alpha$ is proper and Cohen preserving'' then $\Q_\delta$ is proper and Cohen preserving.
\label{Shelah_preservation2}
\end{theorem}
\section{$h$-Perfect Trees Preserve Selective Independent Families}
In this section we prove Main Theorem \ref{mainthm1}. First, recall the definition of $h$-perfect tree forcing from \cite{SMZnoCohen}.
\begin{definition}
Given a function $h:\omega \to \omega$ with $1 < h(n) < \omega$ for all $n < \omega$, the $h${\em -perfect tree forcing}, denoted $\mathbb{PT}_h$, is the forcing notion consisting of trees $p \subseteq \omega^{<\omega}$ so that the following hold:
\begin{enumerate}
\item
For all $t \in p$ and all $l \in {\rm dom}(t)$ we have $t(l) < h(l)$.
\item
Every $t \in p$ has either one or $h(l(t))$-many immediate successors in $p$.
\item
For every $t \in p$ there is a $t' \supseteq t$ with $t' \in p$ and there are $h(l(t'))$ many immediate successors of $t'$ in $p$.
\end{enumerate}
The order is inclusion.
\label{hperfecttreedef}
\end{definition}
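To make the splitting conditions concrete, the following is a small toy sketch in Python, not part of the formal development: the function names and the finite-depth truncation are our own ad hoc devices. It checks conditions (1) and (2) of the definition on a finite truncation of a tree; condition (3), which demands cofinally many fully splitting extensions, is genuinely asymptotic and is therefore not checked.

```python
from itertools import product

def full_h_tree(h, depth):
    """All h-bounded sequences of length at most `depth`, as tuples."""
    tree = set()
    for d in range(depth + 1):
        tree.update(product(*[range(h(l)) for l in range(d)]))
    return tree

def is_h_perfect_prefix(tree, h, depth):
    """Check conditions (1) and (2) on the finite truncation `tree`.

    Nodes of length >= depth have their successors outside the
    truncation, so condition (2) is not checked for them.
    """
    for t in tree:
        # condition (1): t(l) < h(l) for every l in dom(t)
        if any(t[l] >= h(l) for l in range(len(t))):
            return False
        if len(t) >= depth:
            continue
        succ = [s for s in tree if len(s) == len(t) + 1 and s[:len(t)] == t]
        # condition (2): either exactly one or exactly h(len(t)) immediate successors
        if len(succ) not in (1, h(len(t))):
            return False
    return True
```

For instance, with $h(n)=n+2$ the full tree of $h$-bounded sequences of any finite depth passes the check, while pruning one of the $h(1)=3$ immediate successors of a node of length one violates condition (2).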
Note that the case where $h(n) = 2$ for all $n<\omega$ is simply Sacks forcing, while for a fast growing $h:\omega \to \omega$ this forcing will make the set of ground model reals measure zero, so which $h$ we choose can affect the properties of the forcing significantly. This forcing notion was first considered in \cite{SMZnoCohen}, where the following is shown.
\begin{fact}[\cite{SMZnoCohen}]
For any $h:\omega \to \omega$ with $1 < h(n) < \omega$ for all $n < \omega$ the following hold.
\begin{enumerate}
\item
$\mathbb{PT}_{h}$ is proper, and in fact satisfies Axiom A.
\item
$\mathbb{PT}_{h}$ is $\baire$-bounding.
\item
$\mathbb{PT}_{h}$ preserves $P$-points.
\end{enumerate}
\label{basicfacts}
\end{fact}
For the rest of this section fix an arbitrary function $h:\omega \to \omega$ so that $1 < h(n) < \omega$ for all $n < \omega$. We will prove first that forcing with $\mathbb{PT}_h$ preserves a ground model selective independent family. Then we will show that under $\CH$ arbitrary countable support iterations of $h$-perfect tree forcings (where the $h$ can change and need not even be in the ground model) will preserve selective independent families. First we introduce some arboreal terminology. If $p \in \mathbb{PT}_{h}$ and $n < \omega$ then a node $t \in p$ is an $n^{\rm th}$-{\em splitting node} if it has $h(l(t))$ many immediate successors and exactly $n - 1$ of its proper predecessors have this property. Denote by ${\rm Split}_n(p)$ the set of $n^{\rm th}$-splitting nodes of $p$. For two $h$-perfect trees $p, q \in\mathbb{PT}_{h}$ we say that $q \leq_n p$ if $q \leq p$ and ${\rm Split}_i(p) ={\rm Split}_i(q)$ for all $i < n + 1$. Given any $p \in \PT_h$ and any node $t \in p$ we let $p_t$ denote the tree $\{s \in p \; | \; s\subseteq t \, {\rm or} \, t \subseteq s\}$. Note that $p_t \in \PT_h$ and $p_t \leq p$ for any $t \in p$.
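The indexing of splitting nodes can likewise be illustrated on a finite truncation (again a toy Python sketch with ad hoc names, not part of the formal development): a node is splitting if it has full $h$-splitting, and it is an $n^{\rm th}$-splitting node if exactly $n-1$ of its proper initial segments are splitting.

```python
def splitting_nodes(tree, h):
    """Nodes of `tree` (a set of tuples closed under initial segments)
    with h(len(t)) many immediate successors in `tree`."""
    return {t for t in tree
            if len([s for s in tree
                    if len(s) == len(t) + 1 and s[:len(t)] == t]) == h(len(t))}

def split_n(tree, h, n):
    """n-th splitting nodes: splitting nodes exactly n - 1 of whose
    proper initial segments are themselves splitting."""
    split = splitting_nodes(tree, h)
    return {t for t in split
            if sum(t[:i] in split for i in range(len(t))) == n - 1}
```

In a full finite truncation for $h(n)=n+2$, the root is the unique first splitting node and the $h(0)=2$ nodes of length one are exactly the second splitting nodes.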
\begin{lemma}
Let $p \in \PT_h$ and let $\dot{X}$ be a $\PT_h$-name for an infinite subset of $\omega$. There is a $q \leq p$ so that for all $n < \omega$ and any $n$-splitting node $t \in q$ we have that $q_t$ decides $\dot{X} \cap \check{n}$.
\end{lemma}
Such a $q$ is called {\em preprocessed for} $\dot{X}$.
\begin{proof}
This is a standard fusion argument but we sketch it for completeness. Fix $p$ and $\dot{X}$ as in the lemma. Inductively define a fusion sequence $...\leq_n p_n \leq_{n-1} p_{n-1} \leq_{n-2} ... \leq_1 p_1 \leq_0 p_0 = p$ as follows. Given $k < \omega$ and $p_k$, for each $k^{\rm th}$ splitting node $t \in {\rm Split}_k(p_k)$ let $p_t' \leq(p_k)_t$ decide $\dot{X} \cap \check{k}$. Let $p_{k+1} = \bigcup_{t \in {\rm Split}_k(p_k)} p'_t$. Clearly $p_{k+1} \leq_k p_k$ and the fusion $q:= \bigcap_{k < \omega} p_k$ is as needed.
\end{proof}
We now move to the first substantial lemma.
\begin{lemma}
$\PT_h$ is Cohen preserving.
\label{cohenpreserving}
\end{lemma}
\begin{proof}
Fix $p \in \PT_h$ and suppose that $\dot{D}$ is a $\PT_h$-name for a dense open subset of $2^{{<}\omega}$. Enumerate $2^{{<}\omega}$ as $\{s_n \; | \; n<\omega\}$. We will inductively construct sequences $\{p_n\; | \; n < \omega\}$ and $\{t_n\; | \; n < \omega\}$ so that the following hold.
\begin{enumerate}
\item
$p_0 = p$.
\item
$p_{n+1} \leq_n p_n$ for all $n < \omega$.
\item
$t_n \supseteq s_n$ for all $n < \omega$.
\item
$p_{n+1} \forces \check{t}_n \in \dot{D}$.
\end{enumerate}
Given such sequences, let $q = \bigcap_{n<\omega} p_n$ and $E = \{t_n \; | \; n < \omega\}$. Clearly $q \forces \check{E} \subseteq \dot{D}$ and $E$ is dense so, assuming we can construct such sequences, we will be done. This is done by induction. Given $p_k$, enumerate ${\rm Split}_k(p_k)$ as $\{u_i \; | \; i <l\}$. Note that $l \in \omega$ depends not just on $h$ but also on $p_k$; what matters here is only that it is finite. Now let $p'_{k, 0} \leq (p_k)_{u_0}$ decide some $t^0_k \supseteq s_k$ to be in $\dot{D}$ (since $\dot{D}$ is forced to be dense this is possible). Next, let $p'_{k, 1} \leq (p_k)_{u_1}$ decide some $t^1_k \supseteq t^0_k$ to be in $\dot{D}$. Continuing this way inductively, for all $0 < i < l$ let $p'_{k, i} \leq (p_k)_{u_i}$ decide some $t^i_k \supseteq t^{i-1}_k$ to be in $\dot{D}$. Let $p_{k+1} = \bigcup_{i < l} p'_{k, i}$ and let $t_k$ be $t^{l-1}_k$. Since $\dot{D}$ is forced to be open we have that $p_{k+1} \forces \check{t}_k \in \dot{D}$ so we're done.
\end{proof}
Fix a selective independent family $\mathcal I$ in the ground model.
\begin{lemma}
If $G \subseteq \PT_h$ is generic over $V$ then in $V[G]$ the ideal $\mathrm{id}(\mathcal I)$ is generated by $\mathrm{id}(\mathcal I) \cap V$.
\label{idealpreserving}
\end{lemma}
Note that dually this lemma implies that in $V[G]$ the filter $\mathrm{fil}(\mathcal I)$ is generated by $\mathrm{fil}(\mathcal I) \cap V$.
\begin{proof}
Let $p \in \PT_h$, let $\dot{X}$ be a $\PT_h$-name for an infinite subset of $\omega$ and suppose that $p \forces \dot{X} \in {\rm id}(\mathcal I)$. We need to find a ground model $Y \in [\omega]^\omega$ and an $r \leq p$ so that $r \forces \dot{X} \subseteq \check{Y}$. Towards this, via a fusion argument, or just using the properness of $\PT_h$, find a $q \leq p$ and a countable $\mathcal J \subseteq \mathcal I$ so that $q \forces \dot{X} \in {\rm id}(\mathcal J)$. Since $\mathcal J$ is countable, we can associate $\mathrm{FF}(\mathcal J)$ with $2^{<\omega}$. Let $\dot{D}$ be the $\PT_h$-name for the subset of $2^{<\omega}$ defined by letting $\check{g} \in \dot{D}$ if and only if $\mathcal J^g \cap \dot{X} = \emptyset$; this set is forced by $q$ to be dense open since $q \forces \dot{X} \in {\rm id}(\mathcal J)$. Since $\PT_h$ is Cohen preserving there is an $r \leq q$ and a dense $E \subseteq 2^{<\omega}$ so that $r \forces \check{E} \subseteq \dot{D}$. Let $Y = \bigcap_{g \in E} (\omega \setminus \mathcal J^g)$. Observe that $Y \in {\rm id}(\mathcal J)$ and hence $Y \in {\rm id}(\mathcal I)$. To see this, let $g' \in {\rm FF}(\mathcal J)$ be arbitrary and let $g \supseteq g'$ be in $E$. We have that $Y \subseteq \omega \setminus \mathcal J^g$ and hence $Y \cap \mathcal J^g = \emptyset$ which by definition means that $Y$ is in the ideal. The following claim now completes the proof.
\begin{claim}
$r \forces \dot{X} \subseteq \check{Y}$.
\end{claim}
\begin{proof}
Let $G$ be $\PT_h$-generic over $V$ with $r \in G$. Note that by the way $\dot{D}^G$ is defined we have that $\dot{X}^G \subseteq \bigcap_{g \in \dot{D}^G} (\omega \setminus \mathcal J^g)$ and since $E \subseteq \dot{D}^G$ we're done.
\end{proof}
\end{proof}
\begin{theorem}[$\CH$]
If $\mathcal I$ is a selective independent family and $G \subseteq \PT_h$ is generic over $V$ then $V[G] \models$``$\mathcal I$ is a selective independent family''.
\label{preserveI}
\end{theorem}
\begin{proof}
There are three things to check. We need to show 1) that ${\rm fil}(\mathcal I)$ is a $P$-filter, 2) that ${\rm fil}(\mathcal I)$ is a $Q$-filter and 3) that $\mathcal I$ is densely maximal. We take these one at a time. First though, since we assume $\CH$ in $V$, note that the fact that ${\rm fil}(\mathcal I)$ is a $P$-set implies that we can assume that ${\rm fil}(\mathcal I)$ is generated by an $\omega_1$-length $\subseteq^*$-descending sequence $\{B_\alpha \; | \; \alpha < \omega_1\}$. In other words, for all $\alpha < \beta < \omega_1$ we have $B_\beta \subseteq^* B_\alpha$ and every $A \in {\rm fil}(\mathcal I)$ almost contains some (equivalently, a tail of the) $B_\gamma$'s. Fix such a sequence $\{B_\alpha \; | \; \alpha < \omega_1\}$.
To see that ${\rm fil}(\mathcal I)$ is a $P$-filter in $V[G]$, note that if $\{A_n \; | \; n < \omega\} \subseteq {\rm fil}(\mathcal I)$ in $V[G]$ then by Lemma \ref{idealpreserving} there are countable ordinals $\{\gamma_n\; |\; n < \omega\}$ so that for all $n < \omega$ we have $B_{\gamma_n} \subseteq^* A_n$. Let $\gamma \geq {\rm sup}_{n < \omega} \gamma_n$ be countable. We have $B_\gamma \subseteq^* A_n$ for all $n < \omega$ so ${\rm fil}(\mathcal I)$ is a $P$-filter in $V[G]$.
The fact that ${\rm fil}(\mathcal I)$ is a $Q$-filter still in $V[G]$ follows immediately from the fact that $\PT_h$ is $\baire$-bounding.
Thus it remains to see that $\mathcal I$ remains densely maximal in $V[G]$. Suppose not and let $\dot{X}$ be a $\PT_h$-name for an infinite subset of $\omega$ so that in $V[G]$ the set $\dot{X}^G$ is not in ${\rm fil}(\mathcal I)$ and for all $g \in {\rm FF}(\mathcal I)$ we have $\dot{X}^G \nsubseteq \omega \setminus \mathcal I^g$. Let $p \in G$ force this. Without loss of generality we may assume that $p$ is preprocessed for $\dot{X}$, i.e. for each $n < \omega$ and every $t\in {\rm Split}_n(p)$ we have that $p_t$ decides $\dot{X} \cap \check{n}$. For each split node $t$ of $p$ let $Y_t = \{n \; | \; p_t \nVdash \check{n} \notin \dot{X}\}$. Note that for all split nodes $t$ we have that $p_t \forces \dot{X} \subseteq \check{Y}_t$.
\begin{claim}
For all split nodes $t$ of $p$ we have $Y_t \in {\rm fil}(\mathcal I)$.
\end{claim}
\begin{proof}
Otherwise there is a $t \in {\rm Split}(p)$ so that $Y_t \subseteq \omega \setminus \mathcal I^g$ for some $g \in {\rm FF}(\mathcal I)$ but since $p_t \forces \dot{X} \subseteq \check{Y}_t$ we have that $p_t \forces \dot{X} \subseteq \omega \setminus \mathcal I^g$. However this contradicts the choice of $p$.
\end{proof}
Since ${\rm fil}(\mathcal I)$ is a $P$-filter generated by ground model elements there is a $C \in {\rm fil}(\mathcal I) \cap V$ so that $C \subseteq^* Y_t$ for all $t \in {\rm Split}(p)$. Let $f \in \baire$ be a strictly increasing function such that for all $n < \omega$ we have $C \setminus f(n) \subseteq \bigcap \{Y_t \; | \; t \in {\rm Split}_j(p), \; j \leq n + 2\}$. The following is proved in \cite{Sh92}, as well as \cite[Lemma 3.15]{CFGS21} but we include it for completeness.
\begin{claim}
There is a $C^* \subseteq C$ so that $C^* \in {\rm fil}(\mathcal I) \cap V$ and, letting $\{k_n \; | \; n < \omega\}$ be a strictly increasing enumeration of $C^*$, we have that $f(k_n) < k_{n+1}$ for all $n < \omega$ and $f(1) < k_1$.
\end{claim}
\begin{proof}
This follows from the fact that ${\rm fil}(\mathcal I)$ is a $Q$-filter\footnote{In fact a family $\mathcal F \subseteq [\omega]^\omega$ is a $Q$-filter if and only if for each increasing $f \in \baire$ there is a $C^* = \{k_n \; | \; n < \omega\} \in \mathcal F$ such that $f(k_n) < k_{n+1}$, see \cite[Lemma 3.15]{CFGS21}.}. Inductively find a sequence $\{n_l\}_{l \in \omega}$ so that $n_0 = 0$ and $$n_{l+1} = {\rm min}\{n \; | \; n_l < n {\rm \ and \ } f(m) < n {\rm \ for \ all \ } m < n_l\}.$$ Consider the interval partition $\mathcal E_0 = \{[n_{3l}, n_{3l + 3})\}_{l \in \omega}$. Since ${\rm fil}(\mathcal I)$ is a $Q$-filter and closed under intersections there is a $C_1 \subseteq C$ in ${\rm fil}(\mathcal I)$ so that for all $l < \omega$ we have $|C_1 \cap [n_{3l}, n_{3l + 3})| \leq 1$. Now define an equivalence relation $\mathcal E_1$ on $\omega$ by $$m \equiv_{\mathcal E_1} k \; {\rm iff} \; m = k \lor \left(m, k \in C_1 \land (m < k \leq f(m) \lor k < m \leq f(k))\right).$$
In words, this says that every element of $\omega \setminus C_1$ is in its own equivalence class, and distinct elements $m, k \in C_1$ are $\mathcal E_1$-equivalent just in case applying $f$ to the smaller of the two yields a value at least as large as the larger one. Every $\mathcal E_1$-equivalence class has at most two members. To see this, suppose $ m_1 < m_2 < m_3 \in C_1$ were all in the same equivalence class. By definition of $\mathcal E_1$ we have $m_1 < m_2 < m_3 \leq f(m_1)$. However, since $C_1$ is a semiselector for the interval partition $\mathcal E_0$ there are distinct $l_1 < l_2 < l_3$ so that for all $i \in \{1, 2, 3\}$ we have $m_i \in [n_{3l_i}, n_{3l_i + 3})$. Thus we get $m_1 < n_{3l_2} \leq m_2 < n_{3l_3} \leq m_3 \leq f(m_1)$ but by the definition of the $n_l$ sequence we also have $f(m_1) < n_{3l_2 + 1} \leq n_{3l_3}$, which is a contradiction.
Now let $C_2 \subseteq C_1$ be a semiselector for $\mathcal E_1$ in ${\rm fil}(\mathcal I)$. Without loss of generality $0 \in C_2$. Let $\{k_n\}$ be an increasing enumeration of $C_2$. For all $n < n'$ in $C_2$ we have that $n$ and $n'$ are not in the same $\mathcal E_1$-equivalence class and therefore $f(n) < n'$. As such $C_2 = C^*$ is as needed.
For the final point note that ${\rm fil}(\mathcal I)$ is closed under finite changes to elements so we can augment $C^*$ to get $f(1) < k_1$ as needed.
\end{proof}
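The finite combinatorics behind this claim can be checked mechanically. Below is a minimal Python sketch of ours (an illustration, not part of the proof): \texttt{interval\_bounds} builds an initial segment of the sequence $n_0 < n_1 < \dots$ from the proof, and \texttt{thin} is a greedy analogue of passing from $C$ to $C^*$ so that $f(k_n) < k_{n+1}$ and $f(1) < k_1$. The set-theoretic content — that the semiselectors can be chosen inside ${\rm fil}(\mathcal I)$ — is of course not captured by such a sketch.

```python
def interval_bounds(f, length):
    """Finite initial segment of the sequence from the proof: n_0 = 0 and
    n_{l+1} is minimal with n_l < n_{l+1} and f(m) < n_{l+1} for all m < n_l."""
    n = [0]
    while len(n) < length:
        last = n[-1]
        n.append(max([last + 1] + [f(m) + 1 for m in range(last)]))
    return n

def thin(C, f):
    """Greedy analogue of passing to C*: keep the least element of the sorted
    list C, then always jump past f of the last element kept (and past f(1))."""
    out = [C[0]]
    bound = max(f(1), f(C[0]))
    for x in C[1:]:
        if x > bound:
            out.append(x)
            bound = f(x)
    return out
```

For instance, with $f(m) = 2m+1$ and $C = \{0,1,\dots,99\}$ the thinned set satisfies $f(k_n) < k_{n+1}$ at every consecutive pair.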
We will find a $q \leq p$ forcing that $C^* \subseteq \dot{X}$, which contradicts the choice of $\dot{X}$. Obviously this will be a fusion argument. Let $t^* = {\rm Stem}(p)$ and let $p_0 = p = p_{t^*}$. Now for each $i \in h(l(t^*))$ let $w(t^*, i) \in p_0$ be a $1$-splitting node extending $(t^* )^\frown i$. Since $k_1 > f(1)$ we have that $k_1 \in \bigcap \{Y_{w(t^*, i)} \; | \; i \in h(l(t^*))\}$. This means that for each $i \in h(l(t^*))$ there is a $w'(t^*, i) \in {\rm Split}_{k_1 + 1}(p_0)$ extending $w(t^*, i)$ which forces that $k_1 \in \dot{X}$ since $p_0$ is preprocessed. Let $p_1 = \bigcup_{i \in h(l(t^*))} p_{w'(t^*, i)}$. Note that $p_1 \leq_0 p_0$ and forces that $k_1 \in \dot{X}$. Also note that ${\rm Split}_1(p_1) \subseteq {\rm Split}_{k_1 + 1}(p_0)$ and, by construction, for each $m$ we have that ${\rm Split}_{m}(p_1) \subseteq {\rm Split}_{k_1 + m}(p_0)$.
Now proceed inductively, defining $p_{n+1}$ as follows. Assume $p_n$ has been defined and that for all $m < \omega$ we have ${\rm Split}_{n +m}(p_n) \subseteq {\rm Split}_{k_n + m} (p_0)$. Observe that we have $k_{n+1} \in \bigcap\{Y_t \; | \; t \in {\rm Split}_{n}(p_n)\}$ since $k_{n+1} > f(k_n)$ and hence we can find for each $t \in {\rm Split}_n(p_n)$ and each $i \in h(l(t))$ a $w(t, i) \in {\rm Split}_{k_{n+1} + 1}(p_0)$ in $p_n$ which contains $t^\frown i$ so that $(p_n)_{w(t, i)} \forces \check{k}_{n+1} \in \dot{X}$. Let $p_{n+1} = \bigcup_{t \in {\rm Split}_n(p_n)} \bigcup_{i \in h(l(t))} (p_n)_{w(t, i)}$. Clearly this is as needed.
Let $q$ be the fusion of the $p_n$'s. We have that $q \forces \check{C}^* \subseteq \dot{X}$ contradicting the fact that $\dot{X}$ is forced not to be in ${\rm fil}(\mathcal I)$ so we're done.
\end{proof}
Now we show how to lift the above proof to show that iterations of $h$-perfect tree forcing notions preserve selective independent families.
\begin{theorem}[$\CH$]
Let $\delta$ be an ordinal and $\mathcal I$ be a selective independent family. Let $\langle \P_\alpha, \dot{\Q}_\alpha \; | \; \alpha < \delta\rangle$ be a countable support iteration of posets so that for all $\alpha < \delta$ we have $\forces_\alpha$``$\dot{\Q}_\alpha$ is $\PT_h$ for some $h \in \baire$ with $1 < h(n) < \omega$ for all $n < \omega$". If $G \subseteq \P_\delta$ is generic over $V$ then $V[G] \models$ ``$\mathcal I$ is a selective independent family".
\label{iteration}
\end{theorem}
\begin{proof}
The proof is by induction on $\delta$. Note first that by Theorems \ref{Shelah_preservation1} and \ref{Shelah_preservation2} and Lemmas \ref{cohenpreserving} and \ref{idealpreserving} we have that for each $\alpha \leq \delta$ the poset $\P_\alpha$ is Cohen preserving and forces that ${\rm fil}(\mathcal I)$ is generated by ground model sets. This guarantees that $\P_\alpha$ forces that ${\rm fil}(\mathcal I)$ is a $P$-filter. Moreover, being $\baire$-bounding ensures that ${\rm fil}(\mathcal I)$ is a $Q$-filter, hence ${\rm fil}(\mathcal I)$ is forced to be Ramsey by every $\P_\alpha$ for $\alpha \leq \delta$. Therefore we just need to ensure that $\forces_\delta$``$\mathcal I$ is densely maximal" under the assumption that for all $\alpha < \delta$ we have that $\forces_\alpha$``$\mathcal I$ is densely maximal". To show this we will use the characterization of dense maximality given by Lemma \ref{lemma0}. We now consider two separate cases:
\noindent \underline{Case 1}: $\delta = \beta + 1$ for some $\beta$. The proof of this case is almost verbatim the same as the proof of Theorem \ref{preserveI}, noting that, by the above, we can assume that ${\rm fil}(\mathcal I)^{V^{\P_\beta}}$ is generated by ${\rm fil}(\mathcal I) \cap V$ and the $f$ and $C^*$ found in that proof can be assumed to come from the ground model by $\baire$-boundedness.
\noindent \underline{Case 2}: $\delta$ is a limit ordinal. Inductively we have that if $\beta < \delta$ and $G_\beta \subseteq \P_\beta$ is generic over $V$ then $$V[G_\beta] \models P(\omega) = \langle {\rm fil}(\mathcal I) \cap V\rangle_{\rm up} \cup \langle \omega \setminus \mathcal I^g\; | \; g \in \mathsf{FF}(\mathcal I)\rangle_{\rm dn}.$$ But then by Theorem \ref{Shelah_preservation1} plus the fact that ${\rm fil}(\mathcal I)$ is a Ramsey filter in $V^{\P_\delta}$ we get $$\forces_\delta P(\omega) = \langle {\rm fil}(\mathcal I) \cap V\rangle_{\rm up} \cup \langle \omega \setminus \mathcal I^g\; | \; g \in \mathsf{FF}(\mathcal I)\rangle_{\rm dn}.$$
This, by Lemma \ref{lemma0}, is exactly what we needed to show.
\end{proof}
\section{Applications}
We now turn to applications of the results from the previous section. The most obvious of these is that there is a small independent family in any model obtained by iteratively forcing with $h$-perfect tree partial orders. In particular we get the following as a corollary to Theorem \ref{iteration}.
\begin{corollary}
It is consistent that $\mfi = \aleph_1 < {\rm non}(\mathcal N) = \aleph_2$.
\end{corollary}
As mentioned in the introduction this consistent inequality was first shown in \cite[Theorem 3.8]{BHHH04}, though by a very different construction.
In the models constructed in \cite{SMZnoCohen} many interesting properties hold with regard to the structure of the strong measure zero sets, for example the consistency of ``the additivity of the strong measure zero ideal is $\aleph_2 = 2^{\aleph_0}$". As a consequence all of these are also consistent with $\mfi = \aleph_1$. One thing to note as a consequence of this is the following.
\begin{corollary}
It is independent of $\ZFC$ whether there is a maximal independent family of strong measure zero.
\end{corollary}
\begin{proof}
In the Laver model \cite[Model 7.6.13]{BarJu95}, every strong measure zero set is countable, so no maximal independent family has strong measure zero. By contrast, if the additivity of the strong measure zero ideal is $\aleph_2$ then any set of reals of size $\aleph_1$ is strong measure zero, in particular any selective independent family from the ground model.
\end{proof}
We can also now iterate mixtures of $h$-perfect posets with other proper partial orders which iteratively preserve small selective independent families.
\begin{theorem}
The following are consistent.
\begin{enumerate}
\item
$\mfi = \mfu < \non (\mathcal N)$
\item
$\mfi < \mfu = \non(\mathcal N)$
\end{enumerate}
\label{inequalities}
\end{theorem}
\begin{proof}
For the first inequality, as noted above in Fact \ref{basicfacts}, for any $h$ we have that $\mathbb{PT}_h$ preserves $P$-points, hence in any model constructed by iterating $h$-perfect tree forcings with countable support over a model of $\CH$ there will be a $P$-point with a base of size $\aleph_1$. For the second inequality, alternating between $\PT_h$ for e.g. $h(n)= 2^n$ and the forcing notions $\Q_\mathcal I$ of \cite{Sh92} (alongside some bookkeeping device) will increase $\mfu$ while preserving selective independent families.
\end{proof}
Finally we note some applications to definability.
\begin{theorem}
Both inequalities featured in Theorem \ref{inequalities} are consistent with a $\Pi^1_1$ independent family of size $\aleph_1$, a $\Delta^1_3$ well order of the reals and, in the case of the first inequality, a $\Pi^1_1$ ultrafilter base for a $P$-point of size $\aleph_1$.
\end{theorem}
\begin{proof}
Schilhan \cite{Schilhanultrafilter} has shown that in $L$ there is a $\Pi^1_1$ ultrafilter base for a $P$-point and Brendle, Fischer and Khomskii \cite{DefMIF} have shown that in $L$ there is a $\Sigma^1_2$ selective independent family and that if there is a $\Sigma^1_2$ maximal independent family then there is a $\Pi^1_1$ maximal independent family. It follows that all of these objects can be preserved by the iterations described in the proof of Theorem \ref{inequalities} assuming the ground model is $L$. Finally, for the $\Delta^1_3$ well order of the reals we apply the forcing from \cite{BFS21}, noting that the main theorem of that paper is precisely that such objects can be preserved by this forcing, even when other forcing notions, such as $\mathbb{PT}_h$, are added to the iteration.
\end{proof}
\section{Conclusion and Open Questions}
The proofs of Main Theorem \ref{mainthm1} are almost verbatim the same as those for Sacks forcing \cite{FM19} and for Shelah's forcing for killing a maximal ideal used in \cite{Sh92}, and very similar to the proof for the coding with perfect trees forcing from \cite{BFS21}. In particular, really only structural properties of the forcing are used. This suggests there should be a general property of proper, $\baire$-bounding forcing notions which implies that small selective independent families are preserved. The following seems like the first place to go to isolate such a property.
\begin{question}
Suppose $\delta$ is an ordinal and $\langle \P_\alpha , \dot{\Q}_\alpha \; | \; \alpha < \delta\rangle$ is a countable support iteration of proper, $\baire$-bounding, Cohen preserving forcing notions. Let $\mathcal I$ be a selective independent family in $V \models \CH$. If for all $\alpha < \delta$ we have $\forces_\alpha$``$\dot{\Q}_\alpha$ forces that ${\rm fil}(\mathcal I)$ is generated by ground model sets", does $\P_\delta$ preserve the maximality of $\mathcal I$?
\end{question}
\section{Introduction}
We denote by $|S|$ the size of a set $S$. Let $G$ be a graph. We denote by $V(G)$ the set of its vertices.
Sometimes, instead of writing $|V(G)|$, we will use the shorter notation $|G|$. We call $|G|$ the \textit{size of $G$}.
We denote by $E(G)$ the set of edges of a graph $G$.
A \textit{clique} in an undirected graph is a set of pairwise adjacent vertices and a \textit{stable set} in an undirected graph is a set of pairwise nonadjacent vertices.
A \textit{tournament} is a directed graph such that for every pair $v$ and $w$ of vertices, exactly one of the edges $(v,w)$ or $(w,v)$ exists.
For a tournament $H$ and a vertex $v \in V(H)$ we denote by $H \setminus \{v\}$ the tournament obtained from $H$ by deleting $v$ and all edges incident with it.
We denote by $H^{c}$ the tournament obtained from $H$ by reversing directions of all edges of $H$.
If $(v,w)$ is an edge of the tournament then we say that $v$ is \textit{adjacent to} $w$ (alternatively: $w$ is an \textit{outneighbor} of $v$)
and $w$ is \textit{adjacent from} $v$ (alternatively: $v$ is an \textit{inneighbor} of $w$).
For two sets of vertices $V_{1}$, $V_{2}$ of a given tournament $T$ we say that $V_{1}$ is \textit{complete to} $V_{2}$ (or equivalently $V_{2}$ is \textit{complete from} $V_{1}$) if every vertex of $V_{1}$ is adjacent to every vertex of $V_{2}$. We say that a vertex $v$ is complete to/from a set $V$ if $\{v\}$ is complete to/from $V$.
A tournament is \textit{transitive} if it contains no directed cycle. For a set of vertices $V=\{v_{1},v_{2},...,v_{k}\}$ we say that an ordering $(v_{1},v_{2},...,v_{k})$ is \textit{transitive} if $v_{i}$ is adjacent to $v_{j}$ for every $i < j$.
If a tournament $T$ does not contain some other tournament $H$ as a subtournament then we say that $T$ is $H$-\textit{free}.
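Since all tournaments considered in this paper are small, the definitions above are easy to check mechanically. The following Python sketch is our own illustration (the encoding of a tournament as a set of ordered pairs, and the helper names, are assumptions of ours, not notation from the paper): it tests the tournament property and whether a given ordering is transitive.

```python
from itertools import combinations

def is_tournament(vertices, edges):
    """Exactly one of (v, w), (w, v) must be an edge for every pair v, w."""
    return all(((v, w) in edges) != ((w, v) in edges)
               for v, w in combinations(vertices, 2))

def is_transitive_ordering(order, edges):
    """(v_1, ..., v_k) is transitive if v_i is adjacent to v_j whenever i < j."""
    return all((order[i], order[j]) in edges
               for i in range(len(order)) for j in range(i + 1, len(order)))
```

For example, the transitive triangle $a\to b\to c$, $a\to c$ has transitive ordering $(a,b,c)$, while the cyclic triangle admits no transitive ordering.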
A celebrated unresolved conjecture of Erd\H{o}s and Hajnal is as follows:
\begin{theorem}
\label{EHConun}
For any undirected graph $H$ there exists $\epsilon(H)>0$ such that every $n$-vertex undirected graph that does not contain $H$ as an induced subgraph contains a clique or a stable set of size at least $n^{\epsilon(H)}$.
\end{theorem}
In 2001 Alon, Pach and Solymosi proved (\cite{alon}) that Conjecture~\ref{EHConun} has an equivalent directed version, where undirected graphs are replaced by tournaments and cliques and stable sets by transitive subtournaments, as follows:
\begin{theorem}
\label{EHCon}
For any tournament $H$ there exists $\epsilon(H)>0$ such that every $H$-free $n$-vertex tournament contains a transitive subtournament of size at least $n^{\epsilon(H)}$.
\end{theorem}
If for a tournament $H$ there exists $\epsilon(H)>0$ as in \ref{EHCon}, then we say that \textit{$H$ satisfies the Erd\H{o}s-Hajnal conjecture}
(alternatively: $H$ has the \textit{Erd\H{o}s-Hajnal property}).\\
A set of vertices $S \subseteq V(H)$ of a tournament $H$ is called \textit{homogeneous} if for every $v \in V(H) \backslash S$ the following holds: either for all $w \in S$ we have: $(w,v)$ is an edge or for all $w \in S$ we have: $(v,w)$ is an edge. A homogeneous set $S$ is called \textit{nontrivial} if $|S|>1$ and $S \neq V(H)$. A tournament is called \textit{prime} if it does not have nontrivial homogeneous sets.\\
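Primality of the specific tournaments discussed below ($C_5$, $K_6$) is a finite condition and can be verified by brute force over all vertex subsets. A Python sketch of ours implementing the definition directly (all names are our own):

```python
from itertools import combinations

def is_homogeneous(S, vertices, edges):
    """S is homogeneous if every vertex outside S is complete to or from S."""
    S = set(S)
    return all(all((w, v) in edges for w in S) or
               all((v, w) in edges for w in S)
               for v in set(vertices) - S)

def is_prime(vertices, edges):
    """Prime: no nontrivial homogeneous set, i.e. no S with 1 < |S| < |V|."""
    return not any(is_homogeneous(S, vertices, edges)
                   for size in range(2, len(vertices))
                   for S in combinations(vertices, size))
```

As a sanity check, the cyclic triangle is prime, while in the transitive triangle $a\to b\to c$, $a\to c$ the pair $\{a,b\}$ is a nontrivial homogeneous set.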
The following theorem, that is an immediate corollary of the results given in \cite{alon} and applied to tournaments, shows why prime tournaments are important.
\begin{theorem}
\label{nonprime}
If Conjecture~\ref{EHCon} is false then the smallest counterexample is prime.
\end{theorem}
It is therefore of interest to study the Erd\H{o}s-Hajnal property for prime tournaments.
We need a few more definitions that we borrow from \cite{chorochudber} and put below for the reader's convenience.
For an integer $t$, we call the graph $K_{1,t}$ a {\em star}. Let $S$ be a
star with vertex set $\{c, l_1, \ldots, l_t\}$, where $c$ is adjacent to
$l_1, \ldots, l_t$. We call $c$ the {\em center of the star}, and
$l_1, \ldots, l_t$ {\em the leaves of the star}.
Note that in the case $t=1$ we may choose arbitrarily any one of the two vertices to be the center of the star, and the other vertex is then considered to be the leaf.
Let $\theta=(v_{1},v_{2},...,v_{n})$ be an ordering of the vertex set $V(T)$ of an $n$-vertex tournament $T$. We say that a vertex $v_{j}$ is \textit{between} two vertices $v_{i},v_{k}$ under $\theta=(v_{1},...,v_{n})$ if $i<j<k$ or $k<j<i$.
An edge $(v_i,v_j)$ is a {\em backward edge under
$\theta$} if $i>j$. The \textit{graph of backward edges under $\theta$},
denoted by $B(T, \theta)$, is the undirected graph that has vertex set $V(T)$,
and $v_i v_j \in E(B(T, \theta))$ if and only if $(v_i,v_j)$ or
$(v_j,v_i)$ is a backward edge of $T$ under $\theta$.
A {\em right star}
in $B(T, \theta)$ is an induced subgraph with vertex set
$\{v_{i_0}, \ldots, v_{i_t}\}$, such that \\
$B(T,\theta)|\{v_{i_0}, \ldots, v_{i_t}\}$ is a star with center $v_{i_t}$,
and $i_t > i_0, \ldots, i_{t-1}$. In this case we also
say that $\{v_{i_0}, \ldots, v_{i_t}\}$ is a right star in $T$.
A {\em left star}
in $B(T, \theta)$ is an induced subgraph with vertex set
$\{v_{i_0}, \ldots, v_{i_t}\}$, such that \\
$B(T,\theta)|\{v_{i_0}, \ldots, v_{i_t}\}$ is a star with center $v_{i_0}$,
and $i_0 < i_1, \ldots, i_t$. In this case we also
say that $\{v_{i_0}, \ldots, v_{i_t}\}$ is a left star in $T$.
A {\em star} in $B(T, \theta)$ is a left star or a right star.
Let $H$ be a tournament and assume there exists an ordering $\theta$ of its
vertices such that every connected component of $B(H, \theta)$ is either a
star or a singleton. We call this ordering a \textit{star ordering}.
If in addition every star is either a left star or a right star, and no center of a star is between leaves of another star, then the corresponding
ordering is called a \textit{galaxy ordering} and the tournament $H$ is called a \textit{galaxy}.
The main results of \cite{chorochudber} that we will heavily rely on in this paper are:
\begin{theorem}
\label{galaxy-theorem}
Every galaxy has the Erd\H{o}s-Hajnal property.
\end{theorem}
\begin{theorem}
\label{five-vertex-theorem}
Every tournament $H$ on at most five vertices has the Erd\H{o}s-Hajnal property.
\end{theorem}
We denote by $K_{6}$ the six-vertex tournament with $V(K_{6})=\{v_{1},...,v_{6}\}$ such that under ordering $(v_{1},...,v_{6})$
of its vertices the set of backward edges is: $\{(v_{4},v_{1}),(v_{6},v_{3}), (v_{6},v_{1}), (v_{5},v_{2})\}$. We call this ordering of vertices of $K_{6}$ the \textit{canonical ordering} (Fig.1).
\begin{center}
\begin{tikzpicture}[every path/.style={>=latex},every node/.style={draw,circle}]
\def \n {5}
\def \radius {2cm}
\node (a) at (1.5,0) { $v_{1}$ };
\node (b) at (3.0,0) { $v_{2}$ };
\node (c) at (4.5,0) { $v_{3}$ };
\node (d) at (6.0,0) { $v_{4}$ };
\node (e) at (7.5,0) { $v_{5}$ };
\node (f) at (9.0,0) { $v_{6}$ };
\draw[->] (f) edge [out=143,in=37] (a);
\draw[->] (f) edge [out=145,in=35] (c);
\draw[->] (d) edge [out=145,in=35] (a);
\draw[->] (e) edge [out=210,in=330] (b);
\end{tikzpicture}
\label{fig:k6}
\end{center}
\begin{center}
Fig.1 Tournament $K_{6}$. The only prime tournament on at most six vertices for which the conjecture is still open. Presented is the canonical ordering of its vertices. All edges that are not drawn are from left to right.
\end{center}
In this paper we prove the following:
\begin{theorem}
\label{six_vertex_theorem}
If $H$ is a six-vertex tournament not isomorphic to $K_{6}$ then it has the Erd\H{o}s-Hajnal property.
\end{theorem}
This reduces the six-vertex case to a single tournament.
The correctness of the conjecture for $K_{6}$ remains an open question.
Note that $K_{6}$ is a prime tournament. One can also check that $K_{6}$ does not have a galaxy ordering of vertices.
In fact the only ordering under which the graph of backward edges of $K_{6}$ is a forest is the canonical ordering presented in Fig.1.\\
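These claims about $K_6$ admit a mechanical check. The following sketch is our own illustration (helper names are ours): it rebuilds $K_6$ from the canonical ordering of Fig.1 by directing every unlisted pair left to right, confirms the stated backward edges, confirms primality, and confirms that the graph of backward edges under the canonical ordering is a forest.

```python
from itertools import combinations

BACKWARD = {(4, 1), (6, 3), (6, 1), (5, 2)}   # backward edges of Fig.1
V = list(range(1, 7))
K6 = set(BACKWARD)
for i, j in combinations(V, 2):               # all other pairs go left to right
    if (j, i) not in BACKWARD:
        K6.add((i, j))

def is_prime(vertices, edges):
    """No homogeneous S with 1 < |S| < |V| (S is homogeneous if every
    outside vertex is complete to S or complete from S)."""
    def homog(S):
        return all(all((w, v) in edges for w in S) or
                   all((v, w) in edges for w in S)
                   for v in set(vertices) - set(S))
    return not any(homog(S) for k in range(2, len(vertices))
                   for S in combinations(vertices, k))

def is_forest(n, und_edges):
    """Union-find cycle test on an undirected graph with vertices 1..n."""
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in und_edges:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False          # joining already-connected vertices => cycle
        parent[ra] = rb
    return True
```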
We need to define two more special tournaments on six vertices that we denote by $L_{1}$ and $L_{2}$
and one special tournament on five vertices, denoted by $C_{5}$.
Tournament $C_{5}$ (see Fig.2) is the unique tournament on five vertices such that each of its vertices has exactly two outneighbors
and two inneighbors. Tournament $C_{5}$ is prime and one can check that it is not a galaxy.
\begin{center}
\begin{tikzpicture}[every path/.style={>=latex},every node/.style={draw,circle}]
\def \n {5}
\def \radius {2cm}
\node[draw, circle] (a) at ({360/\n * (1 - 1)+18}:\radius) {$v_{1}$};
\node[draw, circle] (b) at ({360/\n * (2 - 1)+18}:\radius) {$v_{2}$};
\node[draw, circle] (c) at ({360/\n * (3 - 1)+18}:\radius) {$v_{3}$};
\node[draw, circle] (d) at ({360/\n * (4 - 1)+18}:\radius) {$v_{4}$};
\node[draw, circle] (e) at ({360/\n * (5 - 1)+18}:\radius) {$v_{5}$};
\draw[->] (a) edge (b);
\draw[->] (b) edge (c);
\draw[->] (c) edge (d);
\draw[->] (d) edge (e);
\draw[->] (e) edge (a);
\draw[->] (a) edge (c);
\draw[->] (b) edge (d);
\draw[->] (c) edge (e);
\draw[->] (d) edge (a);
\draw[->] (e) edge (b);
\end{tikzpicture}
\label{fig:c5}
\end{center}
\begin{center}
Fig.2 Tournament $C_{5}$ - the only prime five-vertex tournament that is not a galaxy.
\end{center}
Tournament $L_{1}$ is obtained from $C_{5}$ by adding one extra vertex and making it adjacent to exactly one vertex of $C_{5}$ (it does not matter to which one since all tournaments obtained by this procedure are isomorphic). Tournament $L_{2}$ is obtained from $C_{5}$ by adding one extra vertex and making it adjacent from $3$ vertices of $C_{5}$ that induce a cyclic triangle (again, it does not matter which cyclic triangle since all tournaments obtained by this procedure are isomorphic).
Both tournaments are presented in Fig.3.
\begin{center}
\begin{tikzpicture}[every path/.style={>=latex},every node/.style={draw,circle}]
\def \n {5}
\def \radius {2cm}
\node[draw, circle][shift={(0:-5cm)}] (a) at ({360/\n * (1 - 1)+18}:\radius) {$v_{1}$};
\node[draw, circle][shift={(0:-5cm)}] (b) at ({360/\n * (2 - 1)+18}:\radius) {$v_{2}$};
\node[draw, circle][shift={(0:-5cm)}] (c) at ({360/\n * (3 - 1)+18}:\radius) {$v_{3}$};
\node[draw, circle][shift={(0:-5cm)}] (d) at ({360/\n * (4 - 1)+18}:\radius) {$v_{4}$};
\node[draw, circle][shift={(0:-5cm)}] (e) at ({360/\n * (5 - 1)+18}:\radius) {$v_{5}$};
\node[draw, circle][shift={(0:-10cm)}] (f) at ({360/\n * (5 - 1)+18}:\radius) {$v_{6}$};
\node[draw, circle][shift={(0:-13cm)}] (aa) at ({360/\n * (1 - 1)+18}:\radius) {$v_{1}$};
\node[draw, circle][shift={(0:-13cm)}] (bb) at ({360/\n * (2 - 1)+18}:\radius) {$v_{2}$};
\node[draw, circle][shift={(0:-13cm)}] (cc) at ({360/\n * (3 - 1)+18}:\radius) {$v_{3}$};
\node[draw, circle][shift={(0:-13cm)}] (dd) at ({360/\n * (4 - 1)+18}:\radius) {$v_{4}$};
\node[draw, circle][shift={(0:-13cm)}] (ee) at ({360/\n * (5 - 1)+18}:\radius) {$v_{5}$};
\node[draw, circle][shift={(0:-18cm)}] (ff) at ({360/\n * (5 - 1)+18}:\radius) {$v_{6}$};
\draw[->] (a) edge (b);
\draw[->] (b) edge (c);
\draw[->] (c) edge (d);
\draw[->] (d) edge (e);
\draw[->] (e) edge (a);
\draw[->] (a) edge (c);
\draw[->] (b) edge (d);
\draw[->] (c) edge (e);
\draw[->] (d) edge (a);
\draw[->] (e) edge (b);
\draw[->] (f) edge [out=340,in=200] (e);
\draw[->] (d) edge (f);
\draw[->] (f) edge (c);
\draw[->] (a) edge (f);
\draw[->] (b) edge [out=185,in=80] (f);
\draw[->] (aa) edge (bb);
\draw[->] (bb) edge (cc);
\draw[->] (cc) edge (dd);
\draw[->] (dd) edge (ee);
\draw[->] (ee) edge (aa);
\draw[->] (aa) edge (cc);
\draw[->] (bb) edge (dd);
\draw[->] (cc) edge (ee);
\draw[->] (dd) edge (aa);
\draw[->] (ee) edge (bb);
\draw[->] (ff) edge [out=340,in=200] (ee);
\draw[->] (dd) edge (ff);
\draw[->] (cc) edge (ff);
\draw[->] (aa) edge (ff);
\draw[->] (bb) edge [out=185,in=80] (ff);
\end{tikzpicture}
\end{center}
\begin{center}
Fig.3 Tournament $L_{2}$ on the left and tournament $L_{1}$ on the right. Both are obtained from $C_{5}$ by adding one extra vertex.
\end{center}
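The parenthetical claims above — that the choice of the single outneighbor (for $L_1$) and of the cyclic triangle (for $L_2$) is irrelevant up to isomorphism — are again finite checks. A Python sketch of ours, building both tournaments from the textual definitions on top of the edge set of Fig.2 (all helper names are assumptions of ours):

```python
from itertools import combinations, permutations

C5 = {(1, 2), (2, 3), (3, 4), (4, 5), (5, 1),
      (1, 3), (2, 4), (3, 5), (4, 1), (5, 2)}
V5 = range(1, 6)

def extend(outneighbors):
    """Add a new vertex 6 adjacent to `outneighbors` and from the rest of C5."""
    out = set(outneighbors)
    return C5 | {(6, v) for v in out} | {(v, 6) for v in V5 if v not in out}

def cyclic_triangles():
    """3-subsets of V5 inducing a cyclic triangle in C5."""
    for S in combinations(V5, 3):
        inside = {e for e in C5 if e[0] in S and e[1] in S}
        if all(sum(1 for e in inside if e[0] == v) == 1 for v in S):
            yield S

def isomorphic(E1, E2):
    """Brute-force isomorphism test over all relabelings of {1,...,6}."""
    return any({(m[a], m[b]) for a, b in E1} == E2
               for p in permutations(range(1, 7))
               for m in [dict(zip(range(1, 7), p))])

L1 = extend({1})                                     # one outneighbor in C5
L2 = extend(set(V5) - set(next(cyclic_triangles())))  # inneighbors: a cyclic triangle
```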
This paper is organized as follows:
\begin{itemize}
\item in Section 2 we reduce the question about the correctness of the conjecture for six-vertex tournaments to three tournaments: $K_{6}, L_{1}, L_{2}$,
\item in Section 3 we introduce some tools to analyze tournaments $L_{1}$ and $L_{2}$,
\item in Section 4 we prove the conjecture for tournaments $L_{1}$ and $L_{2}$ and complete the proof of our main result.
\end{itemize}
\section{The landscape of six-vertex tournaments}
Our main result in this section is as follows:
\begin{theorem}
\label{technical-theorem}
If $H$ is a six-vertex tournament not isomorphic to $K_{6}, L_{1}, L_{1}^{c}, L_{2}, L_{2}^{c}$ then $H$ satisfies the Erd\H{o}s-Hajnal conjecture.
\end{theorem}
We will first prove a lemma describing the structure of
all six-vertex tournaments.
\begin{theorem}
\label{technical-lemma}
Let $H$ be a six-vertex tournament. Then one of the following holds:
\begin{enumerate}
\item $H$ is a galaxy, or
\item there exists $v \in V(H)$ such that $H \setminus \{v\}$ is isomorphic to $C_{5}$ and $v$ has exactly one inneighbor or exactly one outneighbor in $H \setminus \{v\}$, or
\item $H$ is not prime, or
\item the vertices of $H$ or $H^{c}$ can be ordered as: $(a,b,c,d,e,f)$ such that the backward edges are: \\$(f,a), (e,a), (d,b), (f,c)$ (thus $H \setminus \{b\}$ or $H^{c} \setminus \{b\}$ is isomorphic to $C_{5}$
and the outneighbors of $b$ form a cyclic triangle), or
\item $H$ is isomorphic to $K_{6}$.
\end{enumerate}
\end{theorem}
\Proof
We may assume that $H$ is prime (for otherwise (3) holds), and so every vertex
of $H$ has at most four inneighbors and at most four outneighbors.\\
\textbf{Case 1: some vertex of $H$ has four outneighbors\\}
Suppose that $H$ has a vertex $v$ with $4$ outneighbors. Let $\{a,b,c,d\}$ be the set of outneighbors of $v$ and denote by $u$ the remaining vertex.
Then $u$ is adjacent to $v$ and, since $H$ is prime, $u$ has at least one
and at most $3$ outneighbors in $\{a,b,c,d\}$.
We call an ordering of the vertices of $H \setminus v$ {\em useful} if
it is a galaxy ordering of $H \setminus v$, and no backward edge is
incident with $u$. We observe that if $H \setminus v$ admits a useful ordering,
then adding $v$ at the start of this ordering produces a
galaxy ordering of $H$ (since $(u,v)$ is the only new backward edge, and no
other backward edge is incident with either $u$ or $v$), and (1) holds.
Thus we may assume that $H \setminus v$ admits no useful ordering.
Suppose first that $u$ has exactly three outneighbors in $\{a,b,c,d\}$, say $u$
is adjacent to $a,b,c$ and from $d$. If $H|\{a,b,c\}$ is a transitive
tournament (where $(a,b,c)$ is the transitive ordering, say), then $(d,u,a,b,c)$
is a useful ordering of $H \setminus v$, a contradiction.
Therefore we may assume that $\{a,b,c\}$ induces a cyclic triangle.
Without loss of generality we may assume that $(a,b), (b,c), (c,a)$ are edges.
Suppose first that $d$ has at most one inneighbor in $\{a,b,c\}$, say $b$ (without loss of generality) if one exists. But then $(d,u,a,b,c)$ is a useful
ordering of $H \setminus v$, a contradiction.
Thus $d$ has at least two inneighbors in $\{a,b,c\}$, i.e. $d$ has at most one outneighbor in $\{a,b,c\}$, say $b$ (without loss of generality) if one exists. But then $(u,v,a,b,c,d)$ is a galaxy ordering with backward edges: $(d,u), (c,a)$ and
$(d,b)$ (if $b$ is an outneighbor of $d$), and so (1) holds.
We can thus assume that $u$ has at most two outneighbors in $\{a,b,c,d\}$.
Next suppose that $u$ has exactly two outneighbors in $\{a,b,c,d\}$, say $u$ is adjacent from $a,b$ and to $c,d$. Without loss of generality we assume that $a$ is adjacent to $b$, and $c$ is adjacent to $d$. If there are at most $2$ edges from $\{c,d\}$ to $\{a,b\}$, then $(a,b,u,c,d)$ is a useful ordering of $H \setminus v$,
a contradiction. Thus we may assume that there are at least $3$ edges from $\{c,d\}$ to $\{a,b\}$. In other words, there is at most one edge from $\{a,b\}$ to $\{c,d\}$. If such an edge does not exist (i.e. $\{c,d\}$ is complete to $\{a,b\}$) then $(v,c,d,a,b,u)$ is a galaxy ordering of $H$, where each backward edge
is incident with $u$, and (1) holds, so we may assume that there is exactly
one edge from $\{a,b\}$ to $\{c,d\}$. We now check that in all cases
the theorem holds. If $a$ is adjacent to $d$ then $(v,c,a,d,b,u)$ is a galaxy
ordering with all backward edges incident with $u$, and (1) holds.
If $b$ is adjacent to $c$ then $\{a,b,u,c,d\}$ induces a tournament isomorphic to $C_{5}$ and $v$ has a unique inneighbor in it, so (2) holds.
If $a$ is adjacent to $c$ then $(u,v,d,a,c,b)$ is a galaxy ordering with backward edges: $(a,u), (b,u), (c,d)$, and (1) holds.
Finally, if $b$ is adjacent to $d$ then $(v,c,b,d,a,u)$ is a galaxy ordering with backward edges: $(u,v),(u,c),(u,d), (a,b)$, and again (1) holds.
Thus we may assume that $u$ has exactly one outneighbor in $\{a,b,c,d\}$, say
$a$. Let $(a^{'},b^{'},c^{'},d^{'})$
be the ordering of $\{a,b,c,d\}$ in which $a$ has no backward edges, and where the number of backward edges is minimum subject to the previous constraint. Note that such an ordering is always a galaxy ordering. But then $(v,a^{'},b^{'},c^{'},d^{'},u)$ is also a galaxy ordering, and (1) holds. \\
We conclude that if some vertex in $H$ has $4$ outneighbors then the theorem holds.
Thus we can assume that every vertex of $H$ has at most three outneighbors. We can also conclude
that every vertex of $H$ has at most three inneighbors. The latter is true since the statement of the theorem
is invariant under reversing directions of all the edges of $H$. Indeed, after reversing all the edges the galaxy remains a galaxy, and the property of being
prime is also trivially invariant under this operation. Furthermore, both $C_{5}$ and $K_{6}$ are isomorphic to the tournaments obtained by reversing their edges. Therefore it remains to handle:\\
\textbf{Case 2: Every vertex has at most three outneighbors and at most three inneighbors\\}
Let us denote by $n_{3,2}$ the number of vertices $v$ of $H$ such that $v$ has $3$ outneighbors and
$2$ inneighbors. Similarly, let us denote by $n_{2,3}$ the number of vertices $v$ of $H$ such that $v$ has $3$ inneighbors and $2$ outneighbors. Then we have:
\begin{equation}
15=|E(H)| = 3n_{3,2} + 2n_{2,3} = 2n_{3,2} + 3n_{2,3},
\end{equation}
where the second expression counts edges by outdegrees and the third by indegrees. Subtracting the two expressions gives $n_{3,2}=n_{2,3}$, and thus $n_{3,2}=n_{2,3} = 3$.
Let $a,b,c$ be the vertices that have three outneighbors, and let $x,y,z$ be the remaining vertices.
Assume first that $H|\{a,b,c\}$ is a transitive tournament, where $(a,b,c)$
(say) is a transitive ordering.
Then $c$ is complete to $\{x,y,z\}$ since, by definition, it has $3$ outneighbors, but it has no outneighbors in $\{a,b\}$. Similarly, vertex $b$ has exactly $2$ outneighbors in $\{x,y,z\}$ and without loss of generality we can assume that these are: $y$ and $z$. Vertex $a$ has exactly one outneighbor in $\{x,y,z\}$.
Suppose first that $a$ is adjacent from $x$. Then, since $x$ has $2$ outneighbors and we already know that $x$ is adjacent to $a$ and $b$, we conclude that $x$ is adjacent from $y$ and $z$. Without loss of generality we can assume that $y$ is adjacent to $z$. If $a$ is adjacent to $y$ (and thus from $z$) then
$\{c,y\}$ is a homogeneous set and (3) holds. Thus we may assume that $a$ is adjacent to $z$ and from $y$.
But note that now $ H \setminus \{z\}$ is isomorphic to $C_{5}$ and $z$ has a unique outneighbor in $H \setminus \{z\}$, namely $x$. Thus (2) holds.
Therefore we may assume that $a$ is adjacent to $x$ and from $y$ and $z$.
Since $x$ has $2$ outneighbors, without loss of generality we can assume that $x$ is adjacent to $y$ and from $z$. Now, since $y$ has $2$ outneighbors, we can deduce that $y$ is adjacent to $z$
(this is true because the only outneighbor of $y$ in $\{a,b,c,x\}$ is $a$).
Now, $(a,c,x,b,y,z)$ is an ordering as in (4). This completes the case when
$H|\{a,b,c\}$ is a transitive tournament.
Thus we only need to consider the case when $\{a,b,c\}$ induces a cyclic
triangle.
If $\{x,y,z\}$ induces a transitive tournament then we can reverse the edges of $H$ and repeat the analysis that we have just done for $\{a,b,c\}$. We can do it since, as we have already mentioned, the statement of the theorem is invariant under the operation of reversing all the edges of the tournament. Thus, without loss of generality, we can assume that both $\{x,y,z\}$ and $\{a,b,c\}$ induce cyclic triangles.
We may assume without loss of generality that $(x,y),(y,z),(z,x)$ and $(a,b),(b,c),(c,a)$ are edges.
Note that the edges from $\{x,y,z\}$ to $\{a,b,c\}$ form a matching. Indeed, each vertex of $\{x,y,z\}$ has exactly one outneighbor in $\{x,y,z\}$, therefore it has exactly one outneighbor in $\{a,b,c\}$ (since each vertex of $\{x,y,z\}$ has exactly $2$ outneighbors in $V(H)$), and each vertex from $\{a,b,c\}$ has exactly one inneighbor from $\{x,y,z\}$.
Without loss of generality we can assume that $x$ is adjacent to $a$. Assume first that $y$ is adjacent to $b$, and so $z$ is adjacent to $c$.
Now $(b,c,x,a,y,z)$ is a galaxy ordering with the backward edges: $(a,b)$, $(y,b)$, $(z,c)$, $(z,x)$.
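As a sanity check, the tournament of this subcase and the claimed backward-edge set can be verified mechanically (an illustrative sketch, with the vertex names used above):

```python
# Two cyclic triangles x->y->z->x and a->b->c->a, the matching
# x->a, y->b, z->c, and (by the matching observation above) the six
# remaining cross pairs oriented from {a,b,c} to {x,y,z}.
edges = {('x', 'y'), ('y', 'z'), ('z', 'x'),
         ('a', 'b'), ('b', 'c'), ('c', 'a'),
         ('x', 'a'), ('y', 'b'), ('z', 'c'),
         ('b', 'x'), ('c', 'x'), ('a', 'y'),
         ('c', 'y'), ('a', 'z'), ('b', 'z')}

order = ['b', 'c', 'x', 'a', 'y', 'z']
pos = {v: i for i, v in enumerate(order)}
backward = {(u, w) for (u, w) in edges if pos[u] > pos[w]}
print(sorted(backward))  # [('a', 'b'), ('y', 'b'), ('z', 'c'), ('z', 'x')]
```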
Thus we may assume that $y$ is adjacent to $c$, and $z$ is adjacent to $b$.
But now $(b,c,x,a,y,z)$ is a canonical ordering of $K_{6}$, and (5) holds. That completes the proof of the lemma.
\bbox
We are now ready to prove Theorem \ref{technical-theorem}.
\Proof
We will use Lemma \ref{technical-lemma}. If outcome (1) holds then the result
follows from \ref{galaxy-theorem}. If outcome (2) holds then $H$ is isomorphic to one of the two tournaments $L_{1}$ and $L_{1}^{c}$. If outcome (3) holds, then
the result follows from \ref{nonprime} and \ref{five-vertex-theorem}.
Finally, if outcome (4) holds then $H$ is isomorphic to $L_{2}$ or $L_{2}^{c}$.
This completes the proof of Theorem \ref{technical-theorem}.
\bbox
\section{Regularity tools}
In this section we will introduce some regularity tools that will be very useful later on to prove the conjecture for $L_{1}$ and $L_{2}$.
Denote by $tr(T)$ the largest size of a transitive subtournament of $T$.
For $X \subseteq V(T)$, write $tr(X)$ for $tr(T|X)$.
Let $X,Y \subseteq V(T)$ be disjoint. Denote by $e_{X,Y}$ the number of directed edges $(x,y)$, where $x \in X$ and $y \in Y$.
The \textit{directed density from X to Y} is defined as
$d(X,Y)=\frac{e_{X,Y}}{|X||Y|}.$
We call a tournament $T$ \textit{$\epsilon$-critical} for $\epsilon>0$ if $tr(T) < |T|^{\epsilon}$ but for every proper subtournament $S$ of $T$ we have: $tr(S) \geq |S|^{\epsilon}$. Next we list some properties of $\epsilon$-critical tournaments that we borrow from \cite{chorochudber}.
\begin{theorem}
\label{remarktheorem}
For every $N>0$ there exists $\epsilon(N)>0$ such that for every $0<\epsilon<\epsilon(N)$ every $\epsilon$-critical tournament $T$ satisfies $|T| \geq N$.
\end{theorem}
\Proof
Since every tournament on at least two vertices contains a transitive subtournament of order $2$, it suffices to take $\epsilon(N)=\log_{N}(2)$: if $|T|<N$ and $\epsilon<\log_{N}(2)$, then $|T|^{\epsilon}<2 \leq tr(T)$, contradicting the $\epsilon$-criticality of $T$. \bbox
\begin{theorem}
\label{firsttechnicallemma}
Let $T$ be an $\epsilon$-critical tournament with $|T|=n$ and $\epsilon,c,f>0$ be constants such that $\epsilon <\log_{c}(1-f)$.
Then for every $A \subseteq V(T)$ with $|A| \geq cn$ and every transitive subtournament $G$ of $T$ with $|G| \geq f \cdot tr(T)$ we have: $A$ is not complete from $V(G)$ and $A$ is not complete to $V(G)$.
\end{theorem}
\Proof
Assume otherwise. Let $A_T$ be a transitive subtournament in $T|A$ of size
$tr(A)$.
Then $|A_{T}| \geq (cn)^{\epsilon}$. Now we can merge $A_{T}$ with $G$ to obtain a transitive subtournament of size at least $(cn)^{\epsilon}+f tr(T)$. From the definition of $tr(T)$ we have $(cn)^{\epsilon}+f tr(T) \leq tr(T)$. So $c^{\epsilon}n^{\epsilon} \leq (1-f) tr(T)$, and in particular $c^{\epsilon}n^{\epsilon} < (1-f)n^{\epsilon}$. But this contradicts the fact that $\epsilon <\log_{c}(1-f)$. \bbox
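The direction of the final step relies on the base of the logarithm lying in $(0,1)$; a quick numeric illustration (hypothetical values of $c$ and $f$, not from the paper):

```python
import math

def log_base(b, x):
    return math.log(x) / math.log(b)

# Hypothetical values of the constants c and f (both in (0,1)).
c, f = 0.3, 0.4
eps_max = log_base(c, 1 - f)

# Since 0 < c < 1, taking logs to base c reverses inequalities, so
# eps < log_c(1 - f) is equivalent to c**eps > 1 - f -- exactly the
# inequality contradicted by c**eps * n**eps < (1 - f) * n**eps.
for t in (0.1, 0.5, 0.9):
    assert c ** (t * eps_max) > 1 - f
print("consistent for eps <", round(eps_max, 4))
```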
\begin{theorem}
\label{veryeasylemma}
Let $T$ be an $\epsilon$-critical tournament with $|T|=n$ and $\epsilon,c>0$ be constants such that $\epsilon<\log_{\frac{c}{2}}(\frac{1}{2})$.
Then for every two disjoint subsets $X,Y \subseteq V(T)$ with $|X| \geq cn$, $|Y| \geq cn$
there exist an integer
$k \geq \frac{cn}{2}$
and vertices $x_{1},...,x_{k} \in X$ and $y_{1},...,y_{k} \in Y$ such that $y_{i}$ is adjacent to
$x_{i}$ for $i=1,...,k$.
\end{theorem}
\Proof
Assume otherwise. Write $m= \lfloor \frac{cn}{2} \rfloor$.
Consider the bipartite graph $G$ with bipartition $(X,Y)$, where $\{x,y\} \in E(G)$ if and only if $(y,x) \in E(T)$.
Then we know that $G$ has no matching of size $m$.
By K\"{o}nig's Theorem (see \cite{diestel}) there exists $C \subseteq V(G)$
with $|C|<m$, such that every edge of $G$ has an end in $C$. Write
$C \cap X=C_{X}$ and $C \cap Y = C_{Y}$. We have $|C_{X}| \leq \frac{|X|}{2}$ and $|C_{Y}| \leq \frac{|Y|}{2}$. Therefore $|X \backslash C_{X}| \geq \frac{|X|}{2}$ and $|Y \backslash C_{Y}| \geq \frac{|Y|}{2}$, and
by the definition of $C$ and $G$, we know that $X \backslash C_{X}$ is complete to $Y \backslash C_{Y}$. Denote by $T_{1}$ a transitive subtournament of size
$tr(T|(X \backslash C_{X}))$ in $T|(X \backslash C_{X})$. Denote by $T_{2}$ a transitive subtournament of size $tr(T|(Y \backslash C_{Y}))$ in
$T|(Y \backslash C_{Y})$.
From the $\epsilon$-criticality of $T$ and since $|X \backslash C_{X}| \geq \frac{cn}{2}$, $|Y \backslash C_{Y}| \geq \frac{cn}{2}$, we also have: $|T_{1}| \geq (\frac{cn}{2})^{\epsilon}$, $|T_{2}| \geq (\frac{cn}{2})^{\epsilon}$.
We can merge $T_{1}$ and $T_{2}$ to obtain a bigger transitive subtournament $T_{3}$ with $|T_{3}| \geq 2(\frac{c}{2})^{\epsilon} n^{\epsilon}$.
Therefore, since $T$ is $\epsilon$-critical, we have: $2(\frac{c}{2})^{\epsilon} < 1$. But this contradicts the condition $\epsilon<\log_{\frac{c}{2}}(\frac{1}{2})$.
\bbox
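The matching extracted in this proof can be computed explicitly; below is a minimal augmenting-path sketch (Kuhn's algorithm on a toy example, not code from the paper), where `adj[x]` lists the vertices of $Y$ joined to $x$, i.e. the backward edges:

```python
def max_matching(X, Y, adj):
    """Kuhn's augmenting-path algorithm for maximum bipartite matching.
    adj[x] = set of y in Y joined to x (here: the backward edges (y, x))."""
    match_x, match_y = {}, {}

    def augment(x, seen):
        for y in adj.get(x, ()):
            if y not in seen:
                seen.add(y)
                # y is free, or its partner can be re-matched elsewhere.
                if y not in match_y or augment(match_y[y], seen):
                    match_x[x], match_y[y] = y, x
                    return True
        return False

    for x in X:
        augment(x, set())
    return match_x

# Toy example: the backward-edge graph admits a matching of size 3,
# e.g. (1, 'a'), (2, 'b'), (3, 'c').
adj = {1: {'a', 'b'}, 2: {'b'}, 3: {'b', 'c'}}
m = max_matching([1, 2, 3], ['a', 'b', 'c'], adj)
print(len(m))  # 3
```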
Next we introduce one more structure that will be crucial to prove the conjecture for $L_{1}$
and $L_{2}$. Again, its definition can be found in \cite{chorochudber}, but we give it again for the
reader's convenience.
Let $c>0$, $0<\lambda<1$ be constants, and let $w$ be a $\{0,1\}$-vector of length $|w|$. Let $T$ be a tournament with $|T|=n$. A sequence of disjoint subsets
$(S_{1},S_{2},...,S_{|w|})$ of $V(T)$ is a $(c, \lambda, w)$-{\em structure} if
\begin{itemize}
\item whenever $w_i=0$ we have $|S_{i}| \geq cn$ (we say that $S_i$ is a {\em linear set})
\item whenever $w_i=1$ the set $T|S_{i}$ is transitive and $|S_{i}| \geq c \cdot tr(T)$ (we say that $S_i$ is a {\em transitive set})
\item $d(S_{i},S_{j}) \geq 1 - \lambda$ for all $1 \leq i < j \leq |w|$.
\end{itemize}
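The density condition in this definition translates directly into code; a small illustrative checker (the helper name is hypothetical, not from \cite{chorochudber}):

```python
def directed_density(X, Y, edges):
    """d(X, Y): fraction of pairs (x, y) with x in X, y in Y and edge x -> y."""
    hits = sum((x, y) in edges for x in X for y in Y)
    return hits / (len(X) * len(Y))

# Toy 4-vertex tournament: 0->1, 0->2, 0->3, 1->2, 1->3, 3->2.
edges = {(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (3, 2)}
print(directed_density({0, 1}, {2, 3}, edges))  # 1.0: {0,1} is complete to {2,3}
```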
The following was proved in \cite{chorochudber}:
\begin{theorem}
\label{thirdtechnicallemma}
Let $S$ be a tournament, let $w$ be a $\{0,1\}$-vector, and let
$0 < \lambda_{0} < \frac{1}{2}$ be a constant. Then there exist $\epsilon_{0},c_{0}>0$ such that for every $0<\epsilon<\epsilon_{0}$, every $S$-free $\epsilon$-critical tournament contains a $(c_{0}, \lambda_{0}, w)$-structure.
\end{theorem}
\begin{center}
\begin{tikzpicture}[every path/.style={>=latex},every node/.style={draw,circle}]
\def \n {5}
\def \radius {3cm}
\node[input] (a) at (3.3,0) { $A_{1}$ };
\node[matrx] (b) at (6.6,0) { $T_{1}$ };
\node[input] (c) at (9.9,0) { $A_{2}$ };
\node[input] (d) at (13.5,0) { $A_{3}$ };
\node[matrx] (e) at (16.8,0) { $T_{2}$ };
\draw[vecArrow] (a) to (b);
\draw[innerWhite] (a) to (b);
\draw[vecArrow] (b) to (c);
\draw[innerWhite] (b) to (c);
\draw[vecArrow] (c) to (d);
\draw[innerWhite] (c) to (d);
\draw[vecArrow] (d) to (e);
\draw[innerWhite] (d) to (e);
\end{tikzpicture}
\end{center}
\begin{center}
Fig.4 Schematic representation of the $(c,\lambda,w)$-structure. This structure consists of three linear sets: $A_{1},A_{2},A_{3}$ and two transitive sets: $T_{1}$ and $T_{2}$. The arrows indicate the orientation of most
of the edges going between different elements of the $(c,\lambda,w)$-structure. Each $T_{i}$ satisfies: $|T_{i}| \geq c \cdot tr(T)$ and each $A_{i}$ satisfies: $|A_{i}| \geq c \cdot n$, where $n=|T|$. We have here: $w=(0,1,0,0,1)$.
\end{center}
We say that a $(c,\lambda,w)$-structure is \textit{smooth} if the last condition of the definition of the
$(c,\lambda,w)$-structure is satisfied in a stronger form, namely we have: $d(\{v\}, S_{j}) \geq 1 - \lambda$ for $v \in S_{i}$ and $d(S_{i},\{v\}) \geq 1 - \lambda$ for $v \in S_{j}$, $i<j$.
Theorem \ref{thirdtechnicallemma} leads to the following conclusion:
\begin{theorem}
\label{smooththeorem}
Let $S$ be a tournament, let $w$ be a $\{0,1\}$-vector, and let
$0 < \lambda_{1} < \frac{1}{2}$ be a constant. Then there exist $\epsilon_{1},c_{1}>0$ such that for every $0<\epsilon<\epsilon_{1}$, every $S$-free $\epsilon$-critical tournament contains a smooth $(c_{1}, \lambda_{1}, w)$-structure.
\end{theorem}
\Proof
By Theorem \ref{thirdtechnicallemma}, there exist
$\epsilon_{0},c_{0}>0$ such that for every $0<\epsilon<\epsilon_{0}$, every $S$-free $\epsilon$-critical tournament contains a $(c_{0}, \lambda_{0}, w)$-structure.
Denote this structure by $(A_{1},...,A_{k})$. Let $M$ be a positive constant.
For an ordered pair $(i,j)$, where $i,j \in \{1,...,k\}$ and $i \neq j$ let $Bad^{M}(i,j)$ be the set of these vertices $v \in A_{i}$ such that
\begin{itemize}
\item $v$ is adjacent from more than $M \lambda_{0} |A_{j}|$ vertices of $A_{j}$ if $i < j$ and
\item $v$ is adjacent to more than $M \lambda_{0} |A_{j}|$ vertices of $A_{j}$ if $i > j$.
\end{itemize}
Note first that $|Bad^{M}(i,j)| \leq \frac{|A_{i}|}{M}$. Indeed, otherwise by the definition of $Bad^{M}(i,j)$, the number of backward edges between $A_{i}$ and $A_{j}$
is more than $\lambda_{0}|A_{i}||A_{j}|$ which contradicts the fact that $d(A_{\min(i,j)},A_{\max(i,j)}) \geq 1 - \lambda_{0}$.
Now let $A_{i}^{M} = A_{i} \setminus \bigcup_{j \in \{1,...,k\}, j \neq i} Bad^{M}(i,j)$.
From the fact that $|Bad^{M}(i,j)| \leq \frac{|A_{i}|}{M}$, we get $|A_{i}^{M}| \geq (1-\frac{k-1}{M})|A_{i}|$. Now take $M=2k$.
Then we obtain $|A_{i}^{M}| \geq \frac{|A_{i}|}{2}$.
Consider the sequence $(A_{1}^{M},...,A_{k}^{M})$. Take a pair $\{i,j\}$, where $i,j \in \{1,...,k\}$ and $i < j$.
Note that by the definition of $A_{i}^{M}$, we know that every vertex $v \in A_{i}^{M}$ is adjacent from at most $M\lambda_{0}|A_{j}|$ vertices of
$A_{j}^{M}$. For $M=2k$, since $|A_{j}^{M}| \geq \frac{|A_{j}|}{2}$, we obtain: every vertex $v \in A_{i}^{M}$ is adjacent from at most
$2M\lambda_{0}|A_{j}^{M}|$ vertices of $A_{j}^{M}$. Similarly, we get: every vertex $v \in A_{j}^{M}$ is adjacent to at most
$2M\lambda_{0}|A_{i}^{M}|$ vertices of $A_{i}^{M}$. Consequently, $(A_{1}^{M},...,A_{k}^{M})$ is a smooth $(\frac{c_{0}}{2},2M\lambda_{0},w)$-structure.
Thus taking: $\lambda_{0} = \frac{\lambda_{1}}{4k}$ and $c_{1} = \frac{c_{0}}{2}$, we complete the proof.
\bbox
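The cleaning step of this proof is constructive; a toy sketch with $M=2k$, as in the proof (illustrative only, not code accompanying the paper):

```python
def smooth(sets, edges, lam):
    """Remove the Bad vertices from each A_i, as in the proof above.

    sets: list of vertex sets A_1..A_k (in structure order);
    edges: set of directed pairs (u, v) meaning u beats v;
    lam: the lambda_0 of the structure.  Uses M = 2k.
    """
    k = len(sets)
    M = 2 * k
    cleaned = []
    for i, A in enumerate(sets):
        keep = set()
        for v in A:
            bad = False
            for j, B in enumerate(sets):
                if j == i:
                    continue
                if i < j:   # v should beat almost all of B
                    back = sum((b, v) in edges for b in B)
                else:       # almost all of B should beat v
                    back = sum((v, b) in edges for b in B)
                if back > M * lam * len(B):
                    bad = True
                    break
            if not bad:
                keep.add(v)
        cleaned.append(keep)
    return cleaned

# Toy example: A_1 -> A_2 almost completely, except vertex 9 of A_1,
# which loses to all of A_2 and is therefore removed.
A1, A2 = set(range(10)), set(range(10, 20))
edges = {(a, b) for a in A1 for b in A2 if a != 9} | {(b, 9) for b in A2}
cleaned = smooth([A1, A2], edges, 0.1)
print(len(cleaned[0]), len(cleaned[1]))  # 9 10
```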
\section{The Erd\H{o}s-Hajnal conjecture holds for $L_{1}$ and $L_{2}$}
We are ready to prove that both $L_{1}$ and $L_{2}$ satisfy the conjecture.
We will use two special orderings of the vertices of $L_{1}$ and two special
orderings of the vertices of $L_{2}$.
\begin{center}
\begin{tikzpicture}[every path/.style={>=latex},every node/.style={draw,circle}]
\def \n {5}
\def \radius {3cm}
\node (a) at (1.2,0) { $v_{3}$ };
\node (b) at (2.4,0) { $v_{4}$ };
\node (c) at (3.6,0) { $v_{5}$ };
\node (d) at (4.8,0) { $v_{1}$ };
\node (e) at (6.0,0) { $v_{2}$ };
\node (f) at (7.2,0) { $v_{6}$ };
\node (g) at (10.2-1,0) { $v_{2}$ };
\node (h) at (11.4-1,0) { $v_{4}$ };
\node (i) at (12.6-1,0) { $v_{1}$ };
\node (j) at (13.8-1,0) { $v_{3}$};
\node (k) at (15.0-1,0) { $v_{6}$ };
\node (l) at (16.2-1,0) { $v_{5}$ };
\draw[->] (e) edge [out=150,in=30] (a);
\draw[->] (d) edge [out=210,in=330] (a);
\draw[->] (e) edge [out=210,in=330] (b);
\draw[->] (f) edge [out=150,in=30] (c);
\draw[->] (l) edge [out=150,in=30] (i);
\draw[->] (i) edge [out=150,in=30] (g);
\draw[->] (l) edge [out=150,in=30] (g);
\draw[->] (j) edge [out=215,in=325] (h);
\end{tikzpicture}
\end{center}
\begin{center}
Fig.5 Two crucial orderings of the vertices of $L_{1}$. The left one is the forest ordering and the right one is the cyclic ordering. Notice that neither of them is a galaxy ordering.
\end{center}
The first ordering of the vertices of $L_{1}$ is as follows: $(v_{3},v_{4},v_{5},v_{1},v_{2},v_{6})$, where the set of backward edges is: $\{(v_{1},v_{3}),(v_{2},v_{4}),(v_{2},v_{3}),(v_{6},v_{5})\}$. We call it the \textit{forest ordering of $L_{1}$} since under this ordering the graph of backward edges is a forest.
The second ordering of the vertices of $L_{1}$ is as follows: $(v_{2},v_{4},v_{1},v_{3},v_{6},v_{5})$, where the set of backward edges is: $\{(v_{1},v_{2}),(v_{5},v_{1}),(v_{5},v_{2}),(v_{3},v_{4})\}$.
We call it the \textit{cyclic ordering of $L_{1}$}.
\begin{center}
\begin{tikzpicture}[every path/.style={>=latex},every node/.style={draw,circle}]
\def \n {5}
\def \radius {3cm}
\node (a) at (1.2,0) { $v_{1}$ };
\node (b) at (2.4,0) { $v_{2}$ };
\node (c) at (3.6,0) { $v_{3}$ };
\node (d) at (4.8,0) { $v_{4}$ };
\node (e) at (6.0,0) { $v_{6}$ };
\node (f) at (7.2,0) { $v_{5}$ };
\node (g) at (10.2-1,0) { $v_{2}$ };
\node (h) at (11.4-1,0) { $v_{4}$ };
\node (i) at (12.6-1,0) { $v_{1}$ };
\node (j) at (13.8-1,0) { $v_{6}$};
\node (k) at (15.0-1,0) { $v_{3}$ };
\node (l) at (16.2-1,0) { $v_{5}$ };
\draw[->] (f) edge [out=150,in=30] (a);
\draw[->] (d) edge [out=210,in=330] (a);
\draw[->] (f) edge [out=210,in=330] (b);
\draw[->] (e) edge [out=150,in=30] (c);
\draw[->] (l) edge [out=150,in=30] (i);
\draw[->] (i) edge [out=150,in=30] (g);
\draw[->] (l) edge [out=150,in=30] (g);
\draw[->] (k) edge [out=215,in=325] (h);
\end{tikzpicture}
\end{center}
\begin{center}
Fig.6 Two crucial orderings of vertices of $L_{2}$. The left one is the forest ordering and the right one is the cyclic ordering. Notice that neither of them is a galaxy ordering.
\end{center}
The first ordering of the vertices of $L_{2}$ is as follows: $(v_{1},v_{2},v_{3},v_{4},v_{6},v_{5})$, where the set of backward edges is: $\{(v_{4},v_{1}),(v_{5},v_{2}),(v_{5},v_{1}),(v_{6},v_{3})\}$. We call it the \textit{forest ordering of $L_{2}$}.
The second ordering of the vertices of $L_{2}$ is as follows: $(v_{2},v_{4},v_{1},v_{6},v_{3},v_{5})$, where the set of backward edges is: $\{(v_{1},v_{2}),(v_{5},v_{1}),(v_{5},v_{2}),(v_{3},v_{4})\}$.
We call it the \textit{cyclic ordering of $L_{2}$}.
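Since the forest and cyclic orderings describe the same labeled tournaments, the edge sets they generate must coincide; this can be checked mechanically (an illustrative sketch, writing vertex $v_i$ as $i$):

```python
def edge_set(order, backward):
    """Edges of the tournament given by an ordering and its backward edges."""
    edges = set(backward)
    for i, u in enumerate(order):
        for w in order[i + 1:]:
            if (w, u) not in backward:
                edges.add((u, w))  # all non-backward pairs point forward
    return edges

L1_forest = edge_set((3, 4, 5, 1, 2, 6), {(1, 3), (2, 4), (2, 3), (6, 5)})
L1_cyclic = edge_set((2, 4, 1, 3, 6, 5), {(1, 2), (5, 1), (5, 2), (3, 4)})
L2_forest = edge_set((1, 2, 3, 4, 6, 5), {(4, 1), (5, 2), (5, 1), (6, 3)})
L2_cyclic = edge_set((2, 4, 1, 6, 3, 5), {(1, 2), (5, 1), (5, 2), (3, 4)})

print(L1_forest == L1_cyclic, L2_forest == L2_cyclic)  # True True
print(L1_forest == L2_forest)  # False: the labeled edge sets differ
```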
\begin{theorem}
\label{l2-theorem}
Tournament $L_{2}$ satisfies the Erd\H{o}s-Hajnal conjecture.
\end{theorem}
\Proof
We will prove that every $L_{2}$-free tournament $T$ on $n$ vertices contains a transitive subtournament
of size at least $n^{\epsilon}$ for $\epsilon > 0$ small enough.
Assume for a contradiction that this is not the case and let
$T$ be a smallest counterexample; then $T$ is an $L_{2}$-free $\epsilon$-critical tournament, and by Theorem \ref{remarktheorem} we may assume that $|T|$ is large enough. We will derive a contradiction, showing that $T$ in fact contains a transitive subtournament of order $n^{\epsilon}$.
By Theorem \ref{smooththeorem} we extract from $T$ a smooth $(c_{0}(\lambda_{0}), \lambda_{0}, w)$-structure $\chi_{0}=(A_{1},A_{2},T_{0},A_{3},A_{4},A_{5})$,
where $w=(0,0,1,0,0,0)$ and $\lambda_{0} > 0$ is an arbitrary positive number.
We will fix $\lambda_{0}$ to be small enough.
We then take an arbitrary subset $S$ of $T_{0}$ of maximum size subject to $|S|$ being divisible by $3$. Notice that $|S| \geq |T_{0}|-2$.
Since $|T_{0}| \geq c_{0}(\lambda_{0}) tr(T)$ and $|T|$ is large, it follows
that $|T_{0}| \geq 4$, and so $|S| \geq \frac{|T_{0}|}{2}$.
Now take the sequence $\chi=(A_{1},A_{2},S,A_{3},A_{4},A_{5})$.
Since $(A_{1},A_{2},T_{0},A_{3},A_{4},A_{5})$ is a smooth $(c_{0}(\lambda_{0}), \lambda_{0}, w)$-structure
and $S$ is a subset of $T_{0}$ of size $|S| \geq \frac{|T_{0}|}{2}$, we get that $(A_{1},A_{2},S,A_{3},A_{4},A_{5})$
is a smooth $(c(\lambda), \lambda, w)$-structure for $\lambda = 2\lambda_{0}$ and $c(\lambda) = \frac{c_{0}(\lambda_{0})}{2} = \frac{c_{0}(\frac{\lambda}{2})}{2}$.
We partition $S$ into three subsets: the set of first $\frac{|S|}{3}$ vertices called $T_{1}$, the set of next $\frac{|S|}{3}$ vertices called $T_{2}$ and the remaining part called $T_{3}$ (here we refer to the transitive ordering of $S$).
By Theorem \ref{veryeasylemma} we may assume that there exist $x_{1},...,x_{k} \in A_{1}$
and $y_{1},...,y_{k} \in A_{5}$ such that $k \geq \frac{cn}{2}$ and $(y_{i},x_{i})$ is an edge for $i=1,...,k$. Denote $X=\{x_{1},...,x_{k}\}$, $Y=\{y_{1},...,y_{k}\}$.
Let $X_{wrong}$ be the set of vertices of $X$ that are complete to
$T_{3}$, and let $Y_{wrong}$ be the set of vertices of $Y$ that are complete
from $T_{1}$. Assume first that $|X_{wrong}| \geq \frac{k}{3}$.
Then $X_{wrong}$ is complete to $T_{3}$, $|X_{wrong}| \geq \frac{c}{6}n$, and $|T_{3}| \geq \frac{c}{3}tr(T)$, which contradicts Theorem \ref{firsttechnicallemma} if $\epsilon < \log_{\frac{c}{6}}(1-\frac{c}{3})$. We get a
similar contradiction if $|Y_{wrong}| \geq \frac{k}{3}$.
Therefore $|X_{wrong}| < \frac{k}{3}$ and $|Y_{wrong}| < \frac{k}{3}$. Write $\mathcal{I} = \{i \in \{1,...,k\}: x_{i} \notin X_{wrong} \land y_{i} \notin Y_{wrong}\}$. We have: $|\mathcal{I}| > \frac{k}{3}$, and in particular $\mathcal{I} \neq \emptyset$. Fix $j \in \mathcal{I}$. Let $u \in T_1$ be an outneighbor of $y_{j}$,
and let $v \in T_3$ be an inneighbor of $x_{j}$. Note that since $u \in T_{1}$ and $v \in T_{3}$, $(u,v)$ is an edge.
Assume first that both $(x_{j},u)$ and $(v,y_{j})$ are edges.
Let $T_{2}^{*}$ be the set of vertices of $T_2$ that are outneighbors of
$x_{j}$ and inneighbors of $y_{j}$.
From the fact that $\chi$ is smooth, we get: $|T_{2}^{*}| \geq |T_{2}|-2\lambda|S| \geq \frac{c}{3}(1-6\lambda)tr(T) \geq \frac{c}{6}tr(T)$ if we take $\lambda \leq \frac{1}{12}$.
Let $A_{3}^{*}$ be the set of vertices of $A_{3}$ that are outneighbors of $x_{j}$, $u$ and $v$, and inneighbors of $y_{j}$. Again, from the fact that $\chi$ is smooth, we get:
$|A_{3}^{*}| \geq |A_{3}|(1-4\lambda) \geq \frac{c}{2}n$ for $\lambda \leq \frac{1}{8}$.
Now, if $\epsilon < \log_{\frac{c}{2}}(1-\frac{c}{6})$, by Theorem
\ref{firsttechnicallemma} there exist $z \in A_{3}^{*}$ and $w \in T_{2}^{*}$ such
that $(z,w)$ is an edge, and so $(x_{j},u,w,v,z,y_{j})$ is the forest ordering
of $L_{2}$, a contradiction.
Thus either $(u,x_{j})$ is an edge or $(y_{j},v)$ is an edge.
Assume that the former holds (if the latter holds, the argument is similar, and
we omit it). Let $A_{2}^{*}$ be the set of vertices of $A_{2}$ that are outneighbors of $x_{j}$ and inneighbors of $u$ and $y_{j}$. From the fact that $\chi$ is smooth, we get: $|A_{2}^{*}| \geq |A_{2}|(1-3\lambda) \geq \frac{c}{2}n$ for $\lambda \leq \frac{1}{6}$. Let $A_{4}^{*}$ be the set of vertices of $A_{4}$ that are outneighbors of $x_{j}$ and $u$, and inneighbors of $y_{j}$. From the fact that $\chi$ is smooth, we get: $|A_{4}^{*}| \geq |A_{4}|(1-3\lambda) \geq \frac{c}{2}n$ for $\lambda \leq \frac{1}{6}$.
Now, if $\epsilon < \log_{\frac{c}{4}}(\frac{1}{2})$, Theorem \ref{veryeasylemma} implies that there exist
$z \in A_{4}^{*}$ and $ w \in A_{2}^{*}$ such that $(z,w)$ is an edge.
Let $A_{3}^{*}$ be the set of vertices of $A_{3}$ that are outneighbors of $x_{j},w,u$, and inneighbors of $z,y_{j}$. From the fact that $\chi$ is smooth, we get:
$|A_{3}^{*}| \geq |A_{3}|(1-5\lambda) \geq \frac{c}{2}n$ for $\lambda < \frac{1}{10}$.
In particular, $A_{3}^{*}$ is nonempty. Let $s \in A_{3}^{*}$.
Now $(x_{j},w,u,s,z,y_{j})$ is the cyclic ordering of $L_{2}$, again a contradiction. This completes the proof.
\bbox
\begin{theorem}
\label{l1-theorem}
Tournament $L_{1}$ satisfies the Erd\H{o}s-Hajnal conjecture.
\end{theorem}
\Proof
The proof goes along the same line as the proof of the previous theorem.
Again we take an $\epsilon$-critical tournament $T$ that this time is $L_{1}$-free, and
get a contradiction for $\epsilon>0$ small enough.
By Theorem \ref{smooththeorem} we extract from $T$ a smooth $(c_{0}(\lambda_{0}), \lambda_{0}, w)$-structure $\chi_{0}=(A_{1},A_{2},T_{0},A_{3},A_{4},A_{5}, A_{6})$,
where $w=(0,0,1,0,0,0,0)$ and $\lambda_{0} > 0$ is an arbitrary positive number.
We will fix $\lambda_{0}$ to be small enough.
As in the previous proof, we use $\chi_{0}$ to construct a $(c(\lambda),\lambda,w)$-structure $\chi=(A_{1},A_{2},S,A_{3},A_{4},A_{5}, A_{6})$, where
$|S|$ is divisible by $3$.
We partition $S$ into three subsets: the set of first $\frac{|S|}{3}$ vertices called $T_{1}$, the set of next $\frac{|S|}{3}$ vertices called $T_{2}$ and the remaining part
called $T_{3}$.
As in the previous proof, we may assume that there exist $x_{j} \in A_{1},y_{j} \in A_{5}$
such that $(y_{j},x_{j})$ is an edge, $y_{j}$ has an outneighbor $u$ in $T_{1}$, and $x_{j}$ has an inneighbor $v$ in $T_{3}$.
Assume first that both $(x_{j},u)$ and $(v,y_{j})$ are edges.
Now denote by $T_{2}^{*}$ the set of vertices of $T_{2}$ that are outneighbors of $x_{j}$, and inneighbors of $y_{j}$. From the fact that $\chi$ is smooth, we get: $|T_{2}^{*}| \geq |T_{2}|-2\lambda|S| \geq \frac{c}{3}(1-6\lambda)tr(T) \geq \frac{c}{6}tr(T)$ if we take $\lambda \leq \frac{1}{12}$.
Let us also denote by $A_{6}^{*}$ the set of vertices of $A_{6}$ that are outneighbors of $x_{j}$, $u$, $v$ and $y_{j}$. Again, from the fact that $\chi$ is smooth, we get:
$|A_{6}^{*}| \geq |A_{6}|(1-4\lambda) \geq \frac{c}{2}n$ for $\lambda \leq \frac{1}{8}$.
Now, if $\epsilon < \log_{\frac{c}{2}}(1-\frac{c}{6})$,
Theorem \ref{firsttechnicallemma} implies that there exist $z \in A_{6}^{*}$
and $w \in T_{2}^{*}$ such that $(z,w)$ is an edge, and so
$(x_{j},u,w,v,y_{j},z)$ is the forest ordering of $L_{1}$, a contradiction.
Thus either $(u,x_{j})$ is an edge or $(y_{j},v)$ is an edge.
We assume that the former holds (if the latter holds, the argument is similar
and we omit it).
Let $A_{2}^{*}$ be the set of vertices of $A_{2}$ that are outneighbors of $x_{j}$ and inneighbors of $u$ and $y_{j}$. From the fact that $\chi$ is smooth, we get: $|A_{2}^{*}| \geq |A_{2}|(1-3\lambda) \geq \frac{c}{2}n$ for $\lambda \leq \frac{1}{6}$. Let $A_{3}^{*}$ be the set of vertices of $A_{3}$ that are outneighbors of $x_{j}$ and $u$, and inneighbors of $y_{j}$. From the fact that $\chi$ is smooth, we get: $|A_{3}^{*}| \geq |A_{3}|(1-3\lambda) \geq \frac{c}{2}n$ for $\lambda \leq \frac{1}{6}$.
Now, if $\epsilon < \log_{\frac{c}{4}}(\frac{1}{2})$,
Theorem \ref{veryeasylemma} implies that there exist
$z \in A_{3}^{*}$ and $w \in A_{2}^{*}$ such that $(z,w)$ is an edge.
Denote by $A_{4}^{*}$ the set of vertices of $A_{4}$ that are outneighbors of $x_{j},w,u,z$, and inneighbors of $y_{j}$. From the fact that $\chi$ is smooth, we get:
$|A_{4}^{*}| \geq |A_{4}|(1-5\lambda) \geq \frac{c}{2}n$ for $\lambda < \frac{1}{10}$.
In particular, $A_{4}^{*}$ is nonempty. Let $s \in A_{4}^{*}$.
Now $(x_{j},w,u,z,s,y_{j})$ is a cyclic ordering of $L_{1}$, again a
contradiction.
This completes the proof.
\bbox
We are now ready to finish the proof of Theorem \ref{six_vertex_theorem}.
\Proof
By Theorem \ref{technical-theorem}, it suffices to prove the conjecture for $L_{1},L_{1}^{c},L_{2}$ and $L_{2}^{c}$. We have just proved it for $L_{1}$ and $L_{2}$; since the conjecture is preserved under reversing all edges, $L_{1}^{c}$ and $L_{2}^{c}$ satisfy it as well. This completes the proof.
\bbox
\section{Introduction}
Despite contributing only a small fraction of the overall cosmic-ray flux, cosmic-ray electrons and positrons (CREs) provide an important and unique probe of our local Galactic neighborhood.
CREs lose energy rapidly via inverse Compton scattering and synchrotron processes while propagating in the Galaxy, effectively placing a maximal propagation distance for TeV electrons of order $\sim$1 kpc\cite{KOBA}.
CREs at TeV energies provide a direct measurement of the local cosmic-ray accelerators and diffusion.
The Fermi-$LAT$\cite{LAT}, and more recently AMS\cite{AMS}, have measured the CRE spectrum with high statistics using satellite-based experiments up to energies of several hundred GeV.
However, above those energies these instruments run into difficulty from the combination of the steep CRE spectrum and their relatively small acceptance.
Ground-based experiments that utilize the Imaging Atmospheric Cherenkov Technique (IACT) have the capability to extend the CRE spectrum to higher energies due to their large
collection area ($\sim10^{5}$m$^{2}$).
HESS\cite{HESS1,HESS2} and MAGIC\cite{MAGIC} have demonstrated this ability and previously measured the CRE spectrum from the ground up to TeV energies.
Their results provide evidence of at least one nearby CRE source and agree with satellite measurements within systematic uncertainties where there is energy overlap.
The combined picture that has emerged is one where the CRE spectrum is mostly flat and can be described by a simple power law from $\sim$10 GeV up to just below $\sim$1 TeV, above which
HESS has measured a spectral steepening.
MAGIC data\cite{MAGIC} are consistent with a single power law up to $\sim$3 TeV.
Additionally, the positron fraction spectrum, $\phi(e^{+})/(\phi(e^{-})+\phi(e^{+}))$, has now been measured above 10 GeV by the HEAT\cite{HEAT}, PAMELA\cite{PAM}, Fermi-$LAT$\cite{LATfrac}, and AMS\cite{AMSfrac} experiments.
This ratio is found to rise with increasing energy up to $\sim$200 GeV, above which it appears to flatten out.
Positrons are mainly produced in secondary interactions between cosmic rays and the interstellar gas.
These results point to the possible existence of an additional local source of positrons on top of secondary production.
Explanations could include additional production from nearby standard astrophysical objects, such as pulsars or supernova remnants, or more exotic production mechanisms, such as the annihilation or decay of particle Dark Matter.
Conversely, recent studies have also proposed a more detailed propagation model\cite{prop} or a better accounting of secondary production\cite{waxman} to describe the results.
A full understanding of this situation will require detailed input about both the positron fraction and the CRE spectrum.
While the excitement of the unexpected excess found in earlier ATIC data\cite{ATIC} is now largely over, a high statistics measurement of the CRE spectrum to TeV energies will help us build a clear picture of our local CRE emitters.
\section{Methods}
Data presented here were collected by the VERITAS telescope array located at the Fred Lawrence Whipple Observatory (FLWO) in southern Arizona (31$^{\circ}$ 40$'$N, 110$^{\circ}$ 57$'$W, 1.3km a.s.l.).
VERITAS is a ground-based array of four telescopes sensitive to gamma- and cosmic-rays above $\sim$100 GeV.
While VERITAS is primarily a gamma-ray instrument, CREs are a diffuse source across the sky and so are collected during all astrophysical observations.
\begin{figure}[t]
\begin{center}
\includegraphics[width = 4.5in]{figure1_forArchive.eps}
\caption{BDT response for proton MC (gray filled area) and the full dataset (blue points) for the energy range 630 GeV to 1 TeV.
The inset shows the ratio, data/MC, over the same range.
The agreement is very good except close to the limits of the distribution.
As we approach the side dominated by background-like events, -1.0, we find an excess in the data over the proton MC.
We expect an excess here from helium and higher-Z primaries, particularly since helium makes up $\sim20\%$ of the cosmic-ray flux.
Likewise, as we approach the side dominated by signal-like events, 1.0, we find an excess in the data over the proton MC that arises from the CREs measured in this study.}
\label{dataMC}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width = 5.5in]{figure2_forArchive.eps}
\caption{The top three panels present the fit results for each of the 15 trials in three energy bins.
The x-axis represents the trial number and the y-axis represents the measured electron fraction (the individual trial error bars are the 1$\sigma$ errors from the fit).
Also shown is the mean value and uncertainty on the mean for each bin.
The bottom five panels show the individual fit results for the first five trials in the 631$-$708 GeV energy bin
(the black points are data, the best fit line is wide and red, the proton MC line is thin and green, and the electron MC line is dashed and orange).
For each fit we list the best fit electron and proton fractions and the $\chi^{2}/dof$ for the comparison between the data and best fit.
}
\label{METHOD}
\end{center}
\end{figure}
To isolate this signal, we apply strict analysis cuts to select only the best reconstructed events under pristine weather conditions.
Data events must have four good telescope images and reconstruct within the inner $1^{\circ}$ of the camera (the VERITAS field of view is $3.5^{\circ}$).
A strict cut is placed on the distance between the array center and the reconstructed array core, $coreR<200$ m.
Only extragalactic fields are considered in order to reduce the contamination from the Galactic Plane diffuse gamma-ray flux.
Additionally, all detected or candidate gamma-ray sources within each field of view are excluded.
To reduce the number of detector configurations in our simulations, we selected only data collected between September, 2009 and July, 2012, which are the dates of the two major VERITAS hardware upgrades (telescope relocation and PMT replacement, respectively).
We also restrict the mean zenith angle of the data to be between $65^{\circ}$ and $75^{\circ}$, excluding data runs with mean values outside of this range (a data run is typically $\sim$20 minutes).
To accumulate sufficient Monte Carlo statistics, we generated simulations at a single zenith angle, $70^{\circ}$.
This restricted data zenith range ensures the level of data/MC agreement necessary in this analysis.
296 hours of live-time remain after all these cuts.
We rely heavily on our Monte Carlo for interpretation and signal extraction.
To simulate the electromagnetic and hadronic showers, we used Corsika 6.970\cite{corsika}, with the QGSJetII.3 and URQMD 1.3cr underlying event generators, and GrISUDet 5.0.0\cite{grisu} for the VERITAS detector response.
We generated electrons, protons, and helium showers with a $4^{\circ}$ radius on the sky to approximate the isotropic and diffuse cosmic-ray flux.
Larger simulation radii were tested and found to not improve this approximation.
For signal and background discrimination we use Boosted Decision Trees (BDTs) that were integrated into the standard VERITAS analysis chain using the ROOT TMVA\cite{TMVA} framework.
The BDTs were trained with a diffuse electron MC sample (signal) and a representative sub-sample of the data chosen randomly from the full dataset (background).
We used four array-level shower variables in the training and event discrimination: MSCW, MSCL, $\chi^{2}(E)$, and the emission height.
MSCW and MSCL are variables based on the comparison of the spread along the major and minor axes of an ellipse fit to the camera images (length and width respectively) compared to expected values from simulations.
$\chi^{2}(E)$ represents the variability of the energy measurements in each of the four telescopes.
The emission height is the reconstructed height of the peak width of the shower.
We ensured that the trees were not overtrained by using a fraction of the full training signal and background samples to test the response.
No overtraining was found and the trees and method were also used to successfully reconstruct the Crab Nebula energy spectrum as a cross-check.
Each data event is assigned a BDT response value, from $-$1.0 to 1.0, where higher values indicate that the event is more signal-like.
Figure \ref{dataMC} shows a comparison of the BDT response for the full dataset and proton MC.
The agreement is very good except near the limits of the distribution.
As we approach the side dominated by background-like events, $-$1.0, we find an excess in the data over the proton MC.
We expect an excess here from helium and higher-Z primaries, particularly since helium makes up $\sim20\%$ of the overall cosmic-ray flux\cite{pdg}.
We investigated the BDT response of helium MC events and they are found to peak at $-$1.0 and fall off faster than proton MC, in agreement with this interpretation.
As we approach the side dominated by signal-like events, 1.0, we find an excess in the data over the proton MC that arises from the CREs measured in this study.
We select only those events with BDT response values $>0.7$, which focuses on the region that contains the majority of the signal-like events and rejects the majority of the background-like events.
We then apply an extended likelihood fit to the BDT response distribution within this region to extract the contribution of electron and proton events to the total.
This fit floats the electron and proton MC shapes relative to each other to find the best combined fit to the data.
Helium and higher-Z shower events are found to be sufficiently rejected by the BDT cut to be ignored at first order.
To estimate the final electron fraction and its uncertainty, we divide the data into sub-samples and repeat the fit on each (see Fig. \ref{METHOD}).
The mean of these separate trials is the final electron fraction with the uncertainty defined as the error on this mean.
This method removes some systematic biases from the fitting method and lets the data itself drive the statistical precision of the measured value.
One caveat of this technique is that fits in trials/bins with less than $\sim$100 data events do not return sensible results.
As a result, we use fewer trials in the higher energy bins, where statistics are limited.
The highest bin presented here is the result of a single fit to 200 events; the uncertainty quoted is the 1$\sigma$ error from the fit.
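The two-template fraction fit and the sub-sample ("trials") procedure described above can be sketched as follows. All template shapes and event counts below are synthetic toys; the real fit floats electron and proton MC BDT-response templates against VERITAS data.

```python
# Toy sketch of the two-template fraction fit and the sub-sample
# trial method. Templates and "data" here are synthetic.
import numpy as np

rng = np.random.default_rng(7)
edges = np.linspace(0.7, 1.0, 16)   # BDT response > 0.7 region

# Toy templates: "electrons" peak near 1.0, "protons" fall towards it.
ele = rng.beta(8, 2, 100_000) * 0.3 + 0.7
pro = rng.beta(2, 6, 100_000) * 0.3 + 0.7
ele_pdf = np.histogram(ele, edges)[0] / len(ele)
pro_pdf = np.histogram(pro, edges)[0] / len(pro)

def fit_fraction(counts):
    """Binned Poisson likelihood fit of the electron fraction f."""
    f = np.linspace(0.0, 1.0, 1001)[:, None]
    mu = counts.sum() * (f * ele_pdf + (1 - f) * pro_pdf)
    mu = np.clip(mu, 1e-12, None)
    nll = (mu - counts * np.log(mu)).sum(axis=1)
    return float(np.linspace(0.0, 1.0, 1001)[np.argmin(nll)])

# Toy "data" with a true electron fraction of 0.6, split into trials;
# the final estimate is the mean of the trials, with the error on the
# mean as its uncertainty.
true_f, n = 0.6, 8000
is_ele = rng.random(n) < true_f
data = np.where(is_ele, rng.beta(8, 2, n), rng.beta(2, 6, n)) * 0.3 + 0.7
fits = [fit_fraction(np.histogram(t, edges)[0])
        for t in np.array_split(rng.permutation(data), 10)]
f_hat, f_err = np.mean(fits), np.std(fits, ddof=1) / np.sqrt(len(fits))
print(f"electron fraction = {f_hat:.3f} +/- {f_err:.3f}")
```

The spread of the trial results, rather than the fit's internal error estimate, sets the quoted statistical precision, mirroring the procedure in the text.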
\begin{figure}[t]
\begin{center}
\includegraphics[width = 5.0in]{figure3_forArchive.eps}
\caption{VERITAS preliminary cosmic-ray electron spectrum as a function of energy (GeV) in solid blue circles.
The best fit to the data is represented as a dashed line and is found to be two power-laws with a break energy of 710 $\pm$ 40 GeV and spectral indices of $-$3.2 $\pm$ 0.1$_{stat}$ ($-$4.1 $\pm$ 0.1$_{stat}$) below (above) the break.
The $\chi^{2}/$dof of the fit is 0.9.
Shown for comparison are data from other experiments in the same energy range.
The gray band represents the systematic uncertainty on the VERITAS measurement.}
\label{MONEY}
\end{center}
\end{figure}
\section{Results}
We show in Fig. \ref{MONEY} the preliminary VERITAS CRE energy spectrum spanning $\sim$300 GeV to $\sim$5 TeV.
The spectrum steepens at higher energies and is best described by a broken power-law.
The best-fit break energy is found to be 710$\pm$40$_{stat}$ GeV, with best-fit spectral indices below (above) this energy of $-$3.2 $\pm$ 0.1$_{stat}$ ($-$4.1 $\pm$ 0.1$_{stat}$).
The $\chi^{2}/dof $ of this fit is 9.71/11.
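A minimal sketch of such a broken power-law fit is given below. The synthetic points are generated from the best-fit values quoted in the text (break at 710 GeV, indices $-$3.2/$-$4.1); the data, errors, and scan procedure are illustrative only.

```python
# Toy sketch of fitting a broken power-law to a CRE-like spectrum.
# The generated points use the quoted best-fit values; everything
# else (scatter, binning, scan) is synthetic and illustrative.
import numpy as np

def broken_power_law(E, N0, Eb, g1, g2):
    """E^g1 below the break Eb, E^g2 above, continuous at Eb."""
    return np.where(E < Eb, N0 * (E / Eb) ** g1, N0 * (E / Eb) ** g2)

rng = np.random.default_rng(3)
E = np.logspace(np.log10(300), np.log10(5000), 13)   # GeV
truth = broken_power_law(E, 1.0, 710.0, -3.2, -4.1)
flux = truth * rng.normal(1.0, 0.05, E.size)          # 5% scatter
err = 0.05 * truth

def chi2_at(Eb):
    """Chi^2 with free power laws fit on either side of a trial break."""
    model = np.empty_like(E)
    for mask in (E < Eb, E >= Eb):
        g, lnN = np.polyfit(np.log(E[mask] / Eb), np.log(flux[mask]), 1)
        model[mask] = np.exp(lnN) * (E[mask] / Eb) ** g
    return ((flux - model) ** 2 / err ** 2).sum()

# Crude grid scan over the trial break energy.
Eb_grid = np.linspace(500.0, 1000.0, 101)
chi2 = [chi2_at(eb) if min((E < eb).sum(), (E >= eb).sum()) >= 2
        else np.inf for eb in Eb_grid]
Eb_best = Eb_grid[int(np.argmin(chi2))]
print(f"best-fit break energy ~ {Eb_best:.0f} GeV")
```

With only a handful of points per decade, the break position is constrained rather coarsely, which is consistent with the quoted $\pm$40 GeV statistical uncertainty.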
The gray band represents the systematic uncertainty, which is dominated by the $\sim20\%$ uncertainty on the VERITAS absolute energy scale.
This translates into a +64\%/$-$33\% (+98\%/$-$43\%) systematic uncertainty for a spectral index of $-$3.2 ($-$4.1).
We measure an additional $10\%$ systematic uncertainty above $\sim$1 TeV that quantifies hardware uncertainties at those energies.
This additional uncertainty is added in quadrature.
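As a back-of-envelope check (not the collaboration's actual systematics calculation), propagating a $\pm20\%$ energy-scale shift through a power law $dN/dE \propto E^{-\Gamma}$ rescales the flux by $(1\mp 0.2)^{-(\Gamma-1)}$, which approximately, though not exactly, reproduces the quoted bands:

```python
# Illustrative check: a ~20% energy-scale uncertainty propagated
# through a power law E^-Gamma shifts the flux by a factor
# (1 -/+ 0.2)^-(Gamma - 1), roughly reproducing the quoted
# +64%/-33% (Gamma = 3.2) and +98%/-43% (Gamma = 4.1) bands.
def flux_shift(gamma, scale_shift):
    """Fractional flux change for a relative energy-scale shift."""
    return (1 + scale_shift) ** -(gamma - 1) - 1

for gamma in (3.2, 4.1):
    up = flux_shift(gamma, -0.2)    # energy scale low by 20%
    down = flux_shift(gamma, +0.2)  # energy scale high by 20%
    print(f"Gamma = {gamma}: {up:+.0%} / {down:+.0%}")
```

This also makes clear why the band widens above the break: the steeper index amplifies the same energy-scale uncertainty.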
Due to the similarity of gamma and electron electromagnetic showers, we cannot rule out a significant contamination of gamma-ray events within our electron data.
However, Fermi-LAT has now measured the diffuse extragalactic gamma-ray flux up to $\sim$800 GeV and finds it orders of magnitude below their own CRE measurement\cite{diffuseLAT}.
They additionally find evidence for a cutoff in the diffuse gamma-ray spectrum above a couple hundred GeV.
The CRE results shown here agree qualitatively with prior ground-based and satellite-based measurements at similar energies.
Of the many experiments studying CREs, this result represents the second high-statistics measurement of a cutoff in the CRE spectrum around $\sim$1 TeV.
The precise measurement of this energy cutoff is an important parameter in any successful model of our local CRE environment.
VERITAS has significantly more data on disk than what has been used in this study and work continues with the goal of extending our measurement out to even higher energies.
The CRE spectrum between 5 and 10 TeV is unexplored, and this is the energy range where we expect to see spectral features from individual nearby astrophysical sources (if such sources dominate the flux).
We urge caution in over-interpretation of the uptick in the final VERITAS data point since this is within 2$\sigma$ of the best fit line.
\section*{Acknowledgements}
This research is supported by grants from the U.S. Department of Energy Office of Science, the U.S. National Science Foundation and the Smithsonian Institution, and by NSERC in Canada. We acknowledge the excellent work of the technical support staff at the Fred Lawrence Whipple Observatory and at the collaborating institutions in the construction and operation of the instrument.
Computations were made on the supercomputer Guillimin from McGill University, managed by Calcul Quebec and Compute Canada. The operation of this supercomputer is funded by the Canada Foundation for Innovation (CFI), NanoQuebec, RMGA and the Fonds de recherche du Quebec - Nature et technologies (FRQ-NT).
The VERITAS Collaboration is grateful to Trevor Weekes for his seminal contributions and leadership in the field of VHE gamma-ray astrophysics, which made this study possible.
\section{Alternate Approximation for Many Additive Agents}
\label{sec:additive}
For the special case of additive buyers, we show how to modify the analysis of
\citet{bilw-focs14} in order to achieve a much tighter bound than that stated in
Theorem~\ref{thm:main-partition}. The relaxation and stitching steps hold as
before; we prove that \citeauthor{bilw-focs14}'s single-agent approximation can
be made to respect {ex~ante}\ constraints with only a small penalty to the
approximation factor.
\begin{lemma}
\label{lem:approx-additive}
For any product distribution $\mathcal D$ and any ${\mathbf \eaprob} \in [0,1]^m$,
\[\earev(\mathcal D,\feasi[\mathsc{Additive}]) \leq 7\,\eatrev(\mathcal D,\feasi[\mathsc{Additive}]).\]
\end{lemma}
Combining Lemma~\ref{lem:approx-additive} with Lemma~\ref{lem:relaxation} and
Corollary~\ref{cor:stitching} as before, we get our improved result.
\begin{theorem}
\label{thm:main-additive}
For any product value distribution ${\mathbf \dist}$, there exists a supply-feasible
sequential two-part tariff mechanism ${\mathcal M}$ such that
\[ \mathsc{Rev}({\mathbf \dist}, \times_n \feasi[\mathsc{Additive}])\le 28\,\revm({\mathbf \dist}) \]
\end{theorem}
We devote the remainder of this section to the proof of
Lemma~\ref{lem:approx-additive}; we drop the explicit dependence on the
feasibility constraint, $\feasi[\mathsc{Additive}]$, for clarity.
We make use of the fact that relaxing the demand constraint is unnecessary
in the additive setting. This allows us to get a much tighter concentration
result in the core. Instead of defining $t_j = \eapricej + \tau$, as in
Section~\ref{sec:single-agent}, we define $t_j = \max\left(\eapricej,
\easrev(\mathcal D)\right)$. It is straightforward to verify that our core
decomposition (Lemma~\ref{lem:core-decomposition}) continues to hold under this
definition. The key insight in~\citet{bilw-focs14}'s analysis is that this
definition allows for a nontrivial bound on the variance of the core, leading to
a strong concentration result via Chebyshev's inequality, while keeping the
expected number of items in the tail small. It turns out that their analysis
goes through under an {ex~ante}\ constraint, except for a small loss in the core
due to enforcing the constraint for the bundle pricing.
In addition to the notation from Section~\ref{sec:single-agent}, we define the
following notation (see Table~\ref{tab:add-notation}). Let $r_j =
\earev[\eaprobj](\distj)$ and $r = \easrev(\mathcal D)$. Note that in the additive setting
$\easrev(\mathcal D) = \sum_j\earev[\eaprobj](\distj)$; in other words, $r = \sum_jr_j$.
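For a single item, the {ex~ante}\ constrained revenue $r_j = \earev[\eaprobj](\distj)$ is just the best posted-price revenue subject to selling with probability at most $\eaprobj$. A minimal numerical sketch, for a hypothetical Uniform$[0,1]$ value distribution (a toy chosen for illustration, not an example from the paper):

```python
# Single-item ex ante constrained revenue: max over sale probabilities
# q' <= q of q' * F^{-1}(1 - q'), sketched for Uniform[0,1] values.
import numpy as np

def ex_ante_revenue(q, inverse_cdf=lambda p: p, grid=10_001):
    """Grid search over allowed sale probabilities q' <= q."""
    qs = np.linspace(0.0, q, grid)
    return float(np.max(qs * inverse_cdf(1.0 - qs)))

# For U[0,1]: F^{-1}(1 - q') = 1 - q', so the revenue is q'(1 - q').
print(f"{ex_ante_revenue(0.3):.3f}")  # constraint binds: 0.3 * 0.7 = 0.210
print(f"{ex_ante_revenue(0.9):.3f}")  # interior optimum at q' = 1/2: 0.250
```

The first case shows a binding constraint (the revenue curve is still increasing at $q' = q$); the second recovers the unconstrained monopoly revenue.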
\begin{table}[t]
\renewcommand{\arraystretch}{1.5}
\caption{Notation for Section~\ref{sec:additive}.}
\begin{tabular}{r l l}
\hline
Notation & Definition & Formula \\
\hline
${\mathbf \eaprob}$ & {Ex~ante}\ probabilities & \\
${\mathbf \eaprice}$ & {Ex~ante}\ prices &
$\eapricej = \distj^{-1}(1-\eaprobj)\,\,\forall j\in [m]$ \\
$r_j$ & Revenue from item $j$ &
$\earev[\eaprobj](\distj)$ \\
$r$ & Item-pricing revenue &
$\sum_jr_j$ \\
$t_j$ & Core-tail threshold for item $j$ &
$\max(\eapricej, r)$ \\
$\tprobj$ & Probability item $j$ is in the tail &
$\prob[\valj\sim\distj]{\valj > t_j}$ \\
${\mathbf \dist}-{\mathbf \price}$ & Distribution ${\mathbf \dist}$ shifted to the left by ${\mathbf \price}$ &
\\
\hline
\end{tabular}
\label{tab:add-notation}
\end{table}
By Lemmas~\ref{lem:core-decomposition} and \ref{lem:core-bound}, we have
\[\earev(\mathcal D) \leq {\mathbf \eaprice}\cdot{\mathbf \eaprob} + \mathsc{Val}(\core-{\mathbf \eaprice}) +
\sum_{A\subseteq[m]}\tprobA\mathsc{Rev}(\dists^T_A)\]
Clearly ${\mathbf \eaprice}\cdot{\mathbf \eaprob} \leq \easrev(\mathcal D)$, so it remains to bound the
other two terms. Note that the {ex~ante}\ constraint has been effectively removed
from these terms; we will show that \citeauthor{bilw-focs14}'s unconstrained
bounds continue to apply here. We state these bounds and then show that our
distributions satisfy their conditions. Recalling that $\mathsc{BRev}(\core-{\mathbf \eaprice})
\leq \eatrev(\mathcal D)$ completes the proof.
\begin{lemma}[\citet{bilw-focs14}]
\label{lem:add-core}
If, for all $j$, $Var(\dist^C_j-\eapricej) \leq 2rr_j$, then
\[\mathsc{Val}(\core - {\mathbf \eaprice}) \leq
4\,\max\left\{\mathsc{BRev}(\core-{\mathbf \eaprice}),\easrev({\mathbf \dist})\right\}\]
\end{lemma}
\begin{lemma}[\citet{bilw-focs14}]
\label{lem:add-tail}
If, for all $j$, $\mathsc{Rev}(\dist^T_j) \leq r_j/\tprobj$ and $\tprobj \leq r_j/r$,
then
\[\sum_{A\subseteq[m]}\tprobA\mathsc{Rev}(\dists^T_A) \leq 2\,\easrev({\mathbf \dist})\]
\end{lemma}
The following two lemmas capture the necessary conditions.
\begin{lemma}
$Var(\dist^C_j-\eapricej) \leq 2rr_j$
\end{lemma}
\begin{proof}
We first prove $\mathsc{Rev}(\dist^C_j-\eapricej) \leq r_j$ for all $j$. Let
$\pricej^* \geq 0$ be an optimal price for selling to
$\dist^C_j-\eapricej$. Selling to $\dist^C_j$ at price $\pricej^* +
\eapricej$ gets at least as much revenue and sells with probability
at most $\eaprobj$, so $\mathsc{Rev}(\dist^C_j-\eapricej) \leq
\earev[\eaprobj](\dist^C_j)$. Now, let $q_j^*$ be the probability with
which a mechanism obtaining $\earev[\eaprobj](\dist^C_j)$ sells to
$\dist^C_j$. Clearly, since $\distj$ stochastically dominates $\dist^C_j$,
selling to $\distj$ with probability $q_j^*$ gets at least as much
revenue and satisfies the same {ex~ante}\ constraint.
Given the above, we employ an argument originally due to \citet{LY-PNAS13} to
bound the variance of $\dist^C_j-\eapricej$. Note that $\dist^C_j-\eapricej$ is
supported on $[0,r]$, but its revenue is at most $r_j$. So
$\prob[V\sim(\dist^C_j-\eapricej)]{V \geq v} \leq r_j/v$. Then
\begin{align*}
\operatorname{E}\expectarg[V\sim(\dist^C_j-\eapricej)]{V^2} &\leq \int_0^{r^2}\min(1,
r_j/\sqrt{v})\,dv \\
&= r_j^2 + \int_{r_j^2}^{r^2}\frac{r_j}{\sqrt{v}}\,dv
= 2rr_j - r_j^2 \\
&\leq 2rr_j
\end{align*}
\end{proof}
\begin{lemma}
$\mathsc{Rev}(\dist^T_j) \leq r_j/\tprobj$ and $\tprobj \leq r_j/r$.
\end{lemma}
\begin{proof}
The first inequality follows from the assumption that $t_j \geq \eapricej$.
Let $\pricej^*$ be an optimal price for $\mathsc{Rev}(\dist^T_j)$. Then, by setting price
$\pricej^*$, one can obtain $\tprobj\mathsc{Rev}(\dist^T_j)$ from $\distj$ while respecting
the {ex~ante}\ constraint $\eaprobj$. In other words,
$\tprobj\mathsc{Rev}(\dist^T_j) \le \earev[\eaprobj](\distj) = r_j$.
Recall that $t_j \ge r$. So one could sell item $j$ at price
$t_j$ and earn profit $\tprobj t_j\ge \tprobj r$ while
respecting the {ex~ante}\ constraint $\eaprobj$. But
$\earev[\eaprobj](\distj) = r_j$, therefore we must have $\tprobj r \leq
\tprobj t_j\leq r_j$.
\end{proof}
\subsection{Bounding the Core}
Recall that an item $j$ is in the core if its value $\valj$ is no more than the
threshold $t_j = \eapricej + \tau$. We will bound the {ex~ante}\
constrained social welfare of the core, $\eaVal(\core,{\mathcal F})$, in two parts: the
welfare obtained from values below ${\mathbf \eaprice}$ via a prophet inequality and the
welfare between ${\mathbf \eaprice}$ and ${\mathbf \eaprice} + \tau$ using a concentration
bound introduced by \citet{rw-15}.
Recall that $\dist^C_j$ denotes the value distribution for item $j$
conditioned on being in the core. We use $\dist^C_j-\eapricej$ to denote
the distribution of $\valj-\beta$ conditioned on $\valj$ being in the
core; in other words, $\dist^C_j-\eapricej$ is the distribution $\dist^C_j$
shifted to the left by $\eapricej$. $\core-{\mathbf \eaprice}$ is defined to be
the product of the distributions $\dist^C_j-\eapricej$. Observe that value
vectors drawn from $\core-{\mathbf \eaprice}$ are bounded by $\tau$ in every
coordinate. The following lemma breaks $\eaVal(\core,{\mathcal F})$ up into
the two components, each of which can be bounded separately.
\begin{lemma}
\label{lem:core-bound}
For any product distribution ${\mathbf \dist}$ and downwards closed feasibility
constraint ${\mathcal F}$,
$\eaVal(\core,{\mathcal F}) \leq {\mathbf \eaprice}\cdot{\mathbf \eaprob} +
\mathsc{Val}(\core-{\mathbf \eaprice},{\mathcal F})$.
\end{lemma}
\begin{proof}
Let ${\mathbf \alloc}({\mathbf \val})$ be the interim allocation rule of a ${\mathbf \eaprob}$-constrained
BIC mechanism which attains social welfare equal to $\eaVal(\core, {\mathcal F})$. Then
\begin{align*}
\eaVal(\core,{\mathcal F}) &= \sum_j\int_0^{t_j}f_j(y)\allocj(y)y\,dy \\
&\leq \sum_j\int_0^{t_j}f_j(y)\allocj(y)\eapricej \,dy + \sum_j\int_0^{t_j}f_j(y)\allocj(y)(y-\eapricej)\,dy \\
&\leq {\mathbf \eaprice}\cdot{\mathbf \eaprob} + \mathsc{Val}(\core-{\mathbf \eaprice}, {\mathcal F}).
\end{align*}
\end{proof}
We can recover $\mathsc{Val}(\core-{\mathbf \eaprice},{\mathcal F})$ using a two-part
tariff for the original distribution ${\mathbf \dist}$ by employing the following
concentration result proved by \citet{rw-15}, based on a result of
\citet{schechtman-99}.
\begin{lemma}[\citet{rw-15}]
\label{lem:schechtman}
Let ${\mathbf \val}$ be a constrained additive value function with a
downwards closed feasibility constraint, drawn from a distribution over
support $(-\infty, \tau]$ for some $\tau\ge 0$. Let $a$ be the median of
the value of the grand bundle, ${\mathbf \val}([m])$. Then, $\operatorname{E}\expectarg{{\mathbf \val}([m])} \leq
3a + 4\tau/\ln 2$.
\end{lemma}
\begin{lemma}
\label{lem:lipschitz-dist-bound}
\begin{align*}
\mathsc{Val}(\core-{\mathbf \eaprice},{\mathcal F}) & \leq 6\,\mathsc{BRev}({\mathbf \dist}-{\mathbf \eaprice},{\mathcal F}) +
\frac{8}{\ln 2} \,\easrev({\mathbf \dist},{\mathcal F})
\end{align*}
\end{lemma}
\begin{proof}
We apply Lemma~\ref{lem:schechtman} to the distribution $\core-{\mathbf \eaprice}$ to
obtain $\mathsc{Val}(\core-{\mathbf \eaprice},{\mathcal F}) \le 3a+4\tau/\ln 2$ where $a$ is the
median of the value of the grand bundle under the distribution
$\core-{\mathbf \eaprice}$, and $\tau$ is the constant defined earlier.
Consider offering the grand bundle at price $a$ to a buyer with value
drawn from $\core-{\mathbf \eaprice}$; the buyer accepts with probability
$1/2$. Therefore $\mathsc{BRev}({\mathbf \dist}-{\mathbf \eaprice})\ge\mathsc{BRev}(\core-{\mathbf \eaprice})\ge
a/2$. Next, suppose that $\tau>0$. Consider selling the items
separately at prices $t_j$ for all $j$. Recall that $\tau>0$
implies that $\prob[{\mathbf \val}\sim{\mathbf \dist}]{\exists j \text{ s.t. } \valj >
t_j} = 1-\tprobA[\emptyset]= 1/2$. So the agent buys at least one item
with probability $1/2$. Noting that $t_j>\tau$ for all $j$,
this item pricing obtains a revenue of at least $\tau/2$. Since
also $t_j \geq \eapricej$ for all $j$, we have $\tau \leq
2\easrev({\mathbf \dist},{\mathcal F})$.
\end{proof}
\subsection{Putting the Pieces Together}
Combining Lemmas~\ref{lem:core-decomposition}, \ref{lem:tail-bound},
\ref{lem:core-bound}, and \ref{lem:lipschitz-dist-bound} together, we
obtain the main result of this section:
\begin{lemma}
\label{lem:single-agent}
For any product value distribution ${\mathbf \dist}$, downward closed feasibility
constraint ${\mathcal F}$ and {ex~ante}\ constraints ${\mathbf \eaprob}$,
\[\earev({\mathbf \dist},{\mathcal F}) \leq 6\,\mathsc{BRev}({\mathbf \dist}-{\mathbf \eaprice},{\mathcal F}) +
8(1+\ln 2 + 1/\ln 2)\,\easrev({\mathbf \dist},{\mathcal F}) + {\mathbf \eaprice}\cdot{\mathbf \eaprob}.\]
\end{lemma}
It remains to bound the ${\mathbf \eaprice}\cdot{\mathbf \eaprob}$ term. Note that this term is
the revenue that would be obtained in the absence of any demand constraint
(equivalently, in the additive setting) by setting the {ex~ante}\ prices on the
items. When ${\mathcal F}$ is a partition matroid and if the {ex~ante}\
constraint ${\mathbf \eaprob}$ lies in the shrunk polytope $\frac 12{\mathcal P}_{\feas}$,
\citet{CHMS-STOC10} show via a prophet inequality that the term
${\mathbf \eaprice}\cdot{\mathbf \eaprob}$ is bounded by the revenue of an item pricing.
\begin{lemma}[\citet{CHMS-STOC10}]
\label{thm:partition-matroid}
For a partition matroid ${\mathcal F}$, {ex~ante}\ constraints
${\mathbf \eaprob}\in\frac 12{\mathcal P}_{\feas}$, and corresponding {ex~ante}\ prices
${\mathbf \eaprice}$,
\[{\mathbf \eaprice}\cdot{\mathbf \eaprob} \leq 2\,\easrev({\mathbf \dist},{\mathcal F}).\]
\end{lemma}
No prophet inequality based on static thresholds is known for general
matroids. However, \citet{fsz-15} nonetheless show that, if ${\mathbf \eaprob}
\in b{\mathcal P}_{\feas}$, selling at the {ex~ante}\ prices recovers a $(1-b)$
fraction of the relaxed revenue under a stronger demand constraint.
This leads to the following result.
\begin{lemma}[\citet{fsz-15}]
\label{thm:ocrs}
For a general matroid ${\mathcal F}$, constant $b \in (0,1)$, {ex~ante}\ constraints
${\mathbf \eaprob} \in b{\mathcal P}_{\feas}$, and corresponding {ex~ante}\ prices ${\mathbf \eaprice}$, there
exists a submatroid ${\mathcal F}' \subseteq {\mathcal F}$ such that
\[{\mathbf \eaprice}\cdot{\mathbf \eaprob} \leq \frac{1}{1-b}\easrev({\mathbf \dist},{\mathcal F}').\]
Furthermore, the constraint ${\mathcal F}'$ is efficiently computable.
\end{lemma}
We are now ready to prove Lemma~\ref{lem:approx-partition} and
Corollary~\ref{cor:general-ex-ante}, stated in
Section~\ref{sec:theorems}.
\begingroup
\def\ref{cor:general-ex-ante}{\ref{lem:approx-partition}}
\begin{lemma}
Let $\mathcal D$ be any product value distribution and ${\mathcal F}$ be a matroid with
feasible polytope ${\mathcal P}_{\feas}$. Then, for any $q\in \frac 12{\mathcal P}_{\feas}$, there
exists a submatroid ${\mathcal F}' \subseteq {\mathcal F}$ such that
\[ \earev(\mathcal D, {\mathcal F}) \le 33.1\,\eatrev(\mathcal D, {\mathcal F}') \]
If ${\mathcal F}$ is a partition matroid, then ${\mathcal F}' = {\mathcal F}$.
\end{lemma}
\addtocounter{theorem}{-1}
\endgroup
\begin{proof}
We first observe that $\mathsc{BRev}({\mathbf \dist}-{\mathbf \eaprice},{\mathcal F}) \le
\eatrev({\mathbf \dist},{\mathcal F})$. In particular, for any $a>0$, a two-part
tariff with entry fee $a$ and item prices ${\mathbf \eaprice}$ achieves at
least as much revenue over values drawn from ${\mathbf \dist}$ as does a
bundle pricing with price $a$ over values drawn from
${\mathbf \dist}-{\mathbf \eaprice}$. The lemma now follows from Lemma~\ref{lem:single-agent},
together with the bounds on ${\mathbf \eaprice}\cdot{\mathbf \eaprob}$ given by
Lemmas~\ref{thm:partition-matroid} and \ref{thm:ocrs}.
\end{proof}
\smallskip As a final remark, we note that the condition ${\mathbf \eaprob} \in
\frac12{\mathcal P}_{\feas}$ in Lemma~\ref{lem:approx-partition} is necessary only
to recover ${\mathbf \eaprice}\cdot{\mathbf \eaprob}$; we can, in fact, show a slightly
weaker result which holds for arbitrary ${\mathbf \eaprob}$.
\begingroup
\def\ref{cor:general-ex-ante}{\ref{cor:general-ex-ante}}
\begin{corollary}
Let $\mathcal D$ be any product value distribution and ${\mathcal F}$ be a matroid. Then
for any $q\in[0,1]^m$, there exists a submatroid ${\mathcal F}' \subseteq
{\mathcal F}$ such that
\[\earev(\mathcal D, {\mathcal F}) \le 35.1\,\eatrev(\mathcal D,{\mathcal F}') \]
If ${\mathcal F}$ is a partition matroid, then ${\mathcal F}' = {\mathcal F}$.
\end{corollary}
\addtocounter{theorem}{-1}
\endgroup
\begin{proof}
For any ${\mathbf \eaprob}
\in [0,1]^m$,
\[\earev(\mathcal D,{\mathcal F}) \leq
\max_{\substack{{\mathbf \eaprob}'\leq{\mathbf \eaprob} \\ {\mathbf \eaprob}'\in{\mathcal P}_{\feas}}}
\earev[{\mathbf \eaprob}'](\mathcal D,{\mathcal F}).\]
Therefore, there exists ${\mathbf \eaprob}' \in {\mathcal P}_{\feas}$ and corresponding
${\mathbf \eaprice}'$, such that Lemma~\ref{lem:single-agent} gives
\[\earev({\mathbf \dist},{\mathcal F}) \leq 31.1 \eatrev[{\mathbf \eaprob}'](\mathcal D,{\mathcal F}) +
{\mathbf \eaprice}'\cdot{\mathbf \eaprob}'.\]
Furthermore, by scaling ${\mathbf \eaprob}'$ to lie in $\frac12{\mathcal P}_{\feas}$ we can only
increase the corresponding {ex~ante}\ prices, so Lemma~\ref{thm:ocrs} gives
${\mathbf \eaprice}'\cdot{\mathbf \eaprob}' \leq 4\easrev[{\mathbf \eaprob}'](\mathcal D,{\mathcal F}')$ for some
${\mathcal F}'\subseteq{\mathcal F}$. The corollary now follows by noting
$\eatrev[{\mathbf \eaprob}'](\mathcal D,{\mathcal F}) \leq \eatrev(\mathcal D,{\mathcal F})$ and
$\easrev[{\mathbf \eaprob}'](\mathcal D,{\mathcal F}') \leq \easrev(\mathcal D,{\mathcal F}')$.
\end{proof}
\subsection{Core Decomposition with {Ex~Ante}\ Constraints}
\label{sec:core-decomp}
The proof of Lemma~\ref{lem:core-decomposition} makes use of the following two
lemmas, which are analogous to results proved by \citet{bilw-focs14}, following
\citet{LY-PNAS13}.
\begin{lemma}
\label{lem:subdomain-stitching}
There exists a set $\{\eaprobsA \in [0,1]^m : A \subseteq [m]\}$ such that
$\sum_A \tprobA \eaprobAj \leq \eaprobj$ for all $j$ and
\[\earev({\mathbf \dist},{\mathcal F}) \leq
\sum_{A\subseteq[m]}\tprobA\earev[\eaprobsA]({\mathbf \dist}_A,{\mathcal F})\]
\end{lemma}
\begin{proof}
Let ${\mathcal M}$ be a BIC mechanism which is ${\mathbf \eaprob}$-constrained under ${\mathbf \dist}$
such that $\revm({\mathbf \dist}) = \earev({\mathbf \dist},{\mathcal F})$. So $\revm({\mathbf \dist}) =
\sum_{A\subseteq[m]}\tprobA\revm({\mathbf \dist}_A)$. Let $\eaprobAj$ be the probability
that ${\mathcal M}$ allocates item $j$ given that ${\mathbf \val}$ is drawn from ${\mathbf \dist}_A$;
that is, $\eaprobAj = \operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}_A]{\allocj({\mathbf \val})}$. Clearly
$\sum_{A}\tprobA \eaprobAj \leq \eaprobj$, by the assumption that ${\mathcal M}$ is
${\mathbf \eaprob}$-constrained. The result follows since $\revm({\mathbf \dist}_A) \leq
\earev[\eaprobsA]({\mathbf \dist}_A,{\mathcal F})$ for each $A$.
\end{proof}
\begin{lemma}
\label{lem:marginal-mechanism}
For any two independent distributions ${\mathbf \dist}_S$ and ${\mathbf \dist}_T$ over disjoint
sets of items $S$ and $T$ with corresponding {ex~ante}\ constraints
${\mathbf \eaprob}_S$ and ${\mathbf \eaprob}_T$ and a joint feasibility constraint ${\mathcal F}$,
\[\earev[({\mathbf \eaprob}_S; {\mathbf \eaprob}_T)]({\mathbf \dist}_S\times{\mathbf \dist}_T,{\mathcal F})
\leq \eaVal[{\mathbf \eaprob}_S]({\mathbf \dist}_S,{\mathcal F}|_S) +
\earev[{\mathbf \eaprob}_T]({\mathbf \dist}_T,{\mathcal F}|_T).\]
\end{lemma}
\begin{proof}
Let ${\mathcal M}$ be a BIC mechanism which is $({\mathbf \eaprob}_S; {\mathbf \eaprob}_T)$-constrained
under $({\mathbf \dist}_S\times{\mathbf \dist}_T)$ such that $\revm({\mathbf \dist}_S\times{\mathbf \dist}_T) =
\earev[({\mathbf \eaprob}_S;{\mathbf \eaprob}_T)]({\mathbf \dist}_S\times{\mathbf \dist}_T,{\mathcal F})$. We construct a
mechanism ${\mathcal M}'$ for selling items in $T$ as follows. ${\mathcal M}'$ first samples
${\mathbf \val}_S\sim{\mathbf \dist}_S$, and then solicits a bid ${\mathbf \val}_T$ for items in $T$. Let
$({\mathbf \alloc}_{S\cup T}({\mathbf \val}_S;{\mathbf \val}_T), p({\mathbf \val}_S;{\mathbf \val}_T))$ be the
allocation returned and payment charged by ${\mathcal M}$ for the combined bid; then
${\mathcal M}'$ returns the allocation ${\mathbf \alloc}_T({\mathbf \val}_S;{\mathbf \val}_T) $ and charges
$p({\mathbf \val}_S;{\mathbf \val}_T) - {\mathbf \val}_S({\mathbf \alloc}_S({\mathbf \val}_S;{\mathbf \val}_T))$.
We now prove that ${\mathcal M}'$ is truthful. Suppose the bidder submits a bid
${\mathbf \val}_T'$. His utility is ${\mathbf \val}_T({\mathbf \alloc}_T({\mathbf \val}_S; {\mathbf \val}_T')) -
\big(p({\mathbf \val}_S; {\mathbf \val}_T') - {\mathbf \val}_S({\mathbf \alloc}_S({\mathbf \val}_S; {\mathbf \val}_T'))\big)$,
which is the utility of a bidder participating in ${\mathcal M}$ with valuation
$({\mathbf \val}_S,{\mathbf \val}_T')$. Since ${\mathcal M}$ is truthful, the bidder can do no worse by
bidding ${\mathbf \val}_T$ in ${\mathcal M}'$ and receiving the utility of an agent who bids
truthfully in ${\mathcal M}$.
Note that ${\mathcal M}'$ allocates item $j \in T$ exactly when ${\mathcal M}$ does
(conditioned on ${\mathbf \val}_S$). So ${\mathcal M}'$ is demand-feasible. Furthermore, since
${\mathcal M}'$ draws ${\mathbf \val}_S$ from ${\mathbf \dist}_S$, ${\mathcal M}'$ is also
${\mathbf \eaprob}_T$-constrained under ${\mathbf \dist}_T$. Formally, let ${\mathbf \alloc}'$ be the
allocation rule of ${\mathcal M}'$; then
$\operatorname{E}\expectarg[{\mathbf \val}_T\sim{\mathbf \dist}_T]{\allocj'({\mathbf \val}_T)} =
\operatorname{E}\expectarg[{\mathbf \val}_S\sim{\mathbf \dist}_S,{\mathbf \val}_T\sim{\mathbf \dist}_T]{\allocj({\mathbf \val}_S;{\mathbf \val}_T)} \leq
\eaprobj$ for all $j \in T$.
The revenue obtained by ${\mathcal M}'$ is
\begin{align*}
\revm[{\mathcal M}']({\mathbf \dist}_T) &=
\operatorname{E}\expectarg[{\mathbf \val}_S\sim{\mathbf \dist}_S,{\mathbf \val}_T\sim{\mathbf \dist}_T]{p({\mathbf \val}_S;{\mathbf \val}_T)
- {\mathbf \val}_S({\mathbf \alloc}_S({\mathbf \val}_S;{\mathbf \val}_T))} \\
&= \earev[({\mathbf \eaprob}_S;{\mathbf \eaprob}_T)]({\mathbf \dist}_S\times{\mathbf \dist}_T,{\mathcal F})
- \operatorname{E}\expectarg[{\mathbf \val}_S\sim{\mathbf \dist}_S,{\mathbf \val}_T\sim{\mathbf \dist}_T]{{\mathbf \val}_S(
{\mathbf \alloc}_S({\mathbf \val}_S;{\mathbf \val}_T))} \\
&\geq \earev[({\mathbf \eaprob}_S;{\mathbf \eaprob}_T)]({\mathbf \dist}_S\times{\mathbf \dist}_T,{\mathcal F}) -
\eaVal[{\mathbf \eaprob}_S]({\mathbf \dist}_S,{\mathcal F}|_S),
\end{align*}
where the inequality follows because the welfare ${\mathcal M}$ obtains from items
in $S$ is a lower bound on the welfare of any ${\mathbf \eaprob}_S$-constrained mechanism
for ${\mathbf \dist}_S$.
\end{proof}
\begin{proofof}{Lemma \ref{lem:core-decomposition}}
By Lemmas \ref{lem:subdomain-stitching} and \ref{lem:marginal-mechanism}, we
have
\[\earev({\mathbf \dist},{\mathcal F}) \leq \sum_{A\subseteq[m]} \tprobA\left(
\eaVal[\eaprobsA](\dists^C_A,{\mathcal F}|_{A^c}) +
\earev[\eaprobsA](\dists^T_A,{\mathcal F}|_{A})\right).\]
For each $A\subseteq [m]$, let ${\mathcal M}^A$ be a truthful $\eaprobsA$-constrained
demand-feasible mechanism which obtains welfare equal to
$\eaVal[\eaprobsA](\dists^C_A,{\mathcal F}|_{A^c})$. One way to allocate items when values
are drawn from $\core$ is to choose to sell only items from some set
$A\subseteq[m]$. Consider a mechanism which chooses from among all subsets of
items, choosing $A$ with probability $\tprobA$, and then runs ${\mathcal M}^A$. The
expected welfare from such a mechanism is exactly
$\sum_{A\subseteq[m]}\tprobA\eaVal[\eaprobsA](\dists^C_A,{\mathcal F}|_{A^c})$. Since
$\sum_{A\subseteq[m]}\tprobA\eaprobAj \leq \eaprobj$, the welfare of this
mechanism also provides a lower bound on $\eaVal(\core,{\mathcal F})$.
\end{proofof}
\section{Introduction}
Multi-parameter optimal mechanism design is challenging from both a
computational and a conceptual viewpoint, even when it involves only a single
buyer. Multi-parameter type spaces can be exponentially large, and
multi-dimensional incentive constraints lack the nice structure of
single-dimensional constraints that permits simplification of the optimization
problem. As a result, optimal mechanisms can possess undesirable properties such
as requiring randomness \citep{BCKW-SODA10, HN-EC12, HN-EC13} and displaying
non-monotonicity of revenue in values \citep{HR-2013, rw-15}, and they are in many
cases computationally hard to find (see, e.g., \citealp{DDT-WINE12,
DDT-SODA14}). The situation is exacerbated in multi-agent settings. \citet[chap.
8]{Hartline-MDnA} identifies two further difficulties: multi-parameter agents
impose multi-dimensional externality on each other that may not be possible to
capture succinctly; and multi-parameter problems are typically not revenue
linear, meaning that the optimal revenue does not scale linearly with the
probability of service. Designing simple near-optimal mechanisms in such
settings is a primary goal of algorithmic mechanism design.
In this paper we study the problem facing a monopolist with many items
and many buyers, where each buyer is interested in buying one of many
different subsets of items, and his value for each such subset is
additive over the items in that subset. What selling mechanism should
the monopolist use in such a setting to maximize his revenue? One
challenge for the seller is that buyers may have heterogeneous
preferences: some buyers are interested in buying a few specific
items, others are indifferent between multiple items, and yet others
have a long shopping list. We design the first approximation mechanism
for this problem; our main result is a constant-factor approximation when
buyers' values are additive up to a matroid feasibility
constraint.
Our approximation mechanism has a particularly simple and appealing
format -- a sequential extension of standard {\em two-part tariff
mechanisms}. Two-part tariffs for a single agent have the following
structure. The buyer first pays a fixed entry fee and is then allowed
to purchase any set of items at fixed per-item prices. The buyer may
choose not to participate in the mechanism, in which case he does not
pay the entry fee and does not receive any item. In our context,
buyers pay (different) entry fees at the beginning, and then take
turns (in an arbitrary but fixed order) to buy a subset of items at
predetermined item-specific prices, subject to availability. There are
many real-world examples of two-part tariffs, such as amusement park
pricing; memberships for discount shopping clubs like Costco, Sam's
Club, and Amazon's Prime; telephone services; and membership programs
for cooperatives and CSAs. These mechanisms have long been studied in
economics for their ability to effectively price discriminate among
different buyers despite their relative simplicity. \citet{Arm-99}
shows, for example, that for an additive value buyer with independent
item values and sufficiently many items, two-part tariffs extract
nearly the entire social surplus.
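Concretely, a single buyer's behavior under a two-part tariff can be sketched as follows (toy values, prices, and entry fee chosen for illustration, not taken from the paper):

```python
# Sketch of a single-agent two-part tariff for an additive buyer:
# pay an entry fee, then buy any items at fixed per-item prices.
# All numbers here are hypothetical.
import numpy as np

def two_part_tariff_outcome(values, prices, entry_fee):
    """Utility-maximizing behavior of an additive buyer."""
    buy = values > prices                    # per-item surplus positive
    surplus = np.sum((values - prices)[buy])
    if surplus < entry_fee:                  # opting out beats entering
        return 0.0, np.zeros_like(buy)
    return entry_fee + prices[buy].sum(), buy

values = np.array([5.0, 1.0, 3.0])
prices = np.array([2.0, 2.0, 2.0])
revenue, bought = two_part_tariff_outcome(values, prices, entry_fee=3.0)
print(revenue, bought)  # buyer enters (surplus 4 >= fee 3); revenue = 3 + 2 + 2 = 7
```

The entry fee extracts part of the buyer's surplus over the item prices, which is exactly the price-discrimination channel the economics literature cited above studies.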
Our work combines and significantly extends techniques from several
different lines of work in mechanism design. We use the {\em {ex~ante}\
relaxation} of \citet{Alaei-FOCS11} to break up the multi-agent
revenue maximization problem into its single-agent counterparts and
capture the externalities among buyers through {ex~ante}\ supply
constraints. We solve the single-agent problems with {ex~ante}\ supply
constraints by adapting and extending the so-called {\em core-tail
decomposition} technique of \citet{LY-PNAS13}, as well as employing the
prophet inequalities of \citet{CHMS-STOC10} and
\citet{fsz-15}. Finally, we use ideas from \citet{CHMS-STOC10} to
combine the resulting single-agent mechanisms sequentially and obtain a
multi-agent approximation mechanism that is {ex~post}\ supply~feasible.
While our main result applies to buyers with values additive up to a
matroid constraint, parts of our approach extend to more general value
functions such as those satisfying the gross substitutes condition.
\subsection{Multi-Parameter Mechanism Design: Previous Work}
This paper belongs to a long line of research on finding simple and
approximately optimal mechanisms for multi-parameter settings under
various assumptions on the buyers' value functions and type
distributions, and on the seller's supply constraint.
The first breakthrough along these
lines was made by \citet{CHK-07} who showed that the revenue of an
optimal mechanism for a single unit-demand buyer can be approximated
within a factor of $3$ by an item pricing,\footnote{\citet{CHMS-STOC10}
later improved this approximation factor to $2$.} a mechanism that
allows the buyer to choose any item to buy at fixed per-item
prices. More recently, \citet{bilw-focs14} developed a similar result
for a single buyer with additive values.\footnote{This is the
culmination of a series of papers including \cite{HN-EC12, HN-EC13,
LY-PNAS13}.} They showed that the revenue of an optimal mechanism
in this case is approximated within a factor of $6$ by one of two
simple mechanisms: an item pricing that fixes a price for each item
and allows the buyer to choose any subset of items to buy, and a
bundle pricing that allows the buyer to buy the grand bundle of all
items at a fixed price. Observe that item pricing and bundle pricing
are both two-part tariffs (with the entry fee or the per-item prices
being zero, respectively).
Unit-demand and additive types are two extremes within a broader class
of value functions that we call {\em constrained additive values}. A
constrained additive buyer has a value (drawn independently) for each
item under sale; he is interested in buying a set of items that
satisfies a certain downward-closed constraint; his value is additive
over any such set. We have only recently begun to understand optimal
mechanism design for a single agent with constrained additive
values. \citet{rw-15} proved that in this setting, as in the additive
case, either item pricing or bundle pricing gives a constant-factor
approximation to the optimal revenue.\footnote{\citet{rw-15}'s result
holds for a much broader setting with a single subadditive value
agent, but their factor of approximation is rather large -- about
340.} There are many similarities between the two lines of work on
unit-demand buyers and additive buyers, and \citeauthor{rw-15}'s
result can be seen as a unification of the two approaches, albeit with
a worse approximation factor.
Multi-parameter settings with multiple buyers are less well
understood. For settings with many unit-demand buyers,
\citet{CHMS-STOC10, cms-10} developed a generic approach for
approximation via sequential posted-price mechanisms (SPMs). SPMs
approach buyers in some predetermined order and offer items for sale
to each buyer at predetermined prices while supplies last. For
settings with many additive-value buyers, \citet{yao-15} showed that
either running a second-price auction for each item separately or
optimally selling to bidder $i$ the set of items for which he is the
highest bidder\footnote{In the latter case, Yao approximates the
optimal revenue via two-part tariffs.}
achieves a constant-factor approximation. \citet{CDW-16} presented a
new uniform framework that can be used to rederive both Yao's and
Chawla et al.'s results, with a tighter analysis for the former. However,
prior to our work, no approximations were known for other constrained
additive settings or for settings with heterogeneous buyers. Consider,
for example, a setting with some unit-demand and some additive
buyers. In this case, neither of the results mentioned above provides
an approximation. \citeauthor{CHMS-STOC10}'s analysis relies on a
reduction from multi-dimensional incentive constraints to
single-dimensional ones that applies only to the unit-demand setting,
and, in particular, cannot account for revenue from bundling, which is
crucial in non-unit-demand settings. \citeauthor{yao-15}'s approach on
the other hand relies on allocating each item to the highest value
agent, and cannot provide a constant-factor approximation for
subadditive~agents.\footnote{To see why Yao's approach cannot work for
unit-demand agents, observe that if a single unit-demand agent has
the highest value for each item, the seller must try to sell all but
one item to non-highest-value buyers in order to obtain good
revenue.}
A different approach to optimal mechanism design due to
\citet{CDW-STOC12, CDW-FOCS12, CDW-SODA13, CDW-FOCS13} uses
linear programming formulations for settings with small support type
distributions, and shows that optimal mechanisms are virtual welfare
maximizers. This approach is unsuitable for our setting, which, even
upon discretization of values, involves distributions over exponential
size supports. Moreover, mechanisms generated by this approach tend to
lack the nice structure and simplicity of pricing-based~mechanisms.
Finally, a new approach to mechanism design has emerged in recent
years that uses duality theory to design as well as analyze optimal or
approximately optimal mechanisms \citep[see, e.g.,][]{DDT-EC15, GK-14, GK-15, HH-15,
CDW-16}. Designing good dual solutions in this context, however,
involves more art than science, and for the most part, positive
results are restricted to very special classes of value functions and
value distributions.
\subsection{Our Techniques and Contributions}
\paragraph{{Ex~Ante}\ Relaxation.}
Our work follows a generic approach developed by \citet{Alaei-FOCS11}
for transforming multi-agent mechanism design problems into their
single-agent counterparts via the so-called {\em {ex~ante}\ relaxation}.
In a multi-agent setting, agents impose externalities upon each other
through the seller's supply constraint: each item must be sold to at
most one buyer {ex~post}. Alaei proposes relaxing the problem by
enforcing the supply constraint {ex~ante}\ rather than {ex~post}: the
probabilities with which an item is sold to the different buyers
should sum up to no more than one. In other words, in expectation the
item is sold at most once. Applying the {ex~ante}\ relaxation to a
mechanism design problem with multiple buyers involves three steps:
\begin{enumerate}
\item {\bf Decompose into single-agent problems:} determine the {ex~ante}\
probabilities with which each item can be sold to each buyer; for each item
these probabilities should sum up to no more than $1$;
\item {\bf Solve single-agent problems:} for each agent, find an approximately
optimal mechanism satisfying the {ex~ante}\ supply constraint determined in the
first step;
\item {\bf Stitch single-agent mechanisms:} combine the single-agent
mechanisms developed in the second step into a single mechanism that satisfies
the supply constraints {ex~post}.
\end{enumerate}
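The supply condition in the first step is simple to state programmatically: writing $q_{ij}$ for the {ex~ante}\ probability that item $j$ is sold to buyer $i$, each item's probabilities across buyers must sum to at most one. A minimal sketch (with hypothetical probabilities):

```python
# Sketch of the step-1 ex ante supply constraint: q[i][j] is the probability that
# item j is sold to buyer i; feasibility requires each item's column sum <= 1.

def ex_ante_feasible(q, tol=1e-9):
    n_buyers = len(q)
    n_items = len(q[0])
    return all(sum(q[i][j] for i in range(n_buyers)) <= 1 + tol
               for j in range(n_items))

q = [[0.5, 0.2],   # buyer 1's per-item sale probabilities (illustrative)
     [0.4, 0.7]]   # buyer 2's
feasible = ex_ante_feasible(q)       # each item's column sums to 0.9
```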
The first step is conceptually simple and applies in any setting where
buyers have independent values. We reproduce this argument in
Section~\ref{sec:relaxation} for completeness.
Alaei described how to implement the second and third steps for
problems involving unit-demand agents.\footnote{Alaei also presented
solutions for certain additive-value settings under the assumption
that the agents' type spaces are small and given explicitly.} For
the third ``stitching'' step, he suggested composing the
single-agent mechanisms sequentially (similar to the approach of
\citet{CHMS-STOC10}). However, this does not work for arbitrary
single-agent mechanisms. Once the composite mechanism has sold off a
few items, fewer bundles are available to subsequent buyers, and the
mechanism may obtain far less revenue than its single-agent
counterparts. We show that two-part tariffs compose well without
much loss in revenue when each buyer's value function is additive
up to a matroid feasibility constraint (and, more generally, when the
value functions satisfy the gross substitutes condition).
\paragraph{Core-Tail Decomposition.}
In order to bound the single-agent revenue as required in step two of
the {ex~ante}\ approach, we use the core-tail decomposition of
\citet{LY-PNAS13}, and its extensions due to \citet{bilw-focs14} and
\citet{rw-15}. Roughly speaking, in the absence of {ex~ante}\ supply
constraints, for any vector of item values, we can partition items
into those with small value and those with large value. This
partitioning is done in such a manner that the set of large-value
items (a.k.a. the tail) contains only a few items in expectation; the
revenue generated by these items behaves essentially like unit-demand
revenue, and can be recovered by selling the items separately via an
argument of \citet{cms-10}. The set of small-value items (a.k.a. the
core), on the other hand, displays concentration of value and the
revenue generated by these items can be recovered via bundling
\citep{rw-15}.
Under an {ex~ante}\ supply constraint the revenue generated by the tail
can still be recovered via item pricing as before. Bounding the
revenue from the core is trickier, however, because different items
may face very different {ex~ante}\ constraints, and their total values
may not concentrate well. Furthermore, selling the grand bundle
allocates all items with the same probability to the buyer and
consequently may not respect the given {ex~ante}\ constraint. We make a
careful choice of thresholds for partitioning values into the core and
the tail in such a manner that we can recover the value of the core in
two parts:
(1) when the {ex~ante}\ constraint is strong (i.e. the allocation
probabilities are mostly small), selling separately recovers most of
the core revenue; (2) when the {ex~ante}\ constraint is weak (i.e. the
allocation probabilities are mostly large), bundling as part of a
two-part tariff recovers most of the core revenue while continuing to
respect the {ex~ante}\ constraint.
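The partitioning step itself is mechanical once thresholds are fixed; the analytical work in the paper lies in choosing them. A sketch with placeholder thresholds:

```python
# Sketch: splitting a realized value vector into "core" (small values) and "tail"
# (large values) relative to per-item thresholds. The thresholds here are
# placeholders; the paper chooses them as a careful function of the ex ante
# constraint so that the tail is small in expectation.

def core_tail_split(values, thresholds):
    core = {j for j, v in enumerate(values) if v <= thresholds[j]}
    tail = {j for j, v in enumerate(values) if v > thresholds[j]}
    return core, tail

core, tail = core_tail_split(values=[1.0, 9.0, 2.5],
                             thresholds=[3.0, 3.0, 3.0])
```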
\paragraph{Prophet Inequalities.}
Observe that the {ex~ante}\ approach described above relaxes the
seller's supply constraint, but continues to enforce the buyer's
demand constraint\footnote{The buyer's demand constraint refers to,
e.g., whether the buyer desires one item as in the unit-demand case,
or all items as in the additive case.} {ex~post}. It is unclear how a
relaxation of the buyer's demand constraint would capture revenue due
to bundling, and whether such a relaxation is useful for mechanism
design. Nevertheless, our analysis gives rise to a term which
corresponds to item-pricing revenue from a common relaxation of the
seller's and buyer's constraints.
Roughly speaking, this term captures the total revenue that the seller
can obtain from the buyer by selling each item separately subject to a
bound on the probability of sale, under the condition that these
bounds respect both the seller's and the buyer's feasibility
constraints in an {ex~ante}\ sense. For example, for a unit-demand
buyer, the probabilities of sale over the items must sum up to no more
than $1$. We then employ a prophet inequality to relate this term to
the optimal item-pricing revenue for that buyer. A prophet inequality
in this context specifies an item pricing that, regardless of which
maximal feasible set of items the buyer purchases, obtains in
expectation a constant fraction of the {ex~ante}\ optimal
revenue. Prophet inequalities of the above form are known to hold for
several classes of feasibility constraints, such as uniform matroids,
partition matroids, and their intersections (see, e.g.,
\citealp{CHMS-STOC10}). For general matroid constraints, it is not
known whether a prophet inequality with static item prices as
described above can obtain a constant approximation
factor.\footnote{\citet{KW-STOC12} present a prophet inequality with
adaptive prices, but this is unsuitable for our
setting.} However, \citet{fsz-15} give a prophet inequality that obtains a
constant approximation by restricting the buyer's demand -- in other words, by
forbidding the buyer to purchase certain feasible sets. We discuss and use these
results in Section~\ref{sec:single-agent}.
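As a self-contained illustration of the flavor of guarantee a prophet inequality provides, the classic single-choice version with a static threshold $p = \operatorname{E}[\max]/2$ can be verified exhaustively on a toy instance (two values, each uniform on two points):

```python
# Sketch of the single-choice prophet inequality with a static threshold price
# p = E[max]/2: accepting the first arriving value above p yields, in expectation,
# at least half the prophet's expected maximum. Toy two-point distributions only.
from itertools import product

supports = [[1.0, 3.0], [0.0, 4.0]]           # each value uniform on two points
profiles = list(product(*supports))            # all 4 equally likely profiles

e_max = sum(max(pr) for pr in profiles) / len(profiles)
price = e_max / 2.0

def accepted(profile, p):
    # take the first arriving value that clears the static threshold
    return next((v for v in profile if v >= p), 0.0)

e_accept = sum(accepted(pr, price) for pr in profiles) / len(profiles)
```

Here the rule obtains 2.5 against a prophet benchmark of 3.0, comfortably above the guaranteed half. The matroid prophet inequalities used in the paper generalize this to feasible sets rather than a single choice.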
\paragraph{The Final Mechanism.}
As mentioned earlier, our final mechanism is a sequential two-part
tariff mechanism. We remark that buyers in our mechanism are required
to pay the entry fee before finding out whether their favorite items
will be available when it is their turn to buy; therefore, our
mechanism is only Bayesian incentive compatible (BIC), and not
necessarily dominant strategy incentive compatible (DSIC). We leave
open the question of whether it is possible to approximate the optimal
revenue within a constant factor via a DSIC mechanism. In some
settings, our mechanism restricts the subsets of items that a buyer is
allowed to buy; we call such a mechanism a {\em demand-limiting
sequential two-part tariff}. This is seen, for instance,
in market-style CSA programs in which members can buy only certain
quantities and combinations of~produce.
\paragraph{Other Contributions.} As special cases of our general
result, we also obtain improvements to the results of \citet{rw-15}.
Recall that \citeauthor{rw-15} show that for a single buyer with
subadditive values, either item pricing or bundle pricing obtains a
constant-factor approximation. We improve this result in two
ways. First, for constrained additive values, we improve the
approximation factor from about 340 to 31.1
(Corollary~\ref{cor:true-single-agent}).\footnote{It is possible to
use \citeauthor{rw-15}'s techniques to obtain a better approximation
for the special case of constrained additive values, however, the
resulting bound is still much weaker than ours.} Second, we show
that the result holds also under an {ex~ante}\ constraint for a suitable
definition of item pricings and bundle pricings that respect the same
{ex~ante}\ constraint (see
Corollary~\ref{cor:general-ex-ante}). Finally, for revenue
maximization with multiple additive buyers, we adapt arguments from
\citep{bilw-focs14} to obtain an approximation factor of 28
(Appendix~\ref{sec:additive});
this is an improvement over \citet{yao-15}'s
approximation factor of 69 for the same setting, but is worse than
\citet{CDW-16}'s improvement of \citeauthor{yao-15}'s analysis to an
8-approximation. Arguably, our analysis for this setting is
conceptually simpler than both of those~works.
\paragraph{Symmetric Settings.} In an interesting special case of our
setting, the buyers are a~priori symmetric (but items are
heterogeneous). That is, each buyer has a value vector drawn from
identical independent distributions, and also desires the same bundles
of items. In this setting, our mechanism sets the same entry fee as
well as item prices for all buyers. Furthermore, these fees and prices
can be computed~efficiently (Section~\ref{sec:symmetric}).
\paragraph{Further Directions.} For settings with asymmetric buyers, we leave
open the question of efficiently solving the {ex~ante}\ relaxation.
Our main result requires buyers' demand constraint to be matroids for two
reasons: this allows us to use a prophet inequality for a single agent, and it
also enables us to combine single-agent mechanisms sequentially without much
loss in revenue. It is an interesting challenge to apply the {ex~ante}\ approach
for demand constraints beyond matroids, or for more general classes of
subadditive values.
\section*{Acknowledgements}
We are grateful to Anna Karlin for feedback on early drafts of this
work, and to Jason Hartline for insights on efficiently solving the
ex ante relaxation for symmetric agents.
\bibliographystyle{apalike}
\section{Matroid Concepts}
\label{sec:matroids}
A matroid $M$ is a tuple $(G, \mathcal{I})$ where $G$ is called the {\em ground set} and
$\mathcal{I} \subseteq 2^G$ is a collection of {\em independent sets} satisfying the
following two properties:
\begin{enumerate}
\item If $I \subseteq J$ and $J \in \mathcal{I}$, then $I \in \mathcal{I}$ ($\mathcal{I}$ is
downward-closed); and
\item If $I,J \in \mathcal{I}$ and $|J| > |I|$, then there exists $e \in J\setminus I$
such that $(I\cup\{e\}) \in \mathcal{I}$.
\end{enumerate}
A {\em basis} is an independent set of maximal size: $B \subseteq G$
is a basis if $B \in \mathcal{I}$ and $|I| \leq |B|$ for all $I \in \mathcal{I}$. The
following lemma is a simple consequence of the fact that the greedy
algorithm finds the maximum weight basis in any matroid.
\begin{lemma}
\label{lem:matroid-greedy}
Let ${\mathcal F}$ be any matroid over ground set $G$, $I$ be any subset of $G$,
and $w$ be any vector of weights defined on elements in $G$. If $j\in G$
belongs to a maximum weight basis of ${\mathcal F}$ and $j\in I$, then $j$ also
belongs to a maximum weight basis of ${\mathcal F}|_{I}$.
\end{lemma}
Several classes of matroids are of special interest. A {\em $k$-uniform} matroid
is a matroid in which any $S \subseteq G$ with $|S| \leq k$ is an independent
set; the class of uniform matroids generalizes the extensively studied additive
($k=m$) and unit-demand ($k=1$) settings. A {\em partition} matroid is the
union of uniform matroids: $G = G_1\cup\ldots\cup G_N$, where $(G_i,\mathcal{I}_i)$
is a $k_i$-uniform matroid, and a set $S\subseteq G$ is independent if
$S\cap G_i \in \mathcal{I}_i$ for all $i$.
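The greedy algorithm referenced above is short enough to state directly. The sketch below runs it against an independence oracle, instantiated here for a small partition matroid with illustrative parts, capacities, and weights:

```python
# Sketch: greedy maximum-weight basis for a matroid given by an independence
# oracle. The partition matroid instance (parts, capacities, weights) is toy data.

def greedy_max_weight_basis(ground, weights, is_independent):
    basis = set()
    for e in sorted(ground, key=lambda e: -weights[e]):   # heaviest first
        if is_independent(basis | {e}):
            basis.add(e)
    return basis

# Partition matroid: at most caps[g] elements from each part g.
parts = {'a': 0, 'b': 0, 'c': 1, 'd': 1}      # element -> part index
caps = {0: 1, 1: 2}                            # part -> capacity k_i

def indep(S):
    counts = {}
    for e in S:
        counts[parts[e]] = counts.get(parts[e], 0) + 1
    return all(counts[g] <= caps[g] for g in counts)

w = {'a': 5, 'b': 4, 'c': 3, 'd': 2}
basis = greedy_max_weight_basis(parts.keys(), w, indep)
```

The matroid exchange property is exactly what makes this greedy rule optimal, which is the fact underlying Lemma~\ref{lem:matroid-greedy}.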
\section{Efficient Approximation for Symmetric Agents}
\label{sec:optimizing-beta-dot-q}
We will now discuss how to solve the optimization problem \eqref{eq:bq-max} efficiently when ${\mathcal F}$ is a matroid. We first modify the distribution ${\mathbf \dist}$ so that for every item $j$, any value below quantile $1-1/(2n)$ is mapped to $0$. The problem then simplifies to the following.
\begin{equation}
\label{eq:bq-max-2}
\begin{aligned}
\text{maximize }\; & {\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob}) & \text{s.t. }\; & {\mathbf \eaprob} \in {\mathcal P}_{\feas}
\end{aligned}
\end{equation}
This problem is related to the {ex~ante}\ relaxation of the
single-parameter revenue maximization problem with $m$ buyers, where
buyer $j$'s value is distributed independently according to $\distj$
and the seller faces the feasibility constraint ${\mathcal F}$ (i.e., he can
sell to any subset of buyers that form an independent set in
${\mathcal F}$). When the distributions $\distj$ are all regular, the
objective ${\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob})$ is concave, and the
above problem can be solved using standard convex optimization
techniques.
When the distributions $\distj$ are not all regular,
\eqref{eq:bq-max-2} is not necessarily convex. In this case, allowing
for a randomized solution convexifies the problem.
Consider the following
relaxation that maximizes the objective over all distributions over
vectors ${\mathbf \eaprob}$:
\begin{equation}
\label{eq:bq-max-3}
\begin{aligned}
\text{maximize }\; & \operatorname{E}\expectarg{{\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob})} & \text{s.t. }\; & \operatorname{E}\expectarg{{\mathbf \eaprob}} \in {\mathcal P}_{\feas}
\end{aligned}
\end{equation}
This problem can in turn be restated as follows: maximize the ironed
virtual surplus of a BIC mechanism for the single-parameter revenue
maximization problem stated above subject to the feasibility
constraint ${\mathcal P}_{\feas}$ imposed {ex~ante}.
\citet{Hartline-comm} describes an alternative to standard convex
optimization for solving the above problem to within arbitrary
accuracy. Pick a sufficiently small $\epsilon>0$. Discretize the
problem by creating a new discrete distribution $\distj'$ for every
$j\in [m]$ as follows: for every integer $z$ in $[0, 1/\epsilon]$,
place a mass of $\epsilon$ on the ironed virtual value for
distribution $\distj$ at quantile $z\epsilon$. Let $R_j$ denote the
support of distribution $\distj'$. Over these discrete supports, the
ironed virtual value maximization problem becomes one of selecting a
subset of $\cup_j R_j$ of maximum total (ironed virtual) value subject
to the constraint that the subset can be partitioned into at most
$1/\epsilon$ parts each of which is independent in ${\mathcal F}$. In other
words, this is the problem of finding a maximum weight basis over a
matroid formed by the union of $1/\epsilon$ identical copies of
${\mathcal F}$. The standard greedy algorithm for matroids solves this
problem efficiently for any constant $\epsilon$. This algorithm
approximates \eqref{eq:bq-max-3} to within an additive error of
$\epsilon\sum_j\distj^{-1}(1)$, and in the case of non-regular
distributions, produces a distribution over two vectors ${\mathbf \eaprob}_1$
and ${\mathbf \eaprob}_2$ that in expectation satisfies the constraint
${\mathcal P}_{\feas}$.
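For the special case where ${\mathcal F}$ is a rank-$r$ uniform matroid, the union of $k = 1/\epsilon$ identical copies is itself a $(k\cdot r)$-uniform matroid, so the greedy step above reduces to a cardinality check. A hedged sketch of just that case (virtual values are placeholders):

```python
# Sketch of the final discretized step for a uniform matroid F of rank r: selecting
# elements of maximum (ironed virtual) value subject to partitioning into at most
# k = 1/epsilon independent sets reduces to keeping the k*r largest positive
# virtual values. General matroids need a matroid-union oracle instead.

def greedy_union_uniform(virtual_values, r, k):
    """Keep the k*r largest strictly positive virtual values."""
    capacity = k * r
    return sorted((v for v in virtual_values if v > 0), reverse=True)[:capacity]

picked = greedy_union_uniform(virtual_values=[3.0, -1.0, 5.0, 2.0, 4.0], r=1, k=3)
```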
\section{Preliminaries}
\label{sec:prelim}
We consider a setting with a single seller and $n$ buyers. The seller has $m$
heterogeneous items to sell. Each buyer $i\in [n]$ has a type composed of a
public downward-closed demand constraint $\feasi\subseteq 2^{[m]}$ and a private
value vector $\vali=(\valij[1], \cdots, \valij[m])$ that maps items to
non-negative values. Roughly speaking, the demand constraint $\feasi$ describes
the maximal sets of items from which the buyer derives value. Formally, the
buyer's value for a set of items is described by a {\em constrained additive}
function: for $S\subseteq [m]$,
\[ \vali(S) = \max_{S'\in\feasi; S'\subseteq S} \sum_{j\in S'} \valij.\]
It will sometimes be necessary to consider feasibility restricted to subsets of
the available items. For $M' \subseteq [m]$, the {\em restriction of $\feasi$ to
$M'$}, denoted $\feasi|_{M'}$, is formed by dropping items not in $M'$.
Formally, $\feasi|_{M'} = \feasi\intersect2^{M'}$. We will typically assume
that for all $i$, $\feasi$ is a matroid; see Appendix~\ref{sec:matroids} for a
review of matroid concepts.
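For small $m$, the constrained additive value of a set can be evaluated by brute force directly from its definition; the sketch below does so over an explicitly listed downward-closed family (toy data, and exponential in $m$ in general — for matroids the greedy algorithm of Appendix~\ref{sec:matroids} is the efficient alternative):

```python
# Sketch: evaluating a constrained additive value v_i(S) by brute force over an
# explicit downward-closed family of feasible sets. Illustrative instance only.

def constrained_additive_value(values, feasible_sets, S):
    best = 0.0
    for F in feasible_sets:
        if F <= S:                      # only feasible subsets of the offered set
            best = max(best, sum(values[j] for j in F))
    return best

feas = [frozenset(), frozenset({0}), frozenset({1}),
        frozenset({0, 1}), frozenset({2})]
val = constrained_additive_value(values=[2.0, 3.0, 4.0],
                                 feasible_sets=feas,
                                 S={0, 1, 2})
```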
We assume that the values $\valij$ are drawn from distribution $\distij$
independently of all other values; we use $\disti=\prod_j \distij$ to denote the
joint distribution of buyer $i$'s value vector and ${\mathbf \dist} = \prod_i \disti$ to
denote the joint distribution over all value vectors. The demand constraints
$\feasi$ may be different for different buyers. Let ${\mathcal F} =
\{\feasi\}_{i\in[n]}$ denote the tuple of feasibility constraints, one for each
buyer.
\subsection{Incentive Compatible Mechanisms and Revenue Maximization}
A mechanism ${\mathcal M}$ takes as input the value vectors ${\mathbf \val}=(\vali[1], \cdots,
\vali[n])$ and returns an allocation ${\mathbf \alloc}({\mathbf \val})$ and payment vector
${\mathbf \price}({\mathbf \val})$. Here $\alloci({\mathbf \val})$ denotes the (potentially random) set of
items that is allocated to buyer $i$. A mechanism ${\mathcal M}$ is {\em
supply-feasible} if every item is allocated to at most one buyer; in other
words, for all ${\mathbf \val}$, and $i_1\ne i_2$,
$\alloci[i_1]({\mathbf \val})\cap\alloci[i_2]({\mathbf \val}) = \emptyset$ with probability $1$.
We use $\allocij({\mathbf \val})$ to denote the probability with which buyer $i$ receives
item $j$. Without loss of generality, we focus on mechanisms that for every
value vector ${\mathbf \val}$ and every buyer $i$ satisfy $\alloci({\mathbf \val})\in\feasi$ with
probability $1$; we call such mechanisms {\em demand-feasible}. Consequently,
we note that the vector $(\allocij[1]({\mathbf \val}), \cdots, \allocij[m]({\mathbf \val}))$ lies
in the polytope enclosing $\feasi$, which we denote\footnote{Formally,
$\ptopei$ is the convex hull of the incidence vectors of all sets in $\feasi$
in $\Re^m$.} $\ptopei$.
In the rest of the paper we will overload notation and use
$\alloci({\mathbf \val})$ to denote the vector $(\allocij[1]({\mathbf \val}), \cdots,
\allocij[m]({\mathbf \val}))$.
We assume that buyers are risk neutral and have quasi-linear utilities. In other
words, the utility that a buyer derives from allocation $\alloci$ and payment
$\pricei$ is given by $\alloci\cdot\vali - \pricei$. We consider mechanisms
which are {\em Bayesian incentive compatible (BIC)}. A mechanism is BIC if
truthtelling is a Bayes-Nash equilibrium; that is, if a buyer maximizes his own
utility---in expectation over other buyers' values, assuming they report
truthfully, as well as randomness inherent in the mechanism---by reporting
truthfully. In contrast, a mechanism is {\em dominant-strategy incentive
compatible (DSIC)} if truthtelling is a dominant strategy; that is, if a buyer
maximizes his own utility by reporting truthfully, regardless of what other
buyers report.
We are interested in revenue maximization for the seller. The seller's revenue
from a BIC mechanism ${\mathcal M}=({\mathbf \alloc},{\mathbf \price})$ at value vectors ${\mathbf \val}$ is
$\sum_i\pricei({\mathbf \val})$, and the expected revenue is $\revm({\mathbf \dist}) =
\operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}]{\sum_i\pricei({\mathbf \val})}$. The revenue maximization
problem seeks to maximize $\revm({\mathbf \dist})$ over all BIC mechanisms that are
demand- and supply-feasible; we use $\mathsc{Rev}({\mathbf \dist},{\mathcal F})$ to denote this maximum
revenue.
\subsection{{Ex~Ante}\ Constrained Revenue Maximization}
We will reduce the multiple buyer revenue maximization problem
described above to single-buyer problems with {ex~ante}\ supply
constraints. The following definitions are for a single agent $i$; we
omit the subscript $i$ for clarity. Let ${\mathbf \eaprob} = (\eaprobj[1],
\cdots, \eaprobj[m])$ be a vector of probabilities with $\eaprobj\in
[0,1]$ for all $j\in [m]$. A mechanism ${\mathcal M}=({\mathbf \alloc},{\mathbf \price})$ is
{\em ${\mathbf \eaprob}$-constrained under ${\mathbf \dist}$} if for all items $j\in
[m]$, its {ex~ante}\ probability for selling item $j$ when values are
drawn from ${\mathbf \dist}$, $\operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}]{\allocj({\mathbf \val})}$, is at
most $\eaprobj$. We will consider both revenue and welfare
maximization problems over ${\mathbf \eaprob}$-constrained mechanisms.
Formally, we define
\begin{align}
\label{eq:constrained-revenue}
\earev({\mathbf \dist},{\mathcal F}) & = \max_{{\mathcal M}=({\mathbf \alloc},{\mathbf \price}):
\operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}]{\allocj({\mathbf \val})}\le\eaprobj \,\,\forall j\in [m]}
\revm({\mathbf \dist})
\end{align}
and
\begin{align*}
\eaVal({\mathbf \dist},{\mathcal F}) & = \max_{{\mathcal M}=({\mathbf \alloc},{\mathbf \price}):
\operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}]{\allocj({\mathbf \val})}\le\eaprobj \,\,\forall j\in [m]}
\Valm({\mathbf \dist}),
\end{align*}
where the maximum is taken over all BIC demand-feasible mechanisms\footnote{We
don't need to impose the supply-feasibility constraint explicitly --- this is
already implicit in the {ex~ante}\ probability constraint.} and
$\Valm({\mathbf \dist}) = \operatorname{E}\expectarg[{\mathbf \val}\sim{\mathbf \dist}]{{\mathbf \alloc}({\mathbf \val})\cdot{\mathbf \val}}$.
It will sometimes be convenient to express the {ex~ante}\ constraint in the form
of {ex~ante}\ prices defined as: $\eapricej = \distj^{-1}(1-\eaprobj)$. In other
words, for every $j\in [m]$, $\eapricej$ is defined such that the probability
that $\valj$ exceeds this price is precisely $\eaprobj$. Note that there is a
one-one correspondence between {ex~ante}\ probabilities and {ex~ante}\ prices.
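As a concrete instance of this correspondence, the sketch below computes the {ex~ante}\ price for an exponential value distribution $F(v) = 1 - e^{-v}$ (an illustrative choice, not one used in the paper), where the quantile function has a closed form:

```python
# Sketch: converting an ex ante probability q_j into an ex ante price via the
# quantile function, p_j = F_j^{-1}(1 - q_j). Shown for the exponential
# distribution F(v) = 1 - exp(-v), chosen purely for its closed-form inverse.
import math

def ex_ante_price(q):
    # For F(v) = 1 - e^{-v}:  F^{-1}(1 - q) = -ln(q),
    # so the value exceeds the price with probability exactly q.
    return -math.log(q)

p = ex_ante_price(0.5)
prob_exceeds = math.exp(-p)            # P[v > p] under this distribution
```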
\subsection{Special Single-Agent Mechanisms}
\paragraph{Item Pricing.} An item pricing is defined by a set of
prices $\pricej$, one for each item $j$. A buyer is allowed to select
as many items as he pleases, up to some downward-closed constraint
${\mathcal F}$, and he pays the sum of the associated prices. That is, if the
buyer selects the set $S \subseteq [m]$, he pays $\sum_{j\in S}\pricej$.
The buyer then selects the set $S \in {\mathcal F}$ which maximizes
$\sum_{j\in S}(\valj - \pricej)$. We use $\mathsc{SRev}({\mathbf \dist},{\mathcal F})$ to
denote the optimal revenue obtainable by any item pricing from a buyer
with value distribution ${\mathbf \dist}$ and demand constraint ${\mathcal F}$.
\paragraph{Bundle Pricing.} A bundle pricing is defined by a single
price (a.k.a. entry fee) $\pi$. A buyer can buy any subset of items
satisfying the demand constraint ${\mathcal F}$ at price $\pi$. A rational
buyer chooses to participate (i.e. pay the fee) if
$v([m])=\max_{S\in{\mathcal F}}v(S) \geq \pi$ and then selects a
corresponding maximal set $S$. We use $\mathsc{BRev}({\mathbf \dist},{\mathcal F})$ to
represent the optimal revenue obtainable by any bundle pricing from a
buyer with value distribution ${\mathbf \dist}$ and demand constraint ${\mathcal F}$.
\paragraph{Two-Part Tariffs.} A two-part tariff is a common
generalization of both item pricings and bundle pricings. It is
described by an $m+1$ dimensional vector of prices: $(\pi, \pricej[1],
\cdots, \pricej[m])$. The mechanism offers each set $S\subseteq [m]$
of items to the buyer at a price of $\pi+\sum_{j\in S} \pricej$; the
buyer can then choose to buy his favorite set at these offered
prices. Informally speaking, the mechanism charges the buyer an {\em
entry fee} of $\pi$ for the right to buy any set of items, with item
$j$ offered at a fixed price of $\pricej$. Like other pricing-based
mechanisms, two-part tariffs are deterministic, dominant strategy
incentive compatible mechanisms.
A utility-maximizing buyer with values ${\mathbf \val}$ and feasibility constraint
${\mathcal F}$ when offered a two-part tariff $(\pi, {\mathbf \price})$ buys the set $S\in{\mathcal F}$
of items that maximizes ${\mathbf \val}(S)-\pi-\sum_{j\in S} \pricej$, if that quantity
is non-negative\footnote{This is essentially an ex-post IR condition.}; in that case, we say that the buyer participates in the
mechanism. We denote the revenue of a two-part tariff $(\pi, {\mathbf \price})$ offered
to a buyer with feasibility constraint ${\mathcal F}$ and value distribution ${\mathbf \dist}$
by $\revt({\mathbf \dist},{\mathcal F})$. We use $\mathsc{TRev}({\mathbf \dist},{\mathcal F})$ to denote the optimal
revenue that a two-part tariff can obtain from a buyer with value distribution
${\mathbf \dist}$ and demand constraint ${\mathcal F}$.
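For matroid demand constraints, the buyer's best response to a two-part tariff is itself a greedy computation on per-item surpluses $\valj - \pricej$. A sketch for the $k$-uniform case (toy numbers throughout):

```python
# Sketch: a constrained additive buyer's best response to a two-part tariff under
# a k-uniform demand constraint. For matroids, greedily taking the largest
# positive per-item surpluses v_j - p_j maximizes the buyer's surplus.

def best_set_k_uniform(values, prices, k):
    margins = sorted(((v - p, j) for j, (v, p) in enumerate(zip(values, prices))
                      if v > p), reverse=True)
    return [j for _, j in margins[:k]]

def tariff_response(values, prices, entry_fee, k):
    S = best_set_k_uniform(values, prices, k)
    surplus = sum(values[j] - prices[j] for j in S)
    if surplus >= entry_fee:               # participate only if the fee is covered
        return S, entry_fee + sum(prices[j] for j in S)
    return [], 0.0

S, pay = tariff_response(values=[6.0, 5.0, 4.0], prices=[1.0, 3.0, 1.0],
                         entry_fee=3.0, k=2)
```

Setting the entry fee to zero recovers an item pricing, and setting all item prices to zero recovers a bundle pricing, matching the two special cases above.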
Two-part tariffs are known to be approximately optimal in certain
single-agent settings. The following results\footnote{Here $\feasi[\mathsc{UnitDemand}] =
\{S\subseteq [m] \mid |S|\leq 1\}$ represents a unit-demand buyer, and
$\feasi[\mathsc{Additive}] = 2^{[m]}$ represents a buyer with fully additive values.}
are due to \citet{cms-10} and \citet{bilw-focs14}
respectively. \citet{rw-15} proved a similar result for constrained
additive values, but with a very large approximation factor (about
340).
\begin{align*}
\mathsc{Rev}({\mathbf \dist},\feasi[\mathsc{UnitDemand}]) & \leq 4\,\mathsc{TRev}({\mathbf \dist},\feasi[\mathsc{UnitDemand}])\\
\mathsc{Rev}({\mathbf \dist},\feasi[\mathsc{Additive}]) & \leq 6\,\mathsc{TRev}({\mathbf \dist},\feasi[\mathsc{Additive}])
\end{align*}
\paragraph{Pricings with an {Ex~Ante}\ Constraint.}
Next we extend the above definitions to respect {ex~ante}\ supply
constraints. We say that a two-part tariff $(\pi, {\mathbf \price})$ satisfies
{ex~ante}\ constraint ${\mathbf \eaprob}$ if for all $j$,
$\pricej\ge \eapricej=\distj^{-1}(1-\eaprobj)$. Note that this is a
stronger condition than merely requiring that the mechanism allocates
item $j$ with {ex~ante}\ probability at most $\eaprobj$. We use
$\eatrev({\mathbf \dist},{\mathcal F})$ to denote the optimal revenue achieved by a
demand-feasible two-part tariff that satisfies {ex~ante}\ constraint
${\mathbf \eaprob}$. Likewise, we use $\easrev({\mathbf \dist},{\mathcal F})$ to denote the
optimal revenue achievable by an item pricing ${\mathbf \price}$ with
$\pricej\ge \eapricej$ for all $j$.
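To make the quantile formula concrete, the following sketch computes {ex~ante}\ prices $\eapricej = \distj^{-1}(1-\eaprobj)$ and verifies that posting $\eapricej$ sells item $j$ with probability exactly $\eaprobj$. The unit-rate exponential value distribution and the helper names are illustrative assumptions, not part of the model.

```python
import math

def exp_cdf(v, rate=1.0):
    """CDF of an Exponential(rate) value distribution."""
    return 1.0 - math.exp(-rate * v)

def ex_ante_price(q, rate=1.0):
    """Price p with Pr[v > p] = q, i.e. p = F^{-1}(1 - q)."""
    # F(p) = 1 - q  =>  1 - exp(-rate*p) = 1 - q  =>  p = -ln(q) / rate
    return -math.log(q) / rate

qs = [0.5, 0.25, 0.1]                      # ex ante sale probabilities
prices = [ex_ante_price(q) for q in qs]

# Posting each price sells the item with probability exactly q_j.
for q, p in zip(qs, prices):
    assert abs((1.0 - exp_cdf(p)) - q) < 1e-12
```

Note that smaller {ex~ante}\ probabilities force higher prices, which is why an {ex~ante}\ constraint is a stronger requirement than a bound on the realized sale probability.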
\subsection{Multi-Agent (Sequential) Two-Part Tariff Mechanisms} We
now extend the definition of two-part tariffs to multi-agent settings.
Consider a setting with $n$ agents and demand constraints
${\mathcal F}=\{\feasi\}_{i\in[n]}$. A {\em sequential two-part tariff} for
this setting is parameterized by an ordering $\sigma$ over the agents,
a set of entry fees ${\mathbf \ef}=(\efi[1], \cdots, \efi[n])$, and a set of
prices ${\mathbf \price}=\{\priceij\}$. The mechanism proceeds as follows.
\begin{enumerate}
\item The ordering $\sigma$ and prices ${\mathbf \ef};{\mathbf \price}$ are announced.
\item Each agent $i$ independently decides whether or not to participate in
the mechanism. If the agent decides to participate, then he pays his
corresponding entry fee~$\efi$.
\item The mechanism considers agents in the order given by $\sigma$. When an
agent $i$ is considered, if the agent previously declined to participate, no
items are allocated and no payment is charged. Otherwise, of the items
unallocated so far, the agent is allowed to purchase his favorite feasible set
of items at the prices $\priceij$.
\end{enumerate}
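The steps above can be sketched in code. The following is a minimal simulation for unit-demand buyers; all names are hypothetical, and the participation rule is deliberately simplified: each buyer evaluates the entry fee as if every item were still available, which only approximates the Bayesian decision in step 2.

```python
def run_sequential_two_part_tariff(order, entry_fees, prices, values):
    """One run of a sequential two-part tariff with unit-demand buyers.
    values[i][j]: buyer i's value for item j; prices[i][j]: i's item prices.
    Simplification: a buyer participates if his best surplus over ALL items
    covers the entry fee (ignoring that some items may already be gone)."""
    revenue = 0.0
    available = set(prices[order[0]].keys())
    allocation = {}
    # Step 2: participation decisions, made before any items are sold.
    participates = {
        i: max(values[i][j] - prices[i][j] for j in available) >= entry_fees[i]
        for i in order
    }
    revenue += sum(entry_fees[i] for i in order if participates[i])
    # Step 3: participating buyers purchase in the order sigma.
    for i in order:
        if not participates[i] or not available:
            continue
        j = max(available, key=lambda j: values[i][j] - prices[i][j])
        if values[i][j] > prices[i][j]:   # buy favorite remaining item
            allocation[i] = j
            revenue += prices[i][j]
            available.remove(j)
    return revenue, allocation

# Two unit-demand buyers, two items; deterministic values for illustration.
values = {1: {'A': 5, 'B': 3}, 2: {'A': 4, 'B': 4}}
prices = {1: {'A': 2, 'B': 1}, 2: {'A': 2, 'B': 1}}
fees = {1: 1.0, 2: 1.0}
rev, alloc = run_sequential_two_part_tariff([1, 2], fees, prices, values)
```

In this instance both buyers pay the entry fee, buyer 1 takes item A and buyer 2 takes the remaining item B.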
Observe that agents choose whether or not to participate in the
mechanism before knowing which items will be available when it is
their turn to purchase. Accordingly, a sequential two-part tariff is
BIC but not necessarily DSIC.
The sequential two-part tariff mechanisms that we develop in this
paper are {\em order oblivious} in the sense that their revenue
guarantees hold regardless of the ordering $\sigma$ chosen over the
agents. Accordingly, in describing these mechanisms, we need only
specify the prices ${\mathbf \ef};{\mathbf \price}$.
In some cases, our two-part tariff mechanisms disallow agents from
buying certain sets of items. Specifically, a {\em demand-limiting
sequential two-part tariff} is parameterized by an ordering
$\sigma$, prices ${\mathbf \ef}; {\mathbf \price}$, as well as feasibility constraints
${\mathcal F}'=\{\feasi'\}_{i\in[n]}$ where, for every agent $i$,
$\feasi'\subseteq \feasi$ is a matroid constraint stronger than the
agent's original demand constraint. When it is agent $i$'s turn to buy
items, the agent is allowed to buy any subset of items in
$\feasi'$. In particular, the agent is not allowed to buy sets of
items in~$\feasi\setminus\feasi'$.
\section{Single-Agent Approximation for General Matroids}
\label{sec:general-matroid}
The proof of Lemma~\ref{lem:approx-partition} relied upon the existence of a
threshold-based prophet inequality for partition matroids, which translates
directly into a pricing scheme. No such prophet inequality is known for general
matroids, but a recent result of \citet{fsz-15} provides a
\scnote{demand-limiting pricing scheme}.
\begin{theorem}
(\cite{fsz-15}) For a general matroid ${\mathcal F}$, constant $b \in (0,1)$, {ex~ante}\
constraints ${\mathbf \eaprob} \in b{\mathcal P}_{\feas}$, and corresponding {ex~ante}\ prices
${\mathbf \eaprice}$, there exists a submatroid ${\mathcal F}' \subseteq {\mathcal F}$ such that
\[{\mathbf \eaprice}\cdot{\mathbf \eaprob} \leq \frac{1}{1-b}\easrev({\mathbf \dist},{\mathcal F}').\]
Furthermore, the prices and constraint ${\mathcal F}'$ which achieve this bound are
efficiently approximable.
\end{theorem}
Combining Theorems~\ref{thm:single-agent} and \ref{thm:ocrs}, we obtain
Lemma~\ref{lem:approx-general} for general matroids.
\begin{lemma}[\bf Single-agent approximation \# 2]
\label{lem:approx-general}
Let $\mathcal D$ be any value distribution and ${\mathcal F}$ be an arbitrary
matroid with feasible polytope ${\mathcal P}_{\feas}$. Then, for any $q\in
\frac 12{\mathcal P}_{\feas}$, there exists a submatroid ${\mathcal F}'\subseteq{\mathcal F}$,
such that
\[ \earev(\mathcal D, {\mathcal F}) \le 33.1\,\eatrev(\mathcal D, {\mathcal F}') \]
\end{lemma}
\section{The {Ex~Ante}\ Relaxation and Stitching}
\label{sec:relaxation}
In this section we prove Lemmas~\ref{lem:relaxation}
and~\ref{lem:stitching-trevs}.
\begin{proofof}{Lemma~\ref{lem:relaxation}}
Let ${\mathcal M}$ be the optimal mechanism for feasibility constraints
${\mathcal F}$ and value distributions ${\mathbf \dist}$ achieving revenue
$\mathsc{Rev}({\mathbf \dist},{\mathcal F})$. We will now consider a buyer $i$ and construct a
mechanism ${\mathcal M}_i$ for this buyer as follows. When the buyer $i$
reports a value vector $\vali$, the mechanism ${\mathcal M}_i$ draws value
vectors $\tilde{\mathbf \val}_{-i}$ from the joint distribution ${\mathbf \dist}_{-i}$; it
then returns the allocation and payment that ${\mathcal M}$ returns at
$(\vali, \tilde{\mathbf \val}_{-i})$. It is easy to see that if ${\mathcal M}$ is
BIC, then so is ${\mathcal M}_i$. Furthermore, ${\mathcal M}_i$ obtains the same
revenue from buyer $i$ as ${\mathcal M}$. Therefore, we have:
\[ \mathsc{Rev}({\mathbf \dist},{\mathcal F}) = \sum_i \revm[{\mathcal M}_i](\disti).\]
Now let $\alloci$ denote the allocation rule of ${\mathcal M}_i$ and let
$\eaprobij = \operatorname{E}\expectarg[\vali\sim\disti]{\allocij(\vali)}$. Then,
recalling equation~\eqref{eq:constrained-revenue}, we have
$\revm[{\mathcal M}_i](\disti) \le \earev[\eaprobi](\disti,\feasi)$, and so,
\[\mathsc{Rev}({\mathbf \dist},{\mathcal F})\le \sum_i \earev[\eaprobi](\disti,\feasi).\]
Finally, the demand feasibility of ${\mathcal M}$ implies that the vector
$\eaprobi$ lies in the polytope $\ptopei$, while the supply
feasibility of ${\mathcal M}$ implies that $\sum_i \eaprobij\le 1$ for all
$j$. This completes the proof.
\end{proofof}
\section{Omitted Proofs}
\label{sec:single-agent-proofs}
\subsection{Proofs from Section \ref{sec:single-agent}}
We make use of the following result of \citet{cms-10}.
\begin{lemma}
\label{lem:unit-srev}
(\cite{cms-10})
For any product distribution ${\mathbf \dist}$,
\[\mathsc{Rev}({\mathbf \dist},\feasi[\mathsc{UnitDemand}]) \leq 4\mathsc{SRev}({\mathbf \dist},\feasi[\mathsc{UnitDemand}]).\]
\end{lemma}
\begin{proofof}{Claim~\ref{lem:RevSRevBound}}
Let ${\mathcal M}$ be a BIC mechanism such that $\revm({\mathbf \dist}) =
\mathsc{Rev}({\mathbf \dist},{\mathcal F})$. Let $({\mathbf \alloc}({\mathbf \val}), p({\mathbf \val}))$, where
$\sum_{j=1}^m\allocj({\mathbf \val}) \leq m$, be the lottery offered by ${\mathcal M}$ to an
agent who reports ${\mathbf \val}$. We modify ${\mathcal M}$ to get ${\mathcal M}'$, a BIC
mechanism which allocates at most one item and has revenue
$\revm[{\mathcal M}']({\mathbf \dist}) = \frac{1}{m}\revm({\mathbf \dist})$.
For every type ${\mathbf \val}$, let ${\mathbf \alloc}'({\mathbf \val}) = \frac{1}{m}{\mathbf \alloc}({\mathbf \val})$ and
$p'({\mathbf \val}) = \frac{1}{m}p({\mathbf \val})$ be the lottery offered by ${\mathcal M}'$.
Since $|{\mathbf \alloc}({\mathbf \val})|_1 \leq m$, we have $|{\mathbf \alloc}'({\mathbf \val})|_1 \leq 1$, and so
${\mathcal M}'$ is feasible for the unit-demand setting. Because the buyer's utility
is quasi-linear, scaling the allocation probabilities and payments by $\frac{1}{m}$ simply
scales the utility of each outcome by $\frac{1}{m}$. Therefore, the buyer will select
corresponding outcomes in ${\mathcal M}$ and ${\mathcal M}'$, and ${\mathcal M}'$ is BIC with
revenue $\frac{1}{m}\revm({\mathbf \dist})$. Combined with
Lemma~\ref{lem:unit-srev}, this completes the proof.
\end{proofof}
\subsection{Proofs from Section~\ref{sec:symmetric}}
\begin{proofof}{Lemma~\ref{lem:symmetric-reduction}}
Let ${\mathcal M}$ be a demand- and supply-feasible BIC mechanism such that
$\revm({\mathbf \dist}) = \mathsc{Rev}({\mathbf \dist},{\mathcal F})$. Let $\eaprobij$ be the
probability with which ${\mathcal M}$ sells item $j$ to agent $i$. By
symmetry, we can permute the identities of the agents uniformly at
random before running ${\mathcal M}$ without hurting the expected
revenue. Under this permutation, the {ex~ante}\ probability with which
${\mathcal M}$ sells $j$ to $i$ is at most $1/n$. We can therefore assume
without loss of generality that $\eaprobij \leq 1/n$. Now, consider
a single agent $i$; with probability $1/2$, allocate the empty set to
this agent at price $0$, and with probability $1/2$, draw values for
all other agents from ${\mathbf \dist}_{-i}$ and simulate mechanism
${\mathcal M}$. The resulting mechanism is a single agent mechanism that
obtains a $\tfrac{1}{2n}$ fraction of the revenue of ${\mathcal M}$ and satisfies an
{ex~ante}\ constraint ${\mathbf \eaprob}\in
{\mathcal P}_{\feas}\cap\left[0,\tfrac{1}{2n}\right]^m$. The lemma follows.
\end{proofof}
\section{Two-Part Tariffs for a Single Agent}
\label{sec:single-agent}
We now turn to bounding the revenue from a single agent subject to an {ex~ante}\
constraint. In this section we will prove Lemma~\ref{lem:approx-partition}.
In the following discussion, we assume that the buyer has a product
value distribution $\mathcal D=\prod_j\distj$,
and faces a demand feasibility constraint ${\mathcal F}$, while the mechanism is
subject to an {ex~ante}\ supply constraint ${\mathbf \eaprob}$. Recall that we define the
{ex~ante}\ prices ${\mathbf \eaprice}$ as $\eapricej = \distj^{-1}(1-\eaprobj)$ for all
items $j$.
\subsubsection*{Core-Tail Decomposition with {Ex~Ante}\ Constraints}
We begin by defining the notation for the core-tail decomposition (see
Table~\ref{tab:notation}). Let
$\tau \geq 0$ be a constant to be defined later. We use $t_j =
\eapricej+\tau$ to denote the threshold for classifying values into
the core or the tail. Specifically, for any item $j$, if $\valj >
t_j$, we say item $j$ is in the tail, otherwise it is in the
core. Let $\dist^C_j$ (resp., $\dist^T_j$) denote the distribution for item
$j$'s value conditioned on the item being in the core (resp., tail).
For a set $A\subseteq [m]$ of items, let $\tprobA$ denote the
probability that the items in $A$ are in the tail and the remaining
items are in the core; that is, $\tprobA=\left(\prod_{j\in
A}\prob[\valj\sim\distj]{\valj > t_j}\right)
\left(\prod_{j\not\in A}\prob[\valj\sim\distj]{\valj \le
t_j}\right)$. Then $\tprobA[\emptyset]$ denotes the probability that all
items are in the core. Observe that as we increase the constant
$\tau$ (thereby increasing the core-tail thresholds uniformly), the
probability $\tprobA[\emptyset]$ increases. We pick $\tau$ to be the smallest
non-negative number such that $\tprobA[\emptyset]\ge 1/2$. Observe that
$\tau>0$ implies\footnote{For simplicity, we are assuming that the
value distribution does not contain any point masses; it is easy to
modify our argument to work in the absence of this assumption, but
we omit the details.} $\tprobA[\emptyset]=1/2$.
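Since $\tprobA[\emptyset]$ is continuous and increasing in $\tau$ for atomless distributions, the cutoff can be located by bisection. A sketch, assuming (purely for illustration) unit-rate exponential value distributions and hypothetical {ex~ante}\ prices:

```python
import math

def core_prob(tau, ea_prices, rate=1.0):
    """q_emptyset(tau) = prod_j Pr[v_j <= p_j + tau] for Exp(rate) values."""
    prob = 1.0
    for p in ea_prices:
        prob *= 1.0 - math.exp(-rate * (p + tau))
    return prob

def find_tau(ea_prices, lo=0.0, hi=100.0, iters=60):
    """Smallest tau >= 0 with q_emptyset(tau) >= 1/2 (monotone bisection)."""
    if core_prob(lo, ea_prices) >= 0.5:
        return lo                       # already at least 1/2 at tau = 0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if core_prob(mid, ea_prices) >= 0.5:
            hi = mid
        else:
            lo = mid
    return hi

ea_prices = [0.2, 0.4, 0.6, 0.8]        # hypothetical ex ante prices
tau = find_tau(ea_prices)
# tau > 0 here, so the constraint binds: q_emptyset(tau) = 1/2.
assert abs(core_prob(tau, ea_prices) - 0.5) < 1e-9
```

The final assertion illustrates the point made in the footnote: with atomless values, whenever $\tau>0$ the threshold is tight, $\tprobA[\emptyset]=1/2$.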
We now state our version of the core-tail decomposition, extended to
respect {ex~ante}\ constraints. We defer the proof to
Section~\ref{sec:core-decomp}. Note that although the sum over tail
revenues does not explicitly enforce the {ex~ante}\ constraints, the
tail distributions are supported only on values above the {ex~ante}\
prices ${\mathbf \eaprice}$.
\begin{lemma}[\bf Core Decomposition with {Ex~Ante}\ Constraints]
\label{lem:core-decomposition}
For any product distribution ${\mathbf \dist}$, feasibility constraint ${\mathcal F}$,
and {ex~ante}\ constraint ${\mathbf \eaprob}$,
\[\earev({\mathbf \dist},{\mathcal F}) \leq \eaVal(\core,{\mathcal F}) + \sum_{A\subseteq
[m]}\tprobA\mathsc{Rev}(\dists^T_A,{\mathcal F}|_A)\]
\end{lemma}
\begin{table}[t]
\renewcommand{\arraystretch}{1.5}
\caption{Notation for Section~\ref{sec:single-agent}.}
\begin{tabular}{r l l}
\hline
Notation & Definition & Formula \\
\hline
${\mathbf \eaprob}$ & {Ex~ante}\ probabilities & \\
${\mathbf \eaprice}$ & {Ex~ante}\ prices &
$\eapricej = \distj^{-1}(1-\eaprobj)\,\,\forall j\in [m]$ \\
$t_j$ & Core-tail threshold for item $j$ &
$\eapricej+\tau$ \\
$\tau$ & Difference between $t_j$ and $\eapricej$; same for
all items & $\min\{t \mid \prob[{\mathbf \val}\sim{\mathbf \dist}]{\valj \le
t+\eapricej \;\forall j} \ge 1/2 \} $ \\
$\dist^C_j$ & Core distribution for item $j$ &
$\distj|_{{\valj \leq t_j}}$ \\
$\dist^T_j$ & Tail distribution for item $j$ &
$\distj|_{{\valj > t_j}}$ \\
$\dists^C_A$ & Core distribution for items not in $A$ &
$\prod_{j\not\in A}\dist^C_j$ \\
$\dists^T_A$ & Tail distribution for items in $A$ &
$\prod_{j\in A}\dist^T_j$ \\
$\tprobj$ & Probability item $j$ is in the tail &
$\prob[\valj\sim\distj]{\valj > t_j}$ \\
$\tprobA$ & Probability exactly items in $A$ are in the tail &
$\left(\prod_{j\in A}\tprobj\right)
\left(\prod_{j\not\in A}(1-\tprobj)\right)$ \\
${\mathbf \dist}-{\mathbf \price}$ & Distribution ${\mathbf \dist}$ shifted to the left by ${\mathbf \price}$ &
\\
\hline
\end{tabular}
\label{tab:notation}
\end{table}
\subsection{Stitching together the multi-agent mechanisms}
\begin{proofof}{Lemma~\ref{lem:stitching-trevs}}
For every buyer $i$, let $\efi$ and $(\priceij[1], \cdots,
\priceij[m])$ denote the entry fee and item prices respectively in
the mechanism ${\mathcal M}_i$. We will compose the mechanisms ${\mathcal M}_i$ to
obtain a single mechanism ${\mathcal M}$ as follows.
The mechanism ${\mathcal M}$ considers buyers in an arbitrary order and
offers items for sale sequentially to the buyers in that order. When
it is buyer $i$'s turn, some (random set of) items have already been
sold to other buyers. The mechanism offers the remaining items to
buyer $i$ via a two-part tariff: it charges the buyer an entry fee
of $\frac 12\efi$ for the right to buy any subset of the remaining
items, with item $j$ priced at $\priceij$. Importantly, buyers must
make the decision of whether or not to participate (that is, whether
or not to pay the entry fee) before knowing which items are left
unsold.
By definition, the mechanism is BIC: buyers may choose whether or
not to participate and which subset of items to purchase.
Let us now consider a single buyer $i$. We first claim that when the
mechanism ${\mathcal M}$ considers buyer $i$, for every item $j$, the
probability (taken over value vectors of other agents) that item $j$
is available to be bought by $i$ is at least $1/2$. Recall that for every pair $i,j$,
$\prob[\valij\sim\distij]{\valij>\priceij}=1-\distij(\priceij)\le 1-\distij(\eapriceij)=\eaprobij$. So the
probability that some agent $i'$ buys item $j$ is at most
$q_{i'j}$. Therefore, the probability (over values of agents
other than $i$) that item $j$ is allocated to an agent other than
$i$ is at most $\sum_{i'\ne i} q_{i'j}\le 1/2$, and this
proves the claim.
We will now use the above claim to argue that if after drawing his
value vector the buyer chooses to participate (i.e. pay the entry
fee) in mechanism ${\mathcal M}_i$, then he chooses to participate in
${\mathcal M}$. If agent $i$ participates in mechanism ${\mathcal M}_i$, then for
some set $S\in {\mathcal F}_i$ his value vector satisfies $\sum_{j\in S}
(\valij-\priceij)-\efi>0$. In the mechanism ${\mathcal M}$, the agent
derives from the same set $S$ an expected utility of
\[\left(\sum_{j\in S} \prob{j\text{ is
available for } i}(\valij-\priceij) \right) -\frac 12\efi,\]
which by the above claim is at least $\frac 12\left(\sum_{j\in S}
(\valij-\priceij)-\efi\right)>0$. Consequently, if ${\mathcal M}_i$ obtains the
entry fee $\efi$ from agent $i$, then ${\mathcal M}$ obtains the entry fee
$\efi/2$.
Next we claim that if agent $i$ buys item $j$ in mechanism ${\mathcal M}_i$
and item $j$ is available for him in mechanism ${\mathcal M}$, then the
agent buys item $j$ in ${\mathcal M}$. This follows directly from
Lemma~\ref{lem:matroid-greedy} (Appendix~\ref{sec:matroids}) by
noting that ${\mathcal M}_i$ and ${\mathcal M}$ offer the same item prices to the
agent and that the agent is a utility maximizer. As argued
previously, item $j$ is available with probability at least $1/2$,
therefore, this claim implies that if ${\mathcal M}_i$ obtains the price
$\priceij$ from agent $i$, then ${\mathcal M}$ obtains the same price
$\priceij$ with probability $1/2$. Putting this together with the
above observation about the entry fee, we get that ${\mathcal M}$ obtains in
expectation at least half of the total revenue obtained by the
mechanisms ${\mathcal M}_i$.
\end{proofof}
The proof of the lemma relies upon three facts: (1) mechanism ${\mathcal M}$
offers each item with probability at least half to each buyer, (2)
under these probabilities, the buyer's expected utility from a set $S$
is at least half his utility from obtaining $S$ with certainty, and,
(3) in the composed mechanism, the buyer selects those items in $S$
that are still available. Fact (2) holds more generally for a buyer
with any monotone submodular value function~\cite{FMV-11}. Fact (3) follows
directly from the definition of gross substitutes valuations,\footnote{A
valuation $v$ satisfies the gross substitutes condition if for all price
vectors ${\mathbf \price}$, ${\mathbf \price}'$ where ${\mathbf \price} \leq {\mathbf \price}'$, and for all $S$ such
that $v(S) - {\mathbf \price}(S) \geq v(S') - {\mathbf \price}(S')$ for all $S'$ (writing
${\mathbf \price}(S)=\sum_{j\in S}\pricej$), there exists $T$ such
that $v(T) - {\mathbf \price}'(T) \geq v(T') - {\mathbf \price}'(T')$ for all $T'$ and $\{j \in S :
\pricej = \pricej'\} \subseteq T$.} a special case of submodular value
functions. So Lemma~\ref{lem:stitching-trevs} holds more generally for buyers
with gross substitutes valuations.
\section{Approximation for Symmetric Agents}
\label{sec:symmetric}
Computing the approximate mechanisms of
Theorem~\ref{thm:main-partition} requires being able to efficiently
solve the {ex~ante}\ optimization, $\max_{{\mathbf \eaprob}} \sum_i
\earev[\eaprobi](\disti,\feasi) \; \text{s.t. } \sum_i
\eaprobi\le\vec{\mathbf 1}$. This is not necessarily a convex
optimization problem and it is not clear whether this can be solved or
approximated efficiently in general. In this section we show how to
solve this problem in the special case where agents are a priori
identical.
In a {\em symmetric agents} setting, agents share a common feasibility
constraint and value distribution. In particular, $\feasi =
\feasi[i']={\mathcal F}$ and $\disti = \disti[i']=\mathcal D$ for all $i, i'\in
[n]$. Note that the values of different items are not necessarily
distributed identically, nor is ${\mathcal F}$ necessarily symmetric
across items. Since each agent is identical, we can focus on
maximizing the revenue obtained from a single agent, while ensuring
that the {ex~ante}\ probability of selling each item is small
enough that we may apply Lemma~\ref{lem:stitching-trevs}. We formalize
this in the following lemma. See Appendix~\ref{sec:single-agent-proofs} for a
proof.
\begin{lemma}
\label{lem:symmetric-reduction}
In a symmetric agents setting with $n$ agents, a matroid feasibility constraint ${\mathcal F}$ and
product distribution ${\mathbf \dist}$,
\[\mathsc{Rev}(\times_n {\mathbf \dist}, \times_n {\mathcal F}) \leq
2n \max_{{\mathbf \eaprob}\in {\mathcal P}_{\feas} \cap\left[0,\tfrac{1}{2n}\right]^m}
\earev[{\mathbf \eaprob}](\mathcal D,{\mathcal F}).\]
\end{lemma}
For the remainder of this section, we focus on efficiently
approximately maximizing the single agent objective
$\earev[{\mathbf \eaprob}](\mathcal D,{\mathcal F})$ subject to ${\mathbf \eaprob}\in
\widehat{{\mathcal P}_{\feas}}$, where we use $\widehat{{\mathcal P}_{\feas}}$ as short form for
${\mathcal P}_{\feas}
\cap\left[0,\tfrac{1}{2n}\right]^m$. Lemma~\ref{lem:single-agent}
bounds the revenue by three terms; we observe that $\easrev({\mathbf \dist},
{\mathcal F})$ is at most $\max_{{\mathbf \eaprob}'\le{\mathbf \eaprob}}
{\mathbf \eaprob}'\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob}')$. Therefore,
\begin{align}
\notag \max_{{\mathbf \eaprob}\in \widehat{{\mathcal P}_{\feas}}}\earev({\mathbf \dist},{\mathcal F}) & \leq
6\,\max_{{\mathbf \eaprob}\in \widehat{{\mathcal P}_{\feas}}}\mathsc{BRev}({\mathbf \dist}-{\mathbf \dist}^{-1}(1-{\mathbf \eaprob}),{\mathcal F}) + 26.1\, \max_{{\mathbf \eaprob}\in \widehat{{\mathcal P}_{\feas}}}{\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob})\\
&\label{eq:symmetric} \le 6\,\mathsc{BRev}({\mathbf \dist}-{\mathbf \dist}^{-1}(1-1/2n),{\mathcal F}) + 26.1\, \max_{{\mathbf \eaprob}\in \widehat{{\mathcal P}_{\feas}}}{\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob})
\end{align}
\noindent
The first term on the RHS of Equation~\eqref{eq:symmetric} is easy to capture.
We can use sampling to efficiently compute the optimal bundle price
for the value distribution ${\mathbf \dist}-{\mathbf \dist}^{-1}(1-1/2n)$. Call this
price $a$, and let $\pricej=\distj^{-1}(1-1/2n)$ for all items $j\in
[m]$. Then, by Lemma~\ref{lem:stitching-trevs} the multi-agent
sequential two-part tariff mechanism that offers each agent an entry
fee of $a$ and per item pricing ${\mathbf \price}$ obtains revenue at least
$\frac n2 \mathsc{BRev}({\mathbf \dist}-{\mathbf \dist}^{-1}(1-1/2n),{\mathcal F})$.
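The sampling step can be sketched as follows: given i.i.d.\ samples of the buyer's grand-bundle value (drawn from the shifted distribution), an optimal empirical posted price can always be taken at a sample point. The helper name is hypothetical; this is a sketch, not the paper's exact procedure.

```python
def best_bundle_price(bundle_value_samples):
    """Grid-search the empirical revenue p * Pr_hat[bundle value >= p]
    over the sampled values themselves: an optimal posted price can
    always be taken at a support point of the empirical distribution."""
    samples = sorted(bundle_value_samples, reverse=True)
    n = len(samples)
    best_price, best_rev = 0.0, 0.0
    for k, v in enumerate(samples, start=1):
        rev = v * k / n        # price v sells to the k highest samples
        if rev > best_rev:
            best_price, best_rev = v, rev
    return best_price, best_rev
```

For instance, on samples $\{1,2,3,4\}$ the empirical optimum is a price of $3$, selling with probability $1/2$ for revenue $1.5$.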
This leaves us with the following maximization problem:
\begin{equation}
\label{eq:bq-max}
\begin{aligned}
\text{maximize }\; & {\mathbf \eaprob}\cdot{\mathbf \dist}^{-1}(1-{\mathbf \eaprob}) & \text{s.t. }\; & {\mathbf \eaprob} \in {\mathcal P}_{\feas} \cap \left[0,\tfrac{1}{2n}\right]^m
\end{aligned}
\end{equation}
In Appendix~\ref{sec:optimizing-beta-dot-q} we discuss how to solve (a
relaxation of) this problem efficiently when ${\mathcal F}$ is a matroid. We
obtain a (potentially random) vector ${\mathbf \eaprob}$ that in expectation
satisfies the feasibility constraint $\widehat{{\mathcal P}_{\feas}}$ and obtains an
expected objective function value no smaller than the optimum of
\eqref{eq:bq-max}.
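For intuition, the following sketch solves a concrete instance of problem~\eqref{eq:bq-max} by Lagrangian water-filling, under the illustrative assumptions of uniform $[0,h_j]$ value distributions, for which $\distj^{-1}(1-q)=h_j(1-q)$, and a cardinality ($k$-uniform matroid) constraint; it is not the general algorithm referenced above.

```python
def solve_bq_max_uniform(h, k, cap):
    """Maximize sum_j h_j * q_j * (1 - q_j)  (i.e. q_j * F_j^{-1}(1 - q_j)
    for Uniform[0, h_j] values) s.t. sum_j q_j <= k and 0 <= q_j <= cap.
    Separable concave objective => water-filling on the multiplier lam."""
    def q_of(lam):
        # argmax_q of h*q*(1-q) - lam*q on [0, cap] is clip((1 - lam/h)/2).
        return [min(cap, max(0.0, 0.5 * (1.0 - lam / hj))) for hj in h]
    q = q_of(0.0)
    if sum(q) <= k:
        return q                     # cardinality constraint is slack
    lo, hi = 0.0, max(h)             # at lam = max(h), all q_j = 0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if sum(q_of(mid)) > k:
            lo = mid
        else:
            hi = mid
    return q_of(hi)

q = solve_bq_max_uniform([1.0, 2.0, 4.0], k=0.6, cap=0.5)
assert abs(sum(q) - 0.6) < 1e-6      # cardinality constraint is tight
```

Note how the multiplier prices items out in order of $h_j$: the lowest-valued item receives $q_j=0$ while the budget is spent on the items with larger support.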
Then, for partition matroids, we can employ a constructive version of
Theorem~\ref{thm:partition-matroid} due to \citet{CHMS-STOC10} to obtain
a (potentially random) sequential two-part tariff mechanism that
obtains revenue at least $\frac n4$ times the optimum of
\eqref{eq:bq-max}. For general matroids, we can likewise employ a
constructive version of Theorem~\ref{thm:ocrs} due to \citet{fsz-15}
to obtain a (potentially random) demand-limiting sequential two-part
tariff mechanism that obtains revenue at least $\frac n4$ times the
optimum of \eqref{eq:bq-max}. We obtain the following theorem.
\begin{theorem}
For any symmetric, matroid feasibility constraint ${\mathcal F}$ and
symmetric, product distribution ${\mathbf \dist}$, there is an efficiently
computable randomized demand-limiting sequential
two-part tariff mechanism ${\mathcal M}$ and a constant $c$ such that
\[\mathsc{Rev}({\mathbf \dist},{\mathcal F}) \leq c\, \revm({\mathbf \dist}).\]
When ${\mathcal F}$ is a partition matroid, we obtain a sequential two-part
tariff mechanism, and when $\distj$ is regular for all $j$, our
mechanism is deterministic.
\end{theorem}
\subsection{Bounding the Tail}
We first show that the tail revenue can be bounded by selling items
separately under the given {ex~ante}\ supply constraint ${\mathbf \eaprob}$. The main
result of this section is as~follows.
\begin{lemma}
\label{lem:tail-bound}
For any product distribution ${\mathbf \dist}$ over $m$ independent items and
any ${\mathcal F}$,
\[\sum_{A\subseteq [m]}\tprobA\mathsc{Rev}(\dists^T_A,{\mathcal F}|_A) \leq 8(1+\ln 2) \easrev({\mathbf \dist},{\mathcal F})\]
\end{lemma}
\begin{proof}
We make use of the following weak but general relationship between the optimal
revenue and the revenue generated by selling separately for a single-agent
constrained additive value setting; this follows by noting that $\mathsc{Rev}$ and
$\mathsc{SRev}$ are within a factor of $4$ of each other for unit demand agents (see
Appendix~\ref{sec:single-agent-proofs} for a proof).
\begin{claim}
\label{lem:RevSRevBound}
For any product distribution ${\mathbf \dist}$ over $m$ items and any ${\mathcal F}$,
\[\mathsc{Rev}({\mathbf \dist}, {\mathcal F}) \leq 4m\mathsc{SRev}({\mathbf \dist},\feasi[\mathsc{UnitDemand}]).\]
\end{claim}
Applying this claim to the revenues $\mathsc{Rev}(\dists^T_A,{\mathcal F}|_A)$, we get
that
\[
\sum_{A}\tprobA\mathsc{Rev}(\dists^T_A,{\mathcal F}|_A) \leq 4 \sum_A\tprobA|A|\mathsc{SRev}(\dists^T_A,\feasi[\mathsc{UnitDemand}]).
\]
We will now use the fact that the tail contains few items in expectation. Let
$\tprobj$ denote the probability that item $j$ is in the tail: $\tprobj =
\prob[\valj\sim\distj]{\valj > t_j}$. We can write the following series of
inequalities.
\begin{align}
\label{eq:1} \sum_{A}\tprobA|A|\mathsc{SRev}(\dists^T_A,\feasi[\mathsc{UnitDemand}]) &\leq
\sum_{A}\tprobA|A|\sum_{j\in A}\mathsc{Rev}(\dist^T_j) \\
&\notag = \sum_{j\in[m]}\mathsc{Rev}(\dist^T_j)\sum_{A\ni j}\tprobA|A| \\
&\notag = \sum_{j\in[m]}\tprobj\mathsc{Rev}(\dist^T_j)\operatorname{E}\expectarg{\lvert A\rvert \, | j \in A} \\
& \label{eq:4} \leq (1+\ln 2)\sum_{j\in[m]}\earev[\tprobj](\distj) \\
& \label{eq:5} \leq \frac{1}{\tprobA[\emptyset]}(1+\ln 2)\easrev[{\mathbf \xi}]({\mathbf \dist},{\mathcal F})
\end{align}
Here inequality~\eqref{eq:1} follows by removing the demand constraint
$\feasi[\mathsc{UnitDemand}]$.
Inequality~\eqref{eq:4} follows from three observations: (1) the tail is
non-empty with probability at most $1/2$; (2) if $\{z_i\}_{i\in[n]}$ are
probabilities satisfying $\prod_i(1-z_i)\ge 1/2$, then $\sum_iz_i \leq \ln 2$;
(3) a single-agent single-item mechanism for value distribution $\dist^T_j$ that
achieves revenue $\mathsc{Rev}(\dist^T_j)$ would achieve $\tprobj$ times that revenue on
the value distribution $\distj$ while satisfying an {ex~ante}\ supply constraint
of $\tprobj$.
Inequality \eqref{eq:5} follows from the standard argument that the revenue
obtained by selling each item individually at prices $t_j$ (or higher) is
at least $\tprobA[\emptyset]$ times the sum of the corresponding per-item revenues.
Finally, the result follows by recalling that $\tprobA[\emptyset]\ge 1/2$ and relaxing the
{ex~ante}\ constraint.
\end{proof}
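Observation (2) in the proof above, that $\prod_i(1-z_i)\ge 1/2$ forces $\sum_i z_i\le \ln 2$ (via $z\le\ln\frac{1}{1-z}$), can be spot-checked numerically; the illustrative sketch below scales random vectors onto the constraint and checks the bound.

```python
import math, random

def scale_to_constraint(z):
    """Scale z down until prod(1 - s*z_i) >= 1/2; the product is
    decreasing in the scale s, so bisection on s works."""
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if math.prod(1.0 - mid * zi for zi in z) >= 0.5:
            lo = mid
        else:
            hi = mid
    return [lo * zi for zi in z]

random.seed(0)
for _ in range(100):
    z = [random.random() for _ in range(10)]
    z = scale_to_constraint(z)
    assert math.prod(1.0 - zi for zi in z) >= 0.5 - 1e-12
    assert sum(z) <= math.log(2) + 1e-9   # the claimed bound holds
```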
\section{Main Results}
\label{sec:theorems}
We now state our three main results corresponding to the three parts
of the {ex~ante}\ approach for approximating $\mathsc{Rev}({\mathbf \dist}, {\mathcal F})$.
Lemma~\ref{lem:relaxation} corresponds to the first {\bf relaxation}
step, and states that the revenue $\mathsc{Rev}({\mathbf \dist}, {\mathcal F})$ can be bounded
by the sum of single-agent revenues with appropriate {ex~ante}\
constraints. While the lemma is stated here for buyers with
constrained additive values, it holds for arbitrary value functions as
long as values are independent across buyers.
\begin{lemma}[\bf Relaxation]
\label{lem:relaxation}
For any feasibility constraints ${\mathcal F}=\{\feasi\}_{i\in[n]}$ and value
distributions ${\mathbf \dist}=\prod_i\disti$, there exist {ex~ante}\ probability vectors
$\eaprobi[1], \cdots, \eaprobi[n]$, satisfying: (1)
$\eaprobi\in\ptopei$ for all $i$, and, (2) $\sum_i \eaprobij\le 1$
for all $j$, such that
\[\mathsc{Rev}({\mathbf \dist},{\mathcal F})\le \sum_i \earev[\eaprobi](\disti,\feasi).\]
\end{lemma}
Lemma~\ref{lem:stitching-trevs} corresponds to the last {\bf stitching} step, and
shows that any single-agent two-part tariff mechanisms that
collectively satisfy an {ex~ante}\ constraint on every item can be
stitched together into a multi-agent sequential two-part tariff
mechanism without losing much revenue.
\begin{lemma}
\label{lem:stitching-trevs}
For every agent $i$, let ${\mathcal M}_i = (\efi,\pricei)$ be any two-part
tariff that is demand-feasible with respect to a matroid feasibility
constraint $\feasi$ and that satisfies {ex~ante}\ supply constraints
$\eaprobi$ under value distribution $\disti$. Let
${\mathcal F}=\{\feasi\}_{i\in[n]}$ and $\mathcal D=\prod_i\disti$. Then, if
$\sum_i \eaprobij\le 1/2$ for all $j$, there exists a sequential
two-part tariff mechanism ${\mathcal M}$ that is supply-feasible and
demand-feasible with respect to ${\mathcal F}$ such that
\[\revm({\mathbf \dist})\ge \frac 12\sum_i \revm[{\mathcal M}_i](\disti).\]
\end{lemma}
We therefore obtain the following corollary.
\begin{corollary}[\bf Stitching]
\label{cor:stitching}
For any value distributions ${\mathbf \dist}=\prod_i\disti$ and feasibility constraints
${\mathcal F}=\{\feasi\}_{i\in[n]}$, where each $\feasi$ is a matroid, let
$\eaprobi[1], \cdots, \eaprobi[n]$ be any {ex~ante}\ probability vectors
satisfying $\sum_i \eaprobij\le 1/2$ for all $j$. Then, there exists a
demand- and supply-feasible sequential two-part tariff mechanism
${\mathcal M}$ such that
\[\revm({\mathbf \dist})\ge \frac 12\sum_i \eatrev[\eaprobi](\disti,\feasi).\]
\end{corollary}
In order to put together the Relaxation Lemma and the Stitching
Corollary, it remains to relate $\earev$ for a single agent to
$\eatrev$ for the same agent. The following lemma presents such a
relationship when the buyer's demand constraint is a matroid.
\begin{lemma}[\bf Single-agent approximation]
\label{lem:approx-partition}
Let $\mathcal D$ be any product value distribution and ${\mathcal F}$ be a matroid with
feasible polytope ${\mathcal P}_{\feas}$. Then, for any $q\in \frac 12{\mathcal P}_{\feas}$, there
exists a submatroid ${\mathcal F}' \subseteq {\mathcal F}$ such that
\[ \earev(\mathcal D, {\mathcal F}) \le 33.1\,\eatrev(\mathcal D, {\mathcal F}') \]
If ${\mathcal F}$ is a partition matroid, then ${\mathcal F}' = {\mathcal F}$.
\end{lemma}
Putting Lemmas~\ref{lem:relaxation} and \ref{lem:approx-partition},
and Corollary~\ref{cor:stitching} together, and observing that by the
concavity of the revenue objective, $\earev[\frac
12\eaprobi](\disti,\feasi)\ge \frac 12\earev[\eaprobi](\disti,\feasi)$
for all $i$, we get our main result.
\begin{theorem}
\label{thm:main-partition}
For any product value distribution ${\mathbf \dist}$ and feasibility constraints
${\mathcal F}=\{\feasi\}_{i\in[n]}$, where each $\feasi$ is a matroid, there exist
submatroids $\feasi' \subseteq \feasi$ and a supply-feasible
$\{\feasi'\}$-limited sequential two-part tariff mechanism ${\mathcal M}$ such that
\[ \mathsc{Rev}({\mathbf \dist}, {\mathcal F})\le 133\,\revm({\mathbf \dist}) \]
If $\feasi$ is a partition matroid, then $\feasi' = \feasi$.
\end{theorem}
\subsection*{Further Results}
As a consequence of our single-agent approximation
(Lemma~\ref{lem:single-agent} in Section~\ref{sec:single-agent}), we also
obtain an improved approximation for the single-agent revenue maximization
problem with constrained additive values. Specifically, taking ${\mathbf \eaprob} =
\vec{\mathbf{1}}$ and noting ${\mathbf \eaprice} = \vec{\mathbf{0}}$,
Lemma~\ref{lem:single-agent} gives the following bound on the optimal revenue
for the single-agent setting.
\begin{corollary}
\label{cor:true-single-agent}
For any downward closed feasibility constraint ${\mathcal F}$ and any
value distribution ${\mathbf \dist}$,
\[\mathsc{Rev}({\mathbf \dist},{\mathcal F}) \leq 31.1\,\max\left\{\mathsc{SRev}({\mathbf \dist},{\mathcal F}),
\mathsc{BRev}({\mathbf \dist},{\mathcal F})\right\}.\]
\end{corollary}
Also as a consequence of Lemma~\ref{lem:single-agent}, we show the following
bound for revenue maximization under an arbitrary {ex~ante}\ constraint.
\begin{corollary}
\label{cor:general-ex-ante}
Let $\mathcal D$ be any product value distribution and ${\mathcal F}$ be a matroid. Then
for any $q\in[0,1]^m$, there exists a submatroid ${\mathcal F}' \subseteq
{\mathcal F}$ such that
\[\earev(\mathcal D, {\mathcal F}) \le 35.1\,\eatrev(\mathcal D,{\mathcal F}') \]
If ${\mathcal F}$ is a partition matroid, then ${\mathcal F}' = {\mathcal F}$.
\end{corollary}
\section{Introduction}
\label{sec:introduction}
The promising properties of the reversed magnetic shear~(RMS) tokamak
configuration have led to a recent increased interest in the double tearing
mode. RMS configurations result in safety factor profiles that are
non-monotonic. If two rational surfaces of the same $q$ exist near each other
within the plasma, they may couple together via ideal MHD scale processes to form
a single double-tearing mode~(DTM). Linearly the interaction of the two surfaces
creates a self-driven reconnecting instability that depends weakly on
resistivity.\cite{Pritchett1980} Nonlinearly the DTM can
potentially disrupt the annular current ring of RMS devices,\cite{Chang1996}
generate strong sheared flows,\cite{Wang2008} and release large bursts of
kinetic energy.\cite{Ishii2000} As such, they are a proposed driver of off-axis sawtooth
behavior.
One means of stabilizing double-tearing mode activity is
the application of differential rotation to the two DTM layers. In slab
Cartesian simulations equilibrium sheared flow has been shown to interfere with
the coupling between the resonant layers to result in two decoupled, drifting
single tearing modes.\cite{Mao2013} Further increase in the flow amplitude
generates Alfv\'{e}n resonance layers that couple to the tearing surfaces,
increasing or decreasing the mode growth depending on their proximity.
Nonlinearly these Alfv\'{e}n resonances may shield the plasma core and
suppress DTM mode growth.\cite{Wang2011a,Voslion2011} The appearance of such
layers requires, however, flow amplitudes near the in-plane Alfv\'{e}n speed and
shears near the threshold for Kelvin-Helmholtz instability,\cite{Mao2013}
potentially triggering greater instability. Thus we are motivated to explore
alternate means of providing differential plasma rotation.
Diamagnetic drifts emerge as a result of
finite Larmor radius physics in the presence of a pressure gradient such as
the internal transport barriers~(ITBs) observed in RMS plasmas.\cite{Yuh2009}
They have long been studied as means of stabilizing reconnecting modes, and have
several advantages over equilibrium flow. In particular, diamagnetic drifts
local to the reconnecting
layer interfere with the conversion of magnetic energy to
kinetic.\cite{Ara1978a} This local effect has been shown to saturate the
$m=1$ kink-tearing mode in conventional tokamaks,\cite{Rogers1995} leaving
finite sized islands during incomplete sawtooth crashes.\cite{Beidler2011}
The influence of both pressure gradients and
diamagnetic drifts on double-tearing modes has been considered previously by
other authors. In resistive, reduced MHD simulations
Zhao et al.~\cite{Zhao2011} examined the impact of equilibrium pressure
gradients on a cylindrical DTM with a small inter-resonant spacing. Their
results show that the pressure gradient modifies the dependence of the DTM on
resistivity, causing variations in the spectrum of modes at a given surface.
In this work we will expand their study to more widely spaced modes and a wider
variety of pressure gradients, as well as introduce finite Larmor radius
effects. Maget et al.~\cite{Maget2014} considered
neoclassical effects on a DTM in toroidal simulations, targeting Tore Supra
experiments. They found some mode numbers were stabilized by the addition of
diamagnetic drifts, whereas others were enhanced due to toroidal effects. Their
simulations target, however, specific discharges and do not consider variations
in the pressure and drift
profiles, and thus do not illuminate the role of differential rotation in DTM
evolution.
In this work, we use Hall MHD simulations to examine the impact of diamagnetic
drifts on a cylindrical $m=2$, $n=1$ double-tearing mode, considering both the
ability of an electron diamagnetic drift to decouple the two tearing layers and
to stabilize them once separated. To this end, we structure this paper as
follows. In Section~\ref{sec:model} we introduce the Hall MHD model and
describe our simulation code \texttt{MRC-3d}. Section~\ref{sec:equilibrium}
defines the equilibrium safety factor and density profiles used for this study.
In Section~\ref{sec:linear} we report the results of linear resistive and Hall
MHD simulations. We find that the addition of a pressure gradient to our
equilibrium destabilizes an ideal MHD instability that competes with the
stabilizing effects of the diamagnetic drift. As a consequence, we are able to
decrease the linear DTM growth rate only by locating a strong diamagnetic drift
at the dominant, outer rational surface. We use this result to choose
characteristic profiles for nonlinear Hall MHD simulations in Section~\ref
{sec:nonlinear}, and show that the DTM may be saturated before disruption of the
annular current ring. Finally, we summarize our results in Section~\ref
{sec:conclusion} and discuss the consequence of this work for advanced tokamaks.
\section{Hall magnetohydrodynamic model}
\label{sec:model}
Our simulation code \texttt{MRC-3d} uses a standard Hall MHD model.
\begin{align}
\label{eq1:mass}
\frac{\partial \rho}{\partial t} &= -\mathbf{\nabla} \cdot (\rho \mathbf{U} - D \nabla \rho)\\
\label{eq1:momentum}
\frac{\partial \mathbf{P}}{\partial t} &= -\mathbf{\nabla} \cdot
[\rho\mathbf{UU} - \mathbf{BB} + \mathbf{I}(p + B^{2}/2) - \rho \nu \mathbf{\nabla U}]\\
\label{eq1:temperature}
\frac{\partial T_{e}}{\partial t} &= -\mathbf{U} \cdot \mathbf{\nabla}T_{e} - (\gamma - 1)T_{e} \mathbf{\nabla} \cdot \mathbf{U} \\
\label{eq1:pressure_species}
p_{s} &= \rho T_{s}\\
\label{eq1:ohmslaw}
\mathbf{E} &= -\mathbf{U} \times \mathbf{B} + \frac{d_{i}}{\rho}(\mathbf{J} \times \mathbf{B} - \nabla p_{e}) + \eta \mathbf{J}\\
\label{eq1:faraday}
\frac{\partial \mathbf{B}}{\partial t} &= -\mathbf{\nabla} \times \mathbf{E}\\
\label{eq1:current}
\mathbf{J} &= \mathbf{\nabla} \times \mathbf{B}
\end{align}
where $p = (1 + \tau) \rho T_{e}$ is the total pressure and $\tau = T_{i}/T_{e}$ is the
ratio of the ion to electron temperatures. In this work we focus on the cold-ion
regime ($\tau=0$), thus excluding ion diamagnetic drifts.
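As a minimal numerical sketch of the generalized Ohm's law, Eqn.~\ref{eq1:ohmslaw}, the following evaluates the electric field pointwise; the function name and sample field values are illustrative assumptions, not taken from the \texttt{MRC-3d} implementation.

```python
import numpy as np

def ohms_law_E(U, B, J, grad_pe, rho, d_i=0.1, eta=1e-5):
    """Generalized Ohm's law: E = -U x B + (d_i/rho)(J x B - grad p_e) + eta J."""
    return (-np.cross(U, B)
            + (d_i / rho) * (np.cross(J, B) - grad_pe)
            + eta * J)

# Point check: with U = 0 and a force-balanced electron fluid
# (J x B = grad p_e), Ohm's law reduces to the resistive term E = eta J.
B = np.array([0.0, 0.1, 10.0])   # illustrative field, B_z ~ 10 as in the text
J = np.array([0.0, 0.5, 0.0])
E = ohms_law_E(np.zeros(3), B, J, np.cross(J, B), rho=1.0)
```

The Hall and electron-pressure contributions enter at the $d_{i}$ scale and cancel exactly in this force-balanced example.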
The simulation code \texttt{MRC-3d}\cite{mrcdocs} implements the above model in
a fully conservative, finite-volume scheme similar to Chac\'{o}n\cite{Chacon2004},
with additional $d_{i}$ scale Hall and electron pressure gradient terms.
Lengths are normalized to the cylinder radius, magnetic fields to the
asymptotic in-plane magnitude, and velocities to the in-plane Alfv\'{e}n speed.
All other normalizations follow from these. Data management and implicit
time integration are accomplished via
the PETSc~\cite{petsc-web-page,petsc-user-ref,petsc-efficient}
interface in the \texttt{LIBMRC} computational library.\cite{libmrc}
We conduct simulations in 2D helically symmetric
cylindrical geometry. Derivatives in radial $0\leq r \leq 1$ and poloidal
$0 \leq \theta \leq 2\pi$ coordinates are discretized directly. The cylinder is
assumed to be periodic in $z$ with a length of $2\pi R$ where $R=10$ is
the major radius of an approximately equivalent torus with inverse aspect
ratio $\epsilon=0.1$. Derivatives in the axial
coordinate $\phi=z/R$ are taken to be $\mathrm{d}\phi = \iota^{-1} \mathrm{d}
\theta$ where $\iota=n/m$ is the twist of the helix. Thus for analysis we
define the helical coordinates
\begin{align}
\hat{u} &= \frac{1}{\sqrt{1 + \frac{n^{2}}{m^{2}}\frac{r^{2}}{R^{2}}}} \left [ \hat{\theta} - \frac{n}{m} \frac{r}{R} \hat{z} \right ] \\
\hat{h} &= \frac{1}{\sqrt{1 + \frac{n^{2}}{m^{2}}\frac{r^{2}}{R^{2}}}} \left [ \hat{z} + \frac{n}{m} \frac{r}{R}\hat{\theta} \right ]
\end{align}
where $\hat{r}$ and $\hat{u}$ represent the two dimensional perpendicular plane
and $\hat{h}$ is directed along the helix. The helical flux function $\psi^{*}$
is then defined as
\begin{align}
\bm{B} &= \nabla \psi^{*} \times \hat{h}+ B_{h} \hat{h}
\end{align}
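The helical basis can be checked numerically. The sketch below (an illustration, assuming $m=2$, $n=1$, $R=10$ as above) verifies that $(\hat{r}, \hat{u}, \hat{h})$ form a right-handed orthonormal triad at each radius.

```python
import numpy as np

def helical_basis(r, m=2, n=1, R=10.0):
    """Components of (u_hat, h_hat) in the (r_hat, theta_hat, z_hat) basis."""
    g = 1.0 / np.sqrt(1.0 + (n / m) ** 2 * (r / R) ** 2)
    u_hat = g * np.array([0.0, 1.0, -(n / m) * (r / R)])
    h_hat = g * np.array([0.0, (n / m) * (r / R), 1.0])
    return u_hat, h_hat

u, h = helical_basis(0.5)
# u_hat and h_hat are unit length, mutually orthogonal, lie in the
# (theta, z) plane, and satisfy u_hat x h_hat = r_hat.
```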
To reduce the computational costs of this work we use resistivities on
the order of $\eta\sim10^{-5}$. This unrealistically large level of diffusion
causes the equilibrium to decay on a time scale comparable to the growth time of
resistive instabilities. \texttt{MRC-3d} features a mechanism to prevent
equilibrium decay that is equivalent to the addition of a source electric field
in Ohm's law~(Eqn.~\ref{eq1:ohmslaw}). In nonlinear simulations we enable this
mechanism and prevent the resistive decay of the equilibrium. We have confirmed
that the major results of this work persist when the source electric field is
disabled, and will note explicitly when it impacts the mode behavior.
\section{Equilibrium}
\label{sec:equilibrium}
To generate an equilibrium with two nearby $q=2$ rational surfaces we use the
safety factor profile given by Bierwage\cite{Bierwage2005}:
\begin{equation}
\begin{aligned}
\label{eq2:dtm_q_profile}
q(r) &= q_{0} F_{1}(r) \left \{ 1 + (r/r_{0})^{2 w(r)}
\right \}^{1/w(r)} \\
r_{0} &= r_{A} | [m/(nq_{0})]^{w(r_{A})} - 1|^{-1/[2w(r_{A})]} \\
w(r) &= w_{0} + w_{1}r^{2} \\
F_{1}(r) &= 1 + f_{1} \exp \left \{ - [(r - r_{11}) /
r_{12}]^{2} \right \}
\end{aligned}
\end{equation}
with the following constants:
\begin{equation}
\begin{aligned}
\label{eq2:q_profile_fixed_vals}
r_{A} &= 0.655, & w_{0} &= 3.8824, & w_{1} &= 0\\
f_{1} &= -0.238, &r_{11} &= 0.4286, &r_{12} &= 0.304
\end{aligned}
\end{equation}
We set $q_{0}=2.5$, resulting in two $q=2$ rational surfaces
located at $0 < r_{s1} < r_{s2} < 1$, spaced a distance
$D = r_{s2}-r_{s1} \approx 0.26$ apart.
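The two $q=2$ surfaces can be located numerically from Eqns.~\ref{eq2:dtm_q_profile} and \ref{eq2:q_profile_fixed_vals}. The scan-and-bisect scheme below is an illustrative sketch, not the equilibrium iteration used in the simulations; small differences from the spacing quoted above may reflect details of that iteration.

```python
import numpy as np

# Constants from Eq. (q_profile_fixed_vals), with q0 = 2.5 and m/n = 2
q0, m, n = 2.5, 2, 1
rA, w0, f1, r11, r12 = 0.655, 3.8824, -0.238, 0.4286, 0.304

w = w0                                    # w1 = 0, so w(r) is constant
r0 = rA * abs((m / (n * q0)) ** w - 1.0) ** (-1.0 / (2.0 * w))

def q(r):
    F1 = 1.0 + f1 * np.exp(-(((r - r11) / r12) ** 2))
    return q0 * F1 * (1.0 + (r / r0) ** (2 * w)) ** (1.0 / w)

def bisect(f, a, b, tol=1e-10):
    while b - a > tol:
        c = 0.5 * (a + b)
        a, b = (c, b) if f(a) * f(c) > 0 else (a, c)
    return 0.5 * (a + b)

# Scan for sign changes of q(r) - 2 on (0, 1), then refine each bracket
r = np.linspace(1e-3, 0.999, 2000)
s = np.sign(q(r) - 2.0)
idx = np.nonzero(np.diff(s))[0]
surfaces = [bisect(lambda x: q(x) - 2.0, r[i], r[i + 1]) for i in idx]
```

The reversed-shear profile dips below $q=2$ between the two roots, so exactly two rational surfaces are found.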
Diamagnetic drifts require the introduction of a pressure gradient. We use a
monotonic density profile to represent an internal transport barrier as given
by Zhao\cite{Zhao2011}:
\begin{align}
\label{eq2:cyl_pressure_profile}
\rho(r) = N_{0} \left \{ 1 - (1 - N_{b}) \frac{\tanh
(r_{0}/\delta_{N}) + \tanh [ (r -
r_{0})/\delta_{N}]}{\tanh(r_{0}/\delta_{N}) + \tanh[(1 - r_{0})/\delta_{N}]}
\right \}
\end{align}
We fix the core density with $N_{0}=1$ and vary $r_{0}$, $\delta_{N}$, and
$N_{b}$ to change the center, width, and edge magnitude of the density profile.
For simplicity we take the equilibrium electron temperature to be a
constant $T_{0}=1.0$, and assume cold ions ($\tau=T_{i}/T_{e}=0$).
We initialize the equilibrium magnetic field $\bm{B}_{0}$ by specifying a
density profile and iteratively refining $B_{\theta 0}$ and $B_{z0}$ toward the
above safety factor profile,
subject to the constraints of force balance and the condition $B_{z0}\sim 10$.
This equilibrium has a plasma parameter of $\beta \approx 0.01$ so that
$\beta\sim\epsilon^{2}$, consistent with the standard tokamak ordering
assumption. The equilibrium safety factor and example density gradient are shown
in Figure~\ref{fig:q_P_ex}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{q_P_ex.pdf}
\caption{\label{fig:q_P_ex}Equilibrium safety factor profile (solid, left axis)
with two nearby $q=2$ rational surfaces, indicated by dot-dashed vertical
lines, and an example pressure profile (dashed, right axis).}
\end{figure}
This pressure gradient produces~(in the cold ion regime with a constant
electron temperature $T_{0}=1.0$) an electron diamagnetic drift given
by:
\begin{align}
\label{eq:drift}
\omega_{*}(r) &= \bm{k} \cdot \bm{v}_{*e} = - \frac{m d_{i} B_{h}}{r\rho B^{2}} \frac{\partial \rho}{\partial r}
\end{align}
where we have assumed the perturbation wave vector is $\bm{k} = m/r \hat
{\theta} - n/R \hat{z}$, commensurate with our helical symmetry.
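The shape of $\omega_{*}(r)$ implied by Eqns.~\ref{eq2:cyl_pressure_profile} and \ref{eq:drift} can be sketched as follows. The uniform $B_{h}\approx B\approx 10$ and the profile parameters ($r_{0}=0.5$, $\delta_{N}=0.05$, $N_{b}=0.9$) are illustrative assumptions, not the self-consistent equilibrium values.

```python
import numpy as np

# Density profile of Eq. (cyl_pressure_profile): an ITB-like gradient at r0
def density(r, r0=0.5, dN=0.05, Nb=0.9, N0=1.0):
    num = np.tanh(r0 / dN) + np.tanh((r - r0) / dN)
    den = np.tanh(r0 / dN) + np.tanh((1.0 - r0) / dN)
    return N0 * (1.0 - (1.0 - Nb) * num / den)

# Electron diamagnetic drift of Eq. (drift), taking B_h ~ B ~ 10 = const
# (an illustrative simplification; the equilibrium field is not uniform)
def omega_star(r, m=2, d_i=0.1, Bh=10.0, B=10.0, **kw):
    rho = density(r, **kw)
    drho = np.gradient(rho, r)       # finite-difference density gradient
    return -m * d_i * Bh / (r * rho * B ** 2) * drho

r = np.linspace(0.05, 0.95, 901)
ws = omega_star(r)
# The drift is positive (for a density that falls outward), localized
# within ~delta_N of r0, and vanishes far from the ITB.
```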
We choose three classes of density profiles, shown in Figure~\ref
{fig:drift_profiles}, which center the maximum gradient at different locations
$r_{s1} \leq r_{0} \leq r_{s2}$.
\emph{Equal drift} profiles have the peak gradient centered between the two
$q=2$ resonant surfaces at $r_ {0}= (r_ {s1} + r_ {s2})/2$ and a broad width of
$\delta_{N}=0.2$, producing equal $\omega_ {*}$ at both singular layers.
\emph{Inner drift} profiles have a narrow pressure profile of $\delta_{N}=0.05$
centered at $r_ {0}=r_ {s1}$, providing a strong drift at the inner rational
surface and negligible $\omega_{*}$ at the outer. Finally, \emph{outer drift}
profiles are localized near $r_ {0}=r_ {s2}$ with $\delta_{N}=0.05$ so that the
inner rational surface experiences negligible $\omega_{*}$. Equal drift profiles
demonstrate the stabilizing effects of local diamagnetic drift
on both surfaces. The inner and outer drift profiles produce a differential
diamagnetic drift $\Delta \omega_{*}=|\omega_{*}(r_{s1}) - \omega_{*}(r_{s2})|$,
resulting in an additional differential rotation effect and asymmetric local
stabilization of the two surfaces. The impact of diamagnetic drifts in more
realistic ITB-like profiles will likely be some intermediate form of these three
prototypical equilibrium types.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{{drifts}.pdf}
\caption{\label{fig:drift_profiles}Examples of the three pressure profile
types (solid lines) and the electron diamagnetic drifts they produce (dashed
lines) from left to right: equal drift; inner drift; and outer drift. Vertical
dash-dot lines indicate the locations of the $q=2$ rational surfaces. Each
profile produces a drift of $\omega_{*}=0.8$ at one or both of the
rational surfaces.}
\end{figure*}
\section{Linear drifts}
\label{sec:linear}
The basic consequences of introducing diamagnetic drifts to DTM evolution
are clearly observable during the linear phase. \texttt{MRC-3d} includes a one
dimensional, linearized form of the model given
in Section~\ref{sec:model}, which we will use for this portion of the study.
We treat derivatives in $r$ using finite volume discretization and apply the
Fourier ansatz $F(r,\theta,z)=f(r)\exp{(m\theta/r - nz/R)}$ to
derivatives in $\theta$ and $z$, where the poloidal and
toroidal mode numbers $m=2$ and $n=1$ are chosen to capture the lowest (and
fastest growing) harmonic. From initial value simulations we fit
the growth rate $\gamma_{R}$ of the mode from the time evolution of
the magnetic and momentum field amplitudes. To extract the mode drift
frequency $\gamma_{I}$ we apply a discrete Fourier
transformation (DFT) to the time series output of the helical flux function
$\psi^{*}$.
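The fitting procedure can be illustrated on a synthetic eigenmode signal $\psi^{*}(t)\propto e^{(\gamma_{R}+i\gamma_{I})t}$. This sketch mimics, but is not, the actual diagnostic pipeline; the chosen growth rate and drift frequency are arbitrary test values.

```python
import numpy as np

def fit_mode(psi, t):
    """gamma_R from the log-amplitude slope; gamma_I from the DFT peak."""
    gamma_r = np.polyfit(t, np.log(np.abs(psi)), 1)[0]
    freq = np.fft.fftfreq(t.size, d=t[1] - t[0])
    gamma_i = 2.0 * np.pi * abs(freq[np.argmax(np.abs(np.fft.fft(psi)))])
    return gamma_r, gamma_i

# Synthetic growing, drifting mode with gamma_R = 0.05, gamma_I = 0.3
t = np.linspace(0.0, 200.0, 4096)
psi = 1e-8 * np.exp((0.05 + 0.3j) * t)
gr, gi = fit_mode(psi, t)
```

The drift frequency is recovered to within one DFT bin ($2\pi/T$), so long time series are needed to resolve slow mode rotation.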
The reduction to a one dimensional linear model allows us to easily
conduct scaling studies of DTM behavior in the three equilibria types given
above. For each profile type, we run simulations over a range of
diamagnetic drift values $0\leq \omega_{*} \leq 0.16$. For each simulation we
specify the center of the gradient $r_ {0}$ and the desired drifts at the inner
($\omega_{*}(r_{s1})$) and outer ($\omega_{*}(r_{s2})$) $q=2$ rational surfaces,
then iteratively refine the density height $N_{b}$ and the magnetic fields to
produce the desired profile. We then seed a small $m=2$, $n=1$ perturbation onto
this equilibrium and run the initial value simulation using a resistivity
$\eta=\num{1e-5}$. To enhance numerical stability we add small
amounts of viscosity, particle diffusivity, and temperature diffusivity,
$\nu=D=D_{T}=10^{-1}\times\eta$. These extra dissipation coefficients smooth noise
in the linear simulations and allow easier analysis; we have confirmed that they
have a negligible impact on the measured linear growth rates.
\subsection{Ideal MHD instability}
Before examining the impact of the diamagnetic drift we first set $d_{i}=0$ and
consider the addition of a pressure gradient in resistive MHD. In Figure~\ref{fig:linear_res}
we have plotted the dependence of the linear growth rate $\gamma_{R}$
on the maximum of the pressure gradient $\partial_{r}P|_{r=r_{0}}$. Although the
diamagnetic drift is not present in these resistive simulations, we will
continue referring to the three types of profiles as
equal~($r_{0}=(r_{s1}+r_{s2})/2$), inner~($r_{0}=r_{s1}$), and outer~($r_{0}=r_{s2}$)
`drift' configurations.
\begin{figure}
\includegraphics[width=\columnwidth]{linear_res.pdf}
\caption{\label{fig:linear_res}Variation in linear $m=2$, $n=1$ DTM growth rates
$\gamma_ {R}$ with pressure gradient in resistive MHD. For better
comparison with Figure~\ref{fig:linear_hall} we represent the pressure
gradients using $\omega_{res}$, which is the electron diamagnetic drift each
profile \emph{would} produce at the specified resonant surfaces were $d_
{i}=0.1$ rather than $0$. The three
classes of profile described in Section~\ref{sec:equilibrium} are
represented as: triangles--$r_ {0}= (r_{s1}+r_{s2})/2$~(equal drift);
squares--$r_{0}=r_{s1}$~(inner drift); circles--$r_{0}=r_{s2}$~(outer drift).}
\end{figure}
Increasing the pressure gradient increases the growth rate for all three classes
of profile, although most dramatically for the `equal drift' case. This
dependence of the growth rate on pressure gradient suggests the $m=2$, $n=1$ DTM
couples to an ideal MHD instability, similar to the interaction between the
$m=1$, $n=1$ kink and tearing modes.\cite{Ara1978a} We verify the presence of
this ideal instability by running a scaling study of growth rate with
resistivity in three sample equilibria. Figure~\ref{fig:ideal_eta}
shows that in the presence of a pressure gradient there is a minimum growth
rate below which $\gamma$ no longer varies with resistivity. This minimum value
increases with increasing pressure gradient, and is not observed in the
force-free equilibrium. While we have not completed the analysis of
this ideal mode, Pritchett et al.~\cite{Pritchett1980} showed in Cartesian
geometry that the DTM tearing layers couple
to a slab-kink mode, the stability of which determines the dependence of the
growth rate on resistivity. We propose that in cylindrical geometry the addition
of a pressure gradient may cause this kink mode to become unstable, thus further
driving the DTM growth.
\begin{figure}
\includegraphics[width=\columnwidth]{ideal_eta_scale.pdf}
\caption{\label{fig:ideal_eta}Dependence of the $m=2$, $n=1$ DTM on
resistivity $\eta$ in the presence of a pressure gradient centered at $r_ {0}=
(r_{s1}+r_ {s2})/2$~(equal drift profile) for different values of the peak
pressure gradient. Triangles $\partial_{r}P|_{r=r_{0}}=0$; squares $\partial_
{r}P|_ {r=r_{0}}=-1.12$; circles $\partial_{r}P|_{r=r_{0}}=-1.79$.}
\end{figure}
We note that Zhao et al.\cite{Zhao2011} found a
similar increase in the DTM growth rate in the presence of a nontrivial pressure
profile. In their simulations, however, the pressure gradient increased the
dependence of the $m=2$, $n=1$ DTM on resistivity~($\gamma \propto \eta^{5/6}$).
The inter-resonance distance of $D=0.26$ examined here is much larger
than the $D=0.06$ mode considered by Zhao, suggesting that the impact of
the pressure gradient may depend on the spacing between the rational surfaces.
\subsection{Diamagnetic drift effects}
Having established via resistive
simulations that the pressure gradient introduces an ideal MHD instability, we
now introduce finite Larmor radius effects and consider how the diamagnetic
drift impacts the cylindrical DTM. We fix the ion inertial length at
$d_{i}=0.1$ in Eqn.~\ref{eq1:ohmslaw},
which results in an ion-sound Larmor radius of
$\rho_{s} = \sqrt{\beta}d_{i}\approx 0.014$. This large ion
scale is required to provide sufficient scale separation given our use of a
large resistivity to enhance numerical
stability. With $d_{i}$ fixed, increasing the maximum pressure gradient
increases the diamagnetic drift frequency. In Figure~\ref{fig:linear_hall} we
show the effect of increasing diamagnetic drift at the inner~($r_{0}=r_
{s1}$), outer~($r_{0}=r_{s2}$), or both $q=2$ rational surfaces.
\begin{figure}
\includegraphics[width=\columnwidth]{linear_hall.pdf}
\caption{\label{fig:linear_hall}Variation in growth rate $\gamma_{R}$ and
mode drift frequency $\gamma_{I}$ with
a diamagnetic drift of $\omega_{*}$ at: both $q=2$ rational surfaces~
(triangles, equal drift); the $r_{s1}$ surface~(squares, inner drift); the
$r_{s2}$ surface~(circles, outer drift). Simulations are conducted in Hall MHD,
$d_ {i}=0.1$, $\rho_{s}=0.014$, with pressure gradient as described in
Sec.~\ref{sec:equilibrium}.}
\end{figure}
Both the equal drift and inner drift profiles are dominated by the
ideal MHD behavior observed in resistive MHD simulations. For
$\omega_{*}\lesssim0.05$, equal drift at
both $q=2$ rational surfaces counterbalances the enhanced driving energy and
the growth rate remains almost constant with increasing pressure gradient. The
diamagnetic drift is, however, unable to overcome the ideal mode at large pressure
gradients and $\gamma_{R}$ again tracks the resistive simulations. We do not
observe any region of constant
growth rate for the inner drift equilibria, and the DTM behavior is dominated by
the ideal MHD driving for all pressure gradients.
Localized $\omega_{*}$ at the outer resonant surface has a much stronger
stabilizing effect on the DTM. At small drifts ($\omega_{*}\lesssim
0.025$) the growth rate decreases slowly with increasing
$\omega_{*}$. An inflection point is evident in the scaling near $\omega_
{*}=0.025$, after which the growth rate decreases more rapidly and nearly
linearly. The eigenmode at a drift of $\omega_{*}=0.02$ ($r_{0}=r_
{s2}$, $N_{b}=0.949$, $\delta_{N}=0.05$) shows a shearing of the perturbed
helical flux $\psi^{*}$ between the inner and outer $q=2$ surfaces that is
absent in resistive simulations of the same equilibrium~(Figure~\ref
{fig:linear_sheared_modes}). Similar shearing of the eigenmode has been observed
as a consequence of equilibrium sheared flow.\cite{Mao2013,Wang2011a}
Differential diamagnetic drifts above the critical value of $\Delta \omega_
{*}=\omega_{*}(r_{s2})=0.025$ cause decoupling of the reconnecting layers, i.e.
the system acts
predominantly as two independent, drifting, single tearing modes rather than a
single double-tearing mode. Comparing two different times of an $\omega_
{*}=0.1$ outer-drift simulation in Figure~\ref{fig:linear_decoupled_modes}
shows independent movement of the structure around the inner and outer rational
surfaces. This decoupling is, again, similar to that observed in resistive MHD
sheared flow studies.\cite{Mao2013} The continued decrease of the growth rate
above $\omega_{*} \approx 0.025$ is not present in sheared flow equilibria; it is
instead due to stabilizing effects of the diamagnetic drift local to the
singular layer.\cite{Ara1978a,Rogers1995} Thus the outer drift equilibrium
manifests both the decoupling properties of equilibrium sheared flow and the
reconnection inhibiting benefits of the diamagnetic drift.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{linear_sheared_modes}.pdf}
\caption{\label{fig:linear_sheared_modes}Linear eigenmodes of the helical flux
function $\psi^{*}$ in the presence of pressure gradient centered at $r_{0}=r_
{s2}$ with $N_{b}=0.949$. In resistive MHD ($d_{i}=0$, left) there is no
diamagnetic drift; in Hall MHD ($d_{i}=0.1$, right) the outer surface
experiences a drift of $\omega_{*}=0.02$ while the inner surface does not,
resulting in a shearing of the eigenmode.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{linear_decoupled_modes}.pdf}
\caption{\label{fig:linear_decoupled_modes}Two simulation times for a linear
$m=2$, $n=1$ DTM with
pressure profile centered at $r_{0}=r_{s2}$ producing a localized drift of
$\omega_{*}=0.1$. The perturbed helical flux $\psi^{*}$ at the later time
(right) is not a simple rotation of the earlier time (left). Instead the inner
and outer rational surfaces evolve independently, overlapping between the
surfaces. The two tearing layers are decoupled from each
other by the differential diamagnetic drift.}
\end{figure}
The scaling behavior in Figure~\ref{fig:linear_hall} shows
significant differences depending on where we apply the diamagnetic drift.
These differences are due to the inherent asymmetry between the two rational
surfaces of the cylindrical DTM. Consider the
growth rate $\gamma_{R}$ and eigenmode drift frequency $\gamma_{I}$ for the
stabilized outer drift~($r_{0}=r_{s2}$) and weakly destabilized inner
drift~($r_{0}=r_{s1}$) profiles. Both feature a strong diamagnetic drift
localized at one of the rational surfaces, producing a differential
$\Delta \omega_ {*}$ that results in decoupling. Increasing $\omega_{*}$
at $r_{0}=r_{s1}$ does not, however, produce any measurable eigenmode drift~
($\gamma_{I}$).
Even in the outer drift ($r_{0}=r_{s2}$) case, where the
perturbation does rotate, only one drifting eigenmode can be found via
Fourier analysis. This behavior is in contrast to slab-Cartesian sheared flow
studies where two oppositely drifting eigenmodes are observed
post-decoupling.\cite{Mao2013}
The magnetic shear is much greater at the outer rational surface than the inner
(see Fig.~\ref{fig:q_P_ex}), and as a consequence the driving energy local to
the $r_{s2}$ tearing layer is much larger. When the two layers are coupled they
grow as a single mode, but the equilibrium asymmetry causes the
eigenfunction to be biased toward the outer rational surface~(Figure~\ref
{fig:linear_sheared_modes}).
This surface largely dominates the DTM growth. When decoupled
the single layer at $r_{s2}$ is the fastest growing mode; the slower
mode at the inner rational surface cannot easily be detected in our initial
value simulations.
A diamagnetic drift localized near $r_{0}=r_{s1}$ stabilizes and
decouples the weaker inner rational surface. The outer,
dominant surface does not experience any drift, and thus the inner drift
pressure profile does not result in a measurable eigenmode drift~($\gamma_{I}$).
Nor does the dominant, outer surface encounter
any of the stabilizing effects of the diamagnetic drift. As a consequence,
$\gamma_{R}$ is largely controlled by the destabilization of the ideal MHD mode.
When the drift is instead localized near $r_{0}=r_{s2}$, the outer surface
does rotate and we measure a finite $\gamma_{I}$. The dominant
surface now experiences the stabilizing diamagnetic effects and the growth
rate decreases. For $\omega_{*} > 0.1$ the growth rate again
slowly increases with increasing pressure gradient, and the measured mode drift
frequency $\gamma_{I}$ suddenly drops to a much lower value. In
Figure~\ref{fig:linear_decoupled_modes}
the eigenfunction near the inner~($r_{s1}$) singular layer is clearly visible.
The local diamagnetic drift at $r_{s2}$ has thus stabilized
the outer layer sufficiently that it now has a slower growth rate than the
inner, unstabilized tearing mode. Increasing the pressure gradient beyond this
value will not further decrease the growth rate, as now the ideal MHD
driving of the inner surface has become dominant.
Our linear simulation results highlight two qualities of this $m=2$, $n=1$
double-tearing mode which limit the stabilizing properties of the diamagnetic
drift. Firstly, the DTM is strongly driven by the interaction between the $q=2$
rational surfaces. Unless a profile provides some differential
rotation effect to decouple the two tearing layers, the diamagnetic drift is not
sufficient to overcome the ideal MHD driving of the increased pressure gradient.
Secondly, the asymmetric magnetic shear inherent to cylindrical geometry causes
one of the $q=2$ rational surfaces to be dominant. For the eigenmode growth rate
to decrease, this fastest growing surface must experience the stabilizing
diamagnetic drift.
\section{Nonlinear Diamagnetic Drifts}
\label{sec:nonlinear}
Based on the linear results of the previous section, we choose the drift of
$\omega_{*}=0.1$ for nonlinear simulations. When localized near the outer
rational surface this diamagnetic drift resulted in the lowest observed growth
rate. To better understand the nonlinear evolution of the DTM we will compare
the $\omega_{*}=0.1$ outer drift profile to both the force-free DTM and equal
and inner drift profiles with the same $\omega_{*}$.
The nonlinear evolution of double-tearing modes is commonly classified by the
growth of the kinetic and perturbed magnetic energies:
\begin{align}
E_{k} &= \int \frac{1}{2} \rho \mathbf{U} \cdot \mathbf{U} d\mathrm{V} \\
E_{m} &= -\int \frac{1}{2} \left ( \mathbf{\delta B} \cdot \mathbf{\delta B} +
2 \mathbf{\delta B} \cdot \mathbf{B}_{0} \right ) d\mathrm{V}
\end{align}
The introduction of diamagnetic drifts in this work will cause the perturbations
to rotate, thus the kinetic energy of all the drifting systems will typically be
larger than the force-free case. As a consequence, the absolute magnitude of
$E_{k}$ is not as good a representation of DTM stability as $E_{m}$. The general
features of the kinetic energy will, however, provide an indicator of the stage
of DTM evolution.
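These diagnostics can be sketched as discrete sums over the $(r,\theta)$ grid. The trapezoidal quadrature below is illustrative (not the \texttt{MRC-3d} diagnostic) and is checked against an analytic rigid-rotation case.

```python
import numpy as np

def volume_integral(f, r, theta):
    """Integrate f(r, theta) over the cross-section, dV = r dr dtheta,
    using trapezoidal weights on a uniform grid."""
    wr = np.full(r.size, r[1] - r[0]); wr[[0, -1]] *= 0.5
    wt = np.full(theta.size, theta[1] - theta[0]); wt[[0, -1]] *= 0.5
    return np.sum(f * (r * wr)[:, None] * wt[None, :])

def kinetic_energy(rho, ur, ut, r, theta):
    return volume_integral(0.5 * rho * (ur ** 2 + ut ** 2), r, theta)

def magnetic_energy_released(dBr, dBu, B0u, r, theta):
    """E_m = -int (1/2)(|dB|^2 + 2 dB . B0) dV: the magnetic energy
    released relative to the equilibrium field."""
    return -volume_integral(0.5 * (dBr ** 2 + dBu ** 2) + dBu * B0u, r, theta)

# Analytic check: rho = 1 with rigid rotation u_theta = r gives
# E_k = int 0.5 r^2 * r dr dtheta over the unit disk = pi/4
r = np.linspace(0.0, 1.0, 2001)
theta = np.linspace(0.0, 2 * np.pi, 513)
R, TH = np.meshgrid(r, theta, indexing="ij")
ek = kinetic_energy(np.ones_like(R), np.zeros_like(R), R, r, theta)
```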
The kinetic and magnetic energy growth of the force-free, $m=2$, $n=1$ baseline
is shown in Figure~\ref{fig:ek_em}.
The long period of nearly exponential energy growth
represents the development of finite sized magnetic islands such as those
in Figure~\ref{fig:force_free_psi}.
In our simulations these magnetic islands do not develop the magnetic structure
necessary for the explosive growth phase observed in higher mode number
DTMs;\cite{Ishii2000,Janvier2011} the kinetic energy of the force-free DTM in
Figure~\ref{fig:ek_em} approaches a maximum value at simulation time
$t\approx265$ smoothly. This maximum
$E_{k}$ corresponds to the separatrix at the inner rational surface merging
with that of the outer, as shown in Figure~\ref{fig:force_free_psi}. At this time
the flux between the rational surfaces has been consumed by the reconnecting
layers and the annular current ring is disrupted. Continued
evolution beyond this point results in the magnetic islands reconnecting
completely and relaxation of the system.
A particular feature of this moderately spaced, low
mode number DTM is that the flux surrounding the magnetic axis is consumed by
reconnection at the $r_{s1}$ rational surface, causing the inner current
sheets to merge across $r=0$. Other simulations of off-axis sawtooth activity in
TFTR\cite{Chang1996} have shown similar behavior; however, it is not typically
observed in higher mode number or more closely spaced DTMs.\cite{Ishii2000,Bierwage2005}
In this work we treat the separatrix merging event as a complete loss of system
stability, and thus will not consider whether such highly symmetric behavior
is relevant to realistic devices.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{ek_em}.pdf}
\caption{\label{fig:ek_em}Nonlinear kinetic and energy growth of the DTM in the
presence of various pressure profiles. The force-free profile has no
equilibrium diamagnetic drift. The remaining profiles have drifts of $\omega_
{*}=0.1$: $r_{0}=(r_{s1}+r_{s2})/2$ - equal drift at both $q=2$ surfaces;
$r_{0}=r_{s1}$ - localized at the inner surface; $r_{0}=r_{s2}$ - localized at
the outer surface.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{force_free_psi}.pdf}
\caption{\label{fig:force_free_psi} Contours of the helical flux function
$\psi^{*}$ show the island growth regime~(left) and separatrix merging
event~(right) of the nonlinear, force-free DTM.}
\end{figure}
To evaluate the impact of electron diamagnetic drifts nonlinearly we consider
examples from each of the three classes of pressure profile (equal, inner,
and outer drift) that have an equilibrium drift of $\omega_{*}=0.1$ at
both rational surfaces, just the inner, or just the outer.
To decrease simulation time we seed our 2D helically
symmetric simulations with an $m=2$, $n=1$ perturbation approximately
$\num{1e-4}$ times the equilibrium field, and evolve the system using an
implicit, second order Crank-Nicolson method. For these simulations we use a
grid of 2048 cells in $r$ and 512 in $\theta$, with a nonuniform distribution in
the radial coordinate that increases resolution at and within the $r_{s2}$
surface while decreasing it towards the $r=1$ conducting wall boundary. To aid
numerical stability we choose a resistivity of $\eta=\num{2e-5}$ and set all
other dissipation coefficients to $\num{1e-5}$. We have successfully
convergence-tested the following results in both spatial and temporal resolution, and
also verified that the extra dissipation coefficients do not significantly
impact the mode behavior.
\subsection{Inner drift}
During the linear phase, locating a pressure gradient at the inner rational
surface produced a marginal increase in the growth rate but otherwise did not
significantly impact the mode evolution. Nonlinearly we find that
the dominance of the $r_{s2}$ rational surface persists and a strong
pressure gradient~(and thus drift) at $r_{s1}$ results in slightly faster
growth of the magnetic perturbation when compared to the force-free system~
(Figure~\ref{fig:ek_em}). We find that the island at the
outer rational surface quickly grows large enough to interlock the layer at the
inner surface, as has previously been observed in nonlinear differential
rotation simulations.\cite{Wang2011} Once recoupled, the separatrix merging event
proceeds with only minor deviation from the force-free system.
Thus locating a drift at the inner, sub-dominant rational surface results in
more system kinetic energy (due to plasma flows near the inner
surface) but is not an effective means of slowing DTM growth.
\subsection{Outer drift}
Locating a strong diamagnetic drift at the outer resonant surface strongly
stabilizes the nonlinear DTM. The early time growth of $E_{m}$ in Figure~\ref
{fig:ek_em} shows large oscillations, indicating that the two tearing surfaces
are initially decoupled. At later times these fluctuations persist,
but with an amplitude that is small compared to the total perturbed
magnetic energy.
Considering the state of the system
at the last simulation time ($t=750$, Figure~\ref{fig:outer_final}),
a significant amount of flux remains between the two $q=2$ surfaces and the
annular current ring is
intact. The plasma flow is largely circulating inside the outer
islands rather than between the two surfaces. The flattening of
the energy growth, together with the relaxed magnetic structure in Figure~\ref
{fig:outer_final}, shows that the fundamental $m=2$, $n=1$ double-tearing mode
is effectively saturated. Intermittent reconnection activity causes late-time
fluctuations of $E_{k}$ and $E_{m}$ as the system oscillates, but does
not significantly disturb the saturated state. At these late times the
source electric field causes slow growth of the perturbed
magnetic energy, as it drives the system back toward
equilibrium. By comparing to simulations without the source field we
have determined that it has the effect of `pumping' the simulation with energy
rather than allowing it to fully relax.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{outer_final}.pdf}
\caption{\label{fig:outer_final} Applying a diamagnetic drift of $\omega_
{*}=0.1$ at the outer rational surface ($r_{0}=r_{s2}$) saturates the DTM, as
shown by contours of the helical flux $\psi^{*}$~(left) and perpendicular
flow~(right) at the final simulation time of $t=750$.}
\end{figure}
\subsection{Equal drift}
The nonlinear development of the equal drift profile, as viewed through the
energy evolution, is significantly more complicated than the previous case.
$E_{k}$ growth slows significantly near $t\approx250$ (Figure~\ref {fig:ek_em}),
then rises at a reduced, fluctuating rate. In Figure~\ref{fig:equal_cf} (at
this crest in $E_{k}$) a significant amount of flux remains between the two
tearing surfaces. This roll-over does not, therefore, correspond to a separatrix
merging event. Even at the end of our simulation ($t=390$), when the inner
current sheets approach each other across the magnetic axis, a small amount of
flux remains between the two separatrices.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{equal_cf}.pdf}
\caption{\label{fig:equal_cf} Simulation times $t=250$~(kinetic energy
roll-over) and $t=390$~(last simulation time) for an equal drift $\omega_
{*}=0.1$ DTM. Nonlinear enhancement of the pressure gradient severely inhibits
reconnection so that flux remains between the two rational surfaces, but cannot
halt the structural instability.}
\end{figure}
The decrease in $E_{k}$ and $E_{m}$ growth is a result of
nonlinear evolution of the pressure profile. Figure~\ref{fig:equal_gradp} shows
cuts of $\partial_{r}p/\rho$ (the pressure and density
contribution to Eqn.~\ref{eq:drift}) across the inner and outer current sheets
at time $t=250$ (the roll-over point) compared to the equilibrium profile. The
nonlinear growth of the magnetic islands distorts the equilibrium pressure
gradient and results in a significant enhancement to the diamagnetic drift
within the tearing layers. As a consequence, reconnection is highly suppressed.
Similar nonlinear enhancement of the pressure gradient has been proposed as a
saturation mechanism for the $m=1$ kink-tearing mode.\cite{Rogers1995,Beidler2011} In
our DTM simulation, however, this nonlinear enhancement of $\omega_{*}$ does
not lead to complete stabilization. The magnetic islands continue to evolve, and
intermittent reconnection occurs. This continued growth of the kinetic and
magnetic energies~(Fig.~\ref{fig:ek_em}), and deformation of the
separatrices~(Fig.~\ref{fig:equal_cf}) is a consequence of the large
island sizes required to enhance the pressure gradient sufficiently to
cut off reconnection. The magnetic structure has
become unstable, similar to previous simulations of explosive-type
double-tearing modes,\cite{Ishii2002,Janvier2011} and thus the instability
continues to develop instead of saturating. In this respect the
equal drift profile is less desirable than the outer drift, which stabilized the
mode before significant deformation.
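The mechanism behind this drift enhancement can be illustrated with a minimal numerical sketch. We assume the diamagnetic frequency scales as $(m/r)\,\partial_{r}p/(\rho B)$ up to a model-dependent normalization (the actual drift equation, Eqn.~\ref{eq:drift}, is not reproduced here), so that a localized pressure step produces a sharply localized $\omega_{*}$ profile; the profile shapes and function names below are hypothetical.

```python
import numpy as np

def omega_star(r, p, rho, B, m=2, c_norm=1.0):
    """Diamagnetic frequency profile ~ c_norm * (m/r) * (dp/dr)/(rho*B).

    c_norm stands in for the normalization of the actual model
    equations, which are not reproduced here.
    """
    return c_norm * (m / r) * np.gradient(p, r) / (rho * B)

# A hypothetical pressure step at an outer surface r0 = 0.7:
r = np.linspace(0.05, 1.0, 512)
p = 0.5 * (1.0 - np.tanh((r - 0.7) / 0.05))
w_star = omega_star(r, p, rho=np.ones_like(r), B=np.ones_like(r))
# The drift is concentrated at the step, so any nonlinear steepening
# of dp/dr inside a tearing layer amplifies omega_* there directly.
```

Because $\omega_{*}$ tracks the local $\partial_{r}p/\rho$, the island-driven steepening seen in Figure~\ref{fig:equal_gradp} feeds straight back into the drift stabilization.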
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{{equal_gradp_cuts}.pdf}
\caption{\label{fig:equal_gradp} Cuts of $\partial_{r} p/\rho$ across the
inner and outer current sheets at $t=250$ show significant enhancement of
the pressure gradient compared to the equilibrium.}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this work we have shown that electron diamagnetic drifts can have a
stabilizing effect on the $m=2$, $n=1$ double-tearing mode. Their efficacy
depends, however, on where they are located. Linearly we were only able to
achieve a substantial decrease in the DTM growth rate by localizing a strong
pressure gradient at the outer rational surface. This class of profile combined
the decoupling properties of differential plasma rotation with the local
stabilizing
benefits of the diamagnetic drift on the dominant tearing layer. The equal and
inner drift profiles contained only one of these stabilizing effects, and thus
could not overcome the increased ideal MHD driving caused by the pressure
gradient.
Nonlinearly we found the decoupling and local stabilization effects of the outer
drift profile resulted in saturation of the DTM at finite amplitude. The
preservation of the annular current ring in this simulation indicates that
diamagnetic drifts may act as a mechanism for stabilizing off-axis sawtooth
crashes. We also found that nonlinear enhancement of the pressure gradient
in an equal drift profile was able to significantly slow the growth of the
instability. The large islands necessary to halt reconnection, however, resulted
in unstable magnetic structure.
Our work shows that when finite Larmor radius effects are included
the presence of DTM activity depends strongly on the details of the plasma
pressure. The equilibria considered in this work are highly constrained, and
more realistic profiles will likely produce behavior intermediate between our
cases. Whether such profiles are a viable mechanism for avoiding DTM
driven off-axis sawtooth behavior in RMS devices is currently unclear.
Extending these results to high mode number DTMs and toroidal geometry would
allow better comparison to experimental data, as would consideration of
non-constant temperatures and hot ions. We have provided strong evidence,
however, for the dependence of DTM activity on the location of steep
pressure profiles in the plasma.
\begin{acknowledgments}
This work was supported by the U.S.~Department of Energy, Office of Science,
Office of Fusion Energy Sciences under Award Number DESC0006670. It contains
research from a dissertation submitted to the Graduate School at
the University of New Hampshire as part of the requirements for completion of
Stephen Abbott's doctoral degree in Physics.
Calculations were performed using: Trillian, a Cray
XE6m-200 supercomputer at UNH supported by the NSF MRI program under grant
PHY-1229408; and Fishercat, an IBM Blade Center H supported by the NSF CRI
program under grant CNS-0855145.
\end{acknowledgments}
\section{Introduction}
Flux monitoring of quasars has provided evidence that the dark matter
might be composed of objects of planetary mass, ${\rm M\la 10^{-3}
\,M_\odot}$. This evidence comes from two types of observations:
optical variability that may be due to distant gravitational microlenses
(Irwin et al. 1989; Hawkins 1993; Schild 1996); and radio monitoring data
(Fiedler et al. 1987), which show ``Extreme Scattering Events'' (ESEs)
that can be sensibly interpreted as plasma lensing by cool clouds in the
Galactic halo (Walker \& Wardle 1998). However, these data can be interpreted
in other ways (Wambsganss, Paczy\'nski \& Schneider 1990; Baganoff \&
Malkan 1995; Schmidt \& Wambsganss 1998; Romani, Blandford and Cordes 1987),
and the idea that dark matter takes the form of planetary mass lumps
is currently just an interesting suggestion. What is needed
now is to move away from suggestive evidence, which has served its
purpose in drawing attention to the proposed picture, and towards some
decisive observational tests. Such tests have previously been contemplated
(Press \& Gunn 1973; Gott 1981; Canizares 1982; Vietri \& Ostriker 1983;
Paczy\'nski 1986), with firm negative results for some mass ranges
(Press \& Gunn 1973; Dalcanton et al. 1994; Carr 1994; Alcock et al. 1998).
However, all of these tests admit the possibility of a
substantial quantity of dark matter residing in planetary-mass
gas-clouds. In view of the quasar monitoring data, observations designed
specifically to investigate this particular mass range would be worthwhile.
In this paper we demonstrate that
clean experimental tests are, in fact, quite straightforward to arrange
in respect of quasar microlensing. The most important consideration
is selection of the sample of sources to be monitored; as described
in \S2 and \S3, these should be apparently close to low redshift galaxies.
In \S4 we discuss the potential of such data for distinguishing
between low-mass gas clouds and more compact microlenses.
\section{Microlensing at low optical depth}
The large-scale distribution of dark matter within a galaxy can
be approximated by an isothermal sphere, for which
the surface density as a function of radius, $r$, is (Gott 1981)
$$
\Sigma(r)={{\sigma^2}\over{2Gr}},\hfill\stepeq
$$
where $\sigma$ is the line-of-sight velocity dispersion, and we
have neglected the possibility of a core in the density distribution.
Suppose now that this surface density is {\it entirely\/} in compact lumps
of material (the meaning of ``compact'' will be addressed in \S3),
then it follows that the optical depth to gravitational microlensing
by these lumps is just
$$
\tau={{\Sigma(r)}\over{\Sigma_c}}=2\pi{{\sigma^2}\over{c^2}}
{1\over\chi},\hfill\stepeq
$$
where $\Sigma_c\simeq{{c^2}/{4\pi GD_d}}$
is the critical surface density for multiple imaging, and
$\chi\equiv r/D_d$, for a galaxy at distance $D_d$. Here we
have assumed that we are observing a distant quasar behind a
low-redshift galaxy. If, for simplicity, we suppose that
all large galaxies can be approximated as having $\sigma\simeq150\;
{\rm km\,s^{-1}}$, then we arrive at the convenient formulation
$$
\tau\sim{1\over{3\chi}},\hfill\stepeq
$$
where $\chi$ is now expressed in arcseconds. From this we can
see immediately that any quasar which lies within, say, an arcminute
of a bright galaxy stands a modest chance of being microlensed, providing
only that the dark matter is in compact form.
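Equation (3) is easy to verify numerically. The sketch below assumes $\sigma=150\;{\rm km\,s^{-1}}$, as in the text, and simply converts the impact angle from arcseconds to radians in equation (2).

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)   # radians per arcsecond

def tau_isothermal(chi_arcsec, sigma_kms=150.0, c_kms=2.998e5):
    """Equation (2): microlensing optical depth of a singular isothermal
    halo at impact angle chi, assuming all of its surface density is in
    compact lenses: tau = 2*pi*(sigma/c)^2 / chi, with chi in radians."""
    return 2.0 * math.pi * (sigma_kms / c_kms) ** 2 / (chi_arcsec * ARCSEC)

# tau_isothermal(1.0) ~ 0.32, reproducing the 1/(3*chi) rule of
# equation (3); even at one arcminute the depth is still ~5e-3.
```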
A corollary of the above is that any quasar which is aligned within
about an arcsecond of the centre of an intervening galaxy has near unit
probability of being microlensed at any given time. So is such an
alignment the most favourable place to investigate microlensing? No.
This situation is favourable in only two respects: first there will
very likely be macrolensing -- i.e. multiple imaging of the quasar on the
scale of arcseconds -- which, by photometric monitoring of the individual
macro-images, permits microlensing to be distinguished from variations
which are intrinsic to the quasar. Secondly the large optical
depth means that there will be essentially continuous microlensing
variations. Unfortunately these benefits are offset by two substantial
disadvantages: first the central arcsecond of any large galaxy is composed
predominantly of stars, not dark matter; and secondly the network of
caustics which occurs at high optical depths (see, for example,
Schneider, Ehlers \& Falco 1992) leads to complex light-curves which are
difficult to interpret. (Note that it is not easy even to measure,
accurately, the brightness of the individual macro-images, as they are
blended with the core of the lensing galaxy, and with each other, in
ground-based observations). Indeed these problems are evident in the literature
on the gravitationally lensed quasar 2237+0305, which is seen through
the central bulge of a low-redshift galaxy. Rapid variations in the flux of
one of the macro-images were initially interpreted (Irwin et al 1989)
in terms of microlensing by a low-mass object, but this interpretation
was later challenged (Wambsganss, Paczy\'nski \& Schneider 1990)
and the data re-interpreted in terms of microlensing by stars. Subsequently
it has been emphasised that the light-curves for this system {\it do\/} admit a
population of low-mass lenses (Refsdal \& Stabell 1993), leaving the
whole question quite open.
The difficulty of interpreting light-curves which arise from a dense
network of caustics argues for a shift in emphasis towards monitoring
of systems where the optical depth is small.
In this regime we may observe individual microlensing events, superimposed
on a more-or-less steady baseline (distinguishing microlensing from other
forms of variability is addressed in \S5). This is a great advantage in that
the observed event time-scales can then be related more-or-less directly
to mass scales. The lower event-rate associated with a small optical
depth is the principal disadvantage of this regime, but this can be offset
by studying a larger number of targets in order to accumulate good
statistics.
\beginfigure{1}
\psfig{figure=figure1.ps,width=8cm}
\caption{{\bf Figure 1.} Detectability of gravitational microlensing
by compact masses ($m\equiv M/{\rm M_\odot}$) as a function of lens
redshift, $z$. In order for a microlens to introduce significant magnification,
it must be more massive than indicated by the ``source-size limit''.
The adopted source size is $10^{15}$~cm, located at $z_Q=2$, and
an empty ($\Omega=0$) universe is assumed.
Approximate time-scales for microlensing events are shown by the
dashed lines (assuming a transverse speed of $600\;{\rm km\,s^{-1}}$).
The upper bound of Dalcanton et al. (1994) is also plotted: this corresponds
to the largest permitted mass, if galaxy halos contribute $\Omega_g\sim0.1$,
are entirely composed of microlenses, and $\Omega<0.6$.}
\endfigure
\section{Microlensing at low redshift}
So far we have given no reason to prefer galaxies in any particular
redshift range. In constructing a sample of quasars seen through
galaxy halos we would find that most of the cases involve
distant galaxies, simply because there is a greater surface density
of distant galaxies than nearby ones. Unfortunately these examples
are less useful for investigating microlensing by low-mass objects;
the reason is that at large distances the angular size of the (Einstein
ring of the) lens becomes smaller than the angular size of the quasar, and
so the apparent changes in quasar flux
are small. If this happens we lose not only signal-to-noise ratio
but also our ability to predict light-curves, because our current understanding
of the emission from quasars is so poor that the point-source approximation
is the only one in which we can have confidence. (Of course, one can use
lensing phenomena to investigate source structure, but that is not
our concern here.) This is especially important because a resolved
source may exhibit substantial differences between the light-curves
seen at different frequencies, whereas microlensing events of a point-like
source by a point-like lens are achromatic and this feature aids the
interpretation of any observed variability.
Figure 1 illustrates the limits imposed by source size on the detectability
of microlenses, of various masses, as a function of lens redshift. (We note
here that the mass limits quoted by Walker \& Ireland [1995] are too pessimistic
-- they appear to have been derived from a comparison between the linear
dimensions, rather than the angular dimensions, of lens and source -- thanks
to Steve Warren for drawing attention to this.) To be definite we take a source
of radius $10^{15}\;{\rm cm}$ (c.f. Wambsganss, Paczy\'nski \& Schneider 1990)
at $z_Q=2$; we adopt a Hubble constant of $75\;{\rm km\,s^{-1}\,Mpc^{-1}}$, and
angular-diameter/redshift relations appropriate to an $\Omega=0$
universe. From figure 1 it is evident that for microlenses at $z_d\sim1$
there will be relatively little sensitivity to the mass range
$M\la10^{-3}{\rm M_\odot}$. This is just the mass range of interest and so
it is critical to select lines-of-sight which intersect {\it low-redshift\/} galaxies.
For example, at $z_d\sim10^{-2}$ we have sensitivity to microlenses of
$M\ga10^{-5}{\rm M_\odot}$, with the upper end of the mass range being fixed by
the duration of the monitoring experiment. We note that if galaxies contribute
$\Omega_g\sim0.1$ to the cosmological density parameter, $\Omega$, and $\Omega<0.6$,
then microlenses which are more massive than $10^{-2}{\rm M_\odot}$
cannot dominate their halos (Dalcanton et al. 1994) --- a result
which follows from analysis of the equivalent widths of quasar emission lines.
This bound is plotted in figure 1. Approximate
microlensing event time-scales are also plotted in figure 1, and from
these we see an added advantage of low redshift galaxies, namely that the
time-scales are well matched to an observing program. By contrast events
involving $10^{-3}{\rm M_\odot}$ microlenses take years at $z_d\sim1$.
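The source-size limit of figure 1 can be estimated with a short sketch. We use the $\Omega=0$ (Milne) angular-diameter distance and a deliberately crude lens-source distance, adequate only for $z_d\ll z_Q$; the function names and this simplification are our own, not the exact construction behind figure 1.

```python
import math

G = 6.674e-8            # gravitational constant (cgs)
C = 2.998e10            # speed of light (cm/s)
MPC = 3.086e24          # cm per Mpc
MSUN = 1.989e33         # g
H0 = 75.0 * 1e5 / MPC   # Hubble constant (s^-1)

def d_ang(z):
    """Angular-diameter distance (cm) in an empty (Omega = 0) universe."""
    return (C / H0) * z * (1.0 + z / 2.0) / (1.0 + z) ** 2

def m_min(z_d, r_source=1e15, z_s=2.0):
    """Smallest lens mass (Msun) whose Einstein radius exceeds the
    angular radius of a source of radius r_source (cm) at z_s, from
    theta_E^2 = 4*G*M*D_ds / (c^2 * D_d * D_s).  The lens-source
    distance is treated crudely, which is adequate for z_d << z_s."""
    d_d, d_s = d_ang(z_d), d_ang(z_s)
    d_ds = d_s - d_d                  # crude low-z_d approximation
    theta_s = r_source / d_s
    return theta_s ** 2 * C ** 2 * d_d * d_s / (4.0 * G * d_ds) / MSUN
```

At $z_d=0.01$ this gives a minimum mass of order $10^{-5}\,{\rm M_\odot}$, consistent with the source-size limit plotted in figure 1, and the limit rises steeply with lens redshift.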
The considerations we have given also apply to
the lowest redshift halo, namely the Galactic halo,
which has a very small optical depth, $\tau<10^{-6}$.
Paczy\'nski (1986) suggested that its compact constituents could be revealed
by their microlensing influence on the flux from LMC stars. Indeed microlensing
by stellar-mass objects has now been detected in this way (Alcock et al.
1997). Precisely because the observed lenses are stellar mass, however, there
remains a concern that these signals are due to the known Galactic/Magellanic
stellar populations (Sahu 1994), or to tidal debris (Zhao 1998), and are
unrepresentative of the Galactic halo as a whole. This notion is reinforced
by the Dalcanton et al. (1994) constraints, mentioned earlier.
Significantly, no signal has been seen from planetary-mass objects
(Alcock et al. 1998), which calls into question the microlensing interpretation
of quasar variability. One possible resolution of this apparent conflict is that
the low-mass microlenses suggested by Irwin et al. (1989), Hawkins (1993,1995)
and Schild (1996) are not actually dense enough to be strong microlenses in the
context of the Galactic halo, implying that their characteristic surface density
lies in the range
$$
0.1\quad \la \quad\Sigma({\rm g\,cm^{-2}}) \quad\la\quad 10^4.\hfill\stepeq
$$
This range includes the estimated mean surface density of the individual gas clouds
($\sim10^2\;{\rm g\;cm^{-2}}$) in the model of Walker \& Wardle (1998),
but excludes black holes and planets. It is worth noting that {\it all\/}
baryonic, Galactic dark matter candidates are required to have a characteristic
surface density $\Sigma\ga3\;{\rm g\,cm^{-2}}$, in order for them not to have
collided with each other within the age of the Universe (Gerhard \& Silk 1996).
This implies that all baryonic dark matter candidates associated with galaxies
must be strong gravitational lenses by $z_d\sim0.03$.
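The collision bound converts directly into the quoted redshift. With the low-redshift distance $D_d\simeq cz/H_0$, the critical surface density is $\Sigma_c\simeq cH_0/(4\pi G z)$, and setting $\Sigma_c=3\;{\rm g\,cm^{-2}}$ fixes the redshift at which such objects become supercritical; this is a back-of-envelope check, not a new calculation.

```python
import math

G = 6.674e-8            # gravitational constant (cgs)
C = 2.998e10            # speed of light (cm/s)
MPC = 3.086e24          # cm per Mpc
H0 = 75.0 * 1e5 / MPC   # Hubble constant (s^-1)

def sigma_crit(z_d):
    """Critical surface density (g/cm^2) for multiple imaging by a lens
    at low redshift, using D_d = c*z_d/H0 in Sigma_c = c^2/(4*pi*G*D_d)."""
    return C * H0 / (4.0 * math.pi * G * z_d)

# Redshift at which a Sigma = 3 g/cm^2 object becomes supercritical:
z_strong = C * H0 / (4.0 * math.pi * G * 3.0)   # ~0.03
```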
An important point has recently been made by Draine (1998): dense gas clouds
can act as strong lenses purely on account of the refractive index of the
gas itself. Draine further notes that the optical light curves for {\it gas\/}
microlensing events are very similar to those for {\it gravitational\/} microlensing
(see also Henriksen and Widrow 1995), raising the
startling possibility that some of the observed microlensing events
might actually be due to gas lensing! At present predictions
concerning gas lensing are limited primarily by our ignorance of the likely
run of density within the putative clouds. But given any specific density
distribution we can incorporate the refractive index of the gas in
our calculation of lensing behaviour; this is the approach we shall take.
\beginfigure{2}
\psfig{figure=figure2.ps,width=8cm}
\caption{{\bf Figure 2.} Model light-curves for lensing of a $z_Q=2$
quasar by a $3\times10^{-4}\,{\rm M_\odot}$ gas cloud, of $\Sigma_0=100\;{\rm
g\,cm^{-2}}$, at $z_d=0.002$ (dashed line) and 0.01 (solid line). These
correspond to $\kappa_0=2.3, 12$, respectively, while $\kappa_0^\prime$
(describing refraction by the gas itself) is, in the optical band, roughly 30\%
larger than $\kappa_0$ in each case.
The impact parameter for each event is taken to be 0.5 Einstein
ring radii, and time is given in units of the crossing-time for
one Einstein ring radius. The same source model is adopted as for
figure 1.}
\endfigure
\section{Microlensing by gas clouds}
In the previous sections we concentrated on the means by which
one can best test the picture that dark matter takes the form of planetary
mass lumps, with little regard for the specific nature of these lumps.
The observations which we advocate can, however, tell us
more than just the mass of any microlens: they also give us information
on the surface density distribution of the individual lenses. At a crude
level this is already obvious from our discussion in \S3.
On a more subtle level
there are diagnostic features present in the light-curves even when
the clouds are securely in the strong lensing regime (see also
Henriksen \& Widrow 1995).
To demonstrate this we take the example of a Gaussian surface
density profile for each microlens
$$
\Sigma(r)=\Sigma_0\,\exp(-r^2/2\sigma^2).\hfill\stepeq
$$
The corresponding
mass is then $M=2\pi\sigma^2\Sigma_0$. If we express all angles
in units of the Einstein ring radius for this mass then we arrive at
the lens equation which gives the image locations ($\theta$)
implicitly in terms of the source location ($\beta$):
$$
\beta = \theta\left[1-\kappa_0^\prime\exp(-\kappa_0\theta^2)\right] -
{1\over\theta}\left[1-\exp(-\kappa_0\theta^2)\right]. \hfill\stepeq
$$
Here we have written $\kappa_0\equiv\Sigma_0/\Sigma_c$ for the
central surface density of the cloud in units of the critical surface
density for multiple gravitational imaging; and similarly
$\kappa_0^\prime\equiv\Sigma_0/\Sigma_c^\prime$ where, following
Draine (1998), and making use of his quantity $\alpha$,
$\Sigma_c^\prime\simeq\sigma^2/\alpha D_d=M/2\pi\Sigma_0\alpha D_d$.
In the optical band, $\alpha\simeq1.2\;{\rm cm^3\,g^{-1}}$, not strongly
dependent on frequency (Draine 1998). In equation 6, then, the term in
$\kappa_0^\prime$ describes refraction by the gas.
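Equation 6 has no closed-form inversion in general, but image positions and magnifications are straightforward to obtain numerically. The sketch below brackets every image as a sign change of the mapping on a fine grid and evaluates $\mu=(\theta/\beta)\,\partial\theta/\partial\beta$ by finite differences; the grid-based root search and function names are our own illustrative choices, not the method used to produce figures 2-4.

```python
import numpy as np

def beta_of_theta(theta, k0, k0p):
    """Lens mapping of equation (6): source position beta for image
    position theta (both in Einstein-ring units)."""
    e = np.exp(-k0 * theta ** 2)
    return theta * (1.0 - k0p * e) - (1.0 - e) / theta

def point_source_magnification(beta_src, k0, k0p, n=200001, theta_max=5.0):
    """Total |magnification| of a point source: bracket each image as a
    sign change of beta(theta) - beta_src on a fine grid, then evaluate
    mu = (theta/beta) * dtheta/dbeta at each image."""
    theta = np.linspace(-theta_max, theta_max, n)
    theta = theta[np.abs(theta) > 1e-6]   # the mapping is 0/0 at theta = 0
    f = beta_of_theta(theta, k0, k0p) - beta_src
    idx = np.where(np.sign(f[:-1]) * np.sign(f[1:]) < 0)[0]
    mu = 0.0
    for i in idx:
        th = 0.5 * (theta[i] + theta[i + 1])
        dbeta_dtheta = (f[i + 1] - f[i]) / (theta[i + 1] - theta[i])
        mu += abs((th / beta_src) / dbeta_dtheta)
    return mu
```

As a sanity check, for a very compact cloud ($\kappa_0\gg1$, $\kappa_0^\prime=0$) the bright images sit where $\kappa_0\theta^2\gg1$ and the total magnification should approach the Schwarzschild-lens value $(u^2+2)/(u\sqrt{u^2+4})$.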
\beginfigure{3}
\psfig{figure=figure3.ps,width=8cm}
\caption{{\bf Figure 3.} As figure 2, but with refraction by the gas neglected
($\kappa_0^\prime=0$), so that lensing is entirely due to the gravitational field.}
\endfigure
There are two simple analytic limits of equation 6: for $\kappa_0\theta^2
\gg1$ we recover the Schwarzschild lens mapping $\beta\simeq\theta-1/\theta,$
appropriate to the point-mass lens approximation; while at the other
extreme ($\kappa_0\theta^2\ll1$) we have $\beta\simeq\theta(1-\kappa_0-\kappa_0^\prime)$
as, indeed, we might expect (see Schneider, Ehlers \& Falco, 1992,
for discussion of these cases). The individual image magnifications are
determined from $\mu=(\theta/\beta)\partial\theta/\partial\beta$,
and so the light-curves corresponding to these lenses will evidently
be very different. Of more interest, though, is the general
case for which we require the exact mapping
(equation 6). In figure 2 we show theoretical light-curves for clouds
of mass $3\times10^{-4}{\rm M_\odot}$, and central surface density
$\Sigma_0=100\;{\rm g\,cm^{-2}}$, at $z_d=0.002,0.01$. These curves
pertain to the optical band, where refraction by the gas itself contributes
substantially (Draine 1998) --- for our particular model $\kappa_0^\prime
\simeq4\kappa_0/3$.
It is reasonable to anticipate that target quasars could be monitored
with a standard error of 0.01~magnitudes, so the differences between
these two light-curves are easily measurable. The more distant of the two
examples is not quite distinguishable from a truly point-like lens.
An important qualitative feature of each curve, which is not present for
the Schwarzschild lens, is the existence
of a fold caustic at $\theta\simeq1/\sqrt{\kappa_0+\kappa_0^\prime}$.
This caustic introduces a thin annulus of high magnification which,
by virtue of its small angular extent, is expected to be chromatic even
if the principal peak in the light-curve is not; this caustic is evident
in figure 2 as the subsidiary peaks at $t\simeq\pm1.2$ for the lower redshift
lens. For the more distant lens the caustic crossing occurs at $t\simeq\pm3.7$,
but the high magnification region is so thin (in comparison with the
source dimension) that there is no peak in the light-curve at these locations.
For reference we show in figure 3 light-curves for our model clouds
at wavelengths where there is negligible refraction by the gas itself
(i.e. $\kappa_0^\prime\ll\kappa_0$). In figure 3 both light curves are
readily distinguished from microlensing by a point-mass lens. Relative
to figure 2, where refraction by the gas is non-negligible, the principal
difference is that the caustic ring shrinks in radius, because of the
smaller central ``beam convergence'' (sum of $\kappa_0$ and $\kappa_0^\prime$),
and becomes broader. For the more distant lens (solid line), the increased
width of the annulus of high magnification renders the caustic more
visible in figure 3 than figure 2. For the lower redshift lens, however,
the source only grazes the caustic, rather than crossing it, and this
leads to a single central peak in the light-curve.
\beginfigure{4}
\psfig{figure=figure4.ps,width=8cm}
\caption{{\bf Figure 4.} As figure 2, but with $\kappa_0^\prime=0$, and
adopting an opacity of $0.4\;{\rm cm^2\,g^{-1}}$ --- these values are appropriate
to lensing in the hard X-ray band, where Thomson scattering dominates the
opacity of the gas.}
\endfigure
A final point to make is that where light passes through a gas cloud some
absorption may occur. This is expected to be a small effect in the optical
band (else the putative clouds should have already been discovered in this way),
but the extinction is certainly large in the far UV and throughout the X-ray
region. X-ray light-curves are therefore expected to appear quite different
to their optical counterparts, in cases where the lens is not point-like.
To demonstrate this we have computed X-ray light-curves for our model
clouds. At energies of several keV to several hundred keV the extinction
is principally due to electron scattering, so across this broad range each
cloud presents an optical depth $\tau\simeq40\,\exp(-r^2/2\sigma^2)=
40\,\exp(-\kappa_0\theta^2)$. Image locations are as given by equation 6,
with $\kappa_0^\prime=0$ (refraction by the gas is negligible in the X-ray
band), and each image is attenuated by a factor $\exp(-\tau)$.
The resulting light-curves are shown in figure 4; these curves may be
compared directly with those of figure 2, which involve the same lensing
geometry and differ only in observing wavelength. Interestingly, while the
more distant lens has a light-curve which qualitatively still resembles
microlensing by a point mass, the nearby lens ($z_d=0.002$, dashed curve)
manifests an extinction event. Now both lenses have the same physical optical
depth profile, with a central optical depth of 40, so this difference is entirely
a consequence of the different lensing geometries. More specifically: during both
events magnified images of the source lie close to the Einstein ring of the
lens, i.e. at $\theta\sim1$ in each case; this corresponds to an
optical depth of $40\,\exp(-\kappa_0)$, which is $\simeq4\times10^{-4}$
for the more distant lens, but $\simeq4$ for the closer one, leading
to substantial attenuation of the lensed images in the latter case.
In other words, for sufficiently distant clouds the strongly magnified
images are located at large physical separations from the cloud,
and the lens can be regarded as effectively point-like.
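The geometric argument above reduces to a one-line estimate: with the adopted central optical depth of 40, the opacity seen by images near the Einstein ring is simply $40\,e^{-\kappa_0}$, which flips from transparent to opaque between the two lensing geometries. The snippet below just evaluates this, with $\kappa_0$ values representative of the two cases.

```python
import math

def cloud_optical_depth(theta, k0, tau0=40.0):
    """Electron-scattering optical depth through the Gaussian cloud at
    image position theta (Einstein units): tau = tau0 * exp(-k0*theta^2)."""
    return tau0 * math.exp(-k0 * theta ** 2)

# Images near the Einstein ring (theta ~ 1):
tau_near = cloud_optical_depth(1.0, 2.3)    # nearby lens: tau ~ 4, opaque
tau_far = cloud_optical_depth(1.0, 12.0)    # distant lens: tau << 1, transparent
```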
\section{Discussion}
The main barrier to the investigations we advocate is not so much the
actual photometric monitoring, which is routine, but the identification of
suitable targets. One approach which has previously been suggested
(Walker \& Ireland 1995; Tadros, Warren \& Hewett 1998) is
to monitor quasars lying behind rich clusters of galaxies at
low redshift, but this approach is really only feasible for cameras
which have exceptionally large fields of view.
An alternative is to construct a very large sample of quasars, and then
select out the small fraction which are viewed through halos
of foreground galaxies, by cross-correlating with a galaxy catalogue.
We can estimate the microlensing optical depth which is contributed
by galaxies within redshift $z_d$ ($z_d<1$) from $\tau(z_d)\sim
\Omega_g\, z_d^2$ (c.f. Press \& Gunn 1973), where $\Omega_g$ is the
average mass per unit volume in galaxies, expressed in units of the
critical density, and we have assumed that galaxies are
composed predominantly of microlenses. Taking $\Omega_g\sim0.1$
it follows that we need a sample of $N_Q\sim10^5$ quasars in
order to amass a combined optical depth in excess of unity from
galaxies within $z_d\simeq10^{-2}$; no such sample exists. However,
the dependence on redshift is quadratic, and within $z_d\simeq0.02$
we need only $N_Q\sim3\times10^4$ sources; so with the largest available
quasar survey (Boyle et al. 1998) we expect a combined optical
depth of $\tau(z_d=0.02)\sim1$. Of course the bulk of the
quasars in any survey make a trivial contribution to
this estimate, because they do not lie behind galaxy halos.
If, for example, we suppose that each galaxy halo extends to a
radius of 50~kpc, and we take the space density of galaxies to be
$4\times10^{-3}\;{\rm Mpc^{-3}}$, then only 80 quasars contribute.
Thus by selecting out the close angular coincidences between
quasars and galaxies at low-redshift, it becomes feasible to monitor
a sub-sample in which there is always a microlensing event
in progress. At $z_d\la0.02$ the influence of the quasar dimensions
on the observed light-curves should be small, provided the microlenses
have masses $M\ga3\times10^{-5}\,{\rm M_\odot}$ (see figure 1).
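The sample-size estimates above follow from simple scalings, sketched below; the halo radius (50 kpc), galaxy space density ($4\times10^{-3}\;{\rm Mpc^{-3}}$) and Euclidean path length $cz_d/H_0$ are the same illustrative values used in the text.

```python
import math

def n_quasars_for_unit_depth(z_d, omega_g=0.1):
    """Number of randomly placed quasars whose summed optical depth from
    galaxies within z_d reaches unity, using tau(z_d) ~ omega_g * z_d^2."""
    return 1.0 / (omega_g * z_d ** 2)

def n_behind_halos(n_q, z_d, halo_radius_mpc=0.05,
                   n_gal_mpc3=4e-3, hubble_dist_mpc=4000.0):
    """Expected number of the n_q quasars actually lying behind a galaxy
    halo within z_d (Euclidean path length c*z_d/H0)."""
    path_mpc = hubble_dist_mpc * z_d
    return n_q * n_gal_mpc3 * math.pi * halo_radius_mpc ** 2 * path_mpc
```

With these inputs, $10^5$ quasars are needed at $z_d=0.01$ but only $\sim\!2.5\times10^4$ at $z_d=0.02$, of which roughly 80 lie behind a halo, matching the numbers quoted above.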
It is worth setting out the criteria by which microlensing can
be distinguished from other causes of variability. Most importantly, for
the proposed experimental conditions we expect only weakly chromatic
light-curves for the optical continuum. If the lenses are sufficiently
point-like then similar light curves are expected for the X-ray band,
assuming that the X-ray source is comparable in size to the optical
source. (But non point-like lenses introduce attenuation which may
lead to X-ray extinction events --- compare figs. 4 and 2.) However,
one does not expect any associated changes in radio flux,
because the emission region in this case is too large to be significantly
affected. The same holds true for the optical emission lines, which
are believed to arise from a region of much larger dimensions than the
optical continuum. Note that for broad-band optical photometry this means
there will always be a non-varying component required when fitting
to theoretical light-curves. In this case one needs to subtract the steady,
emission-line flux in each band prior to testing for achromaticity. For a
single lens, and no strong external shear, one also expects the light-curves
to be time-symmetric. These criteria can be employed for individual events.
In addition, a strong test for microlensing becomes possible when a sample
of candidate events is available: correlation of measured optical depth with
the theoretical estimate. This correlation is expected to be good because the
connection between theory (e.g. \S2) and experiment is very close. This,
coupled with the fact that intrinsic variations should have absolutely nothing
to do with foreground objects -- i.e. zero correlation predicted for intrinsic
variations -- makes for a very powerful test indeed.
\section{Conclusions}
It is desirable to initiate photometric monitoring of quasars seen through
the outer halos of low-redshift galaxies. For this type of configuration,
planetary-mass lumps of dark matter introduce discrete microlensing
events which are unlikely to be confused with intrinsic outbursts. A
strong test of any putative microlensing is available in the ensemble properties
of the quasar sample: the measured optical depth should correlate well
with the theoretical value. An observed correlation of this type would eliminate
the possibility of events being intrinsic to the sources, while non-detection
could, for example, eliminate {\it all\/} Jovian-mass dark matter candidates
--- a conclusion which cannot be reached on the basis of LMC microlensing
observations. If microlensing is indeed detected, the observed light-curves
have the potential to differentiate between point-like lenses (black holes/planets)
and gas clouds.
\section*{Acknowledgments}
Thanks to Mark Wardle, Brian Boyle and Ken Freeman for their thoughts on
the various issues herein.
\section*{References}
\beginrefs
\bibitem Alcock C. et~al. 1997 ApJ 486, 697
\bibitem Alcock C. et~al. 1998 ApJL 499, L9
\bibitem Baganoff~F.~K. \& Malkan~M.~A. 1995 ApJL 444, L13
\bibitem Boyle B.~J., Smith~R.~J., Shanks~T., Croom~S.~M.,
Miller~L. 1998 proc. IAU Symp. 183, Cosmological parameters and the
evolution of the Universe
\bibitem Canizares~C.~R. 1982 ApJ 263, 508
\bibitem Carr~B. 1994 ARAA 32, 531
\bibitem Dalcanton J.~J., Canizares C.~R., Granados~A., Steidel~C.~C.,
Stocke~J.~T. 1994 ApJ 424, 550
\bibitem Draine B.~T. 1998 ApJL 509, L41
\bibitem Fiedler R.~L., Dennison~B., Johnston~K.~J., Hewish~A. 1987 Nat 326, 675
\bibitem Gerhard O. \& Silk J. 1996 ApJ 472, 34
\bibitem Gott~J.~R. 1981 ApJ 243, 140
\bibitem Hawkins M.~R.~S. 1993 Nat 366, 242
\bibitem Hawkins M.~R.~S. 1996 MNRAS 278, 787
\bibitem Henriksen~R.~N. \& Widrow~L.~M. 1995 ApJ 441, 70
\bibitem Irwin M.~J., Webster~R.~L., Hewett~P.~C., Corrigan~R.~T., Jedrzejewski~R.~I.
1989 AJ 98, 1989
\bibitem Paczy\'nski B. 1986 ApJ 304, 1
\bibitem Press W.~H., Gunn J.~E. 1973 ApJ 185, 397
\bibitem Refsdal S., Stabell R. 1993 A\&A 278, L5
\bibitem Romani R., Blandford~R.~D. \& Cordes~J.~M. 1987 Nat 328, 324
\bibitem Sahu K. 1994 Nat 370, 275
\bibitem Schild R.~E. 1996 ApJ 464, 125
\bibitem Schmidt R. \& Wambsganss J. 1998 A\&A 335, 379
\bibitem Schneider P., Ehlers J., Falco E.~E. 1992 Gravitational Lenses, Springer-Verlag, Berlin
\bibitem Tadros~H., Warren~S. \& Hewett~P. 1998 New Ast (astro-ph/9806176)
\bibitem Vietri~M., Ostriker~J.~P. 1983 ApJ 267, 488
\bibitem Walker M.~A., Ireland~P.~M. 1995 MNRAS 275, L41
\bibitem Walker M., Wardle M. 1998 ApJL 498, L125
\bibitem Wambsganss J., Paczy\'nski~B., Schneider~P. 1990 ApJL 358, L33
\bibitem Zhao H.~S. 1998 ApJL 500, L149
\endrefs
\bye
\section*{Abstract}
Crop phenology is crucial information for crop yield estimation and agricultural management. Traditionally, phenology has been observed from the ground; however, Earth observation, weather and soil data have also been used to capture the physiological growth of crops. In this work, we propose a new approach for within-season phenology estimation for cotton at the field level. For this, we exploit a variety of Earth observation vegetation indices (derived from Sentinel-2) and numerical simulations of atmospheric and soil parameters. Our method is unsupervised, to address the ever-present problem of sparse and scarce ground truth data that makes most supervised alternatives impractical in real-world scenarios. We applied fuzzy c-means clustering to identify the principal phenological stages of cotton and then used the cluster membership weights to further predict the transitional phases between adjacent stages. In order to evaluate our models, we collected 1,285 crop growth ground observations in Orchomenos, Greece. We introduced a new collection protocol, assigning up to two phenology labels that represent the primary and secondary growth stage in the field and thus indicate when stages are transitioning. Our model was tested against a baseline model that allowed us to isolate the random agreement and evaluate its true competence. The results showed that our model considerably outperforms the baseline, which is promising considering the unsupervised nature of the approach. The limitations and the relevant future work are thoroughly discussed. The ground observations are formatted as a ready-to-use dataset and will be available at \url{https://github.com/Agri-Hub/cotton-phenology-dataset} upon publication.
\section*{Introduction}
\label{section:intro}
Crop phenology is key information for crop yield estimation and agricultural management and thereby actionable knowledge for the farmer, the agricultural consultant, the insurance company and the policy maker. Crop phenology is the physiological development of the plant from sowing to harvest. The precise and timely knowledge of the growth status of crops is crucial for estimating the yield early in the season, but also for taking prompt action on controlling the growth to i) maximize the production and ii) reduce the farming costs \cite{gao2021mapping}.
Crops' water needs are a function of the phenological stage. Using the example of cotton, which is the crop of interest in this study, there is higher water usage between the flowering and early boll opening stages than in the emergence and late boll opening stages \cite{vellidis2016development}. Irrigation can be interrupted at the onset of boll opening to stop the continuous growth of cotton and allow the photosynthetic carbohydrates to start contributing to the development of bolls and not the development of leaves and flowers \cite{Cotman}. Therefore, we need crop phenology information in order to make irrigation recommendations towards fully utilizing the expensive and often scarce water, and at the same time reduce water stress and its potential adverse effects on the yield \cite{anderson2016relationships}. Irrigation is one of the many examples of how phenology can benefit agricultural practice management. Other examples include the precise application of plant growth regulators, pest management and harvesting \cite{gao2021mapping}. For instance, pix (mepiquat chloride), which is the most widely used cotton growth regulator, when applied at the early flowering stage can reduce the excessive cotton vegetation growth and therefore reduce the probability of diseases and also improve lint yield and quality \cite{reddy1996mepiquat}. In the same manner, cotton picking could be rushed prior to an anticipated extreme weather event (e.g., hail), if phenology estimations show a near-complete boll opening status.
For many years, phenology has been observed from the ground, through field visits and in-situ sensors. These approaches however are expensive, time-consuming and lack spatial variability. To this end, space-borne and aerial remote sensing Vegetation Index (VI) time-series have been used to systematically monitor crop phenology over large geographic regions; often termed land surface phenology \cite{gao2021mapping, duarte2018qphenometrics}. The freely available Sentinel-2 data offer optical imagery of high temporal and spatial resolution that introduced new opportunities for the large-scale and within-season monitoring of phenology \cite{jianwu2016emerging,sitokonstantinou2020sentinel}.
The recently published position papers \cite{gao2021mapping,lacueva2020multifactorial,potgieter2021evolution} identify the problem of remote phenology estimation as a fundamental one for the future of agriculture monitoring. In particular, the authors underline the importance and the expected impact of within-season estimations at high spatial resolutions. Many of the related studies focus on the prediction of a few principal phenological stages, failing to truly exploit the frequency of remote sensing data. This is mostly because the required ground observations for training and/or evaluation are usually infrequent and lack spatial variability. Such ground observations are usually obtained through networks of phenology stations or phenocams, which are always sparse and limited in number.
In the past two decades, there has been a number of related studies that focus on the estimation of vegetation phenology using both Earth Observation (EO) and weather data, under a wide variety of methodological frameworks. Initial approaches to the problem, many of which continue to develop to this day, offered after-season phenology estimations and were usually applied at large geographic scales using medium resolution imagery \cite{liang2011validating,xin2020evaluations, verbesselt2010phenological, dineshkumar2019phenological, chen2016simple}. The term after-season indicates that phenology is estimated after the crop is harvested and thus leverages the entire data time-series. This large-scale monitoring of the dynamics of phenology has been very popular in the scientific domains of ecology and climate change monitoring \cite{liang2011validating,xin2020evaluations, almeida2012remote, verbesselt2010phenological, tian2020development, bolton2020continental, dineshkumar2019phenological,chen2016simple, sitokonstantinou2020sentinel}. Nevertheless, detailed information that is offered within the cultivation period is very important from the perspective of the farmer. Using timely and high spatial resolution phenology predictions, farmers can protect their yield and maximize their profit. Towards this direction, there has been a number of studies that provide within-season phenology predictions at the field level \cite{lopez2013estimating, yang2017improved, nieto2021integrated,zheng2016detection,czernecki2018machine,vicente2013crop,de2016particle}. Phenology estimation can be found in literature both as a classification and as a regression problem. For the first, the phenological cycle is divided into stages or classes that last for a given period of time \cite{lopez2013estimating,yang2017improved, nieto2021integrated}. Usually the crop growth period is broken down into i) the sprouting, ii) the vegetative, iii) the budding, iv) the flowering and v) the ripening phases. 
On the other hand, phenology as a regression problem translates to predicting the day of the onset of these key phenological phases \cite{zheng2016detection,sakamoto2010two,czernecki2018machine,de2016particle}.
Each crop type has its own growth cycle and hence unique characteristics with respect to i) how phenology is affected by agro-climatic conditions and ii) how responsive are the space-borne or aerial EO data with respect to the various growth stages.
There are crop types for which phenology and yield are highly correlated with the vegetation canopy we see from EO platforms, e.g., tobacco. This is not the case for other crops, especially those that bear fruits or bolls, where phenology and vegetation canopy are not closely coupled. In this study, we focus on cotton (Gossypium hirsutum L.), which is a unique case with non-linear relationships between its growth and the VIs that we have in our capacity to monitor it \cite{toulios1998spectral, gutierrez2012association, jiang2018quantitative}. In the literature, one can find many publications related to the estimation of phenology for rice, barley, soybean and maize \cite{yang2017improved, lausch2015deriving, zeng2016hybrid, sakamoto2010two, nieto2021integrated}. However, cotton appears to be underrepresented. When searching for phenology estimation studies on cotton, one can find only a few publications that date back more than a decade \cite{palacios2012derivation, tsiros2009assessment}, and some more recent ones that deal with multiple crop types and do not focus explicitly on cotton \cite{sakamoto2018refined, vijaya2021algorithms}. There are also a handful of papers that evaluate the process-based model CSM-CROPGRO-Cotton, but with small-scale experiments (few fields) \cite{ur2019application, li2019simulation, mishra2021evaluation}. On the other hand, there are dozens of recent papers that focus on the large-scale prediction of phenology for other major crops \cite{misra2020status, zeng2020review}. Indicatively, there have been interesting recent studies on maize \cite{gao2020within, niu202230, huang2019optimal, zeng2016hybrid, diao2020remote}, rice \cite{luo2020chinacropphen1km, huang2019optimal, onojeghuo2018rice}, wheat \cite{nasrallah2019sentinel, huang2019optimal, mercier2020evaluation} and soybean \cite{gao2020within, zeng2016hybrid, diao2020remote}.
Phenology is affected by the temperature \cite{chuine2010does, cleland2007shifting}, the photoperiod and the effective solar radiation that enables photosynthesis \cite{flynn2018temperature, korner2010phenology}, the soil properties \cite{menzel2002phenology}, and many other agro-meteorological parameters \cite{piao2019plant}. Indeed, in cotton phenology literature we can find older studies that use exclusively meteorological data, such as soil and/or air temperatures \cite{tsiros2009assessment,reddy1993temperature, reddy1994modeling} and other more recent ones that combine them with optical images \cite{nieto2021integrated,czernecki2018machine,cai2019integrating}. Synthetic Aperture Radar (SAR) data, usually in combination with optical images, have been mostly used for estimating rice phenology \cite{yang2017improved}, but also other crop types \cite{meroni2021comparing}. The combination of Sentinel-2 and MODIS has been one of the most popular in the field. This is true because the Sentinel-2 missions offer data of high spatial resolution that enable information extraction at the field level, whereas MODIS data and their daily acquisitions, in contrast with the 5-days revisit period of Sentinel-2, allow for the generation of dense SITS \cite{sakamoto2010two,de2016particle,zheng2016detection,arun2021deep,vicente2013crop,zhang2003monitoring,zeng2016hybrid,vina2004monitoring}. Other data sources found in literature include Unmanned Aerial Vehicles (UAV) \cite{yang2020near,selvaraj2020machine} and in-field RGB sensors \cite{wang2021deepphenology}.
There are several published studies that employ supervised Machine Learning (ML) methods for land surface phenology. For instance, the authors in \cite{nieto2021integrated} have used Support Vector Machines (SVM) and Random Forests (RF) to integrate field, weather and satellite data for maize phenology monitoring. Furthermore, the authors in \cite{czernecki2018machine} and \cite{9553456} have used traditional ML regressors to model plant phenology based on both satellite EO and gridded meteorological data. There have also been a few Deep Learning (DL) based approaches. One example is \cite{arun2021deep}, where the authors explore the use of capsules, i.e., a group of neurons to address the issues of translation invariance prevalent in conventional Convolutional Neural Networks (CNN), to learn the characteristic features of the phenological curves. There is also a number of methods that do not use ML. A few examples of such methods include dynamic multi-temporal modeling and Kalman filtering in \cite{vicente2013crop}, particle filtering in \cite{de2016particle}, sigmoid modeling in \cite{zhang2003monitoring}, first-derivative
analysis in \cite{meroni2021comparing} and \cite{zheng2016detection}, and wavelet-based filtering and shape model fitting in \cite{sakamoto2005crop}.
In this work, we exploit EO data (Sentinel-2), together with numerical simulations of atmospheric and soil parameters (i.e., soil, surface and ambient temperature, accumulated precipitation, downward shortwave radiation and soil moisture), to address within-season phenology estimation for cotton at the field level. Moreover, since ground truth data are scarce and expensive to collect, we predict phenological stages using clustering, so that the approach remains truly useful in real-world scenarios. We go beyond the estimation of principal phenological stages and additionally identify the fuzzy transitions between stages as individual metaclasses (two ranked labels). We focus on cotton, which is a vital crop for the Greek economy and agricultural ecosystem and, furthermore, has been underrepresented in the phenology estimation literature. Finally, we developed and made publicly available a unique dataset of cotton growth ground observations, collected by an expert who performed hundreds of field visits in Orchomenos, Greece.
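To make the clustering idea concrete, the following minimal Python sketch computes fuzzy c-means membership weights for a single 1-D vegetation index value and derives primary/secondary labels from them. The cluster centres, the stage names and the transition threshold are illustrative assumptions for this sketch, not values from our pipeline.

```python
import math

def fcm_memberships(x, centers, m=2.0):
    """Fuzzy c-means membership of a 1-D sample x in each cluster centre."""
    d = [abs(x - c) for c in centers]
    if any(di == 0.0 for di in d):            # sample sits exactly on a centre
        return [1.0 if di == 0.0 else 0.0 for di in d]
    expo = 2.0 / (m - 1.0)
    return [1.0 / sum((d[i] / d[j]) ** expo for j in range(len(centers)))
            for i in range(len(centers))]

def label(u, stages, transition_gap=0.2):
    """Primary stage, plus a secondary one when the top memberships are close."""
    order = sorted(range(len(u)), key=lambda i: -u[i])
    primary, runner = order[0], order[1]
    secondary = stages[runner] if u[primary] - u[runner] < transition_gap else None
    return stages[primary], secondary

# Toy example: NDVI-like cluster centres for three stages (illustrative only)
stages = ["LD", "S", "F"]
centers = [0.35, 0.55, 0.75]
u = fcm_memberships(0.66, centers)
print(label(u, stages))        # ('F', 'S'): a Squaring-to-Flowering transition
```

Here the secondary label appears exactly when the two largest membership weights are close, which is how membership weights can flag a transitional phase between adjacent stages.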
\section*{Materials and methods}
\label{datasets}
\subsection*{Ethics Statement}
The field campaigns were conducted in cotton fields in Orchomenos, Greece, which are privately owned by the members of the agricultural cooperative of Orchomenos. A memorandum of understanding was signed with the cooperative that explicitly allowed to perform visits in selected fields. During the experiments, no other specific permission was required, as only observational activities were carried out and no endangered or protected species were involved.
\subsection*{Study Area and Field Campaigns}
\label{campaigns}
Greece has the fourth largest production of cotton per person (approx. 29 kg) and is the number one producer in the European Union (EU), with 304,000 tons per year \cite{fao}. Cotton is extensively cultivated and is very important for the national economy, with Greece being the fifth largest exporter in the world. Unfortunately, there is no organized effort to record practice calendars and phenological observations. In Greece, cotton needs between 150 to 200 days in order to complete its phenological cycle. The duration depends on the cotton variety and the agro-climatic conditions \cite{tsiros2009assessment}.
For this study, cotton growth consists of six principal phenological stages. These refer to higher level groupings of the cotton growth micro-stages defined in the official manual for damage assessment of the Greek Agricultural Insurance Organization (ELGA) \cite{elga}. These groupings have been made after consulting experts in cotton growth and the relevant literature\cite{oosterhuis1990growth}. Fig \ref{fig:phenolo} illustrates the phenology of cotton in Greece. The first stage is Root Establishment (RE), referring to the period from sowing to the development of three leaves. This stage lasts between 15 to 30 days, but this can be greatly affected by weather conditions and particularly low temperatures that can slow down the process \cite{tsiros2009assessment}. The second stage is Leaf Development (LD) and encompasses the period from the development of the fourth leaf to the appearance of the first squares. This usually takes between 35 to 45 days, but once again it is subject to weather conditions \cite{danalatos2007introduction}. The third growth stage is Squaring (S) that includes the period between the formation of the first squares to the appearance of the first flowers. This stage takes between 15 to 30 days. Then the first flowers open with the onset of the Flowering (F) stage that lasts for 20 to 40 days. Then follows the Boll Development (BD) stage that takes 25 to 45 days until the start of leaf discoloration and the onset of Boll Opening (BO) that lasts roughly 10 to 20 days until harvest.
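Summing these windows gives a rough bound on the season length. The short sketch below uses only the durations quoted above; the resulting extremes (120 to 210 days) bracket the 150 to 200 day cycle mentioned earlier, since in practice no field hits every extreme simultaneously.

```python
# Duration windows (in days) for each principal stage, as quoted in the text.
STAGE_WINDOWS = {
    "RE": (15, 30), "LD": (35, 45), "S": (15, 30),
    "F": (20, 40), "BD": (25, 45), "BO": (10, 20),
}

def season_length_range():
    """Shortest/longest season if every stage hit its extreme duration."""
    lo = sum(a for a, _ in STAGE_WINDOWS.values())
    hi = sum(b for _, b in STAGE_WINDOWS.values())
    return lo, hi

print(season_length_range())   # (120, 210)
```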
\begin{figure}[!ht]
\centering
\includegraphics[width=14cm]{images/Fig1.png}
\caption{{\bf The phenological cycle of cotton in Greece.} The principal phenological stages of cotton are Root Establishment (RE), Leaf Development (LD), Squaring (S), Flowering (F), Boll Development (BD) and Boll Opening (BO). The temporal overlaps between adjacent phenological stages are illustrated. Retrieved from \cite{oosterhuis1990growth} and modified.
}
\label{fig:phenolo}
\end{figure}
It should be noted that phenology is a dynamic variable and in principle can be described in great detail, going beyond these six principal crop growth stages. At any given instance, a cotton plant can be characterized by a combination of adjacent stages. For example, during the late flowering stage, a plant would have both flowers and cotton bolls, i.e., it would be transitioning to the BD stage. One of the most popular crop growth identification scales is the BBCH \cite{meier1997growth}. The BBCH scale makes use of a two-digit representation, with the first digit referring to the principal growth stage and the second digit describing the secondary growth stage, which corresponds to an ordinal number or a percentage value. The BBCH scale ranges from 00 to 99. However, collecting ground observations of this detail is a challenging task because i) a large number of samples cannot be observed at near-daily frequency and ii) it is difficult, even for experts, to assign precise growth stages, especially when this decision needs to be aggregated at the field level.
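The two-digit convention is straightforward to encode. The snippet below splits a BBCH code into its digits; the interpretation of code 65 follows the general BBCH scale, not a cotton-specific table.

```python
def split_bbch(code):
    """Split a two-digit BBCH code into (principal, secondary) stage digits."""
    if not 0 <= code <= 99:
        raise ValueError("BBCH codes range from 00 to 99")
    return divmod(code, 10)

# In the general BBCH scale, 65 = principal stage 6 (flowering) with
# secondary digit 5 (roughly half of the flowers open).
print(split_bbch(65))   # (6, 5)
```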
In order to collect ground truth data that would allow us to evaluate the models of this study, an agronomist, who is a cotton grower and seasoned field scouter, performed extensive and intensive field campaigns in Greece. The campaigns took place during the growing season of 2021, from root establishment to boll opening, which extends between late April and early October. The expert followed the instructions that are summarized below:
\begin{itemize}
\item At least 15 visits per field (approx. 3 per month) during the growing period, including at least one visit per phenological stage.
\item Ideally, visit the fields on the days that Sentinel-2 passes over. If this is logistically impossible, visit the fields at most one day before or after the Sentinel-2 pass.
\item If it is cloudy, check the next Sentinel-2 pass, consult weather forecasts and decide if the inspection could be delayed for a few days or should happen irrespective of the cloud coverage.
\item Walk through the field in a zig-zag pattern, as in typical scouting, and inspect the growth status and how it varies in space.
\item Decide on the phenological stage, choosing among the six principal stages that were defined earlier, which best describes the majority of the plants in the field. If the field is in a transitioning phase between two phenological stages, mention both and decide which is the prevailing one, i.e., the primary stage.
\item Decide on the percentage that is explained by the primary and the secondary stage.
\item Take a panoramic photo of the entire field. Take two close-up photos of plants. The first one should be representative of the majority of the plants in the field. The second one should be representative of a minority of plants in the field. The latter close-up photo should be captured only when the percentage of the minority class, in terms of area, is deemed significant (Fig \ref{fig:firstfigure}).
\end{itemize}
Fig \ref{fig:firstfigure} helps to further illuminate what is meant by the terms primary, secondary and percentage of prevalence. The close-up photo labelled ``majority'' shows a representative plant of this aggregate status of the field, also conveyed by the panoramic photo, which can be described with BD as primary and BO as secondary. On the other hand, the close-up photo labelled ``minority'' refers to a plant that is less common in the field and is more representative of the secondary stage. This becomes even more clear by inspecting the ``minority'' close-up photo for the 05/09 visit, which shows a plant that is well into the BO phase. The expert would identify the percentage of prevalence based on the area that the plants represented by the ``majority'' and ``minority'' close-up photos cover in the field, but also the average density of the two stages in each plant. In this example and for the 05/09 visit, BD was 100\% and BO was 80\% prevalent, which means that BD was found in every plant of the field while BO was found in 80\% of the plants. Respectively, for the 17/09 visit BO was 100\% and BD was 30\% prevalent.
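The labelling convention can be sketched as a simple rule. The helper below is hypothetical: it assumes the stage with the larger prevalence is primary, whereas the expert's judgement also weighs per-plant density, which is ignored here.

```python
def field_label(prevalence):
    """Primary/secondary stage labels from per-stage prevalence percentages.

    Sketch of the annotation convention described in the text: the stage
    with the larger prevalence is primary; any other observed stage is
    secondary. (The expert also weighs per-plant density, ignored here.)
    """
    ranked = sorted(prevalence.items(), key=lambda kv: -kv[1])
    primary = ranked[0][0]
    secondary = ranked[1][0] if len(ranked) > 1 else None
    return primary, secondary

print(field_label({"BD": 100, "BO": 80}))   # ('BD', 'BO'), the 05/09 visit
print(field_label({"BO": 100, "BD": 30}))   # ('BO', 'BD'), the 17/09 visit
```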
\begin{figure}[!ht]
\centering
\includegraphics[width=12cm]{images/Fig2.png}
\caption{{\bf Examples of field photos.}
Field photos that were captured in two consecutive visits of the same field. For each visit there is a panoramic photo of the field and two close-up photos of a plant that represent the majority and minority of the field, respectively. \textbf{(a)} Visit on 05/09, when the primary stage was BD and the secondary stage was BO. \textbf{(b)} Visit on 17/09, when the primary stage was BO and secondary was BD.}
\label{fig:firstfigure}
\end{figure}
During the growing season of 2021, our expert made 1,285 visits to 80 cotton fields in Orchomenos, an agrarian municipality in the Viotia district of central Greece. The fields that participated in the ground observation campaigns belong to the agricultural cooperative of Orchomenos, which has the highest selling price for cotton in Greece. It should be noted that among the 80 fields, 10 different cotton varieties were cultivated. This variability is important, as one can evaluate the performance of the phenology estimation models and draw conclusions on their generalization. The field visits were appropriately scheduled in order to have minimal differences between ground and satellite observations. In total, we acquired 67 different Sentinel-2 images, from mid-March until the end of October. The mean difference between the ground and the cloud-free Sentinel-2 observations was 0.86 days and the standard deviation was 0.89 days. Table \ref{tab:day_diff} depicts the distribution of the difference in days between the ground and the satellite observation pairs.
\begin{table}[!ht]
\centering
\caption{
{\bf Difference in days between ground and satellite observation pairs.}}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Difference in days} & \textbf{\#Cloud-free S2\textsuperscript{a} captures} & \textbf{Cum. Frequency(\%)} \\ \hline
\textbf{0} & 475 & 37 \\ \hline
\textbf{1} & 594 & 83 \\ \hline
\textbf{2} & 173 & 97 \\ \hline
\textbf{$\geq$3} & 43 & 100 \\ \hline
\textbf{Total} & 1285 & - \\ \hline
\end{tabular}
\begin{flushleft} \textsuperscript{a} Sentinel-2.
\end{flushleft}
\label{tab:day_diff}
\end{table}
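For reference, the binned statistics implied by the table can be computed directly. Representing the open-ended last bucket by 3 days makes the result a slight underestimate of the reported 0.86 and 0.89 day figures:

```python
import math

# Binned day differences from the table; the open-ended last bucket is
# represented by 3, so mean and std are slight underestimates of the
# reported 0.86 and 0.89 days.
bins = {0: 475, 1: 594, 2: 173, 3: 43}
n = sum(bins.values())
mean = sum(d * c for d, c in bins.items()) / n
std = math.sqrt(sum(c * (d - mean) ** 2 for d, c in bins.items()) / n)
print(n, round(mean, 2), round(std, 2))   # 1285 0.83 0.78
```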
Table \ref{table:observations} shows the number of primary and secondary ground observations for each principal phenological stage. The expert was instructed to assign a primary stage label to every visit, thus the number of primary observations equals the number of field visits. On the other hand, a secondary stage is not necessarily present, since it is observed only in a transitioning phase between two principal phenological stages (e.g., from flowering to boll development). Specifically, a secondary stage was observed in only 669 out of the 1,285 visits (52\%).
\begin{table}[!ht]
\centering
\caption{
{\bf Distribution of phenological stages.}}
\begin{tabular}{c|cc|}
\cline{2-3}
\multicolumn{1}{l|}{} & \multicolumn{2}{c|}{\textbf{Ground observations}} \\ \hline
\multicolumn{1}{|c|}{\textbf{Stage}} & \multicolumn{1}{c|}{\textbf{Primary stage}\textsuperscript{a}} & \textbf{Secondary stage}\textsuperscript{b} \\ \hline
\multicolumn{1}{|c|}{\textbf{RE}} & \multicolumn{1}{c|}{75} & 4 \\ \hline
\multicolumn{1}{|c|}{\textbf{LD}} & \multicolumn{1}{c|}{421} & 20 \\ \hline
\multicolumn{1}{|c|}{\textbf{S}} & \multicolumn{1}{c|}{212} & 5 \\ \hline
\multicolumn{1}{|c|}{\textbf{F}} & \multicolumn{1}{c|}{229} & 148 \\ \hline
\multicolumn{1}{|c|}{\textbf{BD}} & \multicolumn{1}{c|}{252} & 315 \\ \hline
\multicolumn{1}{|c|}{\textbf{BO}} & \multicolumn{1}{c|}{96} & 177 \\ \hline
\multicolumn{1}{|c|}{\textbf{Total}} & \multicolumn{1}{c|}{\textbf{1285}} & \textbf{669} \\ \hline
\end{tabular}
\begin{flushleft} \textsuperscript{a,b} The number of ground observations for each principal phenological stage of cotton that have been classified as \textsuperscript{a}primary and \textsuperscript{b}secondary stage labels.
\end{flushleft}
\label{table:observations}
\end{table}
Figs \ref{fig:thirdfigure} and \ref{fig:fourthfigure} show the Kernel Density Estimation (KDE) of the Days of Year (DoY) for which the expert observed the different principal phenological stages as primary and secondary, respectively. It becomes clear that there are many chronological overlaps among the stages, for both the primary and secondary annotations. Inspecting Fig \ref{fig:thirdfigure}, we see that the overlaps get progressively larger as we move towards the end of the growing cycle. This is expected as differences in growth accumulate with time and thus get more pronounced. The two figures also highlight how different the rate of growth can be even for fields that cultivate the same crop type, have similar sowing dates and are in close proximity. Finally, it should be noted that for the secondary stage observations, the KDEs appear to have two modes (Fig \ref{fig:fourthfigure}). The first and second mode refer to observations that were made prior to and after the onset of the ``primary'' phase of a phenological category. This is the reason why there are extensive overlaps among the KDEs for the secondary stage observations.
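A KDE like those in the figures can be formed with a plain Gaussian kernel. In the sketch below the DoY samples are illustrative placeholders, not our ground observations, and the bandwidth is an arbitrary choice.

```python
import math

def gaussian_kde(samples, bandwidth):
    """Return a 1-D Gaussian kernel density estimator over the samples."""
    n = len(samples)
    norm = n * bandwidth * math.sqrt(2.0 * math.pi)
    def pdf(x):
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                   for s in samples) / norm
    return pdf

# Illustrative DoYs on which, say, Flowering was logged as the primary stage
doys = [185, 190, 192, 195, 199, 203, 207, 211]
kde = gaussian_kde(doys, bandwidth=7.0)
peak_doy = max(range(150, 250), key=kde)
print(peak_doy)   # the density peaks near the middle of the observations
```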
\begin{figure}[!ht]
\centering
\includegraphics[width=12cm]{images/Fig3.png}
\caption{\bf Distribution of DoY for which the inspector observed the phenological stages as primary.}
\label{fig:thirdfigure}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width=12cm]{images/Fig4.png}
\caption{\bf Distribution of DoY for which the inspector observed the phenological stages as secondary.}
\label{fig:fourthfigure}
\end{figure}
The ground observation dataset is public (\url{https://github.com/Agri-Hub/cotton-phenology-dataset}), encouraging the community to use it for training, testing and evaluating the performance of cotton phenology estimation or yield estimation models. The dataset includes a) the geographic location and geometry of fields (EPSG:4326 - WGS 84), b) the days of inspection, c) the primary phenological stage and the percentage it describes, d) the secondary phenological stage and the percentage it describes, e) the sowing and harvest dates, f) a panoramic photo of the field, g) close-up photos of representative plants for the "majority" and "minority" phenological stages.
The quality of the dataset has been evaluated by another expert, Expert 2, who reviewed a randomly selected subset of 145 ground observations (11.28\%) using the available panoramic and close-up photos. Expert 2 was not aware of the ground observations and was asked to decide on the primary stage and secondary stage, if there was one. Then a third expert reviewed the disagreements between the decisions of Expert 1 and Expert 2 using once more the photos captured during the visits. Table \ref{table:intra-rater_metrics} shows a number of metrics for the interrater agreement.
The analysis from Expert 3 yielded the percentages of agreement and disagreement, N/A observations and undecided observations. The agreement score refers to the cases for which Expert 2 provided the same label as Expert 1. N/A observations are the observations that were considered unfair to include in the evaluation. In particular, we do not take into account 3 cases:
\begin{enumerate}
\item When Expert 1, i.e., the one that visited the field, provided only a primary stage observation and Expert 2 provided the same observation as a secondary stage. Such mismatch should be penalized only for the primary stage and it is considered N/A for the secondary stage.
\item When Expert 1 provided a primary stage observation with 100\% prevalence and secondary stage observation with prevalence less than or equal to 60\%, whereas Expert 2 gave the same primary stage observation but no secondary stage observation. Such cases should not be penalized for the secondary stage observation since the photos may not show it clearly.
\item When the primary observation of Expert 1 agrees with the secondary observation of Expert 2 and vice versa, and the prevalence percentage provided by Expert 1 is above 50\%. For example, Expert 1 observes F as primary with 100\% prevalence and stage BD as secondary with 60\% prevalence, whereas Expert 2 observes BD as primary and F as secondary. In fact, such cases can be very similar because both describe the transitional phase from one principal stage to the other. Based on that, and since Expert 2 judges according to a couple of photos, it is not fair to penalize these cases as wrong annotations.
\end{enumerate}
The undecided category refers to instances that Expert 2 claimed and Expert 3 confirmed that the photos were not good enough to make a fair assessment. Finally, the rest of the cases are considered disagreements.
\begin{table}[!ht]
\caption{\bf Interrater agreement metrics\textsuperscript{a}.}
\begin{tabular}{l|c|c}
\cline{2-3}
& \multicolumn{1}{l|}{\textbf{Primary stage}} & \multicolumn{1}{l|}{\textbf{Secondary stage}} \\ \hline
\multicolumn{1}{|l|}{\textbf{Agreement}} & 0.72 & \multicolumn{1}{c|}{0.60} \\ \hline
\multicolumn{1}{|l|}{\textbf{Disagreement}} & 0.10 & \multicolumn{1}{c|}{0.09} \\ \hline
\multicolumn{1}{|l|}{\textbf{N/A}} & 0.17 & \multicolumn{1}{c|}{0.20} \\ \hline
\multicolumn{1}{|l|}{\textbf{Undecided}} & 0.01 & \multicolumn{1}{c|}{0.11} \\ \hline
\multicolumn{1}{|l|}{\textbf{Krippendorff's alpha (ordinal)}} & 0.95 & \\ \cline{1-2}
\end{tabular}
\begin{flushleft}
\textsuperscript{a} For the primary and secondary growth stage annotations (Expert 1 v. Expert 2).
\end{flushleft}
\label{table:intra-rater_metrics}
\end{table}
Crop growth labeling is not straightforward, because one attempts to assign ordinal categories to what is actually a continuous variable. Thus, the results depend heavily on the choice of the category limits, i.e., the instance at which a principal phenological stage transitions to the next. These limits, although pre-defined (e.g., the onset of LD is the appearance of three fully formed leaves) and explained in detail to the various experts, are subject to different interpretations. The annotations collected through the field visits are ordinal, and thus we need an appropriate measure to assess their reliability. Krippendorff's alpha is a versatile interrater agreement metric that is applicable to any level of measurement, e.g., nominal, ordinal or interval \cite{krippendorff2011computing}. The Krippendorff's alpha between Expert 1 and Expert 2 for the ordinal level of measurement was 0.95. This indicates a strong agreement on the primary stage observations, which, combined with the analysis performed by Expert 3, renders the annotation method reliable. From this point on, we only use Expert 1's ground observations; the aforementioned analysis was performed merely to ensure the quality of the ground observations.
\subsection*{Predictor variables}
Table \ref{tab:variables} lists the predictor variable candidates with which we experimented in this study: i) the Sentinel-2 derived products and ii) the atmospheric and soil numerical simulations. In this section we focus on the acquisition and pre-processing of the various predictor variables, whereas in the next sections we elaborate on how these variables are incorporated in the feature space and fed to the phenology estimation models.
\begin{table}[!ht]
\begin{adjustwidth}{-2.25in}{}
\centering
\captionsetup{margin={0in, -2.25in}}
\caption{{\bf Summary of predictor variable candidates.}}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Variable} & \textbf{Formula}\textsuperscript{a} & \textbf{Resolution}\textsuperscript{b} \\ \hline
Day of Year (DoY) & sine, cosine & - \\
Temperature at surface & min, max & 2 km \\
Growing Degree Days (GDD) 2m & (T\textsubscript{max}+T\textsubscript{min})/2 - T\textsubscript{base} & 2 km \\
Accumulated Precipitation & max & 2 km \\
Downwards Shortwave Radiation & max & 2 km \\
Soil temperature 0-10 cm depth & min, max & 2 km \\
Soil moisture 0-10 cm depth & min, max & 2 km \\
Normalized Difference Vegetation Index (NDVI) \cite{pettorelli2013normalized} & (B08-B04)/(B08+B04) & 10 m \\
Normalized Difference Water Index (NDWI) \cite{mcfeeters1996use} & (B03-B08)/(B08+B03) & 10 m \\
Normalized Difference Moisture Index (NDMI) \cite{gao1996ndwi} & (B08-B11)/(B08+B11) & 20 m \\
Plant Senescence Reflectance Index (PSRI) \cite{merzlyak1999non} & (B04-B02)/B06 & 20 m \\
Soil-Adjusted Vegetation Index (SAVI) \cite{huete1988soil} & ((B08 - B04)/(B08 + B04 + 0.428))*(1.0 + 0.428) & 10 m \\
Enhanced Vegetation Index (EVI) \cite{huete2002overview} & 2.5*(B08-B04)/((B08+6*B04-7.5*B02) + 1.0) & 10 m \\
Visible Atm. Resistant Indices Green (VARIgreen) \cite{gitelson2001non} & (B03-B04)/(B03+B04-B02) & 10 m \\
Green Atmospherically Resistant Index (GARI) \cite{gitelson1996use} & (B08-(B03-(B02-B04)))/(B08-(B03+(B02-B04))) & 10 m \\
Structure Insensitive Pigment Index (SIPI) \cite{penuelas1995semi} & (B08-B02)/(B08-B04) & 10 m \\
Wide Dynamic Range Vegetation Index (WDRVI) \cite{gitelson2004wide} & (0.2*B08-B04)/(0.2*B08+B04) & 10 m \\
Global Vegetation Moisture Index (GVMI) \cite{ceccato2002designing} & ((B08+0.1)-(B12 + 0.02))/((B08+0.1)+(B12+0.02)) & 20 m \\ \hline
\end{tabular}
\\
\begin{flushleft}
\textsuperscript{a} B is the spectral reflectance value of the band number of the Sentinel-2 image. \textsuperscript{b} The variables with resolution of 2km are our NWP, whereas those of 10 or 20m resolution are Sentinel-2 derived products.
\end{flushleft}
\label{tab:variables}
\end{adjustwidth}
\end{table}
\subsubsection*{Optical images}
The optical spectrum variables used in this study were derived from Sentinel-2 images. As mentioned earlier, optical SITS have been popular in related research studies, with special attention given to Sentinel-2, but also to MODIS data. The method of this work exploits only Sentinel-2 images in order to provide crop-specific phenology predictions at the field level. There are cases where the agricultural landscape is dominated by a single crop cultivation, e.g., the U.S. Corn Belt, and MODIS can be of tremendous help in crop-specific phenology predictions. In Greece, however, the landscape is fragmented, and the medium spatial resolution of MODIS would yield mixed optical signatures of multiple crop types.
The optical component of this study's variable space comprises the RGB, NIR and SWIR spectral bands of Sentinel-2, but also several VIs. VIs are combinations of the spectral bands that can highlight particular vegetation properties \cite{segarra2020remote}. Table \ref{tab:variables} lists the VIs that have been used in this work; they are among the most common in the relevant literature. As the crop grows, its spectral signature changes with time: it starts with bare soil, then stems and leaves appear, and later flowers and bolls. These phases have different biophysical and biochemical properties and thus different light reflectance profiles. Therefore, we investigate multiple VIs so as to capture the maximum possible information at every stage of growth. With regards to pre-processing, the Sentinel-2 images have been atmospherically corrected using the Sen2Cor software \cite{louis2016sentinel}. Additionally, clouds have been removed using the Sen2Cor scene classification product. Then, the null-valued pixels have been filled using linear interpolation on the SITS.
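As an illustration of how such indices are derived from the spectral bands, the sketch below computes a few of the formulas in Table \ref{tab:variables} with numpy. The function name \texttt{compute\_indices} is hypothetical (not part of the paper's pipeline), and the small epsilon guard against division by zero is our own addition.

```python
import numpy as np

def compute_indices(b02, b03, b04, b08):
    """Compute a few of the Table's VIs from Sentinel-2 reflectance arrays.

    Arguments are surface-reflectance arrays (0-1 range) for blue (B02),
    green (B03), red (B04) and NIR (B08). A small epsilon guards against
    division by zero over water or shadow pixels (an assumed safeguard).
    """
    eps = 1e-9
    ndvi = (b08 - b04) / (b08 + b04 + eps)
    ndwi = (b03 - b08) / (b03 + b08 + eps)
    savi = ((b08 - b04) / (b08 + b04 + 0.428 + eps)) * (1.0 + 0.428)
    evi = 2.5 * (b08 - b04) / (b08 + 6 * b04 - 7.5 * b02 + 1.0)
    return {"NDVI": ndvi, "NDWI": ndwi, "SAVI": savi, "EVI": evi}
```

In practice these formulas would be applied per pixel to the atmospherically corrected Sentinel-2 rasters before averaging over each field's boundary.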
\subsubsection*{Atmospheric and soil parameters}
A dense, long-term and efficient monitoring of the atmospheric state is rarely met in real-life conditions outside experimental campaigns. Automatic weather stations can contribute, but insufficient spatial coverage or poor distribution are typical problems, let alone temporal gaps and discontinuities due to sensor failures or human factors (poor maintenance). Given the absence of a dense in-situ network or weather radar scans over our area of study, we relied upon high-resolution (2 km) Numerical Weather Predictions (NWP) from our convection-permitting operational configuration of the WRF-ARW model \cite{skamarock2019description}. The model is initialized daily with the latest available analysis; after excluding the first few hours (spin-up time), during which the model reaches a statistical equilibrium, the following 24-hour estimates are utilized.
While this grid spacing may appear quite coarse compared to the resolution of the EO-derived products, we should keep in mind that it is an outcome of NWP simulations. We are able to provide estimates of atmospheric parameters every 2 km over regions that are heavily under-monitored (over croplands, an in-situ weather station may be available only every 100 km). This scale is considered high resolution in NWP terms; below this spatial threshold, convective processes start to be resolved explicitly, which is particularly important for capturing fine atmospheric processes on a local scale without having to rely on parameterization schemes. A 2 km forecast fulfils our needs, given that the crop regions we focus upon are not areas of high topographical complexity, so large gradients are not expected.
The specific parameters that were used in our pipeline were an outcome of consultation with agronomist experts and a systematic literature review of their correlation with the evolution of cotton \cite{piao2019plant}. They include Air Temperature, Surface (skin) Temperature, 0-10 cm Soil Temperature and Moisture, Precipitation and Incoming Shortwave Radiation. Growing Degree Days (GDD) is additionally computed, as it is one of the most essential indicators of phenology. Inspecting the GDD equation in Table \ref{tab:variables}, T\textsubscript{max} and T\textsubscript{min} are the maximum and minimum daily air temperatures at 2 m (from the surface) and T\textsubscript{base} is the crop's base temperature (15.6 °C). The latter is defined as the temperature below which cotton does not develop. The GDD variable is also known as thermal time and is an indicator of the effective growing days of the crop \cite{sharma2021use}.
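The GDD computation can be sketched as follows. Function names are illustrative, and the zero floor reflects the stated fact that cotton does not develop below T\textsubscript{base} (a common convention we assume here, since negative thermal time is not meaningful).

```python
import numpy as np

# Cotton's base temperature as stated in the text (15.6 deg C).
T_BASE = 15.6

def daily_gdd(t_max, t_min, t_base=T_BASE):
    """Daily thermal time: mean of the daily 2 m temperature extremes minus
    the base temperature, floored at zero (colder days add no growth)."""
    return np.maximum((np.asarray(t_max) + np.asarray(t_min)) / 2.0 - t_base, 0.0)

def accumulated_gdd(t_max_series, t_min_series):
    """AGDD: running sum of daily GDD from the accumulation start (DoY 100)."""
    return np.cumsum(daily_gdd(t_max_series, t_min_series))
```

For example, a day with extremes of 30 °C and 20 °C contributes 25 − 15.6 = 9.4 degree days.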
\subsection*{Fuzzy clustering}
We propose a fuzzy clustering method for the within-season estimation of cotton phenology. The workflow of the proposed approach is depicted in Fig \ref{fig:architecture}. We use clustering to circumvent the ever-present problem of sparse, scarce and difficult-to-acquire ground observations, which renders supervised alternatives of limited applicability in operational scenarios. Nevertheless, we visited tens of fields and collected hundreds of ground observations in order to test the performance of our models. The aim of our newly introduced ground observation collection protocol was to extract more information than the principal growth stages alone, as is usually the case in related works. This happens at the labelling level, by taking advantage of the primary and secondary stage ground observations, as described in detail earlier.
\begin{figure*}[!ht]
\begin{adjustwidth}{-1.75in}{0in}
\centering
\includegraphics[width=6.5in]{images/Fig5.png}
\captionsetup{margin={+0.25in, -2.25in}}
\caption{\bf The proposed methodology for cotton phenology estimation.}
\label{fig:architecture}
\end{adjustwidth}
\end{figure*}
\(\mathcal{L} = \{\lambda_1, \lambda_2,..., \lambda_k\}\) is the finite ordered scale of the principal cotton growth class labels, where \(\lambda_1\) is RE and \(\lambda_6\) is BO. The ground observation protocol allowed k = 6 phenological stages to choose from and a maximum of n = 2 labels to assign to each field, i.e., the primary and secondary stage categorization. This restricted the set of allowable labels \(L_r\) to all possible permutations of n = 2 elements of \(\mathcal{L}\) plus all unit sets. Specifically, there are \(k^2 = 36\) allowable labels. This multi-label problem can be reduced to a single-label one by considering each subset as a distinct metaclass \cite{Cheng2010GradedMC}. In reality, however, there are only 16 possible metaclasses, as the primary and secondary stage of a field can differ by only a single position on the ordinal scale. Eq \ref{eq1} lists this set of 16 metaclasses in ordered scale.
\begin{multline}
\begin{aligned}
\label{eq1}
L_r = \{ & \lambda_1, (\lambda_1, \lambda_2), (\lambda_2, \lambda_1), \lambda_2,(\lambda_2, \lambda_3), (\lambda_3, \lambda_2), \lambda_3, (\lambda_3,\lambda_4), (\lambda_4,\lambda_3), \lambda_4, \\ & (\lambda_4,\lambda_5), (\lambda_5,\lambda_4), \lambda_5, (\lambda_5,\lambda_6), (\lambda_6,\lambda_5), \lambda_6\}
\end{aligned}
\end{multline}
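The 16-element set of Eq \ref{eq1} can be generated programmatically. The sketch below is illustrative; \texttt{allowable\_metaclasses} is a hypothetical helper, with labels encoded as strings l1 (RE) through l6 (BO).

```python
def allowable_metaclasses(k=6):
    """Enumerate the restricted label set L_r of Eq (1): the k unit sets plus
    the ordered pairs of adjacent stages, interleaved along the ordinal scale."""
    labels = [f"l{i}" for i in range(1, k + 1)]  # l1 = RE, ..., l6 = BO
    metaclasses = []
    for i in range(k):
        metaclasses.append((labels[i],))                    # unit set, e.g. l1
        if i + 1 < k:
            metaclasses.append((labels[i], labels[i + 1]))  # (primary, secondary)
            metaclasses.append((labels[i + 1], labels[i]))
    return metaclasses
```

The enumeration yields 6 unit sets and 2 × 5 adjacent pairs, i.e., the 16 metaclasses of Eq \ref{eq1} in their ordinal order.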
In order to estimate the phenological metaclass for each field at any given instance, we used the Fuzzy C-Means (FCM) clustering algorithm. \(X \in R^{K \times E}\) denotes the element space that was used as input to the FCM, where K is equal to the number of fields (M) multiplied by the number of Sentinel-2 acquisitions (J), and E is equal to the number of features. Each row of the two-dimensional space in Eq \ref{eqe} represents field \(i\) at the time instance \(j\) (DoY of Sentinel-2 acquisition). For the Sentinel-2 variables, each element \(x_{(i,j),d}\) gets the mean value of variable \(d\) for all pixels of field \(i\) at the instance \(j\). In the same fashion, for the NWP variables, each element \(x_{(i,j),d}\) gets the value of the nearest grid cell to field \(i\) for the instance \(j\). To calculate the mean value of the pixels and grid cells that fall within each field we used the parcel boundaries retrieved from the Hellenic Cadastre (scale 1:5000).
\begin{equation}\label{eqe}
X_{(i,j),d} =
\begin{pmatrix}
x_{(1,1),1} & x_{(1,1),2} & \cdots & x_{(1,1),E} \\
x_{(1,2),1} & x_{(1,2),2} & \cdots & x_{(1,2),E} \\
\vdots & \vdots & \ddots & \vdots \\
x_{(1,J),1} & x_{(1,J),2} & \cdots & x_{(1,J),E} \\
\vdots & \vdots & \ddots & \vdots \\
x_{(M,J), 1} & x_{(M,J), 2} & \cdots & x_{(M,J), E}
\end{pmatrix}
\end{equation}
Since phenology is dependent on the relative temporal progression of variables, we use the accumulated NWP parameters and the cumulative integrals of the VIs. The starting point for the accumulation is set around the earliest sowing DoY for the fields in the area of interest. In our case, this starting point was the 10th of April (DoY 100). Therefore, the feature space includes the Sentinel-2 VIs and their cumulative integrals, the accumulated NWP parameters, and the cosine and sine of the Sentinel-2 acquisition DoY (Table \ref{tab:variables}).
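The time-accumulated features described above can be sketched as follows. We assume a trapezoidal rule for the cumulative integrals and a zero anchor at the first acquisition on or after the accumulation start; function names are hypothetical.

```python
import numpy as np

def cumulative_integral(values, doys):
    """Running trapezoidal integral of a VI time series, anchored at the
    first acquisition on/after the accumulation start (DoY 100 in the text)."""
    v, t = np.asarray(values, float), np.asarray(doys, float)
    increments = 0.5 * (v[1:] + v[:-1]) * np.diff(t)
    return np.concatenate([[0.0], np.cumsum(increments)])

def doy_features(doy):
    """Cyclic encoding of the acquisition DoY via sine and cosine."""
    angle = 2 * np.pi * np.asarray(doy, float) / 365.0
    return np.sin(angle), np.cos(angle)
```

The cyclic encoding ensures that DoY 365 and DoY 1 are close in feature space, while the running integral injects the relative temporal progression of each VI into every row of the element space.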
During the learning phase, the FCM algorithm attempts to partition the K elements \(X = \{\boldsymbol{x}_1, ..., \boldsymbol{x}_K\}\) that capture the entirety of the season into c = 6 clusters that are assumed to represent the six principal growth stages of cotton (after-season clustering in Fig \ref{fig:architecture}) \cite{bezdek1984fcm}. This is considered a valid assumption, given that each element is described by the EO and NWP variables, their time-accumulated variants, and the associated DoY. The assumption is also supported by the results in the next section. The algorithm returns a list of \(C = \{\boldsymbol{c}_1, \boldsymbol{c}_2, ..., \boldsymbol{c}_6\}\) cluster centers and a partition matrix \(W = (w_{k,l}) \in R^{K \times c}\), where \(w_{k,l}\) is the degree to which the element \(\boldsymbol{x}_{k}\) belongs to cluster \(\boldsymbol{c}_l\). We applied FCM on the 2020 variable space (training year) in Orchomenos and then used the clusters C to produce within-season predictions, in a dynamic fashion, during the 2021 season (test year).
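The FCM learning phase can be sketched with the standard update equations. The minimal numpy implementation below is illustrative only (the actual experiments may rely on an off-the-shelf FCM library); it alternates the weighted-center and membership updates for fuzzifier m.

```python
import numpy as np

def fuzzy_c_means(X, c=6, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centers C (c x E) and the
    partition matrix W (K x c) of membership grades."""
    rng = np.random.default_rng(seed)
    K = X.shape[0]
    W = rng.random((K, c))
    W /= W.sum(axis=1, keepdims=True)          # rows are fuzzy memberships
    C = None
    for _ in range(n_iter):
        Wm = W ** m
        C = (Wm.T @ X) / Wm.sum(axis=0)[:, None]            # weighted centers
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                               # guard zero distance
        # w_kl = 1 / sum_j (d_kl / d_kj)^(2/(m-1))
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        W_new = 1.0 / ratio.sum(axis=2)
        if np.max(np.abs(W_new - W)) < tol:
            W = W_new
            break
        W = W_new
    return C, W
```

Each row of W sums to one, so the membership grades can be read directly as the partition scores used later for label ranking.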
The training cotton fields of 2020 were 194 in total, and were extracted from a pre-trained crop classification model, based on \cite{sitokonstantinou2021scalable}.
After the clustering, the phenological stages are assigned to the different clusters by exploiting their temporal order. For this, the most common chronological order of the clusters is recorded and then matched to the ordered scale of labels in \(\mathcal{L}\). It is common to address a multi-label problem in an indirect way using a scoring function \(f : X \times \mathcal{L} \rightarrow R\) that assigns a real number to each element-label pair \cite{Cheng2010GradedMC}. The assumption here is that this scoring function corresponds to the probability of each label being relevant to an element.
In our case, the scoring function \(f\) is the FCM and the scores are the membership grades \(w_{k,j}\) of each element \(\boldsymbol{x}\). In other words, the FCM attempts to find the labels in \(\mathcal{L}\), and then the partition scores, or weights, are used for multi-label prediction via thresholding. Moreover, sorting the labels according to their score provides a label ranking, enabling the identification of the primary and secondary stages, as given through the field inspections (Eq \ref{eq2}).
\begin{align}
\label{eq2}
\lambda_i \leq_x \lambda_j \Leftrightarrow f(\boldsymbol{x}, \lambda_i) \leq f(\boldsymbol{x}, \lambda_j), i,j = 1...6
\end{align}
where $\lambda$ refers to the 6 principal phenological stages from RE to BO, $\boldsymbol{x}$ is an element of the element space in Eq \ref{eqe} and $f$ is the scoring function of the FCM algorithm, i.e., the partition score.
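Turning the ranked membership grades into a primary/secondary prediction can be sketched as follows. \texttt{predict\_metaclass} is a hypothetical helper; the default threshold of 0.11 is the representative value reported in the Results section.

```python
import numpy as np

def predict_metaclass(weights, stage_names, th_w=0.11):
    """Map one element's membership grades to a (primary, secondary) prediction:
    the top-ranked cluster is always the primary stage, and the 2nd-ranked
    cluster is kept as secondary only if its weight exceeds the threshold."""
    order = np.argsort(weights)[::-1]          # ranks, highest weight first
    primary = stage_names[order[0]]
    if weights[order[1]] > th_w:
        return (primary, stage_names[order[1]])
    return (primary,)
```

Elements whose second-ranked weight falls below th\textsubscript{w} are thus assigned a unit-set metaclass, matching the protocol's single-stage observations.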
\subsection*{Evaluation metrics}
We considered the ML task of phenology estimation as a multi-label classification problem, given the potential duality of phenological stages at a given instance. The two labels are ranked as primary and secondary phenological stages according to their relevance, or prevalence, in the field. Therefore, the metrics for assessing our model should capture these properties.
First, we categorize the predictions to error classes according to the difference or displacement between the prediction and the ground observation in the ordinal scale of metaclasses. These error classes are labelled as diff-\(o\), with \(o \in \{0, 1, 2, 3\}\). For instance, if our model predicted \(\lambda_2\) and the ground observation was \((\lambda_3, \lambda_2)\), then according to Eq \ref{eq1} the prediction is categorized as diff-\(2\). Similarly to the top-N accuracy, we devised the maxdiff-\(o\) accuracy, with \(o \in \{0, 1, 2, 3\}\), measuring the percentage of predictions with a displacement no greater than \(o\). For instance, maxdiff-\(2\) is the percentage of predictions that have at most an error of two displacement units. We also use the well known kappa coefficient, together with its linear and quadratic weighted variants. The weighted kappa metrics allow for disagreements to be weighted differently and are commonly used when labels are ordered.
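The maxdiff-\(o\) accuracy can be computed directly from metaclass indices. The sketch below is illustrative and assumes predictions and ground observations are encoded as positions on the ordinal scale of Eq \ref{eq1}.

```python
import numpy as np

def maxdiff_accuracy(pred_idx, true_idx, o):
    """maxdiff-o: fraction of predictions whose displacement on the ordinal
    metaclass scale is at most o positions."""
    disp = np.abs(np.asarray(pred_idx) - np.asarray(true_idx))
    return float(np.mean(disp <= o))
```

For example, predictions at positions [1, 2, 5] against ground observations [1, 4, 5] give a maxdiff-0 of 2/3 and a maxdiff-2 of 1.0.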
We additionally incorporate the Normalized Discounted Cumulative Gain (NDCG). It is a popular metric in the world of information retrieval and specifically in tasks such as top-N ranking and item recommendations. In our case, we use the various partition weights or membership probabilities (\(w_{k,j}\)) as relevance values and rank the phenological clusters accordingly. We use NDCG\(@2\), as we take into account only the top 2 ranked stages/clusters that we assume represent the primary and secondary annotations. The highly relevant phenology stage should be ranked higher than the less relevant stage, which is in turn expected to be ranked higher than non-relevant stages. NDCG\(@2\) captures and evaluates exactly this capability of the model.
NDCG is based on the cumulative gain that simply sums the relevance scores for top\(@p\) (\(p=2\)). This is mathematically expressed by:
\begin{align}\label{eq4}
{CG_{p}} = \sum_{i=1}^{p} rel_{i}
\end{align}
In this case, we set \(rel = 2\) for the primary stage and \(rel = 1\) for the secondary stage. The cumulative gain, however, does not take into account the position of the phenological stage in the rank. This is addressed by the discounted cumulative gain, as in Eq \ref{eq5}, which applies a log-based penalty that reduces each relevance score according to its position in the rank.
\begin{align}\label{eq5}
{DCG_{p}} = \sum_{i=1}^{p} \frac{rel_{i}}{\log_{2}(i+1)} = rel_1 + \sum_{i=2}^{p} \frac{rel_{i}}{\log_{2}(i+1)}
\end{align}
Finally, the discounted cumulative gain is simply normalized by the ideal order of the relevant items and we end up with NDCG \cite{jarvelin2002cumulated}.
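NDCG\(@2\) as described can be sketched for a single prediction as follows; \texttt{ndcg\_at\_2} is a hypothetical helper taking the model's cluster ranking and the observed primary and (optional) secondary stages.

```python
import numpy as np

def ndcg_at_2(ranked_stages, primary, secondary=None):
    """NDCG@2 for one prediction: relevance 2 for the observed primary stage,
    1 for the secondary (if any), 0 otherwise. DCG is computed over the
    model's top-2 ranked clusters and normalized by the ideal ordering."""
    rel = {primary: 2}
    if secondary is not None:
        rel[secondary] = 1
    gains = [rel.get(s, 0) for s in ranked_stages[:2]]
    dcg = sum(g / np.log2(i + 2) for i, g in enumerate(gains))   # Eq (5)
    ideal = sorted(rel.values(), reverse=True)[:2]
    idcg = sum(g / np.log2(i + 2) for i, g in enumerate(ideal))
    return dcg / idcg
```

A model that ranks the primary and secondary stages in the observed order scores 1.0; swapping them is penalized, and ranking non-relevant stages in the top 2 scores 0.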
\section*{Results}
The FCM clustering was performed on the EO and NWP variables for the Orchomenos region in 2020 (training year) and was then applied on the equivalent feature space of 2021 (test year) for the within-season prediction of phenological metaclasses. Our FCM-based approach has three parameters, i) the number of clusters, ii) the fuzzifier \(m \in R\), with \(m \geq 1\) and iii) the partition score threshold, above which a cluster is considered as a valid phenological stage label. The fuzzifier m was set to 2, which is commonly preferred when using the FCM algorithm \cite{de2007advances, SINGH2016114, VERMA2016543, lei2018significantly}. According to \cite{pal1995cluster} the best choice for m is in the interval [1.5, 2.5], with $m=2$ being the most common choice. Finally, the partition threshold (th\textsubscript{w}) depends on the distribution of the partition scores during the learning phase of the FCM algorithm.
For each element \(\boldsymbol{x_k}\), FCM gives a partition weight w\textsubscript{k,j} that refers to the degree to which the element belongs to each of the clusters. The weights are then sorted for each element. The two-labelled nature of our target (primary and secondary stages) implies that partition weights ranked 3rd or lower should not be considered as valid phenological stage labels. Thus, we set the threshold th\textsubscript{w} equal to the value of the 98th percentile of the partition weights ranked in the third place (Fig \ref{fig:fcmscores}). We used the 98th percentile to eliminate the influence of potential outliers. Indicatively, Fig \ref{fig:fcmscores} illustrates the distribution of the partition weights (for $m=2$) for a representative model. The different colors indicate the rank in which the weights were given for each prediction, i.e., from 1st to 6th. In this case, the threshold is set to 0.11 based on the aforementioned rule. A different threshold was computed for each model that we tested.
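The threshold rule can be sketched as follows; \texttt{partition\_threshold} is a hypothetical helper operating on the partition matrix W.

```python
import numpy as np

def partition_threshold(W, pct=98):
    """th_w: the 98th percentile of the third-ranked partition weights, so
    that (almost) only 1st/2nd-ranked clusters can pass as valid stage labels.
    W must have at least three columns (clusters)."""
    sorted_w = np.sort(W, axis=1)[:, ::-1]   # per-element weights, descending
    third_ranked = sorted_w[:, 2]
    return float(np.percentile(third_ranked, pct))
```

The percentile is computed over all elements of the training year, and the resulting th\textsubscript{w} is then reused unchanged for the within-season predictions.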
\begin{figure}[!ht]
\centering
\includegraphics[width=12cm]{images/Fig6.png}
\caption{{\bf Distribution of FCM partition weights for m=2.}
The hue indicates the rank in which the weights were given for each prediction. The dotted line shows the threshold above which 2nd ranked clusters are considered as valid secondary phenological stages.}
\label{fig:fcmscores}
\end{figure}
Having configured the three parameters, we trained FCM models (after-season prediction) on the 2020 cotton fields (training year), with multiple combinations of features ($\sim$80K). The FCM algorithm works well in low dimensions. Specifically, \cite{winkler2011fuzzy} suggested that for dimensions larger than ten the FCM starts to present ill behaviour; for this reason, we ran our experiments on hyperspaces of up to ten features. We had 32 features to choose from, i.e., the variables described in Table \ref{tab:variables} and their cumulative variants, and we generated feature sets of length 3 to 10. However, an exhaustive analysis would have required more than 200 million clusterings. To avoid running so many experiments, which would translate to months of experimentation, we fitted the FCM for feature sets of length \(E \in \{3, ..., 10\}\) using ten thousand random feature combinations for each length. It should be noted that all feature sets include the features cos(DoY) and sin(DoY) that set the time frame.
In order to evaluate the performance of the algorithm on the different feature spaces, we applied each individual FCM model (i.e., the cluster centers C and thresholds th\textsubscript{w}, from the training year) on the 2021 cotton fields, for which we have ground truth labels. We split the data into test and validation sets (70-30) and then used the kappa coefficient values of the validation data to find the best 1\% (mean kappa = 0.45) of feature combinations. Based on these top feature combinations, we selected the 15 most frequently occurring features. These were i) the VIs SIPI, GVMI, EVI, NDMI and SAVI, ii) the cumulative integrals of WRDVI, PSRI, NDWI and SIPI, iii) the accumulated maximum soil temperature, maximum surface temperature, maximum solar radiation and Accumulated GDD (AGDD) and iv) the always present time features of sin(DoY) and cos(DoY).
Having converged on the aforementioned set of predictors, we ran the FCM algorithm again for the training year (2020), for every possible combination of those features in spaces of length 6 to 15. This time, apart from the kappa coefficient, we also considered the maxdiff-$1$ score. Specifically, based on the performance of the best 1\% of feature combinations, as mentioned above, we kept solutions with a kappa coefficient larger than $0.46$ and maxdiff-$1$ larger than $0.86$, which resulted in 604 cases out of 7,814. Moreover, by setting these values as such, we ensure that we acquire better solutions compared to a baseline model. The baseline model refers to an FCM with only DoYs as input. Since phenology is closely related to the DoY, the baseline is used to capture this chance agreement and showcase by comparison the real competence of our model. Detailed comparisons with the baseline model follow later in this section.
The analysis revealed that, for most cases, feature sets of length $E > 10$ yielded sub-optimal results. Specifically, from the top 604 models, 84\% contained no more than 10 features. Besides the DoY features, the AGDD is by far the most common feature, since it appeared in more than 86\% of the best solutions. Another important observation is that the cumulative integrals of VIs and the accumulated NWP features are more important than the single-date VIs. These features are particularly important since they also capture the dimension of time, which is essential for an unsupervised phenology prediction model. Nevertheless, the results showed that the majority of well-performing feature sets contained at least one feature from each category, namely VIs, cumulative integrals of VIs and accumulated NWPs.
From the top 15 features, the cumulative integral of SIPI, the accumulated maximum solar radiation, NDMI and SIPI did not appear as frequently in the best 604 models. For this reason, we discarded models that included these features. The final set of models, which was used for our predictions, comprised models with 8 and 9 features that included at least one feature from each category. This resulted in a total of 82 models. In order to ensure the generalization and robustness of our methodology, by not depending on a single feature set, we generate the final predictions through majority voting over those best models. The 82 feature sets are listed in \nameref{S1_Table}.
Table \ref{tab:unsupmetrics} shows the performance of our model and the baseline model on the test set. Indeed, our model provides a significantly larger number of diff-$0$ predictions, namely perfect agreements between predicted and ground truth metaclasses. This is also evident from the Cohen's kappa, which is notably higher for our model. In terms of absolute values, though, the model shows moderate performance in these two metrics. However, given the unsupervised nature of the FCM algorithm, as well as the fact that it works very well in the other metrics and avoids outlier errors, we claim that the overall performance is satisfactory and the proposed approach shows potential. It is also worth noting that our model significantly outperforms the baseline in terms of NDCG, denoting a better ranking capacity.
\begin{table}[!ht]
\centering
\caption{\bf Metrics of performance for our phenology prediction model and the baseline model.}
\begin{tabular}{c|c|c|}
\cline{2-3}
\multicolumn{1}{l|}{} & \textbf{Ours} & \textbf{Baseline} \\ \hline
\multicolumn{1}{|c|}{\textbf{maxdiff-$0$}} & 0.53 & 0.38 \\ \hline
\multicolumn{1}{|c|}{\textbf{maxdiff-$1$}} & 0.88 & 0.86 \\ \hline
\multicolumn{1}{|c|}{\textbf{maxdiff-$2$}} & 1.00 & 0.97 \\ \hline
\multicolumn{1}{|c|}{\textbf{maxdiff-$3$}} & 1.00 & 1.00 \\ \hline
\multicolumn{1}{|c|}{\textbf{Cohen’s kappa}} & 0.48 & 0.33 \\ \hline
\multicolumn{1}{|c|}{\textbf{Weighted kappa (Linear)}} & 0.88 & 0.84 \\ \hline
\multicolumn{1}{|c|}{\textbf{Weighted kappa (Quadratic)}} & 0.98 & 0.97 \\ \hline
\multicolumn{1}{|c|}{\textbf{NDCG}} & 0.93 & 0.88 \\ \hline
\end{tabular}
\label{tab:unsupmetrics}
\end{table}
Table \ref{tab:displacement} shows the prediction errors in metaclass displacement units, for phenology metaclasses that had at least 10 ground observations. The displacement units are computed via multiplying the normalized confusion matrix with a weight matrix, for which the cells one off the diagonal of the confusion matrix are weighted 1, those two off are weighted 2, etc. Then the weighted displacement units are summed for each ground observation metaclass. Our model offers a smaller average displacement for six out of the eight metaclasses that account for the majority of ground observations.
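The displacement computation described above can be sketched as follows; \texttt{displacement\_per\_class} is a hypothetical helper, with rows of the confusion matrix corresponding to ground observation metaclasses.

```python
import numpy as np

def displacement_per_class(conf):
    """Average displacement per ground-truth metaclass: row-normalize the
    confusion matrix and weight each cell by its distance from the diagonal
    (0 on the diagonal, 1 one cell off, 2 two cells off, ...)."""
    conf = np.asarray(conf, float)
    norm = conf / conf.sum(axis=1, keepdims=True)
    idx = np.arange(conf.shape[0])
    weights = np.abs(idx[:, None] - idx[None, :])
    return (norm * weights).sum(axis=1)
```

Each entry of the returned vector is the weighted sum for one ground observation metaclass, i.e., the per-class figures reported in Table \ref{tab:displacement}.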
\begin{table}[!ht]
\centering
\caption{{\bf Prediction errors in metaclass displacement units.}}
\begin{tabular}{clc|cc|}
\cline{4-5}
\multicolumn{3}{l|}{} & \multicolumn{2}{c|}{Displacement} \\ \hline
\multicolumn{2}{|c|}{\textbf{Metaclass}} & \textbf{Support} & \multicolumn{1}{c|}{\textbf{Ours}} & \textbf{Baseline} \\ \hline
\multicolumn{1}{|c|}{1} & \multicolumn{1}{l|}{(RE, -)} & 72 & \multicolumn{1}{c|}{\textbf{0.41}} & 1.33 \\ \hline
\multicolumn{1}{|c|}{4} & \multicolumn{1}{l|}{(LD, -)} & 415 & \multicolumn{1}{c|}{\textbf{0.62}} & 0.89 \\ \hline
\multicolumn{1}{|c|}{6} & \multicolumn{1}{l|}{(S, LD)} & 17 & \multicolumn{1}{c|}{\textbf{0.41}} & 1.00 \\ \hline
\multicolumn{1}{|c|}{7} & \multicolumn{1}{l|}{(S, -)} & 122 & \multicolumn{1}{c|}{0.58} & \textbf{0.17} \\ \hline
\multicolumn{1}{|c|}{8} & \multicolumn{1}{l|}{(S, F)} & 73 & \multicolumn{1}{c|}{\textbf{0.30}} & 0.56 \\ \hline
\multicolumn{1}{|c|}{11} & \multicolumn{1}{l|}{(F, BD)} & 225 & \multicolumn{1}{c|}{1.04} & \textbf{0.92} \\ \hline
\multicolumn{1}{|c|}{12} & \multicolumn{1}{l|}{(BD, F)} & 75 & \multicolumn{1}{c|}{\textbf{0.37}} & 0.75 \\ \hline
\multicolumn{1}{|c|}{14} & \multicolumn{1}{l|}{(BD, BO)} & 177 & \multicolumn{1}{c|}{\textbf{0.54}} & 0.72 \\ \hline
\multicolumn{1}{|c|}{15} & \multicolumn{1}{l|}{(BO, BD)} & 90 & \multicolumn{1}{c|}{\textbf{0.03}} & 0.68 \\ \hline\hline
\multicolumn{2}{|c|}{\textbf{Average}} & 1266 & \multicolumn{1}{c|}{\textbf{0.48}} & 0.78 \\ \hline
\end{tabular}
\label{tab:displacement}
\end{table}
It is observed that the metaclass (BO, BD) offers the smallest error in displacement units, whereas the (F, BD) metaclass gives by far the largest. This can be explained by the fact that metaclass (BO, BD) is at the edge of the growing cycle and can only be confused with the metaclass that precedes it. On the other hand, metaclass (F, BD) is at the vegetation peak, where plants are well into the flowering phase and some bolls have started to develop. The consecutive metaclasses (F, -), (F, BD) and (BD, F) are situated near the plateau that is formed around the peak of the VI time-series curve (or valley, depending on the VI). Therefore, there are no significant differences in VI values among the three metaclasses, which explains the less than optimal performance for metaclass (F, BD). Overall, the average displacement shows a significant difference between the two models. Our model achieves a respectable average error of less than half a metaclass.
Table \ref{tab:cmOurs_PC} shows the confusion matrix for the hard clustering predictions. It can be observed that for the principal phenological stages the model performs rather well, with an overall accuracy of 87\%. Most misclassifications are observed for the BD stage. This is expected given the number of observations for which BD is observed in one of the transitional metaclasses (Table \ref{tab:displacement}). As a matter of fact, BD is never observed as a unit-set metaclass.
\begin{table}[!ht]
\centering
\caption{{\bf Confusion matrix for the six principal phenological stages.}}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\cline{3-8}
\multicolumn{2}{c|}{} & \multicolumn{6}{c|}{Pred} \\ \cline{3-8}
\multicolumn{2}{c|}{} & \textbf{RE} & \textbf{LD} & \textbf{S} & \textbf{F} & \textbf{BD} & \textbf{BO} \\ \hline
\multirow{7}{*}{\rotatebox[origin=c]{90}{Truth}} & \textbf{RE} & 71 & 4 & 0 & 0 & 0 & 0 \\ \cline{2-8}
& \textbf{LD} & 39 & 361 & 21 & 0 & 0 & 0 \\ \cline{2-8}
& \textbf{S} & 0 & 6 & 195 & 11 & 0 & 0 \\ \cline{2-8}
& \textbf{F} & 0 & 0 & 12 & 204 & 13 & 0 \\ \cline{2-8}
& \textbf{BD} & 0 & 0 & 0 & 28 & 195 & 29 \\ \cline{2-8}
& \textbf{BO} & 0 & 0 & 0 & 0 & 3 & 93 \\ \hline
\end{tabular}
\label{tab:cmOurs_PC}
\end{table}
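The 87\% figure can be recomputed directly from this confusion matrix; below is a minimal sketch (values transcribed from the table, NumPy assumed):

```python
import numpy as np

# Confusion matrix transcribed from the table (rows = truth, cols = predictions),
# class order: RE, LD, S, F, BD, BO
cm = np.array([
    [71,   4,   0,   0,   0,  0],
    [39, 361,  21,   0,   0,  0],
    [ 0,   6, 195,  11,   0,  0],
    [ 0,   0,  12, 204,  13,  0],
    [ 0,   0,   0,  28, 195, 29],
    [ 0,   0,   0,   0,   3, 93],
])

overall_accuracy = np.trace(cm) / cm.sum()      # correctly classified / total
per_class_recall = np.diag(cm) / cm.sum(axis=1)
print(round(overall_accuracy, 2))  # 0.87
```

Per-class recall also makes the BD weakness visible: its diagonal share (195 of 252) is the lowest among the six stages.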
As mentioned, the expert would visit each field three or four times a month. Thus, it was common to observe a field in a particular crop growth stage for multiple consecutive visits (3 to 6). The chronological order of observation, i.e., the relative position of the ground observation in the range enclosed between the first and last time a stage was observed as primary for a particular field, is useful since it indicates whether the stage was observed in its early, middle or late phases.
Fig \ref{fig:stage4} shows the distribution in the order of observation for the different disagreement categories, diff-$o$. The order of observation is categorized into early, middle and late visits. For a field that was observed in a particular phenological stage for three to five consecutive visits, the early visit was the first visit and the late visit was the last one. In the rare cases where a phenological stage was observed in six consecutive visits, the first two and last two would be characterized as early and late visits, respectively.
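The categorization rule above can be written down compactly; a small sketch (the function name and label strings are ours):

```python
def visit_position(index, n_visits):
    """Categorize the index-th (0-based) of n_visits consecutive
    observations of the same stage as 'early', 'middle' or 'late'.

    Rule from the text: for 3 to 5 consecutive visits the first visit
    is early and the last is late; for 6 visits the first two and the
    last two are early and late, respectively.
    """
    n_edge = 2 if n_visits >= 6 else 1
    if index < n_edge:
        return "early"
    if index >= n_visits - n_edge:
        return "late"
    return "middle"

print([visit_position(i, 4) for i in range(4)])
# ['early', 'middle', 'middle', 'late']
```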
Fig \ref{fig:stage4} shows that the perfect agreements, diff-$0$, were mostly middle and late observations. On the other hand, the vast majority of big disagreements, i.e., diff-$1$, diff-$2$ and diff-$3$, occurred for early-stage observations or, in fewer cases, late-stage observations. This is expected, as middle observations indicate that the field is well into a particular stage, whereas early or late observations denote transitional phases.
\begin{figure}[!ht]
\centering
\includegraphics[width=13cm]{images/Fig7.png}
\caption{{\bf Distribution of the chronological order of observation.}
Early describes ground observations that are found at the beginning of consecutive observations of a stage, with regard to chronological order. Late, similarly, describes ground observations that are found at the end of consecutive observations of a stage, and Middle contains ground observations that are found in middle positions. The hues show the predictions according to their diff-$o$ categorization of prediction error.}
\label{fig:stage4}
\end{figure}
\section*{Discussion}
\subsection*{Crop phenology}
The results indicated that using clustering for within-season phenology estimation is promising but challenging. EO and NWP data are competent predictor variables for this type of problem, as they represent both the land cover changes and the crop growth drivers, but cannot fully capture the physiological growth stages of crops. Furthermore, for operational applications in agricultural management, phenology predictions should be made on a within-season basis. However, there are still several challenges in this undertaking. First, within-season predictions are possible using only a part of the data time-series. Additionally, temporally dense EO time-series are crucial to ensure that critical phenological changes are detected as soon as possible \cite{gao2021mapping}. The density of the time-series, however, is subject to certain trade-offs. Considering the freely available satellite images, higher temporal resolution implies lower spatial resolution. Furthermore, optical SITS is significantly affected by cloud coverage. This may not have been a problem for this study, whose area of interest is in Greece, but it is certainly an issue for other parts of the world. Finally, crop growth stages might not be directly related to EO and NWP atmospheric and soil variables. Here, there is a clear gap in the literature for sophisticated modelling able to capture these complex relationships.
\subsection*{Ground observations}
Ground observations are required to assess how well the EO and NWP based land surface phenology relates to the actual phenological stages. Most validation datasets focus only on a few stages, using aggregated statistics over large areas \cite{gao2021mapping}. The National Agricultural Statistics Service (NASS) crop progress reports are one such example. Field-level ground observations are limited and rarely systematic. Recently, phenocams have been used to evaluate land surface phenology approaches \cite{zhang2018evaluation}. However, the network of phenocams is still sparse and confined. Furthermore, labeling phenological stages is a complex process and cannot be fully solved by observing photos from the field. Therefore, ground observations are still a necessity.
In this regard, this study offers a unique dataset of ground observations at the field level. The ground observations are accompanied by a panoramic and a couple of close-up photos of representative plants. We introduce a new protocol for collecting ground observations for crop growth that allows up to two label assignments. If two labels are assigned, then the inspector should specify which one is the primary growth stage and which one is the secondary growth stage that describes the field. This allows for the detailed description of crop growth stages through the metaclasses that result from combining the primary and secondary stage annotations. Additionally, not having to decide on a single label makes the ground observation easier and can potentially increase the number of people who can perform them. This matters because the choice of the limits that define the principal growth stages differs among studies and ground observation protocols. Moreover, close to the start and end of those limits it becomes difficult to decide on a single label. Having two labels can enable the reliable and large-scale crowdsourcing of ground observations.
Furthermore, the reliability of the ground observation collection method has been thoroughly evaluated. For this, we used the blinded interpretation of the field photos by an expert. Then the decisions of the field inspector and the photo inspector were evaluated by a third expert who decided on the percentage of agreement between the two. The quality assurance process yielded satisfactory results, deeming the ground observations reliable. The community is thus encouraged to use the openly available dataset and test their own models. The dataset is accompanied by the photos captured during the visits, which can be used for further interpretation but also computer vision tasks, such as crop classification and phenology classification.
\subsection*{Clustering for phenology estimation}
It was shown that the introduced clustering method managed to learn from the complete time-series of 2020 and successfully infer the phenological metaclasses in a within-season fashion for 2021. Our model significantly outperforms the baseline, making the proposed approach very promising. Furthermore, the model predicts 16 different metaclasses and goes beyond the 6 principal phenological stages, extracting more information on their transitions. This is particularly important since more intricate and precise agricultural management is now possible at the field level.
In many studies, phenology estimation is addressed as a regression problem, aiming to predict the DoYs of growth stage onsets, which is essential information for operational applications. This study's metaclass approach can also be viewed as a classification-based alternative to onset detection. The fuzzy metaclasses (\(\lambda_a, \lambda_b\)) and (\(\lambda_b, \lambda_a\)) denote the transitional phase between principal growth stages: the end of stage \(a\) and the onset of stage \(b\), respectively. In future work, further testing will be conducted to evaluate the spatial and temporal generalization of the proposed methodology. This will require additional ground observations in different areas and years of inspection.
We showed that formulating phenology estimation as a clustering problem, via incorporating time in our features, is valid. The authors suggest that there is great potential and encourage the community to test more models on the proposed premise. During experiments, it was observed that the FCM was sensitive to the DoY of the first and last element included in the learning phase. This is expected, since the clustering is largely dependent on the time component of our features. It is therefore important to set the ``start'' and ``end'' instances so as to enclose the average length of the cotton season. This is easy when the sowing and harvest dates are known, but could also be approximated by observing the mean and standard deviation of the VI time-series.
This work relied on i) feature engineering, incorporating time in the form of cumulative EO and NWP variables, and ii) feature selection to decide on a set of optimal feature spaces. Tens of thousands of experiments were performed for feature selection, yielding robust results. The robustness lies in the fact that the top 15 features systematically appeared in the best-performing models. Furthermore, phenology predictions are based on the majority vote of the best-performing combinations of the top features, ensuring that our approach can generalize.
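The majority-voting step mentioned above can be sketched as follows (the label strings and the number of models are hypothetical, for illustration only):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model metaclass predictions by majority vote.

    predictions: list of lists, one inner list per model, each holding
    one predicted metaclass label per sample (all the same length).
    Ties break in favor of the label encountered first (a simplifying
    assumption; Counter preserves insertion order).
    """
    n_samples = len(predictions[0])
    return [
        Counter(model_preds[i] for model_preds in predictions).most_common(1)[0][0]
        for i in range(n_samples)
    ]

# Three hypothetical models voting on two samples
votes = [["(S, F)", "(F, -)"],
         ["(S, F)", "(F, BD)"],
         ["(S, -)", "(F, BD)"]]
print(majority_vote(votes))  # ['(S, F)', '(F, BD)']
```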
Having said that, there are a number of recent studies that look into DL-based unsupervised change detection on SITS \cite{kalinicheva2020unsupervised, kalinicheva2018neural, kondmann2021spatial, andresini2021leveraging}. We see great potential in such approaches and believe they could be applicable in the proposed unsupervised premise for phenology estimation. A common denominator of these methods is the learning of a smaller latent or embedding space, in which entities that bear resemblance are located closer to each other. This is particularly important for clustering techniques that aim to group similar samples in the hyperspace. Usually, clustering algorithms, such as FCM, measure this similarity among entities using pair-wise distances. It is known that high-dimensional spaces are not ideal for distance-based techniques, as they usually fail to capture meaningful clusters. In addition, a latent manifold representation does not depend heavily on feature engineering and can generalize well.
\section*{Conclusion}
In this paper we proposed a fuzzy clustering method for the within-season phenology estimation of cotton in Greece. Our method is unsupervised, to tackle the problem of sparse, scarce and hard-to-acquire ground observations. It provides predictions within-season and thus enables its usage in operational agricultural management scenarios. It focuses on cotton, which is important for three reasons: i) it is an underrepresented crop type in the related literature, ii) the relationship between remote sensing phenology and the physiological growth of cotton is complex, and iii) cotton is a very important crop for the economy and agricultural ecosystem of Greece, which is the study area.
We conducted field visits to collect ground observations that are offered to the community as a ready-to-use label dataset. For this, we used a new protocol that leverages two ranked labels. This makes the observations easier and at the same time provides enhanced information on the growth status. Therefore, we approach the problem as a multi-label one, introducing the notion of metaclasses. We go beyond the principal phenological stages of cotton by providing predictions for 16 metaclasses, using the membership probabilities of the FCM classifier.
Finally, we experimented with numerous combinations of features, including accumulated numerical simulations of atmospheric and soil parameters, Sentinel-2 based VIs and their cumulative integral variants. Based on these experiments, we provided a list of optimal feature sets that can be used for cotton phenology estimation through majority voting.
\section*{Supporting information}
\paragraph*{S1 Table.}
\label{S1_Table}
{\bf The feature sets of the top 82 FCM models with size 8 or 9 features.}
With (I) we show the cumulative integrals of the VIs. max\_soil refers to the cumulative maximum soil temperature, max\_surf to the cumulative maximum surface temperature and wkappa to the linear weighted kappa coefficient. The last five columns refer to the performance metrics.
\section*{Acknowledgments}
The authors express their gratitude to Mr. Vaggelis Dedes for performing the ground observation campaign and Dr. Dimitra A. Loka, Researcher at the Institute of Industrial and Forage Crops in Greece, for her valuable consultations. Finally, the authors acknowledge the farmers' association of Orchomenos (ASOO) for letting us perform the ground observations on their fields and for giving consent to release these in a publicly available dataset.
\nolinenumbers
\label{sec:intro}
High-dimensional prediction problems are more and more common in many application domains such as computational biology, signal processing, computer vision or natural language processing. To handle this high-dimensionality, one usually resorts to linear modeling and regularization with sparsity-inducing norms, such as the $\ell_1$ norm. This type of regularization results in \emph{sparse} models, meaning that the model is described by relatively few parameters. Besides making parameter learning consistent in high-dimensional settings, the sparsity assumption has the appealing property of yielding more interpretable models. As an example, consider the problem of explaining a particular phenotype of patients, e.g., the disease state, based on the genome sequence of each patient. Sparse linear approaches try to find a handful of genetic loci that govern the disease state, rather than a model involving the whole sequence. The $\ell_1$-regularized sparse linear models, such as the LASSO \citep{Tibshirani94} or basis pursuit \citep{chen}, are well studied by now, with a solid body of theoretical results, efficient algorithms and applications in diverse fields \citep[see, e.g.,][and references therein]{BuhGee11}. However, in practice, we often know that there is more \emph{structure} in the problem at hand, which cannot be captured by simple sparse modeling and $\ell_1$ regularization, and which, if exploited, can improve the estimation of parameters as well as the interpretability of the estimates \citep[see][and references therein]{Cevher2008,Huang2011,Bachetal12a}.
In our example, we could expect the genetic loci that influence the disease to be part of a small number of connected patterns in a known gene-gene interaction network \citep{Rapetal07,Azencottetal13}. In other words, we could be looking for a small number of possibly overlapping subsets of variables such that each subset corresponds to a connected subgraph in a given gene network, and the combination of variables in each subset influences the phenotype.
Given prior knowledge about the relevance of each considered group of variables, several methods exist for learning sparse models guided by this prior knowledge.
These methods achieve different kinds of structured sparsity by regularization (penalization, weighting) with appropriate sparsity-inducing norms, that often correspond to convex relaxations of combinatorial penalties on the support (i.e., the set of indices of non-zero components) of the parameter vector. After the group LASSO \citep{YuaLin06}, a number of convex penalties have been proposed, generalizing the group LASSO penalty to the cases of overlapping groups \citep{ZhaYu09, JacOboVer09, JenAudBac11, Chenetal12}, including tree-structured groups \citep{KimXin10,Jenattonetal11}. See \citep{Bachetal12,Bachetal12a} for a more detailed review of sparsity-inducing norms.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.5\textwidth]{w_with_legend}
\vspace{-.2cm}
\caption{The coefficient vector $w$ is covered by latent variables supported on subsets $A$, $B$ and $C$: $w = v_A+v_B+v_C$.}
\label{fig:w}
\end{center}
\end{figure}
While most of these norms induce \emph{intersection-closed} sets of non-zero patterns, \citet{JacOboVer09} and \citet{OboBac12} introduce a different, latent formulation of sparsity-inducing norms that yields \emph{union-closed} sets of non-zero patterns, meaning that the parameter vector $w$ is represented as a sum of latent vectors $v_A$, identically zero at indices not in~$A$ for a subset $A$ of indices. If several such sets of indices are considered, then the support of $w$ (i.e., the set of indices $i$ for which $w_i$ is non-zero) is included in the union of such sets (see Figure~\ref{fig:w} for illustration with three sets $A$, $B$ and $C$).
In order to quantify the intuition above, \citet{OboBac12} consider the following function on the support ${\rm supp}(w)$ of $w$:
\begin{equation}
g({\rm supp}(w)) = \min_{\substack{\Acal' \subseteq \Acal,\\ \cup_{A\in\Acal'} A ={\rm supp}(w)}} \sum_{A\in\Acal'} f(A),
\end{equation}
that is, $g({\rm supp}(w))$ is the minimum-weight \emph{cover} of ${\rm supp}(w)$ with the subsets $A$ in the family $\Acal$. The weights $f(A)$ express our prior belief in the subset~$A$ being relevant: If a group $A$ is irrelevant, then $f(A)=\infty$. Using the function $g$ as a regularizer (essentially the approach of~\citet{Huang2011}) will encourage the support of the parameter vector $w$ to be a union of subsets $A \in \Acal$ with finite $f(A)$.
Moreover, \citet{OboBac12} computed a convex relaxation of the function $g$ defined above, leading to the following norm~$\Omega(w)$ equal to:
\begin{equation}
\label{eq:norm_guillaume}
\min_{v_A\in\RR^P} \!\sum_{A\in\Acal} \!\|v_A\|_2 f(A)^{1/2} {\rm \quad s.t.} \sum_{A\in\Acal} v_A\!=\!w.
\end{equation}
However, generally we do not have this prior knowledge about the relevance of individual groups: Automatically choosing appropriate weights $f(A)$ for groups of variables is an important open problem in structured sparsity. Assuming that we have several learning problems with similar structure (the relevance of a given group is largely shared across individual problems), in this paper we propose a framework for learning group relevances from data. Note that learning the structure is naturally a multi-task problem, as it is impossible to estimate the prior on a vector of parameters if we only observe one particular instance of it.
To come back to our example, we could assume that we have several phenotypes that can be explained by groups of loci whose relevance is largely shared across phenotypes.
A recent approach to learning group relevances from data has been proposed by \citet{HerHer13}. However, this work only considers learning relevances of pairs of variables and does not make the link with sparsity-inducing norms. Let us also mention that probabilistic modeling for structured sparsity has also been explored by \citet{MarMur09} and \citet{MarSchMur09} in the context of learning Gaussian graphical models, and by \citet{Hanetal14} for multi-task learning with structure on tasks.
We approach the problem using probabilistic modeling with a broad family of heavy-tailed priors and derive a variational inference scheme to learn the parameters of these priors. Our model follows the pattern of \emph{sparse Bayesian} models \citep[][among others]{Palmeretal06,SeeNic11}, that we take two steps further: First, we propose a more general formulation, suitable for structured sparsity with any family of groups; Second, we learn the prior parameters from data.
We show that prior parameter estimation with classical variational inference does not always lead to reasonable estimates in these models, and find a way of regularizing that works well in practice. Moreover, we propose a greedy algorithm that makes this inference scalable to settings in which the number of groups to consider is large. In our experiments, we show that we are able to recover the model parameters when the data are generated from the model, and we demonstrate the utility of learning penalties in image denoising.
\section{A Probabilistic Model for Structured Sparse Linear Regression}
\label{sec:model}
In this section, we formally describe our model and a suitable approximate inference scheme.
\subsection{Model definition}
We consider $K$ linear regression problems with design matrices $X^k \in \RR^{N^k\times P}$ and response vectors $y^k \in \RR^{N^k}$ for $k\in\{1,\ldots, K\}$. For each~$X^k$ and $y^k$, we assume the classical Gaussian linear model with i.i.d.~noise with variance~$\sigma^2$, that is,
\begin{equation}
\label{eq:distr_y}
y^k \sim \Ncal(X^kw^k, \sigma^2 I).
\end{equation}
Let $V$ be the set of indices of variables $\{1,\ldots,P\}$. For a family $\Acal$ of subsets of $V$, we assume
\begin{equation}
\label{eq:w}
\displaystyle w^k = \sum_{A\in \Acal} v_A^k,
\end{equation}
where, for each $k$,
\begin{itemize}
\item $\forall A\in\Acal, v_A^k$ is a vector in $\RR^P$ such that all its components with indices in $V\setminus A$ are zero (in other words, it is supported on $A$),
\item $\{v_A^k\}_{ A \in \Acal}$ are jointly independent, and
\item $\forall A\in\Acal, v_A^k$ has an isotropic density with inverse scale parameter~$f(A)$
\begin{equation}
\label{eq:prior_v_A}
p(v_A^k|f(A))=q_A(\|v_A^k\|_2 f(A)^{1/2})f(A)^{|A|/2},
\end{equation}
where $q_A$ is a heavy-tailed distribution that only depends on $A$ through its cardinality, $|A|$. We specify~$q_A$ in Section \ref{sec:super-Gaussian}.
\end{itemize}
We regard the inverse scale parameter~$f(A)$ as a measure of relevance of the group of variables~$A$\footnote{Abusing notation, we will call ``group $A$'' the subset of variables indexed by elements of $A$ throughout the paper.}: If a group of variables is irrelevant, then~$f(A)$ should equal infinity.
We are interested in priors~$q_A$ such that for each task indexed by~$k$ only a handful of~$v_A^k$ can be significantly away from~zero.
Here it is important to stress the link between the expression of our isotropic prior \eqref{eq:prior_v_A} and the norm~$\Omega(w)$ \eqref{eq:norm_guillaume} from \citet{OboBac12}, introduced above:
The log-likelihood of parameter vectors $\{w^k\}_{k=1,\ldots,K}$ with respect to $f$ will (up to a constant) be equal to the term $\sum_{A\in\Acal} \log q_A(\|v_A^k\|_2 f(A)^{1/2})$, which very closely resembles the norm \eqref{eq:norm_guillaume}. If~$q_A$ is the generalized Gaussian distribution (cf. Section \ref{sec:specialcases}), the two expressions match exactly. Thus, learning with our prior is a natural probabilistic counterpart of learning with the sparsity-inducing norm~\eqref{eq:norm_guillaume}.
Given data $\{X^k, y^k\}_{k=1,\ldots,K}$ and such a model for the prior, our goal will be to infer the parameters~$f(A)$ by maximizing the likelihood with respect to~$f$,
\begin{equation}
\label{eq:ML_brute}
\begin{aligned}
\ p(y^1,\ldots,y^K |f) = \prod_{k=1}^K \int p(y^k|X^kw^k,\sigma^2I) \prod_{A\in \Acal}p(v_A^k|f(A)) dv_A^k,\\
\end{aligned}
\end{equation}
where the parameters~$v_A^k$ are marginalized.
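To make the generative assumptions concrete, the following sketch draws one task $(w^k, y^k)$ from the model above, realizing the prior on $v_A^k$ through a Gaussian scale mixture (the Student's $t$ case discussed later in the special cases): a mixing variable $s_A \sim \mathrm{Gamma}(a, 1)$ is drawn, then $v_A$ given $s_A$ is a zero-mean Gaussian with covariance $I/(f(A)\,s_A)$. All variable names and the toy setup are ours, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task(X, groups, f_vals, a, sigma):
    """Draw one task (w^k, y^k) from the generative model, with the
    Student's t prior realized via its Gaussian scale mixture:
    s_A ~ Gamma(a, 1), then v_A | s_A ~ N(0, I / (f(A) * s_A)).
    """
    n, p = X.shape
    w = np.zeros(p)
    for idx, fA in zip(groups, f_vals):
        s = rng.gamma(a, 1.0)                           # mixing variable
        w[idx] += rng.normal(0.0, 1.0 / np.sqrt(fA * s), size=len(idx))
    y = X @ w + sigma * rng.normal(size=n)              # y^k ~ N(X^k w^k, sigma^2 I)
    return w, y

X = rng.normal(size=(50, 6))
# overlapping groups: index 2 is shared, so w[2] sums two latent contributions
groups = [np.array([0, 1, 2]), np.array([2, 3]), np.array([4, 5])]
f_vals = [1.0, 1.0, 1e8]      # a huge f(A) makes the last group irrelevant
w, y = sample_task(X, groups, f_vals, a=2.0, sigma=0.1)
```

Note how a large inverse scale $f(A)$ effectively zeroes out the corresponding latent vector, matching the interpretation of $f(A)=\infty$ for irrelevant groups.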
\subsection{Super-Gaussian priors}
\label{sec:super-Gaussian}
We assume that $q_A$ is a \emph{scale mixture of Gaussians}, i.e.,
\begin{equation*}
q_A(u) = \int_0^{\infty} \Ncal(u|0,s) r_A(s) ds
\end{equation*}
for some mixing density $r_A(s)$.
The main reason why we choose to work with the family of scale mixtures of zero-mean Gaussians is that it contains distributions that are heavy-tailed and therefore suitable for modeling sparsity; One such distribution is Student's $t$, which we use in our experiments.
The inverse scale parameter of the distribution on $v_A^k$, $f(A)$, captures the relevance of the group~$A$: the smaller $f(A)$, the more relevant the group, that is, the larger the values $v_A^k$ is likely to take. Note that even if the group $A$ is relevant, not all $v_A^k, k=1,\ldots,K$ have to be large. In fact, if the parameters $v_A^k, k=1,\ldots,K$ are drawn from a heavy-tailed distribution with small $f(A)$, then only a fraction of them will be significantly away from zero. Moreover, as we show in Section \ref{sec:variational}, learning in such models is amenable to variational optimization with closed-form updates and leads to an approximate Gaussian posterior on~$v_A^k$.
In general, the integral in~\eqref{eq:ML_brute} is intractable for Gaussian scale mixtures, therefore one has to resort to sampling or approximate inference to learn parameters in such models.
The fact that $q_A$ is a Gaussian scale mixture implies that it is also \emph{super-Gaussian}, that is, the logarithm of $q_A(u)$ is convex in $u^2$ and non-increasing \citep{Palmeretal06}\footnote{Note that the converse is not true: complete monotonicity of the log-density is a necessary and sufficient condition for the existence of a Gaussian scale mixture representation~\cite[Section 3]{Palmeretal06}.}. It therefore admits a representation of the following form by convex conjugacy
\begin{equation}
\label{eq:q_superG}
\log q_A(u) = \sup_{s \geq 0} -\frac{u^2}{2s} - \phi_A(s),
\end{equation}
where $\phi_A(s)$ is convex in $1/s$. Note that the expression inside the supremum in \eqref{eq:q_superG} has a unique maximizer.
In this work we only consider~$q_A$ for which this maximizer has a simple analytical form. From~\eqref{eq:prior_v_A} and~\eqref{eq:q_superG}, we get the following variational representation for $p(v_A^k|f(A))$:
\begin{equation}
\label{eq:var_repr_p_v_A}
\begin{aligned}
p(v_A^k|f(A))& = f(A)^{\frac{|A|}{2}} \!\! \sup_{\zeta_A^k \geq 0} \exp{\Big( -\frac{\|v_A^k\|_2^2f(A)}{2\zeta_A^k} - \phi_A(\zeta_A^k) \Big)}\\
& = f(A)^{\frac{|A|}{2}} \!\! \sup_{\zeta_A^k \geq 0} \! \Big[ \! \Ncal \! \Big(v_A^k \Big| 0,\frac{\zeta_A^k I}{f(A)} \Big) \!\Big(2\pi\frac{\zeta_A^k}{f(A)}\Big)^{\!\!\!\frac{|A|}{2}} \! \! \! e^{-\phi_A(\zeta_A^k)} \Big].
\end{aligned}
\end{equation}
For a particular choice of the prior $q_A$, we measure the relevance of the group of variables $A$ by the expectation of $\|v_A^k\|_2^2$ (which amounts to the sum of the variances of the individual components of~$v_A^k$),
\begin{equation*}
\EE\big[\|v_A^k\|_2^2\big] = \frac{\EE_{\|z\|_2\sim q_A} \big[\|z\|_2^2\big]}{f(A)},
\end{equation*}
where $\EE_{\|z\|_2\sim q_A} \big[\|z\|_2^2\big]$ is the expectation of $\|z\|_2^2$ under the standardized distribution $q_A$ on $\|z\|_2$. In fact, as we have
\begin{equation*}
\EE\big[\|w^k\|_2^2 \big] = \sum_{A\in\Acal} \EE\big[\|v_A^k\|_2^2 \big]
\end{equation*}
given our independence assumption, the expected value of $\|v_A^k\|_2^2$ allows us to measure the contribution of the group $A$ with respect to $\EE\big[\|w^k\|_2^2 \big]$. We somewhat abusively call $\EE\big[\|w^k\|_2^2 \big]$ the \emph{signal variance} in our experiments, as opposed to $P\sigma^2$, the \emph{noise variance}.
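As a quick sanity check of this decomposition, $\EE\big[\|v_A^k\|_2^2\big]$ can be estimated by Monte Carlo. The sketch below uses the Student's $t$ case via its scale-mixture form ($s \sim \mathrm{Gamma}(a,1)$, then $v$ Gaussian with covariance $I/(f(A)\,s)$), for which $\EE\big[\|v_A^k\|_2^2\big] = |A| / \big(f(A)(a-1)\big)$ when $a > 1$; this closed form is our own derivation, used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
a, f_A, size_A, n_mc = 3.0, 2.0, 4, 200_000

# Student's t prior via its Gaussian scale mixture:
# s ~ Gamma(a, 1), then v | s ~ N(0, I / (f(A) s))
s = rng.gamma(a, 1.0, size=n_mc)
v = rng.normal(size=(n_mc, size_A)) / np.sqrt(f_A * s)[:, None]

mc_estimate = (v ** 2).sum(axis=1).mean()      # Monte Carlo E[||v_A||^2]
analytic = size_A / (f_A * (a - 1.0))          # |A| / (f(A)(a-1)), for a > 1
print(mc_estimate, analytic)   # both close to 1.0
```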
\begin{figure}
\begin{center}
\includegraphics[width=.47\textwidth]{graphical_model}
\caption{The graphical representation of our model.}
\label{fig:graphical_model}
\end{center}
\end{figure}
Figure~\ref{fig:graphical_model} represents the graphical model corresponding to our assumptions.
Note that we have explicitly incorporated the variational parameter $\zeta_A^k$ into the graphical model: In fact, the same parameter can also be interpreted as the scale parameter of the Gaussian in the Gaussian scale mixture representation of $p(v_A^k|f(A))$ \citep{Palmeretal06}.
\subsection{Inference}
\label{sec:variational}
Our model described above, namely the combination of the density of $y^k$ \eqref{eq:distr_y} and the variational representation of the prior density on~$v_A^k$~\eqref{eq:var_repr_p_v_A}, leads to the following variational bound on the marginal distribution of~$y^k$:
\begin{equation*}
\begin{aligned}
\log &\,p(y^k|f)\\
& =\log \int p(y^k|X^kw^k,\sigma^2I) \prod_{A\in \Acal}p(v_A^k|f(A)) dv_A^k\\
&\geq \sup_{\substack{{\zeta_A^k \geq 0}\\{ A \in \Acal}}} \Big\{\log \Ncal(y^k|0, X^kM Z^kF^{-1} M^{\top}{X^k}^{\top} + \sigma^2I) \\[-.1cm]
& + \sum_{A\in\Acal} \Big[ \frac{|A|}{2}\log f(A) \! + \! \frac{|A|}{2}\log\Big(2\pi\frac{\zeta_A^k}{f(A)}\Big) \!-\! \phi_A(\zeta_A^k) \Big]\Big\},\\
\end{aligned}
\end{equation*}
where $M$ is a matrix of dimension $P\times\sum_{A \in \Acal} |A|$ that ensures $w^k = Mv^k$ where $v^k$ is the concatenation of all elements indexed by elements of $A$ in $v_A^k, A \in \Acal$, and $F$ and $Z^k$ are square diagonal matrices of size $\sum_{A \in \Acal} |A|$ whose diagonals consist of $f(A)$ and $\zeta_A^k$ respectively, replicated $|A|$ times, for each $A\in \Acal$.
Thus, as an approximation to minimizing the negative log-likelihood,
we would like to minimize the following overall bound with respect to $f$ and~$\zeta_A^k$ for all $A \in \Acal$ and $k\in\{1,\ldots,K\}$:
\begin{equation}
\label{eq:LBgeneral_sum}
\begin{aligned}
-\sum_{k=1}^K \Big\{ &\!- \frac{1}{2} {y^k}^{\top} \!\Big(X^kM Z^kF^{-1} M^{\top}{X^k}^{\top}\!\! + \sigma^2I\Big)^{-1} \! y^k \! - \!\frac{1}{2} \log\det\Big(X^kM Z^kF^{-1} M^{\top}{X^k}^{\top}\!\! + \sigma^2I\Big) \\[-.1cm]
& + \sum_{A\in\Acal}\frac{|A|}{2}\log f(A) + \frac{\sum_{A\in\Acal}|A|\!-\! N^k}{2}\log(2\pi) + \frac{1}{2}\log\det (Z^kF^{-1} ) -\sum_{A\in\Acal}\phi_A(\zeta_A^k) \Big\}.\\
\end{aligned}
\end{equation}
In its form given by \eqref{eq:LBgeneral_sum}, the bound is difficult to optimize. However, we recognize parts of it as minima of convex functions, which allows us to design an iterative algorithm with analytic updates, finding a local minimum (see the appendix for details). Our optimization problem becomes
\begin{equation}
\label{eq:obj}
\begin{aligned}
\inf_{\zeta^k\geq 0} \inf_{v^k} \inf_{\Sigma^k\succcurlyeq 0} \sum_{k=1}^K\Big\{ &\frac{1}{2\sigma^2} \|y^k-X^kMv^k\|^2_2 + \frac{1}{2} \sum_{A\in\Acal} \frac{f(A)}{\zeta_A^k} (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k) - \frac{1}{2} \log\det \Sigma^k\\[-.1cm]
& + \frac{N^k}{2}\log(\sigma^2) + \frac{N^k}{2} \log (2\pi) + \frac{1}{2\sigma^2} \tr{M^{\top}{X^k}^{\top}X^kM\Sigma^k} -\frac{1}{2} \sum_{A\in\Acal}|A|\\
&+ \sum_{A\in\Acal} \Big[ - \frac{1}{2}|A|\log f(A) -\frac{|A|}{2} \log 2\pi + \phi_A(\zeta_A^k) \Big]\Big\},\\
\end{aligned}
\end{equation}
and the closed-form updates are
\begin{equation}
\label{eq:updates}
\begin{aligned}
\Sigma^k & = \sigma^2(M^{\top}{X^k}^{\top}X^kM + \sigma^2 F{Z^k}^{-1} )^{-1}\\
v^k & = (M^{\top}{X^k}^{\top}X^kM + \sigma^2 F{Z^k}^{-1})^{-1}M^{\top}{X^k}^{\top}y^k\\
\zeta_A^k & = \argmin_{z \geq 0} \phi_A(z) + \frac{1}{2} \frac{f(A)}{z} (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k)\\
\sigma^2& = \frac{\sum_{k=1}^K \!\! \big\{ \|y^k \!\!-\!\! X^kMv^k\|^2_2 \!+\! \tr M^{\top}{X^k}^{\top}X^kM\Sigma^k \big\}} {\sum_{k=1}^{K}N^k} \\
f(A) & = \frac{K|A|}{\sum_{k=1}^K \frac{1}{\zeta_A^k} (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k)},\\
\end{aligned}
\end{equation}
iterated until convergence.
\begin{remark}
Note that the only update that depends on the specific prior distribution is that for the variational parameter $\zeta_A^k$, all others apply to all super-Gaussian priors.
\end{remark}
\begin{remark}
It can be shown that the updates \eqref{eq:updates} exactly correspond to the updates yielded by mean-field variational inference in the special case of Gaussian scale mixtures \citep{Palmeretal06}. However, the approach presented here is more general, as it also applies to super-Gaussian priors that are not Gaussian scale mixtures.
\end{remark}
\begin{remark}
Using the matrix inversion lemma, the update for $\Sigma^k$ can be rewritten in such a way that we avoid the expensive inversion of a $\sum_{A\in\Acal}|A|\times \sum_{A\in\Acal}|A|$ matrix and only have to invert a $P\times P$ or $N^k\times N^k$ matrix instead, which can even be diagonal in certain cases (see the appendix for details).
When it is not diagonal, matrix inversions can be avoided altogether by making an extra diagonal assumption on the covariance matrices of the Gaussian posteriors of all $v_A^k$.
\end{remark}
\begin{remark}
While we do provide an update equation for $\sigma^2$ for completeness, in general it is customary to assume the noise level known, which we also do in all our experiments.
\end{remark}
\subsection{Special cases}
\label{sec:specialcases}
The family of super-Gaussian distributions includes Student's $t$ and generalized Gaussian distributions among many others. We here give the densities of these distributions, as well as the expressions for the quantities in our model and inference that depend on the particular prior on~$v_A^k$.
\paragraph{Student's $t$:} The density of this distribution is given by
\begin{equation}
p(v_A^k|a, f(A)) =f(A)^{\frac{|A|}{2}} \frac{\Gamma( a + |A|/2)}{\Gamma(a)} \Big(\frac{1}{2\pi}\Big)^{\frac{|A|}{2}}\Big(1 + \frac{{\|v_A^k\|_2}^2f(A)}{2} \Big)^{-a-\frac{|A|}{2}},
\end{equation}
where $a$ is a parameter governing the shape of the distribution. The smaller $a$, the heavier-tailed the distribution (for $a\le 1$, there is no finite variance).
For this distribution,
\begin{equation}
\begin{aligned}
\phi_A(\zeta_A^k) = & \frac{1}{\zeta_A^k}+ (a+|A|/2)\log(\zeta_A^k) + \frac{|A|}{2}\log(2\pi) - (a + |A|/2)+(a+|A|/2)\log(a+|A|/2) \\
& - \log(\Gamma(a+|A|/2))+ \log(\Gamma(a)),\\
\end{aligned}
\end{equation}
and, therefore, the update for $\zeta_A^k$ is written as
\begin{equation}
\zeta_A^k = \frac{1+\frac{1}{2} f(A) (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k)}{a+\frac{|A|}{2}} .
\end{equation}
The variance of a Student's $t$-distributed random variable, if $a>1$, is
$\EE(v_A^k {v_A^k}^{\top})=\frac{1}{f(A)(a-1)}I,
$
and therefore
$\EE(\| v_A^k\|_2^2)=\frac{|A|}{f(A)(a-1)}.
$
Student's $t$ has a natural representation as a Gaussian scale mixture with the inverse Gamma as the mixing distribution.
All our experiments are carried out using Student's~$t$.
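As a quick sketch (function name ours), the Student's $t$ update for $\zeta_A^k$ is a one-liner; it can be checked against a grid minimization of the $z$-dependent part of $\phi_A(z) + \frac{1}{2}\frac{f(A)}{z}(\|v_A^k\|_2^2+\tr\Sigma_{AA}^k)$.

```python
import numpy as np

def zeta_student_t(a, card, fA, S):
    """zeta_A^k update for a Student's t prior on a group of cardinality `card`,
    with S standing for ||v_A^k||^2 + tr Sigma_AA^k."""
    return (1.0 + 0.5 * fA * S) / (a + 0.5 * card)
```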
\paragraph{Generalized Gaussian:} The density is given by
\begin{equation}
p(v_A^k|\gamma, f(A)) = f(A)^{\frac{|A|}{2}}\frac{\frac{\gamma}{2}\Gamma(\frac{|A|}{2})} {\pi^{\frac{|A|}{2}} \Gamma(\frac{|A|}{\gamma} )} e^{-\|v_A^k f(A)^{\frac{1}{2}}\|_2^\gamma }
\end{equation}
\citep{Pascaletal13}. Consequently, we have
\begin{equation}
\begin{aligned}
\phi_A(\zeta_A^k) = & -\log \frac{\frac{\gamma}{2}\Gamma(\frac{|A|}{2})} {\pi^{\frac{|A|}{2}} \Gamma(\frac{|A|}{\gamma} )} + \frac{{\zeta_A^k}^\frac{\gamma}{2-\gamma}( \frac{1}{\gamma} - \frac{1}{2}) }{\gamma^{\frac{2}{\gamma-2}} },\\
\end{aligned}
\end{equation}
\begin{equation}
\zeta_A^k = \frac{1}{\gamma}\Big( f(A) (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k) \Big)^{\frac{2-\gamma}{2}},
\end{equation}
and $\EE(\| v_A^k\|_2^2)=\frac{\Gamma(|A|/\gamma + 2/\gamma)}{f(A)\Gamma(|A|/\gamma)}$.
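The generalized Gaussian update for $\zeta_A^k$ can be cross-checked numerically: writing the $z$-dependent part of $\phi_A(z)$ above as $\gamma^{2/(2-\gamma)}(1/\gamma-1/2)\,z^{\gamma/(2-\gamma)}$, the minimizer of the generic $\zeta$ update simplifies (our algebra) to $\big(f(A)(\|v_A^k\|_2^2+\tr\Sigma_{AA}^k)\big)^{(2-\gamma)/2}/\gamma$ for $0<\gamma<2$; the grid check below confirms it for one parameter setting.

```python
import numpy as np

def zeta_gen_gauss(gamma, fA, S):
    """Minimizer of phi_A(z) + fA*S/(2z), where the z-dependent part of phi_A is
    gamma**(2/(2-gamma)) * (1/gamma - 1/2) * z**(gamma/(2-gamma)).
    Valid for 0 < gamma < 2; S stands for ||v_A^k||^2 + tr Sigma_AA^k."""
    return (fA * S) ** ((2.0 - gamma) / 2.0) / gamma

# sanity check against a dense grid for one setting of the parameters
gamma, fA, S = 0.5, 2.0, 8.0
c = gamma ** (2.0 / (2.0 - gamma)) * (1.0 / gamma - 0.5)
z = np.linspace(1e-3, 60.0, 400_000)
obj = c * z ** (gamma / (2.0 - gamma)) + 0.5 * fA * S / z
z_star = z[np.argmin(obj)]   # close to zeta_gen_gauss(0.5, 2.0, 8.0) = 16.0
```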
\subsection{Learning with all groups}
\label{sec:allgroups}
While our model and the associated inference algorithm described earlier are valid for any set of groups~$\Acal$, including $\Acal=2^V$, the algorithm is impractical when $\Acal$ is large: Indeed, even if we only have 20 variables and 1000 tasks, learning with $\Acal = 2^{\{1,\ldots,20\}}$ implies that the number of variational parameters~$\zeta_A^k$ will exceed a billion. To avoid working with a prohibitively large number of groups at once, one can leverage an \textit{active set}-type heuristic that maintains a list of relevant groups and iteratively updates it. Algorithm~\ref{alg:greedy}, which we discuss in detail in the following, describes one way to do this. It requires setting the maximal allowed cardinality $T$ of $\Acal$, and the number $D$ of groups to be discarded in each active set update. We start by learning with singletons only (steps 1 and 2); After ranking the groups in $\Acal$ according to their relevance measured by $\frac{f(A)}{|A|}$ into the sequence $(A_1,\ldots,A_{|\Acal|})$ (step 3), we determine the additional groups to be considered, $\Acal'$, by taking the first $T$ sets from the sequence $(A_1\cup A_2 ,\ldots, A_1 \cup A_{|\Acal|}, A_2\cup A_3,\ldots, A_2\cup A_{|\Acal|}, \ldots)$, ignoring groups that have been considered in the past and making sure we do not add the same group more than once; In steps 5-11 we repeatedly (a) learn with $\Acal\cup\Acal'$, (b) rank the groups, (c) update $\Acal$ and $\Acal'$. In step 8 we choose not to discard the singletons just to make sure that $\Acal$ always covers $\{1,\ldots,P\}$. The stopping criterion (step 5) may be that we have no more groups to consider (if $P$ is small enough), or that we have reached a predefined maximal number of iterations.
\begin{algorithm}
\caption{Active set procedure for the discovery of relevant groups}\label{alg:greedy}
\begin{algorithmic}[1]
\REQUIRE $T\in\NN$, $D\in\NN$
\STATE Let $\Acal =\{1,\ldots,P\}$ and $\Dcal=\emptyset$
\STATE $f \gets \text{variational}(\Acal)$
\STATE Rank all $A\in\Acal$ according to their relevance
\STATE Determine $\Acal'$ \\
(make sure $\left\vert\Acal\cup\Acal'\right\vert\!\le\! T, \Acal'\cap(\Acal\cup\Dcal)\!=\!\emptyset$)
\WHILE {stopping condition not met}
\STATE $f \gets \text{variational}(\Acal \cup \Acal')$
\STATE Rank $A\!\in\Acal\! \cup \!\Acal'$ according to their relevance
\STATE Add to $\Dcal$ the $D$ least relevant non-singletons in $\Acal\cup\Acal'$
\STATE $\Acal \gets \Acal\cup\Acal' \setminus\Dcal$
\STATE Determine $\Acal'$\\
(make sure $\left\vert\Acal\cup\Acal'\right\vert\!\le\! T, \Acal'\cap(\Acal\cup\Dcal)\!=\!\emptyset$)
\ENDWHILE
\end{algorithmic}
\end{algorithm}
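The candidate-generation step of Algorithm~\ref{alg:greedy} (steps 4 and 10) can be sketched as follows. Names are ours, and we assume the groups are ranked by increasing $f(A)/|A|$, i.e., most relevant first — an assumption on the direction of the ranking, since only the relevance measure itself is specified above.

```python
def propose_candidates(ranked, seen, budget):
    """Candidate groups A_i U A_j, most relevant pairs first, skipping groups
    already considered; stops once `budget` new groups have been collected.

    ranked : groups sorted by increasing f(A)/|A| (most relevant first)
    seen   : set of frozensets already considered (current A and discarded D)
    """
    out = []
    for i in range(len(ranked)):
        for j in range(i + 1, len(ranked)):
            cand = frozenset(ranked[i]) | frozenset(ranked[j])
            if cand in seen or cand in out:   # never add the same group twice
                continue
            out.append(cand)
            if len(out) == budget:
                return out
    return out
```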
\section{Approximation Quality and Regularization}
\label{sec:reg}
The goal of this section is to experimentally study the behavior of our approximate inference scheme in terms of estimation quality and to clarify how we can control it.
As we empirically show below, the variational approximation scheme from Section \ref{sec:variational} tends to overestimate the variance of the prior distribution (i.e., underestimate the inverse scale parameter $f(A)$) when this variance is smaller than~$\sigma^2$, the noise variance. This is undesirable, as we would like $f(A)$ to tend to infinity for irrelevant groups of variables.
To circumvent this problem, we use an improper hyperprior of the form $p(f(A)) \propto f(A)^\beta$ to encourage $f(A)$ to go to infinity when the variance of $p(v_A)$ is smaller than $\sigma^2$.
Consequently, the regularization term $-K\beta\sum_{A\in\Acal}\log f(A)$ with $\beta>0$ is added to the objective function \eqref{eq:obj}, and the only update that changes is that for $f(A)$:
\begin{equation}
f(A) = \frac{K(\beta + \frac{|A|}{2})}{\frac{1}{2}\sum_{k=1}^K \frac{1}{\zeta_A^k} (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k)}.
\end{equation}
Thus, we substitute the approximate type-II maximum likelihood estimation of $f(A)$ by approximate (also ``type-II'') maximum a posteriori estimation. In Sections \ref{sec:exp_p_1} and \ref{sec:exp_p_2} we empirically study the effect of the parameter $\beta$ on the approximation quality.
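A minimal implementation of this regularized update for a single group (names ours); setting $\beta=0$ recovers the unregularized update for $f(A)$ given earlier.

```python
def f_update(card, zetas, S_list, beta=0.0):
    """MAP update of f(A) for one group of cardinality `card`.

    zetas  : zeta_A^k across the K tasks
    S_list : ||v_A^k||^2 + tr Sigma_AA^k across the K tasks
    beta   : hyperprior exponent; beta = 0 gives the ML update
    """
    K = len(zetas)
    denom = 0.5 * sum(S / z for S, z in zip(S_list, zetas))
    return K * (beta + 0.5 * card) / denom
```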
\subsection{Scale parameter inference with only one variable}
\label{sec:exp_p_1}
In this experiment, we evaluate the performance of the variational method described in Section \ref{sec:variational} in recovering the unknown scale parameter $f$ of the prior in the simplest, 1-dimensional case (note that in this subsection we omit the subscripts $A$ as $\Acal =\{\{1\}\}$). More specifically, our goal here is to answer the following questions: Given an i.i.d. sample drawn from a univariate Student's $t$ with shape and inverse scale parameters $a$ and $f$, corrupted by Gaussian noise, and supposing we know both the noise variance $\sigma^2$ and the shape parameter $a$, can we precisely estimate the inverse scale parameter $f$ using the variational method from Section \ref{sec:variational}? In the settings where we cannot, does regularization improve our estimates?
\paragraph{Experimental setup.} We consider 10,000 tasks with one variable and one observation each ($P$, $N^k$ for all $k$, and $X^k$ for all $k$ equal to 1). Data are generated from the model with Student's $t$ prior on $v^k$ with parameters $a$ set to 1.5 and $f$ varying in the set $\mathcal{F}$ of 14 values between $0.02$ and $50$ taken roughly uniformly on the logarithmic scale, and Gaussian noise with variance $\sigma^2$ set to $1$.
We compare the performance of the variational method with that of a grid search over $\mathcal{F}\cup\{10^5\}$, where we use the trapezoidal rule to numerically evaluate the intractable integral in \eqref{eq:ML_brute}. The grid search, feasible in this basic setting, provides the best available approximation to the regularized maximum likelihood solution.
To reduce the effect of random fluctuations, we repeat all experiments 5 times with different random seeds and report averaged results.
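The trapezoidal approximation of the one-dimensional marginal likelihood can be sketched as follows, using the univariate ($|A|=1$) Student's $t$ density given earlier; function names are ours.

```python
import numpy as np
from math import gamma as Gamma, pi, sqrt

def student_t_pdf(v, a, f):
    # univariate (|A| = 1) case of the paper's Student's t density
    return sqrt(f) * Gamma(a + 0.5) / (Gamma(a) * sqrt(2 * pi)) \
        * (1.0 + 0.5 * f * v ** 2) ** (-(a + 0.5))

def log_marginal(y, a, f, sigma2, v_grid):
    """Trapezoidal approximation of log int N(y^k | v, sigma2) p(v | a, f) dv,
    summed over tasks. y: (K,) array, v_grid: uniform 1-D grid."""
    like = np.exp(-(y[:, None] - v_grid[None, :]) ** 2 / (2 * sigma2)) \
        / np.sqrt(2 * np.pi * sigma2)
    integrand = like * student_t_pdf(v_grid, a, f)[None, :]
    h = v_grid[1] - v_grid[0]
    integral = h * (integrand.sum(axis=1) - 0.5 * (integrand[:, 0] + integrand[:, -1]))
    return np.log(integral).sum()
```

A grid search then evaluates `log_marginal` for each candidate $f$ and keeps the maximizer.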
\paragraph{Results.} Figure~\ref{fig:reg_scale_p1} summarizes the results. For three values of the parameter $\beta$, we plot (on the logarithmic scale) the estimated against the true variance for the considered range of the parameter $f$ (recall that the variance of a Student's $t$-distributed random variable with parameters $a$ and $f$ equals $\frac{1}{(a-1)f}$). In all figures, we also plot the variance of the Gaussian noise $\sigma^2$. We observe that in the absence of regularization ($\beta=0$) and when the signal is not much stronger than noise, the variational method overestimates the signal variance while the grid search does not. As we add regularization, this effect gradually goes away and the signal variance estimate is set to 0 (i.e., the estimate of $f$, $\widehat{f}$, goes to infinity) if the true signal variance is smaller than a certain threshold. When the regularization is too strong ($\beta=0.25$), the estimated signal variance drops to 0 even when the signal is stronger than the noise, and the variance of the signal is heavily underestimated. With the right amount of regularization ($\beta=0.05$ in this case) we observe the desired behavior: The variational method recovers the signal when it is stronger than noise, and sets $\widehat{f}$ to infinity otherwise. In all cases, variational estimates are close to the maximum likelihood estimates obtained by the grid search when the signal is much stronger than the noise.
\begin{figure}
\begin{center}
\includegraphics[width=.95\textwidth]{reg_scale_p1}
\end{center}
\caption{Recovery of the variance of the univariate Student's $t$ distribution with added Gaussian noise of known variance with grid search and the variational method, with different levels of regularization. The x and y axes represent the variance based on the true and on the estimated $f$ parameter values, respectively.}
\label{fig:reg_scale_p1}
\end{figure}
\subsection{Structured sparsity with two variables}
\label{sec:exp_p_2}
In this section, we empirically study the most basic case of the group relevance learning problem. Suppose that in each task we only have 2 variables, and therefore 3 possible groups, $\Acal =\{\{1\},\{2\},\{1,2\}\}$. Let $X^k$ be the identity matrix in each task. In this basic setting, and supposing that the data come from the model, can our inference algorithm distinguish the case where the data $\{y^k\}_{k=1,\ldots,K}$ are generated by the group of variables $\{1,2\}$ from the opposite case, where the relevant groups are the two singletons $\{1\}$ and~$\{2\}$?
These two settings in fact differ significantly when the prior on $v_A^k$ is heavy-tailed: We have $w^k = v_{ \{1\} }^k + v_{ \{2\} }^k + v_{ \{1,2\} }^k$.
If $\{1, 2\}$ is relevant and $\{1\}$ and $\{2\}$ are not,
then $v_{ \{1\} }^k$ and $v_{ \{2\} }^k$ will be close to zero for all $k$, whereas $v_{ \{1,2\} }^k$ will be far from zero for some $k$. As the prior on $v_A^k$ depends on $v_A^k$ only through its norm, these $v_{ \{1,2\} }^k$ fall anywhere on the circle of radius $\|v_{ \{1,2\} }^k\|_2$ with equal probability, and therefore $y^k$ can also lie anywhere on the circle of radius $\|y^k\|_2$.
In contrast,
when $\{1, 2\}$ is irrelevant and $\{1\}$ and $\{2\}$ are relevant,
the rare events of $v_{ \{1\} }^k$ and $v_{ \{2\} }^k$ both being far from zero will seldom occur simultaneously, and therefore the $y^k$ with a large norm will tend to concentrate along the axes. This behavior (using a Student's $t$ prior with parameter $a=1.5$ on $v_A^k$) is illustrated in Figure~\ref{fig:singleton_pair_data}, where we plot the data $\{y^k\}_{k=1,\ldots,K}$ for $K=5,000$ in both settings.
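Data for both regimes can be drawn via the Gaussian scale mixture representation of Student's $t$ (Gamma-distributed precision, equivalently inverse-Gamma mixing). A sketch, with names and the particular parameter values ours:

```python
import numpy as np

def sample_student_t_group(rng, K, card, a, fA):
    """Student's t draws via its Gaussian scale mixture representation:
    tau ~ Gamma(a, rate 1), v | tau ~ N(0, (fA * tau)^{-1} I)."""
    tau = rng.gamma(shape=a, scale=1.0, size=K)
    return rng.standard_normal((K, card)) / np.sqrt(fA * tau)[:, None]

def generate_2d(rng, K, relevant, f_rel, f_irr, a, sigma2):
    """y^k for the two-variable toy problem; `relevant` lists the relevant
    groups among (0,), (1,), (0, 1)."""
    w = np.zeros((K, 2))
    for g in [(0,), (1,), (0, 1)]:
        fA = f_rel if g in relevant else f_irr
        w[:, list(g)] += sample_student_t_group(rng, K, len(g), a, fA)
    return w + np.sqrt(sigma2) * rng.standard_normal((K, 2))
```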
\begin{figure}
\begin{center}
\includegraphics[width=.5\textwidth]{singleton_pair_data_a15}
\end{center}
\caption{On the left, the singletons are the relevant groups. On the right, the pair is the relevant group.}
\label{fig:singleton_pair_data}
\end{figure}
\paragraph{Experimental setup.} We consider 5,000 tasks with $P$ and $N^k$ for all $k$ equal to 2, with the set of groups $\Acal =\{\{1\},\{2\},\{1,2\}\}$.
The data are generated from the model with Student's $t$ prior on $v^k$ with parameters $a$ set to 1.5 and each $f(A)$ varying in a set of 14 values between $0.01$ and $25$ taken roughly uniformly on the logarithmic scale ($f(\{1\})$ and $f(\{2\})$ always equal each other), and Gaussian noise with variance $\sigma^2$ set to $1$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.92\textwidth]{singleton_pair_res_a15.png}
\caption{A red (blue) square means that the estimate of the singleton (group) variance is larger than the estimate of the group (singleton) variance for the corresponding true singleton and pair variances indicated by the axes. A black square means that both singleton and pair variances are under $2\sigma^2$, the noise variance. Best seen in color.}
\label{fig:singleton_pair_results}
\end{center}
\end{figure}
\paragraph{Results.} Figure~\ref{fig:singleton_pair_results} summarizes the results for three values of the regularization parameter $\beta$ ($\beta=0$ corresponds to the absence of regularization). We report when the estimated pair variance $\frac{2}{(a-1)\widehat{f}(\{1,2\})}$ dominates (blue) or is dominated (red) by the estimated singleton variance $\frac{1}{(a-1)\widehat{f}(\{1\})}+\frac{1}{(a-1)\widehat{f}(\{2\})}$, provided that one of them is larger than the noise variance, $2\sigma^2$. We see that when we do not regularize, the variational method explains everything with the singletons. As we add regularization, the pair explains more and more of the variance, though in such a way that it also absorbs the signal coming from the singletons. Nonetheless, there is a regime ($\beta=0.03$) where a strong signal coming from both the singletons and the pair is identified correctly. If we regularize too strongly ($\beta=0.15$), the entire signal is explained by the pair, regardless of its source.
\section{Experiments}
\label{sec:exp}
In our experiments we consider two different instances of the denoising problem and we empirically evaluate the performance of our approach in recovering both the signal and the structure.
\subsection{Structured sparsity in the context of denoising}
In this section, we study toy multi-task structured sparse denoising problems. Our goal is to answer the following questions: Given data $\{y^k\}_{k=1,\ldots,K}$, generated from the model, and assuming that we know the true shape parameter $a$ of the Student's $t$ and the noise variance~$\sigma^2$, (a) can we recover the structure (i.e., the relevant groups and their weights), and (b) if we use the correct structure, is our denoising more accurate than when using a different structure?
\paragraph{Experimental setup.} To this end, we consider 10,000 tasks with $P$ and $N^k$ for all $k$ equal to 10, with the set of groups $\Acal =\{\{Q\}_{Q=1,\ldots,P}, \{1,\ldots,Q\}_{Q=2,\ldots,P}\}$.
Each signal $w^k$ is generated using Student's $t$ with parameters $a$ set to $1.5$ and $f(A)$ set to 0.2 or to 200, depending on whether $A$ is considered relevant or irrelevant: In this fashion, the variance of the signal coming from relevant $A$ is $\frac{|A|}{(a-1)f(A)}=10\times|A|$ (respectively, $0.01\times|A|$ for irrelevant $A$). For each task~$k$, $y^k$ is a perturbed version of the signal $w^k$ with additive Gaussian noise of variance $\sigma^2I$.
We consider three different ways of generating data:
\begin{itemize}
\item {\bf Singletons}: Here, only $\{1\},\ldots,\{5\}$ are relevant, all other groups in $\Acal$ are irrelevant.
\item {\bf One group}: Only $\{1,2,3,4,5\}$ is relevant.
\item {\bf Overlapping groups}: The groups $\{1\}$, $\{1,2\}$, \ldots, $\{1,2,3,4,5\}$ are relevant.
\end{itemize}
For the three cases, we choose $\sigma^2$
so that the total noise variance $P\sigma^2$ equals the total signal variance in each case.
We consider four models of increasing complexity for inference:
\begin{itemize}
\item {\bf LASSO-like}: In this simplest model, we only use the singletons, $\Acal = \{\{1\},\ldots,\{P\}\}$, and moreover, we force $f(A)$ to be constant across $\Acal$; In order to do so, we change the update for $f(A)$ to
$f(A) = \frac{K\sum_{A\in\Acal}(\beta + \frac{|A|}{2})}{\frac{1}{2}\sum_{k=1}^K\sum_{A\in \Acal} \frac{1}{\zeta_A^k} (\|v_A^k\|^2_2 + \tr \Sigma_{AA}^k)}.$
This mimics the behavior of the LASSO, as the prior (that we are learning here) is the same for each coefficient.
\item {\bf Weighted LASSO-like}: The usual model with
$\Acal \!=\! \{\{1\},\ldots,\{P\}\}$.
\item {\bf Structured}: The usual model with $\Acal \!=\!\{\{Q\}_{Q=1,\ldots,P},\{1,\!\ldots,\! Q\}_{Q=2,\ldots,P}\}$.
\item {\bf Structured (active set)}: The model where we also learn $\Acal$ using Algorithm~\ref{alg:greedy} (with parameters $T=4P$, $D=2P$, and 5 active set updates).
\end{itemize}
We examine each of the 12 combinations of data generation and learning models. In each case, we use half of the tasks to find the optimal $\beta$ in terms of the mean squared prediction error (i.e., the mean squared difference between the true and the learned signals $w^k$) from a predefined range of 7 values, and the other half to learn with this $\beta$ and evaluate the test error.
\begin{table}[t]
\begin{center}
\begin{tabular}{|r|c| c |c|}
\hline
& Singletons & One group & Overlapping \\ \hline
LASSO-like & 18.5$\pm$0.3 & 18.6$\pm$0.4 & 58.4$\pm$1.1\\ \hline
W. LASSO-like & {\bf 14.5}$\pm$0.3 & 14.5$\pm$0.3 & {\bf 42.8}$\pm$0.9\\ \hline
Structured & 14.8$\pm$0.3 & {\bf 13.8}$\pm$0.3 & 43.0$\pm$0.9 \\ \hline
Structured(AS) & 14.6$\pm$0.3 & 14.0$\pm$0.3 & {\bf 42.8}$\pm$0.9 \\ \hline
\end{tabular}
\end{center}
\caption{Squared error averaged over the tasks with $95\%$-confidence error bars for each combination of data generation and learning models. The usage of boldface indicates that the corresponding method significantly outperforms the others, as measured using a $t$-test at the level $0.05$.\label{tab:12comb}}
\end{table}
\paragraph{Results.} We begin by examining the performance of each of the four models in \emph{signal recovery}: In Table \ref{tab:12comb} we report the mean squared error on the 5,000 test tasks with $95\%$-confidence error bars. For all three regimes for data generation, the LASSO-like model performs far worse than the three others in recovery. This is due to the fact that this model learns the same prior for all variables, although not all variables have the same marginal variance. In the first and third data generation regimes W.LASSO performs slightly better than Structured in signal recovery, while Structured has an advantage when a single group is relevant. The performance of Structured(AS) is systematically close to, or on a par with that of the best-performing model.
\begin{figure}[t]
\begin{center}
\includegraphics[width=.8\textwidth]{p10_exp_variance_23_3.png}
\caption{For each group of variables on the y axis, the intensity of gray indicates the percentage of total explained variance per $\beta$. \label{fig:p10_exp_variance_23_3}}
\end{center}
\end{figure}
In terms of \emph{structure recovery}, for all three data generation regimes, we find one or more values of $\beta$ that lead to the recovery of the relevant groups by Structured and Structured(AS), with either the same or a slightly different $\beta$ value leading to the smallest error in signal recovery. Figure~\ref{fig:p10_exp_variance_23_3} illustrates the percentage of total explained signal variance by each group for the One group and Overlapping regimes and for the Structured model, for all considered regularization parameters: With no regularization, the model explains the signal with both the relevant group(s) and the singletons included in the relevant group(s), but with increasing regularization, the signal variance explained by smaller groups is taken over by larger ones. The groups containing elements from $\{6,\ldots,10\}$, not shown in the plot, explain no variance under any amount of regularization, with the exception of the largest group $\{1,\ldots,10\}$, which explains the weak signal coming from the irrelevant groups (recall that we have non-zero signal variance $0.01\times|A|$ for the irrelevant groups $A$) in the weak and moderate regularization regimes and takes over the whole signal variance when the regularization is too strong.
In summary, the performance in denoising does not change drastically depending on the amount of regularization, unless it is too strong; However, a small amount of regularization is likely to better capture the structure than no regularization; If there is a strong group structure among the variables, regularization may also lead to better recovery.
A formal criterion to set the value of the hyperparameter $\beta$ would be to maximize its likelihood, as is customary in Bayesian methods.
\subsection{Image denoising with wavelets}
In this section, we consider the image denoising problem using wavelets. The Haar wavelet basis for 2-dimensional images~\citep{mallat} can naturally be arranged in three rooted directed quad-trees, which can be connected to form one tree by attaching the three roots to an artificial parent node; The structured sparsity-inducing norms with non-zero groups that are paths from the root in this tree have shown improvements over the $\ell_1$ norm~\citep{Jenattonetal11}. Our goal is to find out whether, in this task, (a) a value of $\beta$ that leads to good recovery for a set of images is also close to optimal for another set of images of roughly the same size, at least when the noise level is unchanged (stability of the hyperparameter); (b) learning a non-uniform prior on singletons improves recovery with respect to using a uniform prior (importance of learning a non-uniform prior); (c) learning the group structure helps beyond learning a non-uniform prior on singletons (importance of learning group relevances).
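As an illustration of the path groups, here is a toy construction for a single quad-tree; the node numbering is ours, and the actual layout of wavelet coefficients depends on the transform implementation.

```python
def quadtree_path_groups(depth):
    """All root-to-node paths in a quad-tree of the given depth
    (nodes numbered breadth-first, 0 = root)."""
    parent = {0: None}
    frontier, nxt = [0], 1
    for _ in range(depth):
        next_frontier = []
        for p in frontier:
            for _ in range(4):       # each node has four children
                parent[nxt] = p
                next_frontier.append(nxt)
                nxt += 1
        frontier = next_frontier
    groups = []
    for n in parent:                 # walk up from each node to the root
        path, cur = [], n
        while cur is not None:
            path.append(cur)
            cur = parent[cur]
        groups.append(frozenset(path))
    return groups
```

For the full Haar basis one would build three such trees and attach them to an artificial common root, as described above.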
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.2\textwidth]{barbara.png} \quad \includegraphics[width=0.2\textwidth]{fingerprint.png} \quad \includegraphics[width=0.2\textwidth]{lena.png} \quad \includegraphics[width=0.2\textwidth]{house.png}
\end{center}
\caption{The images used in our experiments (Barbara, Fingerprint, Lena, House).\label{fig:images}}
\vspace{.3cm}
\end{figure}
\paragraph{Experimental setup.} In order to denoise a large grayscale image, we cut it into possibly overlapping patches of $32\times 32$ pixels, which compose the multiple 1024-dimensional signals that we denoise simultaneously by learning the appropriate (structured) prior. We use four well-known images (see Figure~\ref{fig:images}), Barbara, Fingerprint, Lena ($512\times 512$ pixels each), and House ($256\times 256$ pixels). Each signal $w^k$ is formed by the wavelet coefficients of one $32\times 32$ patch. For each of the $K=961$ tasks ($841$ for House) we form $y^k$ by adding Gaussian noise of variance $\sigma^2=400$ along each dimension.
As in the previous section, we examine the performance of four instances of our model: the model with a uniform factorized sparse prior (LASSO-like), the model with a non-uniform factorized sparse prior (W.LASSO-like), the model with structured norms on all descending (equivalently, ascending) paths in the rooted tree (Structured), and the model with structured norms on groups that we discover in the process of learning, with~2 active set updates (Structured(AS)).
We consider a predefined range of 6 values for the regularization hyperparameter $\beta$, and 3 values ($0.5, 1.1, 1.5$) for the shape parameter~$a$ of Student's~$t$.
We compare the behavior of our methods with that of existing algorithms based on sparsity-inducing norms, which are not designed to learn group weights from data. From the family of such approaches, we choose the ``Tree-$\ell_2$'' structured norm proposed by \citet{Jenattonetal11}, and the classical LASSO \citep{Tibshirani94} on the wavelet coefficients. (We would like to stress here that ``Tree-$\ell_2$'' does need group weights to be specified, but does not provide a systematic way to learn them. They are usually set by introducing a group-weighting parameter $\alpha$ so that $\alpha^d$ is the weight of all groups at depth $d$ in the tree, and then optimizing $\alpha$ over a predefined range of values using cross-validation.)
We run these methods on each set of small images with the regularization parameter $\lambda$ and the group-weighting parameter $\alpha$ (only for Tree-$\ell_2$) varying over
predefined ranges of 75 and 7 values
respectively, and report the smallest error.
To train the LASSO and learn with the Tree-$\ell_2$ norm, we use the ``proximal'' toolbox of the software package SPAMS~\citep{Jenattonetal11}.
\paragraph{Results.} Table \ref{tab:mse_images_32} shows the best performance in terms of the mean squared error of each method on each image (which corresponds to a set of $K$ small images). The values in the parentheses for our proposed methods indicate the value of $\beta$ corresponding to the minimal error. The performance of our proposed methods with respect to the shape parameter $a$ is systematically slightly better for larger $a$, and all reported results correspond to $a=1.5$.
According to these results, (a)~the performance of a given value of $\beta$ in signal recovery indeed seems to be stable across images (note that we have also observed that the performance on a given image is robust to small changes of the value of the hyperparameter); (b)~the fact that the LASSO and our LASSO-like model are systematically outperformed by models that weight each variable confirms the intuition that learning how to weight individual variables should boost the estimation quality; (c)~it seems that learning a prior on joint relevances of variables can lead to improved performance, as shown in the column corresponding to Fingerprint, although this is not always the case: on House and Lena, the performance of methods that learn group relevances is not significantly different from that of Tree-$\ell_2$, and in the case of Barbara they perform worse. Inspecting the relevances of different groups (paths in the wavelet tree) learned by Structured, we see that the groups explaining the bulk of the variance are overlapping groups of 2, 3, or 4 elements, mostly descending from the roots of the three quad-trees. In contrast, the relevant groups selected by Structured(AS) tend to consist of one to three roots of the three wavelet quad-trees and one or two wavelets of higher frequency, suggesting that paths in the wavelet tree may not always be the most natural groups in this problem. Lastly, let us stress that while ``Tree-$\ell_2$'' is applicable in problems where variables can be structured in a tree given in advance, our proposed approach applies to any known or unknown group structure.
The Matlab code used in our experiments is available at \url{http://cbio.ensmp.fr/~nshervashidze/code/LLSS}.
\begin{table}
\begin{center}
\resizebox{0.98\linewidth}{!}{
\begin{tabular}{|r |c|c|c|c|}
\hline
& Barbara & House & Fingerprint & Lena \\
\hline
LASSO-like & 179.0$\pm$4.6 (0.001) & 107.5$\pm$2.6 (0.001) & 247.5$\pm$1.7 (0.005) & 110.3$\pm$2.8 (0.001) \\ \hline
W.LASSO-like & 163.3$\pm$5.1 (0) & 93.7$\pm$2.6 (0) & 195.0$\pm$1.8 (0.0001) & 89.5$\pm$3.2 (0) \\ \hline
Structured & 164.8$\pm$5.3 (0) & 95.3$\pm$2.9 (0) & {\bf 193.6}$\pm$1.8 (0.0005) & 90.3$\pm$3.5 (0) \\ \hline
Structured(AS) & 163.1$\pm$5.0 (0.0001) & 92.9$\pm$2.3 (0.0001) & 194.9$\pm$1.8 (0.001) & 89.5$\pm$2.8 (0.0001) \\ \hline
\hline
Tree-$\ell_2$&{\bf 155.3}$\pm$6.4 & 93.3$\pm$3.8 & 214.9$\pm$2.4 & 88.7$\pm$3.7 \\ \hline
LASSO &176.7$\pm$6.4 & 102.1$\pm$3.6 & 250.0$\pm$2.2 & 106.6$\pm$3.9 \\ \hline
\end{tabular}
}
\end{center}
\caption{Squared error averaged over the small images with $95\%$-confidence error bars, for each combination of image and learning method. The usage of boldface indicates that the corresponding method significantly outperforms the others, as measured using a $t$-test at the level $0.05$. (Each number is divided by 1000 for readability.)\label{tab:mse_images_32}}
\end{table}
\section{Conclusions and Future Work}
\label{sec:concl}
In this paper, we have proposed a flexible and general probabilistic model and an associated inference scheme for automatically learning the weights of possibly overlapping groups in the context of structured sparse multi-task linear regression. We have shown that the classical variational inference scheme is not well adapted for learning with this model, and have proposed a regularization method that closes this gap. This has allowed us to investigate the effect of learning group weights in denoising problems, leading to the conclusion that learning penalties can significantly improve prediction quality, as well as the interpretability of the models, in this context. We have furthermore devised an active-set procedure that makes the inference with our model scalable to settings with large~$P$ and a large number of potential groups in~$\Acal$.
In our future work we may consider different likelihood models to handle settings different from linear regression, such as binary classification. Learning group relevances for classification is indeed crucial, e.g., in the context of genome-wide association studies with binary phenotypes in computational biology, or for image segmentation in computer vision.
In the appendix we provide details on the derivation of the variational inference scheme for our model (briefly introduced in Section \ref{sec:variational}) and discuss efficient ways of implementing the closed-form updates~\eqref{eq:updates}.
\section{Introduction}
The diversity of atomic motion in metallic glasses (MGs) is central to their
unique physical and mechanical properties. The primary or $\alpha$-relaxation
underlies the drastic slowing down of the collective atomic dynamics during the
transition from a viscous supercooled liquid to a glassy solid upon cooling,
and its origin is still an outstanding problem in condensed matter physics.
Indeed, like many other disordered solids, such as polymers and molecular
glasses, MGs exhibit an entire class of secondary relaxations that persist even
well below the glass transition temperature
$T_g$~\cite{Yu2014a,Yu2013b,Ku2017}. These phenomena are broadly referred to as
$\beta$-relaxations and occur on time scales much shorter than that of the
$\alpha$-relaxation. The Johari-Goldstein (JG) $\beta$-relaxation is the most
well known amongst these, due to its ubiquity in all types of
glasses~\cite{Jo1970,Ng2000}. Although the exact atomic-scale mechanism
underlying the JG $\beta$-relaxation in MGs is still not clear, there appears
to be a correlation to the $\alpha$-relaxation, deformation and mechanical
properties (see~\cite{Yu2014a} and references therein). In this regard,
unraveling the atomic-scale dynamical features of the JG $\beta$-relaxation
would represent considerable progress in our current understanding of its
microscopic origin and its impact on the physical and materials properties of
glasses~\cite{Ruta2017}.
A key open question concerns the role of the different atomic or molecular constituents in the various relaxation processes, and in particular whether a relaxation process is controlled by the dynamics of a particular type of constituent. In the case of organic molecular glasses, it has recently been argued that all molecules participate in the JG relaxation, although not all at once~\cite{Cicerone2017}. This question has not been investigated in metallic glasses, although the relative contributions of different atomic species to the peak temperature of the JG relaxation have been addressed in Ref.~\cite{Zhu2014}.
While many studies have examined both the structural and relaxational features of the JG $\beta$-relaxation in MGs~\cite{Evenson2014b,Yu2017,Wang2015a,Liu2014}, the connection to the atomic-scale vibrational properties remains largely unexplored to date. The JG $\beta$-relaxation in MGs generally occurs on microsecond time scales, several orders of magnitude shorter than the time scale of the $\alpha$-relaxation of the glass~\cite{Yu2017, Liu2017}. However, accessing the atomic-scale dynamics of MGs in this temporal regime is both experimentally and computationally challenging. Novel coherent x-ray scattering techniques probe collective atomic motion on time scales longer than about one second~\cite{Wang2015a, Ruta}, while molecular dynamics (MD) simulations of the MG glassy-state dynamics have only recently been successfully extended up to 10 microseconds~\cite{Yu2017}.
Here, we combine experimental and simulation investigations with a microscopic
theoretical framework of viscoelastic response and relaxation of MGs. With this
novel approach, we are able to unveil the atomic-scale dynamics in MGs on
time-scales over some 12 orders of magnitude, thus providing necessary,
complementary information for advanced simulation and experimental studies.
Considering the success of our recent theoretical work in linking the
low-energy boson peak (BP) with $\alpha$-relaxation and dynamical heterogeneity
in glasses~\cite{Cui,Cui2}, the results presented in this paper give new
insight into the atomic-scale dynamical facets of the JG $\beta$-relaxation in
MGs. In particular, we present strong evidence that the JG $\beta$-relaxation is controlled by the smallest (lightest) atomic species present in the MG, and that the existence of two relaxation modes ($\alpha$ and JG $\beta$) can be traced back to the large differences in atomic mass of the metallic elements that comprise the MG.
\section{Experimental methods}
\subsection{Dynamical mechanical analysis}
The dynamical mechanical analysis (DMA) experiments were carried out according to the procedure outlined in Ref.~\cite{Zhu2014} using a TA Q800 dynamical mechanical analyzer. Fully amorphous cylindrical samples of
La$_{60}$Ni$_{15}$Al$_{25}$ with a diameter of 2 mm were tested using the
single-cantilever bending method in an isothermal mode with a strain amplitude
of 5 $\mu$m, temperature step of 3 K and discrete testing frequencies of 1, 2,
4, 8, and 16 Hz. The complex viscoelastic shear modulus is obtained as $G(\omega,T)=G'(\omega,T)+iG''(\omega,T)$ as a function of test frequency $\omega$ and temperature $T$, with mechanical relaxations appearing as peaks in the loss modulus $G''(\omega,T)$.
\subsection{Inelastic neutron scattering}
Glassy ribbons of La$_{60}$Ni$_{15}$Al$_{25}$ were produced by melt spinning at
the Institute for Physics, Chinese Academy of Sciences in Beijing. About 12 m
of ribbons with a cross-section of 2.5 $\times$ 0.06 mm$^2$ were placed in a
thin-walled aluminum hollow cylinder (height 51 mm, diameter 20 mm, thickness
0.55 mm) for the inelastic neutron scattering (INS) experiments at the
time-of-flight spectrometer TOFTOF in Garching. An incident wavelength of
$\lambda_i = 2.8$\,\AA\, resulted in an accessible momentum transfer range of
$0.8 \leq q \leq 4.2$ \AA$^{-1}$ at zero-energy transfer. The raw data were
normalized to a vanadium standard, corrected for empty container scattering and sample self-absorption, and interpolated to constant $q$ in order to obtain
the dynamic structure factor. The background was corrected by separate
measurements of the cryostat with an empty sample holder. As the scattering
probability of the ribbons was calculated to be around 8\,\%, multiple
scattering effects were neglected.
In order to access the largest energy transfer range available, only the data
located on the neutron energy gain side of the spectrometer were analyzed. In a
multi-component system with predominantly coherent scatterers, a generalized,
neutron-weighted vibrational density of states (VDOS) $D(\omega_p)$ can be
obtained under the incoherent one-phonon approximation, where the measured
dynamic structure factor, integrated over the accessible $q$-range, is
proportional to $D(\omega_p)/\omega_p^2$~\cite{Meyer1996}. The neutron-weighted
VDOS was obtained in an iterative procedure using the FRIDA-1
software~\cite{frida,Wuttke1993}.
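As an illustration of this final reduction step, the sketch below applies the one-phonon relation to a toy $S(q,\omega)$: integrate over the accessible $q$-range, correct for the thermal occupation on the energy-gain side, and multiply by $\omega^2$. The synthetic data and the single-pass Bose-factor correction are assumptions made here for demonstration; they do not reproduce the iterative FRIDA procedure.

```python
import numpy as np

# Toy one-phonon extraction of a generalized VDOS from a "measured" S(q, w).
kB_T = 8.617333e-5 * 300.0                      # k_B * T in eV at T = 300 K
w = np.linspace(1e-3, 40e-3, 200)               # energy transfer (eV)
q = np.linspace(0.8, 4.2, 50)                   # accessible q-range (1/Angstrom)

# Fake dynamic structure factor: Gaussian in energy, ~q^2 intensity growth
S = q[:, None] ** 2 * np.exp(-((w[None, :] - 10e-3) / 5e-3) ** 2)

# Integrate S(q, w) over q (trapezoidal rule along the q axis)
S_q = np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(q)[:, None], axis=0)

# Divide out the thermal occupation (neutron energy-gain side), multiply by w^2:
# the q-integrated S is proportional to D(w)/w^2 in the one-phonon approximation
n_bose = 1.0 / (np.exp(w / kB_T) - 1.0)
D = w ** 2 * S_q / n_bose                       # generalized VDOS (unnormalized)
D /= np.sum(0.5 * (D[1:] + D[:-1]) * np.diff(w))  # normalize to unit area
```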
\section{Molecular Dynamics simulations}
Classical molecular dynamics (MD) simulations were performed for the
La$_{60}$Ni$_{15}$Al$_{25}$ metallic alloy system using the LAMMPS
package~\cite{Pl1995}. The interatomic interactions were described by the
embedded-atom method (EAM) potential of Ref.~\cite{Sh2008}. Details can be found in Appendix A.
To obtain the VDOS $D(\omega_p)$ of the system at various temperatures, the direct diagonalization method was adopted: the steepest-descent method is first applied to the final configuration to bring it to the nearest local minimum of the potential energy.
The structure model contains 10,000 atoms in a cubic box with periodic boundary
conditions applied in three dimensions. It was first fully equilibrated at
T=2000 K for 1 ns in the NPT (isobaric and isothermal) ensemble, then cooled
down to 300 K with a cooling rate of 10$^{12}$ K/s. In the cooling process, the
box size was adjusted to give zero pressure. At 300 K, the structure was then
relaxed for 2 ns in the NPT ensemble. To obtain the atomic structures at 330, 360, 390, and 410 K, the structure at 300 K was then heated at a rate of 10$^{10}$ K/s and relaxed for 2 ns in the NPT ensemble at each temperature of interest. The MD time step was set to 2 fs.
The dynamical matrix corresponding to the potential energy minimum reached by the LAMMPS line-search minimization is given by
\begin{equation}
H_{ij} = \frac{1}{\sqrt{m_{i}m_{j}}} \frac{\partial^{2} U}{\partial
\underline{x}_i
\partial \underline{x}_j}
\end{equation}
where $U$ is the total internal energy of the system (which is a function of
all atoms' coordinates), $m_{i}$ is the mass of atom $i$ and $\underline{x}_i$
is the coordinate vector of atom $i$. The VDOS can be calculated by directly
diagonalizing the dynamical matrix as
\begin{equation}
D(\omega_p) = \frac{1}{3N-3} \sum_{\lambda} \delta (\omega_p -
\omega_{\lambda}),
\end{equation}
where $\omega_{\lambda}$ is the eigenfrequency.
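The same construction can be illustrated on a toy model. The sketch below builds the mass-weighted dynamical matrix for a 1D periodic spring chain whose masses mimic the La$_{60}$Ni$_{15}$Al$_{25}$ stoichiometry (this is not the EAM system of this work, just the same diagonalization recipe on a solvable model), diagonalizes it directly, and histograms the eigenfrequencies:

```python
import numpy as np

# 1D periodic chain, identical springs (k = 1), masses drawn with the alloy's
# stoichiometric probabilities; an illustrative stand-in for the real Hessian.
rng = np.random.default_rng(0)
N, k = 400, 1.0
masses = rng.choice([138.9, 58.7, 26.98], size=N, p=[0.60, 0.15, 0.25])

# Second derivatives of U = (k/2) * sum_i (x_{i+1} - x_i)^2 (periodic chain)
K = np.zeros((N, N))
for i in range(N):
    j = (i + 1) % N
    K[i, i] += k
    K[j, j] += k
    K[i, j] -= k
    K[j, i] -= k

# Mass-weighted dynamical matrix H_ij = K_ij / sqrt(m_i m_j), then
# omega_lambda = sqrt(eigenvalue); clip guards against tiny negative round-off
H = K / np.sqrt(np.outer(masses, masses))
omegas = np.sqrt(np.clip(np.linalg.eigvalsh(H), 0.0, None))

# Histogram of eigenfrequencies approximates D(omega_p); the single zero
# (translation) mode is excluded, analogous to the 3N - 3 normalization above
dos, edges = np.histogram(omegas[1:], bins=40, density=True)
```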
\section{Nonaffine lattice dynamics}
\subsection{From the Generalized Langevin Equation to the dynamic viscoelastic
moduli}
The dynamics of atoms in disordered solids is typically nonaffine, which means
that the atoms in the deformed configuration do not sit in the positions
prescribed by the strain tensor, i.e. they do not get displaced according to an
affine transformation. The latter would give the new position of the atom from
the left-multiplication of strain tensor and position vector of the atom at
rest. Instead, in disordered systems, the atom in the affine position receives
forces from the nearest-neighbours which do not balance (they would balance and
cancel to zero in a centrosymmetric crystal, owing to local inversion symmetry
of the lattice). Hence, lattice dynamics for amorphous materials has to be rewritten to take these facts into account~\cite{Lemaitre}, which eventually leads to a softening of the elastic constants~\cite{Zaccone2011} and to new physics that is currently being explored.
Upon applying a deformation described by the strain tensor
$\underline{\underline{\eta}}$, the dynamics of a tagged particle $i$
interacting with other atoms in the reference frame satisfies the following
equation for the (mass-scaled) displacement
$\{\underline{x}_i(t)=\underline{\mathring{q}}_i(t)-\underline{\mathring{q}}_i\}$
around a known rest frame $\underline{\mathring{q}}_i$ (see Ref.~\cite{Cui} for
derivation):
\begin{equation}
\label{eq:NAD1}
\frac{d^2\underline{x}_i}{dt^2}+\int_{-\infty}^{t}\nu_i(t-t')\frac{d\underline{x}_i}{dt'}dt'+\sum_{j}\underline{\underline{H}}_{ij}\underline{x}_j=\underline{\Xi}_{i,xy}\eta_{xy}.
\end{equation}
Note that the summation convention over repeated indices is not used. This
equation can be solved by performing Fourier transformation followed by normal
mode decomposition that decomposes the 3N-vector $\tilde{\underline{x}}$ (that
contains positions of all atoms) into normal modes
$\tilde{\underline{x}}=\hat{\tilde{x}}_p(\omega)\underline{\phi}_p$ ($p$ is the
index labeling the normal modes). Note that we specialize to a time-dependent shear strain $\eta_{xy}(t)$. {For this case, the vector $\underline{\Xi}_{i,xy}$ represents the force per unit strain acting on atom $i$ due to the motion of its nearest neighbors (see e.g.~\cite{Lemaitre} for a more detailed discussion).
As shown in Appendix A, Eq. (3) can be manipulated into the following form:
\begin{equation}
-\omega^2(\underline{\underline{\Phi}}^{T}\cdot\tilde{\underline{x}})+i\omega\underline{\underline{\Phi}}^T\tilde{\underline{\underline{\nu}}}(\omega)\underline{\underline{\Phi}}\underline{\underline{\Phi}}^{T}\cdot\tilde{\underline{x}}
+\underline{\underline{D}}~(\underline{\underline{\Phi}}^{T}\cdot\underline{\tilde{x}})
=\underline{\underline{\Phi}}^{T}\cdot\underline{\Xi}_{xy}\tilde{\eta}_{xy},
\end{equation}
where the matrix $\underline{\underline{\Phi}}$ consists of the $3N$
eigenvectors $\underline{\phi}_p$ of the Hessian.
Here, we have $(\underline{\underline{\Phi}}^{T}\underline{\underline{\tilde{\nu}}}\underline{\underline{\Phi}})_{mn}=\sum_i\Phi_{im}\Phi_{in}\tilde{\nu}_i$ and $(\Phi^T\Phi)_{mn}=\sum_i\Phi_{im}\Phi_{in}=\delta_{mn}$, where $\underline{\underline{\tilde{\nu}}}$ is the diagonal matrix with the $\tilde{\nu}_i(\omega)$ along the diagonal. The friction coefficients $\tilde{\nu}_i(\omega)$ differ for different tagged particles $i$ and, in general, one cannot find a solution without simplifying the term $\underline{\underline{\Phi}}^T\underline{\underline{\nu}}\underline{\underline{\Phi}}$, which establishes a coupling between the contributions of different eigenmodes to the friction.
The friction term coupled to the \textit{p}-th normal mode is thus
$i\omega\sum_{im}\Phi_{im}\Phi_{ip}\tilde{\nu}_i$. At this point of the
analysis, we need to work with the assumption that
$\underline{\underline{\Phi}}^T\underline{\underline{\nu}}
\underline{\underline{\Phi}}$
is a diagonal matrix. In physical terms, this means that the damping is not
correlated across different eigenmodes. This is an approximation used within
this framework to make the model solvable~\cite{Cui}. Thus, the friction that
the \textit{p}-th mode feels is dominated by
$i\omega\sum_{i}(\Phi_{ip})^2\tilde{\nu}_i$. This result is used in the section below to justify the form of the memory kernel for the friction coefficient based on the differences in atomic mass of the constituents.
As derived in our previous work~\cite{Cui}, we use the GLE Eq. (3) under normal
mode decomposition while accounting for nonaffine displacements to derive a
microscopic expression for the complex viscoelastic modulus
\begin{equation}
\label{eq:GLE}
G^*(\omega)=G_A-3\rho\int_0^{\omega_{D}}\frac{D(\omega_p)\Gamma(\omega_p)}{\omega_p^2-\omega^2+i\tilde{\nu}(\omega)\omega}d\omega_p
\end{equation}
where we have dropped the Cartesian indices for convenience and $\rho=N/V$
denotes the atomic density of the solid. $\Gamma(\omega_p)$ is a function which
describes the correlation of nonaffine forces in the frequency
shell~\cite{Lemaitre, Zaccone2011, Milkus_viscoelastic}.
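To illustrate how Eq. (5) is evaluated in practice, the sketch below computes $G^*(\omega)$ by numerical quadrature for toy inputs: a Debye-like VDOS, a constant $\Gamma$, and a frequency-independent friction. These model choices are illustrative assumptions, not the system-specific functions used for the fits reported below.

```python
import numpy as np

def trap(y, x):
    # trapezoidal rule, kept local so complex integrands are handled uniformly
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Toy inputs (illustrative assumptions, all in reduced units)
rho, G_A, w_D = 1.0, 1.0, 1.0
w_p = np.linspace(1e-4, w_D, 2000)
D = 3.0 * w_p ** 2 / w_D ** 3          # normalized Debye-like D(omega_p)
Gamma = 0.05 * np.ones_like(w_p)       # nonaffine correlator, toy constant
nu = 0.2                               # frequency-independent friction

def G_star(w):
    # G*(w) = G_A - 3 rho * int D(w_p) Gamma(w_p) / (w_p^2 - w^2 + i nu w) dw_p
    return G_A - 3.0 * rho * trap(D * Gamma / (w_p ** 2 - w ** 2 + 1j * nu * w), w_p)

freqs = np.logspace(-3, 1, 60)
G = np.array([G_star(w) for w in freqs])
# G.real is the storage modulus G'; G.imag is the loss modulus G'' (positive)
```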
\subsection{Qualitative arguments for the form of friction kernel in
La$_{60}$Ni$_{15}$Al$_{25}$}
As has been shown above in the context of Eq. (4), the friction that the
p-th mode feels is given by $\sum_i(\Phi_{ip})^2\nu_i$. We expand this term
explicitly in terms of the different atomic species which form the alloy:
\begin{align}
\sum_i(\Phi_{ip})^2\nu_i&\sim\sum_{Al}^{25}(\Phi_{ip}^2)\sum_\alpha\frac{m_\alpha}{m_{Al}}\frac{c_{\alpha,Al}^2}{\omega_\alpha^2}\cos{(\omega_\alpha
t)}\notag\\
&+\sum_{La}^{60}(\Phi_{ip}^2)\sum_\alpha\frac{m_\alpha}{m_{La}}\frac{c_{\alpha,La}^2}{\omega_\alpha^2}\cos{(\omega_\alpha
t)}\notag\\
&+\sum_{Ni}^{15}(\Phi_{ip}^2)\sum_\alpha\frac{m_\alpha}{m_{Ni}}\frac{c_{\alpha,Ni}^2}{\omega_\alpha^2}\cos{(\omega_\alpha
t)}\notag\\
&=\sum_\alpha\sum_{Al}^{25}(\Phi_{ip})^2\frac{m_\alpha}{m_{Al}}\frac{c^2_{\alpha,Al}}{\omega_\alpha^2}\cos{(\omega_\alpha
t)}\notag\\
&+\sum_\alpha\sum_{La}^{60}(\Phi_{ip})^2\frac{m_\alpha}{m_{La}}\frac{c^2_{\alpha,La}}{\omega_\alpha^2}\cos{(\omega_\alpha
t)}\notag\\
&+\sum_\alpha\sum_{Ni}^{15}(\Phi_{ip})^2\frac{m_\alpha}{m_{Ni}}\frac{c^2_{\alpha,Ni}}{\omega_\alpha^2}\cos{(\omega_\alpha
t)}
\end{align}
The role of $\Phi_{ip}$ here is to give a weight to each $\nu_i$ contribution
in the sum. All these sums could be written also as integrals upon replacing
the discrete variable $\omega_{\alpha}$ with the continuous eigenfrequency
$\omega_p$ and introducing the VDOS as a factor in the integral over
$\omega_p$. Here, one can find that each term is inversely proportional to the
mass of the atomic species in question. We note that the atomic mass of La
(138.9\,u) is more than twice as large as the mass of Ni (58.7\,u) and five
times larger than the mass of Al (26.98\,u), which gives a much larger weight
in the sum to the Al and Ni terms. Hence, taking also stoichiometry into
account, the two terms relative to Ni and Al considered together are about
three times larger than the contribution of the La term.
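This factor-of-three estimate follows from weights proportional to (number of atoms)/(atomic mass) for each species, as the short check below shows:

```python
# Species-resolved friction weights ~ (atoms of that species) / (atomic mass),
# using the stoichiometry and atomic masses quoted in the text.
masses = {"La": 138.9, "Ni": 58.7, "Al": 26.98}   # atomic masses (u)
counts = {"La": 60, "Ni": 15, "Al": 25}           # atoms per 100-atom formula

weights = {s: counts[s] / masses[s] for s in counts}
ratio = (weights["Ni"] + weights["Al"]) / weights["La"]
# ratio comes out near 2.7: the Ni and Al contributions together outweigh
# the La term by roughly a factor of three, as estimated above.
```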
In order to strengthen this claim, we also consider the role of the unknown
dynamical coupling coefficients $c_\alpha$ which appear in Eq. (6).
While the values of these coefficients cannot be determined from first
principles, we can still obtain valuable indications about the probable
magnitude by considering quantities like the partial $g(r)$ functions in the
system. Since these coefficients are associated with medium-range (or
generically, beyond-short-range) dynamics, features in the $g(r)$ may give an indication of the relative magnitude of the dynamical coupling between different species in the alloy.
Moreover, while $g(r)$ is a static structural quantity, it is directly related to the dynamics via the Boltzmann inversion relation, which yields the potential of mean force as $V_{\mathrm{pmf}}/k_B T= -\ln g(r)$. In turn, the potential of mean force represents the interaction energy between two atoms mediated by the presence of all other atoms in the system, and hence it also contains many-body effects. The potential of mean force influences the correlated motions (hence the dynamics) of the atoms and establishes (e.g. through long-range attractions) the dynamic coupling.
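A minimal sketch of this Boltzmann inversion, applied to a toy $g(r)$ standing in for the simulated partials (the damped-oscillation functional form below is an assumption made purely for illustration):

```python
import numpy as np

# Toy pair correlation function: damped oscillation around g = 1
r = np.linspace(2.0, 10.0, 400)   # Angstrom
g = 1.0 + 0.8 * np.exp(-(r - 2.0) / 2.0) * np.cos(2.0 * np.pi * (r - 3.0) / 2.8)

# Boltzmann inversion: V_pmf(r) / (k_B T) = -ln g(r).
# Minima of the potential of mean force sit at the peaks of g(r).
V_over_kBT = -np.log(g)
```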
Consideration of the pair correlation function obtained from simulations and shown in Fig.~\ref{fig:rdf} indicates that there is a clear broad peak for Al-Al in the regime of the medium-range order. This supports our claim that the JG $\beta$-relaxation is due to medium-range correlations and coupling between Al atoms. This broad Al-Al peak, relative to the short-range order peak, stands out in comparison with the other contributions in the medium-range regime.
Finally, not only will the pre-factor of the memory function of La be smaller than those of the other two atomic species, for the reason above, but the characteristic time-scale of memory decay associated with La will also be comparatively larger, since the relaxation time typically grows in proportion to the mass (or at least to the square root of the mass). Hence, the contribution of La to the memory and, therefore, to the intermediate scattering function (ISF) would appear at somewhat longer time-scales compared to Ni. Additionally, this contribution would probably be hybridized with, or obscured by, that of Ni, which has a larger prefactor; this explains why our model for the ISF and memory function contains only two decays.
These arguments, which indicate that the La-term in the form of the memory function given by Eq. (6) may be negligible, can be summarized as follows: (i) the mass-factor in the denominator makes the contribution of La about three times smaller than the two contributions of Ni and Al taken together; (ii) the main medium-range contributions to the features of the $g(r)$ emanate from Al, which corroborates the hypothesis that the $c_\alpha$ coefficients are larger for Al and justifies the dominance of the Al dynamics in the JG $\beta$-relaxation; (iii) if modeled as a third stretched-exponential function, the contribution of La would have a larger characteristic decay time-scale and would show up at longer times, probably masked by or hybridized with the Ni contribution.
Based on this approximation, the form of the memory function for the interatomic friction in Eq. (6) reduces to
\begin{align}
\nu(t)&=\sum_\alpha\sum_{Ni}^{15}(\Phi_{ip})^2\frac{m_\alpha}{m_{Ni}}\frac{c^2_{\alpha,Ni}}{\omega_\alpha^2}\cos{(\omega_\alpha
t)}\notag\\
&+\sum_\alpha\sum_{Al}^{25}(\Phi_{ip})^2\frac{m_\alpha}{m_{Al}}\frac{c^2_{\alpha,Al}}{\omega_\alpha^2}\cos{(\omega_\alpha
t)}\notag\\
&=\nu_{1}(t)+\nu_{2}(t),
\end{align}
where $\nu_1(t)$ and $\nu_2(t)$ are two generic functions of time that will be specified in the next section.
\section{Relation between friction memory kernel and intermediate scattering
function}
For a supercooled liquid, a relationship between the time-dependent friction,
which is dominated by slow collective dynamics, and the intermediate scattering
function
has been famously derived within kinetic theory by Sjoegren and Sjoelander~\cite{Sjoegren} (see also Ref.~\cite{Bagchi}):
\begin{equation}
\label{eq:Sj}
\nu(t)=\frac{\rho k_{B}T}{6\pi^2 m}\int_{0}^{\infty}dq q^{4} F_{s}(q,t)c(q)^{2}
F(q,t)
\end{equation}
where $m$ is a characteristic mass, $c(q)$ is the direct correlation function
of liquid-state theory, $F(q,t)$ is the intermediate scattering function, and
$F_{s}(q,t)$ is the self-part of $F(q,t)$~\cite{Sjoegren}. All of these
quantities are functions of the wave-vector $q$ and the integral over $q$
leaves a time-dependence of
$\nu(t)$, which is exclusively given by the product $F_{s}(q,t)F(q,t)$. Upon
further approximating $F_{s}(q,t)F(q,t)\sim F(q,t)^{2}$, we obtain an
intermediate scattering function via
\begin{equation}
\label{eq:Fqt}
F(q,t) \sim \sqrt{\nu(t)}.
\end{equation}
That the VDOS is related to $\nu(t)$ becomes evident upon considering the following relation, which holds for the particle-bath Hamiltonian from which Eq.~(\ref{eq:NAD1}) is derived~\cite{Cui,Cui2,Zwanzig}:
\begin{equation}
\nu(t)=\int_0^{\infty}d\omega_p
D(\omega_p)\frac{\gamma(\omega_p)^2}{\omega_p^2}\cos{\omega_p t},
\end{equation}
where $\gamma(\omega_p)$ is the continuous spectrum of coupling constants
which couple the dynamics of the tagged atom to that of all the oscillators
forming the bath, which represent all the other atomic degrees of freedom in
the material.
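A direct quadrature of this relation, for an assumed Debye-like VDOS and a linear coupling spectrum $\gamma(\omega_p)\propto\omega_p$ (both toy choices, the latter keeping the integrand finite at $\omega_p\to 0$), looks as follows:

```python
import numpy as np

def trap(y, x):
    # local trapezoidal rule
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Toy spectral inputs (reduced units): Debye VDOS up to w_D, gamma = 0.5 w_p
w_D = 1.0
w_p = np.linspace(1e-6, w_D, 4000)
D = 3.0 * w_p ** 2 / w_D ** 3
gamma = 0.5 * w_p

def nu(t):
    # nu(t) = int_0^{w_D} D(w_p) gamma(w_p)^2 / w_p^2 * cos(w_p t) dw_p
    return trap(D * gamma ** 2 / w_p ** 2 * np.cos(w_p * t), w_p)

# nu(0) reduces to int D gamma^2 / w_p^2 dw_p = 0.25 for these toy inputs,
# and nu(t) decays at long times as the cosines dephase.
```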
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{gr}
\caption{\label{fig:rdf} Partial contributions to the radial distribution
function $g(r)$, as calculated from MD simulations for
\textrm{La}$_{60}$\textrm{Ni}$_{15}$\textrm{Al}$_{25}$ at $T=300$ K. The
large maximum of the Ni-Al partial in (b) occurs at $g(r_{\textrm{max}}) = 12$,
which falls out of the range of the vertical axis of the plot.}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig_DOS2}
\caption{\label{fig:DOS} Vibrational density of states (VDOS) of
\textrm{La}$_{60}$\textrm{Ni}$_{15}$\textrm{Al}$_{25}$ at $T=300$ K as
determined in INS experiments (solid line) and MD simulations (symbols).}
\end{center}
\end{figure}
\section{Results and discussion}
\subsection{Radial distribution function and partials thereof}
From the MD simulations we obtain the partial pair correlation functions $g(r)$
for all atomic pairs and show these in Fig.~\ref{fig:rdf}. The partial
functions shown in Fig.~\ref{fig:rdf}b clearly indicate that, in the regime of
the medium-range order (between $r=4$\,\AA\,and $r=7$\,\AA), there are broad
peaks for Ni-Ni and Al-Al, which are either much larger or comparable in
magnitude to the primary peak associated with the short-range order (up to
$r\sim 3$\,\AA). In contrast to the La-pairs, in which the short-range order
peak appears to be the most dominant (Fig.~\ref{fig:rdf}a), the more active
Ni-Ni and Al-Al pair-interactions at the length-scale of the medium-range order
would also indicate a stronger dynamical coupling in this spatial regime.
\subsection{Vibrational density of states (VDOS)}
The filled gray circles in Fig.~\ref{fig:DOS} represent the total $D(\omega_p)$
as obtained from MD simulations. A more detailed picture emerges from the respective contributions of the La, Ni and Al atoms. It is clear
that the initial maximum of the total $D(\omega_p)$ at around 8\,meV is
attributed to low-energy vibrations involving the heavy La atoms, while
vibrations of the Ni atoms occur around 15\,meV and are responsible for an
apparent shoulder on the high-energy side of the main vibrational band. The
vibrational dynamics of the light Al atoms are, in contrast, well separated
from that of the other elements and exhibit a double-band structure at around
25 and 35\,meV. The $D(\omega_p)$ as obtained in INS experiments is shown
alongside the simulation data. It is important to note here that the
experimental $D(\omega_p)$ is additionally weighted by the isotope-specific
neutron scattering cross-sections of the constituent elements, of which Ni-Ni
and Ni-La atomic pairs will dominate. Hence, the experimental $D(\omega_p)$
should be taken only to represent a generalized, neutron-weighted VDOS. In any
case, it is apparent that the predominant contribution to the high-frequency
side of both VDOS of this MG stems from the vibrations of the Al atoms.
\subsection{Dynamic mechanical analysis and comparison with theory}
In Fig.~\ref{fig:master} we show a master curve of the experimentally measured $G''(\omega)$, obtained from Ref.~\cite{Zhu2014} for La$_{60}$Ni$_{15}$Al$_{25}$ at a reference temperature of 453 K, together with a theoretical fit based on Eq. (5). The $\alpha$-relaxation appears as the main loss peak situated around 1\,Hz. A distinct feature of this system is the prominent and well-separated loss peak on the high-frequency side, around $10^6$\,Hz, which is attributed to the JG $\beta$-relaxation.
The nonaffine lattice dynamics theory of the viscoelasticity of glasses outlined above allows us to quantitatively link the macroscopic features of the JG $\beta$-relaxation with the atomic-scale vibrational properties of this MG}. Within this framework, it
is possible to rationalize the average friction in the atomic motion of a
tagged atom in the glass in terms of the respective contributions of the atomic
components, for which the friction coefficient of the $i$-th atom, $\nu_i$, is
proportional to the reciprocal of the atomic mass of atom
$i$~\cite{Zwanzig,Cui}. Thus, as was shown above in Sec. IV.B, when summing over all tagged atoms in $i\omega\sum_{i}(\Phi_{ip})^2\tilde{\nu}_i$, the contributions to the friction coefficient coming from the heaviest atoms, i.e. La, turn out to be about a factor of three smaller than the contributions of Al and Ni taken together. For the case of
La$_{60}$Ni$_{15}$Al$_{25}$ we thus find that the contribution of La can be
neglected, given the comparatively very large mass of La, which leaves the
average friction as the sum of two contributions, those of Ni and Al,
respectively, which carry widely different relaxation time scales, by virtue of
the different atomic masses.
\begin{figure}
\begin{center}
\includegraphics[height=5.7cm,width=8.6cm]{mastercurve}
\caption{\label{fig:master} Master curve of the imaginary part of the complex viscoelastic modulus, $G''(\omega)$, at a reference temperature $T = 453$\,K.
The red and blue curves are fit results to our theoretical model using the
experimental and simulated VDOS, respectively, as input.}
\end{center}
\end{figure}
As derived in Sec. IV.B, in the sum over $i$ only terms
corresponding to Ni and Al atoms survive, which are well separated in magnitude
given the difference in mass between Ni and Al. We then divide the sum into two
groups, for Ni and Al, respectively, and then average each group separately.
The final result is that the average friction memory function consists of two
distinct contributions, according to Eq. (7),
both of which will decay in time but with two different and well-separated
relaxation times, $\tau_1$ and $\tau_2$, respectively.
The shorter relaxation time $\tau_2$ (associated with the JG
$\beta$-relaxation) is related to the atomic dynamics of the lighter element,
Al, whereas the other term has a longer relaxation time $\tau_1$, dominated by
the atomic dynamics of the heavier element, Ni, which contributes to the
$\alpha$-relaxation time.
With an appropriate \textit{ansatz} for $\nu(t)$ we obtain the intermediate
scattering function $F(q,t)$ via $\nu(t)\sim F(q,t)^2$~\cite{Sjoegren,Bagchi}.
From experiments and simulations, we know that in supercooled liquids $F(q,t)\sim \exp[-(t/\tau)^b]$ for the $\alpha$-relaxation, where $\tau$ is the characteristic structural relaxation time and $b$ is the stretching exponent, with values normally between $0.5$ and $0.7$~\cite{Hansen}.
When both $\alpha$- and $\beta$-relaxation are present, $F(q,t)$ has a two-step
decay, with a first decay at shorter times due to the $\beta$-relaxation, and a
second decay at much longer times due to the $\alpha$-relaxation. On the basis
of this evidence, we take the time dependence of each of the two terms in the
memory function to be stretched-exponential with different values of $\tau$ and
$b$,
\begin{equation}\label{eq:strexp}
\nu(t)\sim \exp[-(t/\tau_1)^{b_1}]+c\exp[-(t/\tau_2)^{b_2}],
\end{equation}
where $c$ is a constant.
The curves in Fig.~\ref{fig:master} are our fits to the experimental data using the VDOS obtained in both INS experiments (red) and MD simulations (blue). It is apparent that our theoretical model captures both peaks in the loss spectrum, over a frequency range of some 10 orders of magnitude, with the resulting parameters: $\tau_1=0.67$\,s, $b_1=0.45$, $\tau_2=4.04\cdot10^{-7}$\,s, $b_2=0.47$ and $c=0.07$. We note here that the two-component \textit{ansatz} is the simplest model with the minimum number of free parameters that completely describes the experimental $G''$ data, which is congruent with our theoretical result derived in the last section, where $\nu(t)$ reduces to a sum of two terms. Surprisingly, we obtain the same fitting parameters for both the experimental and the simulated VDOS, although the two data sets exhibit noticeably different features. In a way, this result reassures us that the differences in the two VDOS did not simply ``disappear'' into the fitting parameters, and it genuinely implies that these differences do not play a substantial role in the mechanical response. Moreover, it suggests that the qualitative shape of the VDOS, i.e. the location of the peaks, especially on the low-frequency side, is of primary importance. In a broader perspective, this result implies that the origin of the JG $\beta$-relaxation in various types of glasses can be traced back to the generic shape of the VDOS, and it encourages the development of a universal theory based on the microscopic framework employed here.
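Evaluating the two-component stretched-exponential ansatz with the fitted parameters quoted above makes the separation of time scales explicit; $F(q,t)\sim\sqrt{\nu(t)}$ then exhibits the two-step decay discussed in the next subsection.

```python
import numpy as np

# Two-component stretched-exponential memory kernel, with the fit
# parameters quoted in the text.
tau1, b1 = 0.67, 0.45          # slower, Ni-dominated (alpha-side) term
tau2, b2 = 4.04e-7, 0.47       # fast, Al-dominated (JG beta) term
c = 0.07

def nu(t):
    return np.exp(-(t / tau1) ** b1) + c * np.exp(-(t / tau2) ** b2)

t = np.logspace(-10, 4, 300)   # seconds
F = np.sqrt(nu(t))             # ISF up to an overall normalization
# F shows a fast drop near tau2 ~ 1e-7 s, an intermediate plateau,
# and a final decay on the scale of tau1 ~ 1 s.
```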
\subsection{Qualitative behaviour of intermediate scattering function from
theoretical fitting}
The square-root of $\nu(t)$ is shown in Fig.~\ref{fig:ISF}, following the relation $F(q,t) \sim \sqrt{\nu(t)}$ from Eq.~(\ref{eq:Fqt}). We see the characteristic two-step decay of $F(q,t)$ present in systems with well-separated $\alpha$- and $\beta$-relaxations, with the first decay occurring on the typical time scale of the $\beta$-relaxation, $\tau_{\beta} \sim 10^{-7}$\,s, followed by a much slower decay on the time scale set by $\tau_1$. While the time scale $\tau_{\beta}$ closely matches the time scale $\tau_2$ set by the atomic dynamics dominated by Al, the typical $\alpha$-relaxation time of glasses, $\tau_{\alpha} \sim 10^{2}$\,s, differs significantly from the time scale $\tau_1$ associated with Ni; the $\alpha$ process is more complex, and the square-root mixing of the different time scales reflects this fact. Moreover, the $\alpha$ peak in $G''$, and the corresponding decay in $F(q,t)$, cannot be reduced to just $\tau_1$, as the time-scale range of the $\alpha$-relaxation contains a strong contribution from soft modes (the boson peak~\cite{Milkus}) in the VDOS. This is clear from Eq.~(\ref{eq:GLE}), where the term $\omega_p^{2}$ in the denominator gives a large weight to the low-$\omega_p$ part of the VDOS, which contains the BP-proliferation of soft modes, as was shown in previous work for the case of CuZr alloys, which present $\alpha$-relaxation only~\cite{Cui}, and also for the dielectric relaxation of glycerol~\cite{Cui2}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{ISF_full}
\caption{\label{fig:ISF} Time decay of the square-root of the total memory function for the friction, $\nu(t)$, exhibiting two decays corresponding, respectively, to the $\alpha$ and $\beta$ decays of the intermediate scattering function $F(q,t)$, according to the relation $F(q,t) \sim \sqrt{\nu(t)}$ that follows from Eq.~(\ref{eq:Fqt}).}
\end{center}
\end{figure}
\section{Conclusion}
We have presented a combined experimental, simulation and theoretical analysis
of the viscoelastic response of a metallic glass exhibiting a strong
Johari-Goldstein (JG) $\beta$-relaxation. The appearance of the JG
$\beta$-relaxation in this metallic glass is attributed to (i) the wide mass
disparity between the light Al atoms and the other atomic species, and (ii) a
strong dynamical coupling involving the Ni and Al atoms at the medium-range
order length-scale. The results of our theory shed light on the microscopic glassy-state dynamics over a temporal range of 12 orders
of magnitude and reproduce the distinctive two-step decay of the intermediate
scattering function that is a characteristic feature of systems exhibiting both
$\beta$ and $\alpha$-relaxations. A crucial input to our theory is the
vibrational density of states (VDOS). Surprisingly, only the qualitative
features (i.e. peak positions) of the VDOS appear to play the main role in
determining the viscoelastic response of the glass, implying a common behavior
linking the JG $\beta$-relaxation to vibrational dynamics in glassy systems.
These results should be useful for developing a universal theory of
secondary relaxations in glasses.\\
\begin{acknowledgements}
We are grateful to the MLZ for the beamtime at TOFTOF. B. Cui acknowledges the
financial support from CSC-Cambridge Scholarship. P. Luo is gratefully acknowledged for sample preparation.
\end{acknowledgements}
\begin{appendix}
\section{Derivation of Eq. (4) in the main article}
After taking the Fourier transform of Eq. (3) in the main article, one obtains
\begin{equation}
-\omega^2\tilde{\underline{x}}_i+i\tilde{\nu}_i(\omega)\omega\tilde{\underline{x}}_i
+\underline{\underline{H}}_{ij}\underline{\tilde{x}}_j
=\underline{\Xi}_{i,xy}\tilde{\eta}_{xy}.
\end{equation}
Next, we perform a normal-mode decomposition, which is equivalent to diagonalizing the
Hessian matrix $\underline{\underline{H}}$. From now on, all matrices and
vectors are meant to be $3N \times 3N$ and $3N$-dimensional, respectively. The
$3N\times3N$ matrix
$\underline{\underline{H}}$ can be decomposed as
$\underline{\underline{H}}=\underline{\underline{\Phi}}~\underline{\underline{D}}~\underline{\underline{\Phi}}^{-1}=\underline{\underline{\Phi}}~\underline{\underline{D}}~\underline{\underline{\Phi}}^T$
where $\underline{\underline{D}}$ is a diagonal matrix filled with the
eigenvalues of $\underline{\underline{H}}$, that is, in components,
$D_{pp}=\omega_p^2$. Further, the matrix $\underline{\underline{\Phi}}$
consists of the $3N$ eigenvectors $\underline{\phi}_p$ of the Hessian, i.e.
$\underline{\underline{\Phi}}=(\underline{\phi}_1,...,\underline{\phi}_p,...,\underline{\phi}_{3N})$,
and is an orthogonal matrix.
Then, we left-multiply both sides with the matrix
$\underline{\underline{\Phi}}^{-1}=\underline{\underline{\Phi}}^T$, which leads
to Eq. (4) in the main article:
\[-\omega^2(\underline{\underline{\Phi}}^{T}\cdot\tilde{\underline{x}})+i\omega\underline{\underline{\Phi}}^T\tilde{\underline{\underline{\nu}}}(\omega)\underline{\underline{\Phi}}\underline{\underline{\Phi}}^{T}\cdot\tilde{\underline{x}}
+\underline{\underline{D}}~(\underline{\underline{\Phi}}^{T}\cdot\underline{\tilde{x}})
=\underline{\underline{\Phi}}^{T}\cdot\underline{\Xi}_{xy}\tilde{\eta}_{xy},
\]
where we have used the fact that $\underline{\underline{D}}$ is diagonal,
dropped all indices $i$ and $j$, and introduced the diagonal matrix
$\tilde{\underline{\underline{\nu}}} = \mathrm{diag}\{\tilde{\nu}_i\}$, $i=1,2,\ldots$.
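The diagonalization above can be checked numerically. The following NumPy sketch (a toy $N$-dimensional system, not the article's $3N$-dimensional Hessian) verifies that $\underline{\underline{H}}=\underline{\underline{\Phi}}\,\underline{\underline{D}}\,\underline{\underline{\Phi}}^T$ with orthogonal $\underline{\underline{\Phi}}$, and that projecting onto normal-mode coordinates diagonalizes the elastic term:

```python
import numpy as np

# Toy numerical check (not from the article) of the normal-mode decomposition:
# a symmetric "Hessian" H is diagonalized as H = Phi D Phi^T, with D holding
# the squared eigenfrequencies omega_p^2.
rng = np.random.default_rng(0)
N = 4
A = rng.standard_normal((N, N))
H = A @ A.T                          # symmetric positive semi-definite matrix

eigvals, Phi = np.linalg.eigh(H)     # columns of Phi are the eigenvectors phi_p
D = np.diag(eigvals)                 # D_pp = omega_p^2

# Phi is orthogonal, so Phi^{-1} = Phi^T and the decomposition reconstructs H.
assert np.allclose(Phi @ D @ Phi.T, H)
assert np.allclose(Phi.T @ Phi, np.eye(N))

# Projecting x onto normal-mode coordinates q = Phi^T x diagonalizes the
# elastic term: Phi^T (H x) = D q, as used to obtain Eq. (4).
x = rng.standard_normal(N)
q = Phi.T @ x
assert np.allclose(Phi.T @ (H @ x), D @ q)
print("decomposition verified")
```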
\end{appendix}
\bibliographystyle{unsrt}
\section{Introduction}
This paper describes an undergraduate project to produce a compact, self-contained ``desktop muon counter.'' The muon detector is contained in a small light-tight enclosure that measures 2.75$\times$3.00$\times$1.00 in$^3$. This sits on a small electronics box that performs the data acquisition and readout for the detector. The process of making the counters and the readout will teach students valuable skills in machining and in constructing and debugging electronics. We can use the end product as a single device (or in sets) to make interesting physics measurements or give introductory physics demonstrations. As an example, we present a measurement made at Fermi National Accelerator Laboratory (Fermilab). In the supplementary material \cite{sup}, we provide the computer-aided design (CAD) drawings for machining, the files for the printed circuit boards (PCBs), the code required to program the microcontroller, and the Python program to write the detector data to a computer. The overall cost per counter is about \$100. A cost breakdown is also supplied in the supplementary material (Purchasing\_list.xml). An array of desktop muon counters is shown in Fig.~\ref{fig:array}.
This paper is inspired by the Phys. 063 class at Swarthmore College, ``Procedures in Experimental Physics,'' which was taken by one of the authors. This class introduces students to ``the techniques, materials, and the design of experimental apparatus; shop practice; printed circuit design and construction'' \cite{Swat}. This desktop muon counter project combines all of these aspects into a unified program and delivers a useful physics tool.
Similar devices are used in particle physics experiments to identify well-reconstructed muon track samples for detector calibration. An existing example includes the ``muon cubes'' that were installed in the MiniBooNE neutrino experiment at Fermilab \cite{miniboone}. These optically isolated scintillator cubes were used in combination with a set of scintillation counters located above the detector to accurately track and tag stopping muons. Another similar project, ArduSiPM\cite{Valerio}, was developed as a cost-effective way to read out information from silicon photomultipliers (SiPMs) and has found applications in radio-guided surgery as a $\beta$-probe~\cite{nature}.
The original desktop muon counter, produced at MIT, was a prototype for a subdetector of PINGU, an upgrade to the IceCube Neutrino Observatory, located at the South Pole \cite{IceCube}. In PINGU, optically isolated cubes located throughout the detector could provide well-defined hits on a set of muon tracks, allowing tests of track reconstruction. Thus, development of this detector is a realistic exercise for students who intend to participate in particle physics and astrophysics experiments in the future.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\columnwidth]{b.jpg}
\caption{An array of desktop muon detectors and the corresponding components. When a muon passes through one of the light-tight aluminum boxes, the electronics box records the event and displays the information on the 0.96-inch organic light-emitting diode (OLED) screen. The green light-emitting diode (LED) in the front flashes for a period governed by the number of photons observed by the photodetector. The data can then be transmitted to a computer via a mini-USB cable.}
\label{fig:array}
\end{figure}
\section{The components of this project}
The device consists of a small slab of solid scintillator instrumented with a silicon photomultiplier (SiPM) to detect scintillation light. It is contained within a light-tight aluminum enclosure machined by the student. This connects to a readout box consisting of electronics that register the time of the event, count number, peak amplitude, and dead time. The threshold for a signal to trigger the data acquisition can be tuned in the microcontroller (Arduino) software. We discuss the individual components for the project below as well as provide pictures of each component in the supplementary material (pictures/).
\subsection{Scintillator \label{scint}}
Scintillators emit light when a charged particle passes through them and deposits a fraction of its initial energy due to electromagnetic interactions. The amount of emitted light is related to the energy of the incident particle and the distance the particle traverses through the scintillator. In this case, the scintillator responds to this energy because the plastic is doped with a fluorescing agent that glows very slightly when some kinetic energy is transferred to the fluorescing molecules. Within nanoseconds, the de-excitation of these fluorescing molecules produces visible light, typically in the 300 to 600~nm wavelength range, that travels through and exits the scintillator.
Scintillators come in a number of forms other than solid plastic. For example, there are inorganic solid scintillators and liquid scintillators. However, we recommend a plastic scintillator for this project because it is inexpensive and easy to handle.
One can purchase new scintillators or use old scintillator paddles, as long as they are sufficiently thick (in the described design, we use a 5$\times$5$\times$1~cm$^3$ solid piece of scintillator). Because the detector is very small, it is not a problem if the used scintillator has some minor damage. If your department has no used scintillators available, we have found used paddles for sale for a very reasonable price on eBay. New scintillators can be purchased if necessary from companies such as Saint Gobain \cite{stgobain} or Eljen \cite{eljen}.
The scintillator slabs may have to be machined to size on a mill and then polished in order to make the faces optically transparent. Polishing the scintillator improves the photon collection efficiency of the SiPM by increasing the optical transparency at the interface between the SiPM and the plastic scintillator. It also promotes total internal reflection off the walls of the scintillator, thus increasing the overall number of photons that reach the SiPM. We also wrap the plastic scintillator in reflective foil to increase the number of photons reflected back towards the SiPM face. We use optical gel to match the refractive indices of the plastic scintillator and the protective layer on the SiPM's photocathode to increase efficiency.
\subsection{Silicon Photomultipliers}
The light emitted when a particle travels through the scintillator must be observed using a light collection device. Traditionally, one attaches the scintillator to photomultiplier tubes (PMTs) like those found in the portable muon detector project $\mu$-Witness \cite{witness}. These are large, require high voltages, and are expensive. In this case, we use a single SiPM, since it requires only a low reverse bias voltage (positive voltage to the cathode, negative voltage to the anode), has a peak sensitivity in the blue region where the majority of scintillators emit most of their light, and is only a few millimeters thick with a cross-sectional area equal to the size of the photocathode. The low reverse bias means that we can use an inexpensive DC-DC boost converter to power the circuit.
A SiPM consists of a large number of microcells, each composed of silicon P-N junctions. Electrons migrate into the P-side and holes migrate into the N-side. This creates a region known as the ``depletion region," where the electrons and holes are eliminated through recombination. When a photon traverses the depleted region, it can transfer sufficient energy to an electron in the valence band to move it to the conduction band, thereby creating a current. Reverse biasing the P-N junction widens the depletion region and creates an electric field $>$5$\times$10$^5$~V/cm. When a charge carrier accelerates through this field, it can gain sufficient kinetic energy to ionize the surrounding atoms through impact ionization. This creates an avalanche of electron-hole pairs, which can have a gain as high as 1$\times$10$^7$. Thus, a single photon producing a single electron-hole pair can generate a very large, measurable signal.
SiPMs have a very high dark-noise rate. These are signals that occur randomly when thermodynamic processes in the silicon generate an electron-hole pair that proceeds to avalanche. This signal is indistinguishable from that produced by a single photon. Therefore, it is essential that a muon passing through the scintillator produces enough visible photons within a short time period so that the resulting signal is much larger than the background noise. Our counter is designed with this goal in mind.
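The thresholding idea behind this design can be sketched in a few lines of Python. The function name, threshold value, and sample trace below are illustrative, not taken from the project's Arduino code:

```python
# Hypothetical sketch of software threshold discrimination: a single dark-noise
# avalanche mimics one photon, so the threshold is set well above the
# single-photon amplitude; a muon produces many photons nearly simultaneously
# and easily exceeds it. Names and values are ours, for illustration only.

def find_events(samples, threshold):
    """Return indices where the waveform crosses the threshold upward."""
    events = []
    above = False
    for i, v in enumerate(samples):
        if v >= threshold and not above:   # rising-edge crossing
            events.append(i)
            above = True
        elif v < threshold:
            above = False
    return events

# Dark noise (small, random samples) vs. a muon-like pulse (large):
trace = [2, 3, 2, 4, 3, 60, 55, 40, 20, 5, 3, 2]
print(find_events(trace, threshold=25))   # -> [5]
```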
The detector described in this paper uses a 6$\times$6~mm$^2$ C-Series 60035-SMT (surface-mount technology) SensL SiPM~\cite{sensl}. These SiPMs have a breakdown voltage of roughly 24.7~V and can sustain an overvoltage of up to 5.0~V. The cost of SiPMs drops rapidly with the number that are purchased. Thus, it is most cost-effective for a department to buy a relatively large number of SiPMs for multiple classes at once or to purchase them in conjunction with other classes or experiments. At the time of writing this paper, the unit price of a bulk purchase of 100 SiPMs was below \$50/SiPM. This cost represents approximately half of the total cost of construction of the desktop muon counter.
\subsection{Electronics components}
There are two PCBs, each with surface mount components that the students will install.
The simplest of the two boards is used to mount the SiPM and provide bias filtering. This PCB is mounted directly onto the plastic scintillator by means of two No. 0 1/4$''$ screws to maintain pressure on the SiPM face, thus ensuring good optical contact between the photocathode area and the plastic scintillator. The second PCB contains the main electronics used to amplify and shape the signal from the SiPM such that it can be measured by the microcontroller. It also filters and regulates the voltages used in the detector. The amplification and shaping of the waveform is accomplished using dual rail-to-rail input and output operational amplifiers (op amps), whose functions are described in detail in Sec. \ref{sec:electronics}. We use an inexpensive 16~MHz Arduino Nano ATmega328 as a microcontroller and read the data out to a 0.96-inch OLED screen and, through a mini-USB cable, to a computer. The code necessary to run the Arduino (Arduino/Arduino\_code) as well as a list of the required libraries (all of which can be installed in the Arduino integrated development environment, IDE) are supplied in the supplementary material (Arduino/Arduino\_library\_list). We also provide a Python script to run on a computer to log the data (Arduino/Import\_data.py). The Python script requires that the students install the pyserial module. Students are asked to design their own program to analyze the data.
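As a sketch of how such a logger might work, the snippet below parses serial lines into the four recorded quantities and appends them to a CSV file. The comma-separated field layout, port name, and baud rate are our assumptions for illustration, not the actual format used by Import\_data.py:

```python
# Illustrative serial logger; field order, port, and baud rate are assumptions.
import csv

FIELDS = ["event_time_ms", "count", "peak_adc", "dead_time_ms"]  # assumed order

def parse_line(line):
    """Parse one comma-separated line from the Arduino into a dict of ints."""
    parts = line.strip().split(",")
    if len(parts) != len(FIELDS):
        return None                       # skip malformed or partial lines
    try:
        return dict(zip(FIELDS, (int(p) for p in parts)))
    except ValueError:
        return None

def log_forever(port="/dev/ttyUSB0", path="muon_log.csv"):
    """Read events from the detector and append them to a CSV file.

    Requires the pyserial module mentioned in the text (import serial).
    """
    import serial                         # third-party; pip install pyserial
    with serial.Serial(port, 9600, timeout=1) as ser, \
         open(path, "a", newline="") as f:
        writer = csv.writer(f)
        while True:
            event = parse_line(ser.readline().decode("ascii", "ignore"))
            if event:
                writer.writerow(event[k] for k in FIELDS)

# To start logging: log_forever("/dev/ttyUSB0")
print(parse_line("1200,5,512,3"))
```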
\subsection{The light-tight enclosure} \label{sec:box}
The plastic scintillator and SiPM circuit are mounted in a light-tight aluminum enclosure. The enclosure not only keeps light from the scintillator inside, but it also protects against photons entering from outside. This prevents environmental noise from producing false signals. However, the metal box has to be penetrated to power the SiPM and to send the signal from the SiPM to the electronics box. Commercial DC jacks and BNC (Bayonet Neill-Concelman) connectors are not quite light-tight; therefore, as a precaution, it is recommended that the plastic scintillator and SiPM be wrapped in black electrical tape. This component can also be completed externally at a machine shop, simply by providing the supplied drawings in the supplementary material.
\subsection{Cables and electronics case}
The signal from the SiPM is transmitted out of the light-tight enclosure to an electronics case via a 6$''$ BNC cable. Here, the students are asked to manufacture their own BNC cable and check the continuity of their connections. Since BNC cables are so prevalent in physics labs due to their robust coaxial design and RF shielding characteristics, understanding how to make your own or repair a noisy cable is an important skill. We use a simple 2.1$\times$5.5~mm DC cable to power the SiPM circuit.
The electronics case houses the main PCB. There are many design options for the electronics case. It is crucial to make sure that the electronics can be securely mounted with enough room to allow for the final soldering of connections. There should be at least an internal volume of 3$\times$4$\times$1 in$^3$ to accommodate the electronics. In the designs shown in Fig.~\ref{fig:array}, we use repurposed Ethernet switching cases.
\section{Manufacturing the components}
This section describes the machining, 3D-printing, and PCB board manufacturing required for each component. Further technical material (CAD drawings, files, programs, and documentation) on the manufacturing can be found in the supplementary material and will be referenced as needed.
\subsection{Machining the light-tight enclosure}
The light-tight enclosure, shown in Fig.~\ref{fig:metal_box}, houses the plastic scintillator, SiPM, and SiPM PCB. The enclosure consists of an aluminum box and lid, which are mated together with four 6-32$\times$3/8 socket head screws.
\begin{figure}[h!]
\centering
\includegraphics[width=0.8\columnwidth]{light_tight_enclosure.jpg}
\caption{A rendering of the light-tight enclosure. Computer-aided design files are found in the supplementary material~\cite{sup}.}
\label{fig:metal_box}
\end{figure}
The enclosure box is made from a stock piece of 2.75$\times$3.0$\times$3/4 in$^3$ 6061 aluminum. A pocket is carved into the aluminum to a depth of 0.65 inches on a programmable mill using a 2-flute, 1/2 inch end mill. The pocket can also be carved manually but it will take more time. A .dxf file containing the outline of the pocket can be found in the supplementary material~(machining/box\_CNC.dxf). Six 6-32 through holes are tapped on the outer edges of the enclosure. The outer four holes are used to secure the enclosure lid to the box, while the middle two are used to secure the enclosure to the electronics case. We use through holes here, instead of blind holes, to allow the chips to fall through when tapping. This decreases the chances of breaking the tap. After the holes are tapped, the top surface of the box and bottom surface of the lid should be faced on the mill in order to provide a light-tight seal. We use a 3-inch fly cutter to face the surfaces in a single pass.
There are two holes on the back end of the enclosure box. The smaller of the two measures 3/8 inches in diameter and is used for the female BNC nut bulkhead connector. The larger hole holds a 2.1$\times$5.5~mm DC power jack, which is used to supply the 29.4~V required to power the SiPM circuit.
The enclosure lid is made from a stock piece of 2.75$\times$3.0$\times$1/4 in$^3$ 6061 aluminum plate. The bottom side must also be faced to ensure the box is light-tight when closed. Four countersunk through holes for the four 6-32 socket head screws are drilled on the outer edge to line up with the four holes in the enclosure box. The CAD drawings (machining/box.pdf,lid.pdf)
and files for programming the mill (machining/box\_CNC.dxf) are provided in the supplementary material. We recommend that the students use the mill for the entire manufacturing process of this component. This will help ensure that the holes are properly aligned and the edges are square. Further, after the enclosure is assembled, all six faces can be faced to provide a smooth, polished final finish.
\subsection{Polishing the plastic scintillator}
The plastic scintillator can be cut to the approximate size using a band saw and then side-milled using a 2-flute, 1/2 inch endmill at approximately 1200 rpm to the final dimensions. Our final size measured 50$\times$50$\times$10 mm$^3$. The faces that were already transparent did not need to be machined. We found that a very shallow final pass on the mill provided a very smooth, although visibly murky finish. Two holes are then drilled into the face of the plastic scintillator to mount the SiPM PCB. These holes must be drilled at low speeds (less than 100~rpm). High-speed drilling creates too much heat and the scintillator will often crack as it cools. The holes are spaced 40~mm apart, to a depth of 1/4 inches, using a number 54 drill bit. These holes are sized so that a No. 0 machine screw can self-tap into the plastic scintillator.
Upon finishing the machining of the plastic scintillator, the surfaces must be polished to provide an optically transparent face for the SiPM. We have experimented with two different methods to polish the plastic scintillator. The simplest but most time-consuming method to improve the optical transmission through the machined face is to use incrementally finer grit sand paper and then polish the final surface on a polishing wheel. The second method is to ``flame" polish the machined surface using a hot air gun from a soldering station. This is a quick maneuver in which we heat the surface of the plastic scintillator just long enough for it to become clear (roughly several seconds). As the solid scintillator melts and resolidifies, the surface becomes optically transparent. Flame polishing introduces new stresses in the plastic scintillator, and therefore all machining, including drilling the holes to mount the SiPM PCB, must be completed prior to this step. Flame polishing has the added benefit that the machined surface does not need to be perfectly flat. We found that flame polishing the surface even after cutting the scintillator on the band saw and filing the surface flat provided adequate results. This may be preferable for students attempting to build a device without access to a mill.
\begin{figure}[h!]
\centering
\includegraphics[width=0.90\columnwidth]{SiPM_PS.jpg}
\caption{A rendering of the plastic scintillator and SiPM assembly. The plastic scintillator is wrapped in aluminum foil and secured in place with black electrical tape.}
\label{fig:foil}
\end{figure}
\subsection{Machining the electronics box}
Here we provide an example of the electronics box design for the muon detector using a repurposed Ethernet switch box. Ethernet switch boxes tend to be correctly sized, have very few internal components, and are therefore a relatively inexpensive enclosure. Our chosen Ethernet switch box also came with the required 5~V, 1.0~amp wall adapter supply. The box must have adequate volume to support the main PCB board, which is roughly 3$\times$4$\times$1~in$^3$.
Several through holes in the box need to be machined to mount the other components. When drilling through plastic, it is useful to use a sharp, 60$^{\circ}$ point angle drill bit to eliminate edge chipping and chip wrapping. In our case, the top of the electronics box is used to mount the light-tight enclosure and the OLED screen case. Both are secured in place using 6-32 machine screws. A separate hole, directly underneath the OLED case, is used to run the OLED cables. A rectangular hole for the mini-USB connection is easily machined using a rotary drill but can also be done using a mill. It was also necessary to attach an aluminum plate on the back of the electronics box to mount a reset button, BNC connector, and DC barrel jack to connect to the light-tight enclosure. Photos of the electronics box can be found in the supplementary material.
\subsection{OLED screen case}
The 0.96-inch OLED screen readout is mounted on a 3D-printed frame to secure it to the electronics box to protect it from damage. The file required to print the case is found in the supplementary material (machining/OLED\_case.stp). It is printed face down and requires approximately 0.5 cubic inches of printing filament, including scaffolding. The screen is held in place by gluing the front end of the OLED PCB board to the interior of the case.
\begin{figure}[h!]
\centering
\includegraphics[width=0.80\columnwidth]{oled}
\caption{Front and back view of the 3D printed OLED screen case.}
\label{fig:elecs}
\end{figure}
\subsection{Circuit boards}
The two PCBs must first be manufactured by an electronics company. For example, Elecrow.com~\cite{elecrow} provides manufacturing of custom PCBs at a reasonable cost. The necessary files (PCB/MAIN\_PCB.zip and PCB/SiPM\_PCB.zip) are provided in the supplementary material. The color of the SiPM PCB should be white to improve reflection in the scintillator. The SiPM PCB measures 50$\times$10$\times$1.6~mm$^3$, while the main PCB is 50$\times$50$\times$1.6~mm$^3$. Both circuits were designed in KiCAD~\cite{KiCAD} as two-layer boards. A rendering of the PCBs can be found in the supplementary material.
The required electronic components for populating the PCBs can all be purchased from either Amazon~\cite{Amazon} or Digi-Key~\cite{DigiKey}. The reference locations for all the surface mounted components are listed in PCB/SMT\_reference.xlxs.
Both PCBs were designed using 0805 surface mount components (0.08$\times$0.05~in or 2.0$\times$1.2~mm). These are small but relatively easy to manipulate with a good pair of tweezers. We suggest searching online for a video to learn the proper techniques for using surface mount components. Adafruit provides an excellent tutorial in Ref.~\cite{tutorial}, but there are hundreds to choose from. The PCBs should be populated using a fine tip soldering iron but reflow solder and an oven can also be used. If one uses an oven, consult the SiPM documentation for the temperature profile.
The SiPM PCB has a silkscreen outline where the SiPM is to be mounted. We found the best way to solder the SiPM in place is to first put a bit of solder on one of the PCB pads, then line the SiPM up exactly with the silkscreen and solder a single pin in place. Once the SiPM is properly aligned between all the pads, the other pins can be soldered in place. Although the SiPM PCB has four pads for the SiPM footprint, it is only necessary to solder pads 1 and 3. Pin 1 on the SiPM can be identified by looking carefully at the number of metal legs on each corner. Pin 1 has three legs, all the others have two.
The main PCB can be assembled according to the component reference list in PCB/SMT\_reference.xlxs. The outline on the silkscreen provides the locations where the larger components are to be mounted. The IN+ and IN- terminals labeled on the DC-DC boost converter silkscreen are used to solder the booster onto the corresponding main PCB pads. Leads from the OUT+ and OUT- on the DC booster must be connected to the DC power jack on the electronics case. The potentiometer on the DC booster can be used to adjust the output voltage. It should be set such that the output voltage is +29.4~V.
There are several header pins used to attach the reset switch, OLED screen, and boost converter and to receive the signal input from the SiPM. These should all be attached during the assembly of the detector, once the main PCB has been mounted to the electronics box.
\section{Assembling the detector}
\begin{figure}[h!]
\centering
\includegraphics[width=1\columnwidth]{wiring_diagram.jpg}
\caption{A wiring diagram of the detector components.}
\label{fig:wiring}
\end{figure}
The SiPM PCB is fixed to the face of the plastic scintillator using the two No. 0 1/4 inch screws. A small amount of optical gel is used to interface the SiPM to the plastic scintillator. The screws provide enough pressure to remove air bubbles from the optical gel, but not enough pressure to bend the PCB or damage the SiPM. A reflective foil is then wrapped around the plastic scintillator and held in place using electrical tape. It is crucial that the reflective foil not come into contact with any part of the circuit. Taping around the SiPM PCB and the plastic scintillator will also improve the overall light-tightness.
The scintillator is then inserted into the light-tight enclosure box. Leads from the signal connection (labeled SNG) and ground connection (GND) are connected to the female BNC connector, and the V$_{\mathrm{IN}}$ and GND are connected to the 2.1$\times$5.5~mm DC power jack according to the wiring diagram in Fig.~\ref{fig:wiring}. The aluminum enclosure lid can then be secured using the four 6-32$\times$3/8 inch screws and mounted onto the electronics box (see Fig.~\ref{fig:assembly}).
Once the assembly of the light-tight enclosure has been completed, it can be tested using an oscilloscope and a variable 30~V power supply. Supplying the DC jack connection with 24.7 to 29.5~V should create positive pulses with an amplitude of 10--100~mV that will exponentially decay in roughly 0.5~$\mu$s when a muon passes through the scintillator. At sea level, one should expect to see roughly one pulse per cm$^2$ per minute due to cosmic ray muons. Low-amplitude pulses may indicate that the enclosure is not light-tight, the face of the SiPM is not in good contact with the plastic scintillator, the scintillator face was not adequately polished, or the SiPM face is damaged. A count rate larger than roughly one pulse per cm$^2$ per minute can be explained by background contamination from radioisotopes in and around the enclosure. We found that gamma rays from the outside environment will also penetrate the aluminum enclosure, but $\beta$ and $\alpha$ particles are significantly attenuated.
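The quoted rule of thumb translates into a concrete expectation for this detector. Assuming the $\sim$1 muon per cm$^2$ per minute sea-level flux through the 5$\times$5~cm$^2$ top face of the scintillator:

```python
# Quick sanity check of the expected cosmic-ray rate using the rule of thumb
# quoted in the text (~1 muon per cm^2 per minute at sea level).
area_cm2 = 5 * 5                 # top face of the 5 x 5 x 1 cm^3 scintillator
rate_per_cm2_per_min = 1.0       # rule-of-thumb flux from the text

counts_per_min = area_cm2 * rate_per_cm2_per_min
counts_per_sec = counts_per_min / 60.0
print(f"{counts_per_min:.0f} counts/min  (~{counts_per_sec:.2f} Hz)")
# -> 25 counts/min (~0.42 Hz), i.e. a pulse every 2-3 seconds on average
```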
\begin{figure}[h!]
\centering
\includegraphics[width=1\columnwidth]{assembly.jpg}
\caption{The complete assembly of the desktop muon detector.}
\label{fig:assembly}
\end{figure}
The main PCB is secured to the electronics case through the two holes on the bottom of the case. For our design, we required a 1/4$''$ standoff under the main PCB to bring it to the right level for the DC power jack port. Once the main PCB has been secured to the electronics case, the leads for the reset switch, OLED screen, DC-DC boost converter, power for the SiPM, and the signal input from the SiPM can all be attached according to Fig.~\ref{fig:wiring}.
The OLED screen is fixed to the 3D-printed screen case with a small amount of epoxy, and leads are to be connected to the four terminals on the back. These leads are then fed through a hole on the top of the electronics case to the main PCB and are wired according to Fig.~\ref{fig:wiring}.
The final step in the assembly process is to upload the Arduino code using a mini-USB cable. This requires the student to install the Arduino IDE and the libraries listed in the supplementary material (Arduino/library\_list.pdf). The libraries are used to communicate with the OLED screen and Arduino timer interrupts. All the libraries can be installed through the Arduino IDE except for OzOLED, which is referenced in the supplementary material. The particular Arduino Nano that we purchased required a specific driver in order to communicate with the Mac OS. The manufacturer should provide a link to the location of their driver files.
Powering the device requires either the USB or a 5~V power cable of at least 250~mA to be connected to the main PCB. The full detector consumes less than 1~W of power. If running the detector off a battery or at high count rates, it is recommended that the OLED and LED are turned off. This can be done by changing the booleans ``OLED" and ``LED" to 0 in the Arduino code.
\section{The electronics circuitry}\label{sec:electronics}
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\columnwidth]{full_circuit.jpg}
\caption{The complete circuit diagram. The position ``A" represents the signal from the SiPM; ``B," the signal from the amplifier; and ``C," the signal from the peak detector.}
\label{fig:electronics}
\end{figure}
The complete electronics circuit is shown in Fig.~\ref{fig:electronics}. The SiPM circuit, outlined in blue on the left, is mounted on the plastic scintillator in the light-tight enclosure, while the rest of the circuitry is contained on the main PCB. In the following, we will describe the general principle behind how the circuit was designed and then give a more in-depth explanation of the various components and their respective functions.
\subsection{The electronics overview}
The general principle behind how we are measuring the signal from a muon interaction is shown in Fig.~\ref{fig:principle}. In this figure, there are three waveforms, labeled ``A," ``B," and ``C," that correspond to the positions labeled similarly in Fig.~\ref{fig:electronics}. A muon-induced photo-avalanche in the SiPM will create a positive pulse, whose width is $\mathcal{O}(0.5~\mu\mathrm{s})$ and height is typically between 10--100~mV. This pulse is sent through a noninverting amplifying circuit. Here, the pulse is amplified by approximately a factor of six and passed to a peak-detector circuit that outputs a pulse that rises to the peak of the amplified pulse but decays slowly over a period of roughly 100~$\mu$s. The Arduino samples the decaying pulse and uses this information to calculate the initial pulse amplitude.
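The amplitude reconstruction can be illustrated by inverting the exponential decay of the peak detector's output. The $\sim$100~$\mu$s decay scale comes from the text, but the exact decay constant and sample delay below are assumptions for illustration, not the values used in the actual firmware:

```python
# Illustrative reconstruction of the peak amplitude from a delayed sample of
# the peak detector's exponentially decaying output. TAU_US and the 10 us
# sample delay are assumed values, not taken from the project firmware.
import math

TAU_US = 100.0                    # assumed exponential decay constant (us)

def peak_from_sample(v_sample, t_sample_us, tau_us=TAU_US):
    """Invert V(t) = V_peak * exp(-t / tau) to recover V_peak."""
    return v_sample * math.exp(t_sample_us / tau_us)

# A 600 mV peak sampled 10 us after the maximum has decayed to ~543 mV;
# the correction recovers the original amplitude.
v_meas = 600.0 * math.exp(-10.0 / TAU_US)
print(round(peak_from_sample(v_meas, 10.0), 1))   # -> 600.0
```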
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{osc.jpg}
\caption{Traces on an oscilloscope at the various parts of the circuit. The software trigger threshold, shown as a dotted green line, is set in the Arduino code. The waveform labeling (A, B, C) corresponds to the parts of the circuit in Fig.~\ref{fig:electronics} that we are measuring. The red ``X'' represents the first Arduino sample.}
\label{fig:principle}
\end{figure}
\subsection{The SiPM circuit}
The DC-DC boost converter circuit, shown in the bottom left of Fig. \ref{fig:electronics}, is an off-the-shelf device that converts the 5~V input to the SiPM breakdown voltage (approximately 24.7~V) plus an overvoltage. We have chosen to operate the SiPM at an overvoltage of 4.7~V, which improves the low-level signal response but in return also increases the dark rate. The potentiometer on the DC-DC boost converter is used to bring the potential between the OUT+ and OUT- pins to +29.4~V.
There is a 10~$\mu$F electrolytic capacitor (C1 in Fig.~\ref{fig:electronics}) at the output of the power supply, which is known as a bypass capacitor (decoupling capacitor). This capacitor has two main purposes: to locally store energy for when the SiPM discharges and to act as a filter to decouple noise generated by the power supply. The internal impedance of the capacitor causes it to act as a low-pass filter, letting low frequencies through and suppressing high frequencies.
Prior to the SiPM, the bias voltage is sent through a series of low-pass filters. A low-pass filter attenuates frequencies higher than the ``cut-off'' frequency, defined as the frequency at which the signal amplitude drops to $1/\sqrt{2}$ (approximately 70.7\%) of its original value, i.e., the half-power point. Schematically, it is represented by a resistor followed by a capacitor to ground, and the cut-off frequency is $\mathrm{f_{cut}} = \frac{1}{2 \pi R C}$. Two low-pass filters can be seen on the left of the SiPM circuit.
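As a quick numerical check of this relation, the cut-off frequency can be evaluated directly. The component values below are illustrative assumptions rather than the values read from the schematic:

```python
import math

def cutoff_frequency(r_ohms, c_farads):
    """Cut-off frequency of a first-order RC low-pass filter: 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

# Illustrative (assumed) component values, not taken from the schematic:
# R = 1 kOhm and C = 100 nF give a cut-off near 1.6 kHz.
f_cut = cutoff_frequency(1e3, 100e-9)
```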
We positively bias the cathode (Pin 3) of the SiPM to +29.4~V, while Pin 4 is connected to ground (optional) and Pins 2 and 5 are left open. The resistor to ground on the anode (Pin 1) of the SiPM is called a ``pull-down resistor," which holds the line at ground when there is no signal. An induced pulse in the SiPM circuit is then sent to the amplifying circuit.
\subsection{The amplification circuit}
This part of the circuit is known as a ``single-supply noninverting operational amplifier circuit." It takes the positive pulse from the SiPM, V$_{\mathrm{IN}}$, and amplifies it to a positive pulse, V$_{\mathrm{OUT}}$, according to Eq.~\ref{equ:amp}. ``Single-supply'' refers to the fact that we supply +5~V to the positive rail, V$_{\mathrm{+}}$, and set the negative rail, V$_{\mathrm{-}}$, to ground. An in-depth description of the operational principles behind op amps can be found in Ref.~\cite{horrowitz}.
Using the resistor values in Fig.~\ref{fig:electronics} (R1 = 100~$\Omega$, R2 = 1~k$\Omega$), the ratio between output voltage and input voltage in Eq.~\ref{equ:amp} indicates we should expect an amplification, or gain, of 11. However, due to the limited frequency response of the op amp, this is not quite the case. The circuit was designed using an op amp (LT6201) with a gain-bandwidth product of 145~MHz, which, at a gain of 11, gives a bandwidth of approximately 13.2~MHz. Since the rise time of a signal from the SiPM is a few tens of nanoseconds, we expect this high-frequency component to be attenuated. The measured peak-to-peak amplification was found to be approximately 6, as indicated by the traces in Fig.~\ref{fig:principle}.
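The gain and bandwidth arithmetic above can be summarized in a few lines; the resistor values and gain-bandwidth product are those quoted in the text:

```python
# Values quoted in the text: R1 = 100 Ohm, R2 = 1 kOhm, GBW = 145 MHz.
R1, R2 = 100.0, 1000.0
gbw_hz = 145e6

gain = 1.0 + R2 / R1          # nominal noninverting gain: 1 + R2/R1 = 11
bandwidth_hz = gbw_hz / gain  # closed-loop bandwidth: ~13.2 MHz
```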
\begin{figure*}
\centering
\begin{minipage}{.45\linewidth}
\centering
\includegraphics[width=1\columnwidth]{NIamp.jpg}\label{fig:amp}
\end{minipage}%
\hfill%
\begin{minipage}{.45\textwidth}
\centering
\centering
\begin{equation}\label{equ:amp}
\mathrm{V_{OUT}} = \mathrm{V_{IN}}(1+\frac{R_2}{R_1})
\end{equation}
\end{minipage}\\[-7pt]
\begin{minipage}[t]{1\linewidth}
\caption{Single supply noninverting op amp circuit and associated gain equation.}
\end{minipage}%
\hfill%
\end{figure*}
\subsection{The peak detector circuit}
The purpose of the peak detector circuit is to detect the amplitude of the amplified pulse and hold the voltage at that level for a sufficient time such that the Arduino can measure it, then decay and wait for the next pulse.
Fig.~\ref{fig:pd} shows the electronic schematic for our peak detector circuit, which was modified from a circuit found in Ref.~\cite{peak}. Once a pulse from the amplifying circuit enters the noninverting input of the op amp (+), the Schottky diode D2 becomes forward-biased and allows the op amp to charge the sampling capacitor C1. While charging, there is an unavoidable leakage current through the resistors R1 and R2 to ground. However, these resistors were chosen to be large enough that this effect is negligible. When the pulse from the amplifying circuit subsides, D2 becomes back-biased and forces C1 to discharge through R2. The current will then flow to ground via two different paths depending on the voltage on C1.
If there is a large voltage on C1 (greater than the forward voltage drop of D1), D1 becomes forward-biased and allows current to flow to the output of the op amp, which is now sitting at the negative rail (in our case, ground). The decay time associated with this path is R2$\times$C1. If the voltage on C1 is smaller than the forward voltage drop of D1, the diode is back-biased and current flows through the series combination of R1 and R2. The decay constant associated with this path is (R1+R2)$\times$C1. This bifurcation was found to greatly improve the response of the circuit to both very small and very large incoming pulses. The decay time was found to be sufficiently long for the Arduino to sample the pulse multiple times.
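With the component values used in the peak detector circuit (R1~=~R2~=~100~k$\Omega$, C1~=~1~nF), the two decay constants work out as follows:

```python
# Component values from the peak-detector circuit.
R1 = R2 = 100e3  # Ohms
C1 = 1e-9        # Farads

tau_large = R2 * C1          # large pulses: D1 conducts, decay through R2 only (~100 us)
tau_small = (R1 + R2) * C1   # small pulses: decay through R1 + R2 in series (~200 us)
```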
Since the output of the op amp can only be driven to 4.78~V (with 5~V supplied to the positive rail) and the voltage drop across D2 is approximately 0.5~V, the maximum output voltage becomes approximately 4.28~V. We have specifically chosen the diodes to minimize the forward voltage drop, thus allowing us to measure a higher maximum voltage.
\begin{figure*}
\centering
\begin{minipage}{.45\linewidth}
\centering
\includegraphics[width=1\columnwidth]{peakd.jpg}\label{fig:pd}
\end{minipage}%
\hfill%
\begin{minipage}{.45\textwidth}
\centering
\centering
\begin{equation}\label{eqn:time}
\tau = R \times C
\end{equation}
\end{minipage}\\[-7pt]
\begin{minipage}[t]{1\linewidth}
\caption{The peak detector circuit. We have selected R1~=~R2~=~100~k$\Omega$ and C1~=~1~nF.}
\end{minipage}%
\hfill%
\end{figure*}
\subsection{The Arduino circuit}
The analog Arduino pin, A0, monitors the output waveform from the peak detector. If the voltage rises above the trigger threshold, the Arduino makes several measurements to calculate the original pulse height. The analog pins have a 10-bit resolution over a range of 0--5~V, corresponding to a voltage measurement resolution of approximately 5~mV. Although the Arduino has a clock speed of 16~MHz, we cannot sample the waveform at this rate. Using a prescaler of 4, the sample frequency was measured to be approximately 172~kHz (5.8~$\mu$s per sample). This is sufficient for our purposes but limits the triggering signal resolution to a few microseconds.
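The quoted resolution and sampling numbers follow from simple arithmetic:

```python
# 10-bit ADC over a 0-5 V range, and the measured sampling rate from the text.
adc_bits = 10
vref = 5.0
resolution_v = vref / 2 ** adc_bits      # ~4.9 mV per ADC count

sample_rate_hz = 172e3                   # measured with an ADC prescaler of 4
sample_period_s = 1.0 / sample_rate_hz   # ~5.8 microseconds between samples
```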
Since the trigger sample (red ``X'' in Fig.~\ref{fig:principle}) may have been measured during the rise time of the peak detector, it does not accurately represent the initial pulse amplitude. Instead, we record the following five samples and use a simple exponential regression fit to calculate the amplitude at the time of the triggering sample. This is used as the measured peak amplitude.
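The extrapolation can be sketched as a linear least-squares fit to the logarithm of the samples; the exact fit used in the Arduino code is not reproduced here, and the pulse amplitude and decay constant below are synthetic:

```python
import math

def extrapolate_peak(samples, dt, t_back):
    """Estimate the amplitude at the trigger time from later samples.

    Fits ln(V) = a + b*t by least squares to samples taken every dt seconds
    (first sample at t = 0) and extrapolates back to t = -t_back.
    """
    n = len(samples)
    ts = [i * dt for i in range(n)]
    ys = [math.log(v) for v in samples]
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys)) \
        / sum((t - tbar) ** 2 for t in ts)
    intercept = ybar - slope * tbar
    return math.exp(intercept - slope * t_back)

# Synthetic decaying pulse: V(t) = 3.0 V * exp(-t / 100 us), with the first
# of five samples taken one sample period (5.8 us) after the trigger.
dt, tau, v_true = 5.8e-6, 100e-6, 3.0
samples = [v_true * math.exp(-(dt + i * dt) / tau) for i in range(5)]
v0 = extrapolate_peak(samples, dt, dt)  # amplitude back-extrapolated to trigger
```

For perfectly exponential samples the fit recovers the trigger-time amplitude exactly; on real, noisy samples it averages the noise across the five measurements.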
As shown in Fig. \ref{fig:electronics}, the Arduino performs several other functions as well. It is used to:
\begin{enumerate}
\item update the OLED screen. The provided Arduino code (Arduino/Arduino\_code/Arduino\_code.ino) communicates with the OLED screen to output the number of observed events, run time, count rate, and a bar indicating the pulse amplitude of the last event.
\item pulse an LED light with a pulse length proportional to the calculated SiPM pulse amplitude.
\item monitor the detector dead time. Each command issued to the Arduino increases the total amount of time in which the detector is unavailable to make a measurement. The Arduino code measures the time each command takes and subtracts it from the total detector live time. The main source of dead time at sea level is due to the time it takes to update the OLED screen. Each update, which happens every second, takes approximately 50~ms. The next largest source of dead time is flashing the LED proportionally to the number of photons the SiPM observed. The serial readout of the detector takes approximately 5~ms on average and is therefore not a significant component of the dead time.
\item communicate with a computer via the mini-USB terminal. This is used to record data directly to a computer through a serial port. The first three lines of the output are header file information. The remainder of the file saves data in the following format:
date stamp of the event given by Python, time stamp of the event given by Python, event number, time stamp of the event from the Arduino in milliseconds, measured pulse amplitude from the peak detector in volts, calculated SiPM pulse amplitude in mV, and measured dead time for a given event number in milliseconds.
\end{enumerate}
The code can be modified by installing the Arduino IDE and the required drivers for the Arduino Nano. To record data to the computer, one needs to ensure the Arduino Nano driver is properly installed and then run the Import\_data.py Python program in the supplementary material (Arduino/Import\_data.py). The Python program will list the available serial ports for you to select from.
\subsection{Detector calibration}\label{sec:cal}
To determine how the measured pulse amplitude given by the Arduino corresponds to the pulse amplitude from the SiPM, we removed the SiPM PCB and injected waveforms (of the same shape as SiPM pulses) of known amplitude into position A shown in Fig.~\ref{fig:electronics}. The waveforms were generated by first measuring the amplitude of a SiPM pulse as a function of time, then inputting this information into a pulse generator. The pulse generator allowed us to scale the waveform to an arbitrary amplitude between 0 and 5000~mV. The waveforms were then injected into the circuit at a desired frequency, and the Arduino was used to record the measured pulse amplitude after the peak detector circuit. With this, we are able to convert between Arduino measurements and SiPM outputs. Fig.~\ref{fig:dat} shows the resulting measurements for input pulse amplitudes varying from 0 to 1000~mV. The pulses were injected into the circuit at a frequency of 20~Hz. There is a strong correlation between the input pulse amplitude and the measured pulse amplitude (light blue circles). A 2$^{\mathrm{nd}}$-order polynomial fit between 30 and 700~mV yields a relationship of:
\begin{equation} \label{eq:linear}
y = -8.5432\times10^{-4}\,x^2 + 1.7859\,x - 33.3687,
\end{equation}
where $y$ is the measured pulse amplitude from 0 to 1024 (0--5~V on the right axis) and $x$ is the input pulse amplitude in mV. Taking the physical root of the quadratic equation, we can invert this relationship to convert a pulse amplitude measured by the Arduino into an input (SiPM) pulse amplitude.
The standard deviation of the 250 measurements for a given input voltage was found to be approximately 3.2~\%.
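As a sketch of how Eq.~(\ref{eq:linear}) can be inverted in practice (the root falling within the calibrated 30--700~mV range is the physical one):

```python
import math

# Coefficients of the calibration fit: y = a*x^2 + b*x + c, with x the input
# (SiPM) pulse amplitude in mV and y the Arduino measurement (0-1024).
a, b, c = -8.5432e-4, 1.7859, -33.3687

def arduino_reading(x_mv):
    """Forward calibration: expected Arduino reading for an input amplitude."""
    return a * x_mv ** 2 + b * x_mv + c

def sipm_amplitude(y):
    """Inverse calibration: the quadratic root within the calibrated range."""
    disc = b ** 2 - 4.0 * a * (c - y)
    return (-b + math.sqrt(disc)) / (2.0 * a)
```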
\begin{figure*}
\includegraphics[width=0.8\columnwidth]{data_plot.jpg}
\caption{Calibration data for the complete circuit. There are 250 samples per input pulse amplitude (light blue circles). The measurements are semi-opaque to show the relative distribution.}
\label{fig:dat}
\end{figure*}
The data in Fig.~\ref{fig:dat} also shows that, for input pulses with an amplitude greater than roughly 700~mV, the measured pulse amplitude saturates at approximately 4.3~V. This is due to the limited voltage range of the op amp combined with the voltage drop across the diode in the peak detector circuit. While pulses with initial amplitudes greater than 700~mV are observed, they are relatively rare.
\section{Learn about Cosmic Ray Muons}\label{sec:cr}
The desktop muon counter triggers on muons that are produced when high-energy astrophysical particles, called cosmic rays, collide with the Earth's atmosphere, producing particles that decay to muons.
In his 1950 Nobel Lecture, C.F. Powell described cosmic rays as a ``thin rain of charged particles''~\cite{Powell}.
Most cosmic rays are produced in our galaxy and are nuclei expelled in supernova explosions. About 90\% of cosmic rays are protons, 9\% are helium nuclei, and the remaining 1\% are heavier nuclei. When cosmic rays hit the nuclei of the atmosphere, a shower of particles is produced, including pions and kaons. These are the progenitors of the muons. Students may be assigned to read three classic works by physicist Bruno Rossi about cosmic rays~\cite{Bruno1, Bruno2, Bruno3}. The origin and content of cosmic rays remains a hot topic of study today, with major conferences devoted to the latest results~\cite{ICRC}. A useful resource for lectures on cosmic rays is Chapter 28 of Ref.~\cite{PDG}, The Particle Data Book. This summarizes our most up-to-date knowledge.
The muons that are ultimately produced in the shower are fundamental particles that carry electric charge of $+1$ or $-1$ and have mass that is about 200 times that of the electron. For a brief introduction to muons and their place within the Standard Model of particle physics, we recommend that students visit The Particle Adventure website~\cite{ParticleAdventure}. Muons are unstable and will decay to an electron, a neutrino, and an antineutrino. At rest, the lifetime of the muon is approximately 2.2 microseconds. Given that muons are produced in the shower at more than 10 km above the Earth's surface, Galilean relativity calculations show a very small probability of survival to reach the desktop muon counter. However, because muons are produced at high energies, relativistic time dilation extends their lifetime. As a result, muons can survive and be detected on Earth. Calculation of the different expectations for Galilean and special relativity is a useful exercise for the student.
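The comparison between the Galilean and relativistic expectations makes a short worked example. The production altitude is the order of magnitude quoted above, while the Lorentz factor is an illustrative assumption for a few-GeV muon, taken to travel at essentially the speed of light:

```python
import math

c_light = 2.998e8   # m/s
tau0 = 2.2e-6       # muon lifetime at rest (s)
d = 10e3            # production altitude (m); order of magnitude from the text
gamma = 38.0        # assumed Lorentz factor for a few-GeV muon (illustrative)

t_lab = d / c_light                                 # ~33 us of travel time
p_galilean = math.exp(-t_lab / tau0)                # no time dilation: ~3e-7
p_relativistic = math.exp(-t_lab / (gamma * tau0))  # dilated lifetime: most survive
```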
The muon flux at sea level is about one per square centimeter per minute for a horizontal detector~\cite{PDG}.
This constant bombardment by muons has pros and cons for a particle physicist.
On the plus side, cosmic ray muons are commonly used in surface-based particle physics experiments in order to commission and calibrate detectors before they are exposed to beam produced by accelerators.
Often the muons detected at sea level are accompanied by other particle debris, such as photons and protons. A relatively small amount of shielding material is often used to remove this accompanying debris, leaving only the muons for use in calibration. On the other hand, many particle physics experiments are looking for rare events, and the rare signal can be swamped by the muon signal. These experiments must be located in deep underground laboratories. The U.S. is in the process of building a new deep underground laboratory in Lead, South Dakota, which is described in Ref.~\cite{SanfordLab}.
\section{Example measurements and final remarks}
To illustrate some of the capabilities of the detector and to hopefully inspire students to make their own measurements, we have used the desktop muon counter to make several measurements. This section includes a coincidence measurement between two detectors arranged to measure the angular distribution of cosmic ray muons and several rate measurements at various altitudes and levels of overburden.
A common measurement to make with muon detectors is to determine the muon rate as a function of polar angle. According to the Particle Data Group, the muon angular dependence is proportional to cos$^2\,\theta$, where $\theta$ represents the polar angle with respect to vertical, for minimum ionizing muons with roughly 3~GeV of energy. For lower energy muons, the distribution becomes increasingly steep, while at much higher energies it flattens out, approaching sec\,$\theta$ for zenith angles $\theta < 70^\circ$~\cite{PDG}.
We placed two detectors side by side in order to minimize the angular acceptance between the two pieces of scintillator. The data for both detectors was recorded on the same computer, and the time stamp (given by the computer) was assigned an uncertainty of 5~ms to account for the serial communication delay. Data was recorded at several polar angles over the course of a day. The relative rate for the measurements is shown in Fig.~\ref{fig:coincidence} and was found to be in good agreement with a cos$^2\,\theta$ dependence.
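The expected angular dependence used for the comparison can be tabulated directly; this is the minimum-ionizing cos$^2\,\theta$ approximation quoted above, not a fit to our data:

```python
import math

def relative_rate(theta_deg):
    """Muon rate relative to vertical for ~3 GeV muons: cos^2(theta)."""
    return math.cos(math.radians(theta_deg)) ** 2

# Expected relative rates at a few polar angles, normalized to vertical.
rates = {theta: relative_rate(theta) for theta in (0, 30, 45, 60, 75)}
```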
\begin{figure}[h!]
\centering
\includegraphics[width=0.6\columnwidth]{coincidence.jpg}
\caption{The relative rate as a function of polar angle. The data is shown in black and the theoretical minimum ionizing muon distribution is shown in solid blue.}
\label{fig:coincidence}
\end{figure}
We also performed several measurements at high altitudes and at underground facilities with a significant amount of overburden. Overburden is shielding by overhead material that attenuates cosmic ray muons. Since the density of the material shielding from cosmic rays may vary, we define overburden in terms of the number of meters of water that would provide the same attenuation, abbreviated as meter-water-equivalent (m.w.e.).
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{rate.jpg}
\caption{Sample data from measurements made at various overburdens and altitudes. We see several orders of magnitude change in the raw detector count rate between an airplane flight at 41,000~ft and underground at the Super-Kamiokande detector (2700~m.w.e. overburden).}
\label{fig:measurements}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\columnwidth]{flight.jpg}
\caption{A rate measurement during a short flight. The maximum altitude was 33,000~ft.}
\label{fig:flight}
\end{figure}
Fermilab is home to several high profile neutrino experiments, each of which utilizes different methods to remove the cosmic ray muon contamination. MINER$\nu$A~\cite{minerva} and the near detector of MINOS~\cite{minos}, for example, are buried over 100~m underground in order to attenuate the cosmic ray muon rate, whereas MicroBooNE~\cite{microboone} is located in a building with very little overburden. The MiniBooNE~\cite{miniboone} detector, on the other hand, was buried under several meters of soil inside of a concrete building. Rate measurements with the desktop muon detector were made at these locations over the period of a few days. The rates as a function of measured SiPM pulse amplitude are shown in Fig.~\ref{fig:measurements} along with a high-altitude measurement made during a Trans-Atlantic flight.
Super-Kamiokande (SK)~\cite{sk} is a 50~kton Cherenkov neutrino detector located in the Kamioka mine, with an overburden of approximately 2700~m.w.e. The measurement at SK was performed using a different detector, and therefore the SiPM pulse amplitude cannot be directly compared to the other measurements. However, the SK measurement does represent a signal that originates purely from background contamination. The detector is sensitive to $\alpha$, $\beta$, and $\gamma$ radiation; however, the aluminum enclosure and the foil surrounding the plastic scintillator are sufficiently thick to attenuate most $\alpha$ and $\beta$ radiation.
For a high-altitude rate measurement, we were given permission to record data in an airplane at 41,000~ft. We see roughly a 50$\times$ increase in rate compared to the ground-level measurement. This is near the peak of the cosmic ray muon production region at approximately 45,000--60,000~ft. Another measurement was made during take-off and landing for a short flight. The resulting measurement, shown in Fig.~\ref{fig:flight}, demonstrates that we can easily identify the altitude of the airplane and correlate it to the cosmic ray muon flux.
There are many interesting physics measurements that can be made with the desktop muon detector, alone or as an array of multiple detectors. Variations of the project in the previous section could include:
\begin{enumerate}
\item Expand on the coincidence measurement presented at the beginning of this section by following the measurement outlined in Ref.~\cite{angle}, which includes a calculation of the finite size of a detector and a more in-depth description of the measurement procedure.
\item Add bluetooth, wifi, temperature sensors, or in-situ data storage with a microSD card reader to the Arduino to expand the capabilities of the detector. There is a large online community of Arduino users, and they have built up a pool of examples of how to implement these technologies.
\item Measure the relative depths of subway stations across the city using the measured muon rates.
\item Test relativistic time dilation on the cosmic ray flux by measuring the flux at various elevations, such as in an airplane or on a mountain, compared to sea level~\cite{muon}.
\item Investigate seasonal variations in muon rates. The National Oceanic and Atmospheric Administration's National Weather Service \cite{noaa} records local atmospheric conditions that can be used to investigate weather and rate correlations.
\item Determine the correlation between muon rate and altitude. This may require elevation differences of at least several hundreds of meters. Multiple detectors can be used in coincidence to improve the muon purity of the measurement.
\item Use GEANT4 to simulate the angular response function and correlate the pulse height to the energy deposited in the scintillator. This requires the knowledge of the scintillator material and familiarity with C++.
\end{enumerate}
The construction of desktop muon detectors will teach useful skills in machine- and electronics-shop activities. The code, libraries, and technical drawings are all provided. The time scale for a student to produce a muon detector is expected to be less than 100 hours. Once proficient with the machinery, we have found a student can produce approximately one detector per day. The total cost of a single detector is approximately \$100 and may decrease in time as SiPMs become less expensive.
\begin{acknowledgments}
This work is supported by the NSF grant 1505858. The authors would like to thank SensL and Fermilab, for donations that made the development of this project possible, as well as P. Fisher at MIT and IceCube collaborators at WIPAC, for their support in developing this as a high school and undergraduate project. We also extend thanks to K. Frankiewicz for performing the measurements at Super-Kamiokande; J.~Moon, D.~Torretta, and J.~Zalesak for their aid in making the measurements at Fermilab; B. Jones for the idea of developing this project for the IceCube detector; and those who taught Phys 063 at Swarthmore College for the inspiration.
\end{acknowledgments}
\pagebreak
\section{Global Magnetic Phase Diagram}
We will focus on
the
Kondo lattice model,
\begin{eqnarray}
H &=& H_f + H_c + H_K .
\label{kondo-lattice}
\end{eqnarray}
The Hamiltonian for the $f-$electron local moments is
\begin{eqnarray}
H_f &=&
\frac{1}{2}
\sum_{ ij}
I_{ij}^a
~S_{i}^a
~ S_{j}^a .
\label{H-f}
\end{eqnarray}
Here, $a=x,y,z$ are spin projections, and
$I_{ij}^a$ is the RKKY interaction between
the spin-$1/2$ moments (one per site).
We use $I$ to label the typical RKKY interaction (say, the dominant
component of the
nearest-neighbor interactions), which is antiferromagnetic.
In addition, $G$ describes the degree of frustration
({\it e.g.} $G=I_{\rm nnn}/I_{\rm nn}$, the ratio of the
antiferromagnetic next-nearest-neighbor interaction over the nearest
neighbor one), or the degree of spatial anisotropy. For our purpose,
it is adequate to know that increasing $G$ corresponds to a decrease
in the strength of the N\'{e}el order.
\begin{eqnarray}
H_c &=&
\sum_{\bf k \sigma} \epsilon_{\bf k}
c_{{\bf k}\sigma}^{\dagger} c_{{\bf k}\sigma}
\label{H-c}
\end{eqnarray}
describes a band of conduction electrons -- $x$ per site, with $0<x<1$ without loss of generality. The conduction-electron bandwidth is $W$.
The two components interact with each other through
\begin{eqnarray}
H_K
&=& \sum_i J_K ~{\bf S}_{i} \cdot {\bf s}_{c,i} ,
\label{H-K}
\end{eqnarray}
where the Kondo interaction, $J_K$, is antiferromagnetic.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{gpd.eps}
\end{center}
\caption{The global magnetic phase diagram of the Kondo
lattice at zero temperature.
$j_K$ is the Kondo coupling measured in terms of
the conduction-electron bandwidth.
$G$ labels
frustration.
As illustrated,
three phases, ${\rm AF_S}$, ${\rm AF_L}$ and ${\rm PM_L}$,
have distinct Fermi surfaces.
The dashed lines
``$I$'' and ``$II$'' label two types
of
transitions. More detailed descriptions are given
in the main text.}
\label{gpd}
\end{figure}
The
zero-temperature
phase diagram can be specified in the multi-dimensional parameter
space of $x$, $I/W$, $J_K/W$, and $G$. In a given material,
the conduction electron density $x$ is fixed, but the other
parameters can be varied. Here, we will consider a fixed and
(as in real materials)
relatively
small $I/W$.
In Fig. \ref{gpd}, the horizontal axis labels
$j_K \equiv J_K/W$,
while the
vertical axis describes the local moment magnetism that is completely
decoupled from the
conduction electrons. When $G$ is sufficiently
large,
the conventional N\'{e}el state becomes unstable towards states which preserve spin-rotational invariance but either break translational invariance (spin Peierls) or preserve it (spin liquid). We will not get into that regime, but will instead focus on the region of $G$ where the local moment component itself remains in the N\'{e}el state.
Still,
incorporating the parameter $G$
allows us to discuss the phase diagram
beyond the traditional picture~\cite{Doniach,Varma}, which
arises from considering only an energetic competition between
the RKKY ($I$) and Kondo ($J_K$) couplings.
The magnetic phase diagram is shown in Fig. \ref{gpd}.
The ${\rm PM_L}$ phase describes a heavy Fermi
liquid with a Fermi surface that encloses $1+x$ electrons per
unit cell within the paramagnetic Brillouin zone~\cite{Bickers}.
This phase
can be most easily seen at $J_K/W \gg 1$, as illustrated
in Fig. \ref{grip}a.
At each of the $xN_{\rm site}$ sites (where $N_{\rm site}$
is the number of unit cells in the system),
a local moment and a conduction electron form a
tightly bound
singlet,
\begin{eqnarray}
|s\rangle_i = \frac{1}{\sqrt{2}}\left(|\uparrow\rangle_f|\downarrow\rangle_c - |\downarrow\rangle_f|\uparrow\rangle_c\right),
\label{tight-singlet}
\end{eqnarray}
with a large binding energy of order $J_K$.
Each of the remaining $(1-x)N_{\rm site}$ sites hosts a lone
local
moment
which, when projected to the low energy subspace, is written as
\begin{eqnarray}
|{\rm lone~local~moment}~\sigma\rangle_i = (-\sqrt{2}\sigma)\, c_{i,\bar{\sigma}}|s\rangle_i .
\label{lone-moment}
\end{eqnarray}
In other words, if we consider $|s\rangle$ as the vacuum state,
a lone
local
moment behaves as a hole with infinite repulsion (there is only
one conduction electron in the singlet) but with a kinetic energy
of order $W$~\cite{LaCroix}. In the paramagnetic
phase, we can invoke the Luttinger theorem to conclude that the Fermi
surface encloses $(1-x)$ holes or, equivalently,
$(1+x)$ electrons per unit cell. This is the heavy fermion state in which
local moments, through an entanglement with conduction electrons,
participate in the
electron fluid~\cite{Bickers}.
The Fermi surface is large in this sense,
and the phase is labeled as ${\rm PM_L}$.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{grip1.eps}
\vskip 0.8cm
\includegraphics[width=0.45\textwidth]{grip2.eps}
\end{center}
\caption{(a) Kondo singlets and lone
local
moments in the
$J_K \gg W \gg I$ limit;
(b)
In the opposite $J_K \ll I \ll W $ limit,
static Kondo singlets do not form,
but dynamical singlet correlations do exist.
}
\label{grip}
\end{figure}
Another corner of the phase diagram where exact statements can be made
is for
$J_K/W
(
\ll I/W
)
\ll 1$. For simplicity, we will consider the case
with Ising spin anisotropy. The spin excitation spectrum of the
N\'{e}el ordered local moment component is gapped; it follows that the
Kondo coupling is irrelevant in the
renormalization group sense. This is pictorially illustrated in
Fig.~\ref{grip}b, where the Kondo coupling provides
dynamical
singlet correlation, but does not succeed in forming
any
``grip''
(static Kondo singlet).
Local moments stay charge neutral, and they do not contribute to the
electronic excitations.
The Fermi surface is small. We call this phase ${\rm AF_S}$.
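The Fermi-surface counting in the two limits can be summarized in a toy bookkeeping function; the value of $x$ below is illustrative:

```python
def fermi_surface_counts(x):
    """Fermi-surface bookkeeping for conduction filling x (0 < x < 1).

    In PM_L the local moments join the Fermi sea: the large Fermi surface
    encloses 1 + x electrons per unit cell, equivalently 1 - x holes.
    In AF_S the moments stay charge neutral: the small surface encloses x.
    """
    return {"large": 1.0 + x, "small": x, "holes_large": 1.0 - x}

counts = fermi_surface_counts(0.7)  # illustrative conduction filling
```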
As the system moves from the ${\rm AF_S}$ phase to the
${\rm PM_L}$ phase, two things happen. First, the singlet
correlation between the local moments and conduction
electrons becomes stronger: when the ``grip'' finally forms,
Kondo screening is realized. Second, the
N\'{e}el order becomes more fragile and eventually goes away.
Traditionally, it is believed that the Kondo screening develops
before the N\'{e}el order disappears. There is then an
intermediate magnetically ordered state, in which the Fermi volume
in the magnetic zone (for commensurate order) is the same as
that of the ${\rm AF_S}$ phase. Nonetheless, the Fermi surface
of this intermediate phase is labeled as large,
in the sense that local moments have become
a
part of the electron fluid.
This
${\rm AF_L}$ phase can be thought of as a spin-density-wave
(SDW) state formed out of the heavy fermion quasiparticles
of the ${\rm PM_L}$ phase;
indeed, the Fermi surface of
the ${\rm AF_L}$ phase is adiabatically connected to that of
${\rm PM_L}$, when the magnetic order parameter is switched
off. This Fermi surface of the ${\rm AF_L}$ phase
has a different topology from that
of
the ${\rm AF_S}$ phase (as can be most easily seen near
the
multicritical
point);
the two phases are separated by a Lifshitz transition.
The magnetic quantum transition is between
the ${\rm AF_L}$ and ${\rm PM_L}$ phases, and is labeled type II.
It is also possible, however, to have a direct transition between the ${\rm AF_S}$ and ${\rm PM_L}$ phases.
This is the type I transition shown in Fig.~\ref{gpd}.
\section{Quantum Critical Points}
At the type II magnetic transition,
the effective Kondo screening scale of the lattice -- the coherence
temperature -- is finite. The quantum critical point belongs
to the Hertz-Moriya-Millis type~\cite{Hertz76,Moriya,Millis}.
The type I magnetic transition, however, goes directly from
the ${\rm AF_L}$ phase to the ${\rm PM_L}$ phase.
The transition is second order if
$z_L$, the quasiparticle residue of the large Fermi surface in
the ${\rm PM_L}$ phase, and $z_S$, its counterpart of the small Fermi
surface in the ${\rm AF_S}$ phase, go to zero as the transition is
reached from the respective sides.
The coherence temperature vanishes
-- and
the
Kondo singlets
disintegrate -- as the QCP is approached from
the ${\rm PM_L}$ side.
At such a magnetic QCP, the destruction of Kondo screening coincides with
the onset of magnetic ordering. The understanding of how the quasiparticles are actually destroyed at the QCP comes from microscopic considerations.
One mechanism
is
the local quantum
criticality~\cite{Si-Nature,GrempelSi,ZhuGrempelSi,SiZhuGrempel}.
Fluctuations
of the magnetic order parameter are the softest at the magnetic QCP.
These slow fluctuations in turn decohere the Kondo screening, making
the Kondo effect critical. The latter characterizes the
emergent
non-Fermi
liquid critical excitations, which are in addition to the critical
fluctuations of the magnetic order parameter.
The local QCP has a number of characteristics.
Electronically,
the $f-$electrons turn from being itinerant to being localized across
the QCP. There are two corollaries. The Fermi surface undergoes a sudden
reconstruction at the QCP. In addition, the continuous vanishing of
both
$z_L$ and $z_S$ implies that the effective mass
diverges as the QCP is approached from both the paramagnetic and
magnetic sides. It is worth expanding on this feature for the
magnetic side. The mass enhancement in heavy fermions has traditionally
been associated with the formation of Kondo resonance. How can the
${\rm AF_S}$ phase, having no Kondo resonance, acquire a small quasiparticle residue and a large effective mass? As Fig.~\ref{grip}b illustrates, here, even though a Kondo singlet is not formed in the static sense,
dynamical
singlet correlation
does occur
and becomes
stronger as the QCP is approached.
It is this dynamical effect that enhances both the thermodynamic mass (as measured in, {\it e.g.}, the specific heat coefficient) and the electronic mass (as measured in, {\it e.g.}, dHvA).
A second feature of the local QCP arises in the magnetic dynamics.
In contrast to the Gaussian fixed point of the
$T=0$
SDW transition,
where $\omega/T$ scaling is violated~\cite{Hertz76,Moriya,Millis},
the interacting nature of the
local QCP
produces
an $\omega/T$ scaling. Moreover, the magnetic
dynamics contains a fractional exponent. The dynamical spin
susceptibility turns out to have the
form~\cite{Si-Nature,GrempelSi,SiZhuGrempel}
\begin{eqnarray}
\chi({\bf q},\omega ) =
\frac{\rm const.}
{I_{\bf q} - I_{\bf Q} + (-i\omega)^{\alpha}
M
(\omega/T)} ,
\label{chi-q-omega}
\end{eqnarray}
where ${\bf Q}$ is the antiferromagnetic ordering wavevector.
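To see the $\omega/T$ scaling explicitly, note that at the ordering
wavevector ${\bf q}={\bf Q}$ the first two terms in the denominator cancel,
so that
\begin{eqnarray}
\chi({\bf Q},\omega ) =
\frac{\rm const.}{(-i\omega)^{\alpha} M(\omega/T)}
= \frac{1}{T^{\alpha}}\,
\frac{\rm const.}{(-i\omega/T)^{\alpha} M(\omega/T)} ,
\end{eqnarray}
i.e., $T^{\alpha}\chi({\bf Q},\omega)$ depends on frequency and temperature
only through the ratio $\omega/T$.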
\section{Experiments}
Experimental data
in heavy fermions
suggest that both the antiferromagnetic and
paramagnetic phases are
indeed
Fermi liquids. In YbRh${\rm _2}$Si${\rm _2}$, for
instance, the resistivity is $T^2$ on both sides of the QCP~\cite{Custers}.
Related features have been observed
in CePd${\rm _{2}}$Si$_{\rm _2}$~\cite{Grosche,Flouquet}
and
CeCu${\rm _{6-x}}$Au${\rm _x}$~\cite{Lohneysen}.
In the
case of CeCu${\rm _{6-x}}$Au${\rm _x}$,
for $x \gtrsim x_c$ with small $T_N$,
however, the specific heat coefficient
does not appear to saturate at the lowest measured temperatures;
this region remains to be clarified.
There are also extensive Fermi surface measurements via dHvA.
It is well established that the paramagnetic metal phase has
a large Fermi surface~\cite{Lonzarich}.
Perhaps less well known is the fact that antiferromagnetic heavy fermions
are typically found to have a small Fermi surface
(for
recent
reviews,
see Ref.~\cite{Julian}).
Since a large magnetic field
-- which is a big perturbation to heavy fermions --
is necessarily involved in the experiment,
it is natural that the
dHvA
measurement
generically
probes
the parts of the phase diagram
sufficiently away from the magnetic-transition region. By extension,
it is natural that the ${\rm AF_S}$ and ${\rm PM_L}$ phases are
the ones that are commonly identified in such measurements.
We now turn to experiments which zoom in on the transition region.
Consider first the inelastic neutron scattering experiments.
CeCu${\rm _{6-x}}$Au${\rm _x}$, at $x =0.1 \approx x_c$, is the
most striking case of a single crystal showing a dynamical spin
susceptibility with a fractional exponent and an $\omega/T$ scaling,
of the form given in Eq.~(\ref{chi-q-omega})~\cite{Schroder,Stockert}.
Recent measurements~\cite{Kadowaki} have been carried out in
Ce(Ru${\rm _{1-x}}$Rh$_{\rm x}$)$_{\rm 2}$Si${\rm _2}$. This
single crystal displays a paramagnetic to antiferromagnetic
metal transition at $x_c \approx 0.04$. Close to this concentration,
the inelastic neutron
scattering data
is
well described
by the Lorentzian form,
$\chi({\bf q},\omega ) = {\rm \chi_{\bf q}(T)}/
{[1-i\omega / \Gamma_{\bf q}(T)]}$,
with $\Gamma_{\bf Q} \sim T^{3/2}$.
This form, violating $\omega/T$ scaling,
is what is expected in a 3D AF SDW QCP~\cite{Hertz76,Moriya,Millis}.
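To make the scaling violation explicit, note that the imaginary part of
this Lorentzian at the ordering wavevector is
\begin{eqnarray}
\chi''({\bf Q},\omega ) = {\rm \chi_{\bf Q}(T)}\,
\frac{\omega/\Gamma_{\bf Q}}
{1+\left(\omega/\Gamma_{\bf Q}\right)^{2}} ,
\end{eqnarray}
which, with $\Gamma_{\bf Q} \sim T^{3/2}$, depends on frequency through the
combination $\omega/T^{3/2}$ rather than through $\omega/T$.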
In addition,
the electrical resistivity and specific heat data are
also reasonably consistent with the SDW picture.
While it will be important for future experiments to map out the $T_N$
line closer to the $T=0$ transition (the lowest finite $T_N$ that has
been determined so far is of the order of 3 K), the evidence seems
quite strong that we are finally seeing a Hertz QCP!
Unlike the
quasi-2D nature~\cite{Stockert,Schroder} seen in
CeCu${\rm _{6-x}}$Au${\rm _x}$, the magnetic fluctuations in
Ce(Ru${\rm _{1-x}}$Rh$_{\rm x}$)$_{\rm 2}$Si${\rm _2}$
are
three-dimensional~\cite{Kadowaki}. This
places
Ce(Ru${\rm _{1-x}}$Rh$_{\rm x}$)$_{\rm 2}$Si${\rm _2}$
lower in our global
phase
diagram (Fig.~\ref{gpd})
than
CeCu${\rm _{6-x}}$Au${\rm _x}$.
This placement
is consistent with
the identification of
type II and type I
quantum
transitions in these
two materials, respectively.
Consider next
electronic measurements in the immediate
vicinity of the transition. Detailed Hall effect
studies~\cite{Paschen}
have
been carried out
in YbRh${\rm _2}$Si${\rm _2}$.
In this
material,
the anomalous
Hall component is relatively
small
at low temperatures,
allowing the
extraction of the normal Hall component. The Hall coefficient
shows a rapid crossover at finite temperatures, extrapolating
to a jump in the $T=0$ limit at the magnetic QCP.
The result
provides
strong evidence that
the second order quantum transition in
YbRh${\rm _2}$Si${\rm _2}$
goes
directly from ${\rm AF_S}$ to ${\rm PM_L}$.
We
already
mentioned that the large magnetic field needed in dHvA
makes it
generally
difficult to use this method
to
zero in on the quantum
critical
point.
A fortuitous situation arises in CeRhIn$_{\rm 5}$.
A magnetic field, of the order used in the dHvA measurement,
is just what is needed to entirely suppress superconductivity
and expose a pressure induced zero-temperature transition from
an antiferromagnetic metal to a paramagnetic metal~\cite{Park}.
Indeed, the dHvA result~\cite{Onuki}
can be interpreted in terms of a sudden reconstruction
of the Fermi surface,
from that of ${\rm AF_S}$ to its counterpart
of ${\rm PM_L}$, across the
critical pressure.
Moreover, the (electronic) dHvA mass shows a large (more than
10-fold) increase as the QCP is approached. Taken together,
these measurements provide strong evidence for a field-
and pressure-induced type I magnetic QCP in CeRhIn$_{\rm 5}$.
Finally, some thermodynamic ratios also turn out to be illuminating
in this context.
We have
shown in Ref.~\cite{Zhu03} that the Gr\"{u}neisen
ratio $\Gamma$ -- the ratio of the thermal expansion, $\alpha
\equiv {1 \over V}
{\partial V \over \partial T}$, over the specific heat, $c_p$ --
has to diverge at any QCP where the control parameter is linearly
coupled to pressure. Scaling implies that, at the QCP,
$\Gamma \sim 1/T^x$, with the exponent $x=1/z\nu$ (where $z$ is
the dynamic exponent and $\nu$ the correlation length exponent).
Measurement~\cite{Kuchler03} in YbRh${\rm _2}$Si${\rm _2}$ does
indeed find such a divergence. Moreover, the exponent $x \approx 0.7$
is different from the value ($1$) expected from an AF SDW QCP, but
is consistent with the value calculated from the local QCP picture.
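Concretely, taking the standard mean-field values $z=2$ and $\nu=1/2$ for a
three-dimensional AF SDW QCP (an illustrative, textbook assignment),
\begin{eqnarray}
x_{\rm SDW} = \frac{1}{z\nu} = \frac{1}{2\times\frac{1}{2}} = 1 ,
\qquad
x \approx 0.7 \;\Rightarrow\; z\nu \approx 1.4 .
\end{eqnarray}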
\section{Summary and Outlook}
We have shown that two types of magnetic metal phases - ${\rm AF_S}$ and
${\rm AF_L}$ - can occur in Kondo lattices, along with the standard
heavy fermion paramagnetic metal phase ${\rm PM_L}$. This opens up
a new type of magnetic quantum phase transition,
which goes
directly from
${\rm AF_S}$ to ${\rm PM_L}$. The transition is second order when
quasiparticle residues vanish. At this magnetic QCP, the critical
excitations include not only the fluctuations of the order parameter
but also those associated with a critical Kondo screening. Local quantum
criticality is one form of such a QCP.
We close with a few general remarks.
The
global phase diagram
makes it
desirable to systematically
study magnetic quantum transitions in heavy
fermion metals
with different
degrees of frustration. For instance, ${\rm YbAgGe}$ has a hexagonal
lattice and its spin interactions may very well be frustrated.
Indeed, the magnetic phase transitions in this material
are rather unusual~\cite{Canfield}.
The venerable ${\rm UPt_3}$ also has a hexagonal lattice and
it could be instructive to
study
quantum phase
transitions in this material or its relatives.
The prominent role played by the destruction of Kondo screening
in our global phase diagram has other implications.
We may, for instance, replace the N\'{e}el order parameter discussed
so far by a spin glass one.
We are then led to two types of quantum spin glass transitions.
The type II transition
(${\rm SG_L}$ to ${\rm PM_L}$) is expected to be described by
a Gaussian fixed point~\cite{Sachdev95,Sengupta95},
with
a violation of $\omega/T$ scaling in the magnetic dynamics.
A type I transition (${\rm SG_S}$ to ${\rm PM_L}$), on the other
hand, can correspond to an interacting fixed point,
yielding an
$\omega/T$ scaling. Recent inelastic neutron scattering
study~\cite{Dai95}
near a spin-glass QCP of
${\rm Sc_{1-x}U_{x}Pd_3}$~\cite{Maple00} does indeed find
an $\omega/T$ scaling and a fractional exponent, suggesting
a destruction of Kondo screening at this spin-glass QCP.
The striking similarity of these data with those of
${\rm UCu_{5-x}Pd_x}$~\cite{Aronson95,MacLaughlin} naturally suggests
that the latter too originates from a destruction
of Kondo screening at a spin-glass QCP.
Finally, it is possible that the physics of magnetic quantum
criticality with critical Kondo screening in heavy fermion metals
connects to that of certain quantum critical spin liquid
states
in quantum
insulating
magnets~\cite{Senthil03,Si04,Senthil05}. Itinerant systems
such as heavy fermions are inherently spin-$1/2$ systems. This is
in contrast to
insulating
magnetic materials,
in which
the size of spin is
typically larger than $1/2$,
making
quantum effects less pronounced.
So, perhaps, heavy fermion metals can also play an important role
in the on-going search for
both
critical
and
stable spin liquid states.
I am particularly grateful to D. Grempel, K. Ingersent,
S. Kirchner, E. Pivovarov,
S. Rabello, J. L. Smith,
J.-X. Zhu, and L. Zhu
for collaborations in this area, and many colleagues
for discussions. The work has been partially supported
by NSF Grant No.\ DMR-0424125 and the Robert A. Welch
Foundation.
\section{Introduction}\label{sec:intro}
In heavy-ion collision experiments it is possible to generate densities and temperatures
that are comparable to the conditions in the early universe. These experiments are an
important tool to study open questions in cosmology, astrophysics and high-energy
physics. The hot and dense plasma generated in heavy-ion collisions is dominated by quarks
and gluons. QCD is an asymptotically free theory and at high enough temperatures and
densities one expects that the quarks and gluons become deconfined. If the collisions are
off-centre, very large magnetic fields can be
generated~\cite{Kharzeev:08:01,Skokov:09:01,McLerran:13:01}. For these reasons one can
expect that anomalous transport phenomena~\cite{Kharzeev:16:01, Liao:16:01} might play a
role in heavy-ion collision experiments and it is of great interest to study anomalous
transport in QCD.
Prominent examples of anomalous transport phenomena are the induction of an axial or
vector current parallel to an external magnetic field in a dense chiral medium, the
so-called Chiral Separation effect (CSE)~\cite{Son:04:01,
Metlitski:05:01,Son:07:01,Kharzeev:07:01} and Chiral Magnetic effect
(CME)~\cite{Vilenkin:80:01,Fukushima:08:01}, respectively. In combination the CSE and the
CME can give rise to a gap-less hydrodynamic mode, the Chiral Magnetic
Wave~\cite{Burnier:11:01,Kharzeev:11:02}. For reviews about the experimental
signatures of anomalous transport effects see for example~\cite{Kharzeev:16:01,
Liao:16:01}.
Because of their relation to the axial anomaly it has been argued that the anomalous
transport coefficients are universal and do not get any corrections in interacting
theories. Closer investigations revealed, however, that there are two scenarios where
corrections to the anomalous transport coefficients can occur: If chiral symmetry is
spontaneously broken~\cite{Newman:06:01, Buividovich:13:02, Buividovich:14:02} and in an
unquenched theory if the currents couple to dynamical gauge
fields~\cite{Jensen:13:01,Gorbar:13:01,Gursoy:14:01}.
The focus of this work is the CSE in QCD, where non-perturbative corrections to the
transport coefficient can be expressed in terms of the in-medium amplitude $\gpgg$ of the
decay $\pi^0 \to \gamma \gamma$~\cite{Newman:06:01}:
\begin{equation}
\label{eq:chiralsep} \ja_i = \scsc B_i, \quad \scsc= \scscf \lr{1 - g_{\pi^0 \gamma\gamma}} ,
\end{equation}
where $\ja_i$ is the axial current density and $B_i$ the external magnetic
field. In the limit $\gpgg \to 0$ the transport coefficient $\scsc$ reduces to the value
for free chiral quarks $\scscf$. For a single quark flavour with $N_c$ colour degrees of
freedom it is given by
\begin{equation}
\label{eq:scscf} \scscf = \frac{q N_c \mu}{2 \pi^2},
\end{equation} where $q$ is the electrical charge of the quark and $\mu$ the quark
chemical potential.
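As a quick numerical illustration (a minimal sketch; the up-type charge $q=2/3$ in units of $e$ and $\mu = 300 \ \MeV$ are illustrative choices, not values used in this work), the free-fermion coefficient can be evaluated as:

```python
import math

def sigma_cse_free(q, n_c, mu):
    # Free-fermion CSE coefficient: sigma = q * N_c * mu / (2 pi^2)
    return q * n_c * mu / (2.0 * math.pi ** 2)

# Illustrative values: an up-type quark, q = 2/3 (in units of e),
# N_c = 3 colours, mu = 300 MeV; sigma comes out in units of e * MeV.
val = sigma_cse_free(2.0 / 3.0, 3, 300.0)
print(val)  # ~ 30.4
```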
In the linear sigma model $\gpgg$ can be calculated and in the phase with broken chiral
symmetry (for sufficiently small chemical potential) it is given by $\gpgg = \frac{7
\zeta\lr{3} m^2}{4 \pi^2 T^2}$, where $\zeta$ is the Riemann $\zeta$-function, $m$ is the
constituent quark mass and $T$ is the temperature~\cite{Newman:06:01}. Plugging in the
values $m \sim 300 \ \MeV$ and $T \sim 150 \ \MeV$, which give a realistic low-energy
description of the chirally broken phase of QCD~\cite{Baboukhadia:97:01}, we find a
correction of order $100 \%$ which suppresses the CSE current. Corrections suppressing the
CSE were also found in other model calculations~\cite{Gorbar:09:01, Gorbar:11:01,
Gorbar:11:02, Amado:14:01,Jimenez-Alba:14:01}.
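The size of this correction is easy to check numerically; the following minimal sketch reproduces the estimate quoted above:

```python
import math

# Riemann zeta(3) by direct summation (accurate to ~1e-11 here).
zeta3 = sum(1.0 / n ** 3 for n in range(1, 200000))

def g_pi_gamma_gamma(m, temp):
    # Linear sigma model estimate: g = 7 zeta(3) m^2 / (4 pi^2 T^2)
    return 7.0 * zeta3 * m ** 2 / (4.0 * math.pi ** 2 * temp ** 2)

g = g_pi_gamma_gamma(300.0, 150.0)  # m = 300 MeV, T = 150 MeV
print(g, 1.0 - g)  # g ~ 0.85: the CSE coefficient is suppressed to ~15%
```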
For accurate predictions of signatures of anomalous transport effects in
heavy-ion collision experiments it is desirable to gain a quantitative, model-independent
understanding of possible corrections to the anomalous transport coefficients from
first-principle lattice QCD simulations. Previous lattice studies looked at the infrared
values of the anomalous transport coefficients for the
CME~\cite{Yamamoto:11:01,Yamamoto:11:02} and the Chiral Vortical Effect
(CVE)~\cite{Braguta:13:01,Braguta:14:01}. These studies found a significant suppression of
the CME and the CVE at both low and high temperatures, conflicting with expectations based
on the hydrodynamic approximation. At least at high temperatures the thermodynamic
consistency arguments fixing the anomalous transport coefficients within this
approximation should be valid~\cite{Son:09:01,Sadofyev:11:01}. It is possible that the
origin of this discrepancy lies in the use of a naively discretised non-conserved vector
current~\cite{Yamamoto:11:01,Yamamoto:11:02} and energy-momentum
tensor~\cite{Braguta:13:01,Braguta:14:01}. Moreover, the simulations
in~\cite{Yamamoto:11:01,Yamamoto:11:02} were performed with non-chiral Wilson--Dirac
lattice fermions.
In this contribution we report on a first-principles lattice study of the CSE, previously
published in~\cite{Puhr:16:03}. To avoid unquantifiable systematic errors we work with
finite-density overlap fermions~\cite{Bloch:06:01}, which respect a lattice version of
chiral symmetry, and use the properly defined conserved lattice axial vector current
density~\cite{Hasenfratz:02:01,Kikukawa:98:01}:
\begin{equation}
\label{eq:j5} \jxa = \tfrac{1}{2} \bar{\psi} \left( - \gamma_5\K + \K
\gamma_5(1-\Dov)\right) \psi ,
\end{equation}
where $\K = \frac{\partial \Dov}{\partial \Theta_{x,\mu}}$ is the
derivative of the overlap operator $\Dov$ over the $U(1)$ lattice gauge field
$\Theta_{x,\mu}$. With the definition \eqref{eq:j5} the lattice axial current transforms
covariantly under the lattice chiral symmetry. For vanishing bare quark mass it is
therefore protected from renormalisation and can be directly related to the continuum
axial current density $j^5_\mu=\bar{\psi}\gamma_5\gamma_\mu\psi$, which enters
Equation~\eqref{eq:chiralsep}. Taking the expectation value of~(\ref{eq:j5}) and using the
Ginsparg--Wilson equation to simplify the resulting expression finally yields
\begin{equation}
\label{eq:<j5>} \langle \jxa \rangle = \Tr\left(\Dov^{-1}\frac{\partial \Dov}{\partial
\Theta_{x,\mu}} \gamma_5 \right).
\end{equation}
Efficiently computing the derivatives $\frac{\partial \Dov}{\partial
\Theta_{x,\mu}}$ with high accuracy is a non-trivial numerical problem and we developed a
new numerical algorithm for this purpose. For details on the evaluation of the derivatives
we refer the reader to~\cite{Puhr:16:01}.
\section{Simulation parameters and numerical setup}\label{sec:setup}
Lattice QCD with dynamical fermions has a sign problem at finite quark chemical
potential. In order to avert the sign problem we work in the quenched approximation and
neglect the effects of sea quarks. While calculations within a random matrix model show
that the chiral condensate in quenched QCD vanishes and chiral symmetry is restored for
any non-zero chemical potential~\cite{Stephanov:96:01}, the presence of an external
magnetic field can potentially change this picture. On the one hand random matrix theory
is no longer applicable in this case and on the other hand non-perturbative corrections to
the CSE due to the formation of a new type of condensate, the so-called chiral shift
parameter~\cite{Gorbar:09:01,Gorbar:11:01,Gorbar:11:02}, are possible.
The $\Su(3)$ gauge configurations are generated using the tadpole-improved Lüscher--Weisz
gauge action~\cite{Luescher:85:01}. We use three different parameter sets for our
simulations: $V=\LT\times \LS^3 = 6\times 18^3$ with $\bzero=8.45$ corresponding to a
temperature $T>T_c$ and $V = 14\times 14^3$ and $V = 8 \times 8^3$ with $\bzero=8.10$
corresponding to $T<T_c$, where $\LT$ and $\LS$ are the temporal and spatial extent of the
lattice and $T_c \approx 300 \ \MeV$ is the deconfinement transition temperature of the
Lüscher--Weisz action~\cite{Gattringer:02:01}. To fix the lattice spacing $a$ we take the
results from~\cite{Gattringer:01:01}. The values of all parameters in lattice and physical
units are summarised in Table~\ref{tab:params}.
\begin{table}[thb]
\small
\centering
\begin{tabular}{p{3pt}llccc}
\toprule[1pt]
\multicolumn{2}{l}{\multirow{2}{*}{Setup}} & $\bzero$ & $8.1$ & $8.1$ & $8.45$ \\
& & Volume & $14 \times 14^3 $ & $8 \times 8^3 $& $6 \times 18^3 $ \\
\midrule[1pt]
& & Lattice & \multicolumn{3}{c}{Physical Value} \\
\midrule[1pt]
$a$ & $[\fermi]$ & $1$ & $0.125$ & $0.125$ & $0.095$ \\
$V_S $ & $ [\fermi^3]$ & $\LS^3$ & $5.4$ & $1.0$ & $5.0$ \\
$T $ & $ [\MeV]$ & $\LT^{-1}$ & $113$ & $197$ & $346$ \\
$\mu $ & $ [\MeV]$ & $0.050 $ &$79$ & $\cdots$ & $\cdots$ \\
& & $0.100 $ & $\cdots$ & $158$ & $\cdots$ \\
& & $0.300 $ & $474$ & $\cdots$ & $\cdots$ \\
& & $0.040 $ & $\cdots$ & $\cdots$ & $83$ \\
& & $0.230 $ & $\cdots$ & $\cdots$ & $478$ \\
$\frac{qB}{\MF}$ & $[\MeV]^2$ & $\frac{2\pi}{a^2\LS^2}$ & $283^2 $ & $495^2$& $289^2 $ \\
\bottomrule[1pt]
\end{tabular}
\caption{Simulation parameters}
\label{tab:params}
\end{table}
For the $6 \times 18^3$ and $14 \times 14^3$ lattices approximately $10^3$ configurations
were generated, from which we randomly picked $100$ with topological charge
$Q=0$\footnote{One of the configurations for the parameters $V=14 \times 14^3$,
$\bzero=8.1$, $\mu = 0.050$ and a magnetic flux of $\MF = 1$ caused a serious breakdown in
the Lanczos algorithm when computing the overlap operator and only the remaining $99$
configurations were used for this parameter set.}. Additionally we chose $100$
configurations with topological charge $|Q| = 1$ for $V=6 \times 18^3$ and $111$
with $|Q|= 1$ and $97$ with $|Q|= 2$ for $V=14 \times 14^3$. For the $V=8 \times 8^3$ lattice
$5 \cdot 10^3$ configurations were generated, from which we selected three random sets of
$200$ configurations with $Q = 0$, $|Q| = 1$ and $|Q| = 2$.
The topological charge of a given gauge configuration can be calculated by taking the
difference of the number of left- and right-handed zero modes of the overlap operator:
\mbox{$Q= n_L- n_R$}. In practice configurations with zero modes with both chiralities do
not occur and the overlap operator always has either $n_R = |Q|$ right-handed or $n_L =
|Q|$ left-handed zero modes (see e.g. Section~7.3.2 in
\cite{GattringerLATTICE_QCD}). Exploiting this fact we calculated the absolute value of
the topological charge $|Q| = |n_R - n_L|$ as the number of zero eigenvalues of the
operator $\Dov \Dov^{\dag}$.
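In code, this counting amounts to the following (a toy sketch on a mock spectrum, not the actual overlap computation):

```python
import numpy as np

def abs_topological_charge(eigvals, tol=1e-10):
    # |Q| = number of zero eigenvalues of Dov Dov^dagger, counted with a
    # numerical tolerance; eigvals are assumed real and non-negative.
    return int(np.sum(np.asarray(eigvals) < tol))

# Mock spectrum with two zero modes -> |Q| = 2.
q_abs = abs_topological_charge([0.0, 0.0, 0.013, 0.27, 1.4])
print(q_abs)  # -> 2
```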
To introduce a constant, homogeneous external magnetic field on the lattice we
follow~\cite{Al_Hashimi:09:01} and introduce a magnetic flux quantum $\MF = 1, 2, 5,10$
for $V = 14 \times 14^3$ and $V = 6 \times 18^3$ at $Q=0$, and $\MF = 0, 1,2,3,4$ for $V =
8 \times 8^3$ at all $Q$. For $V = 6 \times 18^3$ we chose $\MF = 0, 1,2,3,5$ at $|Q|= 1$
and $\MF = 1,3,5,8,10$ for $V = 14 \times 14^3$ at $|Q|= 1, 2$. The axial current density
is computed by averaging (\ref{eq:<j5>}) over all lattice sites $x$. To evaluate the trace
we use the stochastic estimator technique with $Z_2$-noise. The number of stochastic
estimators is increased until the results are stable. The axial current
density is only well defined if the overlap operator is invertible, i.e., if
$Q=0$. Working exclusively on configurations with $Q=0$ introduces a systematic error and
in order to perform a cross-check of our results we also consider configurations with
$|Q|>0$. Since the computations are numerically very expensive, we only do the
cross-checks for a single value of the chemical potential. By introducing a small finite
quark mass $m_q = 0.001 \ a^{-1}$ on configurations with non-zero topological charge we
make the overlap operator invertible. Strictly speaking the axial current defined via
Equation~(\ref{eq:<j5>}) is no longer protected from renormalisation in this case. To
demonstrate that the effect of the finite quark mass on $\scsc$ is negligible in practice,
we consider a second mass value $m_q = 0.002 \ a^{-1}$ for the $V = 8 \times 8^3$
configurations.
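The $Z_2$-noise trace estimation can be sketched as follows (a generic toy version for an arbitrary matrix-vector product, not our production code):

```python
import numpy as np

rng = np.random.default_rng(0)

def z2_trace(matvec, dim, n_est):
    # Stochastic estimate of Tr(A): average of eta^T (A eta) over noise
    # vectors eta whose components are drawn independently from {+1, -1}.
    acc = 0.0
    for _ in range(n_est):
        eta = rng.choice([-1.0, 1.0], size=dim)
        acc += eta @ matvec(eta)
    return acc / n_est

# Toy check: the estimator is unbiased, and the error shrinks as the
# number of estimators grows (we increase it until results are stable).
a = rng.standard_normal((50, 50))
est = z2_trace(lambda v: a @ v, 50, 4000)
print(est, np.trace(a))
```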
The value of $\scsc$ is given by the slope of the axial current density as a function of
the external magnetic field. We extract $\scsc$ from our axial current data by performing
a one parameter linear fit. Confidence intervals for $\scsc$ are calculated with the
statistical bootstrap method: For every bootstrap sample we first independently draw $100$
configurations for every value of $\MF$ and then perform a fit to the data generated in
this way.
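The fit and bootstrap procedure can be illustrated schematically on synthetic data (the assumed true slope, flux values and statistics below are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def slope_through_origin(b, j):
    # One-parameter least-squares fit j = sigma * B (no intercept):
    # sigma_hat = sum(B_i j_i) / sum(B_i^2)
    b, j = np.asarray(b, float), np.asarray(j, float)
    return float(b @ j / (b @ b))

def bootstrap_ci(b_vals, samples_per_b, n_boot=2000, q=(2.5, 97.5)):
    # For each bootstrap sample, redraw the per-configuration measurements
    # independently at every flux value, then refit the slope.
    slopes = []
    for _ in range(n_boot):
        means = [rng.choice(s, size=len(s), replace=True).mean()
                 for s in samples_per_b]
        slopes.append(slope_through_origin(b_vals, means))
    return np.percentile(slopes, q)

# Synthetic data: true slope 0.5, Gaussian noise per "configuration".
b_vals = np.array([1.0, 2.0, 5.0, 10.0])
samples = [0.5 * b + 0.1 * rng.standard_normal(100) for b in b_vals]
lo, hi = bootstrap_ci(b_vals, samples)
print(lo, hi)
```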
\begin{figure}[htb]
\centering
\resizebox{0.48 \textwidth}{!}{\input{6x18b845_slope_CI}
\resizebox{0.48 \textwidth}{!}{\input{6x18b845_slope_CI_Q01}
\caption{The axial current density $\ja$ as a function of the magnetic field strength
$B$ for $T < \Tc$. The left plot shows results for $Q=0$ and on the right
$|Q|=1$ (note the different scales). The red dots with errorbars are our data
and the shaded regions mark the bootstrap confidence intervals for $\scsc$ for
a different number of stochastic estimators. Solid black lines correspond to
the free fermion result $\scscf$. }
\label{fig:cse_6x18_CI}
\end{figure}
\section{Results}\label{sec:res}
First we present results for the high-temperature deconfinement phase, where
$T~=~346~\MeV~>~T_c$. Here the chiral symmetry should be restored\footnote{The restoration
of chiral symmetry in the deconfinement phase of quenched lattice QCD is discussed, e.g.,
in~\cite{Edwards:99:01,Kiskis:01:01}} and we expect that there are no corrections to the
CSE current~\cite{Alekseev:98:01,Metlitski:05:01,Newman:06:01}. Our data is plotted in
Figure~\ref{fig:cse_6x18_CI} and, as expected, we generally find good agreement with the
free fermion result $\scscf$. The sole exception is the data point for $Q=0$, $\mu = 0.230
\ a^{-1}$ and $\MF=10$, where we might see the onset of saturation. To make sure that our
results for $\scsc$ are not affected by possible saturation effects, we additionally
perform fits where the data for the largest value of $\MF$ is left out (see
Figure~\ref{subfig:CIs}).
\begin{figure}[!ht]
\subfloat[
Summary of the results for the confidence intervals for the ratio $\scsc/\scscf$ for
the lattices with $V=14\times14^3$ and $V=6\times18^3$. Results for $T>\Tc$ and
$T<\Tc$ are marked by open and closed boxes, respectively. The boxes denote the
results of a fit to all data points and the whiskers show the results if the data for
the largest value of $\MF$ are excluded.\label{subfig:CIs}]
\resizebox{0.48\textwidth}{!}{\input{CI_graph}
}
\hfill
\subfloat[
The axial current density $\ja$ in different topological sectors for the
$V=8\times8^3$ lattice. The results for $m_q = 0.001 \ a^{-1}$ are denoted by filled
symbols, the data for $m_q = 0.002 \ a^{-1}$ are shifted by $0.02 \ a^{-2}$ in the
$qB$ axis for better visibility and are marked by open symbols. The black dots show
the axial current with $Q = 0$ for vanishing quark mass and the black dashed line
corresponds to the free fermion result $\scscf$. To guide the eye a linear ($Q=0$) or
second order polynomial ($|Q|>0$) fit to the data is shown.\label{subfig:8x8}]
\resizebox{0.48\textwidth}{!}{\input{8x8b81_mu0100_all_mo}
}
\caption{}
\end{figure}
Next we examine the results for the low-temperature confinement phase, where
non-perturbative corrections to the CSE are expected. As a proof of concept we first
consider the small $V=8\times8^3$ lattice. In the topological sector $Q=0$ we
again find a very good agreement with $\scscf$, but for $|Q| \neq 0$ there are large
deviations from the free fermion result. The results for different bare quark masses
lie on top of each other and we conclude that for small quark masses the
renormalisation of the axial current is negligible.
\begin{figure}[htb]
\centering
\resizebox{0.48 \textwidth}{!}{\input{14x14b81_slope_CI}} \\
\resizebox{0.48 \textwidth}{!}{\input{14x14b81_mu0300_slope_CI_Q01}
\resizebox{0.48 \textwidth}{!}{\input{14x14b81_mu0300_slope_CI_Q02}
\caption{The axial current density $\ja$ as a function of the magnetic field strength
$B$ in different topological sectors for the lattice with $V=14\times14^3$ at
$T < \Tc$ (red dots with errorbars). In the plot on the top $Q=0$,
$|Q|=1$ on the bottom left plot and on the bottom right $|Q|=2$. Solid black
lines denote the free fermion result $\scscf$ and shaded regions mark the
confidence intervals for $\scsc$.}
\label{fig:cse_14x14_CI_Q1_Q2}
\end{figure}
A lattice volume of $V=8\times8^3$ is very small and to check for finite size effects we
also perform simulations for $V=14\times14^3$. The results for the larger lattice are
shown in Figure~\ref{fig:cse_14x14_CI_Q1_Q2}. The data for $Q=0$ is in very good agreement
with the results for the smaller lattice size and there does not seem to be a large finite
size effect for this topological sector. For the $|Q| \neq 0$ sectors the picture is
completely different: Contrary to the small volume calculations the CSE current does not
get any corrections. The plots in Figure~\ref{fig:cse_14x14_CI_Q1_Q2} clearly show that
for the larger lattice volume the data for all topological sectors and chemical potentials
we investigated are in perfect agreement with the free fermion result $\scscf$. A possible
reason for the large finite size effects in topological sectors with non-zero $Q$ is
discussed in~\cite{Puhr:16:03}. The results for all our simulations on larger lattices are
summarised in Figure~\ref{subfig:CIs}.
\section{Conclusion}\label{sec:conclusion}
We performed a numerical study to quantify possible non-perturbative corrections to the
CSE current in quenched lattice QCD. Within statistical errors, which are smaller than
$10\%$ for the simulations with larger chemical potentials (see Figure~\ref{subfig:CIs}),
we do not find any correction to the CSE current and reproduce the free fermion value
$\scscf$ for the transport coefficient. The use of finite-density overlap fermions and a
conserved lattice axial current, which transforms covariantly under the lattice chiral
symmetry, eliminates potential systematic errors due to an explicit breaking of chiral
symmetry or a renormalisation of the axial current. Comparing the results for different
lattice sizes suggests that finite size effects are very small, at least in the
topological sector $Q=0$. A remaining source of systematic errors is the quenched
approximation. Taking the results of the random matrix model~\cite{Stephanov:96:01} at
face value, one could argue that the chiral condensate in quenched QCD should vanish as
soon as a finite chemical potential is turned on and consequently the non-perturbative
corrections predicted for the phase with broken chiral symmetry should be absent. However,
on the one hand the random matrix calculation can not take into account a finite external
magnetic field and on the other hand the presence of such a field can instigate the
spontaneous formation of condensates, like the chiral shift parameter
of~\cite{Gorbar:09:01,Gorbar:11:01,Gorbar:11:02}, which can also give non-perturbative
corrections to the CSE. Moreover, the holographic
calculations~\cite{Amado:14:01,Jimenez-Alba:14:01} found non-perturbative corrections to
the CSE at small temperatures and were done in the quenched approximation (or ``probe
limit'' in the language of AdS/CFT). For all these reasons the non-renormalisation of the
CSE current in quenched QCD at both high and low temperatures is a non-trivial result.
It is important to emphasise that the results for the quenched theory do not necessarily
generalise to full QCD. In particular, our results do not exclude possible corrections that
could have their origin in the complex phase that the fermion determinant acquires at
finite chemical potential. Note that unquenched lattice calculations of the CSE are
notoriously difficult, since the necessity to introduce an external magnetic field leads
to a complex fermion determinant even in gauge theories which otherwise do not have a sign
problem at finite chemical potential, like for example $\Su(2)$ and $G_2$ gauge theories.
The non-renormalisation of the CSE in quenched QCD has a potential practical
application: If the axial current is computed for non-chiral lattice fermions and/or with
a non-covariant discretisation of the axial current, the ratio of this current and the
exact result $\ja_i = \scscf B_i$ gives the multiplicative renormalisation constant for
the axial current for this particular lattice discretisation of the Dirac operator and
axial current.
\section*{Acknowledgements}\label{sec:ack}
This work was supported by the S.~Kowalevskaja award from the
Alexander von Humboldt Foundation. The computations were performed on
``iDataCool'' at Regensburg University, on the ITEP cluster in Moscow and on
the LRZ cluster in Garching. We thank G.~Bali, A.~Dromard, R. Rödl and A.~Zhitnitsky for
valuable discussions and helpful comments.
\section{Introduction}
\setlength{\baselineskip}{13pt}
Today, physics is going through a precision era, and this is especially true of neutrino physics. With the reactor angle $\theta_{13}$ \cite{Fgli,Frero,Garc} measured precisely by reactor experiments, the quantities left to be determined in the neutrino sector are the leptonic CP violating phase \cite{DK,MG,PT,Kang,LHCb,Patrik}, the octant of the atmospheric angle $\theta_{23}$ \cite{KD,Gonzalez,Animesh,Choubey,Daljeet}, the mass hierarchy, the nature of the neutrino, etc. Long baseline neutrino experiments (LBNE \cite{LBNE,Akiri}, NO$ \nu $A \cite{Ayres}, T2K \cite{T2K}, MINOS \cite{Minos}, LBNO \cite{DA}, etc.) may be very promising for measuring many of these sensitive parameters.
\par
Exploring leptonic CP violation (CPV) is one of the most demanding tasks in future
neutrino experiments \cite{Branco}. The relatively large value of the reactor mixing angle $ \theta_{13} $ measured with a high precision in neutrino experiments \cite{F.P} has opened up a wide range of possibilities to examine CP violation in the lepton sector. The leptonic CPV phase can be induced by the PMNS
neutrino mixing matrix \cite{Pcarvo}, which in general contains, in addition to the three mixing angles, a Dirac type CP violating phase, as in the quark sector, and two extra phases if neutrinos are Majorana particles. Even though we do not yet have significant evidence for leptonic CPV, the current global fit to available neutrino data indicates nontrivial values of the Dirac-type CP phase \cite{Capp,Gon}. In this context, the possible size of leptonic CP violation detectable through neutrino oscillations can be predicted. Recently \cite{DK}, two of us explored possibilities of improving the CP violation discovery potential of the newly planned Long-Baseline Neutrino Experiment (earlier LBNE, now called DUNE) in the USA. In the neutrino oscillation probability expression $P(\nu_{\mu}\rightarrow \nu_{e})$ relevant for LBNEs, the term due to the significant matter effect changes sign when the oscillation is changed from the neutrino to the antineutrino mode, or vice versa. Therefore, in the presence of matter effects, the CPV effect is entangled and one has two degenerate solutions, one due to the CPV phase and another due to its entangled value. It has been suggested to resolve this issue by combining two experiments with different baselines \cite{Varger,Minakata}. But the CPV phase measurement depends on the value of the reactor angle $\theta_{13}$, and hence the precise measurement of $\theta_{13}$ plays a crucial role. This fact was utilised recently by two of us \cite{DK}, where we explored different possibilities of improving the CPV sensitivity of LBNE, USA. We did so by considering LBNE with \\
1. its ND (near detector), and\\
2. reactor experiments.
\par
We considered both the appearance ($ \nu_{\mu}\rightarrow \nu_{e}$) and disappearance ($ \nu_{\mu}\rightarrow \nu_{\mu}$) channels, in both neutrino and antineutrino modes. Some of the observations made in \cite{DK} are:\\
1. The CPV discovery potential of LBNE increases significantly when combined with near-detector and reactor experiments.\\
2. The CPV sensitivity is higher in the LO (lower octant) of the atmospheric angle $\theta_{23}$, for any assumed true hierarchy.\\
3. The CPV sensitivity increases with the mass of the FD (far detector).\\
4. When NH is the true hierarchy, adding data from reactors to LBNE improves its CPV sensitivity irrespective of the octant.
\par
The aim of this work is to critically analyse the results presented in \cite{DK}, in the context of the entanglement between the quadrant of the CPV phase and the octant of $\theta_{23}$, and hence to study the role of leptogenesis (and baryogenesis) in resolving this entanglement. Although in \cite{DK}
we studied the effect of both ND and reactor experiments on the CPV sensitivity of the LBNEs, in this work we consider only the effect
of the ND; similar studies can also be carried out for the effect of reactor experiments on LBNEs. The details of the LBNE and ND are the same
as in \cite{DK}. Following the results of \cite{DK}, either of the two octants is favoured, and the enhancement of the CPV sensitivity with
respect to its quadrant is utilized here to calculate the values of the lepton-antilepton asymmetry. This is done for two cases of the
rotation matrix for the fermions: CKM only, and CKM+PMNS. The result is then used to calculate the
value of the BAU. Since this is an era of precision measurements in neutrino physics, we consider the variation of $\Delta m^{2}_{31}$
within its 1$\sigma$, 2$\sigma$ and 3$\sigma$ ranges. We calculate the baryon-to-photon ratio and compare it with its experimentally
known best-fit value. We observe that the BAU can be explained most favourably for the five cases explored here: IH, $\delta_{CP}= 1.43 \pi$ and HO of $ \theta_{23} $; IH, $\delta_{CP}= 0.5277 \pi$ and HO of $ \theta_{23} $; IH, $\delta_{CP}= 0.488 \pi$ and LO of $ \theta_{23} $; IH, $\delta_{CP}= 0.383 \pi$ and HO of $ \theta_{23} $; and IH, $\delta_{CP}= 1.727 \pi$ and LO of $ \theta_{23} $. It is worth mentioning that the value $\delta_{CP} = 1.43 \pi$ favoured by our calculation is close to the central value of $ \delta_{CP} $ from the recent global fit result \cite{Gon, kol}. We also find that, for the variation of $\Delta m^{2}_{31}$ within its 1$\sigma$ range, the calculated values of $\eta_B$ for all five cases mentioned above lie in the allowed range of its best-fit value, whereas for the 3$\sigma$ variation of $\Delta m^{2}_{31}$ some of the values at its 3$\sigma$ C.L. are disfavoured. Also, for the variation of $\theta_{13}$ within its 3$\sigma$ C.L., values around 9.0974 are favoured, as far as matching the best-fit value of $\eta_B$ is concerned. These results could be important, given that the quadrant of the leptonic CPV phase and the octant of the atmospheric mixing angle $\theta_{23}$ are not yet fixed, and they are significant in the context of precision measurements of neutrino oscillation parameters.
\par
The paper is organized as follows. In Section II, we discuss the entanglement between the quadrant of the CPV phase and the octant of $ \theta_{23} $. In Section III, we review leptogenesis and baryogenesis. In Section IV, we show how the
baryon asymmetry (BAU) within the SO(10) model, computed using two distinct forms of the lepton CP asymmetry, can be used to break the entanglement. Section V summarizes the work.
\section{CPV Phase and Octant of $\theta_{23}$}
As discussed above, from Fig. 3 of \cite{DK} we find that, by combining with ND and reactor experiments, the CPV sensitivity of LBNE
improves more for the LO (lower octant) than for the HO (higher octant), for any assumed true hierarchy. In Fig. 1 below we plot the CP asymmetry,
\begin{equation}
A_{CP} = \frac{P(\nu_{\mu}\rightarrow \nu_{e})-P(\overline{\nu_{\mu}}\rightarrow \overline{\nu_{e}})}{P( \nu_{\mu}\rightarrow \nu_{e})+P(\overline{\nu_{\mu}}\rightarrow \overline{\nu_{e}})}
\label{diseqn}
\end{equation}
as a function of the leptonic CPV phase $ \delta_{CP} $, for $0 \leq \delta_{CP} \leq 2 \pi$.
It was shown in \cite{DK} that, using the near detector (and combining with reactor experiments) at LBNE, the sensitivity to measure the CPV
phase (and hence the CP asymmetry) improves more in the lower octant of $ \theta_{23} $. The CP asymmetry also depends on the mass hierarchy:
for both NH and IH, the CP asymmetry is larger in LO than in HO. In this work we use this information to calculate the dependence of leptogenesis on the octant of $ \theta_{23}$ and the quadrant of the CPV phase.
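As an illustrative cross-check of Eq. (1), the CP asymmetry can be evaluated numerically from the standard three-flavour oscillation amplitudes. The sketch below (Python) works in vacuum and uses illustrative baseline, energy and mixing values, not the exact DUNE configuration; at DUNE the matter term discussed above would add a hierarchy-dependent contribution on top of this.

```python
import numpy as np

def pmns(th12, th13, th23, dcp):
    """Standard-parameterization PMNS matrix (angles in radians)."""
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    ep, em = np.exp(1j * dcp), np.exp(-1j * dcp)
    return np.array(
        [[c12 * c13, s12 * c13, s13 * em],
         [-s12 * c23 - c12 * s23 * s13 * ep,
          c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
         [s12 * s23 - c12 * c23 * s13 * ep,
          -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13]])

def p_mue(dcp, L_km=1300.0, E_GeV=2.5, antinu=False):
    """Vacuum P(nu_mu -> nu_e); the antineutrino channel uses U -> U*."""
    dm2 = np.array([0.0, 7.62e-5, 2.55e-3])  # m_i^2 - m_1^2 in eV^2 (NH, illustrative)
    U = pmns(0.601, 0.157, 0.785, dcp)       # illustrative mixing angles (radians)
    if antinu:
        U = U.conj()
    # amplitude: sum_i U_ei U_mui^* exp(-i m_i^2 L / 2E)
    phase = np.exp(-2j * 1.267 * dm2 * L_km / E_GeV)
    return abs(np.sum(U[0, :] * U[1, :].conj() * phase)) ** 2

def a_cp(dcp):
    """CP asymmetry of Eq. (1), here evaluated in vacuum."""
    p, pbar = p_mue(dcp), p_mue(dcp, antinu=True)
    return (p - pbar) / (p + pbar)
```

In vacuum the asymmetry vanishes for $\delta_{CP}=0,\pi$ and is bounded by $|A_{CP}|\leq 1$, which provides a simple sanity check of the implementation.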
\begin{figure}[b]
\centerline{\includegraphics[width=7.8cm]{pcv.eps}}
\caption{CP asymmetry vs $\delta_{CP}$ at DUNE/LBNE, for both hierarchies. The red and green solid (dotted) lines are for NH (IH), with the type of curve distinguishing HO and LO as the true octant, respectively.\label{pcv.pdf}}
\end{figure}
From Fig. 1 we see that
\par
\begin{equation}
A_{CP}(LO) > A_{CP}(HO)
\end{equation}
\par
For a given true hierarchy, there are eight degenerate solutions:
$$\delta_{CP}(\text{first quadrant}) - \theta_{23}(\text{lower octant})$$
$$\delta_{CP}(\text{second quadrant}) - \theta_{23}(\text{lower octant})$$
$$\delta_{CP}(\text{third quadrant}) - \theta_{23}(\text{lower octant})$$
$$\delta_{CP}(\text{fourth quadrant}) - \theta_{23}(\text{lower octant})$$
$$\delta_{CP}(\text{first quadrant}) - \theta_{23}(\text{higher octant})$$
$$\delta_{CP}(\text{second quadrant}) - \theta_{23}(\text{higher octant})$$
$$\delta_{CP}(\text{third quadrant}) - \theta_{23}(\text{higher octant})$$
\begin{equation}
\delta_{CP}(\text{fourth quadrant}) - \theta_{23}(\text{higher octant})
\end{equation}
This eight-fold degeneracy can be viewed as
\begin{equation}
\text{Quadrant of CPV phase} - \text{Octant of}\hspace{.1cm} \theta_{23}
\end{equation}
entanglement. Out of these eight degenerate solutions, only one can be the true solution; to pinpoint it, this entanglement has to be broken. We have shown \cite{DK} that the sensitivity to the CPV discovery potential of LBNEs in the LO is improved more if data from the near detector of the LBNEs, or from reactor experiments, are added to the data from the FD of the LBNEs, as shown in Fig. 3 of \cite{DK}. Therefore, with our proposal \cite{DK}, the 8-fold degeneracy of (3) is reduced to a 4-fold degeneracy, and the following 4-fold degeneracy still remains to be resolved.
\begin{equation*}
\delta_{CP}(\text{first quadrant}) - \theta_{23}(\text{LO})
\end{equation*}
\begin{equation*}
\delta_{CP}(\text{second quadrant}) - \theta_{23}(\text{LO})
\end{equation*}
\begin{equation*}
\delta_{CP}(\text{third quadrant}) - \theta_{23}(\text{LO})
\end{equation*}
\begin{equation}
\delta_{CP}(\text{fourth quadrant}) - \theta_{23}(\text{LO})
\end{equation}
The possibility of $ \theta_{23} > 45^{\circ}$, i.e., the HO of $ \theta_{23}$, is also considered in this work. In this case the degeneracy is
\begin{equation*}
\delta_{CP}(\text{first quadrant}) - \theta_{23}(\text{HO})
\end{equation*}
\begin{equation*}
\delta_{CP}(\text{second quadrant}) - \theta_{23}(\text{HO})
\end{equation*}
\begin{equation*}
\delta_{CP}(\text{third quadrant}) - \theta_{23}(\text{HO})
\end{equation*}
\begin{equation}
\delta_{CP}(\text{fourth quadrant}) - \theta_{23}(\text{HO})
\end{equation}
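The quadrant-octant bookkeeping of Eqs. (3), (5) and (6) can be made explicit. A minimal sketch (Python; purely illustrative counting):

```python
from itertools import product

quadrants = ["first", "second", "third", "fourth"]  # quadrant of delta_CP
octants = ["LO", "HO"]                              # octant of theta_23

# Eight-fold degeneracy of Eq. (3): every quadrant-octant pairing.
eightfold = list(product(quadrants, octants))

# Once the octant is fixed (e.g. by adding ND/reactor data), only the
# four-fold degeneracy of Eq. (5) (LO) or Eq. (6) (HO) remains.
fourfold_LO = [(q, o) for q, o in eightfold if o == "LO"]
fourfold_HO = [(q, o) for q, o in eightfold if o == "HO"]
```

Breaking the remaining four-fold degeneracy, i.e. selecting the quadrant of $\delta_{CP}$, is what the leptogenesis argument below is used for.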
\begin{figure}[b]
\centerline{\begin{subfigure}[]{\includegraphics[width=6.8cm]{NH-cp.eps}}\end{subfigure}
\begin{subfigure}[]{\includegraphics[width=6.8cm]{IH-plot.eps}}\end{subfigure}}
\caption{In Figs. 2a and 2b, the $\Delta \chi^{2}$ sensitivity corresponding to the CP discovery potential at LBNEs is shown as a function of $\delta_{CP}$, for both hierarchies and octants.}
\end{figure}
In this work, we propose that leptogenesis can be used to break the above-mentioned 4-fold degeneracy of Eqs. (5), (6). It is known that the observed baryon asymmetry of the Universe (BAU) can be explained via leptogenesis \cite{Bari, RM, Bhupal, Pd, GC}. In leptogenesis, the lepton-antilepton asymmetry can be explained if there are complex Yukawa couplings or complex fermion mass matrices; this in turn arises from complex leptonic CPV phases, $\delta_{CP}$, in the fermion mass matrices. If all parameters other than the leptonic $\delta_{CP}$ phase in the formula for the lepton-antilepton asymmetry are fixed, then the experimentally observed value of the BAU can be used to constrain the quadrant of $\delta_{CP}$, and hence the 4-fold entanglement of Eqs. (5), (6) can be broken. An experimental signature of CP violation associated with the Dirac phase $\delta_{CP}$ in the PMNS matrix \cite{Mki} can in principle be obtained by searching for a CP asymmetry in $ \nu $ flavor oscillations. To elucidate this proposal, we consider a model-independent scenario in which the BAU arises from leptogenesis, with the lepton-antilepton asymmetry \cite{Uma} generated by the out-of-equilibrium decay of the heavy right-handed Majorana neutrinos that form an integral part of the seesaw mechanism for neutrino masses and mixings. Since our proposal is model independent, we consider the type I seesaw mechanism for simplicity.
\section{\textbf{Leptogenesis and Baryogenesis in Type I Seesaw SO(10) Models}}
In grand unified theories like SO(10), one right-handed (RH) heavy Majorana neutrino per generation is added to the Standard Model \cite{l,m,n,ibarra}, and it couples to the left-handed $ \nu $ via the Dirac mass matrix $m_{D}$. When the neutrino mass matrix is diagonalised, we get two eigenvalues: a light neutrino $ \sim \frac{m^{2}_{D}}{M_{R}} $ and a heavy neutrino state $ \sim M_{R}$. This is the type I seesaw mechanism. Here, the decay of the lightest of the three heavy RH Majorana neutrinos, $M_{1}$ (i.e., $M_{3}, M_{2}\gg M_{1}$), contributes to the $l-\bar{l}$ asymmetry (for leptogenesis), $\epsilon^{CP}_{l}$. In the basis where the RH $\nu$ mass matrix is diagonal, the type I contribution to $\epsilon^{CP}_{l}$ from the decay of $M_{1}$ is
\begin{equation}
\epsilon^{CP}_{l}=\frac{\Gamma(M_{1}\rightarrow lH) - \Gamma(M_{1}\rightarrow \bar{l} \bar{H})}{\Gamma(M_{1}\rightarrow lH) + \Gamma(M_{1}\rightarrow \bar{l} \bar{H})},
\end{equation}
where $\Gamma(M_{1}\rightarrow lH)$ denotes the decay rate of the heavy RH Majorana $\nu$ of mass $M_{1}$ to a lepton and a Higgs. We assume a
normal mass hierarchy for the heavy Majorana neutrinos. In this scenario the lightest of the heavy Majorana neutrinos is in thermal equilibrium
while the heavier neutrinos, $M_{2}$ and $ M_{3} $, decay. Any asymmetry produced by the out-of-equilibrium decays of $M_{2}$ and
$ M_{3} $ is washed out by the lepton-number-violating interactions mediated by $ M_{1} $. Therefore, the final lepton-antilepton asymmetry is given only by the CP-violating decay of $ M_{1} $ to Standard Model leptons ($l$) and Higgs ($H$). This contribution
is \cite{Mh}:
\begin{equation}
\epsilon_{l}=-\frac{3M_{1}}{8\pi}\frac{Im[\Delta m^{2}_{\odot}R^{2}_{12}+\Delta m^{2}_{A}R^{2}_{13}]}{\upsilon^{2}\sum
|R_{ij}|^{2}m_{j}}.
\end{equation}
Here $R$ is a complex orthogonal matrix satisfying $RR^{T} = 1$; it can be parameterized as \cite{Osc}:
\begin{equation}
R = D_{\sqrt{M^{-1}}}Y_{\nu}UD_{\sqrt{K^{-1}}},
\end{equation}
where $Y_{\nu}$ is the matrix of neutrino Yukawa couplings. In the basis where the charged-lepton Yukawa matrix $Y_{e}$ and the gauge interactions are flavour diagonal,
$ D_{K} = U^{T}KU $, where $K=Y_{\nu}^{T}M_{R}^{-1}Y_{\nu}$; $U$ is the PMNS matrix and $M_{R}$ is the RH neutrino Majorana scale. In the basis of the right-handed neutrinos, $D_{M} = \text{Diag}(M_{1}, M_{2},M_{3})$ with $M_{3}, M_{2}\gg M_{1}$. Equation (8) relates the lepton asymmetry to both the solar ($\Delta m^{2}_{21}$) and atmospheric ($ \Delta m^{2}_{A} $) mass-squared differences.
Thus the magnitude of the matter-antimatter asymmetry can be predicted in terms of the low-energy oscillation parameters $\Delta m^{2}_{21}$, $ \Delta m^{2}_{A} $ and a CPV phase. Here the matrix $R$ depends on both $U_{PMNS}$ and $V_{CKM}$, and it can be shown that
\begin{eqnarray*}
\text{Im}R^{2}_{13}& = &-\text{Sin}(2\delta_{q})\text{Cos}^{2}(\theta^{l}_{23})\text{Cos}^{2}(\theta^{l}_{13})\text{Sin}^{2}(\theta^{q}_{13})-2\text{Sin}(\delta_{q})\text{Cos}(\theta_{13}^{q})\text{Cos}(\theta_{23}^{l})\text{Cos}^{2}(\theta^{l}_{13})\text{Sin}(\theta_{12}^{q})\text{Sin}(\theta_{13}^{q})\text{Sin}(\theta_{23}^{l})\\& & +2\text{Sin}(-\delta_{l}-\delta_{q})\text{Cos}(\theta_{12}^{q})\text{Cos}(\theta_{13}^{q})\text{Cos}(\theta^{l}_{23})\text{Cos}(\theta_{13}^{l})\text{Sin}(\theta_{13}^{q})\text{Sin}(\theta_{13}^{l})-2\text{Sin}(\delta_{l})\text{Cos}(\theta_{12}^{q})\text{Cos}^{2}(\theta_{13}^{q})\text{Cos}(\theta^{l}_{13})\\& &\text{Sin}(\theta_{12}^{q})\text{Sin}(\theta_{23}^{l})\text{Sin}(\theta_{13}^{l})-\text{Sin}(2\delta_{l})\text{Cos}^{2}(\theta_{12}^{q})\text{Cos}^{2}(\theta_{13}^{q})\text{Sin}^{2}(\theta^{l}_{13})-2\text{Sin}(\delta_{l})\text{Cos}^{2}(\theta_{12}^{q})\text{Cos}^{2}(\theta_{13}^{q})\text{Sin}^{2}(\theta^{l}_{13})
\end{eqnarray*}
\begin{eqnarray*}
\text{Im}R^{2}_{12}&=&2\text{Sin}(\delta_{q})\text{Cos}(\theta_{13}^{q})\text{Cos}^{2}(\theta_{12}^{l})\text{Cos}(\theta^{l}_{23})\text{Sin}(\theta_{12}^{q})\text{Sin}(\theta_{13}^{q})\text{Sin}(\theta_{23}^{l})+2\text{Sin}(\delta_{q})\text{Cos}(\theta_{12}^{q})\text{Cos}(\theta_{13}^{q})\text{Cos}(\theta^{l}_{12})\text{Cos}(\theta^{l}_{13})\\& & \text{Sin}(\theta_{13}^{q})\text{Sin}(\theta_{12}^{q})\text{Sin}(\theta_{23}^{l})-\text{Sin}(2\delta_{q})\text{Cos}^{2}(\theta_{12}^{l})\text{Sin}^{2}(\theta_{13}^{q})\text{Sin}^{2}(\theta^{l}_{23})-2\text{Sin}(\delta_{l}-\delta_{q})\text{Cos}(\theta_{13}^{q})\text{Cos}(\theta_{12}^{l})\text{Cos}^{2}(\theta^{l}_{23})\\& &\text{Sin}(\theta^{q}_{12})\text{Sin}(\theta_{13}^{q})\text{Sin}(\theta_{12}^{l})\text{Sin}(\theta_{13}^{l})-2\text{Sin}(\delta_{l}-\delta_{q})\text{Cos}(\theta_{12}^{q})\text{Cos}(\theta_{13}^{q})\text{Cos}(\theta^{l}_{23})\text{Cos}(\theta^{l}_{13})\text{Sin}(\theta_{13}^{q})\text{Sin}^{2}(\theta_{12}^{l})\text{Sin}(\theta_{23}^{l})\\& &-2\text{Sin}(\delta_{l})\text{Cos}^{2}(\theta_{13}^{q})\text{Cos}(\theta_{12}^{l})\text{Cos}(\theta^{l}_{23})\text{Sin}^{2}(\theta^{q}_{12})\text{Sin}(\theta_{12}^{l})\text{Sin}(\theta_{23}^{l})\text{Sin}(\theta_{13}^{l})+2\text{Sin}(\delta_{l}-2\delta_{q})\text{Cos}(\theta_{12}^{l})\text{Cos}(\theta_{23}^{l})\\& &\text{Sin}^{2}(\theta^{q}_{13})\text{Sin}(\theta^{l}_{12})\text{Sin}(\theta_{23}^{l})\text{Sin}(\theta_{13}^{l})-2\text{Sin}(\delta_{l})\text{Cos}(\theta_{12}^{q})\text{Cos}^{2}(\theta_{13}^{q})\text{Cos}(\theta^{l}_{13})\text{Sin}(\theta^{q}_{12})\text{Sin}^{2}(\theta_{12}^{l})\text{Sin}(\theta_{23}^{l})\text{Sin}(\theta_{13}^{l})\\& &+ 2\text{Sin}(\delta_{l}-\delta_{q})\text{Cos}(\theta_{13}^{q})\text{Cos}(\theta_{12}^{l})\text{Sin}(\theta^{q}_{12})\text{Sin}(\theta^{q}_{12})\text{Sin}(\theta_{12}^{l})\text{Sin}^{2}(\theta_{23}^{l})\text{Sin}(\theta^{l}_{13})+ 2\text{Sin}(2\delta_{l}-2\delta_{q})\text{Cos}^{2}(\theta_{23}^{l})\\ & 
&\text{Sin}^{2}(\theta_{13}^{q})\text{Sin}^{2}(\theta^{l}_{12})\text{Sin}^{2}(\theta^{l}_{13})+ 2\text{Sin}(2\delta_{l}-\delta_{q})\text{Cos}(\theta_{13}^{q})\text{Cos}(\theta_{23}^{l})\text{Sin}(\theta^{q}_{12})\text{Sin}(\theta^{q}_{13})\text{Sin}^{2}(\theta_{12}^{l})\text{Sin}(\theta_{23}^{l})\text{Sin}^{2}(\theta^{l}_{13})\\& & + \text{Sin}(2\delta_{l})\text{Cos}^{2}(\theta_{13}^{q})\text{Sin}^{2}(\theta_{12}^{q})\text{Sin}^{2}(\theta^{l}_{12})\text{Sin}^{2}(\theta^{l}_{23})\text{Sin}^{2}(\theta_{13}^{l})
\end{eqnarray*}
\begin{eqnarray*}
R_{11}&=&\text{Cos}(\theta_{12}^{q})\text{Cos}(\theta_{13}^{q})\text{Cos}(\theta_{12}^{l})\text{Cos}(\theta_{13}^{l})+e^{-i\delta_{q}}\text{Sin}(\theta_{13}^{q})\text{Sin}(\theta_{12}^{l})\text{Sin}(\theta_{23}^{l})-e^{-i\delta_{l}}e^{-i\delta_{q}}\text{Sin}(\theta_{13}^{q})\text{Cos}(\theta_{12}^{l})\text{Cos}(\theta_{23}^{l})\text{Sin}(\theta_{13}^{l})\\ & & -\text{Cos}(\theta_{13}^{q})\text{Sin}(\theta_{12}^{q})\text{Cos}(\theta_{23}^{l})\text{Sin}(\theta_{12}^{l}) - e^{-i\delta_{l}}\text{Cos}(\theta_{13}^{q})\text{Sin}(\theta_{12}^{q})\text{Cos}(\theta_{12}^{l})\text{Sin}(\theta_{23}^{l})\text{Sin}(\theta_{13}^{l})
\end{eqnarray*}
\begin{eqnarray*}
R_{12}&=&\text{Cos}(\theta_{12}^{q})\text{Cos}(\theta_{13}^{q})\text{Cos}(\theta_{13}^{l})\text{Sin}(\theta_{12}^{l})-e^{-i\delta_{q}}\text{Sin}(\theta_{13}^{q})\text{Cos}(\theta_{12}^{l})\text{Sin}(\theta_{23}^{l})-e^{-i\delta_{l}}e^{-i\delta_{q}}\text{Cos}(\theta_{23}^{l})\text{Sin}(\theta_{12}^{l})\text{Sin}(\theta_{13}^{l})\text{Sin}(\theta_{13}^{q})\\ & & -\text{Cos}(\theta_{13}^{q})\text{Sin}(\theta_{12}^{q})\text{Cos}(\theta_{12}^{l})\text{Cos}(\theta_{23}^{l}) - e^{-i\delta_{l}}\text{Cos}(\theta_{13}^{q})\text{Sin}(\theta_{12}^{q})\text{Sin}(\theta_{12}^{l})\text{Sin}(\theta_{23}^{l})\text{Sin}(\theta_{13}^{l})
\end{eqnarray*}
\begin{equation}
R_{13} = e^{-i\delta_{q}}\text{Cos}(\theta_{23}^{l})\text{Cos}(\theta_{13}^{l})\text{Sin}(\theta_{13}^{q})-\text{Cos}(\theta_{13}^{q})\text{Cos}(\theta_{13}^{l})\text{Sin}(\theta_{12}^{q})\text{Sin}(\theta_{23}^{l}) - e^{-i\delta_{l}}\text{Cos}(\theta_{12}^{q})\text{Cos}(\theta_{13}^{q})\text{Sin}(\theta_{13}^{l})
\end{equation}
where $\theta^{l}_{23}$, $\theta^{l}_{13}$, $\theta^{l}_{12}$ denote the three $\nu$ mixing angles, and $\theta^{q}_{23}$,
$\theta^{q}_{13}$, $\theta^{q}_{12}$ the quark mixing angles; $\delta_{l}$ and $\delta_{q}$ are the leptonic and quark
CPV phases, respectively. When left-right symmetry is broken at a high intermediate mass scale $ M_{R} $ in the SO(10) theory,
the CP asymmetry is given by
\begin{equation}
\epsilon_{l}=-\frac{3M_{1}}{8\pi}\frac{Im[\Delta m^{2}_{A}R^{2}_{13}]}{\upsilon^{2}\sum
|R_{ij}|^{2}m_{j}}
\end{equation}
where
$$|R_{11}|^{2}=\text{Cos}^{2}(\theta_{12}^{l})\text{Cos}^{2}(\theta_{13}^{l}), |R_{12}|^{2} = \text{Sin}^{2}(\theta_{12}^{l})\text{Cos}^{2}(\theta_{13}^{l}), |R_{13}|^{2} = \text{Cos}^{2}(\delta_{l})\text{Sin}^{2}(\theta_{13}^{l})+ \text{Sin}^{2}(\delta_{l})\text{Sin}^{2}(\theta_{13}^{l})$$and
\begin{equation}
\text{Im}R^{2}_{13} = -\text{Sin}(2\delta_{l})\text{Sin}^{2}(\theta_{13}^{l})
\end{equation}
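As a quick numerical cross-check of this CKM-only limit, the following sketch (Python; the angle and phase values are illustrative) evaluates $R_{13}$ of Eq. (10) with the quark mixing switched off, where it reduces to $-e^{-i\delta_{l}}\text{Sin}(\theta_{13}^{l})$:

```python
import numpy as np

def r13(th13q, th12q, th23l, th13l, dq, dl):
    """R_13 of Eq. (10); all angles and phases in radians."""
    return (np.exp(-1j * dq) * np.cos(th23l) * np.cos(th13l) * np.sin(th13q)
            - np.cos(th13q) * np.cos(th13l) * np.sin(th12q) * np.sin(th23l)
            - np.exp(-1j * dl) * np.cos(th12q) * np.cos(th13q) * np.sin(th13l))

# CKM-only limit: quark mixing switched off (all theta^q = 0).
th13l, dl = 0.157, 1.43 * np.pi          # illustrative lepton angle and phase
r = r13(0.0, 0.0, 0.785, th13l, 0.0, dl)

# R_13 -> -exp(-i delta_l) sin(theta13_l), hence:
assert np.isclose(abs(r) ** 2, np.sin(th13l) ** 2)
assert np.isclose((r ** 2).imag, -np.sin(2 * dl) * np.sin(th13l) ** 2)
```

The check confirms that in this limit $|R_{13}|^{2}$ carries no $\delta_{l}$ dependence, while $\text{Im}R^{2}_{13}$ is odd in $\delta_{l}$, which is what makes $\epsilon_{l}$ sensitive to the quadrant of the CPV phase.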
The neutrino oscillation data used in our numerical calculations are summarised as follows {\cite{Frero}}.
$$\Delta m^{2}_{21}[10^{-5}eV^{2}] = 7.62 \pm 0.19$$
$$|\Delta m^{2}_{31}|[10^{-3}eV^{2}] = 2.55^{+0.06}_{-0.09}(2.43^{+0.07}_{-0.06})$$
$$\text{Sin}^{2}\theta_{12} = 0.320^{+0.016}_{-0.017} $$
$$\text{Sin}^{2}\theta_{23} = 0.613^{+0.022}_{-0.040}(0.600^{+0.026}_{-0.031})$$
\begin{equation}
\text{Sin}^{2}\theta_{13} = 0.0246^{+0.0049}_{-0.0028}(0.0250^{+0.0026}_{-0.0027})
\end{equation}
For $\Delta m^{2}_{31}$, $\text{Sin}^{2}\theta_{23}$ and $\text{Sin}^{2}\theta_{13}$, the quantities inside the brackets correspond to the inverted neutrino mass hierarchy and those outside to the normal mass hierarchy. The errors are within the 1$\sigma$ range of the $\nu$ oscillation parameters. It may be noted that some results on neutrino masses and mixings, using updated values of the running quark and lepton masses in
SUSY SO(10), have also been presented in \cite{Gayatri}. Although we consider the 3-flavour neutrino scenario, 4-flavour scenarios with a
sterile neutrino as the fourth flavour are also possible \cite{KBo}. It is worth mentioning that $ \nu $ masses and mixings can lead
to charged-lepton flavor violation in grand unified theories like SO(10) \cite{GG}.
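To make the size of the CKM-only asymmetry of Eqs. (11)-(12) concrete, the sketch below (Python) evaluates $\epsilon_{l}$ with the central oscillation values of Eq. (13); the heavy-neutrino mass $M_{1}$ and the hierarchical IH light spectrum ($m_{3}\simeq 0$) are illustrative assumptions, not fitted values.

```python
import numpy as np

def epsilon_l(M1_GeV, dl, s2_12=0.320, s2_13=0.0250, dm2_A=2.43e-3):
    """epsilon_l of Eq. (11) with the CKM-only |R_1j|^2 and Im R_13^2 of Eq. (12).
    Light masses in eV, M1 in GeV, v = 174 GeV; eV/GeV conversion at the end."""
    v = 174.0                                   # Higgs vev in GeV (assumption)
    th12 = np.arcsin(np.sqrt(s2_12))
    th13 = np.arcsin(np.sqrt(s2_13))
    # hierarchical IH spectrum (illustrative): m1 ~ m2 ~ sqrt(dm2_A), m3 ~ 0
    m = np.array([np.sqrt(dm2_A), np.sqrt(dm2_A), 0.0])      # eV
    R2 = np.array([np.cos(th12) ** 2 * np.cos(th13) ** 2,    # |R_11|^2
                   np.sin(th12) ** 2 * np.cos(th13) ** 2,    # |R_12|^2
                   np.sin(th13) ** 2])                       # |R_13|^2
    im_r13_sq = -np.sin(2 * dl) * np.sin(th13) ** 2          # Im R_13^2
    ev_per_gev = 1.0e-9                                      # unit conversion
    return (-3 * M1_GeV / (8 * np.pi)
            * dm2_A * im_r13_sq / (v ** 2 * np.dot(R2, m))
            * ev_per_gev)
```

With $M_{1}\sim 10^{11}$ GeV this gives $|\epsilon_{l}|\sim 10^{-7}$, the order of magnitude found in Tables I-IV, and $\epsilon_{l}$ is odd under $\delta_{l}\rightarrow-\delta_{l}$, which ties its sign to the quadrant of the CPV phase.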
\par
The origin of the baryon asymmetry of the universe (baryogenesis) is a very interesting topic of current research. A well-known mechanism is baryogenesis via leptogenesis, where the out-of-equilibrium decays of heavy right-handed Majorana neutrinos produce a lepton asymmetry which is transformed into a baryon asymmetry by electroweak sphaleron processes \cite{Hooft, Manton, Kuzmin}; the lepton asymmetry is partially converted to a baryon asymmetry through B+L-violating sphaleron interactions \cite{ME}. As proposed in \cite{SO}, a baryon
asymmetry can thus be generated from a lepton asymmetry. The baryon asymmetry is defined as:
\begin{equation}
Y_{B} = \frac{n_{B}-n_{\bar{B}}}{s}= \frac{n_{B}-n_{\bar{B}}}{7n_{\gamma}}=\frac{\eta_{B}}{7},
\end{equation}
where $n_{B}, n_{\bar{B}}, n_{\gamma}$ are the number densities of baryons, antibaryons and photons, respectively, $s$ is the entropy density, and $ \eta_{B} $ is the baryon-to-photon ratio, with $ 5.7 \times 10^{-10} \leq \eta_{B} \leq 6.7 \times 10^{-10}$ (95\% C.L.) \cite{B.D}. The lepton number is converted into the baryon number through the electroweak sphaleron process \cite{Hooft,Manton,Kuzmin}:
\begin{equation}
Y_{B} = \frac{a}{a-1}Y_{L}, a = \frac{8N_{F} + 4N_{H}}{22N_{F}+13N_{H}},
\end{equation}
where $ N_{F} $ is the number of families and $ N_{H} $ is the
number of light Higgs doublets. In the SM, $N_{F} = 3$ and $ N_{H} = 1 $. The lepton asymmetry is:
\begin{equation}
Y_{L} = d\frac{\epsilon_{l}}{g^{*}}.
\end{equation}
Here $d$ is a dilution factor and $g^{*} = 106.75$, in the standard case \cite{SO}, is the effective number of degrees of freedom. The dilution factor \cite{SO} is $d = \frac{0.24}{k(\ln k)^{0.6}}$ for $k\geq 10 $, $d = \frac{1}{2k}$ for $1\leq k \leq 10$, and $d = 1$ for $0\leq k \leq 1$, where the parameter $k$ \cite{SO} is $ k=\frac{M_{P}}{1.7\,\upsilon^{2}\,32\pi\sqrt{g^{*}}}\frac{(M_{D}^{\dagger} M_{D})_{11}}{M_{1}} $, with $ M_{P} $ the Planck mass. We use the form of the Dirac neutrino mass matrix $M_{D}$ from \cite{Joshipura}.
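The chain from lepton asymmetry to baryon-to-photon ratio, Eqs. (14)-(16), can be sketched as follows (Python; the input values of $\epsilon_{l}$ and $k$ in the usage example are illustrative):

```python
import math

def dilution(k):
    """Dilution factor d(k) as quoted in the text from [SO]."""
    if k >= 10.0:
        return 0.24 / (k * math.log(k) ** 0.6)
    if k >= 1.0:
        return 1.0 / (2.0 * k)
    return 1.0

def eta_B(eps_l, k, NF=3, NH=1, g_star=106.75):
    """Baryon-to-photon ratio from Eqs. (14)-(16)."""
    a = (8 * NF + 4 * NH) / (22 * NF + 13 * NH)   # a = 28/79 for the SM
    Y_L = dilution(k) * eps_l / g_star            # Eq. (16)
    Y_B = a / (a - 1) * Y_L                       # Eq. (15)
    return 7 * Y_B                                # Eq. (14): eta_B = 7 Y_B
```

For the SM, $a/(a-1) = -28/51$; an asymmetry $\epsilon_{l}\sim 10^{-7}$ with mild washout ($d\sim 0.1$) then yields $|\eta_{B}|\sim 10^{-10}$, the order compared against the CMB bound in the tables below.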
\section{\textbf{Analysis And Discussion Of Results}}
For our numerical analysis, we take as inputs the current experimental data for the three neutrino
mixing angles, given at the $1\sigma$-$3\sigma$ C.L. as presented in \cite{Frero}. We perform the numerical analysis and present
results for the normal hierarchy, the inverted hierarchy, HO and LO from Fig. 2.
We have explored the CP asymmetry using Eqs. (7)-(12) and the corresponding baryon asymmetry using Eqs. (14)-(16), for 152 different combinations (shown in Tables I-XII) of the two hierarchies (NH and IH), the two octants (LO and HO), w ND and w/o ND (with and without the near detector), and the values of $\delta_{CP}$ corresponding to the maximum $\chi^{2}$ (maximum sensitivity from Figs. 2(a), 2(b)), for which the CP discovery potential of DUNE is maximal. We also consider non-maximal values of $ \delta_{CP} $ corresponding to $ \chi^{2} = $ 4, 9, 16, 25 from Fig. 2. We examine these different cases in the light of the recent baryon-to-photon density bounds, $ 5.7 \times 10^{-10} \leq \eta_{B} \leq 6.7 \times 10^{-10}$ (CMB), and check for which of the 152 cases our calculated value of $|\eta_{B}|$ lies within this range.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}& \textbf{hierarchy, octant} & \textbf{w ND / w/o ND} & \textbf{$\delta_{CP}$ (deg.)} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$ &$\text{NH, LO}$ & w ND &$ 101 $ & $-0.0000177532$&$7.39703\times10^{-8} $ \\
\hline
$2$&$\text{NH, LO}$ & w ND &$ 280$ & $-0.0000125002$&$5.17312\times10^{-8}$ \\
\hline
$3$&$\text{NH, LO}$ & w/o ND &$ 108 $ & $0.0000153489$&$6.35202\times10^{-8} $\\
\hline
$4$&$\text{NH, LO}$ & w/o ND &$ 282 $ & $7.53352\times10^{-6}$&$3.11769\times10^{-8} $\\
\hline
$5$&$\text{IH, LO}$ & w ND &$ 83 $ & $2.56383\times10^{-6}$&$1.06102\times10^{-8} $ \\
\hline
$6$&$\text{IH, LO}$ & w ND &$ 276 $ & $1.01403\times10^{-6}$&$4.19647\times10^{-9} $ \\
\hline
$7$&$\text{IH, LO}$ & w/o ND &$ 88 $ & $1.46427 \times 10^{-7}$&$6.05975\times10^{-10} $\\
\hline
$8$&$\text{IH, LO}$ & w/o ND &$ 275 $ & $3.6845\times10^{-6}$&$1.5248\times10^{-8} $\\
\hline
\end{tabular}
\end{center}
\caption{Calculated values of the CP asymmetry $\epsilon_{l}$ and the baryon-to-photon ratio $|\eta_{B}|$ in the case of LO, for $ R_{1j} $ elements of the R matrix consisting of $ U_{PMNS} $ and $V_{CKM}$, for the values of $ \delta_{CP} $ for which the CP discovery potential of LBNE/DUNE is maximal, as shown in Fig. 2.}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}&\textbf{hierarchy, octant} & \textbf{w ND / w/o ND} & \textbf{$\delta_{CP}$ (deg.)} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$&$\text{NH, LO}$ & w ND &$ 101 $ & $0.0000268767$&$1.11227\times10^{-7} $ \\
\hline
$2$&$\text{NH, LO}$ & w ND &$ 280$ & $0.0000238272$&$9.86068\times10^{-8} $ \\
\hline
$3$&$\text{NH, LO}$ & w/o ND &$ 108 $ & $0.0000231986$&$9.60055\times10^{-8} $\\
\hline
$4$&$\text{NH, LO}$ & w/o ND &$ 282 $ & $0.0000332106$&$1.3744\times10^{-7} $\\
\hline
$5$&$\text{IH, LO}$ & w ND &$ 83 $ & $0.0000109298$&$4.5232\times10^{-8} $ \\
\hline
$6$&$\text{IH, LO}$ & w ND &$ 276 $ & $-3.3319\times10^{-6}$&$1.37888\times10^{-8} $ \\
\hline
$7$&$\text{IH, LO}$ & w/o ND &$ 88 $ & $2.96234\times10^{-7}$&$1.22594\times10^{-9} $\\
\hline
$8$&$\text{IH, LO}$ & w/o ND &$ 270 $ & $-9.18963\times10^{-7}$&$3.80305\times10^{-9} $\\
\hline
\end{tabular}
\end{center}
\caption{Same as Table I, except that here the R matrix consists of $ U_{PMNS} $ only.}
\end{table}
We find that, out of the 32 different cases corresponding to maximal sensitivity $ \chi^{2} $ (from Fig. 2) shown in Tables I$-$IV, our calculated value of the BAU is larger than the currently allowed range of the BAU except for two cases: case 7 of Table I and case 5 of Table III, for which the calculated $|\eta_{B}|$ is compatible with the present range of the baryon-to-photon density ratio \cite{B.D}. In Table I, case 7, which has $\delta_{CP}= 88^{\circ}$ or $0.488\pi $ (first quadrant), IH and the atmospheric angle $\theta_{23}$ in LO, gives $ \eta_{B} = 6.05975 \times 10^{-10} $, consistent with
the best-fit value $ \eta_{B} = 6.05 \times 10^{-10} $ \cite{B.D}. For this case, $\epsilon_{l} = 1.46427\times 10^{-7}$ lies within the Davidson-Ibarra bound \cite{ibarra} ($\epsilon_{l}\leq 4.59 \times 10^{-5} $). In Table III, case 5, which has $\delta_{CP}= 95^{\circ}$ or $0.5277 \pi$ (second quadrant), IH and the atmospheric angle $\theta_{23}$ in HO, gives a BAU equal to $6.2157 \times 10^{-10} $, in accord with the present $|\eta_{B}|$ bounds, and leads to a CP asymmetry $|\epsilon_{l}| = 1.50195\times 10^{-7}$ that lies within the Davidson-Ibarra bound.
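The acceptance criteria applied to each table entry can be stated compactly; the sketch below (Python) encodes the two windows used throughout, the Davidson-Ibarra bound \cite{ibarra} and the experimental $\eta_{B}$ range \cite{B.D}, and checks them against two entries of Table I:

```python
def acceptable(eps_l, eta_b,
               eps_max=4.59e-5,            # Davidson-Ibarra bound on |epsilon_l|
               eta_lo=5.7e-10, eta_hi=6.7e-10):
    """True if a case passes both the CP-asymmetry and BAU windows."""
    return abs(eps_l) <= eps_max and eta_lo <= abs(eta_b) <= eta_hi

# Case 7 of Table I passes; case 1 overshoots the BAU window.
assert acceptable(1.46427e-7, 6.05975e-10)
assert not acceptable(-1.77532e-5, 7.39703e-8)
```

The same filter, applied to all 152 combinations, yields the handful of favoured quadrant-octant solutions quoted in the Introduction.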
\begin{figure}[b]
\centerline{\begin{subfigure}[]{\includegraphics[width=9.7cm]{1s8_delmm.eps}}\end{subfigure}
\begin{subfigure}[]{\includegraphics[width=9.7cm]{2s8_delmm.eps}}\end{subfigure}}
\caption{Variation of $ \eta_{B} $ with $ \Delta m^{2}_{31} $ for case 7 of Table I, based on the 1$ \sigma $ and 2$\sigma $ ranges of $ \Delta m^{2}_{31} $ in Figs. 3(a) and 3(b), respectively. $ \eta_{B} $ is plotted vs $ \Delta m^{2}_{31} $ [$eV^{2}$] with CP phase $ \delta_{CP}= 0.488 \pi$, for the case in which the R matrix consists of both $V_{CKM}$ and $U_{PMNS}$. The blue solid line in Figs. 3(a), 3(b) corresponds to $\theta_{23}$ in LO,
$ \delta_{CP}= 0.488 \pi $ (first quadrant) and IH. The black horizontal lines correspond to the upper and lower limits on $ \eta_{B} $, $5.7\times 10^{-10} < \eta_{B} < 6.7 \times 10^{-10} $. As can be seen, the plots in Figs. 3(a), 3(b) satisfy the current experimental constraints on $ \eta_{B} $.}
\end{figure}
Figure 3 shows the allowed regions of $ |\eta_{B}| $ in the ($ \Delta m^{2}_{31} $, $ |\eta_{B}| $) plane for the $ \delta_{CP} $ allowed at maximal sensitivity of the CP discovery potential from Fig. 2 (case 7 of Table I). Here we show the variation of $ |\eta_{B}| $ with $ \Delta m^{2}_{31} $, taking the variation of the latter within its 1$ \sigma $ and 2$ \sigma $ limits. It can be seen that our calculated $ \eta_{B} $
(blue solid line) lies within the globally fitted range ($5.7\times 10^{-10} < \eta_{B} < 6.7 \times 10^{-10} $) given in \cite{B.D}.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}& \textbf{hierarchy, octant} & \textbf{w ND / w/o ND} & \textbf{$\delta_{CP}$ (deg.)} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$ &$\text{NH, HO}$ & w ND &$ 101 $ & $-0.0000189857$ & $7.85709\times10^{-8} $ \\
\hline
$2$&$\text{NH, HO}$ & w ND &$ 281$ & $-3.51289\times10^{-5}$&$1.45378\times10^{-7}$ \\
\hline
$3$&$\text{NH, HO}$ & w/o ND &$ 102 $ & $2.72017 \times 10^{-5}$& $1.12572 \times 10^{-7} $\\
\hline
$4$&$\text{NH, HO}$ & w/o ND &$ 283 $ & $-1.82461\times10^{-5}$&$7.551\times10^{-8} $\\
\hline
$5$&$\text{IH, HO}$ & w ND &$ 95 $ & $-1.50195\times10^{-7}$&$6.2157\times10^{-10} $ \\
\hline
$6$&$\text{IH, HO}$ & w/o ND &$ 94 $ & $-8.7785\times10^{-8}$&$3.63291\times10^{-10} $ \\
\hline
$7$&$\text{IH, HO}$ & w ND &$ 281 $ & $-5.8547 \times 10^{-6}$&$2.42292\times10^{-8} $\\
\hline
$8$&$\text{IH, HO}$ & w/o ND &$ 272$ & $9.97129\times10^{-6}$&$4.12654\times10^{-8} $\\
\hline
\end{tabular}
\end{center}
\caption{Same as in Table I, but HO values are used.}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}& \textbf{hierarchy, octant} & \textbf{w ND / w/o ND} & \textbf{$\delta_{CP}$ (deg.)} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$ &$\text{NH, HO}$ & w ND &$ 101 $ & $0.0000268767$ & $1.11227\times10^{-7} $ \\
\hline
$2$&$\text{NH, HO}$ & w ND &$ 281$ & $0.0000112743$&$4.66576\times10^{-8}$ \\
\hline
$3$&$\text{NH, HO}$ & w/o ND &$ 102 $ & $6.73637\times 10^{-6}$& $2.78779 \times 10^{-8} $\\
\hline
$4$&$\text{NH, HO}$ & w/o ND &$ 283 $ & $0.0000163668$&$6.77325\times10^{-8} $\\
\hline
$5$&$\text{IH, HO}$ & w ND &$ 95 $ & $4.1771\times10^{-6}$&$1.72891\times10^{-8} $ \\
\hline
$6$&$\text{IH, HO}$ & w/o ND &$ 94 $ & $-1.99098\times10^{-6}$&$8.23952\times10^{-9} $ \\
\hline
$7$&$\text{IH, HO}$ & w ND &$ 281 $ & $7.65022 \times 10^{-6}$&$3.16598\times10^{-8} $\\
\hline
$8$&$\text{IH, HO}$ & w/o ND &$ 272$ & $-0.00001093$&$4.52369\times10^{-8} $\\
\hline
\end{tabular}
\end{center}
\caption{Same as in Table III, but R matrix consists of $ U_{PMNS} $ only. }
\end{table}
Next, we explore values of $ \delta_{CP} $ corresponding to $\chi^{2} = $ 4, 9, 16, 25 from Fig. 2, for which the CP discovery potential of LBNE/DUNE is non-maximal. For $\chi^{2}$ at the 2$ \sigma $ and 3$ \sigma $ sensitivity of the CP discovery potential, Tables V-VIII summarise the results. We find that, out of the 64 possible cases in all, for 63 cases the calculated BAU is larger than the currently allowed range of the BAU \cite{B.D} by almost two to three orders of magnitude, the exception being case 4 of Table VII, where $ \delta _{CP} $ = 1.924 $ \pi $, IH and HO give a BAU of the order of $8.65034 \times 10^{-12}$, below the allowed $|\eta_{B}| $ limit.
\par
We examine 56 possible cases of non-maximal CP discovery sensitivity of LBNE/DUNE from Fig. 2, summarised in Tables IX-XII, corresponding to $ \chi^{2}$ at the 4$\sigma$ and 5$\sigma$ C.L., out of which only 3 cases are consistent with the experimental $ |\eta_{B}| $ bounds: (a) case 15 of Table XI, where $ \frac{\delta_{CP}}{\pi}=1.43 $, a $ \nu $ mass spectrum of IH nature and the atmospheric angle $ \theta_{23} $ in HO give a CP asymmetry $ \epsilon _{l} = 1.48671 \times 10^{-7} $, which lies within $ |\epsilon _{l}^{max}| = 4.59 \times 10^{-5} $ (the Davidson-Ibarra bound), and $ |\eta_{B}| = 6.15262 \times 10^{-10}$, in agreement with the present BAU range. It is worth noting that this value $ \frac{\delta_{CP}}{\pi}=1.43 $ is close to the central value of $ \delta_{CP} $ from the recent global fit result \cite{kol}; (b) case 13 of Table XI, where $ \frac{\delta_{CP}}{\pi}= 0.3833 $, a $ \nu $ mass spectrum of IH nature and $ \theta_{23} $ in HO give $ \epsilon _{l} = 1.40342 \times 10^{-7} $ ($ \leq |\epsilon _{l}^{max}| = 4.59 \times 10^{-5} $) and $ |\eta_{B}| = 5.80973 \times 10^{-10}$, consistent with the allowed BAU range. In both cases (a) and (b), the $ R_{1j} $ elements of the R matrix consist of $ U_{PMNS} $ and $V_{CKM}$; (c) case 4 of Table XII, where $ \frac{\delta_{CP}}{\pi}= 1.727 $, an IH $ \nu $ mass spectrum and $ \theta_{23} $ in LO give $ |\epsilon _{l}| = 1.47958 \times 10^{-7} $, within $ |\epsilon _{l}^{max}| = 4.59 \times 10^{-5} $, and $ |\eta_{B}| = 6.12311 \times 10^{-10}$, in agreement with the current experimental constraints \cite{B.D}.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}&\textbf{Hierarchy, Octant} & \textbf{$\Delta \chi^{2}$} & \textbf{$\delta_{CP}$ $(^{\circ})$} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$&$\text{NH, LO}$ & $4$&$ 17 $ & $-3.57328\times10^{-5}$&$1.4787\times10^{-7} $ \\
\hline
$2$&$\text{NH, LO}$ & $9$&$ 38 $ & $1.91122\times10^{-5}$&$7.90943\times10^{-8} $ \\
\hline
$3$&$\text{NH, LO}$ & $9$&$ 147 $ & $3.19392\times10^{-5}$&$1.32178\times10^{-7} $ \\
\hline
$4$&$\text{NH, LO}$ & $4$&$ 154$ & $3.33001\times10^{-6}$&$1.3781\times10^{-8} $ \\
\hline
$5$&$\text{NH, LO}$ & $4$&$ 203.5 $ & $3.31724\times10^{-5}$&$1.37281\times10^{-7} $\\
\hline
$6$&$\text{NH, LO}$ & $9$&$ 213 $ & $1.18422\times10^{-5}$&$4.9008\times10^{-8} $\\
\hline
$7$&$\text{NH, LO}$ & $9$&$ 332$ & $-7.01565\times10^{-6}$&$2.90337\times10^{-8} $\\
\hline
$8$&$\text{NH, LO}$ & $4$&$ 346.5$ & $1.72854\times10^{-6}$&$7.15341\times10^{-9} $\\
\hline
$9$&$\text{NH, HO}$ & $4$&$ 17 $ & $-3.6128\times10^{-5}$&$1.49513\times10^{-7} $ \\
\hline
$10$&$\text{NH, HO}$ & $4$&$ 155 $ & $-2.65416\times10^{-5}$&$1.0984\times10^{-7} $ \\
\hline
$11$&$\text{NH, HO}$ & $4$&$ 203 $ & $3.76207\times10^{-5}$&$1.5569\times10^{-7} $ \\
\hline
$12$&$\text{NH, HO}$ & $4$&$ 347.5$ & $8.3309\times10^{-7}$&$3.44768\times10^{-9} $ \\
\hline
$13$&$\text{NH, HO}$ & $9$&$ 38.3 $ & $-1.54969\times10^{-5}$&$6.41328\times10^{-8} $\\
\hline
$14$&$\text{NH, HO}$ & $9$&$ 147 $ & $3.09483\times10^{-5}$&$1.28077\times10^{-7} $\\
\hline
$15$&$\text{NH, HO}$ & $9$&$ 212$ & $-3.30145\times10^{-5}$&$1.36628\times10^{-7} $\\
\hline
$16$&$\text{NH, HO}$ & $9$&$ 333.5$ & $-1.97211\times10^{-6}$&$8.16144\times10^{-9} $\\
\hline
\end{tabular}
\end{center}
\caption{Calculated values of the CP asymmetry $\epsilon_{l}$ and the baryon-to-photon ratio $|\eta_{B}|$ in the case of NH, for $ R_{1j} $ elements of the R matrix consisting of $ U_{PMNS} $ and $V_{CKM}$, for DUNE/LBNE with its near detector, with $ \chi^{2} = 4$ and $9$ measuring the CP discovery sensitivity from Fig. 2.}
\end{table}
\begin{center}
\begin{figure*}[htbp]
\centering
{\begin{subfigure}[]{\includegraphics[width=8.87 cm]{ss.eps}}\end{subfigure}
\begin{subfigure}[]{\includegraphics[width=8.87cm]{st.eps}}\end{subfigure}\\
\caption{Variation of $ \eta_{B} $ with $ \Delta m^{2}_{31} $ within its 3$ \sigma $ C.L. The upper and lower limits on $ \eta_{B} $, $5.7\times 10^{-10} < \eta_{B} < 6.7 \times 10^{-10} $, are indicated by blue dashed horizontal lines; the black dotted line corresponds to the best-fit value, $ \eta_{B} = 6.05 \times 10^{-10}$. The left panel, Fig. 4(a), shows $ \eta_{B} $ vs $ \Delta m^{2}_{31} $ for $ \delta_{CP} = 1.43 \pi, 0.527 \pi, 0.383 \pi $; the right panel, Fig. 4(b), shows the variation of $ \eta_{B} $ with $ \Delta m^{2}_{31} $ for $ \delta_{CP} = 0.488 \pi, 1.727 \pi$.}}
\end{figure*}
\end{center}
Plugging the experimental data for $ \Delta m^{2} _{31} $ at the 3$ \sigma $ C.L., and the other $ \nu $ oscillation parameters at their best-fit values, into Eqs. (8)-(12), we predict the values of $ \eta_{B} $ from Eqs. (14)-(16), as shown in Fig. 4. The figure displays the allowed regions of $| \eta_{B} | $ in the ($ \Delta m^{2}_{31}, | \eta_{B} | $) plane for $ \Delta m^{2}_{31} $ at its 3$ \sigma $ C.L. In Fig. 4(a) the red solid line corresponds to case 15 of Table XI, where $ \delta_{CP} = 1.43 \pi $, the $ \nu $ mass spectrum is of IH structure, the atmospheric angle $ \theta_{23} $ is in the HO, and $ |\eta_{B}| $ lies in the range consistent with $5.7\times 10^{-10} < \eta_{B} < 6.7 \times 10^{-10} $, except for $ \Delta m^{2}_{31} > -2.2695 \times 10^{-3}\,\mathrm{eV}^{2} $ and $ \Delta m^{2}_{31} < -2.635 \times 10^{-3}\,\mathrm{eV}^{2} $, where the red solid line departs from the experimental bound on $ \eta_{B} $. The orange solid line in Fig. 4(a) depicts case 13 of Table XI, which has $ \delta_{CP} = 0.383 \pi $, an IH $ \nu $ mass spectrum, $ \theta_{23} $ in the HO, and $ |\eta_{B}| $ within the experimentally allowed range except for $ \Delta m^{2}_{31} > -2.385 \times 10^{-3}\,\mathrm{eV}^{2} $. A slight deviation of $ \eta_{B} $ for $ \delta_{CP} = 0.5277 \pi $ can be seen in Fig. 4(a) for $ \Delta m^{2}_{31} < -2.63\times 10^{-3}\,\mathrm{eV}^{2}$ (green solid line). Similarly, the green solid line in Fig. 4(b) corresponds to $ \delta_{CP} =0.488 \pi $ and an IH $ \nu $ spectrum, which is consistent with the allowed range of BAU for $ \Delta m^{2}_{31} < -2.27\times 10^{-3}\,\mathrm{eV}^{2}$. The red solid line in Fig. 4(b) characterises case 4 of Table XII, which has $ \delta_{CP} = 1.727 \pi $, an IH $ \nu $ mass spectrum, the atmospheric angle $ \theta_{23} $ in the LO, and $ |\eta_{B}| $ in the range favoured by the present experimental limit, $5.7\times 10^{-10} < |\eta_{B}| < 6.7 \times 10^{-10} $, except for $ \Delta m^{2}_{31} > -2.255 \times 10^{-3}\,\mathrm{eV}^{2} $, where the curve fails to fall within the allowed $ |\eta_{B}| $ bounds even at the $ 2 \sigma $ C.L. of $ \Delta m^{2}_{31} $.
\par
From the above discussion we conclude that, out of the 152 cases presented in Tables I-XII, only for five cases do the values of $ \eta_{B} $ lie within the experimental limits; these are summarised in Table XIII.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}&\textbf{Hierarchy, Octant} & \textbf{$\Delta \chi^{2}$} & \textbf{$\delta_{CP}$ $(^{\circ})$} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$&$\text{NH, LO}$ & $4$&$ 17 $ & $1.76335\times10^{-5}$&$7.2975\times10^{-8} $ \\
\hline
$2$&$\text{NH, LO}$ & $9$&$ 38 $ & $1.88675\times10^{-5}$&$7.80817\times10^{-8} $ \\
\hline
$3$&$\text{NH, LO}$ & $9$&$ 147 $ & $-3.2199\times10^{-5}$&$1.33253\times10^{-7} $ \\
\hline
$4$&$\text{NH, LO}$ & $4$&$ 203.5$ & $-3.28826\times10^{-6}$&$1.36082\times10^{-7} $ \\
\hline
$5$&$\text{NH, LO}$ & $4$&$ 154 $ & $4.1195\times10^{-6}$&$1.70482\times10^{-8} $\\
\hline
$6$&$\text{NH, LO}$ & $9$&$ 213 $ & $-3.16969\times10^{-5}$&$1.31175\times10^{-7} $\\
\hline
$7$&$\text{NH, LO}$ & $9$&$ 332$ & $-3.00567\times10^{-5}$&$1.24385\times10^{-7} $\\
\hline
$8$&$\text{NH, LO}$ & $4$&$ 346.5$ & $3.20414\times10^{-5}$&$1.32601\times10^{-7} $\\
\hline
$9$&$\text{NH, HO}$ & $4$&$ 17 $ & $1.76335\times10^{-5}$&$7.2975\times10^{-8} $ \\
\hline
$10$&$\text{NH, HO}$ & $4$&$ 155 $ & $2.83588\times10^{-5}$&$1.1736\times10^{-7} $ \\
\hline
$11$&$\text{NH, HO}$ & $4$&$ 203 $ & $3.10849\times10^{-5}$&$1.28642\times10^{-7} $ \\
\hline
$12$&$\text{NH, HO}$ & $4$&$ 347.5$ & $-2.16746\times10^{-5}$&$8.96988\times10^{-8} $ \\
\hline
$13$&$\text{NH, HO}$ & $9$&$ 38.3 $ & $3.10849\times10^{-5}$&$1.28642\times10^{-7} $\\
\hline
$14$&$\text{NH, HO}$ & $9$&$ 147 $ & $-3.2199\times10^{-5}$&$1.33253\times10^{-7} $\\
\hline
$15$&$\text{NH, HO}$ & $9$&$ 212$ & $3.82461\times10^{-6}$&$1.58278\times10^{-8} $\\
\hline
$16$&$\text{NH, HO}$ & $9$&$ 333.5$ & $2.77229\times10^{-7}$&$1.14729\times10^{-7} $\\
\hline
\end{tabular}
\end{center}
\caption{Same as in Table V, but $ R = U_{PMNS} $ only.}
\end{table}
\begin{center}
\begin{figure*}[htbp]
\centering
{\begin{subfigure}[]{\includegraphics[width=8.7cm]{th-88_3s.eps}}\end{subfigure}
\begin{subfigure}[]{\includegraphics[width=8.7cm]{th-95_3s.eps}}\end{subfigure}\\
\begin{subfigure}[]{\includegraphics[width=8.7cm]{th_257_3s.eps}}\end{subfigure}
\begin{subfigure}[]{\includegraphics[width=8.7cm]{th_69_3s.eps}}\end{subfigure}\\
\begin{subfigure}[]{\includegraphics[width=8.7cm]{th-311-3s.eps}}\end{subfigure}\\
\caption{Plot of $ \eta_{B} $ vs $ \theta_{13} $ for the favoured cases, within the 3 $\sigma$ errors of the best-fit value of $ \theta_{13} $, with CP phases: Fig. 5(a) $ \delta_{CP}= 88^{\circ} $, IH, LO; Fig. 5(b) $ \delta_{CP}= 95^{\circ}$, IH, HO; Fig. 5(c) $ \delta_{CP}= 257.5^{\circ}$, IH, HO; Fig. 5(d) $ \delta_{CP}= 69^{\circ}$, IH, HO; and Fig. 5(e) $ \delta_{CP}= 311^{\circ}$, IH, LO. The black solid horizontal lines correspond to the upper and lower limits on $ \eta_{B} $, $5.7\times 10^{-10} < \eta_{B} < 6.7 \times 10^{-10} $. }}
\end{figure*}
\end{center}
Figure 5 completes our discussion by showing the allowed regions in the ($ \theta_{13}, |\eta_{B}| $) plane for the five cases favoured by the analysis above. The curves in Figs. 5(c) and 5(d), for $ \delta_{CP} = 1.43 \pi$ (IH, $ \theta_{23} $ in HO) and $ \delta_{CP} = 0.383 \pi$ (IH, $ \theta_{23} $ in HO), are somewhat symmetrical about $ \theta_{13} = 9^{\circ}$. For $ \delta_{CP} = 257.5^{\circ}$, values of $ \theta_{13} $ around $9.0974^{\circ}$ to $9.1^{\circ}$, $9.2^{\circ}$ to $9.22^{\circ}$, $8.94^{\circ}$ to $8.97^{\circ}$, and $8.82^{\circ}$ to $8.84^{\circ}$ are favoured, in good agreement with the global-fit value of $ \theta_{13} $ \cite{kol}. For $ \delta_{CP} = 69^{\circ}$, values of $ \theta_{13} $ around $9.0874^{\circ}$ to $9.1^{\circ}$, $9.2^{\circ}$ to $9.21^{\circ}$, $8.945^{\circ}$ to $8.99^{\circ}$, and $8.85^{\circ}$ are favoured for $5.7\times 10^{-10} < \eta_{B} < 6.7 \times 10^{-10} $, compatible with the global-fit value of $ \theta_{13} $ \cite{kol}. For $ \delta_{CP} = 88^{\circ}$ in Fig. 5(a) (IH, $ \theta_{23} $ in LO), values of $ \theta_{13} $ around $9.0974^{\circ}$ to $9.103^{\circ}$ and $9.61^{\circ}$ to $9.65^{\circ}$ are favoured for the same $ \eta_{B} $ range. Similarly, for $ \delta_{CP} = 311^{\circ}$ in Fig. 5(e) (IH, $ \theta_{23} $ in LO), values of $ \theta_{13} $ around $9.0974^{\circ}$ to $9.12^{\circ}$ and $9.72^{\circ}$ to $9.78^{\circ}$ are mostly favoured, consistent with the global-fit data on $ \theta_{13} $ at the 2$ \sigma $ and 3$ \sigma $ C.L. \cite{kol}. Lastly, for $ \delta_{CP} = 95^{\circ}$ in Fig. 5(b) (IH, $ \theta_{23} $ in HO), values of $ \theta_{13} $ around $9.0974^{\circ}$ to $9.11^{\circ}$ and $9.52^{\circ}$ to $9.54^{\circ}$ are mostly favoured, compatible with the global fit of $ \theta_{13} $ at the 2$ \sigma $ and 3$ \sigma $ C.L. \cite{kol}.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}&\textbf{Hierarchy, Octant} & \textbf{$\Delta \chi^{2}$} & \textbf{$\delta_{CP}$ $(^{\circ})$} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$&$\text{IH, HO}$ & $4$&$ 13.5 $ & $4.91465\times10^{-6}$&$2.03389\times10^{-8} $ \\
\hline
$2$&$\text{IH, HO}$ & $4$&$ 157.5 $ & $-7.63368\times10^{-7}$&$3.15914\times10^{-9} $ \\
\hline
$3$&$\text{IH, HO}$ & $4$&$ 202 $ & $1.24531\times10^{-6}$&$5.15362\times10^{-9} $ \\
\hline
$4$&$\text{IH, HO}$ & $4$&$ 346.3$ & $-2.09025\times10^{-9}$&$8.65034\times10^{-12} $ \\
\hline
$5$&$\text{IH, HO}$ & $9$&$ 29$ & $-5.98012\times10^{-6}$&$2.47483\times10^{-8} $\\
\hline
$6$&$\text{IH, HO}$ & $9$&$ 153 $ & $1.18773\times10^{-5}$&$4.91533\times10^{-8} $\\
\hline
$7$&$\text{IH, HO}$ & $9$&$ 209$ & $8.38787\times10^{-6}$ &$3.47125\times10^{-8} $\\
\hline
$8$&$\text{IH, HO}$ & $9$&$ 332.5$ & $2.45147\times10^{-7}$&$1.01449\times10^{-9} $\\
\hline
$9$&$\text{IH, LO}$ & $9$&$ 332.5$ & $1.03435\times10^{-6}$&$4.28058\times10^{-9} $ \\
\hline
$10$&$\text{IH, LO}$ & $9$&$ 209 $ & $5.36981\times10^{-6}$&$2.22225\times10^{-8} $ \\
\hline
$11$&$\text{IH, LO}$ & $9$&$ 153 $ & $7.94367\times10^{-6}$&$3.28743\times10^{-8} $ \\
\hline
$12$&$\text{IH, LO}$ & $9$&$ 29$ & $-7.28224\times10^{-6}$&$3.0137\times10^{-8} $ \\
\hline
$13$&$\text{IH, LO}$ & $4$&$ 346.1 $ & $-1.04874\times10^{-6}$&$4.34013\times10^{-9} $\\
\hline
$14$&$\text{IH, LO}$ & $4$&$ 203 $ & $1.26601\times10^{-5}$&$5.23928\times10^{-8} $\\
\hline
$15$&$\text{IH, LO}$ & $4$&$ 157.5$ & $-9.9942\times10^{-7}$&$4.13602\times10^{-9} $\\
\hline
$16$&$\text{IH, LO}$ & $4$&$ 13.5$ & $-3.75736\times10^{-7}$&$1.55496\times10^{-9} $\\
\hline
\end{tabular}
\end{center}
\caption{Same as in Table V, but IH is used.}
\end{table}
\begin{center}
\begin{figure*}[htbp]
{\centering
\begin{subfigure}[]{\includegraphics[width=8.7cm]{JCPp.eps}}\end{subfigure}
\begin{subfigure}[]{\includegraphics[width=8.7cm]{JCPm.eps}}\end{subfigure}\\
\caption{Plot of $ J_{CP} $ vs $ \theta_{13} $ within the 3 $\sigma$ C.L. of the best-fit value of $ \theta_{13} $, with CP phases in Fig. 6(a): $ \delta_{CP}= 95^{\circ} $, IH, HO; $ \delta_{CP}= 69^{\circ}$, IH, HO; $ \delta_{CP}= 88^{\circ}$, IH, LO; and in Fig. 6(b): $ \delta_{CP}= 257.5^{\circ} $, IH, HO; $ \delta_{CP}= 311^{\circ}$, IH, LO. The horizontal line represents the maximum allowed CP violation in the leptonic sector, $J_{CP}\leq 0.04|\sin\delta_{CP}|$. }}
\end{figure*}
\end{center}
The magnitude of CP violation in $ \nu_{l} \rightarrow \nu_{l^{'}} $ and $ \bar{\nu}_{l} \rightarrow \bar{\nu}_{l^{'}} $ oscillations ($l \neq l^{'}$; $l,l^{'}=e,\mu,\tau$) is determined by the rephasing-invariant Jarlskog parameter $ \textit{J}_{CP} $, which in the standard parametrisation of the $ \nu $ mixing matrix has the form \cite{kol}:
\begin{equation}
\textit{J}_{CP} = \mathrm{Im}\,(U_{\mu 3}U^{*}_{e3}U_{e2}U^{*}_{\mu2}) = \frac{1}{8}\cos\theta_{13}\sin 2\theta_{12}\sin 2\theta_{23}\sin 2\theta_{13}\sin\delta_{CP}.
\end{equation}
Since $\sin 2\theta_{12}$, $\sin 2\theta_{23}$ and $\sin 2\theta_{13}$ have been determined experimentally with relatively good precision \cite{Fgli,Frero,Garc}, the size of CP-violation effects in $ \nu $ oscillations depends essentially on the leptonic CPV phase $ \delta_{CP} $. The current data imply $ \textit{J}_{CP} = 0.040|\sin\delta_{CP}| $ \cite{kol}, with a best-fit value $\textit{J}_{CP}^{best} = -0.032$ \cite{kol}. Our values of the Jarlskog invariant, calculated by plugging in the three $ \nu $ mixing angles at their best-fit values for the favoured cases of BAU, together with the corresponding values of the leptonic $ \delta_{CP} $ phase, are summarised in Table XIII. We find that for all five favoured cases the calculated values of $ J_{CP} $ lie within the present experimental limits.
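A minimal numerical sketch of the Jarlskog formula above, using illustrative best-fit angles ($\theta_{12}=33.5^{\circ}$, $\theta_{23}=45^{\circ}$, $\theta_{13}=8.5^{\circ}$ are assumed placeholders, not necessarily the exact inputs used in the text), confirms that the $\delta_{CP}$-independent prefactor stays below the quoted maximum of 0.040:

```python
import math

# Sketch of the Jarlskog formula above, using illustrative (assumed)
# best-fit mixing angles rather than the exact inputs of the text.
def j_cp(th12, th23, th13, delta_cp):
    """J_CP = (1/8) cos(th13) sin(2 th12) sin(2 th23) sin(2 th13) sin(delta_CP)."""
    return (math.cos(th13) * math.sin(2 * th12) * math.sin(2 * th23)
            * math.sin(2 * th13) * math.sin(delta_cp)) / 8.0

# Assumed placeholder angles (degrees -> radians):
th12, th23, th13 = map(math.radians, (33.5, 45.0, 8.5))

# The delta_CP-independent prefactor respects the quoted maximum 0.040:
prefactor = j_cp(th12, th23, th13, math.pi / 2)
assert 0 < prefactor <= 0.040

# For the favoured case delta_CP = 1.43 pi (third quadrant), J_CP < 0
# and |J_CP| <= 0.040 |sin(delta_CP)|, consistent with Table XIII:
delta = 1.43 * math.pi
assert j_cp(th12, th23, th13, delta) < 0
assert abs(j_cp(th12, th23, th13, delta)) <= 0.040 * abs(math.sin(delta))
```

With these placeholder angles the prefactor comes out near 0.033, so any $|J_{CP}|$ of the size listed in Table XIII is automatically inside the quoted bound.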
\par
In Fig. 6 we plot $ J_{CP} $ vs $ \theta_{13} $, varying $ \theta_{13} $ within the 3$ \sigma $ range of its best-fit value, and find that for all five cases listed above $ J_{CP} $ lies within its present experimental limits.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}&\textbf{Hierarchy, Octant} & \textbf{$\Delta \chi^{2}$} & \textbf{$\delta_{CP}$ $(^{\circ})$} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$&$\text{IH, HO}$ & $4$&$ 13.5 $ & $4.00427\times10^{-6}$&$1.65714\times10^{-8} $ \\
\hline
$2$&$\text{IH, HO}$ & $4$&$ 157.5 $ & $3.11981\times10^{-6}$&$1.29111\times10^{-8} $ \\
\hline
$3$&$\text{IH, HO}$ & $4$&$ 202 $ & $3.99325\times10^{-6}$&$1.65258\times10^{-8} $ \\
\hline
$4$&$\text{IH, HO}$ & $4$&$ 346.3$ & $4.15622\times10^{-6}$&$1.72002\times10^{-8} $ \\
\hline
$5$&$\text{IH, HO}$ & $9$&$ 29$ & $4.15708\times10^{-6}$&$1.72038\times10^{-8} $\\
\hline
$6$&$\text{IH, HO}$ & $9$&$ 153 $ & $-3.99333\times10^{-6}$&$1.65267\times10^{-8} $\\
\hline
$7$&$\text{IH, HO}$ & $9$&$ 209$ & $-7.0083\times10^{-7}$ &$2.90033\times10^{-9} $\\
\hline
$8$&$\text{IH, HO}$ & $9$&$ 332.5$ & $-3.56253\times10^{-6}$&$1.47433\times10^{-8} $\\
\hline
$9$&$\text{IH, LO}$ & $9$&$ 332.5$ & $1.03435\times10^{-6}$&$4.28058\times10^{-9} $ \\
\hline
$10$&$\text{IH, LO}$ & $9$&$ 209 $ & $-7.0083\times10^{-7}$&$2.90033\times10^{-9} $ \\
\hline
$11$&$\text{IH, LO}$ & $9$&$ 153 $ & $-3.99333\times10^{-6}$&$1.65261\times10^{-8} $ \\
\hline
$12$&$\text{IH, LO}$ & $9$&$ 29$ & $4.15708\times10^{-6}$&$1.72038\times10^{-8} $ \\
\hline
$13$&$\text{IH, LO}$ & $4$&$ 346.1 $ & $3.63103\times10^{-6}$&$1.50267\times10^{-8} $\\
\hline
$14$&$\text{IH, LO}$ & $4$&$ 203 $ & $-2.80629\times10^{-6}$&$1.16136\times10^{-8} $\\
\hline
$15$&$\text{IH, LO}$ & $4$&$ 157.5$ & $3.11981\times10^{-6}$&$1.29111\times10^{-8} $\\
\hline
$16$&$\text{IH, LO}$ & $4$&$ 13.5$ & $4.000427\times10^{-6}$&$1.65714\times10^{-8} $\\
\hline
\end{tabular}
\end{center}
\caption{Same as in Table VI, but IH is used.}
\end{table}
\section{\textbf{Conclusion}}
Measuring CP violation in the lepton sector is one of the most challenging tasks today. A systematic study of the CP sensitivity of the current and upcoming LBNE/DUNE was carried out in our earlier work \cite{DK}, which may aid a precision measurement of the leptonic $ \delta_{CP}$ phase. In this work, we studied how the entanglement between the quadrant of the leptonic CPV phase and the octant of the atmospheric mixing angle $ \theta_{23} $ at LBNE/DUNE can be broken via leptogenesis and baryogenesis. Here we have considered only the effect of the ND in LBNE on the sensitivity of the CPV phase measurement, but similar conclusions would hold for the effect of reactor experiments as well. This study is done for both Normal and Inverted hierarchies, and for both the Higher and Lower Octants. We considered two cases of the fermion rotation matrix: PMNS only, and CKM+PMNS.
Following the results of \cite{DK}, the enhancement of the CPV sensitivity with respect to its quadrant is utilised here to calculate the lepton-antilepton asymmetry, which in turn is used to calculate the value of the BAU. This being an era of precision measurements in neutrino physics, we considered the variation of $\Delta m^{2}_{31}$ within its 1$\sigma$, 2$\sigma$ and 3$\sigma$ ranges, and of $\theta_{13}$ within its 3$ \sigma $ range. We calculated the baryon-to-photon ratio and compared it with its experimentally known best-fit value.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}&\textbf{Hierarchy, Octant} & \textbf{$\Delta \chi^{2}$} & \textbf{$\delta_{CP}$ $(^{\circ})$} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$&$\text{NH, LO}$ & $16$&$ 56 $ & $1.35794\times10^{-5}$&$5.62719\times10^{-8} $ \\
\hline
$2$&$\text{NH, LO}$ & $16$&$ 136 $ & $-3.11475\times10^{-5}$&$1.28902\times10^{-7} $ \\
\hline
$3$&$\text{NH, LO}$ & $16$&$ 232 $ & $1.37456\times10^{-5}$&$5.68851\times10^{-8} $ \\
\hline
$4$&$\text{NH, LO}$ & $16$&$ 314$ & $1.49244\times10^{-6}$&$6.17636\times10^{-9} $ \\
\hline
$5$&$\text{NH, LO}$ & $25$&$ 84 $ & $3.56574\times10^{-5}$&$1.47565\times10^{-7} $\\
\hline
$6$&$\text{NH, LO}$ & $25$&$ 122.5 $ & $7.31569\times10^{-6}$&$3.02754\times10^{-8} $\\
\hline
$7$&$\text{NH, LO}$ & $25$&$ 263.5$ & $-1.25402\times10^{-5}$&$5.18967\times10^{-8} $\\
\hline
$8$&$\text{NH, LO}$ & $25$&$ 294.5$ & $4.28344\times10^{-6}$&$1.77267\times10^{-8} $\\
\hline
$9$&$\text{NH, HO}$ & $16$&$ 59 $ & $3.19255\times10^{-5}$&$1.32121\times10^{-7} $ \\
\hline
$10$&$\text{NH, HO}$ & $16$&$132.5$ & $-1.70443\times10^{-5}$&$7.05367\times10^{-8} $ \\
\hline
$11$&$\text{NH, HO}$ & $16$&$ 232.25 $ & $8.92875\times10^{-6}$&$3.69509\times10^{-8} $ \\
\hline
$12$&$\text{NH, HO}$ & $16$&$ 314$ & $4.8229\times10^{-6}$&$1.99592\times10^{-8} $ \\
\hline
\end{tabular}
\end{center}
\caption{Same as in Table V, but for $ \chi^{2} = 16$ and $25$.}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}&\textbf{Hierarchy, Octant} & \textbf{$\Delta \chi^{2}$} & \textbf{$\delta_{CP}$ $(^{\circ})$} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$&$\text{NH, LO}$ & $16$&$ 56 $ & $-2.96623\times10^{-5}$&$1.2275\times10^{-7} $ \\
\hline
$2$&$\text{NH, LO}$ & $16$&$ 136 $ & $3.22739\times10^{-5}$&$1.33563\times10^{-7} $ \\
\hline
$3$&$\text{NH, LO}$ & $16$&$ 232 $ & $-2.72203\times10^{-5}$&$1.12649\times10^{-7} $ \\
\hline
$4$&$\text{NH, LO}$ & $16$&$ 314$ & $-1.04375\times10^{-5}$&$4.31949\times10^{-8} $ \\
\hline
$5$&$\text{NH, LO}$ & $25$&$ 84 $ & $-3.32343\times10^{-5}$&$1.37537\times10^{-7} $\\
\hline
$6$&$\text{NH, LO}$ & $25$&$ 122.5 $ & $-1.47354\times10^{-6}$&$6.09812\times10^{-9} $\\
\hline
$7$&$\text{NH, LO}$ & $25$&$ 263.5$ & $-2.36179\times10^{-5}$&$9.77404\times10^{-8} $\\
\hline
$8$&$\text{NH, LO}$ & $25$&$ 294.5$ & $-3.32892\times10^{-6}$&$1.37765\times10^{-7} $\\
\hline
$9$&$\text{NH, HO}$ & $16$&$ 59 $ & $-3.27271\times10^{-5}$&$1.35438\times10^{-7} $ \\
\hline
$10$&$\text{NH, HO}$ & $16$&$132.5$ & $-2.97961\times10^{-5}$&$1.23309\times10^{-7} $ \\
\hline
$11$&$\text{NH, HO}$ & $16$&$ 232.25 $ & $-1.46679\times10^{-5}$&$6.07021\times10^{-8} $ \\
\hline
$12$&$\text{NH, HO}$ & $16$&$ 314$ & $-1.04375\times10^{-5}$&$4.31949\times10^{-8} $ \\
\hline
\end{tabular}
\end{center}
\caption{Same as in Table VI, but for $ \chi^{2} = 16$ and $25$.}
\end{table}
To break the entanglement between the quadrant of the CPV phase and the octant of $ \theta_{23} $, we have calculated the BAU ($ \eta_{B} $) for 152 cases, as shown in Tables I-XII, and found that only for five cases does our calculated $ \eta_{B} $ lie within the presently allowed range of $ \eta_{B} $. These five cases are $\delta_{CP} = 1.43\pi$ (third quadrant), $\delta_{CP} = 0.527\pi$ (second quadrant), $\delta_{CP} = 0.383 \pi$ (first quadrant) and $\delta_{CP} = 0.488\pi$ (first quadrant) when the R matrix consists of both $V_{CKM}$ and $U_{PMNS}$, and $\delta_{CP} = 1.727\pi$ (fourth quadrant) when the R matrix consists of $U_{PMNS}$ only. Next, we studied the variation of $ \eta_{B} $ with respect to the 1$ \sigma $, 2$ \sigma $ and 3$ \sigma $ variation of $ \Delta m^{2}_{31} $, as shown in Figs. 3 and 4. It can be seen from Figs. 3 and 4 that for the variation of $\Delta m^{2}_{31}$ within its 1$\sigma$ range, all calculated values of $\eta_B$ lie in the allowed range. For $\Delta m^{2}_{31}$ at its 3$ \sigma $ C.L., the case $ \delta_{CP} = 0.488 \pi $ is consistent with the allowed range of BAU for $ \Delta m^{2}_{31} < -2.27\times 10^{-3}\,\mathrm{eV}^{2}$. Similarly, a very slight discrepancy of $ \eta_{B} $ for $ \delta_{CP} =0.5277 \pi $ can be seen in Fig. 4(a) for $ \Delta m^{2}_{31} < -2.63\times 10^{-3}\,\mathrm{eV}^{2}$. Case 15 of Table XI, where $ \delta_{CP} = 1.43 \pi $, has $ |\eta_{B}| $ in the range compatible with $5.7\times 10^{-10} < \eta_{B} < 6.7 \times 10^{-10} $, except for $ \Delta m^{2}_{31} > -2.2695 \times 10^{-3}\,\mathrm{eV}^{2} $ and $ \Delta m^{2}_{31} < -2.635 \times 10^{-3}\,\mathrm{eV}^{2} $. It is worth noting that this value, $ \frac{\delta_{CP}}{\pi}=1.43 $, is close to the central value of $ \delta_{CP} $ from the recent global-fit result \cite{kol}. Case 13 of Table XI, with $ \delta_{CP} = 0.383 \pi $, has $ |\eta_{B}| $ in the allowed range, $5.7\times 10^{-10} < \eta_{B} < 6.7 \times 10^{-10} $, except for $ \Delta m^{2}_{31} > -2.385 \times 10^{-3}\,\mathrm{eV}^{2} $. Case 4 of Table XII, where $ \delta_{CP} = 1.727 \pi $, has $ |\eta_{B}| $ in the range favoured by the present experimental constraints except for $ \Delta m^{2}_{31} > -2.255 \times 10^{-3}\,\mathrm{eV}^{2} $, where the curve fails to satisfy the allowed $ |\eta_{B}| $ bounds even at the $ 2 \sigma $ C.L. of $ \Delta m^{2}_{31} $. Interestingly, here the leptonic CPV phase $ \delta_{CP} =1.727 \pi $ lies within the 1$ \sigma $ range of $ \delta_{CP} $ from the latest global-fit analysis, $ \delta_{CP} = 1.67^{+0.37}_{-0.77} $ \cite{kol}; in this case the $ R_{1j} $ elements of the R matrix consist of $ U_{PMNS} $ elements only.
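The quadrant labels quoted for the five favoured cases follow directly from $\delta_{CP}/\pi$; a small helper (our own naming, a sketch rather than anything used in the analysis) reproduces the assignments listed in Table XIII:

```python
# Sketch: quadrant assignment for the leptonic CPV phase, with delta_CP
# given in units of pi, as quoted for the five favoured cases.
def quadrant(delta_over_pi):
    d = delta_over_pi % 2.0  # fold the phase into [0, 2*pi)
    return ("first", "second", "third", "fourth")[int(d // 0.5)]

# The five favoured cases and the quadrants quoted in the text / Table XIII:
favoured = {1.43: "third", 0.527: "second", 0.383: "first",
            0.488: "first", 1.727: "fourth"}
for d, q in favoured.items():
    assert quadrant(d) == q
```

The same mapping classifies, for example, $\delta_{CP} = 1.924\pi$ (case 4 of Table VII) as a fourth-quadrant phase.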
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}&\textbf{Hierarchy, Octant} & \textbf{$\Delta \chi^{2}$} & \textbf{$\delta_{CP}$ $(^{\circ})$} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$&$\text{IH, LO}$ & $25$&$ 59 $ & $7.35254\times10^{-6}$&$3.04279\times10^{-8} $ \\
\hline
$2$&$\text{IH, LO}$ & $25$&$ 131.5 $ & $1.00443\times10^{-6}$&$4.15675\times10^{-9} $ \\
\hline
$3$&$\text{IH, LO}$ & $25$&$ 246.5 $ & $3.71415\times10^{-6}$&$1.53707\times10^{-8} $ \\
\hline
$4$&$\text{IH, LO}$ & $25$&$ 311$ & $3.74283\times10^{-7}$&$1.54894\times10^{-9} $ \\
\hline
$5$&$\text{IH, LO}$ & $16$&$ 42 $ & $-7.26506\times10^{-6}$&$3.00659\times10^{-8} $\\
\hline
$6$&$\text{IH, LO}$ & $16$&$ 140.5$ & $7.91014\times10^{-6}$&$3.27355\times10^{-8} $\\
\hline
$7$&$\text{IH, LO}$ & $16$&$ 225.5$ & $8.74966\times10^{-7}$&$3.62098\times10^{-9} $\\
\hline
$8$&$\text{IH, LO}$ & $16$&$ 320.5$ & $8.74965\times10^{-7}$&$3.62097\times10^{-9} $\\
\hline
$9$&$\text{IH, HO}$ & $16$&$ 45 $ & $-1.88492\times10^{-6}$&$7.80058\times10^{-9} $ \\
\hline
$10$&$\text{IH, HO}$ & $16$&$139$ & $-2.36693\times10^{-7}$&$9.79536\times10^{-10} $ \\
\hline
$11$&$\text{IH, HO}$ & $16$&$ 226.5 $ & $-7.81644\times10^{-7}$&$3.23477\times10^{-9} $ \\
\hline
$12$&$\text{IH, HO}$ & $16$&$ 319$ & $-3.88288\times10^{-6}$&$1.6069\times10^{-8} $ \\
\hline
$13$&$\text{IH, HO}$ & $25$&$ 72$ & $1.40342\times10^{-7}$&$5.80793\times10^{-10} $ \\
\hline
$14$&$\text{IH, HO}$ & $25$&$ 123$ & $-3.73584\times10^{-6}$&$1.54604\times10^{-8} $ \\
\hline
$15$&$\text{IH, HO}$ & $25$&$ 257.5$ & $1.48671\times10^{-7}$&$6.15262\times10^{-10} $ \\
\hline
$16$&$\text{IH, HO}$ & $25$&$ 302$ & $-7.71976\times10^{-7}$&$3.19476\times10^{-9} $ \\
\hline
\end{tabular}
\end{center}
\caption{Same as in Table IX, but IH is used.}
\end{table}
In Fig. 5 we showed the variation of $ \eta_{B} $ with $ \theta_{13} $, taking $ \theta_{13} $ within the 3$ \sigma $ range of its best-fit value, for the five favoured cases, and find that values of $ \theta_{13} $ around $ 9.0974^{\circ} $ to $9.12^{\circ}$ (in good agreement with the current fit data \cite{kol}) are favoured as far as matching the best-fit value of $|\eta_B|$ is concerned.
\par
We also calculated the values of the Jarlskog invariant $ J_{CP} $ for these five cases, and found that they lie within the present experimental limits (shown in Table XIII). The variation of $ J_{CP} $ with $ \theta_{13} $, taking $ \theta_{13} $ within the 3$ \sigma $ range of its best-fit value, was also considered (Fig. 6), and we find that $ J_{CP} $ lies within its experimental limits for these five cases even when the variation of $ \theta_{13} $ is taken into account. These results could be important, as the quadrant of the leptonic CPV phase and the octant of the atmospheric mixing angle $\theta_{23}$ are not yet fixed experimentally. They are also significant in the context of precision measurements of the neutrino oscillation parameters, especially the leptonic CPV phase, $ \Delta m^{2}_{31} $ and the reactor angle $\theta_{13}$.
It may be noted that out of the five cases found favourable in our work, one of the values, $ \delta_{CP} = 1.43\pi $, matches the latest global-fit value, $ \delta_{CP} $ = 1.4 $ \pi $. Future experiments like DUNE/LBNE and Hyper-Kamiokande \cite{Mth} that will measure $ \delta_{CP} $ (especially probing leptonic CPV) will support or disfavour the results presented in this work.
\section*{Acknowledgments}
GG would like to thank UGC, India, for providing RFSMS fellowship to her, during which part of this work was done. DD thanks HRI,
Allahabad, India for
providing a postdoctoral fellowship to him. KB thanks DST-SERB, Govt of India, for financial support through a project.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Case}&\textbf{Hierarchy, Octant} & \textbf{$\Delta \chi^{2}$} & \textbf{$\delta_{CP}$ $(^{\circ})$} & \textbf{$\epsilon_{l}$}& \textbf{$|\eta_{B}|$}\\
\hline
$1$&$\text{IH, LO}$ & $25$&$ 59 $ & $-4.11136\times10^{-6}$&$1.70145\times10^{-8} $ \\
\hline
$2$&$\text{IH, LO}$ & $25$&$ 131.5 $ & $-3.26348\times10^{-6}$&$1.35057\times10^{-8} $ \\
\hline
$3$&$\text{IH, LO}$ & $25$&$ 246.5 $ & $9.54714\times10^{-7}$&$3.95101\times10^{-9} $ \\
\hline
$4$&$\text{IH, LO}$ & $25$&$ 311$ & $1.47958\times10^{-7}$&$6.12311\times10^{-10} $ \\
\hline
$5$&$\text{IH, LO}$ & $16$&$ 42 $ & $3.06981\times10^{-6}$&$1.27042\times10^{-8} $\\
\hline
$6$&$\text{IH, LO}$ & $16$&$ 140.5$ & $4.12475\times10^{-6}$&$1.707\times10^{-8} $\\
\hline
$7$&$\text{IH, LO}$ & $16$&$ 225.5$ & $4.11818\times10^{-6}$&$1.70428\times10^{-8} $\\
\hline
$8$&$\text{IH, LO}$ & $16$&$ 320.5$ & $-4.80846\times10^{-7}$&$1.98994\times10^{-9} $\\
\hline
$9$&$\text{IH, HO}$ & $16$&$ 45 $ & $3.74039\times10^{-6}$&$1.54905\times10^{-8} $ \\
\hline
$10$&$\text{IH, HO}$ & $16$&$139$ & $4.18492\times10^{-7}$&$1.73189\times10^{-8} $ \\
\hline
$11$&$\text{IH, HO}$ & $16$&$ 226.5 $ & $2.40081\times10^{-6}$&$9.93556\times10^{-9} $ \\
\hline
$12$&$\text{IH, HO}$ & $16$&$ 319$ & $-1.06298\times10^{-6}$&$4.39907\times10^{-9} $ \\
\hline
$13$&$\text{IH, HO}$ & $25$&$ 72$ & $-9.54837\times10^{-7}$&$3.95152\times10^{-9} $ \\
\hline
$14$&$\text{IH, HO}$ & $25$&$ 123$ & $3.41971\times10^{-6}$&$1.91552\times10^{-8} $ \\
\hline
$15$&$\text{IH, HO}$ & $25$&$ 257.5$ & $1.81927\times10^{-6}$&$7.52892\times10^{-9} $ \\
\hline
$16$&$\text{IH, HO}$ & $25$&$ 302$ & $3.04466\times10^{-6}$&$1.26001\times10^{-8} $ \\
\hline
\end{tabular}
\end{center}
\caption{Same as in Table X but IH is used.}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Serial No.} & \textbf{$ \delta_{CP} $, hierarchy, octant, $ J_{CP} $ of our calculation }& \textbf{Quadrant Of $ \delta_{CP} $} \\
\hline
$1. $ & $ \delta_{CP} = 1.43 \pi $, IH, HO, $\textit{J}_{CP} = -0.03439461$ & third quadrant\\
\hline
$2. $ & $ \delta_{CP} = 1.727 \pi $, IH, LO, $\textit{J}_{CP} = -0.026588173$ & fourth quadrant \\
\hline
$3. $ & $ \delta_{CP} = 0.5277 \pi $, IH, HO, $\textit{J}_{CP} = 0.035095635$ & second quadrant \\
\hline
$4. $ & $ \delta_{CP} = 0.488 \pi $, IH, LO, $\textit{J}_{CP} = 0.035208214$ & first quadrant\\
\hline
$5. $ & $ \delta_{CP} = 0.383 \pi $, IH, HO, $\textit{J}_{CP} = 0.032889754$ & first quadrant\\
\hline
\end{tabular}
\end{center}
\caption{Preferred cases of $\delta_{CP}$, octant, hierarchy and $ \textit{J}_{CP} $ allowed by the present range of $\eta_{B}$, $5.7\times 10^{-10} < \eta_{B} < 6.7 \times 10^{-10} $.}
\end{table}
\section*{References}
\section{Introduction}
During the last years there was a wide discussion about the use of
different scintillator and fiber materials in high luminosity
environments for tracking and calorimetric applications for new
collider and fixed target experiments. Extensive radiation hardness
studies for these materials have been carried out to investigate
various quantities
relevant for the problem. An introduction to the field is given in
\cite{lit1} whereas selected results of several groups are quoted in
\cite{lit2}-\cite{lit15}.
Due to the large number of relevant parameters, the published results
do not give a clear and conclusive picture of the subject. This
is true also for our own investigations which were connected with the
development of a scintillating fiber detector as backup solution for
the HERA-B inner tracker \cite{lit16}. Irradiation of scintillating and
clear
fibers using a gamma source and charged particles (p,e) at low and
high dose rates gave rather different results \cite{lit11},
\cite{lit12}-\cite{lit15}.
First of all, the material under study is important, both concerning the
basic polymer and the different dyes used to shift the scintillation
light into the visible region of the spectrum. Decreases of light
emission and transparency are observed under irradiation. It is well
known that polystyrene and PVT are much more radiation hard than PMMA,
and that green-shifting dyes give better results than blue ones. It
seems, however, that the kind of production (e.g.\ the polymerization
time) can be important, as well as temperature and atmospheric effects
during transport, storage and machining \cite{lit3,lit10}.
An increase of the total dose will normally increase the damage of the
material under study. However, on an absolute scale the results differ
even for the same substances. Whereas damage is observed in some cases
already at some 10 krad \cite{lit9,lit10}, other studies show
considerable effects only above 1 Mrad \cite{lit11,lit12}. In situ
observations seem to indicate a change of the damage mechanism above
this dose, at least for specific materials \cite{lit14}.
Very often a recovery of the light emission and transparency has been
reported after the irradiation. Recovery times of some days
\cite{lit4,lit6,lit14}, several weeks \cite{lit12} or even months
\cite{lit9} have been observed. However, for the same material both
total \cite{lit12} and no recovery \cite{lit11} have been reported.
The presence of oxygen during and after the irradiation seems to be of
particular importance for the damage and recovery of materials.
Parameters related to this question are the dose rate, the surrounding
atmosphere and the coverage (glue). Published results are again
inconsistent. In \cite{lit2} it was measured that low dose rates in air
produce larger damage than high ones. This was interpreted as being due
to a better possibility for oxygen diffusion in the material during
longer irradiation times. The same was observed in \cite{lit15} for
doses below 1 Mrad; at about that dose, however, the same damage was
found. Nearly no dose-rate effect was seen in \cite{lit13}.
If oxygen diffusion is important, one would expect an influence if
air is kept off the material surfaces. No differences were
observed between irradiation in air and in nitrogen in
\cite{lit4,lit13}.
In contrast, larger effects were stated for irradiation in argon \cite{lit10}
and nitrogen \cite{lit11}. Non-glued fibers were found to be
less damaged than glued ones \cite{lit11}, opposite to the
measurements reported in \cite{lit12,lit13}.
All the above results underline that one is far from a consistent
understanding of the radiation damage mechanisms in scintillator and fiber
materials. In addition to the large parameter space, the measurements are
difficult to perform. General problems are the dose determination for
small samples, low light signals and mechanical damage of fragile
objects during repeated measurements.
In order to estimate the radiation hardness of a detector of
preselected material, one should therefore irradiate a finite-size
prototype close to the later experimental conditions.
For the SCSF-78M fibers selected to build a high-rate tracking
detector \cite{lit11,lit16} and a fast active target \cite{lit17}, we made
such a test using 146 MeV/c pion beams at the Paul Scherrer
Institute (PSI) in Switzerland.
\section{Detector setup and readout}
Most of the following activities were done in close cooperation with
the FAST collaboration \cite{lit17}. In a common test run at PSI, radiation
hardness studies were performed, as well as first proof-of-principle tests
of ideas for the precise measurement of the muon lifetime using a fast active
scintillating fiber target.
Two detectors - $\mu FAST$ I and II - have been used for
radiation hardness tests. Both were produced at DESY
Zeuthen with the winding technology \cite{lit16}. A drum grooved with a pitch
of 510 $\mu$m was used to produce detector planes consisting of 8
densely packed fiber layers of $16 \times 8$ KURARAY SCSF-78M double-clad fibers of
500 $\rm \mu$m diameter. White acrylic paint was used
to glue all fibers of a plane together over a length of about 10 cm. One plane has
an effective width of 66 mm and a depth of 3.6 mm. Twelve of these planes
were glued on top of each other to build the final detector.
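The quoted plane dimensions can be roughly cross-checked from the winding pitch. A minimal sketch, assuming ideal hexagonal close packing of the 500 $\mu$m fibers at the 510 $\mu$m pitch and reading ``$16 \times 8$'' as 128 fibers per layer (both assumptions for illustration; the real packing after gluing may differ slightly):

```python
import math

# Rough geometry check of one detector plane, assuming ideal hexagonal
# close packing of the fibers at the 510 um winding pitch.
pitch = 0.510          # mm, groove pitch of the winding drum
diameter = 0.500       # mm, fiber diameter
n_per_layer = 16 * 8   # fibers per layer (assumed: 16 pixels x 8 fibers)
n_layers = 8

width = (n_per_layer - 1) * pitch + diameter                   # mm
depth = (n_layers - 1) * pitch * math.sqrt(3) / 2 + diameter   # mm
print(f"width ~ {width:.1f} mm, depth ~ {depth:.1f} mm")
```

This yields about 65.3 mm and 3.6 mm, close to the quoted 66 mm effective width and 3.6 mm depth.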
A photo of $\mu FAST$ I is displayed in fig. 1. The bottom side of the
detector was polished and mirrored
by an aluminium foil. The other end consists of \linebreak30 cm of loose
fibers. 8 $\times$ 8
fibers of a plane were combined to form a macroscopic pixel. Using a
plastic connector mask, the 16 pixels of a plane were fed to a 16
channel multianode photomultiplier Hamamatsu R5900-M16. The 192 pixels
of the detector are read out in parallel using 12 such devices.
For $\mu FAST$ II both ends of the glued fibers were cut and
polished. This allows one to look through the fibers and inspect
their quality visually. A corresponding photo is shown in fig. 2. The readout
of the detector pixels, of the same size as in $\mu FAST$ I, is done here
by clear optical light guides. Using \linebreak500 $\mu$m double-clad
KURARAY clear fibers, exactly the same planes were produced as for the
scintillating
fibers, with the same winding drum. Polished at one end of the roughly 5 cm
long glued part, the 30 cm long loose ends are again grouped into
macro-pixels of $8 \times 8$ fibers and fed into masks fitting the same
multianode photomultipliers as mentioned above. Only
four planes of this type were available; they were glued together into a
compact block. Using pins and holes, this block could be connected with
moderate precision to all parts of the $\mu FAST$ II scintillator block.
During the test run the FAST DAQ system was used to study light
signals from particles hitting $\mu FAST$ I or II. The signals from the
192 photomultiplier channels were split passively. After a beam
trigger, the arrival times of all signals appearing in an interval of
$\pm$ 20 $\mu sec$ were registered by two VME TDCs CAEN V767. In addition,
42 PM channels were connected to the channels of
six VME ADCs LECROY 1182 to measure signal amplitudes within a gate
of 30 nsec after the beam trigger. The detector planes were arranged
perpendicular to the incoming beam, which defines the z-direction. The
pixels in a plane have increasing row numbers in the
y-direction. ADC channels were connected to all planes in row numbers 8
and 9 and to planes 4-12 in row numbers 7 and 10 (see also fig. 2).
At DESY Zeuthen, $\mu FAST$ II was studied for three months after
the test run, using a cosmic-ray trigger to activate a VME ADC CAEN V265
for the registration of the PM signal of particular pixels of the detector.
\section{Measurements at PSI}
The basic idea of the irradiation test was to stop an intense pion
beam in the center of one of the $\mu FAST$ detectors. Due to the strong
increase of the pion ionization loss near its stopping point, it
should be possible to irradiate different detector planes with
different but correlated and calculable doses.
The concept was tested in a low-intensity beam of positively charged
particles in the $\pi M1$ area at PSI, which is expected to contain mainly
pions. The beam momentum was selected to be 146 MeV/c. The setup is
shown schematically in fig. 3a. Beam particles first cross an 8.5 cm
thick lucite degrader (D), then a beam trigger system of three plastic
scintillators (T1-T3) of 0.5 cm thickness each, and finally hit the
planes of the detectors $\mu FAST$ I or II perpendicularly. Using $\mu
FAST$ I, half a million triggers were recorded for radiation
hardness studies at a data rate of about 100 Hz.
As it turned out later, about 80 $\%$ of all triggers were due to minimum ionizing
particles (positrons) crossing all detector planes. These triggers were used to
calibrate the 12 photomultipliers to give the same signals for every
plane. The corresponding energy loss spectrum is shown in fig. 4a.
Pions entering the detector were already non-relativistic and gave
rise to larger energy losses (fig. 4b), increasing towards the stopping
point (fig. 4c). Comparing the average values of the three
distributions, a ratio of 1.0/2.7/5.6 is found. As required, the pions
stop dominantly in the center of the detector (see fig. 5).
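The non-relativistic character of the beam pions follows directly from the selected momentum. A minimal kinematics sketch at the degrader entrance (PDG charged-pion mass assumed; the pions reaching the detector are slower still, since the degrader removes most of this kinetic energy):

```python
import math

# Kinematics of a 146 MeV/c pion at the degrader entrance
# (PDG mass value assumed for illustration).
p = 146.0       # MeV/c, selected beam momentum
m_pi = 139.57   # MeV/c^2, charged pion mass

E = math.sqrt(p**2 + m_pi**2)  # total energy, MeV
T = E - m_pi                   # kinetic energy, MeV
print(f"T = {T:.1f} MeV, beta*gamma = {p / m_pi:.2f}")
```

With $\beta\gamma \approx 1$, well below the ionization minimum near $\beta\gamma \approx 3$, the pions are indeed far from minimum ionizing, consistent with the larger energy losses in fig. 4b.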
\subsection{Irradiation scheme}
A high-intensity charged pion beam in the $\pi E3$ area
of PSI was used for a passive irradiation of the $\mu FAST$ II
scintillator block, selecting the same momentum of 146 MeV/c as in $\pi
M1$. The corresponding arrangement is sketched in
fig. 3b. The beam crosses a 9.0 cm thick lucite degrader (D). The beam profile
is measured using a transparent wire chamber (C). The beam rate is
monitored by two 0.5 cm thick scintillation counters (T1-T2). Finally,
the particles enter the detector, which was otherwise surrounded by a lead
shield.
Because the scintillation counters could operate only up to moderate
particle rates, the PSI proton accelerator intensity was reduced for a
short time by a factor of 19.2 to measure the particle flux in our beam
configuration. It was assumed that a linear scaling to the nominal
accelerator intensity is possible also for the $\pi E3$ area. With that
procedure we measured a particle flux of $0.9 \times 10^8$
particles cm$^{-2} \cdot$ sec$^{-1}$ in a 2 $\times$ 1 cm$^2$ peak region of the beam
profile, illuminating the six central channels (6 - 11) of the detector over
a height of about 5 cm. Within a total irradiation time of 186000 sec (51 h
40$^{\prime}$), $1.67 \times 10^{13}$ particles/cm$^2$ hit that detector region.
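The integrated fluence follows directly from the measured peak flux and the exposure time; a one-line cross-check of the quoted numbers:

```python
# Cross-check of the integrated fluence: measured peak flux times the
# total irradiation time (values as quoted in the text).
flux = 0.9e8                 # particles cm^-2 s^-1 in the beam peak
t_irr = 51 * 3600 + 40 * 60  # s; 51 h 40' = 186000 s
fluence = flux * t_irr       # particles per cm^2
print(f"{t_irr} s, fluence = {fluence:.2e} /cm^2")
```

This reproduces the quoted $1.67 \times 10^{13}$ particles/cm$^2$.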
To estimate the corresponding radiation dose per detector plane, the
particle content of the beam becomes important. With the available
setup it could not be measured for the conditions in $\pi E3$. In
contrast to the situation in $\pi M1$, where the beam collimators were
nearly closed, they were completely open here, which should reduce the
positron contamination.
In fig. 6a the average energy loss in the detector planes, as measured
with $\mu FAST$ I in $\pi M1$, is shown for pions and positrons.
Fig. 6b shows
the fraction of incoming pions crossing a detector plane or stopping
there. Together with the total particle flux, these numbers allow one to
estimate the integral irradiation dose per detector plane. To do so,
a GEANT-based Monte Carlo was used to simulate the experimental
conditions and fit the pion stopping distribution in fig. 6b. The
corresponding Monte Carlo result for the dose per plane is displayed in
fig. 6c for a beam consisting only of pions or only of positrons, and for the
particle mixture found in the $\pi M1$ measurement. Planes 4 and 5
have correspondingly been irradiated with a dose between 1 and 4 Mrad.
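The order of magnitude of this dose can be checked without the Monte Carlo. A back-of-envelope sketch, assuming a generic minimum-ionizing stopping power of about 2 MeV cm$^2$/g in plastic and rescaling with the measured energy-loss ratios 1.0/2.7/5.6 from section 3 (these inputs are illustrative assumptions, not the values used in the GEANT fit):

```python
# Back-of-envelope dose estimate from the integrated fluence, assuming a
# generic minimum-ionizing stopping power in plastic and the measured
# energy-loss ratios relative to positrons (illustration only).
fluence = 1.67e13            # particles / cm^2, from the flux measurement
dedx_mip = 2.0               # MeV cm^2 / g, assumed MIP stopping power
mev_per_g_per_rad = 6.24e7   # 1 rad = 100 erg/g = 6.24e7 MeV/g

dose_mip = fluence * dedx_mip / mev_per_g_per_rad  # rad
for label, ratio in [("positrons", 1.0),
                     ("crossing pions", 2.7),
                     ("stopping pions", 5.6)]:
    print(f"{label}: {dose_mip * ratio / 1e6:.1f} Mrad")
```

This spans roughly 0.5 to 3 Mrad depending on the particle type, bracketing the quoted 1 - 4 Mrad for the central planes.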
\subsection{Irradiation damage}
After the irradiation was finished, first a visual inspection of the
degrader and the $\mu FAST$ II detector was made.
As visible in fig. 7, the degrader showed clear damage in the region where
the intense beam was crossing. The degrader is made out of two
blocks pressed strongly together with screws. Both blocks come from
the same 5 cm thick piece of lucite. The second one
was machined down to 4 cm a few hours before the irradiation started.
During that procedure the block warmed up considerably. This
circumstance seems to be reflected in an interesting way in the damage
profile. Beam particles hitting the blocks from the left produce in the
first one a brown zone which becomes smaller and weaker towards its
end. At the entrance of the second block, however, a kind of phase transition
seems to happen: much stronger browning is observed. The widening of
the profile is due to the increasing multiple scattering of pions slowing
down. The damage seems to be irreversible; after three months no
change of the transparency has been observed.
In contrast to the strong effect for the degrader, no visible change of
the transparency of the $\mu FAST$ II detector fibers was seen.
The detector has been studied, in addition to $\mu FAST$ I, before and after
the irradiation in the $\pi M1$ particle beam. However, only planes 2-5 were
read out with the optical light guide planes described in section 2. In
table 1 the ratio $R_1$ of the average values of the ADC spectra for these
planes, measured with $\mu FAST$ II and I, is given. A light loss of about
20 $\%$ is observed for the clear light guides due to connector coupling
losses, because no fiber-by-fiber coupling is possible.
The ratio $R_2$ of the average values of the ADC spectra measured with $\mu FAST$
II after and before the irradiation is also shown in the table. It
comes out to be close to unity, demonstrating that no clear radiation
damage is apparent in the considered detector region. One would,
however, like to check whether the shape of the corresponding spectra
remains unchanged as well. This is shown in fig. 8, where the ratio $R_3$ of
the two sum spectra is plotted for the region of reasonable
statistics of the data. All values are consistent with one within their
errors, giving an average $<R_3>$ which limits any irradiation-induced
decrease of the observed light signal to less than 10 $\%$.
\section{Laboratory measurements with cosmic rays}
To check the long-term behavior of the irradiated $\mu FAST$ II detector
for all of its twelve planes, the detector was studied using a
cosmic-ray particle trigger in the DESY Zeuthen laboratory for
three months. The detector planes
were placed perpendicular to the zenith axis in a black box. On top of the
first plane, a scintillation counter allowed measurements at different
positions along the fibers. The complete trigger was formed by a
threefold coincidence of this counter and signals from two planes
directly before and/or after the measured one. For every plane, pixel 9
was selected for trigger and measurement. To study each plane of
$\mu FAST$ II, the light guide block was moved step by step across the
planes. Repeated measurements of the same pixel were therefore
often done with different photomultipliers. For this reason the
PMs were calibrated using a constant LED light signal.
In fig. 9 the results of all measurements are plotted versus the
plane number and a rough order in time. In contrast to the impression
from the first measurement, no dependence on the
plane number is observed. As can be seen from
fig. 10, all data points follow a Gaussian distribution.
Due to lack of time, only a single measurement was
done before the irradiation, in plane 11. It fits the Gaussian behavior
of all the data.
In fig. 11 the average values of the measurements per plane are shown
together with the total average and its one and three sigma regions. The
single data point from before the irradiation agrees with that average within
1.4 $\sigma$.
\section{Summary}
A low-energy, high-rate beam of positive pions has been stopped in the
center of a fiber detector made out of 12 planes with 16 pixels each.
The fibers are glued together with white acrylic paint. Every pixel
consists of $8 \times 8$ KURARAY SCSF-78M fibers of 500 $\mu$m diameter.
Within about 52 hours, a total dose of 1 - 4 Mrad was deposited in
the detector center at a rate of 20 - 80 krad/h. The large error of
this number is due to an unknown positron contribution to the
beam.
Unexpectedly, all measurements are consistent with no radiation
damage in any fiber detector plane. However, a large and
irreversible damage (browning) reflecting the profile of the crossing
beam is observed for the lucite degrader placed in front of the detector.
Within the measurement errors we cannot exclude a
decrease of up to 10 $\%$ of the light signals of the detector fibers after
the irradiation. If present, no recovery has been observed during three
months.
\section*{Acknowledgement}
First of all we would like to thank the FAST collaboration members
from BNL, Bologna, CERN, PSI and ETH Zurich, who strongly supported the
radiation hardness measurements. Secondly, the whole program would not
have been possible without tremendous help provided by the Paul
Scherrer Institute. In particular we thank C. Petitjean, K. Deiters,
and J. Egger for support in the installation of the necessary hardware
and the accelerator crew for providing the required intensities for
monitoring and exposure.
\section{Introduction}
Three-dimensional gravity has proven to be a fruitful testing ground for our ideas about holography.
Perhaps the most interesting question is whether pure general relativity---a theory with only metric degrees of freedom---with a negative cosmological constant exists as a quantum theory in its own right. If this were the case, then one should be able to find its holographic dual for a given value of $G_N/R_{\rm AdS}$, the Newton constant in AdS units. This appears to be an extremely difficult problem (see e.g. \cite{Witten:2007kt, Maloney:2007ud, Castro:2011zq}). However, general relativity exists as a sub-sector of any theory of gravity in three dimensions. From the boundary point of view, it captures the dynamics of the Virasoro sector of any two-dimensional CFT with central charge $c=3R_{\rm AdS}/2G_N$. This semi-microscopic interpretation is unavailable in higher-dimensional AdS/CFT, where the stress tensor does not generate a closed symmetry algebra.
This perspective lends a universality to AdS$_3$/CFT$_2$ that underlies, for example, the matching of the asymptotic symmetry group of anti-de Sitter space to the Virasoro algebra \cite{Brown:1986nw} and the matching of BTZ black hole entropy to the Cardy growth of states \cite{Strominger:1997eq}. These are features of any theory of three-dimensional gravity, and of any dual CFT. Recent work has revealed an even richer set of properties of two-dimensional CFTs that admit a large-$c$ limit and are dual to weakly coupled bulk theories of gravity. These relate to aspects of such theories' spectra and thermodynamics \cite{Hartman:2014oaa, Keller:2014xba, Belin:2014fna, Haehl:2014yla, Benjamin:2015hsa}, entanglement \cite{rt, Headrick:2010zt, Hartman:2013mia, Faulkner:2013yia, Barrella:2013wja, Chen:2013kpa, Castro:2014tta, Asplund:2014coa, Caputa:2014eta}, Virasoro blocks \cite{Fitzpatrick:2014vua, deBoer:2014sna, Hijano:2015rla, Fitzpatrick:2015zha, Perlmutter:2015iya}, modular geometry \cite{Jackson:2014nla}, and chaotic response to perturbations \cite{Roberts:2014ifa}, among others.
Despite this fascinating progress, much remains to be understood about basic consequences of Virasoro symmetry. To this end, in this paper we will focus on the computation of the partition function of three-dimensional gravity in a universe whose boundary is a Riemann surface $\Sigma_g$ of genus $g$. Schematically, this should be given by a bulk path integral over geometries ${\cal M}_g$ which asymptote to $\Sigma_g$:
\es{in2}{Z_{\rm grav}(\Om_g) = \int_{\partial {\cal M}_g = \Sigma_g} {\cal D}g~ e^{-S[g]}~.}
This partition function is a function of the conformal structure moduli of the Riemann surface $\Sigma_g$, denoted $\Om_g$. These partition functions contain vital information about the theory: for instance, one can recover the correlation functions of a given CFT from its higher-genus partition functions by pinching handles \cite{Friedan:1986ua}. Thus, by tuning the moduli $\Om_g$, one could in principle recover the correlation functions of the boundary CFT.
Equation \eqref{in2} is in general an extremely difficult object to compute. Moreover, it is not ``universal'' in the sense described earlier.
In particular, in \eqref{in2} we have written the bulk path integral only over metric degrees of freedom; in more complicated theories of gravity more degrees of freedom should be included.
The partition function $Z_{\rm grav}(\Om_g)$ written above is that of the CFT dual to pure gravity at a given value of Newton's constant, if it exists.
In this paper we will not be interested in the full partition function \eqref{in2}, but rather in an object which is both easier to compute and universal: we will study the contribution to $Z_{\rm grav}(\Om_g)$ from a single saddle-point geometry ${\cal M}_g$, including perturbative quantum corrections. This restricted partition function maps to the contribution of the Virasoro sector to the CFT partition function on $\Sigma_g$.
This is easiest to see at genus one. In the semiclassical regime $G_N \ll R_{\rm AdS}$, the path integral \eqref{in2} can be recast as a sum over saddle points of the Einstein action with solid torus topology, along with perturbative corrections. The simplest such saddle point is thermal AdS$_3$, the Euclidean geometry found by taking empty AdS$_3$ and periodically identifying in Euclidean time, which contributes to the partition function as
\es{in3}{Z_{\rm TAdS}(\tau,\overline{\tau}) = |q|^{-c/12} \prod_{n=2}^{\infty}{1\over |1-q^n|^2}~,\quad q:=e^{2\pi i \tau}~.}
In this expression, we have included not only the classical action of thermal AdS (the factor of $|q|^{-c/12}$) but also all of the perturbative quantum corrections which come from loops of gravitons in thermal AdS.
With certain reasonable assumptions, all other saddle points are simply $SL(2,\mathbb{Z})$ modular transformations of thermal AdS, and the sum over geometries is a sum over $SL(2,\mathbb{Z})$ transformations of \eqref{in3}. Performing this sum does not yield a result consistent with an interpretation as a trace over the Hilbert space of a CFT \cite{Maloney:2007ud, Keller:2014xba}.\footnote{However, in the quantum regime $G_N \sim R_{\rm AdS}$, it was argued in \cite{Castro:2011zq} that at specific minimal-model values of $c$, not only can the sum be performed, but it agrees with the minimal-model partition functions.} Nevertheless, \eqref{in3} does have a natural interpretation as the Virasoro vacuum character of {\it any} CFT with central charge $c>1$ and an $SL(2,\mathbb{R}) \times SL(2,\mathbb{R})$-invariant ground state. We note that this object is not modular invariant, which reflects the fact that in \eqref{in3} we have focused on only one saddle out of the $SL(2,\mathbb{Z})$ family. In the language of Riemann surfaces, \eqref{in3} is a function not of the conformal structure of the boundary torus, but rather of the Teichm\"uller parameter $\tau$.
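The product in \eqref{in3} has a simple counting interpretation: the coefficient of $q^N$ in $\prod_{n\geq 2}(1-q^n)^{-1}$ is the number of Virasoro vacuum descendants (boundary-graviton states) at level $N$, i.e. the number of partitions of $N$ into parts of size at least two. A minimal sketch (illustrative only, not part of the original computation) generating these coefficients:

```python
# Count the Virasoro vacuum descendants level by level: coefficients of
# q^N in prod_{n>=2} 1/(1-q^n). The n=1 factor is absent because L_{-1}
# annihilates the SL(2)-invariant vacuum.
N_MAX = 10
coeff = [1] + [0] * N_MAX
for n in range(2, N_MAX + 1):       # multiply in the factor 1/(1-q^n)
    for N in range(n, N_MAX + 1):
        coeff[N] += coeff[N - n]
print(coeff)  # [1, 0, 1, 1, 2, 2, 4, 4, 7, 8, 12]
```

The single level-two state is $L_{-2}|0\rangle$, the lowest-weight boundary graviton; the vanishing level-one entry reflects the absence of an $n=1$ mode in the product.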
In the present paper, our goal is to compute the analog of \eqref{in3} at higher genus. Any theory of AdS$_3$ gravity contains solutions which are handlebodies---solid genus-$g$ geometries---which have the Riemann surface $\Sigma_g$ as their conformal boundaries. These solutions are quotients of Euclidean AdS$_3$, much like thermal AdS. The contribution of a given handlebody to the path integral---including graviton loop corrections---has a universal CFT interpretation for any value of $c$, as the contribution of the states in the vacuum representation to the CFT partition function on $\Sigma_g$. We call this quantity $Z_{\rm vac}$.\footnote{This was called $Z_{fake}$ in \cite{Yin:2007gv}, and the correspondence with the bulk saddle point partition function was written as $Z_{fake} = Z_{saddle}$.} $Z_{\rm vac}$ is a function not just of the conformal structure of $\Sigma_g$, but rather of the Teichm\"uller parameters that parametrize the universal cover of the moduli space. In other words, to compute $Z_{\rm vac}$ we must specify a marking of the Riemann surface $\Sigma_g$ that fixes a choice of contractible and non-contractible cycles of the handlebody (A- and B-cycles, respectively). Thus $Z_{\rm vac}$ is not invariant under the modular group; a modular-invariant partition function could be obtained only, for example, by summing over bulk saddles that describe the different ways a boundary $\Sigma_g$ can be ``filled in" by bulk geometries. One of our goals in this paper will be to give a direct CFT computation of
$Z_{\rm vac}$, which can then be interpreted gravitationally.
There has been a recent resurgence of interest in higher-genus partition functions of two-dimensional CFTs. This interest is partly motivated by the study of entanglement entropies (EEs). The computation of EEs via the replica trick involves evaluating entanglement R\'enyi entropies (EREs), which in turn are equal to certain higher-genus partition functions. A particularly interesting line of research uses calculations of EREs to test the Ryu-Takayanagi (RT) classical EE formula and understand the quantum corrections to it \cite{rt, Hartman:2013mia, Faulkner:2013yia, Barrella:2013wja, Faulkner:2013ana}.
The partition functions relevant for EREs have been computed in holographic CFTs in two ways: in gravity, by explicitly finding the relevant saddles and evaluating their classical actions and one-loop determinants, and in field theory, by computing twist-operator correlators in certain cyclic orbifold CFTs and then expanding the results in powers of $1/c$. In every case where the computation has been carried out on both sides, agreement has been found. This is a check of our basic understanding of AdS$_3$/CFT$_2$ duality. Further, in many cases the computation using one technique gives results that are not practically computable using the other, thereby giving new information about both three-dimensional gravity and large-$c$ CFTs. For example, by expanding the results of the twist-operator computation to higher orders in $1/c$, one determines higher-loop quantum corrections on the gravity side that would be exceedingly difficult to obtain by direct computation. These results hint at a surprising novel structure, which we describe below.
\subsection{Summary of results}
In this paper we will directly compute $Z_{\rm vac}$ at genus two, for arbitrary values of $c>1$, using CFT techniques. We will use a sewing construction, represented schematically in figure \ref{fig-sewing-vac}. We start with a Riemann surface $\Sigma$ that has been constructed by Schottky uniformization and replace the handles of $\Sigma$ with a sum over states propagating along these handles. The result is a weighted sum over four-point functions of local operators on the sphere. If we were to include all possible operators in this sum, we would obtain the full, modular-invariant partition function, as a function of the pinching parameters $p_1$ and $p_2$, that describe the widths of the handles, and a third modulus $x$, which is the cross-ratio of the four-point functions on the sphere. The universal contribution $Z_{\rm vac}$ is computed by summing only over operators in the Virasoro vacuum block. The four-point functions of these operators are determined completely by conformal Ward identities. Thus $Z_{\rm vac}$ is in principle completely determined in terms of the central charge. We will compute the answer perturbatively in $p_1$ and $p_2$ but exactly in $x$. We will mostly assume that the CFT has no extended symmetry algebra beyond two copies of Virasoro, although as we will see in section \ref{v-i-i} it is straightforward to extend our results to higher-spin theories.
\begin{figure}
\centering
\includegraphics[width=0.98\textwidth]{figs/sewing-vac}
\caption{\label{fig-sewing-vac}A depiction of the sewing construction as applied to $Z_{\rm vac}$, the contribution of the Virasoro vacuum representation to a genus-two CFT partition function. The coordinates $p_1$ and $p_2$ represent the widths of the two handles in a Schottky uniformization of the Riemann surface. The handles are replaced by a sum over pairwise operator insertions, where we include all Virasoro descendants of the identity, ${\cal O} \in {\cal H}_{\rm vac}$. This recasts $Z_{\rm vac}$ as a sum of sphere four-point functions, weighted by powers of $p_1$ and $p_2$. The operators $\mathcal{O}_i$ and $\mathcal{O}_j$ have holomorphic conformal weights $h_i$ and $h_j$, respectively. A detailed description of the sewing construction is presented in section \ref{iv} (see equations \ref{Z2} and \ref{Ch1h2}).}
\end{figure}
Conformal bootstrap methods play an important role in our computation of $Z_{\rm vac}$, since our computation requires us to sum over all four-point functions of Virasoro descendants on the sphere. These correlation functions---and indeed the correlation functions of any family of chiral operators---can be efficiently computed using a holomorphic version of the conformal bootstrap. The essential idea is that these correlation functions can be regarded as meromorphic functions on ${\cal M}_{0,4}$, the moduli space of four marked points on a sphere, with poles only when the operators coincide. A meromorphic function on a compact space is determined entirely by its polar behaviour. For chiral operators of finite conformal dimension, this polar part is determined by a finite number of three-point function coefficients. The result is an exact expression for the correlation function in terms of a {\it finite} number of three-point function coefficients. This should be contrasted with the usual approach, where a four-point function involves a sum over an infinite number of intermediate states, and so is written in terms of an infinite sum of OPE coefficients. Similar ideas have been advanced in \cite{Bouwknegt:1988sv, Bowcock:1990ku, Keller:2013qqa}. When the chiral operators are Virasoro descendants of the identity, we show using free bosons that all connected $n$-point functions have polynomial dependence on $c$. This implies that, when expressed in terms of $c$, bulk scattering of graviton states in AdS$_3$ is purely classical, in analogy to the one-loop exactness of the torus partition function.
Our result for $Z_{\rm vac}$ will hold for a general Riemann surface $\Sigma$, but for certain values of the moduli---those corresponding to the so-called replica surface---our results can be used to compute genus-two EREs. We mainly consider the case of two disjoint intervals in the vacuum of a CFT; the replica manifold has genus two when $n=3$, and is denoted ${\mathscr R}_{2,3}$. Our results extend previous ones in \cite{Chen:2013kpa, Chen:2013dxa}, which were obtained from the twist-field four-point function. Those works employ a short-interval expansion in which the conformally invariant cross-ratio, which we call $y$, is taken to be small. The sewing technique is well-suited to computation to higher orders in $y$; \cite{Chen:2013dxa} worked through $O(y^8)$, and we extend this to $O(y^{12})$.
In fact, the authors of \cite{Chen:2013dxa} found a quite remarkable result: their $O(y^8)$ term in $\log Z_{\rm vac}$ exhibits a two-loop truncation in the expansion in $1/c$ at large $c$.
To understand why that result is interesting, let us consider the bulk AdS$_3$ interpretation of our results. Our computation of $Z_{\rm vac}$ is not limited to large $c$; it is a truly quantum result for the saddle-point partition function for a genus-two handlebody of three-dimensional pure gravity, applicable even when $G_N/R_{\rm AdS}$ is of order one. Expanding our result at large $c$ is equivalent to making the semiclassical approximation in the bulk. More precisely, the expansion of the ``vacuum free energy,''%
\begin{equation}\label{in5}
F_{\rm vac} := -\log Z_{\rm vac}\,,
\end{equation}
at small Newton constant (large $c$) is the loop expansion of three-dimensional AdS gravity:
\es{in6}{F_{\rm vac} = \sum_{\ell=0}^{\infty}c^{1-\ell} F_{\rm vac;\,\ell}}
where $F_{\rm vac;\,\ell}$ denotes an $\ell$-loop contribution. In the bulk, no computations have been done beyond one-loop order. At one loop, there is a closed-form expression for the graviton handlebody determinant \cite{Yin:2007gv, Giombi:2008vd}. Our result for $F_{\rm vac;1}$ is a computation of this determinant in a new regime of moduli space, not described by previous computations \cite{Yin:2007gv, Barrella:2013wja}.
What about at higher loops? At genus one, the expansion \eqref{in3} truncates at one-loop order: higher-loop contributions only renormalize the value of $c$ \cite{Maloney:2007ud}. It is natural to ask whether the higher-genus partition function obeys an analogous truncation.
Indeed, the results of \cite{Chen:2013dxa} imply
that $F_{\ell>2}({\mathscr R}_{2,3})$ vanishes through $O(y^8)$, perhaps suggesting that the partition function at genus two truncates at two loops. One motivation for this paper was to investigate whether this truncation really occurs for the full partition function $Z_{\rm vac}$.\footnote{
On general grounds, such a truncation might seem to conflict with the pole structure of CFT correlation functions, regarded as analytic functions of $c$. Let us make the argument at genus two for concreteness. In the sewing construction, a genus-two partition function is written as a sum over four-point functions. The statement of truncation becomes the statement that at each order in the sewing expansion, the total contribution from all four-point functions truncates at order $1/c$. As argued by Zamolodchikov \cite{zamo}, the conformal block decomposition of a given four-point function contains poles at minimal model values of $c$ where the exchanged operators become null. Unless these poles cancel against the poles in the other four-point functions contributing at a given order in the sewing expansion, the partition function will not truncate in a $1/c$ expansion. This sort of cancellation at every order in the sewing expansion seems highly unlikely. Indeed, our computations bear out this conclusion.}
Our conclusion is that the truncation does not occur, and that the cancellation observed in \cite{Chen:2013dxa} is an artifact of the small-$y$ expansion.
Indeed, we will show that on the replica manifold ${\mathscr R}_{2,3}$ there are nonzero contributions to the free energy at all orders in the $1/c$ expansion. These first appear at $O(y^{12})$ in the short-interval expansion, explaining why these corrections were not found in \cite{Chen:2013dxa}.
More generally, we will show that the genus-two partition function $Z_{\rm vac}$ of pure three-dimensional gravity does not truncate at any order in $1/c$. The same is true of pure higher spin gravity. Explicit contributions to the all-loop terms $F_{\rm vac;\;\ell}$ are given in section\, \ref{v-i}. To our knowledge, these are the first all-loop results beyond genus one for a Riemann surface with three independent moduli. We show that in the regime of small $p_1$ and $p_2$, the only point in the moduli space at which the loop expansion \eqref{in6} truncates is the separating degeneration point, at which $\Sigma$ degenerates into the union of two tori.
This paper is organized as follows. In section \ref{ii} we recall the relationship between R\'enyi entropies and higher-genus partition functions, and review the sewing construction of higher genus partition functions as a weighted sum over sphere-correlation functions. In section \ref{iii} we describe techniques to compute these correlation functions, including an analytic version of the conformal bootstrap. In section \ref{iv} we apply these techniques to compute $Z_{\rm vac}$, the contribution to the partition function from the Virasoro descendants, at genus two. We discuss the large central charge limit of this result in section \ref{v}, which allows us to understand the nature of quantum corrections to the higher-genus partition function of three-dimensional gravity, as well as applications to R\'enyi entropies, before concluding in section \ref{vi}. Appendices contain details relevant to the sewing construction.
\section{Review}\label{ii}
In this section we will review some relevant background material, and explain the methodology and philosophy behind our computations. In subsection \ref{Renyireview}, we briefly review previous work on R\'enyi entropies in the vacuum of 2D CFTs. Using these R\'enyi entropies as a guide, we explain how pure 3D quantum gravity naturally computes the universal contribution of the Virasoro identity block to CFT partition functions on generic Riemann surfaces. Then, in subsection \ref{sewing}, we will explain the sewing construction, which we will apply in section \ref{iv} to the computation of higher-genus partition functions.
\subsection{R\'enyi entropies and higher-genus partition functions}\label{Renyireview}
Two-dimensional CFTs provide perhaps the simplest arena in which to investigate entanglement entropies (EEs) in field theories. In this subsection, we will briefly review some calculations of these quantities, with particular attention to their dependence on the central charge $c$ of the theory.
\subsubsection{General CFTs}
The simplest quantity one can consider in this context is the vacuum EE of a single interval $[u,v]$ on the line. The corresponding R\'enyi entropy is given in terms of the partition function on the surface\footnote{In general, we will denote the plane branched $n$ times over a set of $N$ intervals by ${\mathscr R}_{N,n}$; this surface has genus $(N-1)(n-1)$.} ${\mathscr R}_{1,n}$, which is the plane branched $n$ times over the interval \cite{Holzhey:1994we}:\footnote{The partition function on a genus-zero surface is defined, for a given theory, up to a multiplicative constant independent of the metric. We are choosing that constant so that $Z(\mathbb{C})=1$; otherwise the argument of the logarithm in \eqref{SZrelation} would be $Z({\mathscr R}_{1,n})/Z(\mathbb{C})^n$.}
\begin{equation}\label{SZrelation}
S^{(n)}([u,v]) = -\frac1{n-1}\ln Z({\mathscr R}_{1,n})\,.
\end{equation}
This partition function is in turn related to the two-point function on the plane of twist operators in the orbifold theory ${\cal C}^n/\mathbb{Z}_n$ (where ${\cal C}$ is the original CFT) \cite{cardy0}:
\begin{equation}
Z({\mathscr R}_{1,n}) = \ev{\sigma(u)\tilde\sigma(v)}_{{\cal C}^n/\mathbb{Z}_n}\,.
\end{equation}
It will be convenient to work in terms of the free energy, which we define on any surface $X$ as
\begin{equation}
F(X):=-\ln Z(X)\,.
\end{equation}
The free energy $F({\mathscr R}_{1,n})$ is proportional to $c$ and otherwise independent of the theory: the surface ${\mathscr R}_{1,n}$ has genus zero, so the free energy is given entirely by a Liouville action multiplied by $c$; alternatively, in the twist-operator language, the two-point function of the twist operators depends only on their dimension, which is proportional to $c$. The result is \cite{Holzhey:1994we,cardy0}
\begin{equation}
S^{(n)}([u,v]) = \frac c6\left(1+\frac1n\right)\ln\left(\frac{v-u}\epsilon\right),
\end{equation}
where $\epsilon$ is an ultraviolet-cutoff length scale; its presence reflects the divergence in the partition function due to the presence of conical singularities in ${\mathscr R}_{1,n}$. This gives rise to the well-known formula for the EE \cite{Holzhey:1994we,cardy0},
\begin{equation}
S([u,v])=S^{(1)}([u,v]) = \frac c3\ln\left(\frac{v-u}\epsilon\right).
\end{equation}
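As a quick consistency check (an illustrative sketch of ours, not part of the original computation), one can verify symbolically that the R\'enyi entropy above reduces to the EE in the limit $n\to1$:

```python
import sympy as sp

c, n, u, v, eps = sp.symbols('c n u v epsilon', positive=True)

# Single-interval Renyi entropy on the line (the formula above)
S_n = c/6*(1 + 1/n)*sp.log((v - u)/eps)

# The entanglement entropy is the n -> 1 continuation
S_EE = sp.limit(S_n, n, 1)
assert sp.simplify(S_EE - c/3*sp.log((v - u)/eps)) == 0
```

The check confirms the factor $\frac c6(1+\frac1n)\to\frac c3$ as $n\to1$.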
The above result is easily generalized to the case of a single interval on a circle at zero temperature or on the line at finite temperature \cite{cardy0}. In either case, the branched cover surface continues to have genus zero, and therefore the entanglement R\'enyi entropies (EREs) and the EE continue to be proportional to $c$. The simplest cases where higher-genus partition functions appear are a single interval on the circle at finite temperature and two intervals on the line at zero temperature; the corresponding branched-cover surfaces have genera $n$ and $n-1$, respectively. This implies that the ERE will depend on the full operator content of the CFT, not just its central charge \cite{Calabrese:2009ez}. In the rest of this subsection we will focus on the two-interval case, which is the best-studied one.
For two intervals $[u_1,v_1]\cup[u_2,v_2]$, it is convenient to work with the mutual (R\'enyi) information, which is ultraviolet-finite, hence conformally invariant and dependent only on the cross-ratio $y$ of the four endpoints \cite{Calabrese:2009ez}:\footnote{In the literature, this cross-ratio is often denoted $x$; however, we will use $x$ for a different cross-ratio in what follows.}
\begin{equation}
I^{(n)}(y) := S^{(n)}([u_1,v_1])+S^{(n)}([u_2,v_2])-S^{(n)}([u_1,v_1]\cup[u_2,v_2])\,,\quad
y:=\frac{(v_1-u_1)(v_2-u_2)}{(u_2-u_1)(v_2-v_1)}\,.
\end{equation}
We have
\begin{equation}\label{IFrelation}
I^{(n)}(y) = \frac1{1-n}F({\mathscr R}_{2,n}) + \text{subtractions}\,.
\end{equation}
The subtractions, given by the EREs of the individual intervals, soak up the divergences in $F({\mathscr R}_{2,n})$, leaving an unambiguous finite value for $I^{(n)}(y)$. The partition function on ${\mathscr R}_{2,n}$ can be expressed as a four-point function of twist operators in the orbifold theory:
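To make the cross-ratio concrete (a small numerical illustration of ours): for disjoint ordered intervals $u_1<v_1<u_2<v_2$, the cross-ratio $y$ always lies in $(0,1)$, tending to $0$ for widely separated intervals and growing as the intervals approach each other.

```python
import random
from fractions import Fraction

def cross_ratio(u1, v1, u2, v2):
    # y = (v1-u1)(v2-u2) / ((u2-u1)(v2-v1)), as defined in the text
    return Fraction(v1 - u1)*(v2 - u2) / (Fraction(u2 - u1)*(v2 - v1))

random.seed(0)
for _ in range(200):
    u1, v1, u2, v2 = sorted(random.sample(range(10**6), 4))
    assert 0 < cross_ratio(u1, v1, u2, v2) < 1  # always in (0,1)

# widely separated intervals give smaller y than nearby ones
assert cross_ratio(0, 1, 100, 101) < cross_ratio(0, 1, 2, 3)
```

The bound $y<1$ follows from the identity $1-y=\frac{(u_2-v_1)(v_2-u_1)}{(u_2-u_1)(v_2-v_1)}$, which is positive for ordered endpoints.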
\begin{equation}\label{twist4point}
Z({\mathscr R}_{2,n}) = \ev{\sigma(u_1)\tilde\sigma(v_1)\sigma(u_2)\tilde\sigma(v_2)}_{{\cal C}^n/\mathbb{Z}_n}\,.
\end{equation}
The surface ${\mathscr R}_{2,n}$ has genus $n-1$, so the partition function depends on the full operator content of the theory and not just its central charge. However, it contains a universal contribution that only depends on $c$. To define this part, it is useful to first set up some notation regarding the topology of the surface ${\mathscr R}_{2,n}$.
A useful basis of cycles on ${\mathscr R}_{2,n}$ can be described as follows. On each sheet, there is a cycle that separates the two intervals. We will call these A-cycles. The sum of all $n$ of them is trivial, so there are $n-1$ independent ones. There are also cycles which encircle the points $v_1$ and $u_2$, crossing each cut once, which we call B-cycles; again, there are $n-1$ independent ones. (See figure \ref{fig-R2n}.)
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\textwidth]{figs/R2n}
\caption{\label{fig-R2n}The $n$-sheeted replica surface ${\mathscr R}_{2,n}$, which is the branched covering surface of the plane with two intervals and has genus $n-1$. On each sheet, there is a cycle separating the two intervals called the $A$-cycle, and another cycle encircling the two points $v_1$ and $u_2$, called the $B$-cycle. There are $n-1$ independent cycles of each type.}
\end{figure}
The A-cycles intersect the B-cycles but not themselves, and vice versa. (Linear combinations $A_i,B_j$ can be constructed with intersection numbers $A_i\cdot B_j=\delta_{ij}$, but this will not be necessary for our purposes.) It is also useful to visualize the surface ${\mathscr R}_{2,n}$ as two spheres connected by $n$ tubes. This can be related to the branched cover by cutting each sheet along a small ellipse surrounding the interval $[u_1,v_1]$ and another one surrounding the interval $[u_2,v_2]$. Each interval then becomes a sphere with $n$ holes, while each sheet becomes a tube connecting one sphere to the other. Each A-cycle wraps a tube, while each B-cycle runs along one tube and back along another. (See figure \ref{fig-spheres}.)
\begin{figure}[t!]
\centering
\includegraphics[width=0.6\textwidth]{figs/spheres}
\caption{\label{fig-spheres}An alternate depiction of the surface ${\mathscr R}_{2,n}$ in figure \ref{fig-R2n}. ${\mathscr R}_{2,n}$ can be visualized as two spheres connected by $n$ tubes. The two spheres, one for each interval, are made by cutting small holes around each pair of intervals on all $n$ sheets. The tubes connecting the holes on the two spheres represent the sheets. In this picture, the $A$-cycles wrap the $n$ tubes and the $B$-cycles run through two different tubes.}
\end{figure}
${\mathscr R}_{2,n}$ enjoys a $\mathbb{Z}_n$ ``replica symmetry'', which cyclically permutes the sheets, and hence also the tubes.
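The genus of ${\mathscr R}_{N,n}$ quoted in the footnote above, and in particular the statement that ${\mathscr R}_{2,n}$ has genus $n-1$, follow from the Riemann-Hurwitz formula; a minimal numerical check (our own sketch):

```python
def genus_RNn(N, n):
    """Genus of R_{N,n}: n copies of the sphere, branched over the 2N
    interval endpoints, each of ramification order n (Riemann-Hurwitz)."""
    chi = n*2 - 2*N*(n - 1)   # Euler characteristic of the cover
    return (2 - chi)//2

# matches (N-1)(n-1); for two intervals the genus is n-1,
# consistent with the n-1 independent A- and B-cycles
assert all(genus_RNn(N, n) == (N - 1)*(n - 1)
           for N in range(1, 7) for n in range(1, 7))
assert genus_RNn(2, 3) == 2
```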
The universal part of $Z({\mathscr R}_{2,n})$ to which we alluded above is defined as the contribution in which only Virasoro descendants of the vacuum appear on the A-cycles.
In other words, if for any circle $C$ we define $P_{\rm vac}(C)$ as the projection operator onto the conformal family of the vacuum of the Hilbert space of ${\cal C}$ on $C$, then we define the \emph{vacuum partition function} as the path integral with projectors $P_{\rm vac}(A_1)\cdots P_{\rm vac}(A_{n-1})$ inserted:
\begin{equation}
Z_{\rm vac}({\mathscr R}_{2,n}) := \ev{P_{\rm vac}(A_1)\cdots P_{\rm vac}(A_{n-1})}Z({\mathscr R}_{2,n})\,.
\end{equation}
With only vacuum descendants on $A_1,\ldots,A_{n-1}$, the cycle $A_n$ (which is a linear combination of the others) is automatically guaranteed to host only such descendants as well.\footnote{To see this, cut along all $n$ A-cycles, leaving two sphere $n$-point functions (on the left and right spheres of figure \ref{fig-spheres}). For each $n$-point function, $n-1$ of the operators correspond to vacuum descendants. As a result, if the $n$th one is not a vacuum descendant, then the $n$-point function vanishes, and hence does not contribute to the vacuum partition function.} Note that the choice of representative of any given cycle $A_i$ is unimportant; any representative can be mapped to any other by a holomorphic diffeomorphism, which acts on the Hilbert space by the Virasoro group, under which conformal families don't mix. In the orbifold description, the vacuum partition function can be written
\begin{equation}
Z_{\rm vac}({\mathscr R}_{2,n}) = \ev{\sigma(u_1)\tilde\sigma(v_1)P_{\text{orb vac}}(A)\sigma(u_2)\tilde\sigma(v_2)}_{{\cal C}^n/\mathbb{Z}_n}\,,
\end{equation}
where $P_{\text{orb vac}}$ is the projector onto states of ${\cal C}^n/\mathbb{Z}_n$ composed of descendants of the identity of ${\cal C}$,\footnote{This set of states includes more than just Virasoro descendants of the vacuum of ${\cal C}^n/\mathbb{Z}_n$. Rather, it includes all descendants of the vacuum under the larger algebra consisting of ($\mathbb{Z}_n$-symmetric) products of Virasoro generators acting on the different copies of ${\cal C}$.} and $A$ is a circle enclosing $[u_1,v_1]$. Note that, unlike the full partition function, $Z_{\rm vac}({\mathscr R}_{2,n})$ is not a modular invariant quantity, due to the distinguished role of the A-cycles.
As we will see in the next subsection, the vacuum partition function is particularly well-studied in the context of holographic and other large-$c$ CFTs.
\subsubsection{Large-$c$ CFTs}
We are interested in families of CFTs, such as holographic ones, that admit a large-$c$ limit. In such theories, all of these quantities---the free energies, entanglement (R\'enyi) entropies, and mutual (R\'enyi) informations---admit an expansion in $1/c$ starting at order $c$. We thus write, for example,
\begin{equation}
I^{(n)}(y) = \sum_{\ell=0}^\infty c^{1-\ell}I^{(n)}_\ell(y)\,,\qquad
F({\mathscr R}_{2,n}) = \sum_{\ell=0}^\infty c^{1-\ell}F_\ell({\mathscr R}_{2,n})\,.
\end{equation}
In a holographic CFT, the parameter $1/c$ is proportional to the bulk Newton constant,
\begin{equation}
\frac1c = \frac{2G_N}{3R_{\rm AdS}}\,,
\end{equation}
so the expansion in $1/c$ is a loop expansion in the bulk (hence the index $\ell$).
From a CFT perspective, the loop corrections ($\ell\ge1$) are ``cleaner'' than the classical one ($\ell=0$), in the following sense. First, $F_{\ell\ge1}$ is unambiguous, since the scheme dependence of the free energy is due to the Weyl anomaly, which is proportional to the central charge. Second, it is finite even on a singular surface such as ${\mathscr R}_{2,n}$, since the Weyl transformation that smoothes out those conical singularities shifts the free energy by $c$ times a Liouville action. Third, since it is Weyl-invariant, it depends only on the complex structure of ${\mathscr R}_{2,n}$, hence only on the cross-ratio $y$, not the positions of the endpoints themselves. Finally, since the subtractions present in \eqref{IFrelation} are proportional to $c$, we simply have
\begin{equation}\label{IFrelation2}
I^{(n)}_\ell(y) = \frac1{1-n}F_\ell({\mathscr R}_{2,n})\qquad\text{for $\ell\ge1$}\,.
\end{equation}
These properties will all be useful when we study the loop corrections below.
The RT formula makes a strikingly simple prediction for the classical part of the mutual information \cite{rt}:
\begin{equation}\label{RTpredict}
I_0(y) = \begin{cases} 0\,, & y\le1/2 \\ (1/3)\ln(y/(1-y)) & y\ge1/2 \end{cases}\,.
\end{equation}
It is interesting that this formula does not depend on the field content or other specifics of the dual theory. On the other hand, the loop corrections do depend on the field content, although they always include certain ``universal'' terms due to the gravitational sector, as we will explain below.
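As an aside, the RT prediction \eqref{RTpredict} is continuous but not smooth at the transition point $y=1/2$; a quick numerical sketch of ours:

```python
import math

def I0(y):
    """Order-c mutual information per unit c, per the RT prediction."""
    return 0.0 if y <= 0.5 else math.log(y/(1 - y))/3

# continuous at the transition y = 1/2 ...
assert I0(0.5) == 0.0
assert abs(I0(0.5 + 1e-9)) < 1e-8
# ... and monotonically increasing above it
assert I0(0.6) < I0(0.7) < I0(0.9)
```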
Significant effort has gone into testing the prediction \eqref{RTpredict} and computing the loop corrections using the replica trick. Two strategies have been followed to compute the relevant free energies. The first is to find the dominant gravitational saddle point whose conformal boundary is ${\mathscr R}_{2,n}$; the terms in the $1/c$ expansion of $F({\mathscr R}_{2,n})$ are then given by the classical action, the one-loop determinant of the fields about that background, and so on. The second strategy is to compute the four-point function of twist fields \eqref{twist4point} using CFT techniques such as the conformal-block decomposition. The RT prediction for the classical part was successfully confirmed, modulo some assumptions, by both methods, in \cite{Faulkner:2013yia} and \cite{Hartman:2013mia} respectively.
Consider the calculation of $F_0({\mathscr R}_{2,n})$, starting with the gravity method. In \cite{Faulkner:2013yia}, two gravitational saddles were constructed with conformal boundary ${\mathscr R}_{2,n}$. Both are handlebodies; in one, which we will call $H_A$, the A-cycles are contractible, while in the other, $H_B$, the B-cycles are contractible. $H_A$ has a smaller action for $y<1/2$ and $H_B$ for $y>1/2$. These are the only solutions that preserve the replica symmetry of ${\mathscr R}_{2,n}$, and are also believed to be the only type of solution that exists uniformly for all $n$. Their actions are analytic functions of $n$; when continued down to $n=1$, they reproduce precisely the RT prediction \eqref{RTpredict} for the EE. In \cite{Hartman:2013mia}, an analysis of conformal blocks in the ${\cal C}^n/\mathbb{Z}_n$ theory at large $c$---again, imposing the replica symmetry---led to the same result.
An important subtlety regarding these calculations is as follows. For general $n$ and $y$, it is not clear whether the dominant gravitational saddle is always $H_A$ or $H_B$, and therefore whether their actions indeed give the correct free energy and R\'enyi entropy.\footnote{Even if this is not the case, one can argue that these are the relevant saddles to consider for the purposes of analytically continuing the ERE down to $n=1$ to find the EE.} However, for small $y$, the tubes are very thin (as we will see in section \ref{v-ii} when we discuss the period matrix for ${\mathscr R}_{2,n}$), so the dominant saddle must indeed be the handlebody that fills them in, namely $H_A$. This is important for our purposes because the calculations we will describe from here on will always be done in an expansion in $y$, and therefore we can safely ignore this subtlety and assume that $H_A$ is the dominant saddle.
We now turn to the one-loop correction to the free energy, $F_1({\mathscr R}_{2,n})$, which as noted in \eqref{IFrelation2} directly gives the one-loop correction to the mutual R\'enyi information, $I_1^{(n)}(y)$. $F_1({\mathscr R}_{2,n})$ is proportional to the sum of the logs of the fluctuation determinants of all the fields propagating on the relevant gravitational saddle. In any theory of gravity, this includes the metric fluctuations. Their one-loop determinant on the handlebody $H_A$ was computed in an expansion in $y$ to order $y^8$ for all $n$ in \cite{Barrella:2013wja}, and to order $y^{10}$ for $n=1$ in \cite{Beccaria:2014lqa}.
As we will now explain, this contribution to the free energy is simply the ${\cal O}(c^0)$ part of the vacuum free energy $F_{\rm vac}({\mathscr R}_{2,n})=-\ln Z_{\rm vac}({\mathscr R}_{2,n})$, where $Z_{\rm vac}({\mathscr R}_{2,n})$ was defined in the previous subsection. In fact, more generally, consider the partition function obtained from the classical action and loop corrections to all orders of perturbative pure gravity on $H_A$. We will now argue that this quantity is precisely $Z_{\rm vac}({\mathscr R}_{2,n})$. In the genus-one case, this was shown in \cite{Maloney:2007ud}, and we can adopt their argument here. In a Hilbert-space interpretation, we can choose to think of the A-cycles as defining a spatial direction and the B-cycles a (Euclidean) time direction. This is convenient because the states defined on the A-cycles are perturbative pure quantum gravity states on an AdS${}_3$ background, since the A-cycles are contractible and the handlebody is locally Euclidean AdS$_3$. Since the creation operators for metric fluctuations are, from the CFT viewpoint, Virasoro generators, these states are Virasoro descendants of the vacuum. Thus the perturbative pure gravity partition function on $H_A$ is precisely $Z_{\rm vac}({\mathscr R}_{2,n})$. The exact correspondence between the perturbative quantum gravity partition function and the universal identity block contributions to CFT partition functions was articulated and tested in \cite{Yin:2007gv}. We will extend that work in section \ref{v-iii}.
\vskip .1 in
We now return to the computation of R\'enyi entropies. Having established the CFT interpretation of $Z_{\rm vac}({\mathscr R}_{2,n})$, we can see that reproducing the results of \cite{Barrella:2013wja,Beccaria:2014lqa} using the twist-field method requires including only descendants of the vacuum as intermediate states in the conformal-block decomposition of the 4-point function \eqref{twist4point}, since the intermediate states are precisely those living on the A-cycles. More precisely, one should include states of the orbifold theory ${\cal C}^n/\mathbb{Z}_n$ that are made up of descendants of the vacuum of ${\cal C}$; these include more than just the descendants of the vacuum of ${\cal C}^n/\mathbb{Z}_n$. It is easy to see that the term of order $y^h$ in $I^{(n)}(y)$ is given by descendants at level $h$.
These calculations were carried out to order $y^8$ by Chen et al. in \cite{Chen:2013dxa}. Expanding their result in powers of $1/c$, the one-loop (order $c^0$) term matched the bulk metric one-loop determinant computed earlier in \cite{Barrella:2013wja}. Their $c^{1-\ell}$ term started at order $y^{2\ell+2}$, so their results could access $\ell\leq 3$. In other words, they not only reproduced the one-loop determinant, but effectively computed two-loop and three-loop free energies, which would presumably be quite challenging from a direct bulk perturbative calculation. The coefficient at each order in $1/c$ and $y$ is a rational function of $n$. We will not reproduce these rather complicated functions here for general $n$. However, let us note the following pattern in the $n$-dependence observed by \cite{Chen:2013dxa}:
\eq{chenpatt}{F_{\rm vac;\;\ell}({\mathscr R}_{2,n}) = (n-1)(n-2)\cdots (n-\ell)\left(\sum_{m=2\ell+2}^8 \alpha_{m,\ell}(n)y^m\right)+ O(y^9)~.}
The $\alpha_{m,\ell}(n)$ are functions of $n$; some of them have zeroes at positive $n$, but these zeroes do not coincide from term to term, unlike the zeroes at $n=1,\ldots,\ell$ displayed explicitly in \eqref{chenpatt}.
There are some notable features of this formula. First, $F_{\rm vac;\;\ell\ge1}({\mathscr R}_{2,n})$ carries a factor of $n-1$. The fact that it vanishes at $n=1$ can be understood from the fact that the genus-zero free energy is given entirely by a Liouville action multiplied by $c$; it is also necessary, given \eqref{IFrelation2}, for the mutual R\'enyi information to have a smooth limit as $n\to1$. Second, $F_{\rm vac;\;\ell\ge2}({\mathscr R}_{2,n})$ carries an overall factor of $n-2$. The fact that it vanishes at $n=2$ can be understood from the fact that the contribution of the identity family to the genus-one free energy is one-loop exact: aside from a classical (order-$c$) part, it is given by $-\ln\chi_{\rm vac}(y)$, where $\chi_{\rm vac}(y)$ is the character of the identity family, which is independent of $c$.
Perhaps surprisingly, the $y^8$ term of $F_{\rm vac;\;3}({\mathscr R}_{2,n})$ computed by Chen et al. carries an overall factor of $n-3$. (Recall that \cite{Chen:2013dxa} only computed through $O(y^8)$ in the $y$-expansion.) If the pattern \eqref{chenpatt} were to hold to {\it all} orders in $y$, this would imply a truncation in the loop expansion around handlebodies asymptotic to ${\mathscr R}_{2,n}$ with appropriate cycles contractible. On this basis, Chen et al.\ were led to suggest that the genus-$g$ free energy might be $g$-loop exact for all $g$, at least for the replica manifolds ${\mathscr R}_{N,n}$. One might even wonder whether this could be true for all genus-$g$ manifolds. One of the main purposes of this paper is to test this intriguing idea. To do this, we will calculate $Z_{\rm vac}$ on generic genus-two Riemann surfaces, $\Sigma$, using a different technique that we describe now; this complementary approach will provide a gateway to applications to R\'enyi entropy and 3D quantum gravity.
\subsection{Vacuum amplitudes from sewing}\label{sewing}
In section \ref{iv}, we will compute $Z_{\rm vac}$ via the sewing construction. We heuristically explain this method here with the help of figure \ref{fig-sewing}; the method applies to the computation of the full partition function $Z$ of ${\cal C}$, but can be specialized to the computation of $Z_{\rm vac}$. The basic idea is to replace each handle of $\Sigma$ by a sum over local operator insertions at its ends. This frames the computation of $Z$ as a weighted sum of sphere four-point functions. As stressed earlier in this section, computing $Z_{\rm vac}$ as opposed to the full partition function of ${\cal C}$ means that we only allow Virasoro vacuum descendants to propagate along the handles. This construction is perturbative in the width of the handles. There are many parameterizations of a given surface $\Sigma$; we use the Schottky construction, which forms $\Sigma$ as a quotient of the Riemann sphere by a discrete subgroup of $PSL(2,\mathbb{C})$, the M\"obius group. The genus-two Schottky space is parameterized by coordinates $\lbrace p_1,p_2,x\rbrace$. Roughly speaking, these describe the widths of the two handles and the sphere coordinate of the lone endpoint not fixed by conformal symmetry, respectively. The computation of $Z_{\rm vac}$ is then a double power series in $p_1$ and $p_2$, where the powers are the left-moving conformal weights of the operators inserted at the endpoints.
\begin{figure}
\centering
\includegraphics[width=0.98\textwidth]{figs/sewing}
\caption{\label{fig-sewing} A picture of the sewing approach to computing a genus-two partition function, $Z$. The mechanism was explained in figure \ref{fig-sewing-vac}. To compute $Z$ rather than $Z_{\rm vac}$, one simply lets the sum run over all operators in the CFT Hilbert space.}
\end{figure}
In order to make eventual contact with R\'enyi entropies and the work of \cite{Yin:2007gv}, we will also need to express $Z_{\rm vac}$ in terms of the period matrix of $\Sigma$, denoted $\Om$. That is, we need to perform the coordinate map $\lbrace p_1,p_2,x\rbrace \mapsto \Om$. This is known in closed form, but is complicated (see, e.g., \cite{Yin:2007gv, Gaberdiel:2010jf}). If we define multiplicative periods
\eq{mp}{q_{ij} := e^{2\pi i \Om_{ij}}}
then $q_{ij}$ admits a power series in $p_1$ and $p_2$ of the following form:
\es{mltprds}{
&q_{11}=p_1\sum_{n,m=0}^{\infty}p^n_1p^m_2\sum_{r=-n-m}^{n+m}c(n,m,|r|)\,x^r,\\
&q_{12}=x+x\sum_{n,m=1}^{\infty}p^n_1p^m_2\sum_{r=-n-m}^{n+m}d(n,m,r)\,x^r\\
&q_{22}=q_{11}(p_1\leftrightarrow p_2)~.
}
The $c(n,m,|r|)$ and $d(n,m,r)=d(m,n,r)$ are coefficients given in Appendix E of \cite{Gaberdiel:2010jf} through $m=n=7$.
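The structure of the expansions \eqref{mltprds} can be organized mechanically. In the sketch below (our own illustration), the coefficient functions are kept symbolic rather than reproducing the tabulated values of \cite{Gaberdiel:2010jf}:

```python
import sympy as sp

p1, p2, x = sp.symbols('p1 p2 x')
K = 2  # truncation order in p1 and p2

# Symbolic placeholders for the tabulated coefficients c(n,m,|r|), d(n,m,r)
cf = sp.Function('c')
df = sp.Function('d')

q11 = p1*sum(p1**n*p2**m*sum(cf(n, m, abs(r))*x**r
                             for r in range(-n - m, n + m + 1))
             for n in range(K + 1) for m in range(K + 1))
q12 = x + x*sum(p1**n*p2**m*sum(df(n, m, r)*x**r
                                for r in range(-n - m, n + m + 1))
                for n in range(1, K + 1) for m in range(1, K + 1))
q22 = q11.subs({p1: p2, p2: p1}, simultaneous=True)

# leading behavior: q11 = c(0,0,0) p1 + ..., q12 = x + O(p1 p2)
assert sp.expand(q11).coeff(p1, 1).subs(p2, 0) == cf(0, 0, 0)
assert (q12 - x).subs(p1, 0) == 0
```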
Thus, in order to compute $Z_{\rm vac}$ via sewing, we must compute four-point functions of operators in the Virasoro identity representation. We turn to this now, by way of the more general subject of computing four-point functions of arbitrary holomorphic operators.
\section{Four-point functions and the analytic bootstrap}\label{iii}
In this section, which may be read independently of the rest of the paper, we discuss methods for computing correlators in 2D CFTs. We will focus on four-point functions. A standard way to compute a four-point function is to do an OPE expansion of pairs of operators. This yields a power series in the cross-ratio $x$ of the four points. However, we wish to calculate the correlator at finite values of $x$. We will describe two methods to do this. The first, described in subsection \ref{iii-i}, is via direct manipulation of operator modes, and may be familiar to CFT practitioners. The second, described in subsection \ref{iii-ii}, is an analytic realization of the conformal bootstrap that applies specifically to correlators of only holomorphic (or only anti-holomorphic) operators. The upshot is that the combination of holomorphy with fundamental properties of the OPE and crossing symmetry yields a result that is determined by a {\it finite} number of OPE coefficients. In the case that all four operators have identical holomorphic dimensions, the solution of crossing symmetry leads to an especially simple algorithm.
Before turning to those methods, we first describe a simple approach that applies specifically to correlators of descendants of the identity, and use the results to discuss the powers of the central charge $c$ that appear in such correlators.
\subsection{Free bosons and powers of $c$}\label{powers}
As explained in subsection \ref{sewing} (and illustrated in figure \ref{fig-sewing-vac}), $Z_{\rm vac}$, the universal part of the genus-two partition function, can be constructed from four-point functions on the plane of descendants of the identity. In this subsection, we will discuss general properties of such correlators.
The first property to note is that they are independent of the rest of the field content of the theory, and depend only on its central charge. This follows from the fact that the identity Verma module is closed under fusion. Since in this paper we are particularly interested in the powers of $c$ that appear in $Z_{\rm vac}$, we will focus here on which powers of $c$ can appear in such correlators. We will first use a simple counting argument in a free field theory to show that the powers of $c$ are highly constrained. In particular, if one thinks of $1/c$ as a coupling constant, then it appears that these correlators are tree-level exact. We will relate this classicality to the fact that the sphere partition function, for any CFT, has a particularly simple $c$-dependence, and then discuss its bulk interpretation for holographic CFTs. Finally, we will discuss the generalizations of these statements to higher genus.
The fact that correlators of descendants of the identity are independent of the theory, except its central charge, implies that we can compute them in any convenient theory with a variable central charge. One simple choice is the theory of $c$ free bosons; by writing the relevant operators in terms of elementary fields, it is in principle straightforward to compute their correlators using free-field Wick contractions. This procedure is fairly tractable for calculating, for example, the four-point function of the stress tensor, but it rapidly becomes unwieldy when applied to higher-point functions or higher-level descendants, and for these calculations the methods described in the following subsections are far more efficient.
Nonetheless, the free-boson method gives a fast way to answer the important question of what powers of $c$ appear in a given correlator. For example, the stress tensor is \begin{equation}
T =
-\frac12\sum_{\mu=1}^c:\partial X^\mu\partial X^\mu:\,.
\end{equation}
An $m$-point function of stress tensors $\ev{T(z_1)\cdots T(z_m)}$ includes indices $\mu_1,\ldots,\mu_m$. The $2m$ $X$ fields appearing can be contracted in various ways, linking the different $T$'s, and therefore the different $\mu$'s, to each other. For example, a contraction between $X^{\mu_1}$ and $X^{\mu_2}$ leads to a factor of $\delta_{\mu_1\mu_2}$. In the connected part of the correlator, they are all linked in one group, so a non-zero contribution occurs only when all the $\mu$ are equal: $\mu_1=\cdots=\mu_m$. Hence the connected part of the correlator is linear in $c$, independent of $m$. Disconnected parts give higher powers of $c$; for example, the stress tensor four-point function has a term quadratic in $c$, from contractions in which the $T$ are linked in two separate pairs (see \eqref{eq8} for the explicit form).
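The counting argument can be made concrete with a short enumeration (our own illustrative script): list all Wick pairings of the $2m$ $X$ fields, with intra-$T$ contractions excluded by normal ordering, and read off the power of $c$ as the number of connected components of the resulting contraction graph among the $T$'s.

```python
def c_powers(m):
    """Powers of c appearing in the m-point function of T for c free bosons.
    Each T = -(1/2) sum_mu :dX^mu dX^mu: carries two X legs; normal ordering
    forbids contracting the two legs of the same T.  Each contraction pattern
    contributes c^k, where k is the number of connected components of the
    graph on the m T's whose edges are the contractions."""
    fields = [(i, a) for i in range(m) for a in (0, 1)]

    def pairings(fs):
        if not fs:
            yield []
            return
        first, rest = fs[0], fs[1:]
        for j, other in enumerate(rest):
            if other[0] == first[0]:
                continue  # skip self-contractions within one T
            for tail in pairings(rest[:j] + rest[j + 1:]):
                yield [(first, other)] + tail

    powers = set()
    for pairing in pairings(fields):
        parent = list(range(m))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for (i, _), (j, _) in pairing:
            parent[find(i)] = find(j)
        powers.add(len({find(i) for i in range(m)}))
    return powers

# <TT> and <TTT> are purely linear in c (connected contractions only);
# <TTTT> also has a c^2 piece from the two disconnected pairs
assert c_powers(2) == {1}
assert c_powers(3) == {1}
assert c_powers(4) == {1, 2}
```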
All descendants of the identity can be written as normal-ordered products of derivatives of stress tensors. The connected part of any correlator---regardless of the number and type of operators---again comes from terms in which all the $\mu$'s are equal, and is therefore again linear in $c$. This also follows from the fact that the generating function for connected correlators of the stress tensor is the sphere free energy as a functional of the metric, which is simply $c$ times the Liouville action. Thus, if we think of $1/c$ as playing the role of $\hbar$, any CFT on the sphere is purely classical in this sector; in other words, its correlators are effectively given entirely by tree-level contributions. To translate this into the usual field-theory language, if we normalize $T$ by a factor of $c^{-1/2}$ (so that its two-point function is 1), then a connected correlator with a total of $P$ factors of $T$ is proportional to $c^{1-P/2}$, just like a tree-level diagram with $P$ external legs in a field theory with coupling constant $1/c$.
We now turn to holographic theories, where $1/c\sim G_N$ is indeed the bulk coupling constant. An operator made of $p$ stress tensors corresponds to a state containing $p$ gravitons. This again leads to $c^{1-P/2}$ for a tree-level bulk process involving a total of $P$ gravitons. From this point of view, the absence of loop corrections may seem mysterious, given that there certainly exist Witten diagrams in the bulk containing loops, which make non-zero contributions to such a correlator. However, in 3D gravity, all terms in the effective action which depend only on the metric (even those generated by loops of other fields) can be absorbed in the Einstein-Hilbert term \cite{Gupta:2007th}. Hence, the full quantum effective action for the metric \emph{is} simply the classical action, with a renormalized value of the Newton constant. Since it is the renormalized Newton constant that enters in the relation $1/c=2G_N/3R_{\rm AdS}$, when working in terms of $c$, the theory appears to be entirely classical.\footnote{This property is {\it not} directly related to the absence of propagating degrees of freedom in pure 3D gravity. To demonstrate this, one can consider the correlator of four spin-$s$ currents with $s>2$. As we will show by example later in section \ref{identop} (see \eqref{Lamb}--\eqref{wcoeffs}), these correlators do not truncate in a $1/c$ expansion. This implies a non-trivial loop expansion for bulk four-point scattering of spin-$s$ gauge fields in pure 3D higher spin gravity, even though these theories also lack propagating modes.}
The arguments above, both in the field theory and in the bulk, depend crucially on the fact that we are working on the plane (or, more generally, on the sphere with any metric). On surfaces with non-zero genus, as we recalled in subsection \ref{Renyireview}, the free energy (or effective action) includes higher powers of $1/c$, and these higher-order terms depend on the full operator content of the theory. So one would not expect correlators to be purely classical. Similarly, from a bulk point of view, gravitons and other particles can propagate in loops that wrap non-trivial cycles of the bulk, giving corrections that cannot be captured by a local effective action.
Nonetheless, as noted below \eqref{chenpatt}, at genus one, the \emph{vacuum} free energy $F_{\rm vac}(T^2)=-\ln Z_{\rm vac}(T^2)$ does have the special property that it is one-loop exact, in other words contains only terms linear and constant in $c$ (in any CFT). $Z_{\rm vac}(T^2)$ is defined as the path integral with the insertion of the operator $P_{\rm vac}(A)$ that projects onto vacuum descendants on some fundamental cycle $A$ of the torus. Derivatives of the free energy with respect to the metric give connected correlators of the form $\ev{P_{\rm vac}(A)\mathcal{O}_1\mathcal{O}_2\cdots}_{\rm con}$, where the $\mathcal{O}_i$ are descendants of the identity. Such correlators are therefore also one-loop exact (contain only terms linear and constant in $c$). We will confirm this property by explicit calculation in subsection \ref{iv-ii} below.
\subsection{Operator modes}\label{iii-i}
To begin, we will compute vacuum correlators of the form
\eq{eq1}{\langle {\cal O}(\infty) T(1) T(z) {\cal O}(0) \rangle}
where $\langle \cdot \rangle := \langle 0|\cdot|0\rangle$. The operator ${\cal O}$ is allowed to be an arbitrary, non-holomorphic operator, not necessarily primary or quasi-primary. As is conventional, we leave its anti-holomorphic dependence implicit in what follows. We define mode expansions
\eq{eq2}{T(z) = \sum_{n\, \in \, \mathbb{Z}} {L_n\over z^{n+2}}~, \quad {\cal O}(z) = \sum_{n\,\in\, \mathbb{Z}} {{\cal O}_{n}\over z^{n+h}}}
where the stress tensor modes obey the Virasoro algebra,
\eq{eq3}{[L_m,L_n] = (m-n) L_{m+n} + {c\over 12}n(n^2-1) \delta_{m+n,0}~.}
In terms of modes, the four-point function is
\eq{eq5}{\langle {\cal O}(\infty) T(1) T(z) {\cal O}(0) \rangle = z^{-2} \sum_{n\, \in \, \mathbb{Z}} z^{-n} \langle {\cal O}_{h} L_{-n} L_n {\cal O}_{-h}\rangle~.}
To proceed, we break up the sum into the $n=0$ mode term, and two sums over positive and negative integers (denoted $\mathbb{Z}_+$ and $\mathbb{Z}_-$, respectively). Using the fact that $L_0{\cal O}_{-h} |0\rangle = h{\cal O}_{-h}|0\rangle$, the $n=0$ mode contributes a term $z^{-2} h^2 {\cal N}_{{\cal O}}$, where ${\cal N}_{{\cal O}} = \langle {\cal O}_{h} {\cal O}_{-h}\rangle$ is the norm of ${\cal O}$. Using the Virasoro algebra, and relabeling $n\rightarrow -n$, we can rewrite the sum over $\mathbb{Z}_-$ in terms of a sum over $\mathbb{Z}_+$ as
\begin{eqnarray}\label{eq7}
\sum_{n\, \in \, \mathbb{Z}_-} z^{-n} \langle {\cal O}_{h} L_{-n} L_n {\cal O}_{-h}\rangle \!\!\!&=&\!\!\! \sum_{n\, \in \, \mathbb{Z}_+}z^{n}\Big(\langle {\cal O}_{h} L_{-n} L_{n} {\cal O}_{-h}\rangle+\left(2nh+{c\over 12}n(n^2-1)\right) {\cal N}_{{\cal O}} \Big)\quad \quad \quad\\
\!\!\!&=&\!\!\! \sum_{n\, \in \, \mathbb{Z}_+}z^n \langle {\cal O}_{h} L_{-n} L_{n} {\cal O}_{-h}\rangle+\left({c\over 2}{z^2\over (1-z)^4}+2h{z\over (1-z)^2}\right){\cal N}_{{\cal O}}~.\nonumber
\end{eqnarray}
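The two closed-form sums used in the second line can be verified symbolically. The following sketch (using Python's sympy; an illustration, not part of the derivation) checks that the series coefficients of the closed forms reproduce $2nh+{c\over 12}n(n^2-1)$ term by term:

```python
import sympy as sp

z, c, h, n = sp.symbols('z c h n')

# closed forms appearing in the last line of the resummation
closed = sp.Rational(1, 2)*c*z**2/(1 - z)**4 + 2*h*z/(1 - z)**2

# term-by-term sum over positive n of z^n (2nh + (c/12) n (n^2 - 1))
term = 2*n*h + sp.Rational(1, 12)*c*n*(n**2 - 1)
partial = sum(term.subs(n, m)*z**m for m in range(1, 9))

# the two expressions agree through z^8
diff = sp.expand(sp.series(closed, z, 0, 9).removeO() - partial)
```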
The quantity in angle brackets is simply the squared norm $||L_n {\cal O}_{-h}|0\rangle||^2$.
There is a further simplification of this sum: it truncates on account of vacuum invariance. Suppose ${\cal O}$ is a level $N$ descendant of a primary field ${\cal O}'$ of holomorphic dimension $H=h-N$. Then ${\cal O}_{-h}$ can be written as a linear combination of ``lexicographically ordered'' operators,
\eq{eq9}{L_{-n_1}\cdots L_{-n_k} {\cal O}'_{-H}}
where $n_1 \geq n_2 \geq \ldots \geq n_k$ and $N = \sum_{i=1}^k n_i$. This implies that the sum in \eqref{eq7} truncates at $n=N$, because $L_n {\cal O}_{-h}|0\rangle = 0$ for $n>N$: commuting $L_n$ through the $L_{-n_i}$ lowers the level by $n$, and the positive modes annihilate ${\cal O}'_{-H}|0\rangle$ by definition of a primary.
Taking this into account and adding \eqref{eq7} to the other pieces, the full correlator is
\es{eq8}{&\langle {\cal O}(\infty) T(1) T(z) {\cal O}(0) \rangle = \\&z^{-2}\left(\sum_{n=1}^N(z^n+z^{-n}) \langle {\cal O}_{h} L_{-n} L_{n} {\cal O}_{-h}\rangle +\left({c\over 2}{z^2\over (1-z)^4}+2h{z\over (1-z)^2}+h^2\right){\cal N}_{{\cal O}}\right)~.}
A pleasing feature of the expression in parentheses is its manifest invariance under $z\rightarrow 1/z$, which is just the crossing symmetry associated with exchange of the two stress tensors.
If ${\cal O}$ is quasi-primary, then the $n=1$ term of the sum vanishes. If ${\cal O}$ is primary, the entire sum vanishes. That leaves us with a very simple expression:
\es{eq10}{{\cal O}~\text{primary}:\quad \langle {\cal O}(\infty) T(1) T(z) {\cal O}(0) \rangle =z^{-2}\left({c\over 2}{z^2\over (1-z)^4}+2h{z\over (1-z)^2}+h^2\right){\cal N}_{{\cal O}}~.}
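The $z\rightarrow 1/z$ invariance of the factor in parentheses, noted above, is quickly confirmed symbolically (a sympy sketch):

```python
import sympy as sp

z, c, h = sp.symbols('z c h')

# bracketed factor of the primary-operator correlator
G = sp.Rational(1, 2)*c*z**2/(1 - z)**4 + 2*h*z/(1 - z)**2 + h**2

# z -> 1/z (exchange of the two stress tensors) maps the factor to itself
diff = sp.simplify(G.subs(z, 1/z) - G)
```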
A useful check of \eqref{eq8} is to take ${\cal O}=T$, which yields the stress tensor four-point function. Using the Virasoro algebra to compute $\langle L_2 L_{-2} L_2 L_{-2} \rangle = c^2/4$, we find
\es{TTTT}{\langle T(\infty)T(1) T(z) T(0) \rangle &=z^{-4}\left({c^2\over 4}\left(1+z^4+{z^4\over (1-z)^4}\right) + 2cz^2{(1-z+z^2)\over (1-z)^2}\right)~.}
This agrees with previous results (e.g. \cite{Osborn:2012vt}). We may define the conformal cross-ratio as
\eq{eq4}{x := {z_{12}z_{34}\over z_{13}z_{24}} = z~,}
in which case \eqref{TTTT} has the correct form of a four-point function as determined by conformal symmetry: %
\eq{TTTT2}{\langle T(\infty)T(1) T(z) T(0) \rangle = z^{-4} {\cal F}(x)~.}
We will reproduce this result in the next subsection in a more efficient way.\footnote{Following our discussion in subsection \ref{powers}, note that this could be derived for {\it all} $c$ from a 3D gravity computation at {\it large} $c$, by thinking of $T$ as a single-graviton state: the ${\cal O}(c^2)$ part is the free-field Wick contraction, and the ${\cal O}(c)$ part is the connected bulk correlator of four gravitons expressed in terms of the renormalized Newton constant.}
One can easily generalize this analysis to correlators where $T$ is replaced by a different operator. For simplicity, we consider the four-point function of two pairs of holomorphic quasi-primaries ${\cal O}^a$ and ${\cal O}^b$, of dimensions $h_a$ and $h_b$, respectively. (Their modes are defined as in \eqref{eq2}.) In this case, the resulting expression is
\begin{eqnarray}\label{oooo}
&&\!\!\!\!\!\!\!\langle {\cal O}^a(\infty){\cal O}^b(1){\cal O}^b(z){\cal O}^a(0)\rangle=\\
&&\!\!\!\!\!\!\!z^{-h_b}\left(\sum_{n=1}^{h_a} (z^n+z^{-n})\langle {\cal O}^a_{h_a}{\cal O}^b_{-n}{\cal O}^b_n{\cal O}^a_{-h_a}\rangle + \langle {\cal O}^a_{h_a}{\cal O}^b_0{\cal O}_0^b{\cal O}^a_{-h_a}\rangle+ \sum_{n\, \in \, \mathbb{Z}_+}z^n\langle {\cal O}^a_{h_a} [{\cal O}^b_n,{\cal O}^b_{-n}]{\cal O}^a_{-h_a}\rangle\right)\nonumber
\end{eqnarray}
To understand why the first sum truncates at $n=h_a$, we need to examine the OPE between quasi-primaries: in terms of modes,
\eq{qpope}{[{\cal O}^a_m,{\cal O}^b_n] = \sum_c C^{ab}_{~~c} P(m,n;h_{a},h_{b}; h_{c})\, {\cal O}^c_{m+n} + G^{ab}\delta_{m+n,0} \binom{m+h_a-1}{h_a+h_b-1} ~.}
$G^{ab}$ is the Zamolodchikov metric, $ {\cal O}^c$ are also quasi-primary, $C^{ab}_{~~c}$ are OPE coefficients, and the $P(m,n;h_{a},h_{b}; h_{c})$ are known functions\footnote{See e.g. equation (3.4) of \cite{Bowcock:1990ku}. All $P(m,n;h_{a},h_{b}; h_{c})$ are finite in unitary CFTs for operators of finite dimension.} encoding the contribution of the full global conformal family of ${\cal O}^c$. All modes ${\cal O}^b_{n}$ with $n> -h_b$ annihilate the vacuum. This enables us to write ${\cal O}^b_n{\cal O}^a_{-h_a}|0\rangle = [{\cal O}^b_n,{\cal O}^a_{-h_a}]|0\rangle$ for $n>-h_b$; the OPE \eqref{qpope}, combined with the unitarity bound $h\geq 0$, ensures that modes with $n>h_a$ give vanishing contribution. This explains the upper bound in \eqref{oooo}.
Equation \eqref{oooo}, while compact, is not particularly transparent. Even if ${\cal O}^b$ is made of current modes alone, its modes may be given by infinite sums over products of the $L_n$, which are difficult to manipulate. More generally, the four-point function appears to depend on the full holomorphic operator content of the theory, due to the presence of the commutator $[{\cal O}^b_n, {\cal O}^b_{-n}]$. In fact, this latter point belies the true structure of the result. We now demonstrate this explicitly as we turn to a much more powerful method of computation for correlators of holomorphic operators.
\subsection{The holomorphic bootstrap}\label{iii-ii}
We will now describe a general method to compute the correlation functions of chiral operators using crossing symmetry.
We will see that any correlation function of chiral operators which obey a closed operator product algebra may be determined uniquely by a {\it finite} number of three-point function coefficients. This is in contrast to the typical situation, where the OPE allows us to determine correlation functions only in terms of an infinite sum over intermediate states.
In many cases, such as for the correlation functions of Virasoro descendants of the identity, this leads to an extremely efficient computational algorithm.
Let us recapitulate our conventions for chiral operators. We make no further reference to the mode notation of the previous subsection. We will consider a family of chiral operators ${\cal O}_a(z)$, with integer dimensions $h_a$, and ${\bar h}_a=0$. We will take the basis ${\cal O}_a$ to be quasi-primaries and assume that the ${\cal O}_a$ satisfy a closed OPE
\begin{equation}
{\cal O}_a(z_1) {\cal O}_b(z_2) \sim \sum_c C_{ab}{}^c {{\cal O}_c (z_2)\over z_{12}^{h_a+h_b-h_c}}~+~(\text{descendants})
\end{equation}
The two-point functions
\begin{equation}
\langle {\cal O}_a (z_1) {\cal O}_b (z_2) \rangle = {G_{ab} \over z_{12}^{h_a+h_b}}
\end{equation}
and three-point functions
\begin{equation}
\langle {\cal O}_a(z_1) {\cal O}_b(z_2) {\cal O}_c(z_3) \rangle = {C_{abc} \over z_{12}^{h_a+h_b-h_c}z_{13}^{h_a+h_c-h_b}z_{23}^{h_b+h_c-h_a}}
\end{equation}
are fixed, up to constants, by conformal invariance.
\subsubsection{Four-point functions: General case}
Conformal invariance constrains the four-point function to take the form
\begin{equation}
\label{4pt}
\langle {\cal O}_a(z_1) {\cal O}_b(z_2) {\cal O}_c(z_3) {\cal O}_d (z_4) \rangle
=\left({ 1\over z_{12}^{h_a+h_b} z_{34}^{h_c+h_d}} \left({z_{24} \over z_{14}} \right)^{h_{ab} } \left({z_{14} \over z_{13}}\right)^{h_{cd}} \right){\cal F}_{abcd}(x) ~.
\end{equation}
where we define the cross ratio $x$ as in \eqref{eq4},
\begin{equation}
x={z_{12} z_{34} \over z_{13} z_{24}}~, ~~~~~~~~~~~~~~~~ 1-x={z_{14} z_{23} \over z_{13} z_{24}} ~.
\end{equation}
We will use the notation $H=\sum_a h_a$, $h_{ab}=h_a-h_b$, $z_{ab}=z_a-z_b$, etc.
Our starting point is the observation that the four-point function (\ref{4pt}) depends analytically on the $z_i$ and has poles only when the points $z_i$ coincide. Thus ${\cal F}_{abcd}$ is a meromorphic function of $x$ with poles only at $x=0,1,\infty$. So ${\cal F}_{abcd}$ is a rational function of $x$, which is completely determined (up to a constant piece) by its polar behaviour at these points. As we will see, this polar behaviour is fixed by only a finite number of three-point function coefficients.
We begin by considering the expansion of ${\cal F}_{abcd}$ near $x\to0$.
This can be found by inserting the ${\cal O}_a {\cal O}_b$ and ${\cal O}_c {\cal O}_d$ OPEs into the four-point function (\ref{4pt}). The result is a sum over intermediate operators ${\cal O}_e$. The contributions from the descendant states of a given quasi-primary are given by a rigid (i.e. $SL(2,\mathbb{R})$) conformal block. The rigid conformal blocks were written in terms of hypergeometric functions in \cite{zamo}. The result is
\begin{equation}
\label{cblock}
{\cal F}_{abcd} (x)=
\sum_{e} \left(C_{abe} C_{cd}{}^e \right)
x^{h_e}F(h_e-h_{ab}, h_e+h_{cd};2h_e;x)
\end{equation}
From this we see that ${\cal F}_{abcd}$ is finite as $x\to 0$. The constant term as $x\to0$ is given by the exchange of the identity operator, so
\begin{equation}
{\cal F}_{abcd} (x) = G_{ab} G_{cd} + \dots \label{0exp}
\end{equation}
where $\dots$ denotes terms that vanish as $x\to 0$.
We now need to determine the polar behaviour near $x\to1$ and $x\to\infty$. To do this we will use the transformation properties of
the four-point function under crossing symmetry. The crossing symmetry conditions can be derived by considering how the correlation function (\ref{4pt}) transforms when the $z_i$ are permuted. In particular, let us consider a permutation $\pi\in S_4$ of four elements. We have
\begin{equation}
\langle {\cal O}_a(z_1) {\cal O}_b(z_2) {\cal O}_c(z_3) {\cal O}_d (z_4) \rangle =
\langle {\cal O}_{\pi(a)}(z_{\pi(1)}) {\cal O}_{\pi(b)}(z_{\pi(2)}) {\cal O}_{\pi(c)}(z_{\pi(3)}) {\cal O}_{\pi(d)} (z_{\pi(4)}) \rangle
\end{equation}
This relates ${\cal F}_{abcd} (x)$ to ${\cal F}_{\pi(abcd)} (\pi (x))$, where the permutation $\pi$ acts on the cross-ratio as
\begin{equation}
\pi(x) \equiv {z_{\pi(1)\pi(2)} z_{\pi(3)\pi(4)} \over z_{\pi(1)\pi(3)} z_{\pi(2)\pi(4)}}
\end{equation}
One then just needs to determine how the permutation $\pi$ acts on the prefactor in parentheses in equation (\ref{4pt}).
Some permutations have $\pi(x)=x$; these give identities for the ${\cal F}_{abcd}(x)$ at fixed $x$. One can verify that these identities follow immediately from the conformal block expansion (\ref{cblock}), using hypergeometric function identities and the symmetries of the three-point function coefficients.
Other permutations act on $x$, and give non-trivial information about four-point functions. In particular, the permutations $\pi=(14)$ and $\pi=(24)$ give the crossing symmetry equations
\es{cros}{{\cal F}_{abcd}(x) &= (-1)^{H} x^{h_a + h_d} {\cal F}_{dbca}({1/x})\\
&= (-1)^{H} x^{h_c + h_d} (1-x)^{-h_c-h_b} {\cal F}_{adcb}({1-x})}
These crossing equations strongly constrain the allowed form of the three-point function coefficients.
Since ${\cal F}_{abcd}(x)$ is finite as $x\to0$, we see that ${\cal F}_{abcd}$ has a pole of order $h_a+h_d$ at $x\to \infty$ and a pole of order $h_b+h_c$ at $x\to1$.
We need to understand better the behaviour near these poles.
To determine the behaviour near $x\to\infty$ we insert the conformal block expansion (\ref{cblock}) into the first crossing symmetry equation to get
\begin{eqnarray}
{\cal F}_{abcd}(x) &=& (-1)^{H} \sum_{e} C_{dbe} C_{ca}{}^e x^{h_a + h_d-h_e} {F}(h_e-h_{db},h_e+h_{ca};2h_e;{1/x})
\\
&\sim& \sum_{n=1}^{h_a+h_d} \alpha_n x^{n} + \dots~~~~~~~{\rm as}~x\to\infty ~.
\label{inftyexp}
\end{eqnarray}
Here $\dots$ denotes terms that are finite at $x\to\infty$.
The important point is that, because the hypergeometric function is finite as $x\to\infty$, the only terms that contribute to the pole are those with $h_e<h_a+h_d$.
In particular, the power series expansion of the hypergeometric function at $x\to\infty$ gives an explicit formula for the $\alpha_n$ in terms of the three-point function coefficients $C_{dbe} C_{ca}^e$ with $h_e<h_a+h_d$. We find
\begin{equation}
\alpha_n = (-1)^H \sum_{h_e=0}^{h_a+h_d-n} C_{dbe} C_{ca}{}^e{ (h_e-h_{db})_{h_a+h_d-h_e-n} (h_e+h_{ca})_{h_a+h_d-h_e-n} \over(h_a+h_d-h_e-n)! (2h_e)_{h_a+h_d-h_e-n}}~.
\end{equation}
Similarly, near $x\to1$ we have
\es{1exp}{{\cal F}_{abcd}(x) &= (-1)^{H} \sum_{e} C_{ade} C_{cb}{}^e (1-x)^{h_e-h_c-h_b} {F}(h_e-h_{ad},h_e+h_{cb};2h_e;{1-x}) x^{h_c + h_d}
\\
&\sim \sum_{n=1}^{h_b+h_c} \beta_n (1-x)^{-n} + \dots~~~~~~~{\rm as}~x\to1 ~.}
where $\dots$ denotes terms that are finite as $x\to1$.
Again, the hypergeometric function has a simple power series expansion at $x\to1$, giving explicit formulas for the coefficients $\beta_n$ in terms of the three-point function coefficients with $h_e<h_b+h_c$. The formula for the $\beta_n$ is a bit more complicated than that for the $\alpha_n$, since we must expand $x^{h_c + h_d}$ in powers of $1-x$ as well as the hypergeometric function, so we will not write it explicitly. The important point is that there is a completely explicit (albeit complicated) expression for the $\beta_n$ in terms of the three-point function coefficients $C_{ade} C_{cb}{}^e$ with $h_e < h_b+h_c$.
The four-point function ${\cal F}_{abcd}$ is now completely fixed. It is the unique rational function of $x$ which is finite everywhere except at $1$ and $\infty$, where its polar behaviour is given by (\ref{inftyexp}) and (\ref{1exp}), and whose value at $x=0$ is given by (\ref{0exp}):
\begin{equation}
\label{Fis}
{\cal F}_{abcd}(x) = G_{ab}G_{cd}+\sum_{n=1}^{h_a+h_d} \alpha_n x^{n} + \sum_{n=1}^{h_b+h_c} \beta_n \left[ \left({1-x}\right)^{-n} -1\right]~.
\end{equation}
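As a concrete illustration of \eqref{Fis}, consider a weight-one current $j$ with two-point normalization $k$, whose four-point function (treated further below) is ${\cal F}(x)=k^2(1-x+x^2)^2/(1-x)^2$. One finds the pole data $\alpha_1=0$, $\alpha_2=k^2$, $\beta_1=-2k^2$, $\beta_2=k^2$, with constant term $k^2$; the following sympy sketch confirms that \eqref{Fis} with this data reproduces ${\cal F}$:

```python
import sympy as sp

x, k = sp.symbols('x k')

# four-point function of a weight-one current with two-point normalization k
F = k**2*(1 - x + x**2)**2/(1 - x)**2

# reconstruction from pole data: constant k^2, alpha_1 = 0, alpha_2 = k^2,
# beta_1 = -2 k^2, beta_2 = k^2
recon = (k**2 + k**2*x**2
         - 2*k**2*((1 - x)**(-1) - 1)
         + k**2*((1 - x)**(-2) - 1))

diff = sp.simplify(F - recon)
```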
We see that ${\cal F}_{abcd}$ depends on a total of $H = h_a+h_b+h_c+h_d$ coefficients, $\alpha_n, \beta_n$, which are determined by combinations of a finite number of three-point function coefficients.
This is a consequence of crossing symmetry applied in a holomorphic setting; for non-holomorphic operators, there is no analogous formula determining a four-point function in terms of a finite number of three-point function coefficients.
This has a remarkable consequence for the conformal bootstrap program, where crossing symmetry is used to place constraints on the three-point function coefficients. This is especially true for chiral CFTs. In a typical CFT, the bootstrap results in equations involving an infinite number of three-point function coefficients, which can only be solved by truncating or approximating the crossing symmetry equations in some way. For a chiral CFT, the constraints are all written in terms of a finite number of equations. For example, by comparing \eqref{cblock} with the expansion of \eqref{Fis} around $x=0$ we can obtain explicit formulas for all of the coefficients $C_{abe} C_{cd}{}^e$, for all $e$, in terms of the coefficients $C_{dbe} C_{ca}{}^e$ with $h_e<h_a+h_d$ and $C_{ade} C_{cb}{}^e$ with $h_e<h_b+h_c$. Of course, our results also apply to chiral operators in non-chiral CFTs.
Moreover, we note that (\ref{Fis}) is not the unique way of writing the four-point function. In writing (\ref{4pt}) we chose to separate out a particular combination of $z_{ij}$ to define a meromorphic function. This choice led to a meromorphic function depending on $H$ coefficients which were determined by three-point function coefficients $C_{dbe} C_{ca}{}^e$ with $h_e<h_a+h_d$ and $C_{ade} C_{cb}{}^e$ with $h_e<h_b+h_c$.
Other ways of separating out a meromorphic function will lead to different expressions which in some cases may be more useful. For example, one particularly interesting way of imposing the crossing symmetry relations is to write the four-point function as
\begin{eqnarray}\label{4ptalt}
{F_{abcd}(x)
\over
z_{12}^{h_a +h_b-H/3}
z_{13}^{h_a +h_c-H/3}
z_{14}^{h_a +h_d-H/3}
z_{23}^{h_b +h_c-H/3}
z_{24}^{h_b +h_d-H/3}
z_{34}^{h_c +h_d-H/3}
}~
\end{eqnarray}
where
\begin{equation}
{\cal F}_{abcd} (x) =
x^{H/3} (1-x)^{H/3-h_b-h_c} F_{abcd}(x)
\end{equation}
The function $F_{abcd}$ is convenient because it treats the four points democratically, which makes the crossing symmetry equations very simple, but it does so at the price of introducing branch cuts in $F_{abcd}$ from the fractional powers of the $z_{ij}$ when $H$ is not a multiple of 3. $F_{abcd}$ has singularities of order ${H/3}$ at each of the three points $x=0,1,\infty$; the crossing equations determine $F_{abcd}$ to be
\begin{equation}
F_{abcd}(x) = \sum_{n=0}^{\left\lfloor H/3\right\rfloor} \left(a_n x^{n-H/3} +
b_n x^{H/3-n}+
c_n (1-x)^{n-H/3}
\right)
\end{equation}
where the $a_n, b_n, c_n$ are determined by the three-point functions of operators with $h_e\leq{\left\lfloor H/3\right\rfloor}$. Note that, since $n=0, \dots, {\left\lfloor H/3\right\rfloor}$ we now have $3\left(\left\lfloor H/3\right\rfloor+1\right)$ coefficients to determine. It is reasonably straightforward, though tedious, to write explicit expressions for these coefficients in terms of the three-point functions. The advantage of this approach is that it will, in principle, require the computation of fewer three-point function coefficients. For example, if the number of operators in the chiral algebra increases rapidly with dimension (as in the case of the Virasoro algebra) then this expansion would be much more efficient.
\subsubsection{Four-point functions: Identical operators}\label{identop}
Let us now simplify to the case where the four operators ${\cal O}_a$ are identical operators ${\cal O}$ of weight $h$, which is of interest for our computation of the higher genus partition function. In this case the above procedure simplifies considerably. The four-point function ${\cal F}(x) = {\cal F}_{abcd}(x)$ is a meromorphic function with poles only at $x=1,\infty$ which obeys the simplified crossing symmetry equation
\begin{equation}\label{crossingsimple}
{\cal F}(x) = x^{2h} {\cal F}(1/x) = x^{2h} (1-x)^{-2h} {\cal F}(1-x)
\end{equation}
In fact, the space of such functions is a vector space of dimension $1+\lfloor2h/3\rfloor$.
To see this, consider the function
\begin{equation}
a(x) = {(1-x+x^2)^2 \over (1-x)^2}
\end{equation} which obeys (\ref{crossingsimple}) with $h=1$. The function ${\cal F}(x) a(x)^{-h}$ is invariant under the anharmonic group generated by $x\to 1-x$ and $x\to 1/x$. Moreover, this function is analytic everywhere on the Riemann sphere with the exception of a pole of order $2h$ at $x = {\rm e}^{\pi i/3}$, along with a mirror image pole at $x={\rm e}^{-\pi i /3}$. These points are order-three fixed points of the anharmonic group, so when expanded around $x=\pm e^{\pi i /3}$, only cubic powers may appear.
The function
\begin{equation}\label{k}
k(x) = {x^2 (1-x)^2\over (1-x+x^2)^3}
\end{equation}
is the unique meromorphic function invariant under the anharmonic group that has a pole of order 3 at each of $x = {\rm e}^{\pm \pi i/3}$.
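That $a(x)$ obeys the $h=1$ relation ${\cal F}(x)=x^{2}{\cal F}(1/x)$, and that $k(x)$ is invariant under both generators of the anharmonic group, can be checked symbolically (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x')

a = (1 - x + x**2)**2/(1 - x)**2
k = x**2*(1 - x)**2/(1 - x + x**2)**3

# a(x) satisfies the h=1 relation F(x) = x^2 F(1/x)
d1 = sp.simplify(a - x**2*a.subs(x, 1/x))
# k(x) is invariant under both generators of the anharmonic group
d2 = sp.simplify(k - k.subs(x, 1 - x))
d3 = sp.simplify(k - k.subs(x, 1/x))
```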
We can therefore expand
${\cal F}(x) a(x)^{-h}$ in integer powers of $k$, to obtain
\es{F(x)}{{\cal F}(x)
&= \sum_{n=0}^{\lfloor2h/3\rfloor} c_n {x^{2n} (1-x+x^2)^{2h-3 n}\over (1-x)^{2h-2n} }}
To determine the coefficients $c_n$, one simply expands \eqref{F(x)} in powers of $x$ and uses the OPE to match the result to products of three-point function coefficients.
It is instructive to phrase our conclusions in the language of modular functions. Equating $x$ with the modular lambda function
\eq{mod1}{x = \lambda(\tau) \approx 16 q^{1/2} - 128 q + O(q^{3/2})~,}
where $q=e^{2\pi i \tau}$, gives a map from ${\cal M}_{0,4}$ (the moduli space of four marked points on the sphere) to ${\cal M}_{1,0}$ (the moduli space of a torus). Accordingly, $SL(2,\mathbb{Z})$ transformations of $\tau$ induce anharmonic group transformations of $x$: specifically, $x\rightarrow 1-x$ and $x \rightarrow 1/x$ are induced by the $S$ and $TST$ transformations, respectively. The problem of finding a function invariant under the anharmonic group therefore maps to finding a modular function, with desired polar structure in $q$ determined by the poles in $x$ via \eqref{mod1}. Given the identification \eqref{mod1}, our function $k(x)$ in \eqref{k} is just (256 times) the inverse of the $J$ function: $k(x) = 256/J(\tau)$. So the construction of the four-point function ${\cal F}(x)$ is literally identical to that of torus partition functions of holomorphic CFTs, as in \cite{Witten:2007kt}. Likewise, \eqref{F(x)} implies a Rademacher expansion for OPE coefficients of higher dimension operators.
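The identification $k(x)=256/J(\tau)$ can be checked order by order in $q$ using the standard expansions $\lambda = 16q^{1/2}-128q+704q^{3/2}-3072q^2+\cdots$ and $J = q^{-1}+744+196884\,q+\cdots$ (a sympy sketch; the truncated series are standard inputs, not derived here):

```python
import sympy as sp

s = sp.symbols('s')  # s = q^{1/2}, with q = e^{2 pi i tau}

# leading terms of the modular lambda function and the J function
lam = 16*s - 128*s**2 + 704*s**3 - 3072*s**4
J = s**(-2) + 744 + 196884*s**2

k = lam**2*(1 - lam)**2/(1 - lam + lam**2)**3

# both sides agree through order q^2 (i.e. s^5)
k_ser = sp.series(k, s, 0, 6).removeO()
J_ser = sp.series(256/J, s, 0, 6).removeO()
diff = sp.expand(k_ser - J_ser)
```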
We now treat some useful examples. For $h=1$ we have
\begin{equation}
{\cal F}(x) = c_0 {(1-x+x^2)^2\over(1-x)^2}
\end{equation}
This is exactly the four-point function of a spin-1 current, $j$. The coefficient $c_0=k^2$ is determined by the first (trivial) OPE coefficient $j j 1$, where $k$ is the level of the current algebra.
For $h=2$ we get two possible functions,
\begin{equation}
{\cal F}(x) = { c_0 (1-x+x^2)^4 + c_1 (1-x)^2 x^2 (1-x+x^2) \over (1-x)^4}
\end{equation}
This is the stress tensor four-point function. Matching the small $x$ expansion with OPE coefficients of $T T 1$ and $T T T$, we find $c_0= c^2/4$ and $c_1 = c(2-c)$, where we used the canonical norm for the stress tensor, ${\cal N}_T = c/2$. This matches $\langle TTTT\rangle$ as computed in \eqref{TTTT}.
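As a check, expanding this ${\cal F}(x)$ at small $x$ reproduces the expected exchanges: the identity contributes $c^2/4$ at $x^0$, and the stress tensor block $x^2 F(2,2;4;x)=x^2(1+x+\cdots)$ contributes with coefficient $C_{TTT}C_{TT}{}^T = 2c$ (a sympy sketch):

```python
import sympy as sp

x, c = sp.symbols('x c')

c0, c1 = c**2/4, c*(2 - c)
F = (c0*(1 - x + x**2)**4 + c1*(1 - x)**2*x**2*(1 - x + x**2))/(1 - x)**4

ser = sp.expand(sp.series(F, x, 0, 4).removeO())
# identity exchange: c^2/4 at x^0; T exchange: 2c(x^2 + x^3 + ...)
expected = c**2/4 + 2*c*x**2 + 2*c*x**3
diff = sp.expand(ser - expected)
```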
For $h=3$ we have
\begin{equation}\label{F3}
{\cal F}(x) = { c_0 (1-x+x^2)^6 + c_1 (1-x)^2 x^2 (1-x+x^2)^3 + c_2 (1-x)^4 x^4 \over (1-x)^6}
\end{equation}
This is the four-point function of a spin-3 current, call it $W$, which was first worked out in
\cite{Zamolodchikov:1985wn} in the context of CFTs with ${\cal W}_3$ symmetry.\footnote{We believe our result actually corrects a sign error in \cite{Zamolodchikov:1985wn}: the parameter $\mu$ there should be a sum, not a difference, of two terms.} Matching the small $x$ expansion with OPE coefficients determines the $c_i$. In a theory with ${\cal W}_3$ symmetry, the first three quasi-primary operators appearing in the exchange channel of $\langle WWWW\rangle$ are $1,T$ and the level-four quasi-primary $\Lambda$, which is the normal-ordered product of $T$ with a derivative term subtracted:
\begin{equation}\label{Lamb}
\Lambda := (TT) - {3\over 10}\partial^2 T~.
\end{equation}
This operator has norm ${\cal N}_{\Lambda} = c(5c+22)/10$. The OPE coefficient ${WWW}$ vanishes, as it does for $W$ being any odd-spin chiral quasi-primary. Computing OPE coefficients using the Virasoro algebra and matching to the small-$x$ expansion of \eqref{F3}, we find
\eq{wcoeffs}{c_0 = {\cal N}_W^2~, \quad c_1 = {\cal N}_W^2\left({6(3-c)\over c}\right)~, \quad c_2 = {\cal N}_W^2\left( {3(5c^2-71c-102)\over c(5c+22)}\right)}
where $\langle W_3W_{-3}\rangle \equiv {\cal N}_W\propto c$ is the norm of $W$.\footnote{If the chiral algebra contains a spin-4 current too, as in the case of the ${\cal W}_{\infty}[\lambda]$ algebra appearing in the context of higher spin AdS/CFT \cite{Gaberdiel:2011wb}, this current will also appear at level four with nonzero OPE coefficient, and will change the value of $c_2$. This generalization is simple to compute using the ${\cal W}_{\infty}[\lambda]$ algebra; one instead finds $c_2 = 3{\cal N}_W^2(\lambda^2(25c^2-115c+546)-100c^2-740c-7464)/(5c(5c+22)(\lambda^2-4))$. This agrees with a previous result \cite{Long:2014oxa}.} As explained in section \ref{powers}, $\langle WWWW\rangle$ does not truncate in a $1/c$ expansion.
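The non-truncation is easy to exhibit: since ${\cal N}_W\propto c$, the ratio $c_2/{\cal N}_W^2$ from \eqref{wcoeffs} has an infinite expansion in $1/c$, with the low orders all non-vanishing (a sympy sketch):

```python
import sympy as sp

c, u = sp.symbols('c u')

# c_2 / N_W^2 from the W_3 four-point function, expanded at large c (u = 1/c)
ratio = 3*(5*c**2 - 71*c - 102)/(c*(5*c + 22))
ser = sp.expand(sp.series(ratio.subs(c, 1/u), u, 0, 4).removeO())

# coefficients of u^0 .. u^3 in the 1/c expansion
low_orders = [ser.coeff(u, p) for p in range(4)]
```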
For $h=4$ we get
\begin{equation}\label{F4h}
{\cal F}(x) = { c_0 (1-x+x^2)^8 + c_1 (1-x)^2 x^2 (1-x+x^2)^5 + c_2 (1-x)^4 x^4 (1-x+x^2)^2 \over (1-x)^8}
\end{equation}
An example of an $h=4$ chiral operator whose four-point function we will need in the sewing construction of $Z_{\rm vac}$ is the quasi-primary $\Lambda$ introduced in \eqref{Lamb}. Again computing the $c_i$ by matching to OPE coefficients, we find
\eq{348}{c_0 = {\cal N}_{\Lambda}^2~, \quad c_1 = {\cal N}_{\Lambda}^2\left( \frac{32}{c}-8\right)~,\quad c_2 = {\cal N}_{\Lambda}^2\left( \frac{4 \left(125 c^2+590 c+3704\right)}{5 c (5 c+22)}\right)}
In closing, we note that certain general consequences can be read off immediately from \eqref{F(x)}. For example, when $h$ is not a multiple of 3, every term in the four-point function ${\cal F}(x)$ includes a factor of $1-x+x^2$, so the four-point function vanishes at $x = {\rm e}^{\pm\pi i/3}$.
\section{Genus-two partition functions}\label{iv}
We are now ready to compute $Z_{\rm vac}$ at genus two, which captures the contribution of the Virasoro vacuum module to the partition function of a CFT on a genus-two Riemann surface. We begin this section by reviewing the Schottky uniformization of generic genus-$g$ Riemann surfaces and the sewing construction of the genus-$g$ partition function. We then turn to the actual computation of $Z_{\rm vac}$ at genus two using sphere four-point functions of low-lying Virasoro vacuum descendants. The final result can be found by substituting the results of subsection \ref{iv-ii} into equation \eqref{Z2}. We focus on the holomorphic part of $Z_{\rm vac}$ henceforth.
\subsection{Schottky uniformization and the partition function}\label{iv-i}
A non-singular genus-$g$ Riemann surface can be constructed by cutting out $2g$ disks on the Riemann sphere and identifying pairs of boundary circles to form $g$ handles. The Schottky uniformization of the Riemann surface entails the identification of pairs of circles through M\"obius transformations $\gamma_i$, $i\in\{1,2,\cdots,g\}$, which are elements of $PSL(2,\mathbb C)$. The maps $\gamma_i$ form the generators of the Schottky group, $\Gamma$. There are three parameters $\{a_i,r_i,p_i\}$ associated with the $i^{\rm{th}}$ handle: $(a_i,r_i)$ are the locations of the centers of the boundary circles, and $p_i$ determines the width of the $i^{\rm{th}}$ handle. A global conformal transformation can fix the positions of three boundary circles on the sphere and thus a genus-$g$ Riemann surface has $3g-3$ complex moduli.
We consider the Schottky uniformization of a genus-$g$ Riemann surface following the conventions of \cite{Gaberdiel:2010jf} (see their Appendix C), where the locations of each pair of identified circles are given by the M\"obius transformation
\begin{equation}
\gamma_{a_i,r_i}(z)=\frac{r_iz+a_i}{z+1},\label{mobius-i}
\end{equation}
where $\gamma_{a_i,r_i}(0)=a_i$ and $\gamma_{a_i,r_i}(\infty)=r_i$. The generators of the Schottky group are given in terms of this map as
\begin{equation}\label{sch-gens}
\gamma_i=\gamma_{a_i,r_i}\gamma_{p_i}\gamma_{a_i,r_i}^{-1},
\end{equation}
where $\gamma_{p_i}(z)=p_iz$. We note that identified circles have opposite orientations: for the $i^{\rm{th}}$ pair the two boundary circles are given by the maps $C_i=\gamma_{a_i,r_i}\gamma_{R_i}C$ and $\bar C_{-i}=\gamma_{a_i,r_i}\hat\gamma\gamma_{R_{-i}}C$, where $C$ is the unit circle at the origin, $R_i$ and $R_{-i}$ are the radii of $C_i$ and $\bar C_{-i}$ respectively, and $\hat\gamma$ is the inverse map
\begin{equation}\label{gamma1z}
\hat\gamma(z)=\frac1z.
\end{equation}
The product of the radii of the two circles is $R_i\,R_{-i}=p_i$. We refer the reader to Appendix C of \cite{Gaberdiel:2010jf} for more details on the Schottky parametrization.
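The structure of \eqref{sch-gens} can be verified symbolically by composing the maps as $2\times2$ matrices: $\gamma_{a_i,r_i}$ sends $0\mapsto a_i$ and $\infty\mapsto r_i$, and the generator $\gamma_i$ fixes both $a_i$ and $r_i$ (a sympy sketch):

```python
import sympy as sp

z, a, r, p = sp.symbols('z a r p')

A = sp.Matrix([[r, a], [1, 1]])   # gamma_{a_i, r_i}(z) = (r z + a)/(z + 1)
P = sp.Matrix([[p, 0], [0, 1]])   # gamma_{p_i}(z) = p z
M = sp.simplify(A*P*A.inv())      # Schottky generator gamma_i

def mobius(T, w):
    return (T[0, 0]*w + T[0, 1])/(T[1, 0]*w + T[1, 1])

d0 = sp.simplify(mobius(A, 0) - a)            # gamma_{a,r}(0) = a_i
dinf = sp.limit(mobius(A, z), z, sp.oo) - r   # gamma_{a,r}(oo) = r_i
da = sp.simplify(mobius(M, a) - a)            # gamma_i fixes a_i
dr = sp.simplify(mobius(M, r) - r)            # gamma_i fixes r_i
```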
The partition function of a genus-$g$ Riemann surface uniformised by the Schottky group is given by the following power series expansion in $p_i$ \cite{Gaberdiel:2010jf}:\footnote{Actually, the formula \eqref{Zg} just gives the partition function up to a factor of the form $e^{-cF_0}$; in other words, in an expansion of the free energy $F:=-\ln Z_g$ in $1/c$, it only gives the order $c^0$ and higher terms. The order-$c^1$ term $cF_0$ depends on the full metric on the Riemann surface, not just the complex structure. Its calculation within the context of the sewing construction is explained in appendix \ref{orderc}.}
\begin{equation}\label{Zg}
Z_g=\sum_{h_1,\cdots,\,h_g}p_1^{h_1}\cdots p_g^{h_g}\;C_{h_1,\ldots,\,h_g}(a_3,\ldots,a_g,r_2,\ldots,r_g).
\end{equation}
In the sewing construction, a handle is replaced by the boundary states inserted at the centers of the two disks. The functions $C_{h_1,\cdots,h_g}$ are $2g$-point functions on the Riemann sphere and $h_i$ is the conformal dimension of the operators inserted at the $i^{\rm{th}}$ pair of disks. A schematic picture of the sewing construction is shown in figure \ref{fig-sewing}. In the above equation we have fixed the positions $a_1=0$, $r_1=\infty$, and $a_2=1$. The $2g$-point functions $C_{h_1,\cdots,h_g}$, whose ingredients we will explain in the next paragraphs, are sums over products of vertex operators of the form
\begin{equation}\label{Ch1hg}
C_{h_1,\cdots,h_g}=\sum_{\phi_i,\psi_i\in{{\cal H}_{h_i}}}\prod_{i=1}^{g}G_{\phi_i\psi_i}^{-1}\bigg\langle\prod_{i=1}^{g}V^{out}(\psi_i,r_i)\;\;V^{in}(\phi_i,a_i)\bigg\rangle,
\end{equation}
where $G$ is the Zamolodchikov metric defined below in (\ref{G-ii}), and ${\cal H}_{h_i}$ is the Hilbert space of states of dimension $h_i$.
These expressions for the vertex operators should be understood as follows. Under a M\"obius transformation $\gamma(z)$, a vertex operator $V(\phi,z)$ transforms as \cite{Gaberdiel:1999mc}
\begin{equation}\label{U}
V\bigg(U\Big(\gamma(z)\Big)\phi,\gamma(z)\bigg)=V\bigg(\gamma^{\prime}(z)^{L_0}\,e^{\frac{\gamma^{\prime\prime}(z)}{2\gamma^\prime(z)}\,L_1}\phi,\gamma(z)\bigg),
\end{equation}
where
\begin{equation}
\gamma^\prime(z)=\frac{d\gamma}{dz},\qquad\gamma^{\prime\prime}(z)=\frac{d^2\gamma}{dz^2}.
\end{equation}
For the M\"obius transformation (\ref{mobius-i}) we have
\begin{eqnarray}
\gamma_{a_i,r_i}^\prime(z)\!\!\!&=&\!\!\!\frac{(r_i-a_i)}{(z+1)^2},\qquad\gamma_{a_i,r_i}^{\prime\prime}(z)=-2\,\frac{(r_i-a_i)}{(z+1)^3},\label{gammaa}\nonumber\\
\Big(\gamma_{a_i,r_i}\hat\gamma\Big)^\prime(z)\!\!\!&=&\!\!\!\frac{(a_i-r_i)}{(z+1)^2},\qquad\Big(\gamma_{a_i,r_i}\hat\gamma\Big)^{\prime\prime}(z)=-2\,\frac{(a_i-r_i)}{(z+1)^3}.\label{gammar}
\end{eqnarray}
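The derivatives above can be checked symbolically. As the defining equation \eqref{mobius-i} lies outside this excerpt, we assume the form $\gamma_{a_i,r_i}(z)=(a_i+r_iz)/(1+z)$, which reproduces all four expressions; a minimal sympy sketch:

```python
import sympy as sp

z, a, r = sp.symbols('z a r')

# Assumed explicit form of the Mobius map gamma_{a,r}(z) = (a + r z)/(1 + z),
# consistent with the derivatives quoted in the text.
gamma = (a + r*z) / (1 + z)
# Composition with the inverse map gamma-hat(z) = 1/z.
gamma_hat = gamma.subs(z, 1/z)

# First and second derivatives of gamma_{a,r}.
assert sp.simplify(sp.diff(gamma, z) - (r - a)/(z + 1)**2) == 0
assert sp.simplify(sp.diff(gamma, z, 2) + 2*(r - a)/(z + 1)**3) == 0

# First and second derivatives of gamma_{a,r} composed with gamma-hat.
assert sp.simplify(sp.diff(gamma_hat, z) - (a - r)/(z + 1)**2) == 0
assert sp.simplify(sp.diff(gamma_hat, z, 2) + 2*(a - r)/(z + 1)**3) == 0
```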
The ``in" and ``out" vertex operators in the sewing construction (\ref{Ch1hg}) then transform as
\begin{equation}\label{Vin}
V^{in}(\phi_i,a_i)=V\bigg(U\Big(\gamma_{a_i,r_i}(z=0)\Big)\phi_i,\gamma_{a_i,r_i}(z=0)\bigg)=(r_i-a_i)^{L_0}e^{-L_1}\phi_i(a_i),
\end{equation}
and
\begin{equation}\label{Vout}
V^{out}(\psi_i,r_i)=V\bigg(U\Big(\gamma_{a_i,r_i}\hat\gamma(z=0)\Big)\psi_i,\gamma_{a_i,r_i}\hat\gamma(z=0)\bigg)=(-1)^{L_0}(r_i-a_i)^{L_0}e^{-L_1}\psi_i(r_i).
\end{equation}
For $i=1$, \emph{i.e.} for the handle with the two ends at $(0,\infty)$, one can perform an extra M\"obius transformation under which the two maps at zero and infinity become the identity and the inverse map, respectively. This is described in the next subsection. We note that if $\phi_i$ and $\psi_i$ are quasi-primaries, then the vertex operators are given by
\es{Vinout-qp}{
V^{in}(\phi_{qp},a_i)&=(r_i-a_i)^{h_{\phi_{qp}}}\phi_{qp}(a_i),\\
V^{out}(\psi_{qp},r_i)&=(-1)^{h_{\psi_{qp}}}(r_i-a_i)^{h_{\psi_{qp}}}\psi_{qp}(r_i).}
\subsubsection{Genus two}
We now specialize to genus-two Riemann surfaces. The partition function is given by
\begin{equation}\label{Z2}
Z_{g=2}=\sum_{h_1,h_2=0}^{\infty}p_1^{h_1}p_2^{h_2}C_{h_1,h_2}(x),
\end{equation}
where we have defined $r_2=x$. Using (\ref{Ch1hg}), the functions $C_{h_1,h_2}(x)$ are found to be
\eq{Ch1h2}{C_{h_1,h_2}(x)=\sum_{\phi_i,\psi_i\in{{\cal H}_{h_i}}}G_{\phi_1\psi_1}^{-1}G_{\phi_2\psi_2}^{-1}\bigg\langle V^{out}(\psi_1,\infty)\;V^{out}(\psi_2,x)\;V^{in}(\phi_2,1)\;V^{in}(\phi_1,0)\bigg\rangle~.}
These two formulae apply in general. For our purposes of computing $Z_{\rm vac}$, we only allow Virasoro descendants of the identity to be inserted at the boundary circles of the handles as in figure \ref{fig-sewing-vac}. Henceforth, we refer to \eqref{Z2} with the understanding that we compute $Z_{\rm vac}$ specifically.
Let us now define the vertex operators needed in \eqref{Ch1h2}, starting with those at $(0,\infty)$. The functions $C_{h_1,h_2}(x)$ are invariant under the map $\gamma_{a_i,r_i}\to\gamma_{a_i,r_i}\gamma_{t}$, where $\gamma_t(z)=tz$, $t\in\mathbb C^*$. For the $i=1$ handle with its two ends located at $a_1=0$ and $r_1=\infty$, we consider a M\"obius transformation of the form $\gamma_{a_1,r_1}\gamma_{1/r_1}$ and find
\begin{equation}\label{U0infty}
\gamma_{a_1,r_1}\gamma_{\frac1{r_1}}=\frac z{1+\frac z{r_1}}\Big|_{r_1\to\infty}=z,\qquad\qquad
\gamma_{a_1,r_1}\gamma_{\frac1{r_1}}\hat\gamma=\frac1{z+\frac1{r_1}}\Big|_{r_1\to\infty}=\frac1z.
\end{equation}
This therefore gives the identity map for $a_1=0$ and the inverse map for $r_1=\infty$. The vertex operator at the origin is simply
\eq{V0}{V^{in}(\phi_1,0) = V(\phi_1,0) = \phi_1(0).}
The vertex operator at infinity follows from using
\begin{equation}
\hat\gamma^\prime(z)=-\frac1{z^2},\qquad\hat\gamma^{\prime\prime}(z)=\frac2{z^3},
\end{equation}
which yields
\begin{equation}\label{U1/z}
V^{out}(\psi_1,\infty)=V\bigg(U\Big(\hat\gamma(z)\Big)\psi_1,\infty\bigg)=\lim_{z\to\infty}(-1)^{L_0}z^{2L_0}\,e^{-z\,L_1}\psi_1(z).
\end{equation}
For the handle with vertices at $(a_2,r_2)=(1,x)$, we use (\ref{Vin})--(\ref{Vout}) to read off
\begin{eqnarray}
V^{in}(\phi_2,1)\!\!\!&=&\!\!\!(x-1)^{L_0}e^{-L_1}\phi_2(1),\label{U1x-i}\\
V^{out}(\psi_2,x)\!\!\!&=&\!\!\!(-1)^{L_0}(x-1)^{L_0}e^{-L_1}\psi_2(x).\label{U1x-ii}
\end{eqnarray}
The Zamolodchikov metric is defined in terms of the in and out vertex operators as\footnote{We note that our formulae (\ref{Vout}) and (\ref{U1/z}) contain an extra factor of $(-1)^{L_0}$ compared to the formulae in Appendix C of \cite{Gaberdiel:2010jf}. The reason is that we choose a different convention than that of \cite{Gaberdiel:2010jf}. In our convention $G$ is the Zamolodchikov metric whereas in \cite{Gaberdiel:2010jf} their metric $\hat G$ is a metric on the space of states which is related to $G$ via $\hat G_{\phi\psi}=G_{(-1)^{L_0}\phi\psi}$.}
\begin{equation}\label{G-ii}
G_{\phi\psi}=\bigg\langle V^{out}(\psi,\infty) V^{in}(\phi,0)\bigg\rangle.
\end{equation}
We choose an orthogonal basis of states at each level by diagonalizing the Gram matrix. The Zamolodchikov metric is thus diagonal and the ingoing and outgoing vertex operators are the same up to M\"obius transformations. Consequently, we can (and will) define the norm of the states as $\mathcal N_{\phi}\equiv G_{\phi\phi}$.
To summarize, the following is the prescription for constructing the genus-two partition function: insert vertex operators \eqref{V0}, (\ref{U1/z})--(\ref{U1x-ii}) and the Zamolodchikov metric \eqref{G-ii} into (\ref{Ch1h2}) to evaluate $C_{h_1,h_2}(x)$, and sum over these using (\ref{Z2}).
\subsection{Four-point functions $C_{h_1,h_2}(x)$}\label{iv-ii}
We will momentarily compute some of the functions $C_{h_1,h_2}(x)$ defined in (\ref{Ch1h2}). Before doing so, it is useful to elucidate some of their general properties.
First, these functions are symmetric under the exchange of the positions of the two handles:
\begin{equation}
C_{h_1,h_2}(x)=C_{h_2,h_1}(x)~.
\end{equation}
When $h_1=0$ or $h_2=0$,
\begin{equation}\label{c0h}
C_{0,h}(x) = C_{h,0}(x) = d(h)~,
\end{equation}
where $d(h)$ is the degeneracy of operators at level $h$. This follows from the definition of the vertex operators and of the $C_{h_1,h_2}(x)$ themselves. It can also be understood intuitively: replacing either handle with two insertions of the identity reduces the genus-two partition function to the torus partition function, the holomorphic half of which is $\mathop{\rm Tr} p^{L_0} = \sum_h d(h) p^h$.\footnote{Note the $c$-independence of this quantity. When summing over the vacuum module only, the dimension of each state is fixed by conformal symmetry, and hence unrenormalized. This is the CFT statement of the one-loop exactness of the pure gravity partition function on a solid torus.}
Thus, \eqref{Z2} implies (\ref{c0h}).
Now we consider the $x$-dependence of $C_{h_1,h_2}(x)$. For general $x$, they obey
\begin{equation}
C_{h_1,h_2}(x) = C_{h_1,h_2}(1/x)~,
\end{equation}
which is a consequence of modularity with respect to $Sp(4,\mathbb{Z})$. Taking various limits in $x$ corresponds to taking OPE limits of the (dressed) four-point functions defining $C_{h_1,h_2}(x)$. The simplest one is $x\rightarrow1$, which describes the fusion of two ends of the same handle. In this case,
\begin{equation}\label{cx1}
C_{h_1,h_2}(1) = d(h_1)\times d(h_2)~.
\end{equation}
This again follows by definition, and is necessary for the partition function \eqref{Z2} to factorize in the separating degeneration limit.
More subtle are the equivalent OPE limits $x\rightarrow 0$ and $x\rightarrow\infty$, which describe the fusion of two ends of different handles. These limits are singular. In Appendix \ref{app-OPE}, we show that in a $1/c$ expansion, the leading powers of $x$ as $x\rightarrow 0$ are correlated with powers of $1/c$ as follows:
\begin{equation}\label{pc2}
\lim_{c\rightarrow\infty}\lim_{x\rightarrow0}C_{h_1,h-h_1}(x) \sim O(x^{-h})+ {1\over c} O(x^{-h+2}) + \left(\sum_{n=2}^{\infty}{1\over c^n}\right)O(x^{-h+4}).
\end{equation}
We have defined $h_2=h-h_1$, and are assuming $h>h_1>0$ because $C_{0,h}(x)$ is constant. We are ignoring $h_1$- and $h$-dependent coefficients at each order, and displaying only the leading singular behavior at each order in $1/c$. The last term means that at $\mathcal{O}(1/c^2)$ and {\it all} orders beyond, the leading divergence scales as $O(x^{-h+4})$.
We now proceed to compute $C_{h_1,h_2}(x)$ explicitly for low values of $h_1$ and $h_2$. A word on notation: henceforth, we denote the set of operators at level $h$ above the ground state as $\lbrace {\cal O}_h^{(i)}\rbrace$, where $i=1,2,\ldots, d(h)$.
\subsubsection{$C_{h,0}(x)$}\label{iv-ii-i}
In this case, the identity operator propagates through one of the handles, so the four-point functions reduce to two-point functions
\es{C0h}{
C_{h,0}(x)&=\sum_{\phi,\psi\in{{\cal H}_{h}}}G_{\phi\psi}^{-1}\bigg\langle V^{out}(\psi,\infty)\;V^{in}(\phi,0)\bigg\rangle,\\
C_{0,h}(x)&=\sum_{\phi,\psi\in{{\cal H}_{h}}}G_{\phi\psi}^{-1}\bigg\langle V^{out}(\psi,x)\;V^{in}(\phi,1)\bigg\rangle.}
As discussed earlier, $C_{h,0}(x)=d(h)$; this is manifest from the definition \eqref{G-ii} of the Zamolodchikov metric. It is less obvious from \eqref{C0h} that $C_{0,h}(x)=d(h)$. Hence, we find it instructive to compute two of the $C_{0,h}(x)$ in detail, to illustrate how to use the method of \cite{Gaberdiel:2010jf} outlined above. In the first example we consider a quasi-primary state and in the second a secondary state. The latter is particularly useful, as secondary operators transform nontrivially under M\"obius transformations, so care must be taken in computing their correlation functions.
At level 2 ($h=2$) there is only one state, the stress tensor $T=L_{-2}|0\rangle$, which is a quasi-primary with norm ${\cal N}_T=\frac c2$. From \eqref{U1x-i}--\eqref{U1x-ii}, the vertex operators at $x$ and $1$ are
\es{Tvertex}{V^{out}(T,x) &= (x-1)^2 \,T(x),\\
V^{in}(T,1) &= (x-1)^2 \,T(1).}
Then we have, as expected,
\es{C02}{
C_{0,2}(x)&= {\cal N}_T^{-1}\bigg\langle V^{out}(T,x)\;V^{in}(T,1)\bigg\rangle = 1~,}
where we used the stress tensor two-point function
\eq{TT}{\langle T(x) T(1) \rangle = {c\over 2}{1\over (x-1)^4}~.}
At level 3, there is again one state $\mathcal{O}_3=\partial T=L_{-3}|0\rangle$, this time a secondary state with norm ${\cal N}_{\mathcal{O}_3}=2c$. The vertex operators at $x$ and $1$ are now, from \eqref{U1x-i}--\eqref{U1x-ii},
\es{}{V^{out}({\cal O}_3,x) &= -(x-1)^3\partial T(x)-4(x-1)^2T(x),\\
V^{in}({\cal O}_3,1) &= (x-1)^3\partial T(1)-4(x-1)^2T(1).}
Using \eqref{TT} we again have, as expected,
\begin{eqnarray}\label{C03}
C_{0,3}(x)\!\!\!&=&\!\!\!\mathcal N_{\mathcal{O}_3}^{-1}\bigg\langle V^{out}(\mathcal{O}_3,x)\;V^{in}(\mathcal{O}_3,1)\bigg\rangle\nonumber\\
\!\!\!&=&\!\!\!\frac1{2c}\bigg\langle \Big(-(x-1)^3\,\partial T(x)-4(x-1)^2\,T(x)\Big)\;\Big((x-1)^3\partial\,T(1)-4\,(x-1)^2T(1)\Big)\bigg\rangle\nonumber\\
\!\!\!&=&\!\!\!1.
\end{eqnarray}
It is also useful to see the vertex operator at infinity, which carries non-trivial dressing on account of ${\cal O}_3$ being a secondary operator:
\eq{L-3i}{V^{out}({\cal O}_3,\infty) = \lim_{z\to\infty} V\Big((-1)^{L_0}z^{2L_0}\,e^{-z\,L_1}{\cal O}_3,z\Big) = \lim_{z\rightarrow\infty} \Big(-z^6 \partial T(z) - 4z^5 T (z)\Big).}
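Both level-two and level-three checks above reduce to derivatives of the stress tensor two-point function \eqref{TT}, so they can be verified in a few lines of sympy (the vertex-operator dressings are transcribed from the expressions above):

```python
import sympy as sp

x, c, u, v = sp.symbols('x c u v')

# <T(u)T(v)> = (c/2)/(u-v)^4; correlators involving dT follow by
# differentiating with respect to the insertion points.
TT    = (c/2) / (u - v)**4
dT_T  = sp.diff(TT, u)       # <dT(u) T(v)>
T_dT  = sp.diff(TT, v)       # <T(u) dT(v)>
dT_dT = sp.diff(TT, u, v)    # <dT(u) dT(v)>

sub = {u: x, v: 1}

# C_{0,2}: quasi-primary T with norm N_T = c/2, vertex factor (x-1)^2 at each end.
C02 = ((x - 1)**4 * TT.subs(sub)) / (c/2)
assert sp.simplify(C02) == 1

# C_{0,3}: secondary O_3 = dT with norm 2c; vertex operators
#   V^out(O_3, x) = -(x-1)^3 dT(x) - 4 (x-1)^2 T(x),
#   V^in (O_3, 1) =  (x-1)^3 dT(1) - 4 (x-1)^2 T(1).
corr = (-(x - 1)**6 * dT_dT + 4*(x - 1)**5 * dT_T
        - 4*(x - 1)**5 * T_dT + 16*(x - 1)**4 * TT).subs(sub)
C03 = corr / (2*c)
assert sp.simplify(C03) == 1
```

Both results are independent of $x$ and $c$, as required by $C_{0,h}(x)=d(h)=1$ at levels two and three.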
We next move on to the computation of $C_{h_1,h_2}(x)$ for $h_1\ne0$ and $h_2\ne0$. We compute the requisite four-point functions using the methods described in section \ref{iii}. The transformation properties of the vertex operators are evaluated following the same procedure shown in the above examples; accordingly, the presentation here is streamlined, with some details of the vertex operator transformations relegated to Appendix \ref{app-Cs}. There, we also list the operators and their norms through level six of the vacuum module.
\subsubsection{$C_{h,2}(x)$}\label{iv-ii-ii}
First, consider $C_{2,2}(x)$, which corresponds to the four-point function of four stress tensors, each dressed with appropriate factors of $x$ and $z$ (recall \eqref{Ch1h2}). We find that
\begin{eqnarray}\label{C22}
C_{2,2}(x)\!\!\!&=&\!\!\!\frac1{\left(\frac c2\right)^2}\lim_{z\to\infty}\bigg\langle z^{4}T(z)\;\;(x-1)^2\,T(x)\;\;(x-1)^2T(1)\;\;T(0)\bigg\rangle\nonumber\\
\!\!\!&=&\!\!\!\left(1+(x-1)^4+\frac{(x-1)^4}{x^4}\right)+\frac8c\,\frac{(x-1)^2\,(1-x+x^2)}{x^2}.
\end{eqnarray}
We note that $C_{2,2}(x)$ is manifestly symmetric under $x\to1/x$, as required.
We next compute $C_{3,2}(x)$. This can be done by taking the derivative of $C_{2,2}(x)$ with respect to the sphere coordinates at the insertion points of $\mathcal{O}_3$, or by direct computation of the four-point function using the Virasoro mode expansion formula \eqref{eq8}. We obtain
\begin{eqnarray}\label{C32}
C_{3,2}(x)\!\!\!&=&\!\!\!\frac1{x^5}\,\Big(4-16x+24x^2-16x^3+4x^4+x^5+4x^6-16x^7+24x^8-16x^9+4x^{10}\Big)\nonumber\\
\!\!\!&+&\!\!\!\frac1c\,\frac{2\,(x-1)^2}{x^3}\,\Big(4+x-4x^2+x^3+4x^4\Big).
\end{eqnarray}
We used \eqref{Tvertex} and \eqref{L-3i} to write down the necessary vertex operators. One can explicitly check that $C_{3,2}(x)=C_{2,3}(x)$, as required.
We next move to level four and evaluate $C_{4,2}(x)$. There are two orthogonal states at level four,
\begin{equation}\label{O4}
\mathcal{O}_{4}^{(1)}=\Lambda = \left(L_{-2}^2-\frac35L_{-4}\right)|0\rangle~, \quad \mathcal{O}_4^{(2)}=L_{-4}|0\rangle~,
\end{equation}
where $\Lambda$ is the quasi-primary defined in \eqref{Lamb}, and $\mathcal{O}_4^{(2)}=\partial^2 T/2$ is a secondary with norm ${\cal N}_{\mathcal{O}_4^{(2)}}=5c$. We obtain
\begin{eqnarray}\label{C42}
C_{4,2}(x)\!\!\!&=&\!\!\!\frac2{x^6}\,\Big(5 - 20 x + 31 x^2 - 24 x^3 + 11 x^4 - 4 x^5 + 3 x^6\\
&&\quad - 4 x^7+11 x^8-24 x^9 + 31 x^{10}-20 x^{11} + 5 x^{12}\Big)\nonumber\\
&&\!\!\!\!\!\!\!\!\!\!\!+\frac1c\,\frac{4\,(x-1)^2}{x^4}\,\Big(4 - 3 x + 10 x^2 - 14 x^3 + 10 x^4 - 3 x^5 + 4 x^6\Big).\nonumber
\end{eqnarray}
Note that the finite expansion in $1/c$ is at first surprising because ${\cal N}_{\Lambda}^{-1}$ has an infinite $1/c$ expansion; there are non-trivial cancellations between the two correlators. We will explain this in a moment.
For the remaining computations, we will be briefer. (We remind the reader of Appendix \ref{app-Cs} containing further details.) At level five, we find
\begin{eqnarray}\label{C25}
C_{5,2}(x)\!\!\!&=&\!\!\!\frac1{x^7}\,\Big(20-80x+124x^2-95x^3+40x^4-10x^5\\
&&\quad+4x^7-10x^9+40x^{10}-95x^{11}+124x^{12}-80x^{13}+20x^{14}\Big)\nonumber\\
\!\!\!&+&\!\!\!\frac1c\,\frac{4\,(x-1)^2}{x^5}\,\Big(6-6x+7x^2+5x^3-14x^4+5x^5+7x^6-6x^7+6x^8\Big).\nonumber
\end{eqnarray}
At level six, we find
\begin{eqnarray}\label{C26}
&&\!\!\!\!\!\!\!\!\!\!C_{6,2}(x)=\frac1{x^8}\,\Big(35-140x+220x^2-172x^3+67x^4-8x^5+2x^6-8x^7+12x^8\\
&&\qquad\qquad\,-8x^9+2x^{10}-8x^{11}+67x^{12}-172x^{13}+220x^{14}-140x^{15}+35x^{16}\Big)\nonumber\\
&&\qquad+\frac1c\,\frac{4\,(x-1)^2}{x^6}\,\Big(8-7x+10x^2-8x^3+46x^4-74x^5+46x^6-8x^7\nonumber\\
&&\qquad\qquad\qquad\qquad\;+10x^8-7x^9+8x^{10}\Big).\nonumber
\end{eqnarray}
We observe that all the functions $C_{h,2}(x)$ we have computed so far contain a term proportional to $1/c$ and a term constant in $c$. In fact, this is true for all $h$. The reason is that the sum over $h$ of $C_{h,2}(x)$ is related to the two-point function of the stress tensor on the torus with the insertion of a vacuum projector:
\begin{equation}
\ev{P_{\rm vac}T(1)T(x)} = \frac c2\frac1{(x-1)^4}\sum_hC_{h,2}(x)p_1^h
\end{equation}
(where $p_1=e^{2\pi i \tau}$). This two-point function can in turn be obtained by differentiating the vacuum free energy $F_{\rm vac}(T^2)$ with respect to the metric. This free energy was shown at the end of subsection \ref{powers} to be one-loop exact, in other words to contain only terms linear and constant in $c$. Hence, for all $h$, $C_{h,2}(x)$ must contain only terms constant and inversely proportional to $c$.
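The structural claims made for the $C_{h,2}(x)$ above, namely the $x\to1/x$ symmetry and the separating-limit value $C_{h,2}(1)=d(h)\,d(2)$, can be verified directly. Below is a sympy sketch with the expressions transcribed from \eqref{C22}--\eqref{C26}:

```python
import sympy as sp

x, c = sp.symbols('x c')

# C_{h,2}(x) for h = 2,...,6, transcribed from the equations in the text.
C = {
 2: 1 + (x-1)**4 + (x-1)**4/x**4
    + 8*(x-1)**2*(1 - x + x**2)/(c*x**2),
 3: (4 - 16*x + 24*x**2 - 16*x**3 + 4*x**4 + x**5 + 4*x**6 - 16*x**7
     + 24*x**8 - 16*x**9 + 4*x**10)/x**5
    + 2*(x-1)**2*(4 + x - 4*x**2 + x**3 + 4*x**4)/(c*x**3),
 4: 2*(5 - 20*x + 31*x**2 - 24*x**3 + 11*x**4 - 4*x**5 + 3*x**6 - 4*x**7
     + 11*x**8 - 24*x**9 + 31*x**10 - 20*x**11 + 5*x**12)/x**6
    + 4*(x-1)**2*(4 - 3*x + 10*x**2 - 14*x**3 + 10*x**4 - 3*x**5
     + 4*x**6)/(c*x**4),
 5: (20 - 80*x + 124*x**2 - 95*x**3 + 40*x**4 - 10*x**5 + 4*x**7 - 10*x**9
     + 40*x**10 - 95*x**11 + 124*x**12 - 80*x**13 + 20*x**14)/x**7
    + 4*(x-1)**2*(6 - 6*x + 7*x**2 + 5*x**3 - 14*x**4 + 5*x**5 + 7*x**6
     - 6*x**7 + 6*x**8)/(c*x**5),
 6: (35 - 140*x + 220*x**2 - 172*x**3 + 67*x**4 - 8*x**5 + 2*x**6 - 8*x**7
     + 12*x**8 - 8*x**9 + 2*x**10 - 8*x**11 + 67*x**12 - 172*x**13
     + 220*x**14 - 140*x**15 + 35*x**16)/x**8
    + 4*(x-1)**2*(8 - 7*x + 10*x**2 - 8*x**3 + 46*x**4 - 74*x**5 + 46*x**6
     - 8*x**7 + 10*x**8 - 7*x**9 + 8*x**10)/(c*x**6),
}

# Degeneracies d(h) of the Virasoro vacuum module at levels 2 through 6.
d = {2: 1, 3: 1, 4: 2, 5: 2, 6: 4}

for h, Ch2 in C.items():
    # Modular symmetry x -> 1/x, and the separating limit C_{h,2}(1) = d(h) d(2).
    assert sp.simplify(Ch2 - Ch2.subs(x, 1/x)) == 0
    assert sp.simplify(Ch2.subs(x, 1)) == d[h] * d[2]
```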
\subsubsection{$C_{h,3}(x)$}\label{iv-ii-iii}
We next compute the functions $C_{h_1,h_2}(x)$ with $h_1\ge3$ and $h_2=3$. The first function is $C_{3,3}(x)$ which corresponds to the four-point function of four $\partial T$'s. This correlation function can be evaluated by taking the derivatives of $C_{2,2}(x)$ with respect to the sphere coordinates at the location of the four operators, or by direct computation. We find
\begin{eqnarray}\label{C33}
C_{3,3}(x)\!\!\!&=&\!\!\!\frac{1}{x^6}\,\Big(25-110x+191x^2-164x^3+71x^4-14x^5\\
&&\quad+3x^6-14x^7+71x^8-164x^9+191x^{10}-110 x^{11}+25x^{12}\Big)\nonumber\\
&&\!\!\!\!\!\!\!\!\!\!+\frac1c\,\frac{(x-1)^2}{x^4}\,\Big(18-6x+x^2-8x^3+x^4-6x^5+18x^6\Big).\nonumber
\end{eqnarray}
Next, for $C_{4,3}(x)$ we find
\begin{eqnarray}\label{C34}
&&\!\!\!\!\!\!\!\!\!\!C_{4,3}(x)=\frac2{x^7}\,\Big(45-210x+399x^2-396x^3+219x^4-66x^5+9x^6\\
&&\qquad\qquad+x^7+9x^8-66x^9+219x^{10}-396x^{11}+399x^{12}-210x^{13}+45x^{14}\Big)\nonumber\\
&&\qquad+\frac1c\,\frac{(x-1)^2}{x^5}\,\Big(64-83x+68x^2+3x^3-56x^4+3x^5+68x^6-83x^7+64x^8\Big),\nonumber
\end{eqnarray}
and for $C_{5,3}(x)$ we find
\begin{eqnarray}\label{C35}
&&\!\!\!\!\!\!\!\!\!\!C_{5,3}(x)=\frac1{x^8}\,\Big(245-1190x+2380x^2-2526x^3+1530x^4\\
&&\qquad\qquad-530x^5+100x^6-10x^7+4x^8-10x^9+100x^{10}\nonumber\\
&&\qquad\qquad-530x^{11}+1530x^{12}-2526x^{13}+2380x^{14}-1190x^{15}+245x^{16}\Big)\nonumber\\
\!\!\!&+&\!\!\!\frac1c\,\frac{(x-1)^2}{x^6}\,\Big(150-264x+201x^2-32x^3+3x^4-56x^5+3x^6\nonumber\\
&&\qquad\qquad\;-32x^7+201x^8-264x^9+150x^{10}\Big).\nonumber
\end{eqnarray}
Again, all of the functions $C_{h,3}(x)$ evaluated so far truncate at order $1/c$ in a large-$c$ expansion. This is because all four-point functions in $C_{h,3}(x)$ can be computed by taking derivatives of stress tensors in $C_{h,2}(x)$, which do not affect the $c$-dependence of the correlators.
\subsubsection{$C_{4,4}(x)$}\label{iv-ii-iv}
The function $C_{4,4}(x)$ is a linear combination of four four-point functions:
\begin{equation}\label{C44}
C_{4,4}(x)=\sum_{i,j=1}^2{\cal N}_{\mathcal{O}_{4}^{(i)}}^{-1}{\cal N}_{\mathcal{O}_{4}^{(j)}}^{-1}\bigg\langle V^{out}(\mathcal{O}_{4}^{(i)},\infty)\;V^{out}(\mathcal{O}_{4}^{(j)},x)\;V^{in}(\mathcal{O}_{4}^{(j)},1)\;V^{in}(\mathcal{O}_{4}^{(i)},0)\bigg\rangle~.
\end{equation}
By the same argument below \eqref{C25} and \eqref{C35}, the terms which contain at least one pair of the secondary operator $\mathcal{O}_4^{(2)}=\partial^2T/2$ truncate at order $1/c$ in a large-$c$ expansion. The only term in $C_{4,4}(x)$ which could potentially contribute at higher orders in $1/c$ is the four-point function of four quasi-primaries $\Lambda$, defined in \eqref{Lamb}. Let us focus on this contribution, which we call $C_{4,4}|_{\Lambda}(x)$.
Inserting the definitions of the vertex operators yields
\eq{446}{C_{4,4}|_{\Lambda}(x) = {\cal N}_{\Lambda}^{-2}(x-1)^8\lim_{z\rightarrow\infty} z^8\bigg \langle \Lambda(z)\, \Lambda(x)\, \Lambda(1) \,\Lambda(0) \bigg\rangle~.}
The norm of $\Lambda$ was given in \eqref{No4}. Substituting this and the results \eqref{F4h}--\eqref{348} for the four-point function obtained via the holomorphic bootstrap, we find
\begin{eqnarray}\label{C44-qp}
C_{4,4}|_{\Lambda}(x)\!\!\!&=&\!\!\!\frac{(1-x+x^2)^8}{x^8}+\left(\frac{32}c-8\right)\frac{(x-1)^2(1-x+x^2)^5}{x^6}\nonumber\\
\!\!\!&+&\!\!\!\frac{4\,(3704+590c+125c^2)}{5c\,(22+5c)}\frac{(x-1)^4(1-x+x^2)^2}{x^4}~.
\end{eqnarray}
Crucially to what follows, we observe that $C_{4,4}|_{\Lambda}(x)$ contributes to an infinite expansion in $1/c$. This comes entirely from the inverse norms in \eqref{446}.
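Both the crossing properties of \eqref{C44-qp} and the non-truncation of its $1/c$ expansion can be confirmed symbolically. The following sympy sketch, with the expression transcribed from \eqref{C44-qp}, also checks that the $\Lambda$ pair contributes exactly one unit to $C_{4,4}(1)=d(4)^2=4$:

```python
import sympy as sp

x, c, eps = sp.symbols('x c eps', positive=True)

# C_{4,4}|_Lambda(x), transcribed from the equation above; "pref" is the
# c-dependent prefactor of the most singular term.
pref = 4*(3704 + 590*c + 125*c**2)/(5*c*(22 + 5*c))
C44L = ((1 - x + x**2)**8/x**8
        + (32/c - 8)*(x - 1)**2*(1 - x + x**2)**5/x**6
        + pref*(x - 1)**4*(1 - x + x**2)**2/x**4)

# Crossing symmetry x -> 1/x, and the separating-limit value at x = 1.
assert sp.simplify(C44L - C44L.subs(x, 1/x)) == 0
assert sp.simplify(C44L.subs(x, 1)) == 1

# Non-truncation of the 1/c expansion: substituting c = 1/eps and expanding
# around eps = 0, the coefficient of eps^n is nonzero for all n checked.
ser = sp.series(pref.subs(c, 1/eps), eps, 0, 8).removeO()
assert all(sp.simplify(ser.coeff(eps, n)) != 0 for n in range(2, 7))
```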
\vskip .1 in
The collection of $C_{h_1,h_2}(x)$ computed in this subsection, plugged into the partition function \eqref{Z2}, forms one of our main computational results: namely, the first several terms in the Virasoro vacuum module contribution to the partition function of an arbitrary CFT on a genus-two Riemann surface, in the regime of Schottky parameters $p_1,p_2\ll1$.
\section{Free energy at large central charge, 3D gravity, and R\'enyi entropies}\label{v}
Having derived the first handful of terms in equation \eqref{Z2} for all $c$, we can trivially expand the result at large $c$. As we have discussed, this large-$c$ expansion may be interpreted as the semiclassical expansion of the pure 3D quantum gravity partition function around genus-two handlebody geometries with conformal boundary $\Sigma$ specified by Schottky parameters $\lbrace p_1,p_2,x\rbrace$. In subsection \ref{v-i}, we present some of our main results. First, we provide explicit contributions of the Virasoro vacuum representation to the CFT free energy at all orders in $1/c$, corresponding to all-loop free energies in the gravitational loop expansion. We also show that, at least in the perturbative regime $p_1,p_2\ll1$, the loop expansion does not truncate except when $\Sigma$ is the union of two tori.
We then proceed to subsections \ref{v-ii} and \ref{v-iii}, where we expand our general result \eqref{Z2} near two symmetric points in the genus-two moduli space: the replica surface ${\mathscr R}_{2,3}$ used to compute the R\'enyi entropy $S_3$ for two disjoint intervals in vacuum, and the point corresponding to the separating degeneration limit $x=1$. Our results will extend those of \cite{Chen:2013dxa} and \cite{Yin:2007gv}, respectively.
\subsection{All-loop results in 3D quantum gravity}\label{v-i}
We consider the $1/c$ expansion of the vacuum free energy, $F_{{\rm vac}}=-\log Z_{{\rm vac}}$:
\eq{}{F_{{\rm vac}} = \sum_{\ell=0}^{\infty} c^{1-\ell}F_{{\rm vac};\,\ell}~,}
where $\ell$ denotes the loop order. Its moduli-dependence is kept implicit. We can read off the loop corrections $F_{{\rm vac} ;\,\ell}$ from \eqref{Z2} upon expanding the $C_{h_1,h_2}(x)$ in $1/c$. We likewise expand these as
\eq{chhell}{C_{h_1,h_2}(x) = \sum_{\ell=1}^{\infty}c^{1-\ell}C_{h_1,h_2;\,\ell}(x)~.}
To begin, note that in the small $(p_1,p_2)$ expansion in which we work, both the one- and two-loop free energies are nonzero. This follows from the explicit results in section \ref{iv} and from the R\'enyi entropy computation \eqref{chenpatt}, but also from our general exposition of $c$-scaling of identity module correlators in section \ref{powers}.
More interesting is the question of whether there are higher-loop terms. For $\ell>2$, the $C_{h_1,h_2}(x)$ that we have computed all obey $C_{h_1,h_2;\,\ell}(x)=0$ except for $C_{4,4}|_{\Lambda}(x)$, computed in \eqref{C44-qp}, which clearly has an infinite expansion. Accordingly, the leading contribution to the three-loop free energy $F_{{\rm vac};\,3}$ in a small $(p_1,p_2)$ expansion can, and does, appear at $O(p_1^4p_2^4)$:
\eq{f30}{F_{{\rm vac};\,3}= p_1^4p_2^4\left(C_{4,4;\,3}(x) - {1\over 2} \left(C_{2,2;\,2}(x)\right)^2\right) + O(p_1^4p_2^5)~.}
There is no cancellation: instead, our results yield
\eq{f3}{F_{{\rm vac};\,3} = p_1^4p_2^4\,{13312 (x-1)^4(1-x+x^2)^2\over 25x^4} + O(p_1^4p_2^5)~.}
For $x\in \mathbb{R}$, this only vanishes at $x=1$. This point in moduli space corresponds to the strict separating degeneration limit. As the torus free energy is known to truncate at ${\cal O}(c^0)$ in a large $c$ expansion \cite{Maloney:2007ud}, the fact that $F_{{\rm vac};\,3}=0$ when $x=1$ is required by consistency. The interesting result proven here is that $F_{{\rm vac};\,3}$ is nonzero everywhere else on the real line.\footnote{Note that \eqref{f30} and \eqref{f3} also hold when $p_1=p_2$, because the only contribution at ${\cal O}(1/c^2)$ to $C_{h_1,h_2}(x)$ for $h_1+h_2=8$ comes from $C_{4,4}(x)$. On the other hand, if $x$ is a function of $p_1$ and $p_2$, higher order terms in \eqref{f3} are not necessarily suppressed. We will encounter such a situation in our discussion of R\'enyi entropy.}
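The coefficient $13312/25$ in \eqref{f3} follows from \eqref{f30} together with the $1/c$ piece of \eqref{C22} and the $1/c^2$ piece of \eqref{C44-qp}; a sympy check (expressions transcribed from those equations):

```python
import sympy as sp

x, c, eps = sp.symbols('x c eps')

# C_{2,2}(x) transcribed from eq. (C22); its 1/c piece is C_{2,2;2}(x).
C22 = (1 + (x - 1)**4 + (x - 1)**4/x**4
       + 8*(x - 1)**2*(1 - x + x**2)/(c*x**2))
C22_2 = C22.coeff(c, -1)

# The O(1/c^2) piece of C_{4,4}(x) comes only from the Lambda block (C44-qp).
pref = 4*(3704 + 590*c + 125*c**2)/(5*c*(22 + 5*c))
pref_2 = sp.series(pref.subs(c, 1/eps), eps, 0, 3).removeO().coeff(eps, 2)
C44_3 = pref_2*(x - 1)**4*(1 - x + x**2)**2/x**4

# Three-loop free energy at leading order in (p_1, p_2), eq. (f30):
F3 = sp.simplify(C44_3 - sp.Rational(1, 2)*C22_2**2)
target = sp.Rational(13312, 25)*(x - 1)**4*(1 - x + x**2)**2/x**4
assert sp.simplify(F3 - target) == 0
```

The nontrivial input is $C_{4,4;\,3}=\tfrac{14112}{25}$ times the rational function of $x$; subtracting $\tfrac12 C_{2,2;\,2}^2$ removes $32$ of it, leaving $13312/25$.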
In fact, the infinite $1/c$ expansion of $C_{4,4}(x)$ and the finite $1/c$ expansion of $C_{2,2}(x)$ together imply that the $\ell$-loop free energy $F_{{\rm vac};\,\ell}$ is nonzero for {\it all} $\ell$, at least in a small $(p_1,p_2)$ expansion. The reason is simply that the term at $O(p_1^4p_2^4)$ cannot be cancelled by higher order terms in $p_1$ and $p_2$. For $\ell>3$---that is, at ${\cal O}(1/c^3)$ and beyond---$F_{{\rm vac};\,\ell}$ is given to leading order in $p_1,p_2$ as
\eq{allloop}{F_{{\rm vac};\,\ell>3} = p_1^4p_2^4 ~C_{4,4;\,\ell>3}(x) + O(p_1^4p_2^5)}
with
\eq{c44e}{C_{4,4;\,\ell>3}(x) = { (x-1)^4(1-x+x^2)^2\over x^4}\cdot \left({4(3704+590c+125c^2)\over5c(5c+22)}\right)\Bigg|_{c^{1-\ell}}~,}
determined by \eqref{C44-qp}. From a technical standpoint, this non-trivial $1/c$ expansion arises from the inverse norms appearing in the sewing construction, cancelling the norms in the correlator $\langle \Lambda\Lambda\Lambda\Lambda\rangle$.
Interpreted as a CFT result, \eqref{allloop} is an exact expression for the contribution of the Virasoro vacuum module to the genus-two free energy of any family of CFTs that admits a $1/c$ expansion. Interpreted as a pure gravity result, \eqref{allloop} is an explicit formula for all-loop free energies on genus-two handlebodies. The loop counting parameter in the bulk is $G_N=3R_{\rm AdS}/2c$. In contrast to the one-loop exactness at genus one, the genus-two free energy is not exact at {\it any} loop order.
Strictly speaking, we have so far established that the semiclassical expansion does not truncate for any real $x\neq 1$. What about complex $x$? In particular, the all-loop terms \eqref{f3} and \eqref{c44e} clearly vanish at $x=e^{\pm i\pi/3}$, the complex roots of $1-x+x^2=0$. This follows from the same property of the four-point function of $\Lambda$, as discussed below equation \eqref{348}. But this is a special feature of correlators of identical operators with $h/3\notin \mathbb{Z}$, so it will not persist to higher orders in the sewing expansion. For instance, $C_{4,5;\,\ell>2}(e^{\pm i\pi/3})\neq 0$, and likewise at all higher levels. Therefore, we have shown the following statement: {\it perturbatively in $(p_1,p_2)$, the loop expansion does not truncate for any genus-two handlebody except at the separating degeneration point.}
We note that at fixed order in $p_1$ and $p_2$, the $1/c$ expansion for $Z_{\rm vac}$ converges. This does not necessarily imply, however, that at a fixed point in moduli space (i.e.\ for fixed values of $p_1$ and $p_2$) the $1/c$ expansion converges. Indeed, since there are presumably other saddle point contributions to the path integral (coming from bulk handlebodies with different topology) one might expect that the series expansion of $Z_{\rm vac}$ is asymptotic in $1/c$. However, in the genus one case the series expansion for $Z_{\rm vac}$ converges---in fact, it truncates at order $c^0$. Thus it is an interesting open question whether the $1/c$ expansion for $Z_{\rm vac}$ converges at higher genus.
Finally, let us comment on the holographic interpretation of our result for the one-loop partition function, $Z_{{\rm vac};\,1}$. This can be viewed as a computation of the holomorphic half of the graviton handlebody determinant,\footnote{The first product in \eqref{grav} runs over primitive elements $\gamma\in{\cal P}\subset\Gamma$, defined as those elements that cannot be written as $\gamma=\beta^m$ for $\beta \in \Gamma$ and $m>1$. The eigenvalues of $\gamma$ are ${\rm eig}(\gamma) = q_{\gamma}^{\pm 1/2}$, and we do not count $\gamma$ and $\gamma^{-1}$ as distinct elements.}
\eq{grav}{Z_1^{\rm grav} = \prod_{\gamma \in {\cal P}} \prod_{n=2}^{\infty} {1\over |1-q_{\gamma}^n|^2}~.}
The novelty of our computation is that we work in the regime of $p_1,p_2\ll 1$ but for arbitrary $x$. This regime has not been probed directly in existing computations of \eqref{grav}. In \cite{Barrella:2013wja}, \eqref{grav} was computed for handlebodies asymptotic to the replica manifold for two-interval R\'enyi entropy in a short interval expansion; this has only a single modulus and requires $p_1=p_2 \ll 1$ and $x \gg 1$. In \cite{Yin:2007gv}, \eqref{grav} was computed near the separating degeneration limit where $\Sigma$ becomes the union of two tori, which requires $x \approx 1$ for arbitrary $(p_1,p_2)$.
\subsubsection{Higher spin theories}\label{v-i-i}
So far, we have restricted to the pure Virasoro sector of the CFT. The meaning and calculation of $Z_{{\rm vac}}$ are conceptually unmodified in the presence of higher spin currents. Along with the stress tensor, these Virasoro primaries live in the vacuum representation of an extended conformal symmetry, typically a $W$ algebra. In the computation of $Z_{{\rm vac}}$ by sewing, we now allow these currents and their normal ordered products to propagate through the handles. The resulting $Z_{{\rm vac}}$ is again of the form \eqref{Z2}, only with different coefficients $C_{h_1,h_2}(x)$.
The holographic dual of $Z_{{\rm vac}}$ in the presence of higher spin symmetry is the perturbative partition function of pure 3D higher spin gravity. A bulk Chern-Simons theory with connections valued in two copies of a Lie algebra ${\cal G}$ describes the vacuum sector of a CFT whose $W$ algebra is the Drinfeld-Sokolov reduction of ${\cal G}$ \cite{Campoleoni:2010zq}. Accordingly, the $1/c$ expansion of $Z_{{\rm vac}}$ for such a CFT yields the semiclassical loop expansion of the ${\cal G}\times{\cal G}$ Chern-Simons higher spin theory.
As a simple example, consider a CFT with $W_{3}$ symmetry, which contains a single higher spin current of spin three, $W$. Its presence will modify most of the $C_{h_1,h_2}(x)$ coefficients, starting with $C_{3,2}(x)$ and $C_{3,3}(x)$.\footnote{The generating function of quasi-primaries containing at least one $W$ current is given in Appendix B of \cite{Perlmutter:2013paa}.} We can easily compute these using the correlators of section \ref{iii}. The interesting term is $C_{3,3}(x)$. Denoting the contribution to $C_{3,3}(x)$ from the $W$ current four-point function as $\delta_W C_{3,3}(x)$, we find, using \eqref{F3} and \eqref{wcoeffs},
\es{}{\delta_WC_{3,3}(x) &:= {\cal N}_W^{-2} (x-1)^6 \lim_{z\rightarrow\infty} z^6 \langle W(z) W(x) W(1) W(0)\rangle \\
&= {(1-x+x^2)^6\over x^6} + {6(3-c)\over c} {(1-x)^2 (1-x+x^2)^3\over x^4}\\& + {3(5c^2-71c-102)\over c(5c+22)} {(1-x)^4 \over x^2}~.}
Expanded at large $c$, this yields an infinite series of loop corrections to the free energy of ${\cal G}=SL(3)$ higher spin gravity:
\eq{}{F_{{\rm vac};\,\ell>2}^{\rm SL(3)} = p_1^3p_2^3 \,{(1-x)^4 \over x^2}\cdot \left({3(5c^2-71c-102)\over c(5c+22)}\right) \Bigg|_{c^{1-\ell}} + O(p_1^3p_2^4)~.}
This is nonzero for all $x\neq 1$, so we conclude that the loop expansion does not truncate away from the separating degeneration limit for small $(p_1,p_2)$. This is true for all higher spin algebras ${\cal G}$. We will return to the topic of higher spin theories in the Discussion.
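The non-truncation claim for $SL(3)$ rests on the $c$-dependent prefactor of the last term in $\delta_W C_{3,3}(x)$ having an infinite $1/c$ expansion; a quick sympy sketch:

```python
import sympy as sp

c, eps = sp.symbols('c eps', positive=True)

# c-dependent prefactor of the non-truncating term in delta_W C_{3,3}(x) above.
pref = 3*(5*c**2 - 71*c - 102)/(c*(5*c + 22))

# Substitute c = 1/eps and expand around eps = 0: the coefficient of eps^n
# corresponds to loop order ell = n + 1, and is nonzero for all n checked,
# so the SL(3) loop expansion never terminates at small (p_1, p_2).
ser = sp.series(pref.subs(c, 1/eps), eps, 0, 8).removeO()
loop_coeffs = [sp.simplify(ser.coeff(eps, n)) for n in range(2, 7)]
assert all(co != 0 for co in loop_coeffs)
```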
\subsection{R\'enyi entropies}\label{v-ii}
As discussed in section \ref{ii}, there are three R\'enyi entropies that involve genus-two replica manifolds without punctures (i.e. for CFTs not in excited states). These are the $N=2,n=3$ and $N=3,n=2$ R\'enyi entropies for a CFT on the plane, and $N=1,n=2$ for a CFT on the torus. We mostly focus on the $N=2$ case, with replica manifold ${\mathscr R}_{2,3}$. Our results in section \ref{v-i} are sufficient to rule out the truncation of the $1/c$ expansion of $F_{{\rm vac}}$ even in the case of the replica manifold $\Sigma={\mathscr R}_{2,3}$ introduced in section \ref{Renyireview}. We now exhibit this in detail; the final results can be found in \eqref{f3ren} and \eqref{renall}.
\subsubsection{Two intervals on the plane}\label{v-iia}
Our goal is to express the free energy in terms of the coordinate $y$ parameterizing the interval spacing, defined in section \ref{ii}. To do so, we need only express the Schottky coordinates $\lbrace p_1,p_2,x\rbrace$ in terms of $y$. One way to proceed is by using the period matrix $\Om(y)$ for the replica manifold ${\mathscr R}_{2,3}$, which is known \cite{Calabrese:2009ez}. Thus, we will perform the map $\lbrace p_1,p_2,x\rbrace \mapsto \lbrace q_{ij}(y)\rbrace$, where $q_{ij}(y) = \exp[2\pi i \Om_{ij}(y)]$ are the multiplicative periods, in the regime of $y$ corresponding to small $(p_1,p_2)$. Plugging into \eqref{Z2} gives $F_{{\rm vac}}({\mathscr R}_{2,3})$ for arbitrary $c$; we then proceed to study this result at large $c$.
For two disjoint intervals and arbitrary $n$, the period matrix is \cite{Calabrese:2009ez}
\es{rnprd}{
\Om_{ij}(y)=\frac{2i}{n}\sum_{k=1}^{n-1}\sin\left(\pi\frac{k}{n}\right)\,\cos\left(2\pi\frac{k}{n}(i-j)\right)\frac{_2F_1\left(\frac{k}n,1-\frac{k}n;1;1-y\right)}{_2F_1\left(\frac{k}n,1-\frac{k}n;1;y\right)}~.}
Specializing to $n=3$, the period matrix is given by
\es{rnprd3}{
\Om(y)=\frac{2i}{\sqrt3}\,\frac{_2F_1\left(\frac13,\frac23;1;1-y\right)}{_2F_1\left(\frac13,\frac23;1;y\right)}\left({\begin{array}{cc}
1 & -\frac12 \\
-\frac12 & 1 \\
\end{array} } \right).}
This is a highly symmetric genus-two Riemann surface: there is only a single modulus $y$, as opposed to the $3g-3=3$ moduli of a generic genus-two surface.
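The specialization from \eqref{rnprd} to \eqref{rnprd3} is a two-line check: at $n=3$ the $k=1$ and $k=2$ hypergeometric ratios coincide, since $_2F_1$ is symmetric in its first two arguments, so both can be replaced by a common symbol. A sympy sketch of the remaining trigonometric sum (the symbol $R$ standing in for the ratio is our shorthand):

```python
import sympy as sp

R = sp.symbols('R')   # common value of the k=1 and k=2 hypergeometric ratios at n=3
n = 3

def Om(i, j):
    """Omega_ij from eq. (rnprd) with both hypergeometric ratios replaced by R."""
    return 2*sp.I/n * sum(sp.sin(sp.pi*k/n) * sp.cos(2*sp.pi*k/n * (i - j)) * R
                          for k in range(1, n))

# Diagonal entries equal (2i/sqrt(3)) R; off-diagonal entries are -1/2 times that,
# reproducing the matrix structure of (rnprd3).
assert sp.simplify(Om(1, 1) - 2*sp.I/sp.sqrt(3)*R) == 0
assert sp.simplify(Om(2, 2) - 2*sp.I/sp.sqrt(3)*R) == 0
assert sp.simplify(Om(1, 2) + sp.I/sp.sqrt(3)*R) == 0
```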
To express the Schottky coordinates in terms of $y$, we need to invert the power series expansion given in \eqref{mltprds}. The fact that $q_{11}=q_{22}$ implies that in Schottky coordinates, ${\mathscr R}_{2,3}$ has $p_1=p_2\equiv p$, as a quick inspection of \eqref{mltprds} reveals. Our results are applicable when $p\ll 1$, so \eqref{mltprds} forces us to take $q_{11}\ll 1$ too. From \eqref{rnprd3}, this is just the short interval limit, $y\ll 1$, often studied in the context of 2D CFT R\'enyi entropy: taking $y\ll1$ in \eqref{rnprd3} yields multiplicative periods
\es{qy}{&q_{11}\big|_{y\ll1}=\frac{y^2}{729}+\frac{10y^3}{6561}+\frac{29y^4}{19683}+O(y^5),\\
&q_{12}\big|_{y\ll1}=\frac{27}{y}-15-2y-\frac{734y^2}{729}-\frac{4181y^3}{6561}+O(y^4)~.}
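These expansions can be checked numerically against \eqref{rnprd3}: for real $y\in(0,1)$ the ratio of hypergeometrics is real and positive, so $\Om_{11}$ is purely imaginary and the multiplicative periods $q_{11}=e^{2\pi i\Om_{11}}$, $q_{12}=e^{2\pi i\Om_{12}}$ are real. A minimal mpmath sketch (the sample point and tolerances are ours):

```python
import mpmath as mp

def hyp_ratio(y):
    """_2F_1(1/3,2/3;1;1-y) / _2F_1(1/3,2/3;1;y), the ratio appearing in Omega(y)."""
    a, b = mp.mpf(1)/3, mp.mpf(2)/3
    return mp.hyp2f1(a, b, 1, 1 - y) / mp.hyp2f1(a, b, 1, y)

def q11(y):
    # Omega_11 = (2i/sqrt(3)) * ratio, so 2*pi*i*Omega_11 = -(4*pi/sqrt(3)) * ratio
    return mp.exp(-4*mp.pi/mp.sqrt(3) * hyp_ratio(y))

def q12(y):
    # Omega_12 = -(i/sqrt(3)) * ratio, so 2*pi*i*Omega_12 = +(2*pi/sqrt(3)) * ratio
    return mp.exp(2*mp.pi/mp.sqrt(3) * hyp_ratio(y))

y = mp.mpf('0.01')
q11_series = y**2/729 + 10*y**3/6561 + 29*y**4/19683
q12_series = 27/y - 15 - 2*y - 734*y**2/729 - 4181*y**3/6561
assert abs(q11(y)/q11_series - 1) < 1e-4   # truncation error is O(y^5)
assert abs(q12(y)/q12_series - 1) < 1e-6   # truncation error is O(y^4)
```

In particular, the leading behaviors $q_{11}\sim y^2/729$ and $q_{12}\sim 27/y$ follow analytically from the logarithmic connection formula $_2F_1(\frac13,\frac23;1;1-y)\approx\frac{\sqrt3}{2\pi}\log\frac{27}{y}$ at small $y$.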
Finally, we obtain the series expansion of $p$ and $x$ in terms of $y$ by inverting \eqref{mltprds} using \eqref{qy} and the explicit results for the coefficients $c(n,m,|r|)$ and $d(n,m,r)$ given in \cite{Gaberdiel:2010jf}. The result is
\begin{eqnarray}\label{pxy-sub}
p(y)\!\!\!&=&\!\!\!\frac{y^2}{729}+\frac{28}{19683}\,y^3+\frac{26}{19683}\,y^4+\frac{5768}{4782969}\,y^5 +\frac{47429}{43046721}\,y^6+\frac{10582844}{10460353203}\,y^7+O(y^8),\nonumber\\
x(y)\!\!\!&=&\!\!\!\frac{27}{y}-15-\frac{56}{27}\,y-\frac{28}{27}\,y^2-\frac{12892}{19683}\,y^3-\frac{3044}{6561}\,y^4 +O(y^5)\,.
\end{eqnarray}
Note that $x(y)$ diverges linearly for small $y$.\footnote{We note that $p(y)$ is nothing but the square of the larger eigenvalue of the Schottky generators themselves: $\text{eig}(L_i(y)) = p(y)^{\pm 2}$, where $L_1(y) = L_2(y)$ are the Schottky generators in the $y\ll 1$ regime. This was already computed in \cite{Barrella:2013wja,Perlmutter:2013paa,Chen:2013dxa,Beccaria:2014lqa}; see in particular equation (3.8) of \cite{Beccaria:2014lqa}, with $k=1, n=3$. One can then find $x(y)$ using the Schottky relations. Such an algorithm is an alternative to that presented in the text.}
We can now compute the vacuum free energy $F_{{\rm vac}}(y)=-\log Z_{{\rm vac}}(y)$, and hence the vacuum contributions to R\'enyi entropy, in a $y\ll 1$ short interval expansion, where
\eq{zy}{Z_{{\rm vac}}(y) = \sum_{h_1,h_2=0}^{\infty} p(y)^{h_1+h_2}C_{h_1,h_2}(y)~.}
We will further expand this result at large $c$ and compare with the results of \cite{Chen:2013dxa}.
In order to perform this expansion, we need to be a bit careful: because powers of $x$ introduce inverse powers of $y$, it is not manifest in \eqref{zy} that the short interval expansion can be meaningfully organized in powers of $p(y)$. We need to know something about how $C_{h_1,h_2}(x)$ scales with large $x$, and hence small $y$. Fortunately, we can read this off from \eqref{pc2}. Keeping terms to leading order in $y\rightarrow 0$ at each order in $1/c$, and ignoring coefficients, \eqref{pc2} and \eqref{pxy-sub} imply that for $h>h_1>0$,
\eq{pc1}{\lim_{c\rightarrow\infty}\lim_{y\rightarrow 0}\, p(y)^hC_{h_1,h-h_1}(y) \sim O(y^{h})+ {1\over c} O(y^{h+2}) + \left(\sum_{n=2}^{\infty}{1\over c^n}\right)O(y^{h+4})~.}
Therefore, we can indeed ignore higher order terms in the sum over $h$ when we expand in small $y$.
Without further ado, the results are as follows. At $\ell=1,2$ we find
\es{MI}{F_{{\rm vac};\,1}(y)&=\frac{y^4}{177147}+\frac{56\,y^5}{4782969}+\frac{2189\,y^6}{129140163}+\frac{24668\,y^7}{1162261467}+O(y^8)\\
F_{{\rm vac};\,2}(y)&=\frac{8\,y^6}{387420489}+\frac{8\,y^7}{129140163}+\frac{11122\,y^8}{94143178827}+\frac{51818\,y^9}{282429536481}+O(y^{10})~.}
Comparing with the results of \cite{Barrella:2013wja,Chen:2013dxa}, we find agreement through ${\cal O}(y^8)$.\footnote{We can compare (\ref{MI}) directly to $I_3$ in \cite{Chen:2013dxa}. The mutual information $I_n$, cf. (\ref{IFrelation}), has an overall factor of 1/2 for $n=3$; including the anti-holomorphic part, as they do in \cite{Chen:2013dxa}, contributes an overall factor of 2, so the two factors cancel.} The authors of \cite{Chen:2013dxa} computed only through $O(y^8)$, so our term at $O(y^9)$ is new.
At three-loop order, \eqref{f3} implies a nonzero result. Evaluating \eqref{f3} for $p_1=p_2=p(y)$ and $x=x(y)$, we find
\eq{f3ren}{F_{{\rm vac};\,3}(y) = y^{12}\left({13\cdot 2^{10} \over 5^2 \cdot 3^{36} }\right)+O(y^{13}) ~.}
As discussed around \eqref{chenpatt}, the authors of \cite{Chen:2013dxa} computed $F_{{\rm vac};\,3}$ through $O(y^8)$ only, and found zero. We now see that the first contribution appears at $O(y^{12})$. It is remarkable that a computation through $O(y^{11})$ using twist fields would not have revealed the nonzero result! This speaks to the different strengths of the twist field method and the sewing expansion that we have performed.
Finally, nonzero all-loop results at $O(y^{12})$ follow from \eqref{allloop} and \eqref{c44e}:
\es{renall}{F_{{\rm vac} ;\,\ell>3} (y)= {y^{12}\over 3^{36}}\cdot\left( {4(3704+590c+125c^2)\over5c(5c+22)}\right)\Bigg|_{c^{1-\ell}}+O(y^{13}) ~.}
\subsubsection{Other genus-two R\'enyi entropies}\label{v-iib}
Consider the three-interval R\'enyi entropy on the plane with $n=2$. In this case, the replica manifold ${\mathscr R}_{3,2}$ is a genus-two manifold characterized by three moduli that parameterize the positions of the intervals modulo conformal symmetry.\footnote{Besides the case of $n=2$ R\'enyi entropy for two intervals (for which the replica manifold is a torus with complex structure $\tau$ given by a known function of the interval length \cite{Lunin:2000yv}), this is the only replica manifold that spans its entire genus $g=(N-1)(n-1)$ moduli space.} Our results for the free energy for $p_1,p_2\ll 1$ and general $x$ can therefore be regarded as (universal contributions to) R\'enyi entropies for the case of three disjoint intervals and $n=2$.
The period matrix of ${\mathscr R}_{3,2}$ is known in terms of Lauricella functions \cite{Coser:2013qda}. To apply our results, one would first need to understand the relative spacings of intervals that correspond to $p_1,p_2\ll 1$, by using the map from Schottky space to the period matrix. We do not pursue this geometric picture here. It is clear, however, that not all intervals need to be short, because $x$ is allowed to be general. Thus, we have implicitly provided the first computations of universal contributions to 2D CFT R\'enyi entropies that do not require all intervals to be short.
One can also consider the case of one interval on the torus with $n=2$. The replica manifold has two moduli, namely, the temperature and interval length. Our methods can again be applied to this case to derive universal contributions to the R\'enyi entropy from the stress tensor sector. This has been done perturbatively in a high or low temperature expansion in \cite{Barrella:2013wja,Chen:2014unl,Datta:2013hba} using different methods that cannot access terms at two-loop and beyond in a large-$c$ expansion, unlike the sewing method here. We note that $p_1$ and $p_2$ as functions of the moduli have been computed perturbatively in \cite{Barrella:2013wja,Chen:2014unl,Datta:2013hba}. We leave the remaining explicit calculation for future work.
\subsection{The separating degeneration limit}\label{v-iii}
An important predecessor of the present work is \cite{Yin:2007gv}, where the relation between $Z_{\rm vac}$ and 3D gravity was first enunciated precisely. Yin tested this relation at genus two, focusing on the separating degeneration limit of the Riemann surface, where $\Sigma$ becomes the union of two tori. Before we probe this region of moduli space with our new results, let us briefly review the work of \cite{Yin:2007gv}.
For the sake of easy comparison to \cite{Yin:2007gv}, we use his notation in this subsection. We write the elements of the period matrix $\Om$ as
\begin{equation}
\Om_{11} = \rho~, \quad \Om_{22} = \sigma~, \quad \Om_{12} = \nu~.
\end{equation}
We also define the multiplicative periods
\begin{equation}
q=e^{2\pi i \rho}~, \quad s= e^{2\pi i \sigma}~, \quad v = 2 \pi i \nu~.
\end{equation}
The separating degeneration limit corresponds to the limit $v\rightarrow 0$ with $(q,s)$ fixed, where $q$ and $s$ parameterize the complex structure of the two tori.
In the $1/c$ expansion, \cite{Yin:2007gv} computed parts of $F_{{\rm vac};\, 0}, F_{{\rm vac};\, 1}$ and $F_{{\rm vac};\, 2}$ at genus two using a variety of methods, all of which agree:
\vskip .1 in
$\bullet$\quad Demanding a match to the polar parts of extremal CFT partition functions at low values of $k=c/24$, which are fixed by invariance under the genus-two modular group $Sp(4,\mathbb{Z})$. That this match should hold follows from the definition of extremal CFTs, theories that have no non-trivial Virasoro primaries of dimension less than $k+1$ above the vacuum.
$\bullet$\quad For $F_{{\rm vac};\, 1}$, direct computation of \eqref{grav}.
$\bullet$\quad Direct computation of $Z_{\rm vac}$ written as a sum over bilinears of torus one-point functions of Virasoro vacuum descendants. This is similar to what we do in the present work.
\vskip .1 in
Although it is not our focus here, $F_{{\rm vac};\,0}$ is given by a certain Liouville action whose origins we explain in Appendix \ref{orderc}. In order to write the expressions for $F_{{\rm vac};\,1}$ and $F_{{\rm vac};\,2}$, we must define the holomorphic Eisenstein series, normalized as
\es{eis}{\hat E_n^{\rho} = \sum_{m=1}^{\infty} {m^{n-1} q^m\over 1-q^m}~.}
with $\hat E_n^{\sigma}$ defined analogously with $q\rightarrow s$. For $n=2,4$, these hatted versions relate to the usual Eisenstein series as
\es{eis2}{&\hat E^{\rho}_2 = {1-E_2(q)\over 24}\approx q+3q^2+4q^3+O(q^4)\\
&\hat E^{\rho}_4 = {E_4(q)-1\over 240}\approx q+9q^2+28q^3+O(q^4)~.}
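The $q$-expansions in \eqref{eis2} are simple divisor sums: expanding each $q^m/(1-q^m)$ as a geometric series shows that the coefficient of $q^N$ in $\hat E_n^{\rho}$ is $\sigma_{n-1}(N)=\sum_{d|N}d^{\,n-1}$. A short check:

```python
def sigma(k, N):
    """Divisor sum sigma_k(N) = sum of d**k over the divisors d of N."""
    return sum(d**k for d in range(1, N + 1) if N % d == 0)

# coefficient of q^N in \hat E_2 is sigma_1(N): q + 3q^2 + 4q^3 + ...
assert [sigma(1, N) for N in (1, 2, 3)] == [1, 3, 4]
# coefficient of q^N in \hat E_4 is sigma_3(N): q + 9q^2 + 28q^3 + ...
assert [sigma(3, N) for N in (1, 2, 3)] == [1, 9, 28]
```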
The results of \cite{Yin:2007gv}, which we denote $F_{{\rm vac}}^{\rm Yin}$, are as follows: in the separating degeneration limit $v\rightarrow 0$,
\begin{eqnarray}\label{Yin1}
F_{{\rm vac};\,1}^{\rm Yin} \!\!\!&=&\!\!\! -\sum_{n=2}^{\infty}\log[(1-q^n)(1-s^n)] + v^2\left({2q\over 1-q}\hat E_2^{\sigma} + {2s\over 1-s}\hat E_2^{\rho} - 4\hat E_2^{\sigma}\hat E_2^{\rho}\right)\nonumber\\\!\!\!&+&\!\!\! v^4\Bigg(-{1\over 2} \left({2q\over 1-q}\hat E_2^{\sigma} + {2s\over 1-s}\hat E_2^{\rho} - 4\hat E_2^{\sigma}\hat E_2^{\rho}\right)^2 \\\!\!\!&+&\!\!\! {qs\over 6}\Big(-2(q+s) + 45(q^2+s^2) + 72 qs + 745(q^2s+qs^2) + 3720 q^2s^2\Big)+O(q^4,s^4) \Bigg)\nonumber\\\!\!\!&+&\!\!\!O(v^6)\nonumber
\end{eqnarray}
and\footnote{The semiclassical expansion of the free energy in \cite{Yin:2007gv} was performed in powers of $1/k=24/c$. We expand in $1/c$, and define $F^{\rm Yin}$ according to the $1/c$, rather than the $1/k$, expansion. Thus, our $F_{{\rm vac};\, 2}^{\rm Yin}$ equals $24$ times the $S_2$ found in \cite{Yin:2007gv}.}
\es{Yin2}{
F_{{\rm vac};\,2}^{\rm Yin} &= {2v^2}\left({q\over 1-q}-\hat E_2^{\rho}\right)\left({s\over 1-s}-\hat E_2^{\sigma}\right)\\&+ 24v^4\left(q^2s^2\left({13\over 36} + {1\over 8}(q+s) - {45\over 16}qs\right)+O(q^4,s^4)\right)+O(v^6)~.}
Note that these are non-perturbative in $q$ and $s$ through $O(v^2)$, and the leading term in $F_{{\rm vac};\,1}^{\rm Yin}$ is just the sum of one-loop free energies on two tori with periods $q$ and $s$. (Note that in order to recover the $O(v^0)$ piece of $F_{{\rm vac};\,1}^{\rm Yin}$, one relies on \eqref{cx1}.) Expanding everything through $O(q^3s^3)$, we find
\es{Yin3}{F_{{\rm vac};\,1}^{\rm Yin} &= (q^2+q^3+s^2+s^3) \\&- v^2\Big(2 q s (q + s + 3 q s) (2 + 3 q + 3 s + 8 q s)\Big)\\
&+v^4\Bigg({q s\over 6} \Big(-2 (q+s) + 45 (q^2+s^2) + 72 q s + 745 (q^2 s + q s^2) + 3624 q^2 s^2\Big)\Bigg)\\&+O(v^6, q^4, s^4)}
and
\es{Yin4}{
F_{{\rm vac};\,2}^{\rm Yin} &= 24v^2\left(q^2s^2\left({1\over 3}+ {1\over 2}q+{1\over 2}s+{3\over 4}qs\right) \right) \\
&+ 24v^4\left(q^2s^2\left( {13\over 36} + {1\over 8}(q+s)-{45\over 16}qs \right)\right) +O(v^6, q^4, s^4)~.}
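As a consistency check, the $O(v^2)$ terms of \eqref{Yin3} and \eqref{Yin4} follow from \eqref{Yin1} and \eqref{Yin2} by substituting $\hat E_2 = q+3q^2+4q^3+O(q^4)$ and $q/(1-q)=q+q^2+q^3+O(q^4)$ and dropping terms of degree four or higher in $q$ or $s$. A sympy sketch (the truncations are ours):

```python
import sympy as sp

q, s = sp.symbols('q s')
E2 = lambda t: t + 3*t**2 + 4*t**3            # \hat E_2 through O(t^3)
geo = lambda t: t + t**2 + t**3               # t/(1-t) through O(t^3)

def drop_high(expr):
    """Keep only terms of degree <= 3 in q and in s (the claimed accuracy)."""
    out = 0
    for i in range(4):
        for j in range(4):
            out += expr.coeff(q, i).coeff(s, j) * q**i * s**j
    return sp.expand(out)

# O(v^2) bracket of (Yin1) vs. the factored form quoted in (Yin3)
B1 = sp.expand(2*geo(q)*E2(s) + 2*geo(s)*E2(q) - 4*E2(q)*E2(s))
Y3 = sp.expand(-2*q*s*(q + s + 3*q*s)*(2 + 3*q + 3*s + 8*q*s))
assert drop_high(B1 - Y3) == 0

# O(v^2) bracket of (Yin2) vs. the expanded form quoted in (Yin4)
B2 = sp.expand(2*(geo(q) - E2(q))*(geo(s) - E2(s)))
Y4 = sp.expand(24*q**2*s**2*(sp.Rational(1, 3) + q/2 + s/2 + sp.Rational(3, 4)*q*s))
assert sp.expand(B2 - Y4) == 0
```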
We are now in a position to extend these results using our computations. As in the previous subsection, our goal is to perform the map $\lbrace p_1,p_2,x \rbrace \mapsto \lbrace q,s,v\rbrace$, express $F_{{\rm vac}}$ in these variables, and expand at large $c$.
In \cite{Yin:2007gv}, the following relations were established:
%
\es{p1p22}{p_1 &= q\left(1-v^2(2\hat E_2^{\sigma}) - v^4\left(2(\hat E_2^{\sigma})^2 + {2\over 3}\hat E_2^{\rho} \hat E_2^{\sigma}-{1\over 6} \hat E_4^{\sigma} + {10\over 3}\hat E_2^{\rho} \hat E_4^{\sigma}\right)+O(v^6)\right)\\
p_2 &= p_1(\sigma\leftrightarrow \rho)~.}
All we need now is to derive $x(q,s,v)$. We do so by inverting one of the Schottky relations in equation \eqref{mltprds},
\begin{equation}\label{gkv}
e^v= x+x\sum_{n,m=1}^{\infty}p_1^np_2^m\sum_{r=-n-m}^{n+m}d(n,m,r)x^r~.
\end{equation}
Note that for $q=s=0$ we have $p_1=p_2=0$, so \eqref{gkv} gives $x=e^v$ exactly; the corrections enter at order $p_1p_2\sim qs$, so we can write $x=e^v + O(qs)$. The final result for $x$ is rather appealing,
\begin{equation}\label{xconj}
{x = e^v -4 \hat E_2^{\sigma} \hat E_2^{\rho}(v^3+v^4) + O(v^5qs)~.}
\end{equation}
We derive this in Appendix \ref{app-degen}.
Plugging equations \eqref{p1p22} and \eqref{xconj} into \eqref{Z2} enables us to extend the results of \cite{Yin:2007gv} in two ways. First, we can now give the $O(v^4)$ part of the one- and two-loop free energies \eqref{Yin3} and \eqref{Yin4}, respectively, through $O(q^3s^4, q^4s^3)$, not only through $O(q^3s^3)$. Second, and more importantly, we can write down some of the leading terms as $v\rightarrow 0$ for {\it all} loops.
We find
\begin{eqnarray}
F_{{\rm vac};\, 1} \!\!\!&=&\!\!\! F_{{\rm vac};\,1}^{\rm Yin}\nonumber\\\!\!\!&+&\!\!\!v^4\left({1\over 6} q s \left(q^3 (210 + 2764 s + 11865 s^2) + s^3 (210 + 2764 q + 11865 q^2) \right)+O(q^4s^4) \right)\nonumber\\\!\!\!&+&\!\!\!O(v^6)
\end{eqnarray}
and
\begin{equation}
{F_{{\rm vac};\, 2} = F_{{\rm vac};\,2}^{\rm Yin}-v^4\Big( q^2s^2\left(14(q^2+s^2) + 283 q s(q+s)\right)+O(q^4s^4) \Big)+O(v^6)~.}
\end{equation}
To read off the free energy at three loops and beyond, we plug \eqref{xconj} into \eqref{f3}--\eqref{c44e}. At three-loop order, we find
\begin{equation}
{F_{{\rm vac};\, 3} = q^4s^4\left({13312\over 25}v^4 + {86528\over 75} v^6 + O(v^8)\right) + O(q^4s^5v^4, q^5s^4v^4)~.}
\end{equation}
To derive terms at higher orders in $v$ for fixed $q$ and $s$, we would need to expand $x$ beyond $O(v^4)$. Likewise, to derive terms at higher orders in $q$ and $s$ for fixed $v$, we would need to include more terms in the sewing expansion, like $p_1^4p_2^5 C_{4,5}(x)$.
Finally, the all-loop expansion is completed by the terms\footnote{This result should be contrasted with footnote 6 of \cite{Yin:2007gv}.}
\es{allloopyin}{F_{{\rm vac};\, \ell>3} &= q^4s^4\left(v^4+{13\over 6}v^6+O(v^8)\right)\cdot \left({4(3704+590c+125c^2)\over5c(22+5c)}\right)\Big|_{c^{1-\ell}}\\&+ O(q^4s^5v^4, q^5s^4v^4)~.}
\section{Discussion}\label{vi}
We close with a discussion of some open questions, progressing from obvious directions for future work to the more speculative. Other directions were already mentioned in the text.
\vskip .1 in
$\bullet$\quad In the realm of R\'enyi entropy, performing the calculation suggested in section \ref{v-iib} for three intervals would give a satisfying derivation away from a short-interval expansion. In addition, one can straightforwardly apply our results to the case of the $n=2$ R\'enyi entropy for a single interval on the torus, at least in a high or low temperature expansion. One need only perform the map between Schottky coordinates and the temperature and interval length; this map has already been partially constructed in \cite{Barrella:2013wja,Chen:2014unl,Datta:2013hba}. No results have yet been derived for the torus case beyond one loop.
\vskip .1 in
$\bullet$\quad
One can consider including local operator insertions on $\Sigma$. The sewing procedure remains a sum over sphere correlation functions, now with these extra operator insertions. The operators generate non-vacuum states in the CFT. Taking $\Sigma$ to be a replica manifold, one can thus compute excited-state R\'enyi entropies by the sewing procedure. Such entropies have been computed in CFT using twist-field and holographic methods (e.g. \cite{Alcaraz:2011tn, Astaneh:2013gp, Caputa:2014vaa, Asplund:2014coa, Caputa:2014eta}). As we have tried to demonstrate, the sewing construction is likely to provide a complementary approach that operates at finite $c$, so this seems like an especially worthwhile pursuit. It would be easy, for instance, to read off the ${\cal O}(c^0)$ terms from the above procedure: these would be predictions for bulk one-loop corrections to the Einstein action evaluated on the ``punctured handlebody''.
\vskip .1 in
$\bullet$\quad
It would be nice to prove that nowhere in the moduli space, except at the separating degeneration point, does the genus-two partition function truncate in a $1/c$ expansion (whereas our method could only access the regime of small $p_1,p_2$). This seems highly likely to be the case. Understanding the structure of the Schottky sum rules in Appendix \ref{app-degen} could also be enlightening.
\vskip .1 in
$\bullet$\quad
In our analytic bootstrap of section \ref{iii}, we could equally have used Virasoro conformal blocks rather than global conformal blocks. In this case, the crossed blocks are related to the original blocks by the fusion and braiding matrices. These are known in closed form \cite{Ponsot:2000mt}, so our conclusions can also be phrased in terms of OPE coefficients of Virasoro primaries rather than quasi-primaries. The Virasoro approach is in principle more efficient, as it will fix the four-point function in terms of even fewer pieces of data, and it would be worthwhile to make this precise. An interesting demonstration of this fact comes by way of the $W_3$ correlator $\langle WWWW \rangle$, as computed in \eqref{Lamb}--\eqref{wcoeffs}: up to the norm of $W$, the Virasoro approach would fix $\langle WWWW \rangle$ without having to compute even a single OPE coefficient.
\vskip .1 in
$\bullet$\quad
We briefly considered CFTs with higher-spin symmetry; it would be straightforward to extend our computation of $Z_{\rm vac}$ to higher orders for such theories. A more exciting prospect would be to compute the partition function on $\Sigma$ in the presence of insertions that carry higher-spin charge. There is natural motivation for this from holography. In particular, while much work has been done to construct solutions of higher-spin gravity with nonzero higher-spin charge and solid-torus topology \cite{Ammon:2012wc}, there has been no work on building solutions of higher-spin gravity of higher genus and with nonzero higher-spin fields turned on. A subset of such ``higher-spin handlebodies'' would be saddle points of the Euclidean higher-spin gravitational path integral with replica boundary conditions and nonzero higher-spin charge \cite{Perlmutter:2013paa}; accordingly, their action would be expected to match CFT computations of R\'enyi entropy in states with higher-spin charge and/or chemical potentials. This calculation would be analogous to the one performed in \cite{Faulkner:2013yia} in the spin-2 case. Constructing such R\'enyi entropies via partition functions on replica manifolds endowed with higher-spin charge, rather than via twist fields \cite{Datta:2014uxa} or Wilson lines \cite{deBoer:2014sna}, would be an interesting application of the replica trick to the higher-spin setting. One might also try to make contact with the ``spin-3 entropy'' of \cite{Hijano:2014sqa}.
\vskip .1 in
$\bullet$\quad Our results can be used to test the idea that Liouville theory provides an effective description of irrational CFTs with large central charge (see e.g. \cite{Jackson:2014nla} for a recent refinement of this idea and references to earlier work). In particular, the $1/c$ expansion of the genus-two partition function can be checked against a diagrammatic calculation in Liouville theory.
\vskip .1 in$\bullet$\quad
Upon first glance, the relation between
\eq{zvac1}{Z_{{\rm vac};\,1} = \sum_{h_1,h_2=0}^{\infty} p_1^{h_1}p_2^{h_2} \lim_{c\rightarrow\infty} C_{h_1,h_2}(x)}
and the bulk graviton determinant \eqref{grav} seems opaque. Nevertheless, these two quantities are equal. Both formulae are written in terms of Schottky data, so it should be possible to find a clean mapping between them. This would be a useful stepping stone to writing down a closed formula for the two-loop contribution to the bulk partition function, in analogy to the determinant \eqref{grav}. In the sewing prescription, the two-loop result simply requires us to sum over the ${\cal O}(1/c)$ parts of $C_{h_1,h_2}(x)$ instead of just the ${\cal O}(c^0)$ parts. Is there an equally simple prescription in the bulk, and if so do the primitive elements of the Schottky group play a privileged role as they do at one loop?
\vskip .1 in
$\bullet$\quad
Part of the motivation for the present work was the one-loop exactness of the pure-gravity partition function on the solid torus. The current understanding of this result relies on an elegant and simple argument about Virasoro representation theory, which can be understood holographically. It can be derived without recourse to CFT by computing the energies of bulk excitations, or equivalently, by quantizing the phase space given by two copies of $\text{diff}\,S^1/SL(2,\mathbb{R})$ \cite{Maloney:2007ud}. Still, it would be very satisfying to derive this result from a more direct perspective in the bulk. For example, while the solid torus partition function of a pure higher-spin theory is also believed to be one-loop exact, we do not know the analog of diff\,$S^1$ in that context; there should be a more direct argument one can make in the bulk. Understanding this exactness from the perspective of the bulk diagrammatic expansion could provide insights useful for higher genus.
On the other hand, a perhaps cleverer approach would be to derive the partition function from the $SL(2,\mathbb{R})\times SL(2,\mathbb{R})$ Chern-Simons formulation of 3D gravity. Einstein-Hilbert gravity and Chern-Simons theory are non-perturbatively inequivalent, but it is believed that the semiclassical expansion around a well-defined saddle point can be performed in either formulation. The Chern-Simons approach builds in the topological nature of 3D gravity, whereas the loop expansion of 3D gravity in the metric formulation is no simpler than it is in higher dimensions, despite the absence of propagating bulk degrees of freedom. Presumably, such a computation would be manifestly one-loop exact, in analogy to similar truncations in compact Chern-Simons theory \cite{Jeffrey:1992tk}. (More precisely, all higher-loop effects could be absorbed in a renormalization of the Newton constant.) A Chern-Simons approach would also have the benefit of immediately generalizing to pure higher-spin gravity. The challenge to carrying this out is that both the gauge group and the topology are non-compact. There has been progress in recent years in computing Chern-Simons partition functions for non-compact gauge groups (see e.g. \cite{Dimofte:2009yn}), but the requisite technology does not yet exist for the solid torus. This technology would represent a significant advance in our understanding of 3D gravity.
\section*{Acknowledgments}
We thank Matthias Gaberdiel, Jared Kaplan, Christoph Keller, Albion Lawrence, David Poland, David Simmons-Duffin, Herman Verlinde, Roberto Volpato and Xi Yin for helpful discussions. MH, AM, and EP wish to thank the Aspen Center for Physics for hospitality during this work, which was supported in part by National Science Foundation Grant No.\ PHYS-1066293. MH and IGZ were supported in part by the National Science Foundation under CAREER Grant No.\ PHY10-53842. EP was supported in part by funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013), ERC Grant agreement STG 279943, ``Strongly Coupled Systems'', and in part by the Department of Energy under Grant No. DE-FG02-91ER40671. IGZ was supported in part by the Department of Energy under Grant No.\ DE-SC0009987.
AM is supported by the Natural Sciences and Engineering Research Council of Canada.
\begin{appendix}
\section{An OPE limit of $C_{h_1,h_2}(x)$}\label{app-OPE}
The goal in this Appendix is to show that the $x\rightarrow 0$ behavior of $C_{h_1,h-h_1}(x)$ is given, as in \eqref{pc2}, by
\eq{apc2}{\lim_{c\rightarrow\infty}\lim_{x\rightarrow 0}C_{h_1,h-h_1}(x) \sim O(x^{-h})+ {1\over c} O(x^{-h+2}) + \left(\sum_{n=2}^{\infty}{1\over c^n}\right)O(x^{-h+4})}
where we again have written $h_2=h-h_1$, and we restrict to $h>h_1>0$. We are ignoring $h_1$- and $h$-dependent coefficients at each order, and displaying only the leading singular behavior at each order in $1/c$.
The upshot is that \eqref{apc2} follows from considering the $t$-channel OPE limit of the four-point functions that define the $C_{h_1,h_2}(x)$. The expansion \eqref{apc2} follows from the $c$-scaling of the OPE coefficients and norms that appear in the conformal block decomposition. The leading ${\cal O}(c^0)$ term in \eqref{apc2} arises from identity exchange; the leading ${\cal O}(1/c)$ term, from $T$ exchange; and all higher order terms in $1/c$, from exchange of all other quasi-primaries in the Virasoro identity representation.
Let us give more detail. Recall that the $C_{h_1,h_2}(x)$ are defined in terms of sums over four-point functions of vertex operators,
\es{achh}{C_{h_1,h_2}(x) =\sum_{\phi_i,\psi_i\in{{\cal H}_{h_i}}}G_{\phi_1,\psi_1}^{-1}G_{\phi_2,\psi_2}^{-1}\bigg\langle V^{out}(\psi_1,\infty)\;V^{out}(\psi_2,x)\;V^{in}(\phi_2,1)\;V^{in}(\phi_1,0)\bigg\rangle}
where the Hilbert subspaces ${\cal H}_{h_i}$ are spanned by operators of holomorphic dimensions $h_i$. Vertex operators $ V^{out}(\psi,z)$ and $ V^{in}(\phi,z)$ are just chiral CFT operators $\psi(z)$ and $\phi(z)$, respectively, dressed with $z$-dependent factors. In the $z\rightarrow 0$ limit, the dressing factors are finite (cf. section \ref{iv-i}), so we can ignore them. Thus, the $x\rightarrow 0$ limit of $C_{h_1,h_2}(x)$ is simply the $t$-channel limit of a weighted sum over four-point functions of CFT operators, including descendants.
A four-point function of pairwise identical quasi-primary operators of dimensions $h_1$ and $h_2$ can be written in a global conformal block decomposition as
\eq{gblock}{\langle \psi(\infty)\phi(x)\phi(1)\psi(0)\rangle= x^{-h}\sum_{{\cal O}} {C_{\psi\phi {\cal O}}^2\over {\cal N}_{{\cal O}}}x^{h_{{\cal O}}}{}_2F_1(h_{{\cal O}},h_{{\cal O}};2h_{{\cal O}};x)}
where $C_{\psi\phi {\cal O}}$ are OPE coefficients, and $h=h_1+h_2$. We are expanding this correlator in the $x\rightarrow 0$ channel. Four-point functions involving secondary operators can be written using derivatives acting on an expression of the above form. For our purposes, we only allow Virasoro descendants of the identity to run in the internal channel: ${\cal O}\in\lbrace 1,T,\Lambda,\ldots \rbrace$. Indeed, for the purposes of establishing \eqref{apc2}, consideration of the exchange of these three operators alone will be sufficient: that is, we associate the scaling in \eqref{apc2} with specific terms in \eqref{gblock}. Let us write out the first three terms in \eqref{gblock} coming from the Virasoro identity block, ${\cal O}\in\lbrace 1,T,\Lambda \rbrace$:
\es{gblock2}{&\langle \psi(\infty)\phi(x)\phi(1)\psi(0)\rangle=\\
&x^{-h}\left(C_{\psi\phi 1}^2 + {2 \over c}C_{\psi\phi T}^2\,x^2\,{}_2F_1(2,2;4;x) + {10\over c(5c+22)}C_{\psi\phi \Lambda}^2 \,x^4\, {}_2F_1(4,4;8;x)+O(x^6)\right)~.}
We have substituted the explicit operator norms.
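The coefficients in \eqref{gblock2} are the inverse norms ${\cal N}_T=c/2$ and ${\cal N}_\Lambda=\frac{c}{2}\left(c+\frac{22}{5}\right)$ listed in Appendix \ref{app-Cs-i}, cf. \eqref{No4}. A quick symbolic check of the substitution:

```python
import sympy as sp

c = sp.symbols('c', positive=True)
N_T = c/2
N_Lambda = c/2 * (c + sp.Rational(22, 5))

# the T-exchange coefficient 2/c and the Lambda-exchange coefficient 10/(c(5c+22))
assert sp.simplify(1/N_T - 2/c) == 0
assert sp.simplify(1/N_Lambda - 10/(c*(5*c + 22))) == 0
```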
When $\psi$ and $\phi$ are in the same global conformal family, fusion onto the identity is allowed, and $C_{\psi\phi 1}$ is nonzero and independent of $c$. This yields a contribution at ${\cal O}(c^0)$ and $O(x^{-h})$. For a given $(h_1,h_2)$, there is {\it always} such a term in the definition of $C_{h_1,h_2}(x)$, since the latter are defined as a sum over {\it all} correlators involving operators at levels $(h_1,h_2)$. This is most obvious when $h_1=h_2$, because $C_{h_1,h_2}(x)$ will include correlators of four identical operators; but even when $h_1\neq h_2$, the definition of $C_{h_1,h_2}(x)$ includes correlators of arbitrary derivatives of $T$, all of which are in the same global conformal family. For instance, $C_{2,4}(x)$ includes $\langle T(\infty)\, \partial^2 T(x) \,\partial^2 T(1) \,T(0)\rangle$, which permits fusion onto the identity when $x\rightarrow 0$. There are generically no cancellations among terms in the sum \eqref{achh}. This accounts for the first term on the right-hand side of \eqref{apc2}. This leading behavior was also observed in \cite{Gaberdiel:2010jf}.
The second term in \eqref{apc2}, at ${\cal O}(1/c)$, comes from the second term in \eqref{gblock2}. Because $C_{\psi\phi T}$ is $c$-independent\footnote{For instance, when $\psi=\phi$ is a quasi-primary, $C_{\phi\phi T} = h \,{\cal N}_{\phi}$.} and ${\cal N}_T =c/2$, this term contributes at ${\cal O}(1/c)$ compared to the identity exchange, but not beyond. As explained above, the definition of $C_{h_1,h_2}(x)$ always includes such terms. This accounts for the second term on the right-hand side of \eqref{apc2}.
The final terms in \eqref{apc2}, at ${\cal O}(1/c^2)$ and beyond, come from exchange of the level-four quasi-primary $\Lambda$ in \eqref{gblock2}. Because its inverse norm has an infinite expansion in $1/c$, this will contribute a term $O(x^{-h+4})$ to all orders in a $1/c$ expansion, thus accounting for the remaining terms in \eqref{apc2}.
\section{More details on the sewing construction}\label{app-Cs}
\subsection{Operators and norms}\label{app-Cs-i}
In this section, we list the operators and their norms at the first six levels of the Virasoro vacuum representation. To ensure that we have not missed any, it is useful to expand the holomorphic Virasoro vacuum character, $\chi_{\rm vac}$:
\es{chivac}{\chi_{\rm vac} &=\text{Tr}_{\rm vac}(q^{L_0-c/24})\\
&= q^{-c/24}\prod_{n=2}^{\infty}{1\over 1-q^n}\\
&\approx q^{-c/24}(1+q^2+q^3+2q^4+2q^5+4q^6+\ldots)~.}
One can branch $\chi_{\rm vac}$ into global $SL(2,\mathbb{R})$ characters, thereby counting the number of quasi-primary fields. The resulting generating function, call it $\chi_{\rm qp}$, is
\eq{}{\chi_{\rm qp} = (q^{c/24}\chi_{\rm vac}-1)(1-q) \approx q^2+q^4+2q^6+\ldots.}
In terms of the degeneracy $d(h)$ of all level $h$ operators, the degeneracy of level $h$ quasi-primaries is $d(h)-d(h-1)$ (for $h>1$). We use the shorthand ${\cal O}={\cal O}(0)$ to denote operators.
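The counting in \eqref{chivac} and $\chi_{\rm qp}$ is easy to verify: the level-$h$ degeneracy $d(h)$ counts partitions of $h$ into parts of size at least two. A sketch in Python (the recursion is our illustration):

```python
def d(h, m=2):
    """Number of partitions of h into parts >= m: level-h states in the vacuum module."""
    if h == 0:
        return 1
    if h < m:
        return 0
    # either use a part equal to m, or use only parts >= m+1
    return d(h - m, m) + d(h, m + 1)

# degeneracies 1, 1, 2, 2, 4 at levels 2..6, as in the character expansion
assert [d(h) for h in range(2, 7)] == [1, 1, 2, 2, 4]
# quasi-primary counts d(h) - d(h-1): one at level 2, one at level 4, two at level 6
assert [d(h) - d(h - 1) for h in range(2, 7)] == [1, 0, 1, 0, 2]
```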
\begin{itemize}
\item{Level 2: there is one quasi-primary operator, the stress-energy tensor, $T=L_{-2}|0\rangle$, with norm ${\cal N}_T = c/2$.}
\item{Level 3: there is one secondary operator, ${\cal O}_3=\partial T = L_{-3}|0\rangle$, with norm ${\cal N}_{{\cal O}_3} = 2c$.}
\item{Level 4: there are two operators,
\begin{equation}\label{O4}
\mathcal{O}_4^{(1)}=\Lambda = \left(L_{-2}^2-\frac35L_{-4}\right)|0\rangle~,\quad \mathcal{O}_4^{(2)} = L_{-4}|0\rangle~.
\end{equation}
The operator $\mathcal{O}_4^{(1)}$ is the commonly studied quasi-primary often denoted $\Lambda$, and $\mathcal{O}_4^{(2)}$ is secondary. Their norms are
\begin{equation}\label{No4}
{\cal N}_{\Lambda}=\frac c2\left(c+\frac{22}5\right),\qquad\quad{\cal N}_{\mathcal{O}_4^{(2)}}=5c~.
\end{equation}
}
\item{Level 5: there are two operators,
\begin{equation}\label{O5}
\mathcal{O}_5^{(1)} = L_{-1}\left(L_{-2}^2-\frac35L_{-4}\right)|0\rangle~, \quad \mathcal{O}_5^{(2)} = L_{-5}|0\rangle~,
\end{equation}
where both of them are secondary. Their norms are
\begin{equation}\label{No5}
{\cal N}_{\mathcal{O}_5^{(1)}}=4\,c\left(c+\frac{22}5\right),\qquad\quad{\cal N}_{\mathcal{O}_5^{(2)}}=10c~.
\end{equation}
}
\item{Level 6: there are four operators (all acting on $|0\rangle$),
\begin{eqnarray}\label{l6-qps}
\mathcal{O}_6^{(1)}\!\!\!&=&\!\!\!-\frac{20}{63}L_{-6}-\frac{8}{9}L_{-4}L_{-2}+\frac{5}{9}L_{-3}L_{-3},\\
\mathcal{O}_6^{(2)}\!\!\!&=&\!\!\!-\frac{(60c+78)}{(70c+29)}L_{-6}-\frac{3(42c+67)}{(70c+29)}L_{-4}L_{-2}+\frac{93}{(70c+29)}L_{-3}L_{-3}+L_{-2}L_{-2}L_{-2},\nonumber\\
\mathcal{O}_6^{(3)}\!\!\!&=&\!\!\!L_{-1}L_{-1}\big(L_{-2}L_{-2}-\frac35\,L_{-4}\big),\nonumber\\
\mathcal{O}_6^{(4)}\!\!\!&=&\!\!\!L_{-6},\nonumber
\end{eqnarray}
where $\mathcal{O}_6^{(1)}$ and $\mathcal{O}_6^{(2)}$ are quasi-primary, and $\mathcal{O}_6^{(3)}$ and $\mathcal{O}_6^{(4)}$ are secondary. Their norms are
\begin{eqnarray}\label{No6}
{\cal N}_{\mathcal{O}_6^{(1)}}\!\!\!&=&\!\!\!\frac 4{63}\,c\left(70c+29\right),\qquad{\cal N}_{\mathcal{O}_6^{(2)}}=\frac34\,c\,\frac{(2 c-1)\,(5c+22)\,(7c+68)}{(70c+29)},\nonumber\\
{\cal N}_{\mathcal{O}_6^{(3)}}\!\!\!&=&\!\!\!72\,c\left(c+\frac{22}5\right),\qquad{\cal N}_{\mathcal{O}_6^{(4)}}=\frac{35}2c~.\nonumber
\end{eqnarray}
}
\end{itemize}
The secondary operators $L_{-n}|0\rangle$, $n>2$ have the form $\partial^{(n-2)}T/(n-2)!$, with norms $n(n^2-1) c/12$.
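The quoted norms of these pure-derivative secondaries all follow from a single commutator, $\langle 0|L_nL_{-n}|0\rangle = \langle 0|[L_n,L_{-n}]|0\rangle = \tfrac{c}{12}\,n(n^2-1)$; a quick symbolic check against the values listed above:

```python
import sympy as sp

c, n = sp.symbols('c n')
# <0| L_n L_{-n} |0> = <0| [L_n, L_{-n}] |0> = c n(n^2-1)/12
norm = n*(n**2 - 1)*c/12

# compare with the norms quoted above:
# T (n=2), O_3 (n=3), O_4^(2) (n=4), O_5^(2) (n=5), O_6^(4) (n=6)
quoted = {2: c/2, 3: 2*c, 4: 5*c, 5: 10*c, 6: sp.Rational(35, 2)*c}
for k, v in quoted.items():
    assert sp.simplify(norm.subs(n, k) - v) == 0
print("all norms match")
```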
\subsection{Four-point functions of vertex operators}\label{app-Cs-ii}
In this section we provide more details on the transformation properties of the vertex operators used in the computation of the four-point functions $C_{h_1,h_2}(x)$ in subsection \ref{iv-ii}. The final expressions for $C_{h_1,h_2}(x)$ in terms of $x$ are reported in the main text and are not repeated here.
We will need the explicit expressions for the vertex operators at infinity. For $h=2,3$, we have
\es{app-C32}{V^{out}(T,\infty)&=\lim_{z\to\infty}z^4 T(z)\\
V^{out}({\cal O}_3,\infty) &= \lim_{z\to\infty} \Big(-z^{6}\partial T(z)-4z^{5}T(z)\Big)~.}
For $h=4$, we have
\es{O41infty}{
V^{out}(\Lambda,\infty)&=\lim_{z\to\infty}z^8 \Lambda(z)\\
V^{out}(\mathcal{O}_{4}^{(2)},\infty)&=\lim_{z\to\infty}\Big(\frac12z^{8}\partial^2T(z)+5z^{7}\partial T(z)+10z^{6}T(z)\Big)~.}
For $h=5$, we have
\es{O51infty}{
V^{out}(\mathcal{O}_{5}^{(1)},\infty)&=\lim_{z\to\infty}\Big(-z^{10}-z^9L_1\Big)\,\mathcal{O}_5^{(1)}(z)\\
V^{out}(\mathcal{O}_{5}^{(2)},\infty)&=\lim_{z\to\infty}\Big(-\frac16z^{10}\partial^3T(z)-3z^{9}\partial^2T(z)-15z^{8}\partial T(z)-20z^7T(z)\Big)~.
}
Finally, for $h=6$, we have quasi-primary vertex operators
\begin{equation}
V^{out}(\mathcal{O}_{6}^{(i)},\infty)=\lim_{z\to\infty}z^{12}\,\mathcal{O}_6^{(i)}(z),\quad i=\{1,2\}~,
\end{equation}
and secondary vertex operators
\begin{eqnarray}\label{O6infty}
V^{out}(\mathcal{O}_{6}^{(3)},\infty)\!\!\!&=&\!\!\!\lim_{z\to\infty}\Big(z^{12}+z^{11}L_1+\frac{z^{10}}{2}L_1^2\Big)\,\mathcal{O}_6^{(3)}(z),\nonumber\\
V^{out}(\mathcal{O}_{6}^{(4)},\infty)\!\!\!&=&\!\!\!\lim_{z\to\infty}\Big(\frac1{24}\,z^{12}\,\partial^4T(z)+\frac76\,z^{11}\,\partial^3T(z)+\,\frac{21}2\,z^{10}\,\partial^2T(z)\nonumber\\&&\quad~ +35\,z^9\,\partial T(z)+35\,z^8T(z)\Big)~.
\end{eqnarray}
With these in hand, we start with $C_{h,2}(x)$. Using the definitions
\es{}{V^{out}(T,x) &= (x-1)^2 \,T(x)\\
V^{in}(T,1) &= (x-1)^2 \,T(1)\\
V^{in}({\cal O},0) &= {\cal O}(0)}
that follow from section \ref{iv-i}, $C_{h,2}(x)$ takes the general form
\es{}{C_{h,2}(x) = {\cal N}_T^{-1}(x-1)^4 \sum_{i=1}^{d(h)}{\cal N}_{{\cal O}_{h}^{(i)}}^{-1} \bigg\langle V^{out}(\mathcal{O}_{h}^{(i)},\infty)\;T(x)\;T(1)\;\mathcal{O}_{h}^{(i)}(0)\bigg\rangle~.}
We next consider the functions $C_{h,3}(x)$. For $h=3$ we have
\vspace{-5pt}
\begin{equation}\label{app-C33}
C_{3,3}(x)=\mathcal N_{\mathcal{O}_3}^{-2}\bigg\langle V^{out}(\mathcal{O}_3,\infty)\;V^{out}(\mathcal{O}_3,x)\;V^{in}(\mathcal{O}_3,1)\;V^{in}(\mathcal{O}_3,0)\bigg\rangle,
\end{equation}
where $V^{out}(\mathcal{O}_3,\infty)$ is given in \eqref{app-C32} and
\vspace{-5pt}
\begin{eqnarray}\label{O3x1}
V^{out}(\mathcal{O}_3,x)\!\!\!&=&\!\!\!-(x-1)^3\,\partial T(x)-4(x-1)^2\,T(x),\nonumber\\
V^{in}(\mathcal{O}_3,1)\!\!\!&=&\!\!\!(x-1)^3\partial\,T(1)-4\,(x-1)^2T(1)~.
\end{eqnarray}
The expressions for $C_{4,3}(x)$ and $C_{5,3}(x)$ can then be easily obtained using (\ref{O3x1}) and the vertex operators given above.
\section{Schottky parameters in the separating degeneration limit}\label{app-degen}
Here, we provide details of the map between Schottky space and the period matrix in the separating degeneration limit considered in section \ref{v-iii}:
\eq{}{\lbrace p_1,p_2,x\rbrace~ \mapsto ~\lbrace q,s,v\rbrace~.}
The final result, perturbative in $v$ but non-perturbative in $q$ and $s$, is
\es{ay}{p_1 &= q\left(1-v^2(2\hat E_2^{\sigma}) - v^4\left(2(\hat E_2^{\sigma})^2 + {2\over 3}\hat E_2^{\rho} \hat E_2^{\sigma}-{1\over 6} \hat E_4^{\sigma} + {10\over 3}\hat E_2^{\rho} \hat E_4^{\sigma}\right)+O(v^6)\right)\\
p_2 &= p_1(\sigma\leftrightarrow \rho)\\
x &= e^v -4 \hat E_2^{\sigma} \hat E_2^{\rho}(v^3+v^4) + O(v^5qs)~.}
The hatted Eisenstein series were defined in \eqref{eis}. The first two relations were derived in \cite{Yin:2007gv}; here, we will derive the last one.
The computation is an exercise in series solutions of algebraic equations. We make a series ansatz for $x$,
\eq{x}{x = \sum_{j=0}^{\infty}x_j(q,s) v^j~,}
plug this and the expressions for $p_1$ and $p_2$ in \eqref{ay} into the perturbative Schottky relation
\eq{ayb}{e^v= x+x\sum_{n,m=1}^{\infty}p_1^np_2^m\sum_{r=-n-m}^{n+m}d(n,m,r) x^r~,}
and solve order-by-order for $x_j(q,s)$.
An immediate question that may occur to the reader is how we are able to obtain a result \eqref{ay} that is non-perturbative in $q$ and $s$, despite only having access to $d(n,m,r)$ to finite order in $(n,m)$. The answer is that we were able to infer various sum rules obeyed by the $d(n,m,r)$ that we believe to hold for all $(n,m)$:
\begin{subequations}\label{idensum}
\begin{eqnarray}
&&\sum_{r=-n-m}^{n+m} d(n,m,r) = 0\label{sda}\\
&&\sum_{r=-n-m}^{n+m} d(n,m,r) \,r = 0\label{sdb}\\
&&\sum_{r=-n-m}^{n+m} d(n,m,r) \,r^2 = 0\label{sdc}\\
\sum_{n,m=1}^{\infty}q^ns^m\!\!\!\!\!\!\!\!\!&& \sum_{r=-n-m}^{n+m} d(n,m,r) \,r^3 = 24 \hat E_2^{\rho} \hat E_2^{\sigma}\label{sdd}\\
&&\sum_{r=-n-m}^{n+m} d(n,m,r)\, r^4= 0\label{sde}
\end{eqnarray}
\end{subequations}
where $\hat E_2^{\rho}$ was defined in \eqref{eis}. We have also found a set of sum rules obeyed by the $c(n,m,|r|)$ that appear in the other two Schottky relations \eqref{mltprds}:
\begin{eqnarray}\label{idensumc}
&&\sum_{r=-n-m}^{n+m} c(n,m,|r|) = 0~, \quad (n,m)\neq (0,0)\nonumber\\
&&\sum_{r=-n-m}^{n+m} c(n,m,|r|)\, r^2 = 0 ~, \quad n\neq 0\\
\sum_{m=1}^{\infty} s^m \!\!\!\!\!\!\!\!\!&&\sum_{r=-m}^{m} c(0,m,|r|) \,r^2 = 4 \hat E_2^s~.\nonumber
\end{eqnarray}
We have checked all of these identities through $m=n=7$ using the tables of \cite{Gaberdiel:2010jf}.\footnote{We are grateful to the authors of \cite{Gaberdiel:2010jf} for sharing the relevant Mathematica notebooks.} Actually, we have proven \eqref{idensumc}, as well as \eqref{sda}-\eqref{sdc}. Proof of \eqref{idensumc} follows from comparing a series solution for $p_1$ and $p_2$ using \eqref{mltprds} to the known solution \eqref{ay}, and demanding consistency through $O(v^2)$. Proof of \eqref{sda}-\eqref{sdc} follows from demanding that all three perturbative relations in \eqref{mltprds} yield the same result for $\lbrace p_1,p_2,x\rbrace$: hence, having proven \eqref{idensumc}, we can use these to derive sum rules obeyed by $d(n,m,r)$. Proof of \eqref{sdd} and \eqref{sde} is undoubtedly possible using similar methods.
With these sum rules in hand, we proceed to invert \eqref{ayb}. At $O(v^0)$, we must solve
\eq{v0}{1 = x_0(q,s) + x_0(q,s)\sum_{n,m=1}^{\infty}q^ns^m\sum_{r=-n-m}^{n+m} d(n,m,r) x_0(q,s)^r~.}
But \eqref{sda} implies that $x_0(q,s)=1$ solves \eqref{v0}. At $O(v)$, we must solve
\eq{v1}{1 = x_1(q,s) +x_1(q,s)\sum_{n,m=1}^{\infty}q^ns^m \sum_{r=-n-m}^{n+m}d(n,m,r) r~.}
This time, \eqref{sdb} implies that the second term vanishes, leaving $x_1(q,s)=1$. The analysis at $O(v^2)$ is nearly identical, and \eqref{sdc} implies $x_2(q,s)=1/2$.
At $O(v^3)$, the first non-trivial sum appears in the series expansion:
\es{x31}{{1\over 3!} = x_3(q,s) + {1\over 3!} \sum_{n,m=1}^{\infty}q^ns^m \sum_{r=-n-m}^{n+m}d(n,m,r) r^3~.}
Plugging in \eqref{sdd} leads to
\eq{x3}{x_3(q,s) = {1\over 3!}-4 \hat E_2^{\rho} \hat E_2^{\sigma}~.}
Finally, at $O(v^4)$, we must solve
\eq{x41}{{1\over 4!} = x_4(q,s) + {1\over 3!}\sum_{n,m=1}^{\infty}q^ns^m\sum_r d(n,m,r) r^3 + {1\over 4!}\sum_{n,m=1}^{\infty}q^ns^m\sum_r d(n,m,r) r^4~.}
The final sum rule \eqref{sde} eliminates the last term, and \eqref{x3} leaves us with
\eq{}{x_4(q,s) = {1\over 4!} - 4 \hat E_2^{\rho} \hat E_2^{\sigma}~.}
Putting this all together, we find the advertised result in \eqref{ay}.
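The order-by-order inversion above can be verified symbolically. In the sketch below, `E` is shorthand for the product $\hat E_2^{\rho}\hat E_2^{\sigma}$, the moments `M[k]` encode the sum rules \eqref{idensum}, and, to the order shown, $p_1^np_2^m$ may be replaced by $q^ns^m$, since the corrections to $p_{1,2}$ start at $O(v^2)$ and only multiply terms of $O(v^3)$:

```python
import sympy as sp

v, E = sp.symbols('v E')   # E is shorthand for \hat E_2^rho * \hat E_2^sigma

# claimed solution, perturbative in v:  x = e^v - 4 E (v^3 + v^4) + O(v^5)
x = sp.series(sp.exp(v), v, 0, 5).removeO() - 4*E*(v**3 + v**4)

# moments Sum_{n,m} q^n s^m Sum_r d(n,m,r) r^k fixed by the sum rules
M = {0: 0, 1: 0, 2: 0, 3: 24*E, 4: 0}

# Sum_r d x^{r+1} = Sum_k (ln x)^k / k! * Sum_r d (r+1)^k,
# with (r+1)^k expanded by the binomial theorem into the moments above
L = sp.series(sp.log(x), v, 0, 5).removeO()
corr = sum(L**k / sp.factorial(k)
           * sum(sp.binomial(k, j) * M[j] for j in range(k + 1))
           for k in range(5))

# Schottky relation: e^v = x + Sum q^n s^m Sum_r d(n,m,r) x^{r+1}
residual = sp.series(sp.exp(v) - x - corr, v, 0, 5).removeO()
print(sp.simplify(residual))   # prints 0
```

The vanishing residual confirms $x_3 = \tfrac{1}{3!}-4E$ and $x_4 = \tfrac{1}{4!}-4E$ through $O(v^4)$.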
We believe that the above sum rules may have interesting applications in other studies of genus-two Riemann surfaces. It would be interesting to understand, for instance, why the sum \eqref{sdd} factorizes. A systematic exploration of all sum rules obeyed by these coefficients would be worthwhile. It seems likely that higher order sum rules may be expressible in terms of holomorphic Eisenstein series \eqref{eis}.
\section{The order-$c$ part of the free energy and the sewing construction}\label{orderc}
In subsections \ref{sewing} and \ref{iv-i}, we reviewed the sewing construction, which expresses the partition function of an arbitrary CFT on a genus-$g$ Riemann surface in terms of $2g$-point functions on the sphere, as illustrated in figure \ref{fig-sewing}. However, the formulas in those subsections, such as \eqref{Zg}, only give the order-$c^0$ and higher (in $1/c$) terms in the free energy $F=-\ln Z$; they miss the order-$c$ term. (This term depends on the full metric on the Riemann surface, not just its complex structure; in other words it depends on the choice of representative of the Weyl class.) This is adequate for the purposes of this paper, since our main interest is in the higher-order terms in $1/c$. However, for completeness, in this appendix we will explain how to obtain the order-$c$ term within the context of the sewing construction.
As an illustrative example, consider an arbitrary CFT on a flat torus with modular parameter $\tau$. The partition function is well-known to be
\begin{equation}\label{Ztau}
Z(\tau) = (p\bar p)^{-c/24}\sum_ip^{h_i}\bar p^{\tilde h_i}=(p\bar p)^{-c/24}\sum_{h,\tilde h}d(h,\tilde h)p^h\bar p^{\tilde h}\,,
\end{equation}
where $p:=e^{2\pi i\tau}$ and $d(h,\tilde h)$ is the multiplicity of operators of weights $h,\tilde h$. To calculate $Z(\tau)$ using \eqref{Zg} (more precisely, its generalization including the antiholomorphic sector), we need to compute the coefficient $C_{h,\tilde h}$. Applying the definitions \eqref{Ch1hg} and \eqref{G-ii}, we find simply $C_{h,\tilde h}=d(h,\tilde h)$. Hence \eqref{Zg} would give $Z(\tau) = \sum_{h,\tilde h}d(h,\tilde h)p^h\bar p^{\tilde h}$; thus, we are missing the factor $(p\bar p)^{-c/24}$.
We proceed with a brief recap of the sewing construction.\footnote{We will closely follow the discussion of the sewing construction in sections 9.3 and 9.4 of \cite{Polchinski1}. However, that reference considered CFTs with vanishing total central charge (in the context of string theory), so the issue we are focusing on here did not arise there. When comparing to that reference, note also that the correlators there are unnormalized, whereas (as throughout this paper) ours are normalized.} For convenience we will assume that the metric on our Riemann surface $M$ is smooth. We now cut it along a circle and glue in two disks, which we call $D_1$ and $D_2$, to obtain a new manifold $M_0$ (which may be connected or disconnected). We choose the metric on these disks in such a way that the metric on the new surface is still smooth. This implies that $D_1$ and $D_2$ can be glued together to make a sphere with a smooth metric, which we'll call $S$. We consider coordinates $z_{1,2}$ on $D_{1,2}$ which, when extended to $S$, obey $z_1=1/z_2$.
The path integral on $M$ can be computed by inserting a complete set of states on the circle where it has been cut. By the state-operator mapping, this is equivalent to inserting a complete set of operators on $D_{1,2}$ at $z_{1,2}=0$, with an appropriate inverse metric $G^{ij}$ (to be determined below) on the space of operators:
\begin{equation}\label{ZM}
Z(M) = Z(M_0)\sum_{i,j}G^{ij}\ev{\mathcal{O}_i^{(z_1)}\mathcal{O}_j^{(z_2)}}_{M_0}\,,
\end{equation}
where the superscripts denote that $\mathcal{O}_i$ is inserted at the origin of the $z_1$ and $\mathcal{O}_j$ at the origin of the $z_2$ coordinate system. More generally, we can start with arbitrary operators $\mathcal{O}_a\cdots$ on $M$:
\begin{equation}\label{opinsertion}
Z(M)\ev{\mathcal{O}_a\cdots}_M=Z(M_0)\sum_{i,j}G^{ij}\ev{\mathcal{O}_a\cdots\mathcal{O}_i^{(z_1)}\mathcal{O}_j^{(z_2)}}_{M_0}\,.
\end{equation}
To fix the inverse metric $G^{ij}$, we consider the case where $M$ happens to include a patch $D_1'$ that is diffeomorphic to $D_1$, with an operator $\mathcal{O}_k$ inserted at $z_1'=0$. Cutting $M$ along the boundary of $D_1'$ and gluing in $D_1$ and $D_2$ yields $M_0=M\cup S$, where the $S$ is covered by coordinates $z_1'$ and $z_2=1/z_1'$. Equation \eqref{opinsertion} then becomes:
\begin{eqnarray}
Z(M)\ev{\mathcal{O}_a\cdots\mathcal{O}_k^{(z_1')}}_M
&=&Z(M_0)\sum_{i,j}G^{ij}\ev{\mathcal{O}_a\cdots\mathcal{O}_k^{(z_1')}\mathcal{O}_i^{(z_1)}\mathcal{O}_j^{(z_2)}}_{M_0} \nonumber\\
&=&Z(M)Z(S)\sum_{i,j}G^{ij}\ev{\mathcal{O}_a\cdots\mathcal{O}_i^{(z_1)}}_M\ev{\mathcal{O}_j^{(z_2)}\mathcal{O}_k^{(z_1')}}_S
\,.
\end{eqnarray}
For this to hold for arbitrary $\mathcal{O}_k$ and arbitrary insertions $\mathcal{O}_a\cdots$, it must be that
\begin{equation}
G_{ij} = Z(S)\ev{\mathcal{O}_i^{(z_1)}\mathcal{O}_j^{(z_2)}}_S = Z(S)\mathcal{G}_{ij}\,,
\end{equation}
where $\mathcal{G}_{ij}$ is the Zamolodchikov metric.
Now that we have fixed $G^{ij}$, the partition function \eqref{ZM} becomes
\begin{equation}
Z(M) = \frac{Z(M_0)}{Z(S)}\sum_{i,j}\mathcal{G}^{ij}\ev{\mathcal{O}_i^{(z_1)}\mathcal{O}_j^{(z_2)}}_{M_0}\,.
\end{equation}
It is often useful to add a parameter $p$ to the sewing construction, so that the coordinate identification is $z_1z_2=p$. (Even though $p$ can be absorbed in a coordinate transformation on $M_0$, it is useful to fix the coordinate system on $M_0$ and use $p$ to vary the modulus of $M$.) This can be done by replacing $z_2$ by $z_2'$ in the formulas above, and defining $z_2=pz_2'$. We have $\mathcal{O}_j^{(z_2')}=p^{h_j}\bar p^{\tilde h_j}\mathcal{O}_j^{(z_2)}$, so
\begin{equation}\label{ZM2}
Z(M) = \frac{Z(M_0)}{Z(S)}\sum_{i,j}p^{h_j}\bar p^{\tilde h_j}\mathcal{G}^{ij}\ev{\mathcal{O}_i^{(z_1)}\mathcal{O}_j^{(z_2)}}_{M_0}\,.
\end{equation}
Cutting $M$ along $g$ non-contractible cycles, where $g$ is its genus, reduces it to a sphere. This yields the formula \eqref{Zg}, except with a product of sphere partition functions $Z(S_1)\cdots Z(S_g)$ in the denominator. The free energy on any sphere is proportional to $c$, so these factors contribute such a term to $F(M)$. (The coordinate transformation from local coordinates $z_{1,2}$ in the vicinity of each operator insertion to the single coordinate $z$ covering the plane leads to the definition of the operators $V^{out},V^{in}$ explained in subsection \ref{iv-i}.)
To illustrate the application of \eqref{ZM2}, let us return to the example of the flat torus. Set $\beta=\Im\tau$, and let the horizontal cycle have circumference $2\pi$; thus the total area is $4\pi^2\beta$. We will cut it along the horizontal cycle. For $D_1$ and $D_2$ we use unit round hemispheres. Thus $S$ is a round unit sphere, while $M_0$ is a cylinder of circumference $2\pi$ and length $2\pi\beta$ with round endcaps. In the next paragraph we will compute the ratio $Z(M_0)/Z(S)$ using the Liouville action, finding $e^{\pi c\beta/6} = (p\bar p)^{-c/24}$, precisely the prefactor appearing in the expression \eqref{Ztau} for the torus partition function.
In order to compute $Z(M_0)/Z(S)$, we will compute the change in the partition function $Z(M_0)$ under a small change in $\beta$, and then integrate the result up from $\beta=0$ (noting that $M_0|_{\beta=0}=S$). Under a Weyl transformation, $ds^2 = e^{2\omega}d\hat s^2$, the partition function gets transformed by the Liouville action:
\begin{equation}
Z = e^{S_L}\hat Z\,,\qquad
S_L = \frac{c}{24\pi}\int \sqrt{\hat g}\left(\hat g^{ab}\partial_a\omega\partial_b\omega + \hat R\omega\right).
\end{equation}
We will let $d\hat s^2$ be the metric with cylinder length $2\pi\beta$, and $ds^2$ with cylinder length $2\pi(\beta+\delta\beta)$. Hence the Weyl transformation relating them is close to the identity, with $\omega$ of order $\delta\beta$, and we can work to first order in $\omega$. Since the cylinders have the same circumference, $\omega$ vanishes on the cylinder. $\omega$ can also be taken to vanish on, say, the bottom endcap, while on the top endcap it transforms the hemisphere into a hemisphere attached to a thin cylinder of height $2\pi\delta\beta$. On this endcap, $\hat R=2$, so
\begin{equation}
\int\sqrt{\hat g}\hat R\omega = \int\sqrt{\hat g}2\omega = \int\sqrt{g}-\int\sqrt{\hat g} = 4\pi^2\delta\beta\,.
\end{equation}
Hence
\begin{equation}
S_L = \frac{\pi c}{6}\delta\beta\,.
\end{equation}
Integrating from $\beta=0$, we find
\begin{equation}
\ln Z(M_0) = \ln Z(S)+\frac{\pi c}6\beta\,,
\end{equation}
as promised.
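As a consistency check on \eqref{Ztau}: with $p=e^{2\pi i\tau}$ and $\beta=\Im\tau$, one has $p\bar p=e^{-4\pi\beta}$, so $(p\bar p)^{-c/24}=e^{\pi c\beta/6}$, which is exactly the Liouville exponent just derived. A quick numerical check at arbitrary parameter values:

```python
import cmath
import math

# arbitrary test values for c and tau = x + i*beta
c_val, beta_val, x_val = 13.7, 0.83, 0.41
p = cmath.exp(2j*math.pi*(x_val + 1j*beta_val))

# (p pbar)^{-c/24} = |p|^{-c/12} vs. the Liouville result e^{pi c beta / 6}
pref = abs(p)**(2*(-c_val/24))
assert abs(pref/math.exp(math.pi*c_val*beta_val/6) - 1) < 1e-9
print("prefactor check passed")
```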
\end{appendix}
\bibliographystyle{ssg}
\label{sec:intro}
The observed value of the cosmological constant, $\Lambda_0$, is extremely small, smaller than naive theoretical expectations by a factor of $10^{-60}$ to $10^{-120}$.
Also, this tiny vacuum energy density is \emph{now} of the same order as the matter density.
These two mysteries are the cosmological constant problems: the former is the cosmological constant hierarchy problem, and the latter is the cosmological constant coincidence problem.
The most promising solution \cite{Weinberg:1987dv,Weinberg:1988cp,Efstathiou:1995ne,Martel:1997vi,Garriga:1999hu,Garriga:2000cv,Pogosian:2006fx}
to these problems is using anthropic selection \cite{carter,davies,carter_mccrea,barrow_tipler,greenstein,Stewart:2000vu},
which notes that we should take into account our own existence when we consider quantities that we observe.
In particular, the probability of observing a given value of the cosmological constant is
\begin{equation}\label{eq:prob}
\fn{P}{\Lambda|\smiley} = \frac{\fn{P}{\Lambda} \fn{P}{\smiley|\Lambda}}{\fn{P}{\smiley}}
\end{equation}
where $\fn{P}{\Lambda}$ is the probability distribution of the cosmological constant in the whole universe and $\fn{P}{\smiley|\Lambda}$ is the probability of finding an observer in a region with cosmological constant $\Lambda$.
Thus, even if $\fn{P}{\Lambda}$ is small, $\fn{P}{\Lambda|\smiley}$ may be large depending on the anthropic likelihood
\begin{equation}
\fn{L}{\Lambda|\smiley} = \frac{\fn{P}{\smiley|\Lambda}}{\fn{P}{\smiley}} .
\end{equation}
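Schematically, \eqref{eq:prob} is Bayes' theorem evaluated on a grid. The toy sketch below is purely illustrative: the flat prior follows the discussion above, but the exponential likelihood and the units are assumptions, not the models developed later in the paper:

```python
import numpy as np

# Lambda in units of the observed value Lambda_0 (illustrative choice)
lam = np.linspace(1e-3, 50.0, 5000)
dlam = lam[1] - lam[0]

prior = np.ones_like(lam)            # flat P(Lambda) over the anthropic range
likelihood = np.exp(-lam / 5.0)      # toy suppression of large Lambda (assumption)
posterior = prior * likelihood
posterior /= posterior.sum() * dlam  # normalized P(Lambda | observer)

# posterior probability of observing Lambda below the observed value
mass_below = posterior[lam <= 1.0].sum() * dlam
print(round(mass_below, 2))          # ~0.18 for this toy likelihood
```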
For anthropic selection to be able to select the observed value, the observed value must exist.
For it to exist naturally, two things are necessary: a sufficient number of different low energy laws of physics (i.e.\ vacua), so that the observed value occurs naturally among them, and the realization of those low energy laws of physics in different regions of the universe.
String theory calculations have supported the anthropic prediction of at least $10^{10^2}$ different vacua, while both the many-worlds interpretation of quantum mechanics and eternal inflation generate a multiverse realizing the vacua.
To determine $\fn{P}{\Lambda}$ one must understand the fundamental theory far better than we currently do, and even then a proper measure for the eternally inflating multiverse is lacking.
Though it seems reasonable to take $\fn{P}{\Lambda}$ to be constant over the anthropically interesting range of the cosmological constant \cite{Weinberg:1987dv,Efstathiou:1995ne},
a choice of cutoff for the eternally inflating multiverse may then modify this flat prior, though it is unknown which, if any, is the correct choice.
In this paper we consider three types of multiverse measure:
the pocket based measure which Weinberg and Vilenkin assumed \cite{Garriga:1998px,Vanchurin:1999iv,Garriga:2005av,Easther:2005wi}, the scale factor cutoff measure \cite{DeSimone:2008bq,Bousso:2008hz} and the causal patch measure \cite{Bousso:2006ev,Bousso:2006ge}.
To estimate the anthropic likelihood it is necessary to have an anthropic model which relates the number of observers to some calculable quantity.
Weinberg et al.\ \cite{Martel:1997vi} and Vilenkin et al.\ \cite{Pogosian:2006fx} modeled the number of observers as proportional to the total mass in gravitationally collapsed objects with mass greater than a certain threshold, usually taken to be the mass of the Milky Way, at late times.
Using this model they postdicted the observed value of the cosmological constant with an error of 1 to 2$\sigma$.
The total mass in gravitationally collapsed objects
depends not only on the cosmological constant but also on the primordial density perturbation amplitude, $Q$,
and there have been some studies to understand our observed value of the primordial density perturbation amplitude, $Q_0$,
using anthropic selection \cite{Tegmark:1997in,Banks:2003es, Graesser:2004ng,Garriga:2005ee}.
Tegmark \& Rees \cite{Tegmark:1997in}
showed that both too high and too low a primordial density perturbation amplitude may be harmful for observers,
and constrained anthropically allowed values of the primordial density perturbation amplitude to within an order of magnitude of the observed value.
The plan of our paper is as follows.
In \sect{sec:prior}, we review multiverse measures and how they affect the prior.
In \sect{sec:fin}, we review Weinberg's anthropic model \cite{Martel:1997vi,Garriga:1999hu,Garriga:2000cv,Pogosian:2006fx}, discuss its deficiencies,
and introduce some improved models.
In \sect{sec:his}, we introduce anthropic models using the mass history of the collapsed object.
We summarize our result in \sect{sec:sum} and discuss future work in \sect{sec:dis}.
\section{Prior distribution and choice of multiverse measure}
\label{sec:prior}
As in \eq{eq:prob}, the probability of observing an observable $O$ is
\begin{equation}\label{eq:prior_vague}
\fn{P}{O|\smiley} = \fn{P}{O} \fn{L}{O|\smiley}.
\end{equation}
In this paper we focus on the anthropic likelihood $\fn{L}{O|\smiley}$,
but the prior distribution also affects the probability $\fn{P}{O|\smiley}$,
so we will briefly review possible prior distributions of the cosmological constant and the primordial density perturbation amplitude.
In the eternally inflating multiverse,
the number of observers in each universe is infinite,
so comparing the numbers of observers in different types of universe is ill-defined.
However, the comoving anthropic likelihood,
which counts the number of observers in a comoving volume, $\fn{L_\mathrm{c}}{O|\smiley}$,
is well-defined since all the ambiguity is left in the corresponding prior distribution $\fn{P_\mathrm{c}}{O}$.
$\fn{P_\mathrm{c}}{O}$ can be divided into two parts:
the primordial prior distribution $\fn{P_\varnothing}{O} \equiv \fn{P_\mathrm{c}}{O, t=0}$ which comes from both the fundamental theory and the multiverse ambiguity at the primordial stage,
and an additional factor $\fn{W_\mathrm{c}}{O,t_{\smiley}} \equiv {\fn{P_\mathrm{c}}{O}} / {\fn{P_\varnothing}{O}}$,
which depends on the observing time $t_{\smiley}$.
Then \eq{eq:prior_vague} can be written as
\begin{equation}
\fn{P}{O|\smiley} = \fn{P_\varnothing}{O} \fn{W_\mathrm{c}}{O, t_{\smiley}} \fn{L_\mathrm{c}}{O|\smiley}.
\end{equation}
In the case that $O$ is the cosmological constant,
if one assumes that $\Lambda = 0$ is not unique,
then it seems reasonable to take $\fn{P_\varnothing}{\Lambda}$ to be constant over the anthropically interesting range of $\Lambda$,
since these values are very small compared with particle physics scales \cite{Weinberg:1987dv,Efstathiou:1995ne}.
$\fn{W_\mathrm{c}}{\Lambda, t_{\smiley}}$ is determined by the multiverse measure.
We consider three multiverse measures:
the pocket based measure \cite{Garriga:1998px,Vanchurin:1999iv,Garriga:2005av,Easther:2005wi},
the scale factor cutoff measure \cite{DeSimone:2008bq,Bousso:2008hz} and the causal patch measure \cite{Bousso:2006ev,Bousso:2006ge},
which correspond to counting the number of observers within a comoving volume,
a physical volume
and a Hubble volume, respectively.
In the case of the pocket based measure,
$\fn{W_\mathrm{c}}{\Lambda, t_{\smiley}} = 1$ by definition.
However, in the cases of the scale factor cutoff measure and the causal patch measure,
$\fn{W_\mathrm{c}}{\Lambda, t_{\smiley}}$ depends on the choice of $t_{\smiley}$.
Assuming that we are typical observers, and the preferred time is mostly determined by stellar evolution, we set the observing time $t_{\smiley}$ as the physical time $t_0$ (14 billion years)\footnote{The origin of time could also be chosen as the time when the density perturbation becomes of order unity,
but this would not make much difference to our results.
Lineweaver \& Egan \cite{Lineweaver:2007qh} estimated the age distribution of terrestrial planets, and argued that $t_{\smiley} = t_0$ may be a typical time for the terrestrial-type observers.}.
$\fn{L_\mathrm{c}}{\Lambda | \smiley}$ is determined by the number observers in a comoving volume,
and we will focus on it in the following sections.
In this section we will neglect its effect and set it as constant.
\begin{figure}[hbt]
\begin{center}
\includegraphics[height=0.3\textwidth]{eps/measure_prob}
\caption{\label{fig:prob_measure}
Normalized probability distribution $\Lambda \fn{P}{\Lambda|\smiley}$ for the {\color{red} scale factor cutoff} measure and the {\color{blue} causal patch} measure,
assuming both $\fn{P_\varnothing}{\Lambda}$ and $\fn{L_\mathrm{c}}{\Lambda|\smiley}$ are constant, and $t_{\smiley} = t_0$.
The pocket based measure is not shown here because it is very small.
We use $\Lambda \fn{P}{\Lambda|\smiley}$ to make the area inside the curve to be the probability.}
\end{center}
\end{figure}
\begin{table}[hbt]
\begin{center}
\begin{tabular}{|c||c|c|c|}
\hline $\typicality{\Lambda_0}$ & PB & SFC & CP \\[0.5ex]
\hline \hline $t_{\smiley} = t_0$ & $2 \times 10^{-120}$ & 0.55 & 0.14 \\[0.5ex]
\hline
\end{tabular}
\caption{\label{tab:prior_lambda}
Typicalities of the observed value of the cosmological constant, $\typicality{\Lambda_0}$, for the pocket based measure (PB), the scale factor cutoff measure (SFC), and the causal patch measure (CP),
assuming both $\fn{P_\varnothing}{\Lambda}$ and $\fn{L_\mathrm{c}}{\Lambda|\smiley}$ are constant.
We assume $0 \lesssim \Lambda \lesssim 1$.}
\end{center}
\end{table}
\fig{fig:prob_measure} and \tab{tab:prior_lambda} show $\Lambda \fn{P}{\Lambda|\smiley}$ and the typicality \cite{Page:2006er} of $\Lambda_0$ for different multiverse measures, assuming $\fn{P_\varnothing}{\Lambda}$ and $\fn{L_\mathrm{c}}{\Lambda|\smiley}$ are constant so that only $\fn{W_\mathrm{c}}{\Lambda, t_{\smiley}}$ affects the shape of $\fn{P}{\Lambda|\smiley}$, where the typicality of an observable $O = O_0$ is defined as\footnote{In this paper we only consider the typicality within the range $O > 0$ and so normalize the probability as $\displaystyle \int_0^{\infty} \d{O} \fn{P}{O|\smiley} = 1$. The typicality using the whole range of $O$ is greater than that using $O > 0$ \cite{Pogosian:2006fx}.} \begin{equation}
\fn{\mathcal{T}_+}{O_0} = 2 \times \min
\biggl[ \int_0 ^{O_0 } \d{O} \, \fn{P}{O|\smiley},
\int_{O_0 } ^{\infty} \d{O} \, \fn{P}{O|\smiley}\biggr]\,.\label{eq:typicality} \end{equation}
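The typicality \eqref{eq:typicality} is a two-sided tail probability, equal to 1 at the median of $\fn{P}{O|\smiley}$ and small in either tail. A minimal numerical implementation for a distribution sampled on a uniform grid (illustrative, not tied to any particular measure):

```python
import numpy as np

def typicality(grid, pdf, o0):
    """T_+(O_0) = 2 min( P(O < O_0), P(O > O_0) ), restricted to O > 0."""
    dx = grid[1] - grid[0]
    pdf = pdf / (pdf.sum() * dx)        # normalize on the positive range
    cdf = np.cumsum(pdf) * dx
    below = float(np.interp(o0, grid, cdf))
    return 2.0 * min(below, 1.0 - below)

# sanity checks on a flat distribution over (0, 1):
grid = np.linspace(1e-4, 1.0, 100000)
flat = np.ones_like(grid)
print(typicality(grid, flat, 0.5))   # ~1.0 (the median is maximally typical)
print(typicality(grid, flat, 0.25))  # ~0.5
```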
In the case of the pocket based measure,
$\fn{W_\mathrm{c}}{\Lambda, t_{\smiley}} = 1$ so $\Lambda \fn{P}{\Lambda|\smiley}$ is low for $\Lambda \sim \Lambda_0$.
In the cases of the scale factor cutoff measure and the causal patch measure,
since a physical volume and a Hubble volume are smaller than a comoving volume in a large $\Lambda$,
the scale factor cutoff and the causal patch measure suppress the region where $\Lambda$ dominates at $t = t_0$,
which makes the typicality of $\Lambda_0$ for both measures high.
See Appendix~\ref{app:prior} for analytic forms.
\begin{table}[hbt]
\centering
\begin{tabular}{|c||c|c|}
\hline $\typicality{Q_0}$ & $\fn{P_\mathrm{c}}{Q} = \textrm{constant}$ & $\fn{P_\mathrm{c}}{Q} \propto Q^{-1}$ \\[0.5ex]
\hline \hline - & $2 \times 10^{-5}$ & 0.63 \\[0.5ex]
\hline
\end{tabular}
\caption{\label{tab:prior_q}
Typicalities of the observed value of the primordial density perturbation amplitude, $\typicality{Q_0}$,
for different $\fn{P}{Q}$, assuming $\fn{L_\mathrm{c}}{Q|\smiley}$ is constant.
We assume $10^{-16} \lesssim Q \lesssim 1$.
}
\end{table}
In the case of the primordial density perturbation amplitude,
since it does not affect the volume of the universe,
the multiverse ambiguity is independent of $t_{\smiley}$
so $\fn{W_\mathrm{c}}{Q, t_{\smiley}} = 1$.
If we also assume that $\fn{L_\mathrm{c}}{Q|\smiley}$ is constant,
then $\fn{P}{Q|\smiley}$ is determined only by $\fn{P_\varnothing}{Q}$.
Since $\fn{P_\varnothing}{Q}$ is unknown,
we consider two toy models for $\fn{P_\varnothing}{Q}$: flat in linear scale, i.e.\ $\fn{P_\varnothing}{Q} = \textrm{constant}$,
and flat in log scale, i.e.\ $\fn{P_\varnothing}{Q} \propto Q^{-1}$.
In the case $\fn{P_\varnothing}{Q} = \textrm{constant}$, the typicality is small,
which means that the prior distribution itself predicts much larger $Q$ than $Q_0$.
On the other hand,
since $Q_0 \sim 10^{-5}$ lies in the middle of a plausible range of $Q$ in the log scale,
$10^{-16} \lesssim Q \lesssim 1$,\footnote{We set the lower bound of $Q$ from $Q \gtrsim H_\mathrm{inflation} \gtrsim m_\mathrm{susy}$.}
$\fn{P_\varnothing}{Q} \propto Q^{-1}$ gives relatively large $\typicality{Q_0}$.
See \tab{tab:prior_q}.
However, the actual probability of the observable $O$, taking our existence into account, is given by
\begin{equation}
\fn{P}{O|\smiley} = \fn{P_\varnothing}{O} \fn{W_\mathrm{c}}{O, t_{\smiley}} \fn{L_\mathrm{c}}{O|\smiley}.
\end{equation}
Therefore, our results can change significantly, depending on the actual form of $\fn{L_\mathrm{c}}{O|\smiley}$.
In particular, even the pocket based measure and $\fn{P_\mathrm{c}}{Q} = \textrm{constant}$ may be able to explain $\Lambda_0$ and $Q_0$ well once combined with the anthropic likelihood.
\section{Anthropic models using a single mass constraint}
\label{sec:fin}
\subsection{Weinberg's anthropic model: $M \geq M_*$ at $t \to \infty$}
\label{sec:fin_classic}
Weinberg et al.\ \cite{Martel:1997vi} and Vilenkin et al.\ \cite{Pogosian:2006fx} model the number of observers as proportional to the total mass in gravitationally collapsed objects with mass greater than a certain threshold, $M \geq M_*$, at late times, $t \to \infty$.
There are several motivations for this model:
\begin{enumerate}
\item
Uncollapsed mass is not expected to give rise to observers.
\item
The total mass of gravitationally collapsed objects is one of the easiest quantities to calculate.
\item
If the collapsed object is too small, then there may be no chance for the evolution of complex life, for example, due to lack of metals.
\item
Once a collapsed object is large enough to be habitable, the number of observers may plausibly be proportional to the number of baryons in the object and hence proportional to the total mass.
\item
Once the mass of an object exceeds $M_*$, it may become habitable irrespective of when it formed, so the collapsed objects with $M \geq M_*$ at $t \to \infty$ may include all habitable collapsed objects.
\end{enumerate}
Weinberg's model includes our object, the Milky Way,
but it also includes supermassive objects.
However, supermassive objects may not be very habitable, for example, due to the strong interactions between galaxies in superclusters.
If this is true, including these supermassive objects biases the result toward a small cosmological constant, since a small cosmological constant lets matter collapse more easily.
Thus, Weinberg's model may give a misleadingly good result.
The choice of $M_*$ in Weinberg's model is another difficulty.
If $M_*$ is taken to be the mass of the Milky Way, as is usually done, then the typical mass of objects with $M \geq M_*$ is greater than that of the Milky Way, and we are not typical.
This anomaly can be reduced by choosing a smaller $M_*$, but reducing $M_*$ lowers the resulting typicality, and there is no obvious smaller choice of $M_*$.
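The quantity these models are built on, the mass fraction in collapsed objects with $M \geq M_*$, is given in the Press-Schechter formalism by an error-function formula. The sketch below uses an illustrative power-law $\sigma(M)$ and the Einstein-de Sitter collapse threshold $\delta_\mathrm{c} = 1.686$; the numerical values are assumptions for illustration, not our fitted spectrum:

```python
import math

DELTA_C = 1.686  # spherical-collapse threshold (Einstein-de Sitter value)

def sigma(mass, sigma8=0.8, m8=2e14, gamma=0.25):
    """Illustrative power-law rms mass fluctuation sigma(M); the slope and
    normalization here are toy values standing in for the real spectrum."""
    return sigma8 * (mass / m8) ** (-gamma)

def collapsed_fraction(m_star, growth=1.0):
    """Press-Schechter fraction of mass in objects with M >= M_*.
    The linear growth factor carries the Lambda and t dependence: smaller
    Lambda or later t means more growth, hence a larger collapsed fraction."""
    return math.erfc(DELTA_C / (math.sqrt(2.0) * growth * sigma(m_star)))
```

Raising $M_*$ lowers the collapsed fraction, while anything that enhances growth (later times, smaller $\Lambda$, larger $Q$) raises it; this is the mechanism behind the likelihoods below.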
\subsection{$M = M_*$ at $t = t_*$}
\label{sec:fin_fix}
This model assumes that there exist anthropically preferred mass and time scales $M_*$ and $t_*$,
so that the number of observers is proportional to the fraction of gravitationally collapsed objects with $M = M_*$ at $t = t_*$.%
\footnote{Graesser \& Salem \cite{Graesser:2006ft} also used $M = M_*$ but kept $t \to \infty$.}
In order to choose $M_*$ and $t_*$ we will use the assumption that we are typical observers,
although one must be careful not to introduce bias by considering features due to our value of the cosmological constant as opposed to features affecting the formation of observers.
Assuming we are typical observers, it seems natural to choose $M_*$ as the mass of the Milky Way and $t_*$ as $t_0$ (14 billion years).
However, the Press-Schechter formalism \cite{Press:1973iz}, which we use to calculate the fraction of collapsed objects, identifies the Local Group, not the Milky Way, at $t = t_0$.
This is because the formalism can only identify objects of the mass of the Milky Way that are isolated out to at least $1.9\mathinner{\mathrm{Mpc}}$ at $t = t_0$, and Andromeda and other members of the Local Group are now within this range.
Thus, the Press-Schechter formalism seems to require us to choose $M_*$ as the mass of the Local Group; but since we do not seem to have any plausible anthropic justification for using the Local Group as our object%
\footnote{Other members of a group of galaxies may perturb merging objects away from direct hit trajectories, which may be anthropically beneficial.}, this would not be consistent either.
However, it is not only the present time which affects our existence.
For example, the state of the galaxy before the formation of the solar system may be essential by influencing the star formation rate or metal abundance.
Also, galaxies may need to be isolated up to a certain time, in order to prevent harmful interactions.
Thus, we may have anthropic motivation to choose $t_*$ earlier than the formation of the solar system,
or even earlier than the formation of the Local Group.
If we set $t_*$ earlier than the formation of the Local Group,
the technical problem with using the Press-Schechter formalism disappears,
since in this case we can identify the Milky Way as an isolated collapsed object.
Thus, we set $t_*$ as a time earlier than the formation of the Local Group and $M_*$ as the mass of the Milky Way at that time.
Here, as a definite example, we take $t_*$ as 6 billion years.
Also, Refs.~\cite{Hammer:2007ki,Burstein:2004pn} suggest that the Milky Way may not have had any major interaction or a significant amount of minor mergers over the last 10 billion years, so we may approximate the mass of the Milky Way at 6 billion years as similar to its current mass.
Thus, we set $M_*$ as the mass of the Milky Way ($M_\mathrm{MW}$).
\subsection{Results}
\label{sec:fin_res}
\begin{figure}[hbt]
\centering
\includegraphics[height=0.3\textwidth]{eps/final-mw} \hspace{20pt}
\includegraphics[height=0.3\textwidth]{eps/likeQ}
\caption{Anthropic likelihoods for the anthropic models using a single mass constraint.
Left: anthropic likelihoods for the cosmological constant $\fn{L_\mathrm{c}}{\Lambda|\smiley}$.
Anthropic model: {\color{red} $M \geq M_\mathrm{MW}$ at $t \to \infty$}, {\color[rgb]{0,0.5,0} $M \geq M_\mathrm{MW}$ at $t = 6\mathinner{\mathrm{Gyr}}$} and {\color{blue} $M = M_\mathrm{MW}$ at $t = 6\mathinner{\mathrm{Gyr}}$}.
Right: anthropic likelihoods for the primordial density perturbation amplitude $\fn{L_\mathrm{c}}{Q|\smiley}$, for $\Lambda = \Lambda_0$.
$\fn{L_\mathrm{c}}{Q|\smiley}$ for anthropic models with $M \geq M_\mathrm{MW}$ is not shown because it is very small.
}\label{fig:final}
\end{figure}
In addition to Weinberg's model, $M \geq M_\mathrm{MW}$ at $t \to \infty$, and the model $M = M_\mathrm{MW}$ at $t = 6\mathinner{\mathrm{Gyr}}$,
we consider the model $M \geq M_\mathrm{MW}$ at $t = 6\mathinner{\mathrm{Gyr}}$
to see how the mass and time conditions independently affect the likelihood.
We also consider the anthropic likelihood for the primordial density perturbation amplitude,
assuming $\Lambda = \Lambda_0$.
\fig{fig:final} shows both $\fn{L_\mathrm{c}}{\Lambda|\smiley}$ and $\fn{L_\mathrm{c}}{Q|\smiley}$ for each model
(see Appendix~\ref{app:final} for the analytic forms).
\begin{figure}[hbt]
\centering
\includegraphics[height=0.3\textwidth]{eps/final-mw-pb}
\includegraphics[height=0.3\textwidth]{eps/final-mw-sfc}
\includegraphics[height=0.3\textwidth]{eps/final-mw-cp}
\caption{Probability of an observer observing $\Lambda$, $\fn{P}{\Lambda|\smiley}$.
Left: pocket based measure; Middle: scale factor cutoff measure with $t_\mathrm{obs} = t_0$;
Right: causal patch measure with $t_\mathrm{obs} = t_0$.
Anthropic model: {\color{red} $M \geq M_\mathrm{MW}$ at $t \to \infty$}, {\color[rgb]{0,0.5,0} $M \geq M_\mathrm{MW}$ at $t = 6\mathinner{\mathrm{Gyr}}$} and {\color{blue} $M = M_\mathrm{MW}$ at $t = 6\mathinner{\mathrm{Gyr}}$}.}\label{fig:final_lambda}
\end{figure}
\begin{table}[hbt]
\centering
\begin{tabular}{|r @{~at~} l ||c|c|c|}
\hline \multicolumn{2}{|c||}{\multirow{2}{*}{$\typicality{\Lambda_0}$}} & \multirow{2}{*}{PB} & SFC & CP \\
\multicolumn{2}{|c||}{} & & $t_{\smiley} = t_0$ & $t_{\smiley} = t_0$ \\[0.5ex]
\hline \hline $M \geq M_\mathrm{MW}$ & $t \to \infty$ & 0.22 & 0.36 & 0.11 \\[0.5ex]
\hline $M \geq M_\mathrm{MW}$ & $t = 6\mathinner{\mathrm{Gyr}}$ & 0.086 & 0.49 & 0.16 \\[0.5ex]
\hline $M = M_\mathrm{MW}$ & $t = 6\mathinner{\mathrm{Gyr}}$ & 0.049 & 0.52 & 0.17 \\[0.5ex]
\hline
\end{tabular}
\caption{\label{tab:final_lambda}
$\typicality{\Lambda_0}$ for the anthropic models using a single mass constraint for the pocket based measure (PB), the scale factor cutoff measure (SFC), and the causal patch measure (CP).}
\end{table}
\begin{table}[hbt]
\centering
\begin{tabular}{|r @{~at~} l ||c|c|}
\hline \multicolumn{2}{|c||}{$\typicality{Q_0}$} & $\fn{P_\mathrm{c}}{Q} = \mathrm{constant}$ & $\fn{P_\mathrm{c}}{Q} \propto Q^{-1}$ \\[0.5ex]
\hline \hline $M \geq M_\mathrm{MW}$ & $t \to \infty$ & $7 \times 10^{-7}$ & $8 \times 10^{-3}$ \\[0.5ex]
\hline $M \geq M_\mathrm{MW}$ & $t = 6\mathinner{\mathrm{Gyr}}$ & $1 \times 10^{-7}$ & $2 \times 10^{-3}$ \\[0.5ex]
\hline $M = M_\mathrm{MW}$ & $t = 6\mathinner{\mathrm{Gyr}}$ & 0.33 & 0.76 \\[0.5ex]
\hline
\end{tabular}
\caption{\label{tab:final_Q}
$\typicality{Q_0}$ for the anthropic models using a single mass constraint, for $\Lambda = \Lambda_0$.
We assume $10^{-16} \lesssim Q \lesssim 1$.}
\end{table}
\fig{fig:final_lambda} and \twotab{tab:final_lambda}{tab:final_Q} summarize the typicalities in the different anthropic models.
In the case of the pocket based measure, which Weinberg et al.\ \cite{Martel:1997vi} implicitly used,
$\typicality{\Lambda_0}$ decreases by a factor of two as $t_*$ changes from infinity to $6\mathinner{\mathrm{Gyr}}$,
and decreases by a further factor of two as the constraint changes from $M \geq M_\mathrm{MW}$ to $M = M_\mathrm{MW}$.
This illustrates how Weinberg's model may overestimate the typicality.
On the other hand, in the cases of the scale factor cutoff and causal patch measures,
the prior distribution from the measure already suppresses the region where the difference between the anthropic likelihoods from the different anthropic models is significant.
Therefore, in these cases,
all three anthropic models provide typicalities similar to that obtained by assuming only $t_{\smiley} = t_0$.
In the case of the primordial density perturbation amplitude,
anthropic models with the mass constraint $M \geq M_\mathrm{MW}$ include supermassive objects,
which always prefer large $Q$.
Therefore, anthropic models with $M \geq M_\mathrm{MW}$ give a low typicality of $Q_0$,
i.e.\ beyond $3\sigma$.
Thus, Weinberg's model may require the extra anthropic bound on $Q$ suggested by Tegmark \& Rees \cite{Tegmark:1997in},
$10^{-1} Q_0 \lesssim Q \lesssim 10 Q_0$,
which ensures sufficient cooling of galaxies and the stable orbits of planets.
On the other hand,
our model $M = M_\mathrm{MW}$ at $t = 6\mathinner{\mathrm{Gyr}}$ gives a high typicality of $Q_0$, i.e.\ within $1\sigma$, without any additional assumption.
Note that the choice of the prior distribution for $Q$ does not make any qualitative difference.
\subsection{Degeneracies between $\Lambda$ and $Q$}\label{sec:fin_Q}
In principle, one should analyze the entire space of physical parameters to determine the anthropic likelihood of the cosmological constant.
A first step in this direction is to examine the two dimensional parameter space of $\Lambda$ and $Q$.
In this two dimensional parameter space, larger primordial density perturbation amplitude can cancel the effect of the cosmological constant \cite{Tegmark:1997in,Banks:2003es, Graesser:2004ng,Garriga:2005ee},
leading to degeneracies in the parameter space.
\begin{figure}[hbt]
\centering
\includegraphics[height=0.3\textwidth]{eps/Q-Lambda}
\caption{ \label{fig:Q_lamb}
$\fn{Q}{\Lambda,t_*}$ for which the population of galaxies and clusters at $t = t_*$ is independent of $\Lambda$, for {\color{red} $t_* = 6\mathinner{\mathrm{Gyr}}$} and {\color{blue} $t_* \to \infty$}.
}
\end{figure}
For simplicity, we choose a slice from the $(\Lambda, Q)$ space which maximizes the degeneracy.
We set $Q = \fn{Q}{\Lambda,t_*}$ so that the value of the matter power spectrum on the scales of galaxies at $t = t_*$ is independent of the cosmological constant.
\fig{fig:Q_lamb} shows $\fn{Q}{\Lambda,t_*}$, and it has the large $\Lambda$ behavior
\begin{equation}
\fn{Q}{\Lambda,t_*} \propto \Lambda^{\frac{1}{3}}
\qquad \textrm{for } \Lambda / \Lambda_0 \gg \fn{f}{{t_*}/{t_0}}\,,
\end{equation}
where $\fn{f}{6\mathinner{\mathrm{Gyr}} / t_0} \simeq 10$, and the late time behavior
\begin{equation}
\fn{Q}{\Lambda,\infty} = Q_0 \left(\frac{\Lambda}{\Lambda_0}\right)^{\frac{1}{3}} .
\end{equation}
See Appendix~\ref{app:sigma_qfunc} for the exact form of $\fn{Q}{\Lambda,t_*}$.
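Heuristically, the late-time scaling can be traced to the freezing of linear growth: perturbations grow as $\delta \propto a$ during matter domination and stop growing once $\Lambda$ dominates, at the scale factor $a_\Lambda \sim (\rho_{\mathrm{m},0}/\rho_\Lambda)^{1/3} \propto \Lambda^{-1/3}$. Holding the frozen amplitude fixed then requires
\begin{equation}
Q\, a_\Lambda = \textrm{constant}
\quad\Rightarrow\quad
Q \propto \Lambda^{\frac{1}{3}} ,
\end{equation}
in agreement with the exact late-time form above.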
On the slice $Q = \fn{Q}{\Lambda,t_*}$, the probability of observing $\Lambda = \Lambda'$ is
\footnote{We define $\displaystyle \fn{P_\mathrm{c}}{\fn{Q}{\Lambda,t_*}} \equiv \int \d{\Lambda'} \fn{P_\mathrm{c}}{\Lambda=\Lambda',Q=\fn{Q}{\Lambda',t_*}}$.}
\begin{equation}
\fn{P}{\Lambda=\Lambda'|\smiley,\fn{Q}{\Lambda,t_*}} =
\frac{ \fn{P}{\Lambda=\Lambda'|\fn{Q}{\Lambda,t_*}} \fn{P}{\smiley|\Lambda=\Lambda',\fn{Q}{\Lambda,t_*}} }{ \fn{P}{\smiley|\fn{Q}{\Lambda,t_*}} } .
\end{equation}
For anthropic models using a single time $t = t_*$ and $Q = \fn{Q}{\Lambda,t_*}$,
the population of galaxies and clusters at $t_*$ is independent of the cosmological constant,
and so $\fn{P}{\smiley|\Lambda=\Lambda',\fn{Q}{\Lambda,t_*}}$ is independent of $\Lambda'$.
Therefore, the anthropic likelihood becomes
\begin{equation}
\fn{L_\mathrm{c}}{\Lambda=\Lambda'|\smiley,\fn{Q}{\Lambda,t_*}} \equiv \left. \frac{ \fn{P}{\smiley|\Lambda=\Lambda',\fn{Q}{\Lambda,t_*}} }{ \fn{P}{\smiley|\fn{Q}{\Lambda,t_*}} } \right|_\mathrm{c} = 1 ,
\end{equation}
and the probability of observing $\Lambda = \Lambda'$ reduces to the modified prior,
\begin{align}
\fn{P}{\Lambda=\Lambda'|\smiley,\fn{Q}{\Lambda,t_*}} & = \fn{P_\mathrm{c}}{\Lambda=\Lambda'|\fn{Q}{\Lambda,t_*}} \\
& = \frac{ \fn{P_\mathrm{c}}{\Lambda=\Lambda'} \fn{P_\mathrm{c}}{\fn{Q}{\Lambda,t_*}|\Lambda=\Lambda'} }{ \fn{P_\mathrm{c}}{\fn{Q}{\Lambda,t_*}} } .
\end{align}
Therefore,
\begin{equation}
\frac{ \fn{P}{\Lambda=\Lambda'|\smiley,\fn{Q}{\Lambda,t_*}} }{ \fn{P_\mathrm{c}}{\Lambda=\Lambda'} }
\propto \fn{P_\mathrm{c}}{Q=\fn{Q}{\Lambda',t_*}}
\end{equation}
depends on the prior distribution of $Q$.
For example,
if $\fn{P_\mathrm{c}}{Q} = \textrm{constant}$,
\begin{equation}
\frac{ \fn{P}{\Lambda=\Lambda'|\smiley,\fn{Q}{\Lambda,t_*}} }{ \fn{P_\mathrm{c}}{\Lambda=\Lambda'} } = \textrm{constant} ,
\end{equation}
or if $\fn{P_\mathrm{c}}{Q} \propto Q^{-1}$,
\begin{equation}
\frac{ \fn{P}{\Lambda=\Lambda'|\smiley,\fn{Q}{\Lambda,t_*}} }{ \fn{P_\mathrm{c}}{\Lambda=\Lambda'} } \propto {\Lambda'}^{-\frac{1}{3}}
\qquad \textrm{for } \Lambda' / \Lambda_0 \gg \fn{f}{{t_*}/{t_0}}\,.
\end{equation}
On the other hand,
if we apply Tegmark \& Rees' anthropic bound of $Q$ \cite{Tegmark:1997in},
$10^{-1} Q_0 \lesssim Q \lesssim 10 Q_0$,
$\Lambda$ can also be constrained on the slice $Q = \fn{Q}{\Lambda,t_*}$ to $10^{-3} \Lambda_0 \lesssim \Lambda \lesssim 10^3 \Lambda_0$,
which effectively breaks the degeneracy between $\Lambda$ and $Q$.
See \tab{tab:Q_lamb}.
\begin{table}[hbt]
\centering
\begin{tabular}{|r @{~$\lesssim$~} c @{~$\lesssim$~} l ||c|c|}
\hline \multicolumn{3}{|c||}{\multirow{2}{*}{$\typicality{\Lambda_0}$}} & $Q = \fn{Q}{\Lambda, t_\mathrm{f}}$ & $Q = \fn{Q}{\Lambda, t_\mathrm{f}}$ \\[0.5ex]
\multicolumn{3}{|c||}{ } & $\fn{P_\mathrm{c}}{Q} = \textrm{constant}$ & $\fn{P_\mathrm{c}}{Q} \propto Q^{-1}$ \\[0.5ex]
\hline \hline 0 & $\Lambda$ & 1 & $2 \times 10^{-120}$ & $2 \times 10^{-80}$ \\[0.5ex]
\hline $10^{-16}$ & $Q$ & 1 & $2 \times 10^{-15}$ & $2 \times 10^{-10}$ \\[0.5ex]
\hline $10^{-6}$ & $Q$ & $10^{-4}$ & $2 \times 10^{-3}$ & $2 \times 10^{-2}$ \\[0.5ex]
\hline
\end{tabular}
\caption{\label{tab:Q_lamb}
$\typicality{\Lambda_0}$ for different boundaries of $\Lambda$ and $Q$,
assuming $Q = \fn{Q}{\Lambda, t_\mathrm{f}}$ for which the population of galaxies and clusters at $t = t_\mathrm{f}$ is independent of $\Lambda$.
We assume flat prior/pocket based measure.
$10^{-16} \lesssim Q \lesssim 1$ and $10^{-6} \lesssim Q \lesssim 10^{-4}$ come from $Q \gtrsim H_\mathrm{inflation} \gtrsim m_\mathrm{susy}$ and Tegmark \& Rees \cite{Tegmark:1997in}, respectively.
Note that the typicality does not depend on the mass constraint.}
\end{table}
\section{Anthropic models using the mass history}
\label{sec:his}
\subsection{Motivation}
\label{sec:his_mot}
The evolution of life and creation of observers depends on many complex factors.
For example, early accretion may determine the population of early stars in galaxies, which in turn determines the element abundance of later stellar gas, crucial to the formation of complex life.
Mergers or collisions of galaxies may damage or destroy habitable environments within galaxies, for example, by disturbing peaceful stellar orbits, triggering star formation and supernovae, or activating galactic nuclei.
These features, which may be beneficial or harmful for the formation of observers, cannot be taken into account by considering just the mass of a gravitationally collapsed object at a single time.
As a first step towards taking into account these complex factors, we will consider the mass history of the gravitationally collapsed object.
\subsection{Calculational technique: extended Press-Schechter}
\label{sec:his_meth}
To calculate the anthropic likelihood taking into account the mass history,
we use the extended Press-Schechter formalism \cite{Bond:1990iw,Bower:1991kf,Lacey:1993iv}.
The extended Press-Schechter formalism computes the mass fraction of collapsed objects with $M = M_\mathrm{f}$ at a certain time $t = t_\mathrm{f}$,
which were formed from objects with $M = M_\mathrm{i}$ at an earlier time $t = t_\mathrm{i}$.
This formalism limits us to taking into account only two points in the mass history to determine the anthropic likelihood.
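In terms of the variance $S = \sigma^2(M)$ and the barrier height $\omega = \delta_\mathrm{c}/\fn{D}{t}$, the two-point statistics above reduce to the conditional first-crossing distribution of Lacey \& Cole \cite{Lacey:1993iv}. A minimal sketch, with our own variable names and $S$, $\omega$ taken as precomputed inputs:

```python
import math

def eps_conditional(s_i, s_f, d_omega):
    """Extended Press-Schechter conditional first-crossing distribution:
    probability density in S_i that a mass element of a halo with variance
    S_f at barrier omega_f was part of a progenitor with variance S_i at
    the earlier (higher) barrier omega_i = omega_f + d_omega."""
    ds = s_i - s_f
    if ds <= 0.0 or d_omega <= 0.0:
        return 0.0  # progenitors are smaller (larger S) and earlier
    return (d_omega / math.sqrt(2.0 * math.pi)) \
        * ds ** -1.5 * math.exp(-d_omega ** 2 / (2.0 * ds))
```

Integrated over all progenitor variances $S_i > S_f$, the density is normalized to unity, so it can be read directly as the mass fraction per unit $S_i$.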
\subsection{Results}
\label{sec:his_res}
\begin{figure}[hbt]
\centering
\includegraphics[height=0.3\textwidth]{eps/history_like_L_time}
\includegraphics[height=0.3\textwidth]{eps/history_like_L_mass}
\includegraphics[height=0.3\textwidth]{eps/history_like_L_class} \\[10pt]
\includegraphics[height=0.3\textwidth]{eps/history_like_Q_time}
\includegraphics[height=0.3\textwidth]{eps/history_like_Q_mass}
\includegraphics[height=0.3\textwidth]{eps/history_like_Q_class}
\caption{\label{fig:his_L}
$\fn{L_\mathrm{c}}{\Lambda|\smiley}$ (top) and $\fn{L_\mathrm{c}}{Q|\smiley}$ (bottom) for the anthropic models using the mass history with $M_\mathrm{f} = M_\mathrm{MW}$ at $t_\mathrm{f} = 6\mathinner{\mathrm{Gyr}}$.
Left: $M_\mathrm{i} = 0.8 M_\mathrm{MW}$ at $t_\mathrm{i} = $ {\color{red} $3\mathinner{\mathrm{Gyr}}$}, {\color[rgb]{0,0.5,0} $4\mathinner{\mathrm{Gyr}}$} and {\color{blue} $5\mathinner{\mathrm{Gyr}}$} .
Middle: $M_\mathrm{i} = $ {\color{red} $0.2 M_\mathrm{MW}$}, {\color[rgb]{0,0.5,0} $0.8 M_\mathrm{MW}$} and {\color{blue} $0.9 M_\mathrm{MW}$} at $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$.
Right: $M_\mathrm{i}$ {\color{red} $\leq$}, {\color[rgb]{0,0.5,0} $=$} and {\color{blue} $\geq$} $0.8 M_\mathrm{MW}$ at $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$.
}
\end{figure}
In \sect{sec:fin},
we used the anthropic model with a single mass constraint, $M = M_\mathrm{MW}$ at $t = 6\mathinner{\mathrm{Gyr}}$.
We calculated the corresponding anthropic likelihoods and typicalities of the cosmological constant and the primordial density perturbation amplitude.
Here, in addition to the final mass constraint $M_\mathrm{f} = M_\mathrm{MW}$ at $t_\mathrm{f} = 6\mathinner{\mathrm{Gyr}}$,
we consider three types of initial mass constraint $M_\mathrm{i}$ at an earlier time $t_\mathrm{i}$: $M_\mathrm{i} \geq M_*$, $M_\mathrm{i} = M_*$ or $M_\mathrm{i} \leq M_*$,
where $M_*$ is a certain mass scale.
As a central example, we set $M_* = 0.8 M_\mathrm{MW}$ and $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$.
\begin{figure}[hbt]
\centering
\includegraphics[height=0.274\textwidth]{eps/history_pb_time}
\includegraphics[height=0.274\textwidth]{eps/history_pb_mass}
\includegraphics[height=0.274\textwidth]{eps/history_pb_class} \\[2ex]
\includegraphics[height=0.274\textwidth]{eps/history_sfc_time}
\includegraphics[height=0.274\textwidth]{eps/history_sfc_mass}
\includegraphics[height=0.274\textwidth]{eps/history_sfc_class} \\[2ex]
\includegraphics[height=0.3\textwidth]{eps/history_cp_time}
\includegraphics[height=0.3\textwidth]{eps/history_cp_mass}
\includegraphics[height=0.3\textwidth]{eps/history_cp_class}\\
\caption{\label{fig:his_measure}
$\fn{P}{\Lambda|\smiley}$ using the mass history with different multiverse measures.
Left to right: same as in \fig{fig:his_L}.
Top: the pocket based measure; Middle: the scale factor cutoff measure; Bottom: the causal patch measure.}
\end{figure}
\fig{fig:his_L} shows the dependence of the anthropic likelihoods of $\Lambda$ and $Q$ on the choice of $t_\mathrm{i}$, $M_*$ and the mass constraint.
See Appendix~\ref{app:history} for the analytic forms.
Since matter collapses at later times if $\Lambda$ and $Q$ are smaller,
larger $t_\mathrm{i}$ and smaller $M_\mathrm{i}$ shift both $\fn{L_\mathrm{c}}{\Lambda|\smiley}$ and $\fn{L_\mathrm{c}}{Q|\smiley}$
toward smaller $\Lambda$ and $Q$.
However, in the cases of the scale factor cutoff and the causal patch measures,
the prior distribution suppresses the region where the change in the anthropic likelihood occurs,
and $\fn{P}{\Lambda|\smiley}$ remains similar regardless of the change of constraint (see \fig{fig:his_measure}).
In order to understand how the typicality changes by mass and time constraints,
we plot in \figs{fig:his_cont_degeneracy}{fig:his_cont_lambda} the contour diagrams of the typicalities for the three types of constraint as a function of $t_\mathrm{i}$ and $M_*$.
\begin{figure}[p]
\centering
\includegraphics[height=0.274\textwidth]{eps/mw_6gyr_upper_pb}
\includegraphics[height=0.274\textwidth]{eps/mw_6gyr_upper_pb_Q}\\[2ex]
\includegraphics[height=0.274\textwidth]{eps/mw_6gyr_point_pb}
\includegraphics[height=0.274\textwidth]{eps/mw_6gyr_point_pb_Q}\\[2ex]
\includegraphics[height=0.3\textwidth]{eps/mw_6gyr_lower_pb}
\includegraphics[height=0.3\textwidth]{eps/mw_6gyr_lower_pb_Q}\\
\caption{\label{fig:his_cont_degeneracy}
Contour diagrams of $\fn{\mathcal{T}_+}{\Lambda_0}$
using the mass history with the usual flat prior/pocket based measure and $M_\mathrm{f} = M_\mathrm{MW}$ at $t_\mathrm{f} = 6\mathinner{\mathrm{Gyr}}$.
Left: $Q = Q_0$; Right: $Q = \fn{Q}{\Lambda,t_\mathrm{f}}$,
assuming $\fn{P_\mathrm{c}}{Q} = \textrm{constant}$,
see \sect{sec:fin_Q}.
Top: $M_\mathrm{i} \leq M_*$; Middle: $M_\mathrm{i} = M_*$; Bottom: $M_\mathrm{i} \geq M_*$.
Typicality: {\color{violet} 0--0.01}, {\color{blue} 0.01--0.03}, {\color[rgb]{0,0.5,0} 0.03--0.1}, {\color[rgb]{0.4,0.4,0} 0.1--0.3}, {\color{red} 0.3--1}.}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[height=0.274\textwidth]{eps/Q_upper}
\includegraphics[height=0.274\textwidth]{eps/Q_upper_log}\\[2ex]
\includegraphics[height=0.274\textwidth]{eps/Q_point}
\includegraphics[height=0.274\textwidth]{eps/Q_point_log}\\[2ex]
\includegraphics[height=0.3\textwidth]{eps/Q_lower}
\includegraphics[height=0.3\textwidth]{eps/Q_lower_log} \\
\caption{\label{fig:his_cont_Q}
Contour diagrams of $\typicality{Q_0}$
using the mass history with $\Lambda = \Lambda_0$ and $M_\mathrm{f} = M_\mathrm{MW}$ at $t_\mathrm{f} = 6\mathinner{\mathrm{Gyr}}$.
Left: $\fn{P_\mathrm{c}}{Q} = \textrm{constant}$; Right: $\fn{P_\mathrm{c}}{Q} \propto Q^{-1}$.
Top: $M_\mathrm{i} \leq M_*$; Middle: $M_\mathrm{i} = M_*$; Bottom: $M_\mathrm{i} \geq M_*$.
Typicality: {\color{violet} 0--0.01}, {\color{blue} 0.01--0.03}, {\color[rgb]{0,0.5,0} 0.03--0.1}, {\color[rgb]{0.4,0.4,0} 0.1--0.3}, {\color{red} 0.3--1}.
White dash: $\typicality{Q_0} = 1$.}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[height=0.274\textwidth]{eps/mw_6gyr_upper_pb}
\includegraphics[height=0.274\textwidth]{eps/mw_6gyr_upper_sfc}
\includegraphics[height=0.274\textwidth]{eps/mw_6gyr_upper_cp} \\[2ex]
\includegraphics[height=0.274\textwidth]{eps/mw_6gyr_point_pb}
\includegraphics[height=0.274\textwidth]{eps/mw_6gyr_point_sfc}
\includegraphics[height=0.274\textwidth]{eps/mw_6gyr_point_cp} \\[2ex]
\includegraphics[height=0.3\textwidth]{eps/mw_6gyr_lower_pb}
\includegraphics[height=0.3\textwidth]{eps/mw_6gyr_lower_sfc}
\includegraphics[height=0.3\textwidth]{eps/mw_6gyr_lower_cp}\\
\caption{\label{fig:his_cont_lambda}
Contour diagrams of $\typicality{\Lambda_0}$
using the mass history with $Q = Q_0$ and $M_\mathrm{f} = M_\mathrm{MW}$ at $t_\mathrm{f} = 6\mathinner{\mathrm{Gyr}}$.
Left: pocket based measure; Middle: scale factor cutoff measure with $t_\mathrm{obs} = t_0$; Right: causal patch measure with $t_\mathrm{obs} = t_0$.
Top: $M_\mathrm{i} \leq M_*$; Middle: $M_\mathrm{i} = M_*$; Bottom: $M_\mathrm{i} \geq M_*$.
Typicality: {\color{violet} 0--0.01}, {\color{blue} 0.01--0.03}, {\color[rgb]{0,0.5,0} 0.03--0.1}, {\color[rgb]{0.4,0.4,0} 0.1--0.3}, {\color{red} 0.3--1}.
White dash: $\typicality{\Lambda_0} = 1$.}
\end{figure}
In \fig{fig:his_cont_degeneracy}, we start from the standard flat prior/pocket based measure,
and compare the cases $Q = Q_0$ and $Q = \fn{Q}{\Lambda,t_\mathrm{f}}$,
which makes the population of galaxies and clusters at $t = t_\mathrm{f}$ independent of $\Lambda$.
In the case of $Q = \fn{Q}{\Lambda, t_\mathrm{f}}$, we take $\fn{P_\mathrm{c}}{Q} = \textrm{constant}$,
which gives the greatest difference from the case $Q = Q_0$.
Note that even in the case of $Q = \fn{Q}{\Lambda,t_\mathrm{f}}$
the value of the matter power spectrum on the scale of galaxies at the earlier time $t = t_\mathrm{i}$ depends on $\Lambda$.
Therefore, the degeneracy between $\Lambda$ and $Q$,
discussed in \sect{sec:fin_Q} and which afflicts models using only a single mass constraint,
is broken for models using the mass history.
However, the case $M_\mathrm{i} \geq M_*$ allows large $\Lambda$ and $Q$ and so the degeneracy is effectively unbroken.
For any history, the maximum value of typicality is $\typicality{\Lambda_0} \simeq 0.1$, i.e.\ about $1.5 \sigma$.
In \fig{fig:his_cont_Q}, we calculate the typicality of $Q_0$,
by considering the prior distributions $\fn{P_\mathrm{c}}{Q} = \mathrm{constant}$ and $\fn{P_\mathrm{c}}{Q} \propto Q^{-1}$.
In contrast to our previous model using a single mass constraint,
which gives a high typicality within $1 \sigma$,
this model may provide a high typicality, e.g.\ within $1 \sigma$,
or a low typicality, e.g.\ beyond $3 \sigma$,
depending on the mass history.
Note that these values are robust even if we apply the Tegmark \& Rees bound on $Q$ \cite{Tegmark:1997in}.
In \fig{fig:his_cont_lambda}, we compare the different multiverse measures for the typicality of $\Lambda_0$.
As seen in \fig{fig:his_measure},
in the case of the pocket based measure $\fn{P}{\Lambda|\smiley}$ mainly depends on the mass and time conditions.
On the other hand, in the cases of the scale factor cutoff and the causal patch measures,
it mostly depends on the prior distribution from the measure itself.
As a result, both measures provide $\typicality{\Lambda_0}$ similar to that assuming only $t_{\smiley} = t_0$.
Since their prior distributions are weighted toward $\Lambda \lesssim \Lambda_0$,
there even exists a mass history which gives $\typicality{\Lambda_0} = 1$.
However, along this mass history $\typicality{Q_0}$ is less than $10^{-5}$, i.e.\ beyond $3\sigma$, so this mass history is ruled out by the observed value $Q = Q_0$.
To determine whether models using the mass history can actually help us understand $\Lambda_0$ and $Q_0$,
we construct a quantitative example with a definite constraint.
To make an anthropic model we need to consider which historical factors may be anthropically important.
A galaxy may need to be sufficiently large at early times to produce or retain sufficient metals for life,
and it may also need to avoid dangerous interactions.
On the other hand, the galaxy may need to accrete sufficiently,
for example, to stimulate star formation.
In order to make a quantitative model,
we use the observational study in Ref.~\cite{Bullock:2005pi}, which suggests that nearly 80\% of the current mass of the Milky Way came from an early major merger 10 billion years ago,
and those in Refs.~\cite{Hammer:2007ki,Burstein:2004pn}, which suggest that there has not been any major interaction or a significant amount of minor mergers since then.
Interestingly, this is somewhat different from the history of Andromeda,
which may have experienced a more recent significant merger \cite{Burstein:2004pn}.
A comparative study of the merger histories and habitabilities of the Milky Way and Andromeda may be instructive.
Combining the above arguments, we consider the following toy models:
\paragraph{($M_\mathrm{i} \geq 0.8 M_\mathrm{MW}$ at $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$) and ($M_\mathrm{f} = M_\mathrm{MW}$ at $t_\mathrm{f} = 6\mathinner{\mathrm{Gyr}}$)}
to require that the galaxy was sufficiently large at a sufficiently early time.
\paragraph{($M_\mathrm{i} = 0.8 M_\mathrm{MW}$ at $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$) and ($M_\mathrm{f} = M_\mathrm{MW}$ at $t_\mathrm{f} = 6\mathinner{\mathrm{Gyr}}$)}
to require that the galaxy was sufficiently large at a sufficiently early time and had subsequent matter accretion.
\begin{table}[hbt]
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline \multirow{2}{*}{$\typicality{\Lambda_0}$} & \multirow{2}{*}{PB}
& PB & SFC & CP \\
& & $\fn{Q}{\Lambda,t_\mathrm{f}}$ & $t_{\smiley} = t_0$ & $t_{\smiley} = t_0$ \\[0.5ex]
\hline \hline $M_\mathrm{i} \geq 0.8 M_\mathrm{MW}$ & 0.011 & $2 \times 10^{-6}$ & 0.71 & 0.26 \\[0.5ex]
\hline $M_\mathrm{i} = 0.8 M_\mathrm{MW}$ & 0.045 & 0.023 & 0.55 & 0.25 \\[0.5ex]
\hline
\end{tabular}
\caption{ \label{tab:his_ex_lambda}
$\typicality{\Lambda_0}$ for the anthropic models with $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$ and $M_\mathrm{f} = M_\mathrm{MW}$ at $t_\mathrm{f} = 6\mathinner{\mathrm{Gyr}}$.
In the case $Q = \fn{Q}{\Lambda,t_\mathrm{f}}$ we take $\fn{P_\mathrm{c}}{Q} = \textrm{constant}$.
}
\end{table}
\begin{table}[hbt]
\centering
\begin{tabular}{|c||c|c|}
\hline $\typicality{Q_0}$ & $\fn{P_\mathrm{c}}{Q} = \mathrm{constant}$ & $\fn{P_\mathrm{c}}{Q} \propto Q^{-1}$ \\[0.5ex]
\hline \hline $M_\mathrm{i} \geq 0.8 M_\mathrm{MW}$ & 0.045 & 0.14 \\[0.5ex]
\hline $M_\mathrm{i} = 0.8 M_\mathrm{MW}$ & 0.41 & 0.67 \\[0.5ex]
\hline
\end{tabular}
\caption{ \label{tab:his_ex_Q}
$\typicality{Q_0}$ for the anthropic models with $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$ and $M_\mathrm{f} = M_\mathrm{MW}$ at $t_\mathrm{f} = 6\mathinner{\mathrm{Gyr}}$.
}
\end{table}
\twotab{tab:his_ex_lambda}{tab:his_ex_Q} show the typicalities of our toy models.
If we neglect the cases of the scale factor cutoff and the causal patch measures,
then the model $M_\mathrm{i} \geq 0.8 M_\mathrm{MW}$,
as might be suggested by the results of Refs.~\cite{Hammer:2007ki,Burstein:2004pn},
has some difficulty explaining both $\Lambda_0$ and $Q_0$.
On the other hand, in the case of the model $M_\mathrm{i} = 0.8 M_\mathrm{MW}$,
$\typicality{\Lambda_0}$ is greater than in the case $M_\mathrm{i} \geq 0.8 M_\mathrm{MW}$, though perhaps not sufficiently so,
and $\typicality{Q_0}$ is high, i.e.\ within $1 \sigma$.
Therefore, we set this model as our reference model.
\begin{table}[hbt]
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline \multirow{2}{*}{$\typicality{\Lambda_0}$} & \multirow{2}{*}{PB}
& PB & SFC & CP \\
& & $\fn{Q}{\Lambda,t_\mathrm{f}}$ & $t_{\smiley} = t_0$ & $t_{\smiley} = t_0$ \\[0.5ex]
\hline \hline $M_\mathrm{i} = 0.9 M_\mathrm{MW}$ at $t_\mathrm{i} = 3\mathinner{\mathrm{Gyr}}$ & $4 \times 10^{-5}$ & $4 \times 10^{-6}$ & 0.36 & 0.48 \\[0.5ex]
\hline $M_\mathrm{i} = 0.8 M_\mathrm{MW}$ at $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$ & 0.045 & 0.023 & 0.55 & 0.25 \\[0.5ex]
\hline $M_\mathrm{i} = 0.5 M_\mathrm{MW}$ at $t_\mathrm{i} = 5\mathinner{\mathrm{Gyr}}$ & 0.10 & 0.077 & 0.59 & 0.22 \\[0.5ex]
\hline
\end{tabular}
\caption{ \label{tab:his_ex2_lambda}
$\typicality{\Lambda_0}$ for the anthropic models with $M_\mathrm{f} = M_\mathrm{MW}$ at $t_\mathrm{f} = 6\mathinner{\mathrm{Gyr}}$.
We slightly change the mass and time constraints from the model with $M_\mathrm{i} = 0.8 M_\mathrm{MW}$ at $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$.
In the case $Q = \fn{Q}{\Lambda, t_\mathrm{f}}$ we take $\fn{P_\mathrm{c}}{Q} = \textrm{constant}$.
}
\end{table}
\begin{table}[hbt]
\centering
\begin{tabular}{|c||c|c|}
\hline $\typicality{Q_0}$ & $\fn{P_\mathrm{c}}{Q} = \mathrm{constant}$ & $\fn{P_\mathrm{c}}{Q} \propto Q^{-1}$ \\[0.5ex]
\hline \hline $M_\mathrm{i} = 0.9 M_\mathrm{MW}$ at $t_\mathrm{i} = 3\mathinner{\mathrm{Gyr}}$ & $1 \times 10^{-4}$ & $4 \times 10^{-4}$ \\[0.5ex]
\hline $M_\mathrm{i} = 0.8 M_\mathrm{MW}$ at $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$ & 0.41 & 0.67 \\[0.5ex]
\hline $M_\mathrm{i} = 0.5 M_\mathrm{MW}$ at $t_\mathrm{i} = 5\mathinner{\mathrm{Gyr}}$ & 0.89 & 0.56\\[0.5ex]
\hline
\end{tabular}
\caption{ \label{tab:his_ex2_Q}
$\typicality{Q_0}$ for the anthropic models with $M_\mathrm{f} = M_\mathrm{MW}$ at $t_\mathrm{f} = 6\mathinner{\mathrm{Gyr}}$.
We slightly change the mass and time constraints from the model with $M_\mathrm{i} = 0.8 M_\mathrm{MW}$ at $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$.
}
\end{table}
\twotab{tab:his_ex2_lambda}{tab:his_ex2_Q} show whether the results from our reference model, $M_\mathrm{i} = 0.8 M_\mathrm{MW}$ at $t_\mathrm{i} = 4\mathinner{\mathrm{Gyr}}$, are robust under small changes of the mass and time constraints.
In the case of $\typicality{\Lambda_0}$,
the results from the scale factor cutoff measure and the causal patch measure are robust,
which only shows that $t_{\smiley} = t_0$ plays a more significant role than any other anthropic factor.
Note that the direction which increases the typicality for the scale factor cutoff measure is opposite to that for the causal patch measure.
On the other hand, in the cases of $\typicality{\Lambda_0}$ with the pocket based measure and $\typicality{Q_0}$,
the model with $M_\mathrm{i} = 0.9 M_\mathrm{MW}$ at $t_\mathrm{i} = 3\mathinner{\mathrm{Gyr}}$ gives a low typicality.
Therefore, if the proper anthropic constraint consists of larger $M_\mathrm{i}$ and smaller $t_\mathrm{i}$ than our model,
the anthropic solution for both the cosmological constant and the primordial density perturbation amplitude would be in trouble,
and we may require additional anthropic constraints to solve this problem.
\section{Conclusion}
\label{sec:sum}
\begin{table}[hbt]
\centering
\begin{tabular}{|r @{~at~} l ||c|c|c|c|}
\hline \multicolumn{2}{|c||}{\multirow{2}{*}{$\typicality{\Lambda_0}$}} & \multirow{2}{*}{PB}
& PB & SFC & CP \\
\multicolumn{2}{|c||}{} & & $\fn{Q}{\Lambda,t_\mathrm{f}}$ & $t_{\smiley} = t_0$ & $t_{\smiley} = t_0$ \\[0.5ex]
\hline \hline \multicolumn{2}{|c||}{-} & $2 \times 10^{-120}$ & $2 \times 10^{-15}$ & 0.55 & 0.14 \\[0.5ex]
\hline $M \geq M_\mathrm{MW}$ & $t \to \infty$ & 0.22 & $2 \times 10^{-15}$ & 0.36 & 0.11 \\[0.5ex]
\hline $M = M_\mathrm{MW}$ & $t = 6\mathinner{\mathrm{Gyr}}$ & 0.049 & $2 \times 10^{-15}$ & 0.52 & 0.17 \\[0.5ex]
\hline $M = M_\mathrm{MW}$ & $t = 6\mathinner{\mathrm{Gyr}}$ & \multirow{2}{*}{0.045} & \multirow{2}{*}{0.023} & \multirow{2}{*}{0.55} & \multirow{2}{*}{0.25} \\
$M = 0.8 M_\mathrm{MW}$ & $t = 4\mathinner{\mathrm{Gyr}}$ & & & & \\[0.5ex]
\hline
\end{tabular}
\caption{\label{tab:sum_lambda}
Typicalities of $\Lambda_0$ for different anthropic models and multiverse measures:
the pocket based measure (PB), the scale factor cutoff measure (SFC) and the causal patch measure (CP).
We assume $0 \lesssim \Lambda \lesssim 1$ and $10^{-16} \lesssim Q \lesssim 1$.
$\fn{Q}{\Lambda,t_\mathrm{f}}$ makes the population of galaxies and clusters at $t = t_\mathrm{f}$ independent of $\Lambda$.
In the case $Q = \fn{Q}{\Lambda,t_\mathrm{f}}$, we take $\fn{P_\mathrm{c}}{Q} = \textrm{constant}$,
which gives the greatest difference to the case of $Q = Q_0$.
}
\end{table}
\begin{table}[hbt]
\centering
\begin{tabular}{|r @{~at~} l ||c|c|}
\hline \multicolumn{2}{|c||}{$\typicality{Q_0}$} & $\fn{P_\mathrm{c}}{Q} = \mathrm{constant}$ & $\fn{P_\mathrm{c}}{Q} \propto Q^{-1}$ \\[0.5ex]
\hline \hline \multicolumn{2}{|c||}{-} & $2 \times 10^{-5}$ & 0.63 \\[0.5ex]
\hline $M \geq M_\mathrm{MW}$ & $t \to \infty$ & $7 \times 10^{-7}$ & $8 \times 10^{-3}$ \\[0.5ex]
\hline $M = M_\mathrm{MW}$ & $t = 6\mathinner{\mathrm{Gyr}}$ & 0.33 & 0.76 \\[0.5ex]
\hline $M = M_\mathrm{MW}$ & $t = 6\mathinner{\mathrm{Gyr}}$ & \multirow{2}{*}{0.41} & \multirow{2}{*}{0.67} \\
$M = 0.8 M_\mathrm{MW}$ & $t = 4\mathinner{\mathrm{Gyr}}$ & & \\[0.5ex]
\hline
\end{tabular}
\caption{\label{tab:sum_Q}
Typicalities of $Q_0$ for different anthropic models and prior distributions of $Q$, with $\Lambda = \Lambda_0$.
We assume $10^{-16} \lesssim Q \lesssim 1$.
}
\end{table}
We studied the comoving anthropic likelihood of an observable $O$, $\fn{L_\mathrm{c}}{O|\smiley}$,
which counts the number of observers in a comoving volume,
where $O$ corresponds to the cosmological constant $\Lambda$ and the primordial density perturbation amplitude $Q$.
To estimate $\fn{L_\mathrm{c}}{O|\smiley}$,
we started from Weinberg's anthropic calculation \cite{Martel:1997vi,Garriga:1999hu,Garriga:2000cv,Pogosian:2006fx}
which models the number of observers as proportional to the total mass in gravitationally collapsed objects
with mass greater than a certain threshold, $M_*$, at late times, $t \to \infty$.
While this model can postdict $\Lambda_0$ well with simple assumptions,
it assumes that supermassive objects are as habitable as the Milky Way;
they may not in fact be very habitable, yet they bias the result toward small $\Lambda$.
See the second row of \tab{tab:sum_lambda}.
Also, since supermassive objects prefer large $Q$,
Weinberg's model predicts large $Q$ unless one applies Tegmark \& Rees' bound \cite{Tegmark:1997in} (see the second row of \tab{tab:sum_Q}).
In order to avoid the above problems of Weinberg's model,
we considered a model that assumes that the number of observers is proportional to the number of gravitationally collapsed objects with certain mass and time scales $M = M_*$ and $t = t_*$.
Though it seems obvious to choose $M_*$ as the mass of the Milky Way and $t_*$ as $t_0$,
the Press-Schechter formalism \cite{Press:1973iz}, which we used to count the collapsed objects,
identifies our collapsed object at $t_0$ as the Local Group,
which makes it inconsistent to choose $M_*$ as the mass of the Milky Way.
Also, since we do not seem to have any plausible anthropic justification to use the Local Group as our object,
it is also inconsistent to choose $M_*$ as the mass of the Local Group.
However, the time before the formation of the Local Group may be anthropically more influential than $t_0$,
for example, by influencing the star formation rate or metal abundance, etc.
Also, if we set $t_*$ earlier than the formation of the Local Group,
we can identify the Milky Way as an isolated collapsed object
and the above technical problem with using the Press-Schechter formalism disappears.
Thus, we set $M_*$ as the mass of the Milky Way and $t_*$ as a time earlier than the formation of the Local Group, say, 6 billion years.
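The Press-Schechter counting used above can be sketched numerically: the fraction of mass in collapsed objects above a threshold mass is $F(>M) = \mathrm{erfc}\!\left(\delta_c/\sqrt{2}\,\sigma(M)\right)$ with $\delta_c \simeq 1.686$. The power-law $\sigma(M)$ below is an illustrative stand-in of our own, not the fitted CDM variance used in the paper:

```python
import math

DELTA_C = 1.686  # spherical-collapse threshold (standard value)

def sigma(mass, sigma8=0.8, m8=1.0, alpha=0.3):
    """Illustrative power-law rms mass fluctuation sigma(M); the slope and
    normalization are stand-ins, not a fitted CDM power spectrum."""
    return sigma8 * (mass / m8) ** (-alpha)

def collapsed_fraction(mass, growth=1.0, **kw):
    """Press-Schechter fraction of mass in collapsed objects above `mass`:
    F(>M) = erfc(delta_c / (sqrt(2) * D * sigma(M))),
    with D the linear growth factor."""
    nu = DELTA_C / (growth * sigma(mass, **kw))
    return math.erfc(nu / math.sqrt(2.0))
```

Because $\sigma(M)$ decreases with mass and grows with time, the collapsed fraction falls with the mass threshold and rises with the growth factor, which is the behavior the anthropic weighting relies on.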
Since Weinberg's model was biased to small $\Lambda$,
the typicality of $\Lambda_0$ for our model in the pocket based measure is lower than Weinberg's by a factor of four.
See the third row of \tab{tab:sum_lambda}.
In the case of $Q$, our model can postdict $Q_0$ within $1\sigma$, while Weinberg's model predicts large $Q$
(see the third row of \tab{tab:sum_Q}).
Furthermore, it is not just the single mass constraint but the full mass history of a galaxy or a galaxy group which affects its habitability.
As a first step to consider the full mass history,
we introduced anthropic models assuming the number of observers is proportional to the number of gravitationally collapsed objects with $M=M_\mathrm{f}$ at $t = t_\mathrm{f}$,
which were formed from objects with $M = M_\mathrm{i}$ at an earlier time $t = t_\mathrm{i}$,
using the extended Press-Schechter formalism \cite{Bond:1990iw,Bower:1991kf,Lacey:1993iv}.
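The two-epoch counting can likewise be sketched. In the excursion-set picture the conditional collapsed fraction follows from the same error-function integral as the unconditional one, with shifted barrier and variance; the function name and toy arguments below are ours, not the paper's:

```python
import math

def progenitor_fraction(S1, d1, S2, d2):
    """Extended Press-Schechter (Lacey-Cole) conditional fraction: the part
    of the mass of a halo with (variance, barrier) = (S2, d2) that was
    already inside progenitors more massive than M1, where S1 = sigma^2(M1)
    and d1 is the (earlier, hence higher) collapse barrier; S1 > S2, d1 > d2."""
    return math.erfc((d1 - d2) / math.sqrt(2.0 * (S1 - S2)))
```

As the progenitor mass threshold is lowered ($S_1 \to \infty$) the fraction tends to one, since all of the halo's mass sits in some progenitor.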
\figs{fig:his_cont_degeneracy}{fig:his_cont_lambda} show the typicalities of $\Lambda_0$ and $Q_0$ for different choices of the $M_\mathrm{i}$ and $t_\mathrm{i}$ constraints and prior distributions.
In particular, as a toy model, we chose $M_\mathrm{i} = 0.8 M_\mathrm{MW}$ and $t_\mathrm{i} = 4 \mathinner{\mathrm{Gyr}}$,
since a galaxy may need to have a certain mass and mass fraction at earlier times
to produce sufficient metals and stimulate star formation, and also to avoid dangerous interactions.
Then the typicalities of both $\Lambda_0$ and $Q_0$ are similar to the model with the single mass constraint $M = M_\mathrm{MW}$ at $t = 6\mathinner{\mathrm{Gyr}}$ (see the fourth row of \twotab{tab:sum_lambda}{tab:sum_Q}).
However, there is no degeneracy between $\Lambda$ and $Q$,
which afflicts all kinds of single mass constraint models in the pocket based measure (see the second column of \tab{tab:sum_lambda}).
We also studied the effect of the multiverse measure on our typicality.
In addition to Weinberg/Vilenkin's flat prior/pocket based measure \cite{Garriga:1998px,Vanchurin:1999iv,Garriga:2005av,Easther:2005wi},
we considered two multiverse measures: the scale factor cutoff measure \cite{DeSimone:2008bq,Bousso:2008hz} and the causal patch measure \cite{Bousso:2006ev,Bousso:2006ge}.
In the case of the pocket based measure,
the typicality of $\Lambda_0$ is relatively small and sensitive to the choice of the anthropic model.
On the other hand, if we assume that the observing time $t_{\smiley} = t_0$,
both the scale factor cutoff measure and the causal patch measure
always give a high typicality,
and it is not affected much by any other anthropic factors.
\begin{figure}[hbt]
\begin{center}
\includegraphics[height=0.3\textwidth]{eps/make_typical} \hspace{20pt}
\includegraphics[height=0.3\textwidth]{eps/mediocrity}
\caption{\label{fig:mediocrity}
Examples which illustrate the difference between the mass history which \emph{makes our universe typical} and that which is \emph{typical in our universe}.
Left: the typicality of $\Lambda_0$ in the case of the causal patch measure.
Right: the typicality of mass history in our universe.
Typicality: {\color{violet} 0--0.01}, {\color{blue} 0.01--0.03}, {\color[rgb]{0,0.5,0} 0.03--0.1}, {\color[rgb]{0.4,0.4,0} 0.1--0.3}, {\color{red} 0.3--1}.
White dash: the maximal typicality.
}
\end{center}
\end{figure}
Note that one must be careful not to confuse the two separate questions:
whether a given mass history \emph{makes our universe typical}
and whether a given mass history is \emph{typical in our universe}.
There is a common misconception, called the ``Principle of Mediocrity,''
that our Galaxy is a typical galaxy in our universe, and our universe is a typical universe.
However, from the anthropic point of view,
the typicality of $\Lambda_0$ from a given mass history and the typicality of that history in our universe are not expected to be similar.
For example, as shown in \fig{fig:mediocrity},
in the case of the causal patch measure,
the mass history which is most typical in our universe does not postdict $\Lambda_0$ within $1\sigma$,
and the mass history which makes $\Lambda_0$ most typical is not typical in our universe within $2\sigma$.
\section{Discussion}\label{sec:dis}
The main problem of our work is how to choose anthropic factors or an anthropic model in terms of mass history.
Also, our calculation technique, the extended Press-Schechter formalism,
is crude and uses at most two historical points,
and it cannot follow the late history of a galaxy after it joins a galaxy group.
The actual history of the Milky Way can give hints for anthropic factors,
although one must be careful not to introduce bias by considering features due to our value of the cosmological constant,
as opposed to features affecting the formation of observers.
Interestingly, the history of the Milky Way seems to be somewhat different to the history of Andromeda.
From this we suggested that a comparative study of the habitabilities of the Milky Way and Andromeda may be instructive.
Cosmological numerical simulation may be able to consider the full history of a galaxy,
especially the late history.
These late times may provide the strongest anthropic constraint on the cosmological constant,
since the effect of the cosmological constant on the large scale structure is greatest at late times.
This may give $\Lambda_0$ a high probability even in the case of the pocket based measure.
On the other hand, late times may not be so influential,
since the galaxy group may shield the effect of the cosmological constant.
This would support our previous argument that it may be better to set $t_\mathrm{f}$ as the time before the formation of the Local Group.
Up to now, we related the cosmological constant to the mass history,
which has only an indirect connection to the real anthropic factors.
What we actually need to do in the future is to relate the cosmological constant to physical properties that are more directly related to real anthropic factors,
e.g.\ metallicity and star formation rate.
Numerical simulation may help to make this possible.
\section*{Acknowledgments}
The authors thank Changbom Park, Jai-chan Hwang, Juhan Kim, Donghui Jeong, Jinn-Ouk Gong,
Kenji Kadota, Dong-han Yeom, Seoktae Koh, Cai Kai, Michael Gowanlock, Alex Nielson, Bum-Hoon Lee
and Emanuil Vilkovisky.
The authors also thank the hospitality of Center for Quantum SpaceTime, Fesenkov Astrophysical Institute, Korea Institute for Advanced Study and Yukawa Institute of Theoretical Physics.
SEH and EDS are supported by the National Research Foundation grant (2009-0077503) funded by the Korean government.
SEH is also supported by the National Research Foundation grant (2009-006814, 2007-0093860) funded by the Korean government.
HZ is supported by T\"UB\.ITAK research fellowship programme for foreign citizens.
\newpage
It is very well known that Galilean relativity of inertial frames
with relative velocity and rotation is described by the Euclidean
group that is the homogeneous subgroup of the full Galilei group\footnote{The
full Galilei group also includes translations in position and time.}.\ \ We review, in the following section, the derivation of the
action of the Euclidean group on frames of a particle in Newtonian
space-time from the assumption of invariance of a Newtonian time
line element and invariance of length in the inertial rest frame.
The group multiplication law gives the usual Newtonian addition
of velocity.\ \ The diffeomorphisms of the space-time with these
invariants are the straight-line trajectories of an inertial particle.\ \
The group of transformations between frames of particles following
noninertial trajectories also has an invariant Newtonian time line
element and invariance of length in the inertial rest frame.\ \ We
use the Hamilton formulation on extended phase space with position,
time, momentum and energy degrees of freedom and therefore must
also have invariance of the symplectic metric. Using the same method
as reviewed for the Euclidean group, this results in the Hamilton
group that is parameterized by rotation angles, and rates of change
of position, momentum and energy with time, i.e. velocity, force
and power.\ \ The group multiplication law results in the usual
Newtonian addition of velocities and forces. The diffeomorphisms
with these invariants must satisfy Hamilton's equations of motion.\ \ The
power transformation law has terms that integrate to those terms
in the Hamiltonian that are required in noninertial frames.\ \ The
homogeneous subgroup of the Galilei group is the inertial special
case of the Hamilton group.
\section{Newtonian inertial frames}
The Newtonian space-time $\mathbb{M}\simeq \mathbb{R}^{n+1}$ has
coordinates $x=(q,t)$ where $q\in \mathbb{R}^{n}$ are the $n$ position
co-ordinates and $t\in \mathbb{R}$ is the time coordinate. The usual
physical case corresponds to $n=3$.\ \ A frame in the cotangent
space at a point $x$, ${{T}^{*}}_{x}\mathbb{M}$, has a basis $d
x=(d q,d t)$. The action of the general linear group element $\Gamma
\in \mathcal{G}\mathcal{L}( n+1,\mathbb{R}) $ on the cotangent space,
suppressing the indices and using basic matrix notation is
\begin{equation}
d \tilde{x }=\Gamma \cdot d x,
\end{equation}
\noindent where $\Gamma $ is a nonsingular\ \ $(n+1)\times (n+1)$
real matrix and $d x$ is a column vector.
The line element may be written as
\begin{equation}
d s^{2}=d t^{2}={\eta \mbox{}^{\circ}}_{a b}d x^{a}d x^{b} ={}^{t}d
x\cdot \eta \mbox{}^{\circ}\cdot d x,
\end{equation}
\noindent where the indices $a,b..=0,1..n$ and\ \ $\eta \mbox{}^{\circ}$ is an
$(n+1)\times (n+1)$\ \ matrix\ \
\begin{equation}
\eta \mbox{}^{\circ}=\left( \begin{array}{ll}
0_{n\times n} & 0_{1\times n} \\
0_{n\times 1} & 1
\end{array}\right) .
\end{equation}
\noindent In this expression, $0_{n\times m}$ is an $n\times m$
zero matrix.\ \ \ The condition that the line element is invariant
under the action of the group is\ \
\begin{equation}
d t^{2}={}^{t}d x\cdot \eta \mbox{}^{\circ}\cdot d x=d {\tilde{t
}}^{2}={}^{t}\left( \Gamma \cdot d x\right) \cdot \eta \mbox{}^{\circ}\cdot
\Gamma \cdot d x ,
\end{equation}
\noindent and therefore
\begin{equation}
\eta \mbox{}^{\circ}={}^{t}\Gamma \cdot \eta \mbox{}^{\circ}\cdot
\Gamma .%
\label{A: Newtonian time line element invariance}
\end{equation}
\noindent We may write $\Gamma $ as an $(n+1)\times (n+1)$ matrix
of the form
\begin{equation}
\Gamma =\left( \begin{array}{ll}
R & v \\
w & \epsilon
\end{array}\right)
\end{equation}
\noindent with $R$ an $n\times n$ submatrix, $\epsilon \in \mathbb{R}$
and $v,w\in \mathbb{R}^{n}$ with $v$ a column vector and $w$ a row
vector.\ \ \ Equation (5) results in the expression
\begin{equation}
\eta \mbox{}^{\circ}=\left( \begin{array}{ll}
0 & 0 \\
0 & 1
\end{array}\right) =\left( \begin{array}{ll}
{}^{t}R & {}^{t}w \\
{}^{t}v & {}\epsilon
\end{array}\right) \left( \begin{array}{ll}
0 & 0 \\
0 & 1
\end{array}\right) \left( \begin{array}{ll}
R & v \\
w & \epsilon
\end{array}\right) =\left( \begin{array}{ll}
{}^{t}w w & {}^{t}w \epsilon \\
\epsilon w & \epsilon ^{2}
\end{array}\right) ,
\end{equation}
\noindent where the dimensions of the zero matrices are now implicit.
It follows directly that $w=0$ and $\epsilon =\pm 1$.\ \ \
The group multiplication and inverse property is realized by matrix
multiplication and inverse and a direct calculation shows it defines
the matrix group with group multiplication and inverse given by
\begin{equation}
\begin{array}{l}
\Gamma ( \epsilon ,R,v) =\Gamma ( \epsilon ^{{\prime\prime}},R^{{\prime\prime}},v^{{\prime\prime}})
\cdot \Gamma ( \epsilon ^{\prime },R^{\prime },v^{\prime }) =\Gamma
( \epsilon ^{{\prime\prime}}\epsilon ^{\prime },R^{{\prime\prime}}\cdot
R^{\prime },R^{{\prime\prime}}\cdot v^{\prime }+\epsilon ^{\prime
}v^{{\prime\prime}}) , \\
\Gamma ^{-1}( \epsilon ,R,v) =\Gamma ( \epsilon ,R^{-1},-\epsilon
R^{-1}\cdot v) .
\end{array}%
\label{A: Extended inhomogeneos gl multiplication}
\end{equation}
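The composition and inverse laws (8) can be verified directly by matrix multiplication. A minimal numerical sketch (assuming numpy; the rotations keep $R$ orthogonal so $R^{-1}={}^{t}R$, and the sample angles and velocities are arbitrary):

```python
import numpy as np

def gamma(eps, R, v):
    """Gamma(eps, R, v) = [[R, v], [0, eps]] acting on the frame (dq, dt)."""
    n = R.shape[0]
    G = np.zeros((n + 1, n + 1))
    G[:n, :n] = R
    G[:n, n] = v
    G[n, n] = eps
    return G

def rot(th):
    """Plane rotation, an element of SO(2)."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s], [s, c]])

# two arbitrary group elements (n = 2)
e2, R2, v2 = 1.0, rot(0.4), np.array([1.0, -2.0])
e1, R1, v1 = -1.0, rot(-0.9), np.array([0.5, 3.0])

prod = gamma(e2, R2, v2) @ gamma(e1, R1, v1)
law = gamma(e2 * e1, R2 @ R1, R2 @ v1 + e1 * v2)   # composition of eq. (8)
inv = gamma(e1, R1.T, -e1 * R1.T @ v1)             # inverse of eq. (8)
```

Multiplying a sample element by its stated inverse returns the $3\times 3$ identity, and the matrix product matches the composition parameters, as the block structure of $\Gamma$ guarantees.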
The requirement that $\det \Gamma \neq 0$ implies that $\det
R \neq 0$ and therefore $R\in \mathcal{G}\mathcal{L}( n,\mathbb{R})
$.\ \ The group elements $\Gamma ( 1,R,0) $ define the natural embedding
of $\mathcal{G}\mathcal{L}( n,\mathbb{R}) $ into $\mathcal{G}\mathcal{L}(
n+1,\mathbb{R}) $. The elements $\Gamma ( \epsilon ,I_{n},0) \in
\mathcal{D}_{2}$\ \ where $\mathcal{D}_{2}$ is the two element discrete
group of time reversal.\ \ \ Note that $\Gamma ( \epsilon ,R,0)
\in \mathcal{D}_{2}\otimes \mathcal{G}\mathcal{L}( n,\mathbb{R})
$.\ \ \
The {\itshape translation} group is defined to be the matrix Lie
group $\mathcal{T}( n) $ that is isomorphic to $\mathbb{R}^{n}$\ \ considered
to be an abelian group under addition, $\mathcal{T}( n) \simeq (\mathbb{R}^{n},+)$.
The group elements $\Gamma ( 1,I_{n},v) $, with {\itshape $I_{n}$}
the $n\times n$ unit matrix,\ \ define elements of the translation
group $\mathcal{T}( n) $ with group composition
\begin{equation}
\begin{array}{l}
\Gamma ( 1,I_{n},v) =\Gamma ( 1,I_{n},v^{{\prime\prime}}) \cdot
\Gamma ( 1,I_{n},v^{\prime }) =\Gamma ( 1,I_{n},v^{\prime }+v^{{\prime\prime}})
, \\
\Gamma ^{-1}( 1,I_{n},v) =\Gamma ( 1,I_{n},-v) .
\end{array}%
\label{A: Velocity translation addition}
\end{equation}
\noindent In this case, the translation group is parameterized by
velocity: the translations are in velocity space rather than position
space. The automorphisms of this translation subgroup are
\begin{equation}
\Gamma ( \epsilon ^{\prime },R^{\prime },v^{\prime }) \cdot \Gamma
( 1,I_{n},v) \cdot \Gamma ^{-1}( \epsilon ^{\prime },R^{\prime },v^{\prime
}) =\Gamma ( 1,I_{n},{\epsilon }^{\prime }R^{\prime }\cdot v) ,
\end{equation}
\noindent and therefore the translation group is a normal subgroup.
The intersection of this translation subgroup with the subgroup
$\mathcal{D}_{2}\otimes \mathcal{G}\mathcal{L}( n,\mathbb{R}) $
is the identity and the union is the entire group. Therefore, the
group is the extended inhomogeneous general linear group
\begin{equation}
\mathcal{I}\hat{\mathcal{G}}\mathcal{L}( n,\mathbb{R}) \simeq \mathcal{D}_{2}\otimes
_{s}\mathcal{I}\mathcal{G}\mathcal{L}( n,\mathbb{R}) \simeq \mathcal{D}_{2}\otimes
_{s}\mathcal{G}\mathcal{L}( n,\mathbb{R}) \otimes _{s}\mathcal{T}(
n) .
\end{equation}
The above group does not leave length, $d q^{2}$, invariant in the
rest frame.\ \ The rest frame is the special case where $v=0$.\ \ Requiring
that the line element $d q^{2}$ is invariant in the inertial rest
frame
\begin{equation}
d q^{2}={}^{t}d x\cdot \eta ^{q}\cdot d x={}^{t}d x\cdot {}^{t}\Gamma
( \epsilon ,R,0) \cdot \eta ^{q}\cdot \Gamma ( \epsilon ,R,0) \cdot
d x ,
\end{equation}
\noindent results in the condition $\eta ^{q}={}^{t}\Gamma ( \epsilon
,R,0) \cdot \eta ^{q}\cdot \Gamma ( \epsilon ,R,0) $.\ \ This may
be written in matrix notation as
\begin{equation}
\eta ^{q}=\left( \begin{array}{ll}
I_{n} & 0 \\
0 & 0
\end{array}\right) =\left( \begin{array}{ll}
{}^{t}R & 0 \\
0 & {}\epsilon
\end{array}\right) \left( \begin{array}{ll}
I_{n} & 0 \\
0 & 0
\end{array}\right) \left( \begin{array}{ll}
R & 0 \\
0 & \epsilon
\end{array}\right) =\left( \begin{array}{ll}
{}^{t}R \cdot R & 0 \\
0 & {}0
\end{array}\right) ,
\end{equation}
\noindent where $I_{n}$ is the $n\times n$ unit matrix.\ \ This
requires that ${}^{t}R =R^{-1}$ and therefore $R\in \mathcal{O}(
n) $.\ \ \
Matrices of the form $\Gamma ( \epsilon ,R,v) $ with $\epsilon =\pm
1$, $v\in \mathbb{R}^{n}$ and $R\in \mathcal{O}( n) $\ \ are elements
of the matrix Lie group, generally called the\ \ extended Euclidean
group, $\hat{\mathcal{E}}( n) $ where
\begin{equation}
\hat{\mathcal{E}}( n) \simeq \mathcal{D}_{2}\otimes _{s}\mathcal{O}(
n) \otimes _{s}\mathcal{T}( n) .%
\label{A: ExtendedEuclidean Group}
\end{equation}
\noindent The group multiplication and inverse are given by (8) with
$R\in \mathcal{O}( n) $.
The orthogonal group may be written as the semidirect product of
the special orthogonal group and a 2 element discrete parity group
${\tilde{\mathcal{D}}}_{2}$ as $\mathcal{O}( n) ={\tilde{\mathcal{D}}}_{2}\otimes
_{s}\mathcal{S}\mathcal{O}( n) $.\ \ Define $\varsigma \in \mathcal{D}_{4}=\mathcal{D}_{2}\otimes
{\tilde{\mathcal{D}}}_{2}$\ \ as the 4 element parity, time reversal
group with elements
\begin{equation}
\varsigma =\left( \begin{array}{ll}
\tilde{\epsilon }I_{n} & 0 \\
0 & \epsilon
\end{array}\right) ,\ \ \epsilon =\pm 1, \tilde{\epsilon }=\pm 1.%
\label{A: PCT n+1 matrix realization}
\end{equation}
\noindent The extended Euclidean group (14) may be written as
\begin{equation}
\hat{\mathcal{E}}( n) \simeq \mathcal{D}_{2}\otimes _{s}\mathcal{O}(
n) \otimes _{s}\mathcal{T}( n) \simeq \mathcal{D}_{4}\otimes _{s}\mathcal{S}\mathcal{O}(
n) \otimes _{s}\mathcal{T}( n) \simeq \mathcal{D}_{4}\otimes _{s}\mathcal{E}(
n) .
\end{equation}
\noindent where $\mathcal{E}( n) \simeq \mathcal{S}\mathcal{O}(
n) \otimes _{s}\mathcal{T}(n)$ is the Euclidean matrix Lie group
that is the homogeneous subgroup of the Galilei group.
Consider a transformation $\tilde{x}=\varphi ( x) $ that preserves
the Newtonian time line element and length in the rest frame.\ \ The
matrix of the Jacobian of the transformation is therefore an\ \ element
of the group,\ \ $[\frac{\partial \varphi ( x) }{\partial x}]|_{x}\in
\mathcal{E}( n) .$ The discrete transformations do not need to be
considered as the $\varphi $ are continuous and therefore only the
continuous group is required. Furthermore, we can rotate the coordinates
so that the rotation group need not be considered.\ \ Then, the
Jacobian is an element of the translation normal subgroup of the
Euclidean group,
\begin{equation}
\left( \frac{\partial \varphi ^{a}( x) }{\partial x^{a}}\right)
=\left( \begin{array}{ll}
\frac{\partial \varphi ^{i}( t,q) }{\partial q^{j}} & \frac{\partial
\varphi ^{i}( t,q) }{\partial t} \\
\frac{\partial \varphi ^{0}( t,q) }{\partial q^{j}} & \frac{\partial
\varphi ^{0}( t,q) }{\partial t}
\end{array}\right) =\left( \begin{array}{ll}
\delta _{i,j} & v^{i} \\
0 & 1
\end{array}\right) ,
\end{equation}
\noindent where in this expression indices $i,j=1,..n$ are explicit.\ \ With
$v$ constant and ignoring trivial integration constants,\ \ the
transformation equations may be integrated to
\begin{equation}
{\tilde{q}}^{i}=\varphi ^{i}( q,t) =q^{i}+v^{i} t,\ \ \ \ \ \ \ \ \tilde{
t}=\varphi ^{0}( q,t) =t.
\end{equation}
The Euclidean group defines the transformation between inertial
frames in classical Newtonian mechanics. The Euclidean group leaves
invariant $d t$ and therefore in Newtonian physics there is the
notion of absolute time that all observers agree on.\ \ As the frame
is inertial, the rate of change of momentum is zero and the motion
is {\itshape uniform}. Correspondingly, velocity is simply additive
as given by the group laws (8,9).\ \
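A minimal sketch of this additivity: composing two of the integrated transformations (18) with velocities $v^{\prime\prime}$ and $v^{\prime}$ is the same map as a single transformation with velocity $v^{\prime\prime}+v^{\prime}$ (one spatial dimension; the sample values are arbitrary):

```python
def boost(v):
    """Inertial transformation (18) in one spatial dimension:
    (q, t) -> (q + v*t, t)."""
    return lambda q, t: (q + v * t, t)

def compose(f, g):
    """Apply g, then f."""
    return lambda q, t: f(*g(q, t))

# composing boosts with velocities v'' and v' gives the boost with v'' + v'
two_boosts = compose(boost(2.0), boost(-0.5))
one_boost = boost(2.0 - 0.5)
```

Because the time coordinate is untouched, the composite map agrees with the single boost exactly, which is just the translation-group law (9) in velocity space.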
\section{Hamilton group: Newtonian noninertial frames}
The method used above to derive the Euclidean group for inertial
frames may be applied directly to obtain the group of transformations
between general noninertial frames in Hamilton's mechanics. Again,
we require invariance of the Newtonian time line element and also
that length is invariant in the inertial rest frame. As we are using
the Hamilton formulation, we also require invariance of the symplectic
metric.\ \ \
For the noninertial case with nonzero rate of change of momentum
and position between frames of particle states, consider the space
$\mathbb{P}\simeq \mathbb{R}^{2n+2}$ that has coordinates $z=(p,q,e,t)$.\ \ \ $p,q\in
\mathbb{R}^{n}$ are the $n$ momentum and $n$ position co-ordinates,
$e\in \mathbb{R}$ is the energy coordinate and $t\in \mathbb{R}$
the time coordinate. $n=3$ is the physical case.\ \ A frame at a
point in the cotangent space ${{T}^{*}}_{z}\mathbb{P}$ has a basis
$d z=(d p, d q,d e,d t)$.\ \ \ The action of an element of $\Phi
\in \mathcal{G}\mathcal{L}( 2n+2,\mathbb{R}) $ on the frame is\ \
\begin{equation}
d \tilde{z }=\Phi \cdot d z,
\end{equation}
\noindent where $\Phi $ is a nonsingular\ \ $2(n+1)\times 2(n+1)$
matrix.
As in the Euclidean case, we consider the subgroup that leaves invariant
the Newtonian time line element $d s^{2}= d t^{2}$.\ \
\begin{equation}
d s^{2}= d t^{2}={}^{t}d z\cdot \eta \mbox{}^{\circ}\cdot d z,
\end{equation}
\noindent where $\eta \mbox{}^{\circ}$ is now a $2(n+1)\times 2(n+1)$
singular matrix.\ \ In addition, we again require that the length
$d q^{2}$ be invariant in the inertial rest frame of the particle.\ \ \
Hamilton's mechanics has an additional invariant, the symplectic
metric $-d e\wedge d t+\delta _{i,j}d p^{i}\wedge d q^{j}$ where
$i,j..=1,..n$.
Let $\Phi $ be an $(2n+2) \times (2n+2)$ nonsingular real matrix
written in terms of submatrices as
\begin{equation}
\Phi =\left( \begin{array}{lll}
A & b & w \\
{}^{t}c & a & r \\
{}^{t}d & f & \epsilon
\end{array}\right) ,
\end{equation}
\noindent where $A$ is a $2n\times 2n$ real matrix, $w,b,c,d\in
\mathbb{R}^{2n}$ and $a,r,f,\epsilon \in \mathbb{R}$.\ \ \ From
the analysis in the previous section, we have immediately that the
invariance of the Newtonian time line element requires $\epsilon
=\pm 1$, $d,f=0$ so that $\Phi \in \mathcal{I}\hat{\mathcal{G}}\mathcal{L}(
2n+1,\mathbb{R}) $,\ \
\begin{equation}
\Phi =\left( \begin{array}{lll}
A & b & w \\
{}^{t}c & a & r \\
0 & 0 & \epsilon
\end{array}\right) , \epsilon =\pm 1.%
\label{A: Phi gl form}
\end{equation}
\noindent Furthermore we know that the requirement that the symplectic
metric is invariant requires that $\Phi \in \mathcal{S}p( 2n+2)
$.\ \ The group $\mathcal{G}$ that leaves both the Newtonian line
element and the symplectic metric invariant is the intersection of
these two groups
\begin{equation}
\mathcal{G}\simeq \mathcal{S}p( 2n+2) \cap \mathcal{I}\hat{\mathcal{G}}\mathcal{L}(
2n+1)
\end{equation}
\noindent This group may be explicitly calculated simply by applying
the symplectic condition to the explicit form of the group elements
$\Phi \in \mathcal{I}\hat{\mathcal{G}}\mathcal{L}( 2n+1,\mathbb{R})
$ given in (22).
In the basis $\{d p, d q, d e, d t\}$ the matrix for symplectic
metric ${}^{t}d z\cdot \zeta \cdot d z$\ \ has the form
\begin{equation}
\zeta =\left( \begin{array}{lll}
\zeta \mbox{}^{\circ} & 0 & 0 \\
0 & 0 & -1 \\
0 & 1 & 0
\end{array}\right) .
\end{equation}
\noindent where\ \ \ $\zeta \mbox{}^{\circ}$ is the $2n\times 2
n$ matrix\ \ \
\begin{equation}
\zeta \mbox{}^{\circ}=\left( \begin{array}{ll}
0 & I_{n} \\
-I_{n} & 0
\end{array}\right) ,
\end{equation}
\noindent with $I_{n}$ the $n\times n$ identity matrix.\ \
Next, impose the condition that symplectic metric is invariant ${}^{t}\Phi
\cdot \zeta \cdot \Phi =\zeta $ using $\Phi $ defined in (22)
\begin{equation}
\left( \begin{array}{lll}
{}^{t}A\cdot \zeta \mbox{}^{\circ}\cdot A & {}^{t}A\cdot \zeta
\mbox{}^{\circ}\cdot b & {}^{t}A\cdot \zeta \mbox{}^{\circ}\cdot
w-\epsilon {}c \\
{}^{t}b\cdot \zeta \mbox{}^{\circ}\cdot A & {}^{t}b\cdot \zeta
\mbox{}^{\circ} \cdot b & {}^{t}b\cdot \zeta \mbox{}^{\circ}\cdot
w-\epsilon a \\
{}^{t}w\cdot \zeta \mbox{}^{\circ}\cdot A+\epsilon \ \ {}^{t}c
& {}^{t}w\cdot \zeta \mbox{}^{\circ}\cdot b+\epsilon a & {}^{t}w\cdot
\zeta \mbox{}^{\circ}\cdot w
\end{array}\right) =\left( \begin{array}{lll}
\zeta \mbox{}^{\circ} & 0 & 0 \\
0 & 0 & -1 \\
0 & 1 & 0
\end{array}\right) .
\end{equation}
\noindent First, ${}^{t}A\cdot \zeta \mbox{}^{\circ}\cdot A=\zeta
\mbox{}^{\circ}$ implies that $A\in \mathcal{S}p( 2n) $.\ \ It follows
from ${}^{t}b\cdot \zeta \mbox{}^{\circ}\cdot A=0$ and ${}^{t}A\cdot
\zeta \mbox{}^{\circ}\cdot b=0$ that $b=0$.\ \ Note that ${}^{t}w\cdot
\zeta \mbox{}^{\circ}\cdot w\equiv 0$ as $\zeta \mbox{}^{\circ}$
is antisymmetric.\ \ Then from the terms\ \ ${}^{t}w\cdot \zeta
\mbox{}^{\circ}\cdot b+\epsilon a=1$ and ${}^{t}b\cdot \zeta \mbox{}^{\circ}\cdot
w-\epsilon a=-1$,\ \ with $b=0$ we have $a=\epsilon $.\ \ \ Finally,
the remaining equations are\ \
\begin{equation}
c=\epsilon \, {}^{t}A\cdot \zeta \mbox{}^{\circ}\cdot w ,\ \ \ \ {}^{t}c= -\epsilon \, {}^{t}w\cdot \zeta \mbox{}^{\circ}\cdot A .
\end{equation}
\noindent Noting that ${}^{t}\zeta \mbox{}^{\circ}=-\zeta \mbox{}^{\circ}$,
these two equations are equivalent and therefore the matrix $\Phi
$ takes the form
\begin{equation}
\Phi ( \epsilon ,A,w,r) =\left( \begin{array}{lll}
A & 0 & w \\
-\epsilon \ \ \ {}^{t}w\cdot \zeta \mbox{}^{\circ}\cdot A & \epsilon
& r \\
0 & 0 & \epsilon
\end{array}\right)
\end{equation}
It follows straightforwardly that this is a matrix group with the
group multiplication realized by matrix multiplication and the group
inverse by matrix inverse
\begin{equation}
\begin{array}{l}
\Phi ( \epsilon ,A,w,r) =\Phi ( \epsilon ^{{\prime\prime}},A^{{\prime\prime}},w^{{\prime\prime}},r^{{\prime\prime}})
\cdot \Phi ( \epsilon ^{\prime },A^{\prime },w^{\prime },r^{\prime
}) , \\
\Phi ^{-1}( \epsilon ,A,w,r) =\Phi ( \epsilon ,A^{-1},-\epsilon
A^{-1}\cdot w,-r) .
\end{array}%
\label{A: HSP group law}
\end{equation}
\noindent where
\begin{equation}
\begin{array}{l}
\epsilon =\epsilon ^{{\prime\prime}}\epsilon ^{\prime },\ \ \ A=A^{{\prime\prime}}\cdot
A^{\prime }, \\
w=\epsilon ^{\prime }w^{{\prime\prime}}+A^{{\prime\prime}}\cdot
w^{\prime }, \\
r=\epsilon ^{{\prime\prime}}r^{\prime }+\epsilon ^{\prime } r^{{\prime\prime}}-\epsilon
^{{\prime\prime}} {}^{t}w^{{\prime\prime}}\cdot \zeta \mbox{}^{\circ}\cdot
A^{{\prime\prime}}\cdot w^{\prime }.
\end{array}%
\label{A: HSP group law components}
\end{equation}
\noindent It is clear that $\Phi ( 1,A,0,0) \in \mathcal{S}p( 2n)
$ and $\Phi ( \epsilon ,I_{2n},0,0) \in \mathcal{D}_{2}$ and further
that $\Phi ( \epsilon ,A,0,0) \in \mathcal{D}_{2}\otimes \mathcal{S}p(
2n) $
\begin{equation}
\Phi ( 1,A,0,0) =\left( \begin{array}{lll}
A & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array}\right) ,\ \ \Phi ( \epsilon ,I_{2n},0,0) =\left( \begin{array}{lll}
I_{2n} & 0 & 0 \\
0 & \epsilon & 0 \\
0 & 0 & \epsilon
\end{array}\right) .
\end{equation}
The elements of the discrete group $\mathcal{D}_{2}$ change the
sign of the time and energy degrees of freedom together and the
elements of the symplectic group $\mathcal{S}p( 2n) $ are the usual
symplectic transformations on the position and momentum degrees
of freedom.\ \ \
Note also that for $A^{{\prime\prime}}=A^{\prime }=I_{2n}$ and $\epsilon
^{{\prime\prime}}=\epsilon ^{\prime }=1$ that the group multiplication
law reduces to
\begin{equation}
\begin{array}{l}
\begin{array}{rl}
\Phi ( 1,I_{2n},w,r) & =\Phi ( 1,I_{2n},w^{{\prime\prime}},r^{{\prime\prime}})
\cdot \Phi ( 1,I_{2n},w^{\prime },r^{\prime })
\end{array} \\
\Phi ^{-1}( 1,I_{2n},w,r) =\Phi ( 1,I_{2n},-w,-r)
\end{array}%
\label{A: General Heisenberg group operation}
\end{equation}
\noindent where
\begin{equation}
\begin{array}{l}
w=w^{{\prime\prime}}+w^{\prime }, \\
r=r^{\prime }+r^{{\prime\prime}}- {}^{t}w^{{\prime\prime}}\cdot
\zeta \mbox{}^{\circ}\cdot w^{\prime }.
\end{array}
\end{equation}
\noindent and therefore $\Upsilon ( w,r) =\Phi ( 1,I_{2n},w,r)
$ defines a subgroup $\mathcal{H}( n) $ that has the group multiplication
and inverse given by (\ref{A: General Heisenberg group operation}) with $w\in \mathbb{R}^{2n}$ and $r\in
\mathbb{R}$. This group is the Weyl-Heisenberg group.
The automorphisms of this subgroup are
\begin{equation}
\begin{array}{rl}
\Phi ( \epsilon ,A,w,r) & =\Phi ( \epsilon ^{\prime },A^{\prime },w^{\prime },r^{\prime })
\cdot \Phi ( 1,I_{2n},w^{{\prime\prime}},r^{{\prime\prime}})
\cdot \Phi ^{-1}( \epsilon ^{\prime },A^{\prime },w^{\prime },r^{\prime }) \\
& =\Phi ( 1,I_{2n},\epsilon ^{\prime }A^{\prime }\cdot w^{{\prime\prime}},r^{{\prime\prime}}-{}^{t}w^{\prime
}\cdot \zeta \mbox{}^{\circ}\cdot A^{\prime }\cdot w^{{\prime\prime}}+{}^{t}w^{{\prime\prime}}\cdot
{}^{t}A^{\prime }\cdot \zeta \mbox{}^{\circ}\cdot w^{\prime })
\end{array}%
\label{A: HSp Automorphisms of the Heisenberg Group}
\end{equation}
\noindent Therefore $\mathcal{H}( n) $ is a normal subgroup. The
union of $\mathcal{H}( n) $ with $\mathcal{D}_{2}\otimes
\mathcal{S}p( 2n) $ is the full group and the intersection is the
identity. Thus we have the result that the group $\mathcal{G}$ that
leaves the symplectic metric and the Newtonian time line element
invariant is
\begin{equation}
\mathcal{G}\simeq \mathcal{S}p( 2n+2) \cap \mathcal{I}\hat{\mathcal{G}}\mathcal{L}(
2n+1) \simeq \mathcal{H}\hat{\mathcal{S}}p( 2n) =\mathcal{D}_{2}\otimes
_{s}\mathcal{S}p( 2n) \otimes _{s}\mathcal{H}( n) %
\label{A: HSp definition}
\end{equation}
It is shown in \cite{folland} that this group is the group of linear
automorphisms of the Weyl-Heisenberg group and therefore is the
maximal group with a Weyl-Heisenberg normal subgroup.
\subsection{Weyl-Heisenberg group}
\noindent The notational change $w=(f,v)$ with $f,v\in \mathbb{R}^{n}$
enables the group operations of the Weyl-Heisenberg group to
be written in the form
\begin{equation}
\begin{array}{l}
\begin{array}{rl}
\Upsilon ( f,v,r) & =\Upsilon ( f^{{\prime\prime}},v^{{\prime\prime}},r^{{\prime\prime}})
\cdot \Upsilon ( f^{\prime },v^{\prime },r^{\prime }) \\
& =\Upsilon ( f^{{\prime\prime}}+f^{\prime },v^{{\prime\prime}}+v^{\prime
},r^{{\prime\prime}}+r^{\prime }-f^{{\prime\prime}}\cdot v^{\prime
}+v^{{\prime\prime}}\cdot f^{\prime }) ,
\end{array} \\
\Upsilon ^{-1}( f,v,r) =\Upsilon ( -f,-v,-r) .
\end{array}%
\label{A: Heisenberg group operations}
\end{equation}
\noindent $\Upsilon ( f,v,r) $ may be realized by the matrix group
\cite{Major}
\begin{equation}
\Upsilon ( f,v,r) =\left( \begin{array}{llll}
I_{n} & 0 & 0 & f \\
0 & I_{n} & 0 & v \\
v & -f & 1 & r \\
0 & 0 & 0 & 1
\end{array}\right) ,
\end{equation}
\noindent and it can be directly verified that matrix multiplication
and inversion realize the group operations given in (\ref{A: Heisenberg group operations}).
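As a numerical sanity check (an added sketch, not part of the original), the $n=1$ case of this realization can be multiplied out and compared with the composition and inverse laws of (\ref{A: Heisenberg group operations}); note that the $(2,3)$ entry of the matrix must be $0$ for the realization to close under multiplication:

```python
import numpy as np

def upsilon(f, v, r):
    # n = 1 matrix realization of Upsilon(f, v, r); the (2,3) entry
    # is 0 so that matrix products reproduce the group law.
    return np.array([[1.0, 0.0, 0.0, f],
                     [0.0, 1.0, 0.0, v],
                     [v,  -f,  1.0, r],
                     [0.0, 0.0, 0.0, 1.0]])

def compose(g2, g1):
    # Weyl-Heisenberg law: (f''+f', v''+v', r''+r' - f''v' + v''f'),
    # with g2 carrying the double-primed and g1 the primed parameters
    f2, v2, r2 = g2
    f1, v1, r1 = g1
    return (f2 + f1, v2 + v1, r2 + r1 - f2 * v1 + v2 * f1)

g2, g1 = (0.3, -1.2, 0.7), (2.0, 0.5, -0.4)
assert np.allclose(upsilon(*g2) @ upsilon(*g1), upsilon(*compose(g2, g1)))
# inverse: Upsilon(-f, -v, -r) composes to the identity matrix
f, v, r = g1
assert np.allclose(upsilon(f, v, r) @ upsilon(-f, -v, -r), np.eye(4))
```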
The Weyl-Heisenberg group itself is the semidirect product of two
translation groups\ \
\begin{equation}
\mathcal{H}( n) \simeq \mathcal{T}( n) \otimes _{s}\mathcal{T}(
n+1) .%
\label{A: Heisenberg semidirect product}
\end{equation}
\noindent This may be shown as follows. Consider the group multiplication
in (\ref{A: Heisenberg group operations}) with $f=r=0$,
\begin{equation}
\begin{array}{l}
\Upsilon ( 0,v^{{\prime\prime}},0) \cdot \Upsilon ( 0,v^{\prime
},0) =\Upsilon ( f,v,r) =\Upsilon ( 0,v^{{\prime\prime}}+v^{\prime
},0) , \\
\Upsilon ^{-1}( 0,v,0) =\Upsilon ( 0,-v,0) .
\end{array}%
\label{A: Translation subgroup of Heisenberg (V)}
\end{equation}
\noindent Thus, $\Upsilon ( 0,v,0) \in \mathcal{T}( n) $.\ \ As
in the Euclidean case, these translations are parameterized by velocity.
Furthermore, with $v=0$ and $f,r\neq 0$, we have
\begin{equation}
\begin{array}{l}
\Upsilon ( f^{{\prime\prime}},0,r^{{\prime\prime}}) \cdot \Upsilon
( f^{\prime },0,r^{\prime }) =\Upsilon ( f,v,r) =\Upsilon ( f^{{\prime\prime}}+f^{\prime
},0,r^{{\prime\prime}}+r^{\prime }) , \\
\Upsilon ^{-1}( f,0,r) =\Upsilon ( -f,0,-r) .
\end{array}
\end{equation}
\noindent and therefore $\Upsilon ( f,0,r) \in \mathcal{T}( n+1)
$.\ \ These translations are parameterized by force and power.
Finally, a special case of the automorphism given in (\ref{A: HSp Automorphisms of the Heisenberg Group}) gives
\begin{equation}
\Upsilon ( f^{\prime },v^{\prime },r^{\prime }) \cdot \Upsilon (
f,0,r) \cdot \Upsilon ^{-1}( f^{\prime },v^{\prime },r^{\prime })
=\Upsilon ( f,0,r+2 f\cdot v^{\prime }) .
\end{equation}
\noindent The translation subgroup $\Upsilon ( f,0,r) \in \mathcal{T}(
n+1) $ of $\mathcal{H}( n) $ is therefore a normal subgroup.\ \ It
may be shown that this is not the case for the translation subgroup
$\Upsilon ( 0,v,0) \in \mathcal{T}( n) $. Therefore, the Weyl-Heisenberg
group is the semidirect product given in (\ref{A: Heisenberg semidirect product}).
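The normality computation above can be checked numerically; the sketch below (added here, not part of the original) uses only the composition law (\ref{A: Heisenberg group operations}) to conjugate an element of $\mathcal{T}( n+1) $ by a generic group element:

```python
def compose(g2, g1):
    # Weyl-Heisenberg law for n = 1; g2 carries the double-primed
    # and g1 the primed (f, v, r) parameters
    f2, v2, r2 = g2
    f1, v1, r1 = g1
    return (f2 + f1, v2 + v1, r2 + r1 - f2 * v1 + v2 * f1)

def inverse(g):
    f, v, r = g
    return (-f, -v, -r)

g = (0.8, -0.3, 1.1)   # generic Upsilon(f', v', r')
t = (1.7, 0.0, -0.5)   # Upsilon(f, 0, r), an element of T(n+1)
f, v, r = compose(compose(g, t), inverse(g))
# conjugation stays inside T(n+1): v = 0, f is unchanged,
# and r picks up a shift of 2 f v'
assert abs(v) < 1e-12 and abs(f - t[0]) < 1e-12
assert abs(r - (t[2] + 2 * t[0] * g[1])) < 1e-12
```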
The Weyl-Heisenberg group that appears as a subgroup of the group
of transformations between noninertial frames is parameterized by
velocity, force and power. From the group multiplication given
in (\ref{A: Heisenberg group operations}), velocity and force are simply additive as expected in Newtonian
mechanics. This identification will become clearer in the following
section as well as the meaning of the power transformation law.
\subsection{Hamilton's equations}
We consider now the transformations $\tilde{z}=\varphi ( z) =\varphi
( p,q,e,t) $ that leave the symplectic metric and the Newtonian
line element $d t^{2}$ invariant. From (\ref{A: HSp definition}), the continuous
group leaving this invariant is $\mathcal{H}\mathcal{S}p( 2n) $. The Jacobian
of the transformation, $\frac{\partial \varphi ( z) }{\partial z}$,
must be an element of this group. Consider first the case
where the Jacobian is an element of the $\mathcal{H}( n) $ subgroup
of $\mathcal{H}\mathcal{S}p( 2n) $.
\begin{equation}
\left[ \frac{\partial \varphi ^{\alpha }( z) }{\partial z^{\beta
}}\right] |_{z}=\Upsilon ( f,v,r)
\end{equation}
\noindent Set $z^{\alpha }=\{p^{i},q^{j},e,t\}$ with $\alpha =1,\ldots ,2n+2$ and $i,j=1,\ldots ,n$. Then
the above expression can be expanded out to
\begin{equation}
\left( \begin{array}{llll}
\frac{\partial \varphi ^{i}( z) }{\partial p^{j}} & \frac{\partial
\varphi ^{i}( z) }{\partial q^{j}} & \frac{\partial \varphi ^{i}(
z) }{\partial e} & \frac{\partial \varphi ^{i}( z) }{\partial t}
\\
\frac{\partial \varphi ^{n+i}( z) }{\partial p^{j}} & \frac{\partial
\varphi ^{n+i}( z) }{\partial q^{j}} & \frac{\partial \varphi ^{n+i}(
z) }{\partial e} & \frac{\partial \varphi ^{n+i}( z) }{\partial
t} \\
\frac{\partial \varphi ^{2n+1}( z) }{\partial p^{j}} & \frac{\partial
\varphi ^{2n+1}( z) }{\partial q^{j}} & \frac{\partial \varphi ^{2n+1}(
z) }{\partial e} & \frac{\partial \varphi ^{2n+1}( z) }{\partial
t} \\
\frac{\partial \varphi ^{2n+2}( z) }{\partial p^{j}} & \frac{\partial
\varphi ^{2n+2}( z) }{\partial q^{j}} & \frac{\partial \varphi ^{2n+2}(
z) }{\partial e} & \frac{\partial \varphi ^{2n+2}( z) }{\partial
t}
\end{array}\right) =\left( \begin{array}{llll}
\delta _{j}^{i} & 0 & 0 & f^{i} \\
0 & \delta _{j}^{i} & 0 & v^{i} \\
v^{j} & -f^{j} & 1 & r \\
0 & 0 & 0 & 1
\end{array}\right)
\end{equation}
\noindent The solution of these equations requires the $\varphi
^{\alpha }$ to have the form
\begin{equation}
\begin{array}{l}
{\tilde{p}}^{i}=\varphi ^{i}( p,q,e,t) =p^{i}+{\varphi _{p}}^{i}(
t) , \\
{\tilde{q}}^{i}=\varphi ^{n+i}( p,q,e,t) =q^{i}+{\varphi _{q}}^{i}(
t) , \\
\tilde{e}=\varphi ^{2n+1}( p,q,e,t) =e+H( p,q,t) , \\
\tilde{t}=\varphi ^{2n+2}( p,q,e,t) =t.
\end{array}%
\label{A: Hamilton noinertial transformations}
\end{equation}
\noindent In addition, the functions $\varphi ^{\alpha }$ must satisfy
\[
\begin{array}{ll}
\frac{\partial \varphi ^{n+i}( z) }{\partial t}=v^{i}=\frac{\partial
\varphi ^{2n+1}( z) }{\partial p^{i}}, & \frac{\partial \varphi
^{2n+1}( z) }{\partial t}=r. \\
\frac{\partial \varphi ^{i}( z) }{\partial t}=f^{i}=-\frac{\partial
\varphi ^{2n+1}( z) }{\partial q^{i}}, &
\end{array}
\]
\noindent which, on substituting into (\ref{A: Hamilton noinertial transformations}), yields Hamilton's equations
\begin{equation}
\frac{d {\varphi _{q}}^{i}( t) }{d t }=v^{i}=\frac{\partial H( p,q,t)
}{\partial p^{i} }, \frac{d {\varphi _{p}}^{i}( t) }{d t }=f^{i}=-\frac{\partial
H( p,q,t) }{\partial q^{i} },\ \ \frac{\partial H( p,q,t) }{\partial
t }=r.%
\label{A: Hamilton equations}
\end{equation}
From this result, the identification of $v$ with velocity, $f$ with
force and $r$ with power is clear. The group operation describes
the addition of these quantities for transformations between frames
associated with particles following trajectories that satisfy Hamilton's
equations that are generally noninertial. The terms in the power
transformation in the group multiplication laws (\ref{A: Hamilton group composition law}, \ref{A: HSP group law components}) integrate
to the terms in the Hamiltonian required for noninertial frames.
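For concreteness (an illustration added here, not part of the original), consider $n=1$ with a Hamiltonian of the standard form
\[
H( p,q,t) =\frac{p^{2}}{2m}+U( q,t) ,\ \ \ \ v=\frac{\partial H}{\partial p}=\frac{p}{m},\ \ \ \ f=-\frac{\partial H}{\partial q}=-\frac{\partial U}{\partial q},\ \ \ \ r=\frac{\partial H}{\partial t}=\frac{\partial U}{\partial t}.
\]
The parameter $v$ is then the familiar velocity $p/m$, $f$ is the applied force, and $r$ is the power supplied by an explicitly time-dependent potential; for a time-independent $U$ the power parameter vanishes.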
On the other hand, if the Jacobian $\frac{\partial \varphi ( z)
}{\partial z}$ is an element of the subgroup $\mathcal{S}p( 2n)
\subset \mathcal{H}\mathcal{S}p( 2n) $, then the transformations
are the usual canonical transformations on momentum-position space.\ \
These transformations leave invariant the symplectic metric $\delta
_{i,j}d p^{i}\wedge d q^{j}$ and Hamilton's equations.
Thus, from the condition that the Newtonian time line element $d
t^{2}$ and the symplectic metric $\zeta $ are invariant on a $2n+2$
dimensional space, we have derived Hamilton's equations on $2n$
dimensional phase space and their invariance under the canonical
transformations with Jacobians that are elements of $\mathcal{S}p(
2n) $. However, viewed on the $2n+2$ dimensional space, the
transformation group is $\mathcal{S}p( 2n) \otimes _{s}\mathcal{H}(
n) $.\ \ This group transforms between the frames associated with
particles following trajectories defined by Hamilton's equations
that are generally noninertial.
\subsection{The Hamilton group}
Finally, we may also consider\ \ the invariance of the length line
element
\begin{equation}
d q^{2}=\delta _{i j}d q^{i}d q^{j}={}^{t}d z\cdot \eta ^{q}\cdot
d z,%
\label{A: Length in Hamiltons mechanics}
\end{equation}
\noindent in the inertial rest frame as in the Euclidean case. The
inertial rest frame is defined by $v=f=r=0$ and therefore
\begin{equation}
{}^{t}\Phi ( 1,A,0,0,0) \cdot \eta ^{q}\cdot \Phi ( 1,A,0,0,0) =\eta
^{q}.
\end{equation}
The $2n \times 2n$ matrix $A\in \mathcal{S}p( 2n) $ may be decomposed
into the four $n \times n$ submatrices $A_{\mu ,\nu }$ with $\mu
,\nu =1,2$. In the $2n+2$ dimensional space, the $\eta ^{q}$\ \ and
$\Phi ( 1,A,0,0,0) $ are\ \ given by
\begin{equation}
\eta ^{q} =\left( \begin{array}{llll}
0 & 0 & 0 & 0 \\
0 & I_{n} & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{array}\right) ,\ \ \Phi ( 1,A,0,0,0) =\left( \begin{array}{llll}
{}A_{1,1} & {}A_{1,2} & 0 & 0 \\
{}A_{2,1} & {}A_{2,2} & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array}\right) .
\end{equation}
Then the invariance of the length line element (\ref{A: Length in Hamiltons mechanics}) in the inertial
rest frame results in
\begin{equation}
\left( \begin{array}{llll}
{} {}{}^{t}A_{2,1}\cdot A_{2,1} & {}{}^{t}A_{2,2}\cdot A_{2,1}
& 0 & 0 \\
{}{}^{t}A_{2,1}\cdot A_{2,2} & {}^{t}A_{2,2}\cdot A_{2,2} & 0
& 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{array}\right) =\left( \begin{array}{llll}
0 & 0 & 0 & 0 \\
0 & I_{n} & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{array}\right) ,
\end{equation}
\noindent where the dimensions of the zero submatrices are clear
from the context. From this it follows that $A_{2,2}=R\in \mathcal{O}(
n) $ and $A_{2,1}=0$.\ \
The matrices $A$ are elements of $\mathcal{S}p( 2n) $ and therefore
${}^{t}A\cdot \zeta \mbox{}^{\circ}\cdot A=\zeta \mbox{}^{\circ}$.
From this it follows that $A^{-1}=-\zeta \mbox{}^{\circ}\cdot
{}^{t}A\cdot \zeta \mbox{}^{\circ}$. Writing the $2n \times
2n$ matrices $A$ in terms of the four $n \times n$ submatrices
$A_{\mu ,\nu }$ , we have
\begin{equation}
{\left( \begin{array}{ll}
{}A_{1,1} & {}A_{1,2} \\
{}A_{2,1} & {}A_{2,2}
\end{array}\right) }^{-1}=-\left( \begin{array}{ll}
{}0 & {}I_{n} \\
{}-I_{n} & 0
\end{array}\right) \cdot \left( \begin{array}{ll}
{}^{t}A_{1,1} & {}^{t}A_{2,1} \\
{}^{t}A_{1,2} & {}^{t}A_{2,2}
\end{array}\right) \cdot \left( \begin{array}{ll}
{}0 & {}I_{n} \\
{}-I_{n} & 0
\end{array}\right) .
\end{equation}
\noindent For the case $A_{2,1}=0$ and $A_{2,2}=R$, the inverse
may be computed and therefore
\begin{equation}
\left( \begin{array}{ll}
{}{A_{1,1}}^{-1} & -{}{A_{1,1}}^{-1}\cdot A_{1,2}\cdot R^{-1} \\
{}0 & {}R^{-1}
\end{array}\right) =\left( \begin{array}{ll}
{}^{t}R & -{}^{t}A_{1,2} \\
0 & {}^{t}A_{1,1}
\end{array}\right) .
\end{equation}
\noindent Thus $A_{1,1}={}^{t}R^{-1}$. Now, as $R\in \mathcal{O}(
n) $, we have that $R^{-1}={}^{t}R$ and so $A_{1,1}={}R$. The
remaining condition is that
\begin{equation}
{}^{t}R\cdot A_{1,2}\cdot R^{-1}\equiv {}^{t}A_{1,2}\ \ \ \ \ \mathrm{or}\ \ \ \ {}^{t}R\cdot
A_{1,2}\equiv {}^{t}\left( {}^{t}R\cdot A_{1,2}\right) .
\end{equation}
\noindent As this must hold for all $R\in \mathcal{O}( n) $, we
have $A_{1,2}=0$. This means that $A$ is realized by the $2n
\times 2n$ matrices of the form
\begin{equation}
A=\left( \begin{array}{ll}
R & 0 \\
0 & R
\end{array}\right) , R\in \mathcal{O}( n) ,%
\label{A: A reduces to orthogonal}
\end{equation}
\noindent and therefore $A\in \mathcal{O}( n) $.
This gives the result that the extended Hamilton group $\hat{\mathcal{H}}a(
n) $ is
\begin{equation}
\hat{\mathcal{H}}a( n) \simeq \mathcal{D}_{2}\otimes _{s}\mathcal{O}(
n) \otimes _{s}\mathcal{H}( n) .
\end{equation}
\noindent An element of the Hamilton group may be written explicitly
in the $(2n +2)\times (2n+2)$ matrix realization
\begin{equation}
\Phi ( \epsilon ,R,f,v,r) =\left( \begin{array}{llll}
R & 0 & 0 & f \\
0 & R & 0 & v \\
\epsilon v\cdot R & -\epsilon f\cdot R & \epsilon & r \\
0 & 0 & 0 & \epsilon
\end{array}\right) .
\end{equation}
\noindent Again, as it is a matrix group, the group multiplication
and inverse are given by matrix multiplication and inversion. Alternatively,
these are just the special case of (\ref{A: HSP group law}, \ref{A: HSP group law components}) with $A$ given by (\ref{A: A reduces to orthogonal}) and
$w=(f,v)$
\begin{equation}
\begin{array}{l}
\Phi ( \epsilon ,R,f,v,r) =\Phi ( \epsilon ^{{\prime\prime}},R^{{\prime\prime}},f^{{\prime\prime}},v^{{\prime\prime}},r^{{\prime\prime}})
\cdot \Phi ( \epsilon ^{\prime },R^{\prime },f^{\prime },v^{\prime
},r^{\prime }) , \\
{\Phi ( \epsilon ,R,f,v,r) }^{-1}=\Phi ( \epsilon ,R^{-1},-\epsilon
R^{-1}\cdot f,-\epsilon R^{-1}\cdot v,-r) ,
\end{array}%
\label{A: Hamilton group composition law}
\end{equation}
\noindent where
\begin{equation}
\begin{array}{l}
\epsilon =\epsilon ^{\prime } \epsilon ^{{\prime\prime}},\ \ \ \ \ R=R^{{\prime\prime}}\cdot
R^{\prime },\ \ \\
f=\epsilon ^{\prime }f^{{\prime\prime}}+R^{{\prime\prime}}\cdot
f^{\prime }, \\
v=\epsilon ^{\prime }v^{{\prime\prime}}+R^{{\prime\prime}}\cdot
v^{\prime }, \\
r=\epsilon ^{\prime }r^{{\prime\prime}}+\epsilon ^{{\prime\prime}}(
r^{\prime }-f^{{\prime\prime}}\cdot R^{{\prime\prime}}\cdot v^{\prime
}+v^{{\prime\prime}}\cdot R^{{\prime\prime}}\cdot f^{\prime }) .
\end{array}
\end{equation}
\noindent These are the transformation equations for velocity $v$,
force $f$ and power $r$ under the extended Hamilton group.
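These component laws can be checked numerically for $n=1$ (a sketch added here, not part of the original); the inverse used below is the specialization of the general rule (\ref{A: HSP group law}) to $A=\mathrm{diag}(R,R)$:

```python
import numpy as np

def compose(g2, g1):
    # extended Hamilton group law for n = 1, with R in O(1) = {+1, -1};
    # g2 carries the double-primed and g1 the primed parameters
    e2, R2, f2, v2, r2 = g2
    e1, R1, f1, v1, r1 = g1
    return (e1 * e2,
            R2 * R1,
            e1 * f2 + R2 * f1,
            e1 * v2 + R2 * v1,
            e1 * r2 + e2 * (r1 - f2 * R2 * v1 + v2 * R2 * f1))

def inverse(g):
    # (eps, R^{-1}, -eps R^{-1} f, -eps R^{-1} v, -r)
    e, R, f, v, r = g
    Rinv = R  # R^{-1} = R for R = +/- 1
    return (e, Rinv, -e * Rinv * f, -e * Rinv * v, -r)

g = (-1, -1, 0.4, 1.5, -2.0)
identity = (1, 1, 0.0, 0.0, 0.0)
assert np.allclose(compose(g, inverse(g)), identity)
assert np.allclose(compose(inverse(g), g), identity)
```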
Note that for the inertial case with $f=r=0$, these reduce
to
\begin{equation}
\begin{array}{l}
\begin{array}{rl}
\Phi ( \epsilon ,R,0,v,0) & =\Phi ( \epsilon ^{{\prime\prime}},R^{{\prime\prime}},0,v^{{\prime\prime}},0)
\cdot \Phi ( \epsilon ^{\prime },R^{\prime },0,v^{\prime },0) \\
& =\Phi ( \epsilon ^{\prime } \epsilon ^{{\prime\prime}},R^{{\prime\prime}}\cdot R^{\prime },0,\epsilon
^{\prime }v^{{\prime\prime}}+R^{{\prime\prime}}\cdot v^{\prime },0)
,
\end{array} \\
{\Phi ( \epsilon ,R,0,v,0) }^{-1}=\Phi ( \epsilon ,R^{-1},0,-\epsilon
R^{-1}\cdot v,0) .
\end{array}
\end{equation}
\noindent With the identification $\Gamma ( \epsilon ,R,v) \simeq
\Phi ( \epsilon ,R,0,v,0) $, these are the group multiplication
and inverse laws for the extended Euclidean group given earlier.
Furthermore, noting that for $f=v=0$ the Weyl-Heisenberg subgroup
reduces to the translation group, we have that
\begin{equation}
\hat{\mathcal{E}}( n) \subset \hat{\mathcal{H}}a( n) .
\end{equation}
\noindent Thus the inertial group, that is, the homogeneous subgroup
of the Galilei group, is a special case of the general noninertial
Hamilton group.
Now, as in the Euclidean case, the orthogonal group can be decomposed
into\ \ the direct product of the two element discrete parity group
and the special orthogonal group,\ \ $\mathcal{O}( n) \simeq {\tilde{\mathcal{D}}}_{2}\otimes
_{s}\mathcal{S}\mathcal{O}( n) $.\ \ The discrete two element parity
group changes the sign of the position and momentum degrees of freedom
together. The final step is to use this decomposition and again
define the 4 element discrete group with elements $\varsigma \in
\mathcal{D}_{4}\simeq {\tilde{\mathcal{D}}}_{2}\otimes \mathcal{D}_{2}$
and restrict $R\in \mathcal{S}\mathcal{O}( n) $.\ \ The $(2n +2)\times
(2n+2)$ matrix realization of the elements $\varsigma \in \mathcal{D}_{4}$\ \ are\ \ \
\begin{equation}
\varsigma =\left( \begin{array}{llll}
\tilde{\epsilon }I_{n} & 0 & 0 & 0 \\
0 & \tilde{\epsilon }I_{n} & 0 & 0 \\
0 & 0 & \epsilon & 0 \\
0 & 0 & 0 & \epsilon
\end{array}\right) ,\ \ \epsilon =\pm 1,\ \tilde{\epsilon }=\pm 1.
\end{equation}
The extended Hamilton group may then be written
\begin{equation}
\hat{\mathcal{H}}a( n) \simeq \mathcal{D}_{4}\otimes _{s}\mathcal{S}\mathcal{O}(
n) \otimes _{s}\mathcal{H}( n) \simeq \mathcal{D}_{4}\otimes _{s}\mathcal{H}a(
n) %
\label{Hamilton inertial velocity equation}
\end{equation}
\noindent where $\mathcal{H}a( n) =\mathcal{S}\mathcal{O}( n) \otimes
_{s}\mathcal{H}(n)$.
\section{Discussion}
We began the discussion in this paper by considering the group leaving
the Newtonian time line element invariant; requiring also the invariance
of spatial length in the inertial rest frame resulted in
the extended Euclidean group of transformations that is the homogeneous
subgroup of the Galilei group. The diffeomorphisms with a Jacobian
that is an element of this group at a given point in the space-time
define the usual linear inertial transformations. The extended Euclidean
group defines the transformations between inertial frames in the
Newtonian formulation.
We considered next the group that leaves invariant the Newtonian
line element on a time, position, momentum, energy space on which
a symplectic metric is also left invariant. If again we require
the invariance of length in the inertial rest frame, the extended
Hamilton group of transformations results.\ \ The diffeomorphisms
with a Jacobian that is an element of this group at a given point
in the space-time define Hamilton's equations. Particles in classical
mechanics follow trajectories that are defined by solutions to Hamilton's
equations. The frames associated with these trajectories are in
general noninertial.\ \ The extended Hamilton group therefore defines
the transformations between general noninertial frames in the Hamilton
formulation. The extended Euclidean transformations are a special
case of the extended Hamilton transformations corresponding to the
inertial case where the rate of change of momentum and energy are
zero.
The Hamilton group multiplication defines the usual addition of
velocity and force. The noninertial transformations of the power
result in terms involving velocity and force appearing in the power
transformation that integrate to the terms required in the Hamiltonian
in a noninertial frame.
There is nothing fundamentally physical that distinguishes a particle
in an inertial frame as opposed to a noninertial frame. The usual
choice of inertial frames is simply a mathematical expediency to
simplify the analysis. Furthermore, as inertial frames are related
by a group, one expects noninertial frames in the neighborhood
of an inertial frame likewise to be related by a group. This is
the Hamilton group. In this classical case, the noninertial formulation
does not result in new physical consequences.
We know that the Euclidean group is the limit of small velocities,
$v/c\rightarrow 0$, of the Lorentz group of special relativity.\ \ The
Lorentz group defines transformations between frames of inertial
particles in special relativity. Clearly, by the above arguments,
there must be a group of transformations for noninertial frames
in the relativistic case.\footnote{This is often assumed to be general
relativity. The equivalence principle results in particles following
geodesics and so all particles in a purely gravitational system
are locally inertial in the curved manifold.\ \ Consider the case
with other forces where gravity is negligible and the problem of
relativistic noninertial frames remains.} This group must have the
Lorentz group as the inertial special case and contract, in a well
defined physical limit that includes small velocities relative
to $c$, to the Hamilton group. A group that satisfies these properties,
together with the new physical consequences that result, is discussed in \cite{Low5,Low6}.
\section{Introduction}
White dwarf stars (WDs) are the endpoints of evolution for most
stars. Their internal structures provide key clues into their
complex pre-WD evolution. As WDs, their subsequent evolution is
dominated by cooling. The older they are, the cooler they become.
Why then, does there exist a range of temperatures within which we
hardly see any He atmosphere WDs (DBs) while we see both the H
atmosphere WDs (DAs) and non-DAs (He atmosphere DOs and DBs) at
both hotter and cooler temperature than this? This paradox is the
so-called ``DB gap'' (Fontaine \& Wesemael 1987). Recently, Sloan
Digital Sky Survey (SDSS) data have shown us that the DB gap is not
completely void of DBs, but rather deficient in the number of DBs
(Eisenstein et al. 2006a). The current best explanation for this
effect is based on WDs having specific layer masses (the large
gravity in a WD makes it compositionally stratified) which mix and
settle at certain temperatures, causing the surface ``flavor'' of
a WD to change with time and temperature (Fontaine \& Wesemael
1987). This explanation demands a thin H layer in at least a
substantial fraction of DAs. However, there have been several works
(Fontaine et al. 1992; Clemens 1994; Fontaine et al. 1994; Robinson
et al. 1995; Kleinman et al. 1998; Benvenuto et al. 2002) suggesting
that perhaps all DAs have thick H layers and if so, spectral evolution
by the current model cannot happen.
Once a WD cools past the onset of its instability strip (at a
temperature primarily determined by its atmospheric composition and
total mass), it begins pulsating in a series of non-radial g-modes,
allowing us to study its interior via the technique of asteroseismology.
Asteroseismology, the study of stellar pulsations, is an important
way to directly measure quantities of the stellar interior. And
understanding the interior structure of the DBVs is one very important
way to address some of the mysteries of DB evolution. Among the 9
DBVs known prior to our work, the first DBV discovered (Winget et
al. 1982), GD\,358, is by far the best studied WD pulsator. It has
had its internal structure substantially explored by asteroseismology
(Winget et al. 1994, Bradley \& Winget 1994; Vuille et al. 2000;
Metcalfe Salaris \& Winget 2002; Metcalfe 2003; Kepler et al. 2005;
Metcalfe et al. 2005). The results from the asteroseismological
investigations of GD\,358 (Winget et al. 1994) are impressive:
total mass of $0.61\pm0.03 M_{\sun}$, He layer mass of $\log M_{He}/M_{\star}
= -5.7(+0.18, -0.30)$, $R_{\star}/R_{\sun}=0.0127\pm0.0004$, He to
C transition zone thickness of about 8 pressure scale heights,
absolute luminosity $\log L_{\star}/L_{\sun}=-1.30 (+0.09, -0.12)$,
hence a distance of $42\pm3$\,pc, a weak magnetic field of $1300\pm300$\,G
and a measurement of radial differential rotation. More recent detailed
model fitting techniques using genetic algorithms along with
improvements to the models have been successful in revealing even
more information. We now have a measurement of the oxygen mass
fraction in the core which places constraints on both the nuclear
burning rate $^{12}C(\alpha, \gamma)^{16}O$ and even more detailed
structure information, such as the extent of the He/C envelope
beneath the pure He envelope (Metcalfe, Salaris \& Winget 2002;
Metcalfe 2003; Metcalfe et al. 2005). Except for one other DBV,
the rest of the class have not been so forthcoming in revealing their
internal structures, primarily because they lack the abundance of
pulsation modes seen in GD\,358, which has over 100 detected frequencies.
CBS\,114 is a DBV which showed promise for successful asteroseismological
analysis by exhibiting a rich pulsation spectrum, but earlier
observational comparisons to the models produced a
$\mathrm{C}(\alpha,\gamma)\mathrm{O}$ nuclear burning
rate which was at odds with that obtained from GD\,358 (Handler,
Metcalfe \& Wood 2002). After several years of additional observations
of CBS\,114, which lead to identifying eleven independent pulsation
modes (four of which were new) along with improvements in pulsation
models and fitting techniques, Metcalfe et al.(2005) have achieved
new asteroseismological results for both stars which are now in
agreement with each other. The one thing CBS\,114 did not show and
which GD\,358 did were the many fine structure splittings of the
pulsation modes caused predominantly by stellar rotation.
Our understanding of the pulsation amplitude
determining mechanism on these stars is incomplete and we cannot
explain why we see significant fine-structure splitting in GD\,358
and not much in CBS\,114. We certainly do not believe it is due to
lack of rotation on CBS\,114's part though it could be due to the star
being observed near pole on. So the search goes on for a
third solvable pulsator to try and distinguish modes, models,
fits and reality in these objects.
Another important reason to study DBVs is that they are great cosmic
laboratories for high energy physics. Winget et al. (2004) predict
that hot DBs should have significant plasmon neutrino production.
Their DB models suggest that 30,000K, $0.6M_{\sun}$ DBs have a
neutrino luminosity that is 1.8 times higher than their photon
luminosity. On the cool end, 22,000K, $0.6M_{\sun}$ DBV models
have a neutrino luminosity less than half of their photon luminosity.
Thus the hottest DBVs should be losing energy and cooling significantly
faster than the cooler ones. Since a pulsation mode's period is a
function of temperature, we can directly measure a star's cooling
rate by measuring a mode's rate of period change (e.g. Kepler et
al. 2005b). And thus, the DBVs may be quite revealing laboratories
for neutrino physics.
Finally, an increase in the number of known DBVs will help us
understand their properties as a group. Clemens (1994) and Kleinman
(1995, 1998) found that the DA pulsators break down nicely into two
distinct classes, each subclass exhibiting common class properties
which they have used to investigate the dynamics of the pulsation
mechanism in these stars. By increasing the number of known DBVs,
we can search for possible subclass distinctions. Nather, Robinson
\& Stover (1981) noted that the interacting binary white dwarf stars
will each eventually form a single DB at the end of their evolution.
This means that there may be more than one evolutionary channel
leading to the DBs. Perhaps we will find two distinct classes,
each of them retaining the evidence of their evolutionary paths in
their pulsation structures.
SDSS is a photometric and spectroscopic survey of the sky covering
about 10,000 square degrees around the Northern Galactic
cap (York et al. 1996; Stoughton et al. 2002; Gunn et al. 1998;
Gunn et al. 2000). In SDSS's Sixth Data Release (Adelman-McCarthy,
et al. 2008), there is photometry covering close to 10,000 square degrees
in five filters (Fukugita et al. 1996) and 1.27 million spectra.
Although the survey's main goal was to produce a 3D map of the large
scale structure of the universe, it also contains data on many galactic
stellar objects, including WDs. SDSS data provide the perfect basis
set for finding new DBVs which will eventually help solve the DB
Gap mystery, measure the neutrino production rates inside the DBs,
as well as answer some other questions about WD structure and
evolution. Kleinman et al. (2004) published the first WD catalogue
based on the spectra obtained by SDSS and doubled the number of
then known WDs. The newest WD catalogue from the SDSS (Eisenstein
et al. 2006b, DR4 WD catalogue hereafter) has almost quadrupled
the number of WDs. Among the new WDs are DBs whose physical parameters
determined from model fitting suggest they are inside the instability
strip. Therefore, we started a project to search for new DBVs using
our spectroscopic fits to SDSS spectra, originally from Kleinman
et al. (2004) and later using the DR4 WD catalogue, to identify
likely DBV candidates and follow them up with time-series photometry.
This survey is the counterpart to the search for new SDSS DAVs
reported by Mukadam et al. (2004), Mullally et al.(2005), Kepler
et al. (2005a) and Castanheira et al. (2006a, 2007).
\section{Observations}
We selected our DBV candidates based on the effective temperatures
published in the SDSS WD catalogues (Kleinman et al. 2004; Eisenstein
et al. 2006). As described in those works, each spectrum was fit
with Detlev Koester's atmosphere models (Koester et al. 2001) to
obtain an effective temperature and surface gravity. The DB models
used in the catalogues are pure He models. Beauchamp et al. (1994) showed
that the physical parameters of the model fits of DBs can change if He atmosphere
models with trace amounts of H are used. Since we do not know how
much H, if any, our candidate SDSS DBs have, the pure He atmosphere
model fits are as good as any other. Given that the coolest currently
known DBV is at 21,800K (Beauchamp et al. 1994; Castanheira et al. 2006b),
we chose to select all DBs with effective temperatures higher than
21,000K as DBV candidates. The blue edge of the instability strip
is currently defined by EC\,20058, the second hottest DB known
(Beauchamp et al. 1999; Sullivan et al. 2008) prior to the new
DBs discovered by the SDSS. The hottest DB known prior to the SDSS
is PG0112+104 with $T_{\rm eff}=31,500$K which defines the cool end
of the DB gap. Time series observations of this star have not
detected any pulsations (Provencal 2006). Nonetheless, given a
boundary determined only by one object, we decided to place no upper
limit on our candidate stars' effective temperatures.
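In pseudocode, the selection cut described above amounts to the following sketch (the field names are hypothetical, not those of the actual catalogue files):

```python
def select_dbv_candidates(catalog):
    # toy selection: He-atmosphere (DB) stars hotter than 21,000 K,
    # with no upper temperature bound (field names are hypothetical)
    return [star for star in catalog
            if star["spec_type"] == "DB" and star["teff"] > 21000.0]

catalog = [
    {"name": "hot DB",  "spec_type": "DB", "teff": 25000.0},
    {"name": "cool DB", "spec_type": "DB", "teff": 19000.0},
    {"name": "hot DA",  "spec_type": "DA", "teff": 30000.0},
]
assert [s["name"] for s in select_dbv_candidates(catalog)] == ["hot DB"]
```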
We observed our DBV candidates using the Argos CCD camera (Nather
\& Mukadam 2004) on the 2.1m telescope at McDonald Observatory,
SPICam on the 3.5m telescope at Apache Point Observatory and SOAR
Optical Imager (SOI) on the Southern Astrophysical Research Telescope
(SOAR). More than half of the new H atmosphere white dwarf variables
(DAVs) reported in the past few years have been discovered using
Argos (Mukadam et al. 2004; Mullally et al. 2005; Castanheira et
al. 2006a). We observed and reduced the data from Argos in the
same manner as described in Mukadam et al. (2004) and Mullally et
al. (2005). Exposure times ranged from 5s to 30s, depending on the
brightness of the target and condition of the sky. The readout
time was negligible due to the use of a frame transfer detector.
For some of the objects, we used a BG40 filter to suppress the
redder portion of the flux which is dominated by noise. After we
applied bias and flat field corrections to all CCD frames, we
extracted sky-subtracted lightcurves via aperture photometry for
the variable candidates and at least one comparison star in the
field. We then divided the target star's lightcurve by the sum of
the comparison stars' lightcurves to take out any transparency
variations in the sky. We normalized the result so that the average
brightness of the star is equal to 0 and the lightcurve shows the
fractional intensity variation, and applied a barycentric correction
to the times. The resulting lightcurves for the new
DBVs are shown in the left panel of Figure~1.
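The reduction chain just described (divide the target lightcurve by the summed comparison-star lightcurves, then normalize to zero-mean fractional intensity) can be sketched in a few lines. This is an illustrative Python fragment with an invented function name, not the actual reduction pipeline used for the paper:

```python
import numpy as np

def reduce_lightcurve(target_counts, comparison_counts):
    """Differential photometry as described in the text (a sketch):
    dividing by the summed comparison-star counts removes transparency
    variations; the result is normalized so that its mean is 0 and the
    values are fractional intensity variations."""
    comp = np.sum(np.atleast_2d(comparison_counts), axis=0)
    ratio = np.asarray(target_counts, dtype=float) / comp
    return ratio / ratio.mean() - 1.0
```

A 1\% pulsation riding on a slow transparency drift common to all stars in the field survives this division unchanged, while the drift itself cancels.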
SPICam was not built for fast time series data acquisition and
therefore we binned and used partial readout to achieve a reasonable
duty cycle for this project. The binning and window size of the
chip depended on the seeing and field of the target since we needed
at least one comparison star. Once we acquired the data, we followed
a similar procedure as with Argos data to produce our lightcurves.
We used SOI to discover our 9th DBV. SOI has also contributed to
discoveries of 18 new DAVs (Kepler et al. 2005; Castanheira et al.
2006). It is a CCD camera with reasonably fast readout time (6.3s).
We used 30s integration time for the data we gathered on
SDSS~J085202.44+213036.5. Again, we followed a similar procedure as
with Argos data to produce our lightcurves.
Table~1 is our journal of observations. We tried to observe each
object for at least two~hours on two separate occasions. The second
observation is to confirm and test the results of the first
observation. As Table~1 shows, we have been able to obtain
the second observation for five of the new DBVs, but not for all
of the objects reported in this paper. For the DBs which did not
show pulsations during the first observations, additional data are
still very important. The lack of variability in the first observation
may simply be due to amplitude modulations or beating of closely
spaced modes which are not resolved in our $\sim$2\,hour observations.
It is also important to obtain a good amplitude limit (1mma or
smaller) to which we see no variability since some currently known
pulsators have similarly small amplitudes. We note that some of
the DAs which had no detectable pulsations in Mukadam et al. (2005)
turned out to be DAVs after additional observations lowered the
detectable amplitude limit (Castanheira et al. 2006). Both these
examples suggest more data are still needed for many of our DBs
which did not show pulsations.
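To illustrate how beating can hide pulsations in a short run: two modes of equal amplitude separated by a few $\mu$Hz beat on a timescale of days, so a $\sim$2 hour lightcurve taken near a node of the beat envelope shows almost no variability even though both modes are present. The frequencies and amplitude below are invented for illustration:

```python
import numpy as np

# Two equal-amplitude modes 5 uHz apart beat with period
# 1/(5e-6 Hz) ~ 2.3 days; starting a 2-hour run at a node of the
# beat envelope strongly suppresses the apparent variability.
f1, f2, amp = 4.000e-3, 3.995e-3, 0.01     # Hz, fractional amplitude
t = np.arange(0.0, 2*3600.0, 10.0)         # 2-hour run, 10 s cadence
flux = amp*(np.sin(2*np.pi*f1*t) - np.sin(2*np.pi*f2*t))
print(np.abs(flux).max())  # ~0.002, an order of magnitude below 2*amp = 0.02
```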
\section{New Pulsating DB White Dwarf Stars}
Figure~1 shows the lightcurves and their Fourier transforms for the
nine new DBVs we have found so far. We list the frequencies,
periods and the amplitudes of the large observed peaks in the FTs
in Table~2.
The g magnitudes from the SDSS imaging data, the plate, MJD and
fiber number which specify unique spectra used for the model fitting,
the effective temperature, surface gravity and their uncertainties
of each observed object are given in Table~3. The last column in
Table~3 indicates if the object was found to vary. If we saw
no variability, then this column contains the amplitude limit (in
mma) we currently have. The amplitude limit is defined as three
times the average noise between 1000 and 10,000$\mu$Hz. For equally
spaced data, this limit translates into a $0.1\%$ probability of
identifying a false peak as a real one (e.g. Kepler 1993). This frequency
range corresponds to periods of 100\,s to 1000\,s, where the pulsations
in DBVs have been detected. We also note that some of the lightcurves
contain noise at low frequencies (less than a few hundred $\mu$Hz,
which corresponds to several thousand seconds and longer in period),
probably due to transparency variations or thin cirrus. If we
included this noise in our estimate, our amplitude limits would
have been higher and not reflective of our true ability to detect
variation within the frequency range of interest.
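The amplitude-limit estimate described above (three times the average Fourier amplitude between 1000 and 10,000$\mu$Hz) can be sketched as follows. The function and its interface are our own illustration, not the code used for the paper, and the frequency grid and normalization conventions are assumptions:

```python
import numpy as np

def amplitude_limit_mma(times_s, flux_frac, f_lo=1e-3, f_hi=1e-2):
    """Detection threshold as defined in the text: three times the
    average Fourier-transform amplitude between 1000 and 10,000 uHz
    (1e-3 to 1e-2 Hz).  times_s: timestamps in seconds; flux_frac:
    mean-subtracted fractional-intensity lightcurve.  Returns the limit
    in mma (1 mma = 0.1% relative amplitude)."""
    freqs = np.linspace(f_lo, f_hi, 2000)
    # naive discrete Fourier transform, valid for uneven sampling too
    phase = 2j * np.pi * np.outer(freqs, times_s)
    amps = 2.0 * np.abs(np.exp(phase) @ flux_frac) / len(times_s)
    return 3.0 * amps.mean() * 1000.0  # fractional amplitude -> mma
```

For a 2-hour run of pure photometric noise at the 0.5\% level this returns a limit of order 1\,mma, comparable to the limits quoted in Table~3.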
In Figure~2, we plot the effective temperatures and the surface
gravities for DBs in the DR4 WD catalogue. Newly found DBVs,
represented by large solid dots with their uncertainties in effective
temperature and surface gravity, cluster around $T_{\rm eff} \sim
25,000$K, although many more objects still need observation (the hollow
dots). We did not plot each set of error bars to avoid clutter in the
figure. Many of the DBs for which we did not see any variability
(represented by squares in Figure~2) have not been observed a second
time, mainly because we have not yet had time to do so. As can be seen
from Table~1, only two objects (SDSS J090409.03+012740.9 and SDSS
J141258.17+045602.2) were observed more than once with combined
amplitude limits of 3.5\,mma and 2.6\,mma,
respectively. These amplitude limits are by no means good enough to
declare these objects non-pulsators, since some WD pulsators are known to have lower
amplitudes than this. Our current results are consistent with, but do
not demand, a pure DBV instability strip. We need to eventually
achieve at least 1\,mma detection limit for all the DBV candidates we
observe before investigating the purity of the instability strip.
We observed four DBs with $T_{\rm eff}>30,000$K, i.e. DBs in the
``DB gap'', but we have not seen any pulsations so far. Like the other
DBs we observed without detecting pulsations, these objects need
to be followed up before they can be declared non-pulsators. In
the past, the instability strip was defined by the 9 known DBVs
shown by triangles in Figure~2. The blue edge of the instability
strip was defined by one DBV, EC\,20058. We have not found any
pulsator hotter than EC\,20058, and hence the best chance of determining
the neutrino production rates still lies with this star.
\section{Summary}
From the DR4 WD catalogue, we have about 70 DBV candidates brighter
than $g=20$mag. To date, we have observed 29 of them and found
nine new DBVs, doubling the number of known DBVs. We seek more
DBVs to help us understand their group properties, better determine
the location of the instability strip, and perhaps find hot DBVs
we can use to measure their cooling rates and place a limit on the
neutrino production rate in their interiors. Based on these statistics,
we can expect at least another 12 new DBVs from the DR4 sample and
20 more from DR6. These are probably lower limits, since we suspect
additional observations of our 29 currently observed objects will
reveal new low-amplitude pulsators as well.
\acknowledgments
Funding for the SDSS and SDSS-II has been provided by the Alfred P.
Sloan Foundation, the Participating Institutions, the National
Science Foundation, the U.S. Department of Energy, the National
Aeronautics and Space Administration, the Japanese Monbukagakusho,
the Max Planck Society, and the Higher Education Funding Council
for England. The SDSS Web Site is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for
the Participating Institutions. The Participating Institutions are
the American Museum of Natural History, Astrophysical Institute
Potsdam, University of Basel, Cambridge University, Case Western
Reserve University, University of Chicago, Drexel University,
Fermilab, the Institute for Advanced Study, the Japan Participation
Group, Johns Hopkins University, the Joint Institute for Nuclear
Astrophysics, the Kavli Institute for Particle Astrophysics and
Cosmology, the Korean Scientist Group, the Chinese Academy of
Sciences (LAMOST), Los Alamos National Laboratory, the
Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute
for Astrophysics (MPA), New Mexico State University, Ohio State
University, University of Pittsburgh, University of Portsmouth,
Princeton University, the United States Naval Observatory, and the
University of Washington.
This work was partially supported by the National Science Foundation
through an Astronomy \& Astrophysics Postdoctoral Fellowship (to
T.S.M.) under award AST-0401441.
\section{Introduction and summary}
The free energy of a conformal field theory on a compact four-manifold ${\cal M}$ is ambiguous due to ultraviolet divergences. These are classified by diffeomorphism invariant local counter-terms of dimension four or less. The general answer for the free energy on ${\cal M}$ is
\begin{equation}
\log Z_{{\cal M}} = A_1 \bkt{{\rm vol}_{\cal M} \Lambda_{\rm UV}^4}+ A_2 \bkt{{\rm vol}_{\cal M} \Lambda_{\rm UV}^4}^{\frac12}+A_0 \log \bkt{{\rm vol}_{\cal M} \Lambda_{\rm UV}^4} + {\rm finite}.
\end{equation}
The coefficients of divergent terms as well as the finite term may depend on various parameters in the theory such as marginal couplings and the number of degrees of freedom. The quartic and quadratic divergences correspond to cosmological constant and Einstein-Hilbert counter-terms respectively. Due to the logarithmic divergence, the finite part of the free energy is scheme-dependent.
The coefficient of the logarithmic divergence is meaningful and is related to the conformal anomaly~\cite{Deser:1976yx,Duff:1977ay,Brown:1976wc,Brown:1977pq}. In four dimensions the conformal anomaly is comprised of two terms, the $a$ anomaly coming from the integrated Euler density, and the $c$ anomaly from an integrated $(\rm Weyl)^2$ term. Their contribution to the action is
\begin{equation}
A_0={1\over 64 \pi^2} \int \mathrm{d}^4 x \sqrt{g} \bkt{-a E_4 + c\, C_{\mu\nu\rho\sigma} C^{\mu\nu\rho\sigma}}.
\end{equation}
The $a$ and $c$ anomalies belong to different classes, ``type-A" and ``type-B" anomalies~\cite{Deser:1993yx}. The type-A anomalies can be expressed in terms of topological invariants and do not change under small deformations of the metric and other background fields. Anomalies in this class are monotonic under RG flows to the IR \cite{Komargodski:2011vj}. On the other hand, the type-B anomalies are not topological invariants and can be related to correlators of local operators. In particular, $c$ is related to the normalization of the stress-tensor two-point function, $C_T$, by $c={C_T\over 160}$~\cite{Osborn:1993cr,Osborn:1994rv}.
If ${\cal M}$ is the round-sphere then the $(\rm Weyl)^2$ term is zero and the $c$ anomaly does not contribute. Hence, one finds for $A_0$
\begin{equation}
A_0= -{a\over 64 \pi^2} \int \mathrm{d}^4 x \sqrt{g} E_4 =-a.
\end{equation}
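As a quick consistency check of this normalization, one can use the maximally symmetric curvature values of the round $S^4$ of radius $r$ ($R=12/r^2$, $R_{\mu\nu}R^{\mu\nu}=36/r^4$, $R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}=24/r^4$, ${\rm vol}(S^4)=8\pi^2 r^4/3$); a few lines of exact rational arithmetic give $\int\mathrm{d}^4x\sqrt{g}\,E_4=64\pi^2$:

```python
from fractions import Fraction

# Round S^4 of radius r: curvatures in units of 1/r^2 (or 1/r^4)
d = 4
R = Fraction(d*(d - 1))            # Ricci scalar = 12
Ricci2 = Fraction(d*(d - 1)**2)    # R_mn R^mn = 36
Riem2 = Fraction(2*d*(d - 1))      # R_mnrs R^mnrs = 24
E4 = R**2 - 4*Ricci2 + Riem2       # Euler density = 24 / r^4
vol_coeff = Fraction(8, 3)         # vol(S^4) = (8 pi^2 / 3) r^4
print(E4 * vol_coeff)  # 64, i.e. the integral is 64 pi^2 and A_0 = -a
```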
If we deform away from the round sphere the $a$ anomaly does not change, but the $(\rm Weyl)^2$ term is no longer zero and contributes to the free energy and $A_0$. Since the stress-tensor couples to the deformations of the metric, the change in $A_0$ is computable from the integrated correlation functions of the stress-tensor.
To leading order the change comes from the integrated stress-tensor two-point function.
One can also study a conformal field theory on ${\cal M}$ in the presence of other background fields for various conserved currents. In particular, if we have an ${\cal N}=2$ superconformal field theory (SCFT), then in order to preserve supersymmetry we must turn on other background fields in the supergravity multiplet when deforming the metric~\cite{Festuccia:2011ws,Festuccia:2018rew,Festuccia:2020yff}. This leads to additional contributions to the conformal anomaly, which results in the supersymmetric completion of the Euler density and the (Weyl)$^2$ terms. The supersymmetric Euler density includes the second Chern class of the background gauge fields and preserves the topological nature of the $a$ anomaly~\cite{Butter:2013lta}. By invoking Weyl invariance and $R$-symmetry invariance, we further determine the supersymmetric completion of the (Weyl)$^2$ term up to six overall coefficients. We then study the scale dependence of the two-point functions of the tensor multiplet to fix all but one coefficient. This generalizes the result in \cite{Gomis:2015yaa,Schwimmer:2018hdl} for the supersymmetrized Weyl anomaly by including the contribution of all background fields in the supergravity multiplet. The sixth coefficient requires knowledge of the three- and four-point functions to completely fix it and will not be considered in this paper.
Extending to an ${\cal N}=2$ SCFT also restricts many of the counterterms and leads to some scheme independent finite terms. Of particular interest is the dependence of the free energy on the marginal couplings in the theory. In \cite{Gomis:2014woa,Gerchkovitz:2016gxx} it was shown that the sphere free energy of $\mathcal{N}=2$ SCFTs is proportional to the K\"ahler potential. Using localization we generalize a particular version of this result to \emph{any} supersymmetric background. Namely, we show that for the deformed sphere
\begin{equation}\label{eq:pipjlz}
\partial_i \partial_{\overline{j}} \log Z_{{\cal M}}=(32 r^2)^2 \langle A_i (N)\overline{A}_{\overline{j}}(S)\rangle,
\end{equation}
where $A_i$ is the bottom component of the exactly marginal chiral multiplet. (N) and (S) denote the north and south poles on the deformed sphere, which are defined as fixed points of the Killing vector composed from a preserved supersymmetry transformation. For an arbitrary supersymmetric background, the Killing vector can have more than two fixed points and the result generalizes by including a sum over the fixed points~(see~\cref{eq:polcorrGen}).
The two-point function appearing in \eqref{eq:pipjlz} is proportional to the Zamolodchikov metric due to the supersymmetry. We then combine this result with the moduli anomaly~\cite{Gomis:2015yaa,Schwimmer:2018hdl} and an analysis of possible counterterms~\cite{Seiberg:2018ntt} to further constrain the form of the free energy on general manifolds. We show that up to holomorphic functions and terms local in the supergravity background fields, the free energy takes the form
\begin{equation}\label{eq:logzGenIntro}
\log Z= {K\bkt{\tau_i,\overline{\tau}_i}\over 12} + {\alpha\over 96} K\bkt{\tau_i,\overline{\tau}_i} {\rm I}_{(\rm Weyl)^2}+ {1\over 96}\beta(\tau_i,\overline{\tau}_i) {\rm I}_{(\rm Weyl)^2}+\gamma(\tau_i,\overline{\tau}_i,b)+ P_h \bkt{\tau_i, b }+\overline{P}_h \bkt{\overline{\tau}_{\overline{i}},b},
\end{equation}
where $\alpha$ is a constant, $\beta(\tau_i,\overline{\tau}_i)$ and $\gamma(\tau_i,\overline{\tau}_i,b)$ are modular-invariant, and $P_h$ and $\overline{P}_h$ are holomorphic and anti-holomorphic functions of the moduli. $\gamma,P_h$ and $\overline{P}_h$ are also Weyl-invariant and necessarily non-local functionals of the supergravity background fields. $b$ parameterizes the deformation away from the round sphere. We then show that the partition function of the theory with a vector multiplet and an adjoint massless hypermultiplet on a specific deformed background~\cite{Hama:2012bg}, which can be computed exactly using localization, indeed has the structure of~\cref{eq:logzGenIntro}.
We finally point out that the deformation independence of the free energy of the theory with a special value of hypermultiplet mass can be used to obtain an infinite number of relations between various integrated correlators at all values of the coupling. Two of these constraints were recently obtained by studying the free energy of $\mathcal{N}=2^*$ theory on the deformed sphere~\cite{Chester:2020vyz}.
The rest of the paper is structured as follows. In section 2 we review the extraction of the Weyl anomaly from the stress-tensor two-point function. In section 3 we generalize this to $\mathcal{N}=2$ SCFTs and compute the supersymmetric Weyl anomaly up to one undetermined constant. We then study the dependence of $\log Z$ on the marginal couplings of the SCFT and derive the results in~\cref{eq:pipjlz,eq:logzGenIntro}. In section 4 we continue our study of $\mathcal{N}=2$ theories on the ellipsoid. We compute the localized partition function for a gauge theory with an adjoint hypermultiplet. We consider both the $U(1)$ case and that of $SU(N)$ at large $N$. We then discuss the ambiguities of defining the theory away from $S^4$ where the space is no longer conformally flat. Finally we derive an infinite number of constraints for integrated correlators in $\mathcal{N}=4$ SYM on the round sphere from the partition function of the deformed sphere. In the appendix we derive Ward identities for two-point functions.
\section{CFTs on deformed spheres and the stress-tensor two-point functions}
The deformations of the free energy with respect to the background metric yield correlators involving the stress-tensor. Since the Euler invariant does not change under metric perturbations, only the (Weyl)$^2$ term contributes. If we denote the perturbation away from the round four-sphere metric by $h_{\mu\nu}$, i.e.,
\begin{equation}
\mathrm{d} s^2_{\rm deformed} =\mathrm{d} s^2_{\rm round}+ h_{\mu\nu} \mathrm{d} x^\mu \mathrm{d} x^{\nu}= \Omega^2 \delta_{ab} \mathrm{d} x^a \mathrm{d} x^b + h_{\mu\nu} \mathrm{d} x^\mu \mathrm{d} x^{\nu},
\end{equation}
where $\Omega={2\over 1+ {|x|^2 \over r^2}}$ and $r$ is the radius of the round sphere, then the leading contribution of the (Weyl)$^2$ term to the anomaly $A_0$ is given by
\begin{equation}\label{logZW}
\delta A_0= {c\over 256\pi^2} \int \mathrm{d}^4 x\sqrt{g} h^{\mu\nu} \bkt{
\pi_{\mu\rho} \pi_{\nu\sigma}+\pi_{\mu\sigma}\pi_{\nu\rho}-{2\over 3} \pi_{\mu\nu} \pi_{\rho\sigma}
} h^{\rho\sigma},
\end{equation}
where
$\pi_{\mu\nu}\equiv \nabla_\mu \nabla_\nu-g_{\mu\nu} \nabla^2$. The combination in the parentheses projects to traceless, transverse, rank-two tensors. To relate the above expression to the stress-tensor two-point correlator, we use the fact that the integral is invariant under the Weyl scaling of the full metric $g_{\mu\nu}+h_{\mu\nu}$. Scaling the metric by $\Omega^{-2}$ and keeping only the leading term in the free energy we get
\begin{equation}
\delta A_0={c\over 256\pi^2} \int \mathrm{d}^4 x \Omega^2(x)h^{\mu\nu} \bkt{
\pi_{\mu\rho} \pi_{\nu\sigma}+\pi_{\mu\sigma}\pi_{\nu\rho}-{2\over 3} \pi_{\mu\nu} \pi_{\rho\sigma}
} \Omega^2(x)h^{\rho\sigma}(x),
\end{equation}
where the operators $\pi_{\mu\nu}$ are those for the flat metric. To simplify the above expression further, we introduce an integral over a $\delta$-function followed by an integration by parts to get
\begin{equation}\label{logZ2}
\delta A_0= {c\over 256\pi^2} \int \mathrm{d}^4 x \int \mathrm{d}^4 y \Omega^2(x) \Omega^2(y)h^{\mu\nu} (x) h^{\rho\sigma}(y) \bkt{
\pi_{\mu\rho} \pi_{\nu\sigma}+\pi_{\mu\sigma}\pi_{\nu\rho}-{2\over 3} \pi_{\mu\nu} \pi_{\rho\sigma}
} \delta^4(x-y)\,.
\end{equation}
We can further manipulate \eqref{logZ2} by using the following regularization procedure to define the Dirac delta function \cite{Freedman:1991tk,Freedman:1992tz,Latorre:1993xh}\footnote{This amounts to regularizing the coincident limit of the two-point functions. This regularization introduces a length scale and the type-B conformal anomaly is due to the dependence of free energy on this length scale~\cite{Deser:1993yx}. },
\begin{equation}
\delta^4(x)= {-1\over2 V_{\mathbb{S}^3}} \nabla^2 {1\over |x|^2 }={-1\over2 V_{\mathbb{S}^3}} \nabla^2 \delta_\sigma {\log (|x|\Lambda_{\rm UV})\over |x|^2}={1\over V_{\mathbb{S}^3}} \delta_\sigma {1\over |x|^4},\label{delta4}
\end{equation}
where the first and the second equations hold identically, while the last is true away from $|x|=0$ and $\delta_\sigma={\mathrm{d}\over \mathrm{d}\log \Lambda_{\rm UV}}$ captures the dependence on the scale. A short calculation then gives
\begin{equation}
\bkt{ \pi_{\mu\rho} \pi_{\nu\sigma}+\pi_{\mu\sigma}\pi_{\nu\rho}-{2\over 3} \pi_{\mu\nu} \pi_{\rho\sigma}} {1\over |x-y|^4}=640 \Omega^2(x) \Omega^2(y){{\cal I}_{\mu\nu\rho\sigma} (x,y) \over s(x,y)^8},
\end{equation}
where ${\cal I}_{\mu\nu\rho\sigma}$ is the tensor structure appearing in the two point function
\begin{equation}
\langle T_{\mu\nu}\bkt{x} T_{\rho\sigma} \bkt{y}\rangle = {C_T\over V_{\mathbb{S}^{d-1}}^2} {{\cal I}_{\mu\nu,\rho\sigma}\bkt{x,y}\over s(x,y)^{2d}},
\end{equation}
$s(x,y)$ is the chordal distance on the sphere,
\begin{equation}
s(x,y)=\sqrt{\Omega (x) \Omega(y)} |x-y|\,,
\end{equation}
and
\begin{equation}
{\cal I}_{\mu\nu,\rho\sigma}\bkt{x-y}=\frac12 \bkt{I_{\mu\sigma}\bkt{x-y} I_{\nu\rho}\bkt{x-y}+I_{\mu\rho}\bkt{x-y} I_{\nu\sigma}\bkt{x-y}}-\frac1d g_{\mu\nu} g_{\rho\sigma}\,,
\end{equation}
with
\begin{equation}
I_{\mu\nu}\bkt{x-y}= {\delta_{\mu\nu}-2{(x-y)_\mu (x-y)_\nu\over |x-y|^2}}\,.
\end{equation}
Plugging everything in, we get
\begin{equation}
\delta A_0= {1\over 32} \delta_\sigma\int \mathrm{d}^4 x \sqrt{g(x)} \int \mathrm{d}^4 y \sqrt{g(y)} h^{\mu\nu}(x) h^{\rho\sigma}(y) \langle T_{\mu\nu}(x) T_{\rho\sigma}(y)\rangle.
\end{equation}
Hence, the leading correction to the universal coefficient in the free energy is given by the integrated stress-tensor two-point function.
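The chain of equalities in \eqref{delta4} rests on the identity $\nabla^2\!\left[\log(|x|\Lambda_{\rm UV})/|x|^2\right]=-2/|x|^4$ away from the origin in four dimensions, which is easy to verify symbolically; a sketch using sympy and the radial part of the flat 4d Laplacian:

```python
import sympy as sp

r, Lam = sp.symbols('r Lambda', positive=True)
f = sp.log(r*Lam) / r**2
# radial part of the flat 4d Laplacian: f'' + (3/r) f'
lap = sp.diff(f, r, 2) + 3*sp.diff(f, r)/r
print(sp.simplify(lap))  # -2/r**4
```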
\subsection{Examples}
Let us demonstrate the above by considering generic CFTs placed on specific deformed spheres.
\subsubsection{$SU(2)\times U(1)$ isometry}
We first consider a simple deformation which preserves an $SU(2)\times U(1)$ isometry.
In projective coordinates the deformation is
\begin{equation}
h_{\mu\nu} \mathrm{d} x^\mu \mathrm{d} x^\nu= \varepsilon\Omega^4 \bkt{x_2 \mathrm{d} x_1-x_1 \mathrm{d} x_2}^2.
\end{equation}
The leading contribution to the (Weyl)$^2$ part of the anomaly is then given by
\begin{equation}\label{sqC2}
\delta A_0={c\over 64\pi^2} \int \mathrm{d}^4 x\sqrt{g} C_{\mu\nu\rho\sigma} C^{\mu\nu\rho\sigma}= {3\varepsilon^2\over 2240} C_T.
\end{equation}
Let us now compute the logarithmic divergence in the integrated stress-tensor two-point function. Contracting the correlator with the metric deformation we have
\begin{equation}
\begin{split}
&\sqrt{g(x)}\sqrt{g(y)}h^{\mu\nu}(x) h^{\rho\sigma} (y) \langle T_{\mu\nu}(x) T_{\rho\sigma}(y)\rangle
\\
&= {\varepsilon^2 C_T\Omega(x)^2 \Omega(y)^2\over 16 \pi^4 |x-y|^8}
\bigg(
4 (x_1 y_1+x_2 y_2)^2-\left(x_1^2+x_2^2\right) \left(y_1^2+y_2^2\right)\\&+16\frac{ (x_2 y_1-x_1 y_2)^2}{|x-y|^2}\bkt{{(x_2 y_1-x_1 y_2)^2\over |x-y|^2}-\bkt{x_1 y_1+x_2 y_2}}\bigg)\,.\end{split}
\end{equation}
In general, to compute the integrated correlator one can use the $SO(5)$ symmetry of the integration measure to fix the position of one of the operators at the north (or south) pole. This corresponds to a specific choice of regularization scheme which preserves the $SO(5)$ isometry of the round sphere. If we do this the above correlator vanishes identically at separated points because the deformation is zero at the poles. To uncover the singularities in the coincident limit we use the relations
\begin{equation}\label{eq:delsxy}
\delta_\sigma{1\over |x|^4}= 2\pi^2 \delta^4(x)\,,\qquad
\delta_\sigma{1\over |x|^{4+2 m}}= {2\pi^2\over 4^m \Gamma(m+1) \Gamma(m+2)} \square^m \delta^4(x-y).
\end{equation}
The second equality follows from the first by interchanging the Laplacian and $\delta_\sigma$. Using these relations one finds
\begin{equation}
\begin{split}
&{1\over 32} \delta_\sigma \int \mathrm{d}^4x \int \mathrm{d}^4 y\sqrt{g(x)}\sqrt{g(y)}h^{\mu\nu}(x) h^{\rho\sigma} (y) \langle T_{\mu\nu}(x) T_{\rho\sigma}(y)\rangle\\
&= -{C_T\varepsilon^2\over 16 \pi^2}\int \mathrm{d}^4 x\frac{\left(x_1^2+x_2^2\right) \left(x_1^4+2 x_1^2 \left(x_2^2+x_3^2+x_4^2-7\right)+x_2^4+2 x_2^2 \left(x_3^2+x_4^2-7\right)+\left(x_3^2+x_4^2+1\right)^2\right)}{ \left(x_1^2+x_2^2+x_3^2+x_4^2+1\right)^8}\\
&= {3 C_T \varepsilon^2\over 2240},
\end{split}
\end{equation}
which matches the result in \eqref{sqC2}.
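The last step can be checked symbolically. With $s=x_1^2+x_2^2$ and $t=x_3^2+x_4^2$ the flat measure reduces as $\mathrm{d}^4x\to\pi^2\,\mathrm{d} s\,\mathrm{d} t$ and the double integral evaluates to $-3/140$, reproducing $3C_T\varepsilon^2/2240$; a sketch using sympy:

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
# s = x1^2 + x2^2, t = x3^2 + x4^2, with d^4x -> pi^2 ds dt
integrand = s*(s**2 + 2*s*(t - 7) + (t + 1)**2) / (1 + s + t)**8
I = sp.integrate(integrand, (s, 0, sp.oo), (t, 0, sp.oo))  # -3/140
# delta A0 = -(C_T eps^2/(16 pi^2)) * pi^2 * I = -(C_T eps^2/16) * I
print(-sp.Rational(1, 16) * I)  # 3/2240
```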
\subsubsection{$U(1)\times U(1)$ isometry}
Let us now consider squashing the round sphere to an ellipsoid \cite{Hama:2012bg}. In this case the metric is
\begin{equation}\label{eq:HHellips}
\mathrm{d} s^2= r^2 E^a E^b\delta_{ab},
\end{equation}
where
\begin{equation}
E^1={\ell }\sin\rho\cos\theta \mathrm{d} \phi,\qquad E^2=\widetilde{\ell}\sin\rho\sin\theta \mathrm{d}\chi,\qquad E^3=\sin\rho f \mathrm{d}\theta+h \mathrm{d}\rho, \qquad E^4= g \mathrm{d}\rho.
\end{equation}
The coordinates $\phi$ and $\chi$ are $2\pi$ periodic, while $\theta\in[0,{\pi\over 2}]$, $\rho\in[0,\pi]$, and
\begin{equation}\label{fgh}
f=\sqrt{\ell^2\sin^2\theta+\widetilde{\ell}^2 \cos^2\theta},\qquad g=\sqrt{r^2 \sin^2\rho+\bkt{\ell \widetilde{\ell}}^2 f^{-2} \cos^2\rho},\qquad h={\widetilde{\ell}^2-\ell^2\over f} \cos\rho \sin\theta\cos\theta\,.
\end{equation}
Setting $\ell=\widetilde{\ell}=r$ corresponds to the round sphere. The overall size of the manifold is parameterized by $r^2 \ell \widetilde{\ell}$, while the squashing is parameterized by the dimensionless parameter $b=\sqrt{\ell\over\widetilde{\ell}}$. The metric in \eqref{eq:HHellips} preserves a $U(1)\times U(1)$ isometry corresponding to the Killing vectors $\partial_\phi$ and $\partial_\chi$. For supersymmetric theories it admits a Killing spinor when certain background fields are turned on \cite{Hama:2012bg}. The integrated (Weyl)$^2$ term can be calculated analytically for this deformation for all $b\geq 1$ and we find
\begin{eqnarray}
{1\over 16\pi^2}\int \mathrm{d}^4 x\sqrt{g} C_{\mu\nu\rho\sigma} C^{\mu\nu\rho\sigma}
&=& \frac{-46 b^{12}+68 b^8-28 b^4+15 \sqrt{b^4-1}\, b^{10} \log \left(2 b^2 \left(\sqrt{b^4-1}+b^2\right)-1\right)+6}{45 b^{10}}\nonumber\\
&=&{\cal O}\bkt{(b-1)^4}.
\end{eqnarray}
Hence, the integrated two-point function for the stress-tensor does not have logarithmic singularities to leading order in the deformation.
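The quartic vanishing can be confirmed numerically from the closed-form expression: for a function behaving as $C(b-1)^4$ one has $f(1+2h)/f(1+h)\to 2^4=16$ as $h\to 0$. High working precision is needed because of the cancellations near $b=1$; a sketch using mpmath:

```python
from mpmath import mp, mpf, sqrt, log

mp.dps = 50  # high precision: the numerator cancels to O((b-1)^4) near b = 1

def weyl2_term(b):
    """Closed-form (1/16 pi^2) int sqrt(g) C^2 on the ellipsoid, as a
    function of the squashing parameter b (expression quoted above)."""
    b = mpf(b)
    s = sqrt(b**4 - 1)
    num = (-46*b**12 + 68*b**8 - 28*b**4
           + 15*s*b**10*log(2*b**2*(s + b**2) - 1) + 6)
    return num / (45*b**10)

h = mpf('1e-6')
print(weyl2_term(1 + 2*h) / weyl2_term(1 + h))  # ~16, i.e. quartic vanishing
```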
We can also show the absence of a leading order singularity in the two-point function directly. We first set $\widetilde{\ell}=r$ so that the deformation is completely captured by $\ell=b^2 r$. The deformation of the metric away from the round sphere takes the form in projective coordinates,
\begin{equation}
h_{\mu\nu}= 2 (b-1) \bkt{v_\mu v_\nu+ w_\mu w_\nu} \qquad {\rm where}\qquad v_\mu \mathrm{d} x^\mu= \mathrm{d}\bkt{\Omega x^1}\qquad {\rm and}\qquad w_\mu \mathrm{d} x^\mu = \mathrm{d} \bkt{\Omega x^2}.
\end{equation}
We now write the two-point function contracted with the deformation as
\begin{equation}\label{two-point}
\begin{split}
&\sqrt{g(x)}\sqrt{g(y)}h^{\mu\nu}(x) h^{\rho\sigma} (y) \langle T_{\mu\nu}(x) T_{\rho\sigma}(y)\rangle
\\
&= {C_T\over 4 \pi^4 |x-y|^8}
\bigg(
h^{ab}(x) h_{ab}(y)-{1\over 4} h^a{}_a(x) h^{b}{}_{b} (y)- 4 h^{ac}(x) h_{c }{}^{b}(y) {\bkt{x-y}_a (x-y)_b \over |x-y|^2}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad+{4} h^{ab}(x) h^{cd}(y){ \bkt{x-y}_a\bkt{x-y}_b\bkt{x-y}_c\bkt{x-y}_d \over |x-y|^4
}\bigg)\,.
\end{split}
\end{equation}
After integrating over the coordinates, the anomaly contribution of each of the four terms inside the parenthesis in \eqref{two-point} can be found by using \eqref{eq:delsxy}. After a tedious calculation we find
{\begin{align}
&\begin{aligned}
&\delta_\sigma \int \mathrm{d}^4x \mathrm{d}^4 y {h^{ab}(x) h_{ab}(y)\over |x-y|^8}=\int \mathrm{d} x \frac{\pi ^4 \left(64 x^8+252 x^6+360 x^4+202 x^2+45\right)}{18 \left(x^2+1\right)^{9\over2}}, \\
-&\delta_\sigma \int \mathrm{d}^4x \mathrm{d}^4 y {1\over 4 |x-y|^8} h^a{}_a(x) h^{b}{}_{b} (y)=-\int \mathrm{d} x \frac{ \pi ^4 \left(12 x^4+6 x^2-1\right)}{24 \left(x^2+1\right)^{9\over2}}, \\
-4&\delta_\sigma \int \mathrm{d}^4x \mathrm{d}^4 y h^{ac}(x) h_{c }{}^{b}(y) {\bkt{x-y}_a (x-y)_b \over |x-y|^{10}}=
-\int \mathrm{d} x \frac{ \pi ^4 \left(2318 x^4+1271 x^2+400 \left(x^2+4\right) x^6+278\right)}{60 \left(x^2+1\right)^{9\over2}},
\end{aligned}\nonumber\\
&\hskip0.3cm{4}\delta_\sigma \int \mathrm{d}^4x \mathrm{d}^4 y h^{ab}(x) h^{cd}(y){ \bkt{x-y}_a\bkt{x-y}_b\bkt{x-y}_c\bkt{x-y}_d \over |x-y|^{12}}
\nonumber\\ &\hskip6.7cm=\int \mathrm{d} x \frac{ \pi ^4 \left(1120 x^8+4560 x^6+6888 x^4+3676 x^2+753\right)}{360 \left(x^2+1\right)^{9\over2}}.
\end{align}
}
Each of the above terms is logarithmically divergent for large $x$, but their sum vanishes. Hence, the leading logarithmic divergence for the integrated two-point function vanishes.
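In fact the four right-hand-side integrands cancel pointwise, not just in their logarithmically divergent parts; this can be checked symbolically. A sketch using sympy, with the common factor of $\pi^4$ dropped:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
den = (x**2 + 1)**sp.Rational(9, 2)
# the four right-hand-side integrands above (pi^4 factored out)
terms = [
    (64*x**8 + 252*x**6 + 360*x**4 + 202*x**2 + 45)/(18*den),
    -(12*x**4 + 6*x**2 - 1)/(24*den),
    -(2318*x**4 + 1271*x**2 + 400*(x**2 + 4)*x**6 + 278)/(60*den),
    (1120*x**8 + 4560*x**6 + 6888*x**4 + 3676*x**2 + 753)/(360*den),
]
print(sp.simplify(sum(terms)))  # 0
```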
\section{Free energy of $\mathcal{N}=2$ SCFTs on deformed spheres }
In this section we study the partition function of $\mathcal{N}=2$ SCFTs on supersymmetric curved backgrounds. These backgrounds are obtained by coupling the stress-tensor multiplet of $\mathcal{N}=2$ theories with the gravity (Weyl) multiplet of $\mathcal{N}=2$ Poincar\'e (conformal) supergravity~\cite{Festuccia:2018rew,Festuccia:2020yff,Klare:2013dka}. The supergravity background in Euclidean signature has a metric $g_{\mu\nu}$, a self-dual two-form $B^+_{\mu\nu}$, an anti-self-dual two-form $B^{-}_{\mu\nu}$, background vector fields $V_\mu$ and ${\cal V}_{\mu}{}^{ij}$ for $U(1)_R\times SU(2)_R$ R-symmetry and a scalar field $D(x)$. The partition function is then a non-local function of the supergravity background fields and couplings of the theory which can be computed via localization under favorable circumstances. We study both the logarithmically divergent and the finite part of the free energy. We do not need explicit knowledge of any supersymmetric background for our analysis; our main tools are the Weyl anomaly, the moduli anomaly and a classification of local counterterms.
\subsection{Weyl anomaly in $\mathcal{N}=2$ SCFTs}
The universal coefficient of the $\log({\rm vol}_{{\cal M}} \Lambda_{\rm UV}^4)$ term can be determined using the Weyl anomaly which is modified to incorporate the $\mathcal{N}=2$ supersymmetry. The appropriately supersymmetrized Weyl variation of the free energy is given by the following superspace expression~\cite{Gomis:2015yaa,Schwimmer:2018hdl,Kuzenko:2013gva}.
\begin{equation}\label{dSZ}
\delta_\Sigma \log Z \supset {1\over 16\pi^2}\int \mathrm{d}^4 x \int \mathrm{d}^4\Theta {\cal E} \delta \Sigma\bkt{a \Xi+(c-a) W^{\alpha\beta} W_{\alpha\beta} } +c.c.
\end{equation}
Here $\delta\Sigma$ is a chiral superfield which parameterizes the super-Weyl transformations. Its lowest component is $\delta\sigma+i\, \delta \alpha$ where $\delta\sigma$ parameterizes the Weyl transformations and $\delta\alpha$ parameterizes the $U(1)_R$ transformation. ${\cal E}$ is the chiral density, $W_{\alpha\beta}$ is the covariantly chiral Weyl superfield and $\Xi$ is a composite scalar constructed from curvature superfields that appear in commutators of super-covariant derivatives. In component fields the anomalous variation of the free energy takes the form\footnote{$\delta \sigma$ and $\delta \alpha$ are independent of coordinates.}
\begin{equation}\label{eq:dSZcomp}
\begin{split}
\delta_\Sigma \log Z \supset &-2a \delta\sigma \chi({\cal M})+\delta\alpha
\sbkt{
(a-c) \bkt{{\cal P}({\cal M}) - n_{U(1)_R}} - (a-{c\over 2}) n_{SU(2)_R}}
\\&+ {c\over 16\pi^2}\delta\sigma \int \mathrm{d}^4 x \sqrt{g}\bkt{C^{\mu\nu\rho\sigma}C_{\mu\nu\rho\sigma}+\cdots}.
\end{split}
\end{equation}
All terms on the first line are topological invariants, where
$\chi({\cal M})$ is the Euler characteristic of the compact manifold ${\cal M}$.
The term multiplying the $U(1)_R$ transformation is written as a combination of the Pontryagin character and the second Chern class for the background gauge fields,
\begin{equation}\label{eq:topinv}
\begin{split}
{\cal P}({\cal M}) &={1\over 32 \pi^2} \int \mathrm{d}^4 x \epsilon^{\mu\nu\rho\sigma} R_{\mu\nu\alpha\beta} R_{\rho\sigma}{}{}^{\alpha\beta}, \\
n_{U(1)_R}&={1\over 32 \pi^2} \int \mathrm{d}^4 x \epsilon^{\mu\nu\rho\sigma} F_{\mu\nu} F_{\rho\sigma}, \\
n_{SU(2)_R}&= {1\over 32\pi^2}\int \mathrm{d}^4 x \epsilon^{\mu\nu\rho\sigma} {\rm Tr} {\cal F}_{\mu\nu}{\cal F}_{\rho\sigma}.
\end{split}
\end{equation}
For supergravity backgrounds smoothly connected to the round sphere, the topological invariants in~\cref{eq:topinv} vanish and $\chi({\cal M})=2$.
The term proportional to the central charge $c$ in the Weyl transformation is not topological and hence is non-trivial on deformed spheres. The ellipses denote the additional terms required to make the (Weyl)$^2$ term supersymmetric. Let us denote this supersymmetric completion by ${\rm I}_{(\rm Weyl)^2}$. Then the Weyl anomaly coefficient, $A_0$, on supergravity backgrounds smoothly connected to the round sphere is given by
\begin{equation}\label{logZsusy}
A_0 = -a +{c\over 64 \pi^2} {\rm I}_{(\rm Weyl)^2}.
\end{equation}
Since $c$ appears in the normalization of the two-point functions for operators in the stress-tensor multiplet, this suggests that one can relate these two-point functions to the supersymmetric completion in \eqref{logZsusy}. In the next section we follow this strategy to determine
${\rm I}_{(\rm Weyl)^2}$.
\subsection{Supersymmetric completion of (Weyl)$^2$ from stress-tensor correlators}
In this section we determine ${\rm I}_{(\rm Weyl)^2}$ by studying the logarithmic divergences of various two-point correlators of stress-tensor multiplet operators. We first use the Weyl weights and $U(1)_R$ charges of the various fields in the supergravity multiplet to write down the most general possibility for ${\rm I}_{(\rm Weyl)^2}$. We then use the precise coupling of the stress-tensor multiplet to the Weyl-multiplet to relate the logarithmic divergences of the two-point functions to various terms in ${\rm I}_{(\rm Weyl)^2}$.
The Weyl weights of the fields in the supergravity multiplet are
\begin{equation}
w_{g_{\mu\nu}}=-2,\qquad w_{A_\mu}=0,\qquad w_{{\cal V}_\mu}=0,\qquad w_{B}=-1,\qquad w_D=2.
\end{equation}
The self-dual and anti-self-dual two-forms are charged under the background $U(1)_R$ gauge field and carry opposite chiral weights. This implies that an equal number of self-dual and anti-self-dual two-forms must appear in an allowed term. Using these considerations one can list the possible local functions of background fields which can appear in ${\rm I}_{(\rm Weyl)^2}$. For example, possible terms involving the scalar field $D(x)$ are
\begin{equation}
g^{\mu\nu} \nabla_\mu \nabla_\nu D,\qquad D \bkt{B_{\mu\nu} B^{\mu\nu}},\qquad D^2.
\end{equation}
The first term is omitted because it is a total derivative. The second term is ruled out because its non-trivial parts must involve different numbers of self-dual and anti-self-dual two-forms. Similarly, after accounting for the possible terms involving other background fields one can write down the most general form for ${\rm I}_{(\rm Weyl)^2}$ consistent with invariance under $U(1)_R$ and constant Weyl transformations:
\begin{multline}\label{eq:lgzAns}
{\rm I}_{(\rm Weyl)^2}= \int \mathrm{d}^4 x \sqrt{g}
\Big( C_{\mu\nu\rho\sigma}C^{\mu\nu\rho\sigma} +c_1 D^2+c_2 F_{\mu\nu}F^{\mu\nu}+c_3{\rm Tr}{\cal F}_{\mu\nu} {\cal F}^{\mu\nu}+c_4\nabla_{\mu} B^{+\mu\nu} \nabla^{\sigma} B^{-}_{\sigma\nu}\\+
{\tilde{c}_4}R_{\mu\nu} B^{+\mu\rho} B^{-\nu}{}_{\rho}+c_5 B^+_{\mu\nu} B^{+\mu\nu} B^-_{\mu\nu} B^{-\mu\nu}\Big).
\end{multline}
Under non-constant Weyl transformations all terms are invariant except the ones that appear with coefficients $c_4$ and $\tilde{c}_4$. The coefficient $\tilde{c}_4$ can then be fixed in terms of $c_4$ by requiring their combined Weyl variation to cancel. We rewrite these in terms of the two-form $B_{\mu\nu}=B^+_{\mu\nu}+B^-_{\mu\nu}$, such that
\begin{equation}\label{eq:c4rewr}\begin{split}
\nabla_{\mu} B^{+\mu\nu} \nabla^{\sigma} B^{-}_{\sigma\nu}&=
-{1\over 8} \bkt{
\nabla_\mu B_{\nu\rho}\nabla^{\mu} B^{\nu\rho} -2 \nabla_\mu B^{\mu\nu}\nabla_\rho B^{\rho}{}_\nu -2\nabla_\rho B^{\mu\nu}\nabla_\mu B^{\rho}{}_\nu
}, \\
R_{\mu\nu} B^{+\mu\rho} B^{-\nu}{}_{\rho}&={1\over 2} R_{\mu\nu} B^{\mu\rho} B^\nu{}_{\rho}-{1\over 8} R B^{\mu\nu} B_{\mu\nu}.
\end{split}
\end{equation}
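As an independent check of the second identity in~\eqref{eq:c4rewr}, one can verify the underlying algebraic statement numerically in flat Euclidean four-dimensional space, with the Ricci tensor replaced by an arbitrary symmetric matrix and $R$ by its trace. This is an illustrative sketch, not part of the derivation; the duality convention $\tilde B_{\mu\nu}=\frac12\epsilon_{\mu\nu\rho\sigma}B^{\rho\sigma}$ is assumed.

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol in four Euclidean dimensions
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    inv = sum(1 for a in range(4) for b in range(a + 1, 4) if p[a] > p[b])
    eps[p] = (-1) ** inv

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
B = A - A.T                       # arbitrary antisymmetric two-form
S = rng.normal(size=(4, 4))
R = S + S.T                       # stand-in for the (symmetric) Ricci tensor

Bdual = 0.5 * np.einsum('mnab,ab->mn', eps, B)
Bp = 0.5 * (B + Bdual)            # self-dual part
Bm = 0.5 * (B - Bdual)            # anti-self-dual part

lhs = np.einsum('mn,mr,nr->', R, Bp, Bm)
rhs = (0.5 * np.einsum('mn,mr,nr->', R, B, B)
       - 0.125 * np.trace(R) * np.einsum('mn,mn->', B, B))
print(np.isclose(lhs, rhs))  # True
```

The cross terms $R_{\mu\nu}B^{\mu\rho}\tilde B^{\nu\rho}$ cancel by the symmetry of $R_{\mu\nu}$, which is why only the two terms on the right-hand side of~\eqref{eq:c4rewr} survive.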
Up to a total derivative, the Weyl variation is then given by
\begin{equation}
\delta_\sigma \sqrt{g}\Big(c_4\nabla_{\mu} B^{+\mu\nu} \nabla^{\sigma} B^{-}_{\sigma\nu}+
{\tilde{c}_4}R_{\mu\nu} B^{+\mu\rho} B^{-\nu}{}_{\rho}\Big)= {c_4-2 \tilde{c}_4\over 8} \nabla^2 \sigma B^{\mu\nu} B_{\mu\nu}-{c_4-2 \tilde{c}_4\over 2} \nabla_\mu\nabla_\nu \sigma B^{\mu\rho} B^\nu {}_\rho.
\end{equation}
Hence, the ${\rm I}_{(\rm Weyl)^2}$ given in \cref{eq:lgzAns} is invariant under local Weyl transformations if $\tilde{c}_4=\frac12 c_4$.
The rest of the coefficients in ${\rm I}_{(\rm Weyl)^2}$, except $c_5$, can be determined by relating the Weyl anomaly to logarithmic divergences in the two-point functions. These two-point functions can be computed by taking functional derivatives of the free energy with respect to the background fields to which the operators couple. To this end we compute the scale dependence of the two-point functions using $\delta_\sigma \log Z =4A_0= -4a+{c\over 16\pi^2} {\rm I}_{(\rm Weyl)^2}$ and~\cref{eq:lgzAns}. This gives
\begin{equation}\label{eq:delFdelDV}
\begin{split}
\delta_\sigma {\delta^2 \log Z\over \delta D(x) \delta D(y)}\Big|_{{\mathbb R}^4} &= { c_ 1C_T\over 1280\pi^2} \delta^4(x-y), \\
\delta_\sigma {\delta^2 \log Z\over \delta { V}_\mu(x) \delta { V}_\nu(y)}\Big|_{{\mathbb R}^4}
&=
-{ c_2 C_T\over 640 \pi^2} \bkt{\delta^{\mu\nu} \partial^2-\partial^\mu \partial^\nu}\delta^4(x-y), \\
\delta_\sigma {\delta^2 \log Z\over \delta {\cal V}_\mu^{ij}(x) \delta {\cal V}_\nu^{kl}(y)}\Big|_{{\mathbb R}^4}
&=
-{ c_3 C_T\over 640 \pi^2} \epsilon_{(ij}\epsilon_{k)l}\bkt{\delta^{\mu\nu} \partial^2-\partial^\mu \partial^\nu}\delta^4(x-y), \\
\delta_\sigma {\delta^2\over \delta B_{\mu \nu} (x) \delta B_{\rho\sigma}(y) }\log Z\Big|_{\mathbb{R}^4}
&=
{c_4 C_T\over 10240 \pi^2} {\cal B}_{[\mu\nu][\rho\sigma]}\delta^4(x-y),
\end{split}
\end{equation}
where ${\cal B}_{\mu\nu\rho\sigma}$ is the differential operator
\begin{equation}\label{Bdef}
{\cal B}_{\mu\nu\rho\sigma}=\bkt{
\delta^{\mu \rho} \delta^{\sigma\nu} \partial^2-4 \delta^{\mu\rho} \partial^{\sigma} \partial^{\nu}
}.
\end{equation}
We now use the linearized coupling of the Weyl-multiplet to the stress-tensor multiplet operators~\cite{Bianchi:2019dlw}
to relate the terms computed in \eqref{eq:delFdelDV} to the two-point functions. The linearized coupling is given by
\begin{equation}
\label{eq:linCoup}
\delta{\mathscr L}= \sbkt{ \frac12 h^{\mu\nu} T_{\mu\nu}-{i\over 2} V_\mu j^\mu-\frac{i}{2} \bkt{t_{\mu}}^{ij} \bkt{{\cal V}_{\mu}}_{ij}- 16 \bkt{H_{\mu\nu} B^{+\mu\nu}+\overline{H}_{\mu\nu} {B}^{-\mu\nu}} - O_2 D}.
\end{equation}
The stress-tensor multiplet two-point functions are completely determined by using Ward identities in terms of the central charge $C_T$ and are given by
\begin{equation}\label{eq:TM2pfns}
\begin{split}
\langle O_2 \bkt{x} O_2\bkt{y} \rangle_{\mathbb{R}^4} &= {3C_T\over 5120 \pi^4} {1\over |x-y|^4}\,,\\
\langle {j_{\mu}(x)}{ j_{\nu}(y)} \rangle_{\mathbb{R}^4} &=
-{3 C_T\over 160 \pi^4} {1\over |x-y|^6} I_{\mu\nu}\bkt{x-y}\,, \\
\langle \bkt{t_{\mu}}^{ij}(x)\bkt{ t_{\nu}}^{kl}(y) \rangle_{\mathbb{R}^4} &=
-{3 C_T\over 160 \pi^4} {1\over |x-y|^6} I_{\mu\nu}\bkt{x-y}\, , \\
\langle H_{\mu\nu}\bkt{x}\overline{H}_{\rho\sigma}\bkt{y}\rangle_{\mathbb{R}^4} &= {3C_T\over 1280 \pi^4} {(x-y)^\gamma(x-y)^\iota\over |x-y|^8}\\ &\hskip1cm\left(4\epsilon_{\mu\nu\gamma[\sigma}\delta_{\rho]\iota}+4\epsilon_{\rho\sigma\iota[\nu}\delta_{\mu]\gamma}+12\tensor{\delta}{^[^\mu_\iota}\tensor{\delta}{^\nu_\sigma}\tensor{\delta}{^\gamma^]_\rho}+8\tensor{\delta}{^[^\mu_[_\rho}\tensor{\delta}{_\sigma_]_\iota}\tensor{\delta}{^\nu^]^\gamma}\right)\,.
\end{split}
\end{equation}
These two-point functions are derived in \autoref{2ptWard}.
From the linearized coupling of the background scalar we find that
\begin{equation}
\delta_\sigma{\delta^2 \log Z\over \delta D(x) \delta D(y)}\Big|_{{\mathbb R}^4}=\delta_\sigma\langle O_2(x) O_2 (y)\rangle= {3 C_T\over 2560 \pi^2} \delta^4 (x-y).
\end{equation}
Comparing this with~\eqref{eq:delFdelDV} we determine that $c_1= {3\over 2}$.
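The coefficient matching here is elementary arithmetic; as an illustrative sketch (not part of the paper's computation), it can be solved symbolically by equating the two coefficients of $C_T\,\delta^4(x-y)/\pi^2$:

```python
import sympy as sp

c1 = sp.symbols('c1')
# c1 * C_T/(1280 pi^2) from the anomaly vs 3 C_T/(2560 pi^2) from the correlator
sol = sp.solve(sp.Eq(sp.Rational(1, 1280) * c1, sp.Rational(3, 2560)), c1)
print(sol)  # [3/2]
```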
From the linearized coupling of the background field to the $SU(2)_R$ current we compute
\begin{equation}
\delta_{\sigma} {\delta^2 \log Z\over \delta {\cal V}_\mu^{ij}(x) \delta {\cal V}_\nu^{kl}(y)}\Big|_{{\mathbb R}^4}
=
-{1\over 4}\delta_\sigma\langle t^{\mu}_{ij}(x) t^{\nu}_{kl}(y)\rangle.
\end{equation}
The scale dependence of the right hand side can be computed using the two-point functions~\eqref{eq:TM2pfns} and the identity
\begin{equation}
\frac{I_{\mu\nu}}{\vert x\vert^6}=\frac{1}{12}(\delta_{\mu\nu}\partial^2-\partial_\mu\partial_\nu)\frac{1}{\vert x\vert^4},
\end{equation}
which gives
\begin{equation}
\delta_\sigma {\delta^2 \log Z\over \delta {\cal V}_\mu^{ij}(x) \delta {\cal V}_\nu^{kl}(y)}\Big|_{{\mathbb R}^4}
=
{C_T\over 1280 \pi^2} \epsilon_{(ij}\epsilon_{k)l}\bkt{\delta^{\mu\nu} \partial^2-\partial^\mu \partial^\nu}\delta^4(x-y).
\end{equation}
Comparing with~\cref{eq:delFdelDV} we get $c_3=-\frac12$. In a completely analogous manner one finds that $c_2=-\frac12$.
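The tensor identity used above holds pointwise away from the origin and can be verified symbolically. The following sketch (an illustrative check only) uses $I_{\mu\nu}(x)=\delta_{\mu\nu}-2x_\mu x_\nu/|x|^2$:

```python
import sympy as sp

x = sp.symbols('x1:5', real=True)          # Cartesian coordinates on R^4
r2 = sum(xi**2 for xi in x)                # |x|^2
f = r2**-2                                 # 1/|x|^4
lap = sum(sp.diff(f, xi, 2) for xi in x)   # partial^2 acting on 1/|x|^4

ok = all(
    sp.simplify(
        (int(mu == nu) - 2 * x[mu] * x[nu] / r2) / r2**3     # I_{mu nu}/|x|^6
        - sp.Rational(1, 12) * (int(mu == nu) * lap - sp.diff(f, x[mu], x[nu]))
    ) == 0
    for mu in range(4) for nu in range(4)
)
print(ok)  # True
```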
From the coupling of the two-form field in the Lagrangian~\eqref{eq:linCoup} we find that
\begin{equation}
\begin{split}
&\delta_\sigma{\delta^2\over \delta B_{\mu \nu} (x) \delta B_{\rho\sigma}(y) }\log Z\Big|_{\mathbb{R}^4}
= 256\delta_\sigma \langle H_{\mu\nu}(x) \overline{H}_{\rho\sigma}(y)+ \overline{H}_{\mu\nu}(x) H_{\rho\sigma}(y)\rangle\\
&=
{3 C_T\over 5 \pi^4} \delta_\sigma{\bkt{x-y}^\gamma\bkt{x-y}^\iota\over |x-y|^8} \bigg(4\epsilon_{\mu\nu\gamma[\sigma}\delta_{\rho]\iota}+4\epsilon_{\rho\sigma\iota[\nu}\delta_{\mu]\gamma}+12\tensor{\delta}{^[^\mu_\iota}\tensor{\delta}{^\nu_\sigma}\tensor{\delta}{^\gamma^]_\rho}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+8\tensor{\delta}{^[^\mu_[_\rho}\tensor{\delta}{_\sigma_]_\iota}\tensor{\delta}{^\nu^]^\gamma}+\bkt{\{\mu,\nu,\gamma\}\leftrightarrow \{\rho,\sigma,\iota\}} \bigg).
\end{split}
\end{equation}
Under $\bkt{\{\mu,\nu,\gamma\}\leftrightarrow \{\rho,\sigma,\iota\}} $ the first two terms are antisymmetric and drop out from the two-point function. Using
\begin{equation}
{x^\gamma x^\iota\over |x|^8} ={1\over 24} \bkt{\partial^\gamma \partial^\iota+\tfrac12 g^{\gamma\iota}\partial^2} {1\over |x|^4}\,,
\end{equation}
we then find for the scale dependence of the two-point function
\begin{equation}
\delta_\sigma{\delta^2\over \delta B_{\mu \nu} (x) \delta B_{\rho\sigma}(y) }\log Z\Big|_{\mathbb{R}^4}
=
{C_T\over 20 \pi^2} {\cal B}_{[\mu\nu][\rho\sigma]}\delta^4(x-y).
\end{equation}
Comparing with~\cref{eq:delFdelDV} we find that $c_4=4096$.
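Similarly, the tensor identity used in this step can be checked symbolically away from the origin (an illustrative check only, not part of the derivation):

```python
import sympy as sp

x = sp.symbols('x1:5', real=True)          # Cartesian coordinates on R^4
r2 = sum(xi**2 for xi in x)                # |x|^2
f = r2**-2                                 # 1/|x|^4
lap = sum(sp.diff(f, xi, 2) for xi in x)   # partial^2 acting on 1/|x|^4

ok = all(
    sp.simplify(
        x[g] * x[i] / r2**4                # x^gamma x^iota / |x|^8
        - sp.Rational(1, 24) * (sp.diff(f, x[g], x[i])
                                + sp.Rational(1, 2) * int(g == i) * lap)
    ) == 0
    for g in range(4) for i in range(4)
)
print(ok)  # True
```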
The final coefficient, $c_5$, multiplies a quartic term, and hence one must compute up to four-point correlators to fix it. In fact, the supersymmetric Lagrangian also contains terms coupling the bottom components of marginal chiral multiplets with $B_{\mu\nu}^+ B^{+\mu\nu}$. Hence, four derivatives with respect to the background two-form field will involve a combination of two-, three- and four-point functions. In principle, all of these functions can contribute to the logarithmic divergence in the free energy.
The scale dependence of the two-point function can be ascertained as before. The scale dependence of three- and four-point functions is less straightforward to obtain. Higher-point functions contain two types of divergences: (i) when only a subset of the operators collide, (ii) when all the operators collide. The divergences of the first kind are the so-called semi-local divergences~\cite{Bzowski:2015pba}, and these can be regularized by counterterms which couple the background fields of the colliding operators to the remaining operator. Since such couplings are already present in the supersymmetric Lagrangian and are completely determined by supersymmetry, the effect of such counterterms is only to renormalize the operators appearing in the Lagrangian.
The divergences of the second kind, the so-called ultra-local divergences~\cite{Bzowski:2015pba}, are regularized by adding counter-terms local in the background fields; these are the divergences we are interested in. The ultra-local divergence of the three-point function can be determined with a bit of effort because the three-point functions are protected. The task becomes much more difficult for the four-point function. Since the only theory-dependent content of the Weyl anomaly is the coefficient $C_T$, we can use the free theory results to determine the ultra-local divergence in the four-point function and fix $c_5$. This would be interesting to compute but we will not do it here.
\subsection{Finite part of the free energy and the K\"ahler potential}
In this section we study the free energy of $\mathcal{N}=2$ SCFTs on a supersymmetric curved background as a function of the marginal couplings. On $\mathbb{R}^4$ an $\mathcal{N}=2$ SCFT can be deformed while preserving superconformal invariance by the term
\begin{equation}
{1\over \pi^2} \int \mathrm{d}^4 x\, \sum_{i=1}^{{\rm dim}_{{\mathcal M}_C}}\bkt{ \tau_i {\cal C }_i+\overline{\tau}_{\overline i} {\overline {\cal C }}_{\overline i}},
\end{equation}
where ${\cal C }_i$ is a marginal operator in the SCFT and ${\rm dim}_{{\mathcal M}_C}$ is the dimension of the conformal manifold ${\mathcal M}_C$ of the SCFT.
On a curved background the above deformation is not, in general, superconformally invariant. It can, however, be made superconformally invariant by adding non-minimal couplings with background fields of the supergravity Weyl-multiplet~\cite{Lauria:2020rhc,Gomis:2014woa}. This leads to a term of the form
\begin{equation}
{1\over \pi^2} \int \mathrm{d}^4 x \sqrt{g}\, \sum_i \tau_i \bkt{{\cal C }_i-{1\over 4} {\cal A}_i B_{\mu\nu}^+ B^{+\mu\nu}}+ {\rm h.c},
\end{equation}
where ${\cal A}_i$ is the bottom component of the chiral multiplet whose top component is the marginal operator ${\cal C }_i$.
For Lagrangian SCFTs based on a gauge group $G=\prod_i G_i$, the above deformation is proportional to the action for an $\mathcal{N}=2$ vector multiplet~\cite{Gerchkovitz:2014gta} with complexified gauge coupling $\tau_i$. For our purposes, it now suffices to focus on a single marginal deformation which has a Lagrangian description. Our results hold for abstract marginal deformations, irrespective of their microscopic realization.
In order to leverage the microscopic realization of marginal deformations in terms of the $\mathcal{N}=2$ vector multiplet, we use the language of cohomological fields introduced in \cite{Festuccia:2018rew,Festuccia:2020yff}. A key ingredient is the existence of a Killing vector $v$ which is the square of a supersymmetry transformation. Given $v$ we can define its dual one-form $\kappa=g(v,\bullet)$ and the interior product on forms $\iota_v:\omega\in\Omega^*({\mathcal M})\mapsto(\iota_v\omega)(\bullet):=\omega(v,\bullet)$. Left- and right-handed generalized Killing spinors $\zeta_i$ and $\overline{\chi}^i$, of norm $s(x)$ and $\tilde{s}(x)$ respectively, generate the supersymmetry transformations. These functions are related to the Killing vector field by $s\tilde{s}=\Vert v\Vert^2$. Using this geometric data one can construct the cohomological fields $\phi=\tilde{s}X+s\overline{X}$ and $\Psi_\mu=\zeta_i\sigma_\mu\overline{\lambda}_{\overline{i}}+\overline{\chi}^i\overline{\sigma}_\mu\lambda_i$ in terms of a standard $\mathcal{N}=2$ vector multiplet $(X,\lambda_i,A_\mu)$. The supersymmetry variations then take the form
\begin{align}
\delta A&=i\,\Psi\\
\delta \Psi&=\iota_v F+i\, d_A\phi \\
\delta \phi&=\iota_v \Psi,
\end{align}
while the action can be written as
\begin{align}
S&=\frac{1}{\pi g_{\rm YM}^2}\int_\mathcal{M} \Omega\wedge\text{Tr}\left(\phi+\Psi+F\right)^2+\delta(\dots)
\end{align}
where $\Omega$ is a $v$-equivariantly closed multiform whose zero-form and two-form components are given by~\cite{Festuccia:2018rew}
\begin{align}\label{eq:defOmegas}
\Omega_0&=\frac{s-\tilde{s}}{s+\tilde{s}}+i\frac{\theta g_{\rm YM}^2}{8\pi^2},\\
\Omega_2&=-2i\frac{s-\tilde{s}}{(s+\tilde{s})^3}d\kappa-\frac{4i}{(s+\tilde{s})^3}\kappa\wedge d(s-\tilde{s}).
\end{align}
$\Omega$ also has a four-form component which is not needed for the subsequent computations.
Since $\Omega$ is equivariantly closed, a straightforward computation shows that
\begin{align}
(i\, d+\iota_v)\Omega\wedge\Tr(F+\Psi+\phi)^2=\delta(\Tr(F+\Psi+\phi)^2)\,,
\end{align}
showing that up to supersymmetrically exact terms the Lagrangian is equivariantly closed. Following Atiyah-Bott-Berline-Vergne, this implies that the action localizes equivariantly to a sum over the fixed points of the Killing vector \cite{Festuccia:2020xtv}. Indeed one can show that modulo $\delta$-exact terms
\begin{equation}\label{fp}
S=32 \tau\sum_{x:s(x)=0} \frac{1}{\varepsilon_x\varepsilon'_x}{\cal A}(x)+32 \overline{\tau}\sum_{x:\tilde{s}(x)=0} \frac{1}{\varepsilon_x\varepsilon'_x}\overline{{\cal A}}(x)\,,
\end{equation}
with $\varepsilon_x,\varepsilon'_x$ characterizing the manifold close to the fixed point $x$. In deriving \eqref{fp} we used that
\begin{equation}
{\rm Tr}X^2= +8i\, {\cal A}(x), \qquad {\rm Tr}\overline{X}^2=-8 i\, \overline{{\cal A}}(x).
\end{equation}
Let us illustrate the above argument in more detail on the deformed sphere background of \cite{Hama:2012bg}. The Killing vector field and the functions $s$ and $\tilde{s}$ are given by
\begin{equation}
v=\frac{1}{\ell}\frac{\partial}{\partial\phi}+\frac{1}{\widetilde{\ell}}\frac{\partial}{\partial\chi}, \qquad s=2\sin^2\left(\frac{\rho}{2}\right), \qquad \tilde{s}=2\cos^2\left(\frac{\rho}{2}\right).
\end{equation}
The vector field has fixed points at the north pole ($\rho=0$) and the south pole ($\rho=\pi$). We define the following multiform
\begin{equation}\label{eq:defeta}
\eta=\frac{\kappa}{\Vert v\Vert^2}-i\, \frac{\kappa\wedge d\kappa}{\Vert v\Vert^4}\quad \Rightarrow\quad (i\, d+\iota_v)\eta=1\,,
\end{equation}
which is well defined everywhere except at the fixed points of $v$. Away from the poles, we can write
\begin{align}
\Omega\wedge\text{Tr}\left(\phi+\Psi+F\right)^2&=((i\, d+\iota_v)\eta)\wedge\Omega\wedge\text{Tr}\left(\phi+\Psi+F\right)^2\nonumber\\
&=(i\, d+\iota_v)\left(\eta\wedge\Omega\wedge\text{Tr}\left(\phi+\Psi+F\right)^2\right)+\delta\left(\eta\wedge\Omega\wedge\text{Tr}\left(\phi+\Psi+F\right)^2\right).
\end{align}
To use this, we can cut out small balls of radius $\epsilon$ around the poles of the sphere and apply Stokes' theorem, giving
\begin{align}
&\int_\mathcal{M} \Omega\wedge\text{Tr}\left(\phi+\Psi+F\right)^2\nonumber \\
&=\lim_{\epsilon\rightarrow 0}\left(i\, \int_{(S^{3}_\epsilon(N)\cup S^{3}_\epsilon(S))}\eta\wedge\Omega\wedge\text{Tr}\left(\phi+\Psi+F\right)^2+\delta\left(\int_{\mathcal{M}\setminus(B_\epsilon(N)\cup B_\epsilon(S))}\eta\wedge\Omega\wedge\text{Tr}\left(\phi+\Psi+F\right)^2\right)\right).\label{eq:limint}
\end{align}
Using the definition of $\eta$ in~\cref{eq:defeta} and that of $\Omega$ in~\cref{eq:defOmegas} we compute
\begin{align}
\begin{split} \eta\wedge{\Omega}\wedge\text{Tr}\left(\phi+\Psi+F\right)^2=&\text{Tr}(\phi^2)\frac{-i\, \omega_3}{(s\tilde{s})^2}\kappa\wedge d\kappa+\frac{\omega_1}{s\tilde{s}}\kappa\wedge\text{Tr}(\Psi^2)+2\frac{\omega_1}{s\tilde{s}}\kappa\wedge\text{Tr}(\phi F)\\
&+2\frac{\omega_1}{s\tilde{s}}\kappa\wedge\text{Tr}(\Psi\wedge F)-\frac{2i\, \omega_3}{(s\tilde{s})^2}\kappa\wedge d\kappa\wedge\text{Tr}(\phi\Psi)+\dots\ \,,
\end{split}
\end{align}
where we have omitted forms of degree less than three since they do not contribute to the integrals. Here $\omega_1$ and $\omega_3$ are the coefficients of the one- and three-form components of $\eta\wedge\Omega$:
\begin{equation}
\omega_1=\frac{s-\tilde{s}}{(s+\tilde{s})}+i\frac{\theta g_{\rm YM}^2}{8\pi^2},\qquad
\omega_3= {\frac{(s-\tilde{s})(s^2+4s\tilde{s}+\tilde{s}^2)}{(s+\tilde{s})^3}+i\frac{\theta g_{\rm YM}^2}{8\pi^2}}\,.
\end{equation}
Using the explicit form of the Killing one-form, we find that the leading term at small $\epsilon$ for the surface integrals in \eqref{eq:limint} is
\begin{align}
i\, \int_{S^{3}_\epsilon(N)} \text{Tr}(\phi^2)\omega_3\kappa\wedge d\kappa &= i\int_{S^{3}_\epsilon(N)}\text{Tr}(\phi^2)(N) (-i)\frac{1}{\epsilon^4}\omega_3(N)\frac{-2\epsilon}{f} E^1\wedge E^2\wedge E^3\nonumber\\
&=\frac{-2}{f \epsilon^3}\omega_3(N)\text{Tr}(\phi^2)(N)\int_0^{\pi/2}\int_0^{2\pi}\int_0^{2\pi} \epsilon \ell \cos\theta\, \epsilon\widetilde{\ell}\sin\theta\, \epsilon f\, \mathrm{d}\phi\, \mathrm{d}\chi\, \mathrm{d}\theta\nonumber\\
&=-4\pi^2 \ell\widetilde{\ell} \omega_3(N) \text{Tr}(\phi^2)(N)\\
&=-16\pi^2 \ell\widetilde{\ell}\left(-1+\frac{i\theta g_{\rm YM}^2}{8\pi^2}\right)\text{Tr}(X^2)(N)\nonumber\\
&=4i\pi g_{\rm YM}^2\tau \ell\widetilde{\ell}\text{Tr}(X^2)(N)\,.\nonumber
\end{align}
All other terms contributing to the first integral in~\eqref{eq:limint} are suppressed by a factor $s\tilde{s}\approx\epsilon^2$ and thus vanish. The computation around the south pole works in the same way. One can also check that terms in the second integral are well-behaved and finite when taking $\epsilon$ to zero. This proves
that, modulo $\delta$-exact terms,
\begin{equation}\label{NScorr}
S=-4 i\tau \ell\widetilde{\ell}\Tr(X^2)(N)+4 i\overline{\tau}\ell\widetilde{\ell}\Tr(\overline{X}^2)(S) =32 \ell\widetilde{\ell} \bkt{ \tau {\cal A}(N)+\overline{\tau}\overline{{\cal A}}(S)}.
\end{equation}
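The pole values of $\omega_3$ entering the computation above can be checked symbolically; the real part of $\omega_3$ (dropping the constant $\theta$-term) evaluates to $-1$ at the north pole and $+1$ at the south pole. This is an illustrative sketch, not part of the derivation:

```python
import sympy as sp

rho = sp.symbols('rho', real=True)
s = 2 * sp.sin(rho / 2)**2        # vanishes at the north pole, rho = 0
st = 2 * sp.cos(rho / 2)**2       # vanishes at the south pole, rho = pi

# real part of omega_3; the constant theta g_YM^2/(8 pi^2) piece is dropped
w3 = (s - st) * (s**2 + 4 * s * st + st**2) / (s + st)**3

print(sp.simplify(w3.subs(rho, 0)), sp.simplify(w3.subs(rho, sp.pi)))  # -1 1
```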
In the presence of multiple marginal deformations \eqref{NScorr} generalizes to
\begin{equation}
S=32 \ell\widetilde{\ell} \sum_i\bkt{ \tau_i {\cal A}_i(N)+\overline{\tau}_{\overline{i}}\overline{{\cal A}}_{\overline{i}}(S)}.
\end{equation}
From this we get
\begin{align}\label{polecorr}
\partial_i\overline{\partial}_{\overline{j}}\log Z_{\cal M}
&=(32\ell\widetilde{\ell})^2\left\langle {\cal A}_i(N)\overline{{\cal A}}_{\overline{j}}(S)\right\rangle_{\cal M}.
\end{align}
For the case of the round sphere where $\ell=\widetilde{\ell}=r$, \eqref{polecorr} reproduces the result of~\cite{Gomis:2014woa,Gerchkovitz:2016gxx}.
Remarkably, \eqref{polecorr} can be generalized to \emph{any} supersymmetric background. For a manifold ${\cal M}_s$ with many isolated fixed points, \eqref{polecorr} generalizes to a sum over all fixed points,
\begin{equation}\label{eq:polcorrGen}
\partial_i\overline{\partial}_{\overline{j}} \log Z_{{\mathcal M}_s}= (32)^2 \sum_{x:s(x)=0,y: \widetilde{s}(y)=0} {1\over \varepsilon_{x} \varepsilon'_{x}\varepsilon_y\varepsilon'_{y}} \langle {\cal A}_i(x) \overline{{\cal A}}_{\overline{j}}(y) \rangle\,,
\end{equation}
where $\ell$ and $\widetilde\ell$ in \eqref{polecorr} are replaced by the two equivariant parameters $\varepsilon_x$, $\varepsilon_x'$ ($\varepsilon_y$, $\varepsilon_y'$) that characterize the plus (minus) fixed points of the chosen Killing vector on ${\cal M}_s$.
The two-point function appearing on the right hand side of \eqref{polecorr} is related to the two-point function of marginal operators by supersymmetric Ward identities, and hence is proportional to the Zamolodchikov metric.\footnote{One might have expected that since \eqref{polecorr} contains scalar operators, it could have been written in terms of elementary geometric data of the deformed sphere, {\it e.g.} the geodesic distance between the two fixed points. However, this is not true because of the presence of various non-trivial background fields which modify the two-point function of microscopic fields in the Lagrangian and consequently the two-point functions of various composite operators. It would be interesting to determine explicitly how the two-point functions depend on such background fields.}
Choosing $\ell= r b, \widetilde{\ell}={r\over b}$, we can parameterize \eqref{polecorr} as
\begin{equation}
\partial_i\overline{\partial}_{\overline{j}}\log Z _{\mathcal M}= {g_{i\overline{j}}\over 12}\bkt{1+ {\widetilde{P}}(\tau_i,\overline{\tau}_i,b)}\,.
\end{equation}
We can then integrate this to obtain
\begin{equation}\label{eq:logzAftLoc}
\log Z= {K\bkt{\tau_i,\overline{\tau}_{\overline{i}}}\over 12}\bkt{1+ P(\tau_i,\overline{\tau}_i,b)}+ P_h \bkt{\tau_i, b }+\overline{P}_h \bkt{\overline{\tau}_{\overline{i}},b},
\end{equation}
where $K(\tau_i,\overline{\tau}_i)$ is the K\"ahler potential, $ \tilde{P}= {1\over {\rm dim_{{\cal M}_C}}} g^{i\overline{j}} \partial_i \overline{\partial}_{\overline{j}} (K P)$, and $P_h$ and $\overline{P}_h$ are holomorphic and anti-holomorphic functions of the moduli. In the next section we will argue that $P_h \bkt{\tau_i, b }$ and $\overline{P}_h\bkt{\overline{\tau}_{\overline{i}},b}$ are Weyl-invariant functionals of the supergravity backgrounds.
\subsection{The moduli anomaly and the finite part of the free energy}
In this section we use insights from the moduli anomaly~\cite{Schwimmer:2018hdl} and extended conformal manifolds~\cite{Seiberg:2018ntt} to further constrain the form of the free energy for $\mathcal{N}=2$ SCFTs. In particular we demonstrate that the functions $P_h \bkt{\tau_i, b }$ and $\overline{P}_h\bkt{\overline{\tau}_{\overline{i}},b}$ appearing in~\cref{eq:logzAftLoc} are Weyl-invariant functionals of the supergravity backgrounds. Moreover, we argue that the ambiguous part of the function $K\bkt{\tau_i,\overline{\tau}_{\overline{i}}} P(\tau_i,\overline{\tau}_i,b)$ is proportional to the ${\rm I}_{(\rm Weyl)^2}$ term.
We start with the superspace expression of the Weyl-anomaly which involves the K\"ahler potential~\cite{Gomis:2015yaa,Schwimmer:2018hdl}:
\begin{equation}\label{deltaSig}
\delta_\Sigma \log Z\supset +{1\over 192 \pi^2} \int \mathrm{d}^4 x \mathrm{d}^4\theta \mathrm{d}^4\overline{\theta}\, {\cal E} \bkt{ \delta \Sigma+\delta\overline{\Sigma}} K\bkt{\tau_i,\overline{\tau}_{\overline{i}}}.
\end{equation}
The normalization is fixed by the two-point function of marginal operators, which is proportional to the Zamolodchikov metric.
After evaluating the right hand side of \eqref{deltaSig} in component form and setting the moduli $\tau_i$ to be constant, it takes the form of a Weyl variation of the supersymmetric Gauss-Bonnet term~(see eq. (5.9) in~\cite{Gomis:2015yaa} or (2.2) in~\cite{Schwimmer:2018hdl}),
\begin{equation}\label{GBWeyl}
{1\over 96 \pi^2} K\bkt{\tau_i, \overline{\tau}_{\overline{i}}} \int \mathrm{d}^4 x\, \delta_\sigma
\bkt{\sqrt{g} \bkt{ {1\over 8}E_4-{1\over 12} \square R + {\widetilde{c}}\bkt{\tau_i, \overline{\tau}_{\overline{i}}} \bkt{C_{\mu\nu\rho\sigma} C^{\mu\nu\rho\sigma}+\cdots}}},
\end{equation}
where $\widetilde{c}\bkt{\tau_i, \overline{\tau}_{\overline{i}}} $ is an arbitrary function of the moduli. The dependence on the Gauss-Bonnet term is unambiguous, while the (Weyl)$^2$ term appears with $\widetilde{c}\bkt{\tau_i, \overline{\tau}_{\overline{i}}} $ as this is the only Weyl-invariant and supersymmetric local term that can be constructed from $\mathcal{N}=2$ supergravity background fields. Comparing \eqref{GBWeyl} with~\eqref{eq:logzAftLoc} we conclude that $P_h$ and $\overline{P}_h$ are Weyl-invariant, possibly non-local functionals of the supergravity background fields. The free energy, modulo holomorphic Weyl-invariant terms, is
\begin{equation}\label{eq:logzkI}
{K\bkt{\tau_i,\overline{\tau}_i}\over 12}+ {{\rm I}_{(\rm Weyl)^2}\over 96\pi^2} K(\tau_i,\overline{\tau}_i) \tilde{c}(\tau_i,\overline{\tau}_i)+\gamma(\tau_i,\overline{\tau}_i,b),
\end{equation}
where $\gamma(\tau_i,\overline{\tau}_i,b)$ is an unambiguous, Weyl-invariant and necessarily non-local functional of the supergravity background.
We finally use the K\"ahler ambiguity and the choice of possible counter-terms~\cite{Seiberg:2018ntt} to further constrain $\tilde{c}(\tau_i,\overline{\tau}_i)$. The ambiguities in the free energy must be taken into account by appropriate counter-terms. These ambiguities render the partition function multivalued on the conformal manifold ${\cal M}_C$. Including the counter-terms with couplings $t_i$ makes the partition function single-valued on the extended conformal manifold~\cite{Seiberg:2018ntt} parameterized by the marginal couplings and the counter-term couplings. For example, on the round sphere the free energy is ${K(\tau_i,\overline{\tau}_i)\over 12}$ and the possible supergravity counter-terms are~\cite{Gomis:2014woa,deWit:2010za} (see also appendix A of~\cite{Chester:2020vyz})\footnote{$t_\chi$ and $t_W$ have to be holomorphic functions of the moduli, but here we treat them as independent couplings which can have holomorphic ambiguities.}:
\begin{equation}
t_\chi{1\over 192 \pi^2 } \int \mathrm{d}^4x \mathrm{d}^4\theta {\cal E} \bkt{\Xi-W^{\alpha\beta}W_{\alpha\beta}}+c.c,\qquad t_W{1\over 192 \pi^2 } \int \mathrm{d}^4x \mathrm{d}^4\theta {\cal E} W^{\alpha\beta}W_{\alpha\beta}+c.c.
\end{equation}
The second term, which is proportional to ${\rm I}_{(\rm Weyl)^2}$, vanishes for the round sphere and the first term evaluates to
\begin{equation}
-{1\over 12}(t_\chi+\overline{t}_\chi) \chi(\mathbb{S}^4)=-{1\over 6} (t_\chi+\overline{t}_\chi).
\end{equation}
The free energy is then a well-defined function of the marginal couplings and $t_\chi$ if the K\"ahler shift,
\begin{equation}
K\to K+F+\overline{F}\,,
\end{equation}
is accompanied by a shift in the coupling $t_{\chi}$,
\begin{equation}
t_\chi\to t_\chi+{F\over 2}.
\end{equation}
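Explicitly, combining the round-sphere free energy with the counter-term evaluated above, the shifted quantities satisfy
\begin{equation}
{K+F+\overline{F}\over 12}-{1\over 6}\bkt{t_\chi+{F\over 2}+\overline{t}_\chi+{\overline{F}\over 2}}
={K\over 12}-{1\over 6}\bkt{t_\chi+\overline{t}_\chi},
\end{equation}
so the shifts of $K$ and $t_\chi$ cancel and the free energy is single-valued.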
For supergravity backgrounds with a non-zero ${\rm I}_{(\rm Weyl)^2}$, the K\"ahler shift must also be accompanied by an appropriate shift in the coupling $t_W$ to make the free energy well-defined. Since the coupling can be shifted by holomorphic functions of moduli only, this constrains the form of $\widetilde{c}(\tau_i,\overline{\tau}_i)$ in~\cref{eq:logzkI}. It must satisfy
\begin{equation}
K\bkt{\tau_i,\overline{\tau}_i} \widetilde{c}(\tau_i,\overline{\tau}_i)= \alpha K\bkt{\tau_i,\overline{\tau}_i}+\beta(\tau_i,\overline{\tau}_i),
\end{equation}
where $\alpha$ is a constant and $\beta(\tau_i,\overline{\tau}_i)$ is an unambiguous function of the moduli which is independent of the K\"ahler shifts. This also implies that $\beta(\tau_i,\overline{\tau}_i)$ is modular invariant, since duality transformations generate K\"ahler shifts. The free energy is now a well-defined function of the moduli $\tau_i$, the coupling constant for the Gauss-Bonnet term $t_\chi$ and the coupling constant for the ${\rm I}_{(\rm Weyl)^2}$ term if the K\"ahler shifts are accompanied by the shifts
\begin{equation}
t_{\chi}\to t_\chi+{F\over 2}\,, \qquad t_{W}\to t_W -4 \alpha F.
\end{equation}
In summary, the free energy of an $\mathcal{N}=2$ SCFT on an arbitrary supersymmetric background takes the form
\begin{equation}\label{eq:logzGen}
\log Z= {K\bkt{\tau_i,\overline{\tau}_i}\over 12} + {\alpha\over 96} K\bkt{\tau_i,\overline{\tau}_i} {\rm I}_{(\rm Weyl)^2}+ {1\over 96}\beta(\tau_i,\overline{\tau}_i) {\rm I}_{(\rm Weyl)^2}+\gamma(\tau_i,\overline{\tau}_i,b)+ P_h \bkt{\tau_i, b }+\overline{P}_h \bkt{\overline{\tau}_{\overline{i}},b},
\end{equation}
where $\alpha$ is a theory-dependent constant, $\beta(\tau_i,\overline{\tau}_i)$ and $\gamma(\tau_i,\overline{\tau}_i,b)$ are modular-invariant functions of the moduli, and $P_h$ ($\overline{P}_h$) is a Weyl-invariant, holomorphic (anti-holomorphic) function of the moduli and the supergravity background parameters.
\section{Supersymmetric localization and the free energy on the deformed sphere}
In this section we study SCFTs on the specific supergravity background of~\cite{Hama:2012bg}. We start by reviewing the background fields needed to preserve supersymmetry. We then analyze the partition function of supersymmetric theories on this background. We discuss the structure of the free energy
for Abelian $\mathcal{N}=4$ super Yang-Mills (SYM) as well as $\mathcal{N}=4$ SYM at large $N$ using localization. We then elucidate the definition of $\mathcal{N}=4$ SYM on the deformed sphere, showing that the finite part of the free energy is independent of the deformation parameter if one chooses the theory such that near the poles it reduces to $\mathcal{N}=4$ SYM in the $\Omega$-background, indicating a symmetry enhancement.
\subsection{Review of the $\mathcal{N}=2$ supersymmetric background}
An $\mathcal{N}=2$ supersymmetric theory can be coupled to a supergravity background by turning on appropriate background fields in the $\mathcal{N}=2$ Poincar\'e supergravity. To preserve supersymmetry one needs to add non-minimal couplings in the Lagrangian.
For the vector multiplet the Lagrangian takes the form ~\cite{Festuccia:2018rew,Hama:2012bg}
\begin{eqnarray}
&\sL_{\rm vec}=\sL_{\rm vec}^{\rm cov} + \Tr \Big[16 B_{\mu\nu} (F^{+\mu\nu}\overline{X}+F^{-\mu\nu}{X})-64 {B^{+\mu\nu} B^{+}_{\mu\nu}} \overline{X}^2&+ 64 {B^{-\mu\nu} B^{-}_{\mu\nu}} X^2\nonumber\\
&&- 2 \bkt{D-{R\over 3}} X\overline{X}\Big],
\eea
where $\sL_{\rm vec}^{\rm cov}$ is obtained from the flat space Lagrangian by covariantizing all derivatives with respect to the metric and background gauge fields for the $R$-symmetry, and $X$ is the complex scalar field of the vector multiplet. Similarly for the hypermultiplet Lagrangian one needs to add non-minimal couplings with the background two-form, curvature and the scalar field to preserve supersymmetry,
\begin{equation}
\sL_{\rm hyp}=\sL^{\rm cov}_{\rm hyp} +i\, B_{\mu\nu}\Tr \bkt{\psi^1 \sigma^{\mu\nu} \psi^2-\overline{\psi}^1 \overline{\sigma}^{\mu\nu}\overline{\psi}^2 } - {1\over 4} \bkt{D-{2\over 3} R} \Tr (Z_1 \overline{Z}_1+Z_2 \overline{Z}_2),
\end{equation}
where $\psi_i$ and $Z_i$ are fermions and scalars in the hypermultiplet.
In order to preserve supersymmetry on the deformed sphere with $U(1)\times U(1)$ isometry and the metric in \eqref{eq:HHellips}, one needs to further turn on non-trivial background fields.
These were determined in~\cite{Hama:2012bg}, which we reproduce here in a slightly more conventional form\footnote{These background fields are not uniquely determined. Indeed there is a three-parameter family of background fields which preserve supersymmetry on the deformed sphere~\cite{Hama:2012bg,Klare:2013dka,Festuccia:2018rew,Festuccia:2020yff}. In reproducing the background fields in \eqref{Bfield}-\eqref{Dfield} we have made a convenient choice of these parameters such that the background fields are smooth at the poles.}. The background two-form field is given by
\begin{equation}\begin{split}\label{Bfield}
B_{\mu\nu}^+&={1-\cos^2{\rho\over2}\cos\rho\over 16 f g} \Bigl( \bkt{\cos\theta\bkt{g-f}+\sin\theta h}\bkt{ E^1\wedge E^4+E^2\wedge E^3}\\
&+ \bkt{\sin\theta\bkt{g-f}-\cos\theta h} \bkt{E^2\wedge E^4+E^3\wedge E^1}\Bigr)\, \\
B_{\mu\nu}^-&={1+\sin^2{\rho\over2} \cos\rho\over 16 f g}
\Bigl(\bkt{\cos\theta\bkt{f-g}+h\sin\theta}\bkt{ E^1\wedge E^4-E^2\wedge E^3}\\
&+ \bkt{\sin\theta\bkt{f-g}-\cos\theta h} \bkt{E^2\wedge E^4-E^3\wedge E^1}\Bigr),
\end{split}
\end{equation}
where $f$, $g$, and $h$ are defined in \eqref{fgh}.
The expression for the background $SU(2)_R$ field is more complicated and requires the explicit expression for the generalized Killing spinor of~\cite{Hama:2012bg} to solve for it.
It can be expressed as
\begin{equation}\begin{split}
{\cal V}_{\mu} \mathrm{d} x^\mu&= {\mathbf{V}}_a E^a,
\end{split}
\end{equation}
where ${\mathbf{V}_a}$ are $SU(2)$-valued components of the background field in the local frame. These are given by
\begin{equation}\label{VVVV}
\begin{split}
{\bf V}_1&=\widetilde{V}_{1,1}\tau^3+\widetilde{V}_{1,2}\tau^2_{\chi+\phi}, \\
{\bf V}_2&= \widetilde{V}_{2,1}\tau^3+\widetilde{V}_{2,2}\tau^2_{\chi+\phi}, \\
{\bf V}_3&= \widetilde{V}_{3,3}\tau^1_{\chi+\phi}, \\
{\bf V}_4&= \widetilde{V}_{4,3}\tau^1_{\chi+\phi},
\end{split}
\end{equation}
where we have defined $\tau_{\chi+\phi}^1=\cos({\chi+\phi}) \tau^1+\sin({\chi+\phi}) \tau^2$ and $\tau_{\chi+\phi}^2=\cos({\chi+\phi}) \tau^2-\sin({\chi+\phi}) \tau^1$. The terms multiplying the various matrices in \eqref{VVVV} are given by
\begin{equation}
\begin{split}
\widetilde{V}_{1,1}&={\sin^2\theta\over 2 f \sin\rho\cos\theta}+{\cos\theta\over 2 g \sin\rho}+{h\sin\theta \cos\rho\over 2 f g\sin\rho}\bkt{1+{\sin^2\rho\over 2}}-{1\over 2 \ell \cos\theta\sin\rho}\\
\widetilde{V}_{1,2}&={\sin\theta\cos\rho\over 2 f \sin\rho}\bkt{1-{\widetilde{\ell}^2\over g f}+{\sin^2\rho\over 2}\bkt{1-{f\over g}}} \\
\widetilde{V}_{2,1}&={\cos^2\theta\over 2 f \sin\rho\sin\theta}+{\sin\theta\over 2 g \sin\rho}-{h\cos\theta\cos\rho\over 2 f g\sin\rho }\bkt{1+{\sin^2\rho\over 2}}-{1\over 2 {\widetilde{\ell}} \sin\theta\sin\rho}\\
\widetilde{V}_{2,2}&=-{\cos\theta\cos\rho\over 2 f \sin\rho}\bkt{1-{\ell^2\over g f}+{\sin^2\rho\over 2}\bkt{1-{f\over g}}}\\
\widetilde{V}_{3,3}&=-{\cos\rho\over 2 f \sin\rho}\bkt{1-{\ell^2\widetilde{\ell}^2\over g f^3}+{\sin^2\rho\over 2}\bkt{1-{f\over g}}}\\
\widetilde{V}_{4,3}&={h\cos\rho\over 2 f g \sin\rho}\bkt{1-{\ell^2\widetilde{\ell}^2\over g f^3}+{\sin^2\rho\over 2}\bkt{1-{f\over g}}}.
\end{split}
\end{equation}
Finally, the expression for the background scalar field, after subtracting the contribution from the curvature coupling, is
\begin{equation}\begin{split}\label{Dfield}
D(x)-{R\over 3} &={1\over f^2}-{1\over g^2} +{h^2\over f^2 g^2}-{4\over f g}-{\sin^2\rho\cos^2\rho\over 4 f^2 g^2} \bkt{f^2+g^2-2 f g+h^2}\\&+ \bkt{
{1\over g}\partial_\rho -{h\over g f \sin\rho}\partial_\theta +{\ell^2 \widetilde{\ell}^2\cos\rho\over g f^4 \sin\rho } +{\bkt{\ell^2+\widetilde{\ell}^2-f^2}\cos\rho\over gf^2\sin\rho}-{\cos\rho\over f \sin\rho}
} \bkt{{1\over f}-{1\over g}} \sin\rho\cos\rho\\
&+ \bkt{{1\over f \sin\rho} \partial_\theta
+{\ell^2\widetilde{\ell}^2 h \cos\rho\over g^2 f^4 \sin\rho}+{2\cot2\theta\over f \sin\rho}-{h\cos\rho \over f g \sin\rho}} {h\over f g} \sin\rho\cos\rho.
\end{split}
\end{equation}
\subsection{The localized partition function}
In this section we consider corrections to the free energy coming from the squashing of the sphere to an ellipsoid. We will pay close attention to the logarithmic divergence in the free energy as well as its dependence on the marginal couplings of the SCFT. We start by giving the localized partition function for $\mathcal{N}=2$ vector and hypermultiplets and then specialize to the theory with a hypermultiplet in the adjoint representation.
The localized partition function is given by \cite{Hama:2012bg}
\begin{equation}
{\mathcal Z}=\int \mathrm{d} a \bkt{\prod_{\alpha\in \Delta_+} \bkt{a\cdot \alpha}^2}\, e^{-{8\pi^2\over g_{\rm YM}^2}{\ell\widetilde{\ell}\over r^2} \Tr a^2} |Z_{\rm inst}|^2 Z_{\tt vec} Z_{\tt hyp}, \end{equation}
where $Z_{\tt vec}$ and $Z_{\tt hyp}$ are the one-loop contributions from the vector multiplet and hypermultiplet. $Z_{\rm inst}$ is the contribution of instantons at the north and the south pole. On the ellipsoid the one-loop contributions are given by
\begin{eqnarray}
Z_{\tt vec}&=&\bkt{Z_{\tt vec, U(1)}}^{r_G} \prod_{\alpha\in \Delta_+} {1\over (a\cdot \alpha)^2} \prod_{m,n\geq 0} \bkt{ \bkt{\bkt{m+1} b +{n+1\over b}}^2 +{\ell\widetilde{\ell}\over r^2}\bkt{a\cdot \alpha}^2}\nonumber\\ &&\hspace{150pt}\times\bkt{ \bkt{m b +{n\over b}}^2 +{\ell\widetilde{\ell}\over r^2}\bkt{a\cdot \alpha}^2},\label{Zvec}\\
Z_{\tt hyp}&=& \prod_{\rho\in {\cal R}} \prod_{m,n\geq 0}
\bkt{\bkt{m+\tfrac12} b +{n+\frac12\over b} +i\, \sqrt{\ell \widetilde{\ell}\over r^2} \bkt{a\cdot\rho + {\mu}} }^{-1}\nonumber\\ &&\hspace{105pt}\times\bkt{\bkt{m+\tfrac12} b +{n+\frac12\over b} -i\, \sqrt{\ell \widetilde{\ell}\over r^2} \bkt{a\cdot\rho + {\mu}}}^{-1}\,, \label{Zhyp}
\eea
where $r_G$ is the rank of the gauge group, the product on $\alpha$ is over all positive roots and the product on $\rho$ is over all weights in the representation ${\cal R}$ of the hypermultiplet. The hypermultiplet mass is ${\mu\over r}$. $Z_{\tt vec, U(1)}$ is the one-loop determinant associated with each element of the Cartan subalgebra, and is given by
\begin{equation}
\begin{split}
Z_{\tt vec, U(1)}&= Q \prod_{m,n\geq 0, (m,n)\neq (0,0)} \bkt{ {\bkt{m+1} b +{n+1\over b}}}{ \bkt{m b +{n\over b}}},
\end{split}
\end{equation}
where $Q={b+b^{-1}}$.
This term is normally dropped on the sphere because it only contributes to an overall constant. However, on more general manifolds it encodes interesting dependence on the background and is needed to reproduce the correct correlators when taking derivatives with respect to the deformation parameter~\cite{Chester:2020vyz}. The
instanton contribution is given by the Nekrasov partition function with equivariant parameters $b,{1\over b}$ and the hypermultiplet mass ${\mu\over r}$.
Equations \eqref{Zvec} and \eqref{Zhyp} are divergent and need to be regularized, except for a specific choice of the hypermultiplet mass for which the product of $Z_{\tt vec}$ and $Z_{\tt hyp}$ is finite. The regularized one-loop determinants can be expressed in terms of the Upsilon function $\Upsilon_b (x)$~\cite{Zamolodchikov:1995aa,Nakayama:2004vk}, which has zeros at $x=mb+{n\over b}+Q, -mb-{n\over b}$ for all non-negative integers $m$ and $n$:
\begin{equation}
\begin{split}
Z_{\tt vec}&= \bkt{\Upsilon_b'(0)}^{r_G} \prod_{\alpha\in\Delta_+}{\Upsilon_b (i\sqrt{\ell\widetilde{\ell}\over r^2} a\cdot\alpha)\Upsilon_b(-i\sqrt{\ell\widetilde{\ell}\over r^2} a\cdot\alpha)\over (a\cdot\alpha)^2},\\ Z_{\tt hyp}&=\prod_{\rho\in {\cal R}} \bkt{\Upsilon_b\bkt{i\, \sqrt{\ell\widetilde{\ell}\over r^2}\bkt{a\cdot\rho+{\mu}}+{Q\over 2}}}^{-1}.
\end{split}
\end{equation}
\subsubsection{Abelian $\mathcal{N}=4$ SYM}
We now use the localization results to explore the free energy of $\mathcal{N}=4$ SYM. Due to the possibility of non-minimal couplings, there is a subtlety in defining what we mean by an $\mathcal{N}=4$ theory on curved space. In this example we consider the theory of an $\mathcal{N}=2$ vector multiplet and a massless adjoint hypermultiplet coupled to the curved space by turning on the supergravity background fields in \eqref{Bfield}-\eqref{Dfield}.
The instanton partition function for the abelian theory is independent of the deformation and is given by \cite{Pestun:2007rz,Chester:2020vyz}
\begin{equation}
Z_{\tt inst, U(1)}=\prod_{k=1}^\infty {1\over 1-e^{2\pi i\, k \tau}}\,.
\end{equation}
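The coefficients of this product are the integer partition numbers, since the $U(1)$ instantons are labelled by Young diagrams. As a quick numerical sanity check, separate from the derivation, one can expand a truncated product in $q=e^{2\pi i\tau}$ and compare against Euler's pentagonal-number recurrence (a stdlib-Python sketch; all names are ours):

```python
def p_euler(N):
    # partition numbers p(0..N) via Euler's pentagonal-number recurrence
    p = [1] + [0] * N
    for n in range(1, N + 1):
        k, s = 1, 0
        while k * (3 * k - 1) // 2 <= n:
            g1 = k * (3 * k - 1) // 2
            g2 = k * (3 * k + 1) // 2
            sign = -1 if k % 2 == 0 else 1
            s += sign * p[n - g1]
            if g2 <= n:
                s += sign * p[n - g2]
            k += 1
        p[n] = s
    return p

# expand prod_{k>=1} 1/(1 - q^k) as a power series in q up to order N
N = 30
coeffs = [1] + [0] * N
for k in range(1, N + 1):          # factors with k > N cannot contribute
    for n in range(k, N + 1):      # multiply by 1 + q^k + q^{2k} + ...
        coeffs[n] += coeffs[n - k]

print(coeffs[:10])                 # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
assert coeffs == p_euler(N)
```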
The one-loop determinants need to be regularized.
On the round sphere the logarithmic divergence in the free energy is given by $-4a\log \Lambda_{\rm UV}=-\log \Lambda_{\rm UV}$. After regularizing the infinite products in \eqref{Zvec} and \eqref{Zhyp} the free energy can be written as
\begin{equation}
\log Z=
\bkt{{Q^2\over 4}-2} \log \Lambda_{\rm UV}-\frac12 \log \bkt{\tau-\overline{\tau}}+\log \Upsilon'_b (0)+\log Z_{\tt inst, U(1)}+\log \overline{ Z_{\tt inst, U(1)}}.
\end{equation}
Comparing with the general form of the free energy in~\cref{eq:logzGen} we see that
\begin{equation}
{\rm I}_{(\rm Weyl)^2}=64\pi^2 \bkt{{Q^2\over 4}-1},\quad \alpha=\beta(\tau,\overline{\tau})=P_h \bkt{\tau_i, b }=\overline{P}_h \bkt{\overline{\tau}_{\overline{i}},b}=0,\quad \gamma(\tau_i,\overline{\tau}_i,b)=\log \Upsilon_b'(0).
\end{equation}
The first term is theory independent and is computed from the logarithmic divergence of the free energy.
\subsubsection{$\mathcal{N}=4$ SYM at large $N$}
Let us now consider the large $N$ limit of the $\mathcal{N}=2$ theory with a massive adjoint hypermultiplet. The instanton contribution can be ignored in this limit and
the localized partition function can be written as
\begin{equation}\label{pf}
Z(b,\mu)=\int \prod_i \mathrm{d}\sigma_i \prod_{i<j}(\sigma_{ij})^2 e^{-\frac{8\pi^2}{\lambda}N\sum_i \sigma_i^2}Z_{\tt vec}Z_{\tt hyp}\,,
\end{equation}
where we have set $\ell={rb}$ and $\widetilde{\ell}={r\over b}$. The one-loop determinants can now be written as
{\small\begin{equation}\begin{split}
Z_{\tt vec}&= \Upsilon_b'(0)^{N-1}\prod_{i\ne j} {\Upsilon_b(i\, \sigma_{ij})\over i \sigma_{ij}}, \\
Z_{\tt hyp}&=\bkt{ \Upsilon_b({Q\over 2}+i\mu)}^{-N+1}\prod_{i\ne j} \bkt{\Upsilon_b(i\sigma_{ij}+i\, \mu+{Q\over 2})}^{-1}.
\end{split}
\end{equation}}
After some manipulation the infinite products can be written as
\begin{eqnarray}\label{ZZeq}
Z_{\tt vec}Z_{\tt hyp}&=&\bkt{\Upsilon_b'(0)\over \Upsilon_b({Q\over 2}+i\mu)}^{N-1}\prod_{i\ne j}\prod_{n=1}^\infty\prod_{m=1}^{n}\left(1-\frac{(n-2m)^2{\gamma'}^2}{(n+i\sigma'_{ij})^2}\right)\left(1-\frac{(n-2m+1+i\,\rho)^2{\gamma'}^2}{(n+i\sigma'_{ij})^2}\right)^{-1}\nonumber\\
&=&\bkt{\Upsilon_b'(0)\over \Upsilon_b({Q\over 2}+i\mu)}^{N-1}\nonumber\\ &&\times\exp\left(-\sum_{i\ne j}\sum_{p=1}^{\infty}\frac{(\gamma')^{2p}}{p}\sum_{n=1}^{\infty}\left[\frac{1}{(n+i\sigma'_{ij})^{2p}}\sum_{m=1}^{n}\left((n-2m)^{2p}-(n-2m+1+i\,\rho)^{2p}\right)\right]\right)\nonumber\\
\eea
where $\gamma'=\sqrt{1-{4\over Q^2}}$, $\rho=\frac{2\mu}{Q\gamma'}$, and $\sigma'_i=2\sigma_i/Q$. The sum over $n$ in \eqref{ZZeq} is divergent and needs to be regularized. To do this we cut off the sum at some large $n= r\Lambda_{\rm UV}'$ and then take the limit $\Lambda_{\rm UV}'\to\infty$. However, there is a subtlety as to how $\Lambda_{\rm UV}'$ is chosen since $n$ appears with the redefined fields $\sigma'_{ij}$ in \eqref{ZZeq}. In particular, we claim that to match with the definition of $\sigma'_{ij}$, $\Lambda_{\rm UV}'=2\Lambda_{\rm UV}/Q$, where $\Lambda_{\rm UV}$ is the cutoff that is held fixed as the squashing parameter is varied. While we do not prove it here, we believe that this choice is consistent with the enhancement to $\mathcal{N}=4$ supersymmetry. We will later show that this is also consistent with the results in \cite{Chester:2020vyz} for integrated correlators.
Equation \eqref{ZZeq} can then be reexpressed as
\begin{equation}\label{eq:n4largeNz}
Z_{\tt vec}Z_{\tt hyp}=\bkt{\Upsilon_b'(0)\over \Upsilon_b({Q\over 2}+i\mu)}^{N-1}\exp\left((1+\rho^2)\sum_{i\ne j}\sum_{p=1}^{\infty}{(\gamma')^{2p}}f_p(\sigma'_{ij},\rho)\right)\,,
\end{equation}
where the functions $f_p(x,\rho)$ can be written in terms of digamma functions and their derivatives, plus a logarithmic divergence. The first few examples are
\begin{eqnarray}\label{fpeq}
f_1(x,\rho)&=&-\log\Lambda_{\rm UV}'+\psi(1+i\,x)+i\,x\,\psi'(1+ix)\nonumber\\
f_2(x,\rho)&=&-\log\Lambda_{\rm UV}'+\psi(1+i\,x)+3i\,x\,\psi'(1+ix)-\left(\frac{1}{4}(1+\rho^2)+\frac{3}{2}x^2\right)\psi''(1+i\,x)\nonumber\\
&&\qquad\qquad\qquad-\left(\frac{1}{12}(1+\rho^2) i\,x+\frac{1}{6}i\,x^3\right)\psi'''(1+i\,x)\,\nonumber\\
f_3(x,\rho)&=&-\log\Lambda_{\rm UV}'+\psi(1+i\,x)+5i\,x\,\psi'(1+ix)\nonumber\\
&&\qquad-\left(\frac{5}{6}(1+\rho^2)+5x^2\right)\psi''(1+i\,x)-\left(\frac{5}{6}(1+\rho^2) i\,x+\frac{5}{3}i\,x^3\right)\psi'''(1+i\,x)\nonumber\\
&&\qquad+\left(\frac{1}{72}(1+\rho^2)(3+\rho^2)+\frac{5}{24}(1+\rho^2)x^2+\frac{5}{24}x^4\right)\psi^{(4)}(1+i\,x)\nonumber\\
&&\qquad+\left(\frac{1}{120}(1+\rho^2)(3+\rho^2)i\,x+\frac{1}{72}(1+\rho^2)i\,x^3+\frac{1}{120}i\,x^5\right)\psi^{(5)}(1+i\,x)\,.
\eea
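The expression for $f_1$ can be verified numerically: the inner sum over $m$ in \eqref{ZZeq} collapses to $n(1+\rho^2)$, and the cut-off sum over $n$ then reproduces $-\log\Lambda_{\rm UV}'+\psi(1+i\,x)+i\,x\,\psi'(1+ix)$. Below is a stdlib-Python sketch (not part of the derivation); the digamma and trigamma functions at complex argument are evaluated by their defining series, and the tolerances are our choices:

```python
import math

GAMMA_E = 0.5772156649015329          # Euler-Mascheroni constant

def psi0(z, M=200_000):
    # digamma at 1+z:  psi(1+z) = -gamma_E + sum_{n>=1} z / (n (n+z))
    return -GAMMA_E + sum(z / (n * (n + z)) for n in range(1, M + 1))

def psi1(z, M=200_000):
    # trigamma at 1+z:  psi'(1+z) = sum_{n>=1} 1 / (n+z)^2
    return sum(1.0 / (n + z) ** 2 for n in range(1, M + 1))

x, rho = 0.7, 0.3
ix = 1j * x

# the inner sum over m collapses to n (1 + rho^2)
for n in range(1, 40):
    inner = sum((n - 2 * m) ** 2 - (n - 2 * m + 1 + 1j * rho) ** 2
                for m in range(1, n + 1))
    assert abs(inner - n * (1 + rho ** 2)) < 1e-9

# p = 1 term, cut off at n = Lam (playing the role of Lambda'_UV); after
# dividing by (1 + rho^2), f_1 = -sum_n n / (n + i x)^2
Lam = 200_000
f1 = -sum(n / (n + ix) ** 2 for n in range(1, Lam + 1))

analytic = -math.log(Lam) + psi0(ix) + ix * psi1(ix)
assert abs(f1 - analytic) < 1e-3
```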
Combining the divergent part of $f_p$ with the divergent part of the prefactor in~\eqref{eq:n4largeNz} we find that the free energy has the logarithmic divergence
\begin{equation}
\log Z\supset -\bkt{{Q^2\over 4}-1+{{\mu}}^2} \bkt{ N^2-1} \log \Lambda_{\rm UV}'\,.
\end{equation}
Using the functions in \eqref{fpeq} one can then find corrections to the free energy order by order in $\gamma'$. We are particularly interested in the situation where $\lambda\gg1$. In this case we expect generic eigenvalues in the matrix model to be widely separated from each other at the saddle point. In other words one has that $|\sigma'_{ij}|\gg1$ for generic $i$ and $j$. In this case we have that all $f_p(x,\rho)$ satisfy $f_p(x,\rho)\approx -\log\Lambda_{\rm UV}'+\log(i\,x)$. Hence, after regularization, which removes the $\Lambda_{\rm UV}$ dependence, we have that\footnote{We have dropped the prefactor in \eqref{eq:n4largeNz} as it does not contribute to the leading order result at large $N$.}
\begin{equation}
Z_{\tt vec}Z_{\tt hyp}\Big|_{\rm reg.}\approx\prod_{i\ne j}\left(\frac{Q}{2}\right)^{\frac{Q^2}{2}-2+2\mu^2}\exp\left((1+\rho^2)\frac{(\gamma')^2}{1-(\gamma')^2}\left[\log(i\sigma'_{ij})\right]\right)=\prod_{i\ne j}(i\sigma_{ij})^{\frac{Q^2}{4}-1+\mu^2}\,.
\end{equation}
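The collapse of the sum over $p$ here uses the algebraic identity $(1+\rho^2)\,\gamma'^2/(1-\gamma'^2)=Q^2/4-1+\mu^2$, which follows from $\gamma'^2=1-4/Q^2$ and $\rho=2\mu/(Q\gamma')$. A quick numerical confirmation (Python sketch, ours; $b>1$ suffices since $Q$ is invariant under $b\to 1/b$):

```python
import math, random

random.seed(1)
for _ in range(1000):
    b  = random.uniform(1.05, 3.0)   # Q = b + 1/b is invariant under b -> 1/b
    mu = random.uniform(-2.0, 2.0)
    Q  = b + 1.0 / b
    gp = math.sqrt(1.0 - 4.0 / Q ** 2)        # gamma'
    rho = 2.0 * mu / (Q * gp)
    lhs = (1.0 + rho ** 2) * gp ** 2 / (1.0 - gp ** 2)
    rhs = Q ** 2 / 4.0 - 1.0 + mu ** 2
    assert abs(lhs - rhs) < 1e-9
```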
Substituting this into \eqref{pf} we find that the partition function is
\begin{equation}
Z{\big|_{\rm reg.}}\approx\int \prod_i \mathrm{d} \sigma_i \prod_{i<j}(\sigma^2_{ij})^{\frac{Q^2}{4}+\mu^2} e^{-\frac{8\pi^2}{\lambda}N\sum_i \sigma_i^2}\,.
\end{equation}
This is very close to the form of a Gaussian matrix model. In fact if $\mu=\pm i\frac{Q\gamma'}{2}$ it {\it is} the Gaussian matrix model, as we know it must be since at these points $Z_{\tt vec}Z_{\tt hyp}=1$. In the large $N$ limit the saddle point equation is
\begin{equation}\label{sadpt}
\frac{16\pi^2}{\lambda}N\sigma_i=2\left(\frac{Q^2}{4}+\mu^2\right)\sum_{j\ne i}\frac{1}{\sigma_i-\sigma_j}\,,
\end{equation}
which is equivalent to the saddle point equation for a Gaussian matrix model with $\lambda$ replaced by $\left(\frac{Q^2}{4}+\mu^2\right)\lambda$. Hence, the free energy is
\begin{equation}\label{ZQmu}
\log Z\big|_{\rm reg.}\approx\frac{N^2}{2}\left(\frac{Q^2}{4}+\mu^2\right)\log\left(\lambda\left(\frac{Q^2}{4}+\mu^2\right)\right)\,.
\end{equation}
Note that \eqref{ZQmu} is very similar to the free energy for strongly coupled $\mathcal{N}=2^*$ on the round sphere \cite{Russo:2012kj,Russo:2012ay,Bobev:2013cja}.
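The equivalence to a rescaled Gaussian matrix model can also be checked directly at small $N$: by Stieltjes' classical electrostatic result, the roots of the Hermite polynomial $H_N$ satisfy $x_i=\sum_{j\ne i}1/(x_i-x_j)$, so the saddle of \eqref{sadpt} is $\sigma_i=\sqrt{c\lambda/(8\pi^2 N)}\,x_i$ with $c=Q^2/4+\mu^2$, i.e. the same scaling as a Gaussian model with $\lambda\to c\lambda$. A Python sketch for $N=3$ (all parameter values are ours):

```python
import math

# saddle-point equation: (16 pi^2 N / lam) s_i = 2 c sum_{j != i} 1/(s_i - s_j),
# with c = Q^2/4 + mu^2; claim: solved by s_i = sqrt(c lam / (8 pi^2 N)) x_i,
# where x_i are the roots of the Hermite polynomial H_N (here N = 3)
N, lam, c = 3, 10.0, 1.7
x = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]   # roots of H_3(x) = 8x^3 - 12x

# Stieltjes: Hermite roots satisfy x_i = sum_{j != i} 1/(x_i - x_j)
for i in range(N):
    assert abs(x[i] - sum(1.0 / (x[i] - x[j]) for j in range(N) if j != i)) < 1e-12

s = [math.sqrt(c * lam / (8 * math.pi ** 2 * N)) * xi for xi in x]
for i in range(N):
    lhs = (16 * math.pi ** 2 * N / lam) * s[i]
    rhs = 2 * c * sum(1.0 / (s[i] - s[j]) for j in range(N) if j != i)
    assert abs(lhs - rhs) < 1e-9
```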
Due to the logarithmic divergence, one needs combinations of at least three derivatives with respect to $Q$ and $\mu$ for scheme independence. For $\mu=0$ we can write the free energy, including the divergent part, as
\begin{equation}
\log Z \approx N^2 \sbkt{-\bkt{{Q^2\over 4}-2}\log \Lambda_{\rm UV} +{Q^2\over 8} \log \lambda+{Q^2\over8} \log {Q^2\over 4}}.
\end{equation}
Comparing this with the Weyl anomaly and the general form of the finite part in~\eqref{eq:logzGen}, we see that in the large $N$ limit\footnote{We use that in the large $N$ limit $K(\tau,\overline{\tau})=-6 N^2 \log \lambda$.}
\begin{eqnarray}
&{\rm I}_{(\rm Weyl)^2}= {1\over 64\pi^2} \left({Q^2\over 4}-1\right),\qquad \alpha= 512,\qquad &\beta(\tau_i,\overline{\tau}_i)=P_h \bkt{\tau_i, b }=\overline{P}_h \bkt{\overline{\tau}_{\overline{i}},b}=0,\nonumber\\
&& \gamma(\tau_i,\overline{\tau}_i,b)=-{N^2Q^2\over 8} \log {Q^2\over 4}.
\eea
\subsection{Subtleties regarding $\mathcal{N}=4$ on curved space and an infinite set of relations}
It is a subtle issue to identify a quantum field theory on a general curved space as a CFT.
For a conformally flat background, one can canonically map a CFT from flat space to the curved space. For a generic curved space one cannot unambiguously determine a unique fixed point due to the presence of more than one length scale. The ambiguity manifests itself in various choices for the non-minimal coupling of the QFT to the curved space. Demanding supersymmetry substantially restricts these choices but still leaves some ambiguity. One possibility is to determine the beta function of the theory on the curved space\footnote{The flat space beta function can be determined from the localized partition function by examining the scale dependence of the one-loop determinants and comparing it to the classical part \cite{Pestun:2007rz,Minahan:2013jwa,Minahan:2017wkz} for the case of the sphere.}. However, the renormalization group flow is not well understood on curved space~\cite{Hollands:2002ux}.
This ambiguity is also present when we place $\mathcal{N}=4$ SYM on a curved space. We define the $\mathcal{N}=4$ theory as $\mathcal{N}=2^*$ with a special value of the hypermultiplet mass parameter, $\mu/r$. Na\"ively, one would associate an adjoint hypermultiplet with $\mu=0$ to the $\mathcal{N}=4$ theory. Indeed, this turns out to be the correct choice for the theory on the round sphere~\cite{Okuda:2010ke}. Near the poles, the round supersymmetric background is equivalent to the $\Omega$-background with equivariant parameters $\epsilon_1=\epsilon_2={1\over r}$. For generic values of the hypermultiplet mass superconformal symmetry is broken and only 8 supercharges are preserved. For the correct value of the adjoint hypermultiplet mass, all 32 superconformal symmetries are restored in the $\Omega$-background. This value depends on the equivariant parameters, and for the round sphere corresponds to $\mu=0$.
The $\mathcal{N}=4$ value of $\mu$ is modified on the deformed sphere.
At the superconformal point, the mass parameter $m_N$ which appears in the Nekrasov partition function is equal to either equivariant parameter \cite{Okuda:2010ke}. In the more general case, $m_N$ is related to the hypermultiplet mass $ {\mu\over r}$ and the equivariant parameters $\epsilon_1, \epsilon_2$ as \cite{Okuda:2010ke}
\begin{equation}\label{masseps}
m_N=i\, {\mu\over r} +{\epsilon_1+\epsilon_2\over2}.
\end{equation}
Since the equivariant parameters are equal on the round sphere, setting $\mu=0$ leads to
$
m_N={1\over r},
$
which is the conformal mass term on the round sphere. For the deformed sphere we have that $\epsilon_1=\frac{b}{r},$ $\epsilon_2=\frac{1}{br}$, and thus get\footnote{In computing the partition function via localization only values of the background fields near the poles appear in the one-loop determinants. It is possible that there is a non-trivial profile of the hypermultiplet mass which enhances the symmetry and has the value in~\eqref{eq:n4mass} at the poles.}
\begin{equation}\label{eq:n4mass}
i \mu = \pm {1\over 2} \bkt{b-{1\over b}}
\end{equation}
so that $m_N=\epsilon_1$ or $m_N=\epsilon_2$ at the poles.
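That \eqref{masseps} with the mass \eqref{eq:n4mass} lands exactly on an equivariant parameter is a one-line check (Python sketch with $r=1$, ours):

```python
import random

random.seed(2)
for _ in range(100):
    b = random.uniform(0.2, 5.0)
    e1, e2 = b, 1.0 / b                       # equivariant parameters, units 1/r
    for sign in (+1, -1):
        i_mu = sign * 0.5 * (b - 1.0 / b)     # i*mu at the N=4 point
        m_N = i_mu + 0.5 * (e1 + e2)          # relation (masseps)
        assert abs(m_N - (e1 if sign > 0 else e2)) < 1e-12
```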
The relation in \eqref{eq:n4mass} is also the value advocated in~\cite{Fucito:2015ofa} by demanding that the instanton partition function is trivial. By embedding the supersymmetric background in $\mathcal{N}=4$ supergravity~\cite{Maxfield:2017bfe}, one can show that the supersymmetry enhances at the poles~\cite{Charles:2020} for this particular value of the mass parameter. Moreover, for this value of the hypermultiplet mass the infinite products in the one-loop determinants simplify to $Z_{\tt vec}Z_{\tt hyp}=1$.\footnote{Another interesting choice is $i\,\mu=-{1\over2}\bkt{b+{1\over b}}$, for which the one-loop determinants cancel the Vandermonde determinant and only the instantons contribute to the partition function. For $b=1$ this choice coincides with the theory pointed out in eq. (5.16) of~\cite{Pestun:2007rz}.}
Consequently, the partition function
with any gauge group is independent of the deformation and given by
\begin{equation}
Z\bkt{b,\mu=\pm\bkt{{i\, b\over 2} -{i\, \over2 b}}}=\int \mathrm{d} a \exp\bkt{-{8\pi^2 \over g_{\rm YM}^2} \Tr a^2} {\prod_{\alpha\in \Delta_+} \bkt{a\cdot \alpha}^2}\,.
\end{equation}
For the $SU(N)$ gauge group in the large $N$ limit, the result in~\cref{ZQmu} becomes exact for the $\mathcal{N}=4$ value for the mass and we obtain the free energy
\begin{equation}
\log Z=-\frac{N^2}{2}\log\lambda.
\end{equation}
This has remarkable consequences for the integrated correlators that can be computed by taking derivatives of the free energy with respect to the squashing parameter. In particular
\begin{equation}\label{eq:dblogz0}
\partial_b^n \log Z \bkt{b, \mu=\pm\bkt{{i\, b\over 2} -{i\, \over2 b}}}=0
\end{equation}
gives a relation between various integrated $n$- and lower-point correlation functions. In~\cite{Chester:2020vyz}, three non-trivial relations between various four-point correlators in $\mathcal{N}=4$ SYM were derived by studying the mass-deformed $
\mathcal{N}=2^*$ on the deformed sphere.
We can demonstrate
how two of these relations are a simple consequence
of \eqref{eq:dblogz0}\footnote{To be precise, one obtains unambiguous relations between various derivatives of the free energy which can be related to integrated correlators. In doing so one needs to be careful in dealing with possible contributions from redundant operators. We thank G. Festuccia for pointing this out.}. To do so, we write the deformed Lagrangian schematically as
\begin{equation}
{\mathscr L}= \frac{1}{g_{\rm YM}^2}\sum_{n=0}^{\infty} \bkt{b-1}^n ({\mathscr L}^{0,n}+\mu{\mathscr L}^{1,n}
+\mu^2{\mathscr L}^{2,n}
).
\end{equation}
The first relation in~\cite{Chester:2020vyz} is
\begin{equation}
-64 \pi^2\partial_\tau \partial_{\overline{\tau}} \bkt{\partial_\mu^2-\partial_b^2} \log Z(b,\mu)\Big|_{\mu=0,b=1}=0\,.
\end{equation}
This is equivalent to \footnote{We set $\theta=0$ in the following expressions.}
\begin{equation}\label{rel1}
\begin{split}
&-2\int \prod_{i=1}^2 \mathrm{d}^4 x_i \sqrt{g(x_i)} \langle\sL^{1,0}(x_1)\sL^{1,0}(x_2)+2 \sL^{0,0}(x_1)\sL^{2,0}(x_2)-\sL^{0,1}(x_1)\sL^{0,1}(x_2)-2\sL^{0,0}(x_1)\sL^{0,2}(x_2) \rangle\\
&+{4\over g_{\rm YM}^2}\int \prod_{i=1}^3 \mathrm{d}^4 x_i \sqrt{g(x_i)}\langle \sL^{0,0}(x_1)\sL^{1,0}(x_2) \sL^{1,0}(x_3) - \sL^{0,0}(x_1)\sL^{0,1}(x_2) \sL^{0,1}(x_3) \rangle\\
&-{1\over g_{\rm YM}^4} \int \prod_{i=1}^4 \mathrm{d}^4 x_i \sqrt{g(x_i)}\langle \sL^{0,0}(x_1)\sL^{0,0}(x_2)\sL^{1,0}(x_3)\sL^{1,0}(x_4) -\sL^{0,0}(x_1)\sL^{0,0}(x_2)\sL^{0,1}(x_3)\sL^{0,1}(x_4)\rangle
\\ &=0\,.
\end{split}
\end{equation}
It is straightforward to check that the same constraint follows from the deformation independence of the $\mathcal{N}=4$ theory. In particular, the combination appearing in \eqref{rel1} is equal to
\begin{equation}
-32 \pi^2\partial_\tau \partial_{\overline{\tau}} \partial_b^2 \sbkt{\log Z(b, {i\, b\over2}-{i\over 2 b})+\log Z(b,-{i\, b\over 2}+{i\over 2 b})}\Bigg|_{b=1}.
\end{equation}
Similarly we can show that the second relation in \cite{Chester:2020vyz},
\begin{equation}
\bkt{-6\partial_b^2\partial_\mu^2+\partial_\mu^4+\partial_b^4-15\partial_b^2} \log Z(b,\mu)\Big|_{\mu=0,b=1}=0\,,
\end{equation}
is equivalent to
\begin{equation}
\bkt{\partial_b^4-15 \partial_b^2} \sbkt{\log Z(b, {i\, b\over2}-{i\over 2 b})+\log Z(b,-{i\, b\over 2}+{i\over 2 b})}\Bigg|_{b=1}=0,
\end{equation}
where we also used $\log Z(b,\mu)=\log Z(b^{-1},\mu)$, which is evident from the construction of the partition function.
One can, in fact, derive an infinite number of relations between various integrated correlators using
\begin{equation}
\sum_{n} a_n \partial_b^{n}\sbkt{\log Z(b, {i\, b\over2}-{i\over 2 b})+\log Z(b,-{i\, b\over 2}+{i\over 2 b})}\Bigg|_{b=1}=0,
\end{equation}
where the $a_n$ are chosen to satisfy
\begin{equation}
\sum_{n} a_n \partial_b^{n} (b+{1\over b})^2 \Big|_{b=1}=0\,,
\end{equation}
in order to ensure that ambiguous terms in the free energy do not contribute.
For example, the above relation with the operator $\partial_b^5+6 \partial_b^4$ translates into
\begin{equation}
\bkt{\partial_b^5+6\partial_b^4-10 \partial_b^3 \partial_\mu^2-6\partial_b^2\partial_\mu^2-4\partial_\mu^4} \log Z(b,\mu)\Big|_{b=1,\mu=0}=0.
\end{equation}
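That the operator $\partial_b^5+6\partial_b^4$ satisfies the condition on the $a_n$ can be checked exactly: writing $(b+1/b)^2=b^2+2+b^{-2}$, the $n$-th derivative of $b^{-2}$ at $b=1$ is $(-1)^n(n+1)!$, while the $b^2$ term does not contribute for $n\geq 3$ (Python sketch, ours):

```python
from math import factorial

def d_f(n):
    # n-th derivative of (b + 1/b)^2 = b^2 + 2 + b^(-2) at b = 1, for n >= 3:
    # only the b^(-2) term survives, d^n/db^n b^(-2) = (-1)^n (n+1)! b^(-n-2)
    assert n >= 3
    return (-1) ** n * factorial(n + 1)

# the operator d^5/db^5 + 6 d^4/db^4 annihilates (b + 1/b)^2 at b = 1
assert d_f(5) + 6 * d_f(4) == 0      # -720 + 6 * 120 = 0
```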
The third relation in \cite{Chester:2020vyz} is
\begin{equation}\label{rel3}
-16 c=(3\partial_b^2\partial_\mu^2-\partial_\mu^4-16\tau_2^2\partial_\tau\partial_{\overline{\tau}}\partial_\mu^2)\log Z(b,\mu)\Big|_{b=1,\mu=0},
\end{equation}
where $c=\frac{N^2-1}{4}$. The authors of \cite{Chester:2020vyz} provided overwhelming evidence for \eqref{rel3}, and it is straightforward to show that this is consistent with the large-$N$ expression given in~\eqref{ZQmu}, but it does not follow from \eqref{eq:dblogz0} alone. It would be interesting to demonstrate \eqref{rel3} directly.
\section*{Acknowledgements}
We thank A. Ardehali for helpful discussions and G. Festuccia for helpful discussions and comments on the manuscript.
This research is supported in part by
Vetenskapsr{\aa}det under grant \#2016-03503 and by the Knut and Alice Wallenberg Foundation under grant Dnr KAW 2015.0083.
JAM thanks the Center for Theoretical Physics at MIT for kind (virtual)
hospitality during the course of this work.
\section{Introduction}
Event cameras (dynamic vision sensors) are imaging devices which asynchronously measure per-pixel brightness changes. These sensors are suited for robotics and virtual reality applications, since they offer lower latency, lower power consumption as well as higher dynamic range and higher temporal resolution compared to frame-based cameras. In order to actually tap into these benefits, computer vision algorithms for event-based sensors need to be developed. However, since event sensors are based on fundamentally different measurement principles than standard frame-based cameras, traditional computer vision algorithms cannot simply be applied to event data, but rather need to be developed from scratch.
Event cameras report per-pixel changes of the observed logarithmic brightness. Each event $\mathbf{e}_i = \{ t_i, \mathbf{x}_i, p_i\}$ is a tuple of a microsecond-resolution timestamp $t_i$, image plane coordinates $\mathbf{x}_i = (x_i, y_i)$ and the respective polarity change $p_i \in \{-1, 1\}$. The data stream is asynchronous and sparse because an event is transmitted only if the logarithmic brightness changes by a predefined, usually unknown threshold. This is in contrast to frame-based cameras, where each pixel is illuminated during a shared exposure time interval, resulting in an absolute brightness measurement at a fixed frequency.
A common approach is to accumulate events over a fixed time interval into a frame-like structure and apply traditional computer vision methods on them. The drawback of a naive accumulation is that moving edges in the scene result in blurred edges in the image plane. One popular way to correct for this error is by estimating constant optical flow in a certain space-time window, i.e. at a predefined patch location and during a time interval. Each event in the window is then warped to a common reference time using the estimated optical flow. The goal is to create a maximally sharp event patch image. This approach is known as motion-compensation \cite{gallego2018unifying}. It is applied to a variety of event-based vision tasks, such as feature tracking \cite{Gehrig19ijcv}, \cite{Seok2020WACV}, ego-motion estimation \cite{Zhu_2019_CVPRUnsupervised}, \cite{ye2018unsupervised} or motion segmentation \cite{stoffregen2019event}.
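The idea of motion compensation can be illustrated in a few lines of code: synthetic events generated by an edge moving with constant flow $v$ are warped to a common reference time, and accumulating the warped events with the true flow yields a sharper, more concentrated event image than naive accumulation. The sketch below is ours, not part of the method; the contrast measure (sum of squared per-pixel counts) is one common choice from the motion-compensation literature \cite{gallego2018unifying}:

```python
import random

random.seed(0)
W, H = 64, 32
v = 25.0          # true optical flow along x (pixels per second)
t_ref = 0.0       # common reference time

# synthetic events from a vertical edge moving with constant velocity v
events = []       # (t, x, y, polarity)
for _ in range(4000):
    t = random.uniform(0.0, 1.0)
    y = random.randrange(H)
    x = 10.0 + v * t + random.gauss(0.0, 0.3)   # edge position + pixel noise
    if 0 <= x < W:
        events.append((t, x, y, 1))

def contrast(flow):
    # warp each event to t_ref with the given constant flow, accumulate into
    # an image, and return the sum of squared counts (for a fixed number of
    # events this is maximal when the image is most concentrated, i.e. sharp)
    img = [[0] * W for _ in range(H)]
    for t, x, y, p in events:
        xw = round(x - flow * (t - t_ref))
        if 0 <= xw < W:
            img[y][xw] += p
    return sum(c * c for row in img for c in row)

assert contrast(v) > contrast(0.0)   # compensating with the true flow sharpens
```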
In the case of feature tracking, there remains one main drawback with motion-compensation: usually, the estimated optical flow is assumed to be constant over a certain (short) time interval, hence the trajectory of events in the $x$-$y$-$t$ space-time volume is piece-wise linear. To resolve this problem, we propose to track features with a continuous curve and to optimize the curve in a sliding-window manner. In this paper, we propose to employ B-splines as the curve representation in sliding window optimization for two reasons: (1) To account for feature tracks of variable length, we can build $n$-th order continuous trajectories by adding any number of knots to the curve. (2) Adding knots or changing knot values only changes the curve {\em locally}, so we can reduce the problem size by setting old knots which are out of scope to a constant value. Our approach is compared to a state-of-the-art event feature tracking algorithm and shows significant improvements in terms of error and feature age. The contributions of this paper can be summarized as follows:
\begin{enumerate}\setlength\itemsep{2mm}
\item We introduce the first event feature tracking algorithm that uses continuous B-spline functions and employs SE(2) warping of events.
\item We optimize the B-spline parameters of the trajectory in a sliding-window manner.
\item We experimentally confirm significant improvements in tracking precision and feature age over existing event tracking algorithms.
\end{enumerate}
\section{Related Work}
One line of work performs hybrid tracking by combining frames with events. The advantage of hybrid approaches is that during certain degenerate motions, such as movements parallel to an intensity edge (when no events are triggered), the frames can still provide useful intensity information for tracking. The work by Gehrig et al. \cite{Gehrig19ijcv} detects features in standard frames and tracks those using events. Their approach employs an event generation model, which is based on frames and estimated optical flow to predict the observed events. To model larger variations in appearance, the feature patches are additionally parameterized by a rigid warp function in the image plane. This method achieves accurate results on a variety of datasets. However, it relies on specialized hardware, such as the Dynamic and Active-pixel Vision Sensor \cite{brandli2014240}, which captures frames and events within the same pixel array, or alternatively requires beam splitting techniques.
There have been adaptations of frame-based corner detectors to event streams, such as the event-FAST corner detector by Muggler et al. \cite{muggler2017bmcvEfsat} or the event-Harris detector by Vasco et al. \cite{Vasco2016FastEH}. However, those trackers are not robust to changes in motion direction \cite{manderscheid2019speed}. In our work, we circumvent this issue by keeping most information from the past in the form of a template, see section \ref{sec::method}.
The work by Ignacio et al. \cite{ignacio20193dv} proposes an event-by-event tracking approach which models different hypotheses per tracked feature. While this work tracks features at a very high rate of up to 12500 events per second, it is still formulated in discrete time and thus does not allow for simple derivative calculations of the trajectory, which can be useful in some applications. The approach by Manderscheid et al. \cite{manderscheid2019speed} also performs tracking on an event-by-event basis. They train a random forest which extracts only the corner features from the event stream. The main drawback is that they rely on absolute intensity information during training time.
The work by Zhu et al. \cite{zhu2017icra} first builds feature templates by accumulating the event stream over a short time interval. A batch of new events is then aligned to those templates by probabilistic, soft association. The optical flow is computed as an expectation over all associations, and the patch can undergo affine deformations during tracking.
The work by Seok et al. \cite{Seok2020WACV} is the first approach to formulate event tracking in continuous time. However, adding more knots to the existing B\'ezier curve changes all previous knots, so the feature trajectory has to be formed by concatenating many short B\'ezier curves and is only zeroth-order continuous.
To the best of our knowledge, we are the first to formulate event tracking with the concept of sliding window optimization.
\section{Method}
\label{sec::method}
We define a B-spline curve $B(t;\Theta,\Delta t)$ which returns a transformation $T_{r,t}$ that maps a 2D point from the current frame at timestamp $t$ to the reference frame $r$. The knots of the spline are denoted by $\Theta$, and the time difference between two knots is denoted by $\Delta t$, a pre-determined constant which we omit from the notation in the remainder of the paper. An event $e_n$ lies within the region of a patch if it satisfies the condition $\|B(t_n;\Theta)\mathbf{x}_{start} - \mathbf{x}_n\| < R$, where $\mathbf{x}_{start}$ is the starting position of the patch and $R$ is the radius of the patch. The position of a warped event $e_n$ is defined as
\begin{equation}
\mathbf{x}_n' = B(t_n;\Theta,\Delta t)\mathbf{x}_n
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=130mm]{figures/optim_window.png}
\caption{Process of updating the sliding window when optimizing 3 knots using SE(2) warping. The arrows on the knots indicate the rotation angles; green knots are inside the sliding window, red knots are fixed. The image patches containing the star shape show the History patches. Top: the latest event is in the second-to-last event bin. Middle: the latest event is in the last event bin, so we add an additional knot to the trajectory. Bottom: after adding a new knot, we update the History patch and the location of the sliding window to keep the problem size fixed.} \label{fig::optWindow}
\end{figure}
We create a patch image by warping back all events in the patch to the reference time (here we set the reference time to the starting time of the feature). Since the warped positions of events are not guaranteed to be integers, a bi-linear kernel $k_b$ is used to construct differentiable patches. If event $e_n$ is received at pixel $\mathbf{x}_n$ at time $t_n$, the patch image from events received in the time interval $[t:t']$ is defined as
\begin{equation}
I_{[t:t']}(\mathbf{x}) = \sum_{\{n;t_n \in [t:t']\}} k_b(\mathbf{x},\mathbf{x}_n')
\end{equation}
\begin{equation}
k_b(\mathbf{a},\mathbf{b}) = max(0,1-|a_1-b_1|) \cdot max(0,1-|a_2-b_2|)
\end{equation}
The advantage of using a bi-linear kernel instead of a Dirac delta function is that the patch image is also well-defined at the sub-pixel level, which enables us to calculate a well-defined $\frac{\partial I}{\partial \mathbf{x'}}$.
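As an illustration, the bi-linear kernel and the resulting patch-image accumulation can be sketched as follows (a minimal Python sketch; the function and variable names are our own and do not come from any released implementation):

```python
import numpy as np

def k_b(a, b):
    """Bi-linear kernel k_b(a, b) = max(0, 1-|a1-b1|) * max(0, 1-|a2-b2|)."""
    return max(0.0, 1.0 - abs(a[0] - b[0])) * max(0.0, 1.0 - abs(a[1] - b[1]))

def patch_image(warped_events, size):
    """Accumulate warped (sub-pixel) event positions into a patch image."""
    I = np.zeros((size, size))
    for x, y in warped_events:
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        # each event contributes to its four neighbouring integer pixels
        for px in (x0, x0 + 1):
            for py in (y0, y0 + 1):
                if 0 <= px < size and 0 <= py < size:
                    I[py, px] += k_b((px, py), (x, y))
    return I
```

Because each event's unit mass is split linearly among four pixels, the accumulated image varies smoothly with the warped positions, which is what makes the derivative $\partial I / \partial \mathbf{x}'$ well-defined.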
In principle, we could optimize a continuous B-spline trajectory by optimizing all knots on the spline and taking all events into account, but this is expensive and inefficient. Inspired by \cite{Seok2020WACV}, we make use of a History patch $H$ which compresses the information of previous knots and events in order to speed up the algorithm. The History patch $H$ is built through a recurrence relation. $H$ and the modified patch image $I^*(t)$ at timestamp $t$ are defined as
\begin{equation}
H_{t_{k}}(\mathbf{x};\mathbf{\Theta}) = I_{[t_{k-1}:t_{k})}(\mathbf{x};\mathbf{\Theta}) + \rho H_{t_{k-1}}(\mathbf{x};\mathbf{\Theta})
\end{equation}
\begin{equation}
I^*(\mathbf{x}, t;\mathbf{\Theta}) = I_{[t_{k}:t)}(\mathbf{x};\mathbf{\Theta}) + H_{t_{k}}(\mathbf{x};\mathbf{\Theta})
\end{equation}
where $t_k$ is the timestamp of knot $k$, $H_{t_{0}}(\mathbf{x}) = \mathbf{0}$ and $0 \leq \rho \leq 1$ is the decay parameter.
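The recurrence above can be sketched numerically; with a constant per-interval image, $H$ accumulates geometrically as $1 + \rho + \rho^2 + \dots$ (a hypothetical example, not taken from an official implementation):

```python
import numpy as np

def update_history(H_prev, I_segment, rho):
    """Recurrence H_{t_k} = I_{[t_{k-1}:t_k)} + rho * H_{t_{k-1}}."""
    return I_segment + rho * H_prev

rho = 0.5
H = np.zeros((5, 5))                 # H_{t_0} = 0
for _ in range(3):                   # three knot intervals, each with unit image
    H = update_history(H, np.ones((5, 5)), rho)
# H = 1 + rho + rho^2 = 1.75 in every pixel
```

The decay parameter $\rho$ thus controls how quickly old event information is forgotten: $\rho = 0$ keeps only the most recent interval, while $\rho \to 1$ retains the full past.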
The best B-spline curve is the one which maximizes the variance (sharpness) of the patch image formed from the active events together with the History patch; see Fig. \ref{fig::optWindow} for an equivalent visualization of the problem using SE(2) warping. The modified variance $\sigma^*$ of the patch at timestamp $t$ is defined as
\begin{equation}
\label{eqn:modified_var}
\sigma^*(P(t);\mathbf{\Theta}) = \frac{1}{N} \sum_\mathbf{x} \left( I^*(\mathbf{x}, t;\mathbf{\Theta}) - \langle I^*(\mathbf{x}, t;\mathbf{\Theta})\rangle \right)^2
\end{equation}
where $N$ is the total number of pixels in the patch, $\mathbf{x}$ is the image coordinate, and $\langle I^*(\mathbf{x}, t;\mathbf{\Theta})\rangle$ is the mean value of the modified patch image $I^*$. The work in \cite{gallego2019focus} shows that among 22 candidate measures of sharpness in event images, the variance is often a suitable choice.\\
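A small numerical illustration of why variance serves as a sharpness measure: a well-aligned warp concentrates the event mass on few pixels and yields a high variance, while a poorly aligned warp smears the same mass over the patch and yields a low one (a toy example with values of our own choosing):

```python
import numpy as np

def modified_variance(I):
    """sigma*: per-pixel variance of a patch image (mean over all N pixels)."""
    return float(np.mean((I - I.mean()) ** 2))

# the same total event mass, warped well (concentrated) vs. poorly (smeared)
sharp = np.zeros((5, 5)); sharp[2, 2] = 100.0
blurred = np.full((5, 5), 4.0)
```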
To maximize $\sigma^*$, we need the Jacobian of the modified patch image with respect to the warping parameters $\mathbf{\Theta}$:
\begin{equation}
\frac{\partial I^*(\mathbf{x}, t;\mathbf{\Theta}) }{\partial \mathbf{\Theta}} = \sum_{n=1}^{N_e^i} \frac{\partial k_b(\mathbf{x},\mathbf{x}_n')}{\partial \mathbf{x}_n'} \frac{\partial \mathbf{x}_n'}{\partial B} \frac{ \partial B(t_n;\mathbf{\Theta})}{\partial \mathbf{\Theta}}
\end{equation}
Using an SE(2) B-spline as an example, the parameter vector $\mathbf{\Theta}$ with $N_k$ knots is defined as\\
\begin{equation}
\mathbf{\Theta} = [k_1, \cdots, k_{N_k} ]
\end{equation}
with $k_i = [x_1^i,x_2^i,\theta^i]$
\begin{equation}
\frac{ \partial \mathbf{x}'}{\partial B}
= -\begin{bmatrix}
R_{r,t} & | & \sigma_x \mathbf{x}'
\end{bmatrix}_{2\times3}
\end{equation}
with $\sigma_x = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, where $R_{r,t}$ is the rotation matrix of the feature relative to its original orientation. Furthermore,
\begin{equation}
\frac{ \partial B(t_n;\mathbf{\Theta})}{\partial k_i}
= \lambda(t, \Delta t_{knot}) \mathbb{I}_{3\times 3}
\end{equation}
We refer the reader to \cite{sommer19spline} for the derivation of $\lambda(t, \Delta t_{knot})$. The optimal solution $\mathbf{\Theta}^*$ of Equation \ref{eqn:modified_var} is obtained by maximizing $\sigma^*$ via line search.
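The text does not pin down the exact line-search variant; one common derivative-free choice for maximizing a unimodal objective along the ascent direction is golden-section search, sketched below (illustrative only; the toy objective stands in for the sharpness along the gradient direction):

```python
import math

def golden_section_max(f, lo, hi, tol=1e-8):
    """Derivative-free line search maximizing a unimodal f on [lo, hi]."""
    inv_phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c = b - inv_phi * (b - a)
    d = a + inv_phi * (b - a)
    while b - a > tol:
        if f(c) > f(d):            # the maximum lies in [a, d]
            b = d
        else:                      # the maximum lies in [c, b]
            a = c
        c = b - inv_phi * (b - a)
        d = a + inv_phi * (b - a)
    return 0.5 * (a + b)

# step size maximizing a toy concave objective along a search direction
s_star = golden_section_max(lambda s: -(s - 2.0) ** 2, 0.0, 5.0)
```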
\begin{table}
\caption{Quantitative Comparison of \cite{zhu2017icra} against our method with error threshold 3.}\label{tab::ours-zhu-th3}
\centering
\resizebox{120mm}{!}{%
\begin{tabular}{|
*{1}{>{\centering\arraybackslash}p{0.25\textwidth}}|
*{1}{>{\centering\arraybackslash}p{0.15\textwidth}}||
*{4}{>{\centering\arraybackslash}p{0.17\textwidth}}
*{1}{>{\centering\arraybackslash}p{0.19\textwidth}|}}
\hline
dataset & method & mean relative age & mean age[sec] & mean error[pix] & mean common error[pix] & mean common error for ours/ours$^*$[pix]\\
\hline
\hline
\multirow{3}{*}{shapes\_translation} & ours\;\; & \textbf{0.27} & \textbf{0.71} & 0.80 & \textbf{0.91} & \textbf{0.95}\\
& ours* & 0.26 & 0.63 & \textbf{0.79} & \textbf{0.91} & \textbf{0.95}\\
& Zhu et al. & 0.04 & 0.08 & 2.89 & 2.92 & -\\
\hline
\multirow{3}{*}{shapes\_rotation} & ours\;\; & 0.30 & 0.61 & \textbf{0.81} & \textbf{0.93} & \textbf{0.90}\\
& ours* & \textbf{0.32} & \textbf{0.67} & 0.83 & \textbf{0.93} & \textbf{0.90}\\
& Zhu et al. & 0.02 & 0.02 & 2.81 & 2.81 & -\\
\hline
\multirow{3}{*}{shapes\_6dof} & ours\;\; & \textbf{0.32} & \textbf{1.48} & \textbf{1.05} & \textbf{0.61} & \textbf{1.14}\\
& ours* & 0.31 & 1.45 & 1.07 & \textbf{0.61} & 1.16\\
& Zhu et al. & 0.05 & 0.18 & 2.05 & 1.97 & -\\
\hline
\multirow{3}{*}{poster\_translation} & ours\;\; & 0.47 & 1.43 & \textbf{0.87} & \textbf{0.71} & \textbf{0.88}\\
& ours* & \textbf{0.48} & \textbf{1.47} & \textbf{0.87} & \textbf{0.71} & \textbf{0.88}\\
& Zhu et al. & 0.33 & 0.71 & 1.15 & 1.10 & -\\
\hline
\multirow{3}{*}{poster\_rotation} & ours\;\; & 0.41 & 1.33 & \textbf{0.79} & \textbf{0.66} & 0.80\\
& ours* & \textbf{0.45} & \textbf{1.54} & 0.80 & \textbf{0.66} & \textbf{0.79}\\
& Zhu et al. & 0.20 & 0.51 & 1.51 & 1.40 & -\\
\hline
\multirow{3}{*}{poster\_6dof} & ours\;\; & \textbf{0.35} & 2.57 & \textbf{1.05} & \textbf{0.87} & \textbf{1.04}\\
& ours* & \textbf{0.35} & \textbf{2.61} & \textbf{1.05} & \textbf{0.87} & \textbf{1.04}\\
& Zhu et al. & 0.25 & 1.37 & 1.32 & 1.28 & -\\
\hline
\multirow{3}{*}{boxes\_translation} & ours\;\; & 0.35 & 1.35 & \textbf{0.98} & \textbf{0.88} & \textbf{0.98}\\
& ours* & \textbf{0.37} & \textbf{1.50} & 1.01 & \textbf{0.88} & 1.00\\
& Zhu et al. & 0.31 & 0.95 & 1.19 & 0.94 & -\\
\hline
\multirow{3}{*}{boxes\_rotation} & ours\;\; & 0.33 & 1.22 & \textbf{0.79} & \textbf{0.68} & 0.81\\
& ours* & \textbf{0.36} & \textbf{1.41} & 0.80 & \textbf{0.68} & \textbf{0.80}\\
& Zhu et al. & 0.19 & 0.59 & 1.57 & 1.53 & -\\
\hline
\multirow{3}{*}{boxes\_6dof} & ours\;\; & 0.37 & 1.76 & \textbf{1.07} & 0.72 & 1.08\\
& ours* & \textbf{0.38} & \textbf{1.85} & \textbf{1.07} & \textbf{0.71} & \textbf{1.07}\\
& Zhu et al. & 0.15 & 0.64 & 1.88 & 1.78 & -\\
\hline
\end{tabular}
}
\end{table}
\section{Experiments}
We evaluate all methods on each dataset with the same pre-selected, evenly-distributed Harris corners. We use a circular patch with diameter $d = 31$, a decay parameter of $\rho = 0.9$ and track up to 60 features in each experiment. We use a third-order B-spline and create a new spline knot every 50 milliseconds. If the number of events used in the optimization is small, the optimization may converge to a wrong solution. We tackle this problem by optimizing more knots with more events in such cases. In the experiments denoted by \textit{ours*}, we optimize three knots when there are fewer than $\frac{d^2}{4}$ events in the sliding window; otherwise we optimize only two knots to reduce the run-time. The method of always optimizing two knots is denoted \textit{ours}.
Evaluation is performed on the Event Camera Dataset \cite{mueggler2017event}, which contains recordings from a DAVIS camera. Ground truth feature tracks are computed from the frames of the DAVIS camera using the KLT optical flow method \cite{lucas1981iterative}. We compare our methods against Zhu et al. \cite{zhu2017icra}. To allow for a fair comparison, we use the authors' public MATLAB implementation and initialize the tracking with exactly the same feature positions as in our method, disabling the re-detection of new features.
We use four different metrics for the evaluation. To define the metrics clearly, the error of feature $f_i$ at time $t$ when using method $m$ is denoted by $e^i_m(t)$. The lifetime of feature $f_i$ before the error exceeds the threshold $th$ when using method $m$ is denoted by $L_m^{th}(f_i)$. The definitions of the metrics with error threshold $th$ are:\\
\begin{equation}
\text{mean relative age} = \langle\frac{L_m^{th}(f_i)}{L_{gt}(f_i)}\rangle_i
\end{equation}
\begin{equation}
\text{mean age} = \langle L^{th}_m(f_i)\rangle_i
\end{equation}
\begin{equation}
\text{mean error} = \langle\{ e^i_m(t); \ t\leq L^{th}_m(f_i) \}\rangle_i
\end{equation}
\begin{equation}
\text{mean common error} = \langle\{ e^i_m(t); \ t\leq t_{min} \}\rangle_i
\end{equation}
where $\langle \dots \rangle_i$ denotes the average over all features $i$, $t_{min}=\min\{L^{th}_{m_j}(f_i); \ j=1,2,\dots, N_m\}$ is the minimal feature lifetime and $N_m$ is the number of methods we compare. We set the threshold $th$ to 3 pixels in all experiments.
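The metrics above can be sketched as follows, under the simplifying assumption that errors are sampled at discrete timestamps and a track's age is the last sample before the first threshold crossing (function names and data are hypothetical):

```python
import numpy as np

def lifetime(times, errors, th):
    """Age of a track: last sampled timestamp before the error first exceeds th."""
    age = 0.0
    for t, e in zip(times, errors):
        if e > th:
            break
        age = t
    return age

def metrics(tracks, gt_ages, th=3.0):
    """tracks: list of (times, errors) pairs, one per feature;
    gt_ages: ground truth lifetimes L_gt(f_i)."""
    ages = [lifetime(t, e, th) for t, e in tracks]
    rel_ages = [a / g for a, g in zip(ages, gt_ages)]
    errs = [float(np.mean([e for t, e in zip(ts, es) if t <= a]))
            for (ts, es), a in zip(tracks, ages)]
    return {"mean relative age": float(np.mean(rel_ages)),
            "mean age": float(np.mean(ages)),
            "mean error": float(np.mean(errs))}

# one feature, sampled at t = 0, 1, 2 with errors 1, 1, 4 and gt lifetime 2
m = metrics([(np.array([0.0, 1.0, 2.0]), np.array([1.0, 1.0, 4.0]))], [2.0])
```

The mean common error is computed analogously by restricting the error average to $t \le t_{min}$, the shortest lifetime among the compared methods.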
\begin{figure}[h]
\centering
\includegraphics[width=105mm]{figures/tracksTime_harris_knot3_th3.png}
\caption{Number of feature tracks over time with error threshold 3 for the Event Camera Dataset.} \label{fig::numTracksTime}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=115mm]{figures/example_3knots_better.png}
\caption{Example of the features in Boxes dataset which are improved in method $ours^*$ compared to method $ours$.} \label{fig::example_3knot_better}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=110mm]{figures/3d_plot.png}
\caption{Qualitative results of feature tracking for the Event Camera Dataset.} \label{fig::3dplot}
\end{figure}
In Table \ref{tab::ours-zhu-th3}, we compare our method to the approach of Zhu et al. \cite{zhu2017icra}. The table shows that our method consistently outperforms theirs. By comparing \textit{ours} and \textit{ours*}, we see that enlarging the window size during low-event periods improves the mean age by around 6\%, while the mean common error for ours/ours$^*$ remains almost unchanged. Fig. \ref{fig::example_3knot_better} shows examples of features which are improved when using method \textit{ours*}. Fig. \ref{fig::numTracksTime} shows the number of feature tracks over time. The features in \textit{ours*} last slightly longer than in \textit{ours}. Compared to Zhu et al., our proposed algorithm tracks more features (with lower error) at almost all instances in time.
Since there is no public implementation of \cite{Seok2020WACV} available, and we are using different initial positions, we can only compare our results qualitatively. In Fig. \ref{fig::3dplot} we show the 3D trajectories, which can be used to compare our results to \cite{Seok2020WACV} qualitatively. The comparison indicates that our trajectories last longer than theirs.
\section{Conclusion}
In this paper we proposed a novel event tracking algorithm that aligns the event stream with a B-spline curve representation in a sliding window fashion. By using a History patch, the locality of B-splines and a sliding window optimization, our algorithm can track features accurately and over long durations. Our experiments show that the proposed algorithm outperforms the state-of-the-art time-continuous event tracking algorithm. We believe that this method can serve as a basis for event-based video analysis and event-based SLAM. Future research aims at extending our algorithm to a Sim(2) formulation, allowing us to track features under scale changes.
\bibliographystyle{splncs04}
\section{Introduction}
The rich dynamic behavior of interacting multi-agent, or particle, systems
has been the focus of numerous recent studies. These multi-particle systems
are capable of self-organization, as shown by the various coherent
conformations with complex structure that they generate,
even when the interactions are short range and in the absence of a leader
agent. The study of these `swarming' or `herding' systems has had many
interesting biological applications which have resulted in a better
understanding of the spatio-temporal patterns formed by bacterial colonies,
fish, birds, locusts, ants, pedestrians, etc. \cite{Budrene95, Toner95,
Toner98, Parrish99,Edelsteinkeshet98,Topaz04, hebling1995}. {The
mathematical study of these swarming systems is also helpful in
the understanding of oscillator synchronization, as in the neural phenomenon of
central pattern generators \cite{Cohen82}}. The results of
these studies have impacted and have been successfully applied in the design of
systems of autonomous, inter-communicating robotic systems
\cite{Leonard02,Justh04, Morgan05, chuang2007}, as well as mobile sensor
networks \cite{lynch2008}.
It is possible to design swarming models for robotic motion planning,
consensus and cooperative control, and spatio-temporal formation. Pairwise potentials for individual agents can be
straightforwardly ported onto autonomous vehicles. Furthermore, these pairwise interactions can be used in conjunction
with simple scalable algorithms to achieve multi-vehicle cooperative motion
\cite{nguyen2005}. Specific goals include: obstacle
avoidance \cite{Morgan05}, boundary tracking \cite{hsieh2005},
environmental sensing \cite{lynch2008,lu2011} and decentralized target
tracking \cite{chung2006}.
An important problem is that of environmental consensus estimation. Here,
the individuals of the swarm communicate with each other through a network to
achieve asymptotically synchronous information about their
environment \cite{lynch2008}. Recently, consensus was extended to include time delayed
communication among agents \cite{Jad2006}.
Task allocation is another problem of interest involving robotic swarms. The
objective is to reallocate swarm robots to perform a set of tasks in parallel
and independently of one another in an optimal way. In order to make task
reallocation more realistic it is possible to consider a time delay that
arises from the amount of time required to switch between tasks \cite{mather2011}.
Regardless of the design objective of a robotic swarm system, a comprehensive theoretical analysis of the model must
be performed in order to achieve successful algorithm design.
Many different mathematical approaches have been utilized to study aggregating
agent systems. {Some of these} studies have treated the problem at a
single-individual level, using ordinary differential equations (ODEs) or delay differential equations (DDEs) to
describe their trajectories \cite{vicsek95,flierl99,couzin02,Justh04}. An
alternative method has been proposed by other researchers and consists of
using continuum models that consider averaged velocity and agent density fields
that satisfy partial differential equations (PDEs)
\cite{Toner95,Toner98,Edelsteinkeshet98, Topaz04}. In addition, authors also have studied the effects of noise on the swarm's behavior and have shown the existence of
noise-induced transitions \cite{Erdmann05, Forgoston08}. The study of these
systems has been enriched by tools from statistical
physics since both first and second order phase transitions have been found in
the formation of coherent states \cite{aldana07}.
An additional effect that has recently been considered is that of communication
time delays between robots. Time delay models are common in many areas of
mathematical biology including population dynamics, neural
networks, blood cell maturation, virus
dynamics and genetic networks \cite{macdonald78, macdonald89,campbell02,
bernard04, mackey04a, mackey04b, tianjenssnepp02, jenssnepp03, monk03}. In
the context of swarming agents, it has been shown that the introduction of a
communication time delay may induce transitions between different coherent
states in a manner which depends on the coupling strength between agents
and the noise intensity \cite{Forgoston08}. {Thus far, most of the work has
concentrated on the case of uniform time delays among agents \cite{Kimura08}. However, the practical engineering of
multi-agent systems requires researchers to consider the case in which time
delays may vary due to data processing times, problems in inter-agent
communication, etc. The case of differing (and even time-varying)
time delays between agents may be treated similarly to the case of a single
delay by using a data buffer \cite{Yang10}.}
In this work, we carry out a detailed study of the bifurcation structure of
the mean field approximation used in \cite{Forgoston08} and investigate how the
bifurcations in the system are modified in the presence of noise. Section~\ref{sec:SM} contains the swarm model, while Sec.~\ref{sec:MFA}
contains the derivation of the mean field approximation. The bifurcation
analysis of the mean field equation can be found in Sec.~\ref{sec:Bif}, and
Sec.~\ref{sec:Comp} provides a comparison of the mean field analysis with
the nonlinear governing equations. In Sec.~\ref{sec:Noise},
we describe the effects of noise on the swarm, and the conclusions are
contained in Sec.~\ref{sec:Conc}.
\section{Swarm Model}\label{sec:SM}
We consider a two-dimensional (2D) swarm that consists of $N$
identical self-propelling individuals of unit mass that
are mutually attracted to one another in a symmetric fashion. {Hence, the
coupling of the agents occurs via a fully connected graph.} In
addition, we consider the case
in which the individuals that comprise the swarm are communicating with each
other in a stochastic environment. Because of the finite communication times
between individuals, there is a time delay between interactions. Assuming
that the communication time between agents is constant and equal to {$\tau>0$}, the
swarm dynamics is described by the following governing equations:
\begin{subequations}
\begin{align}
\dot{\mathbf{r}}_i =& \mathbf{v}_i,\label{swarm_eq_a}\\
\dot{\mathbf{v}}_i =& \left(1 - |\mathbf{v}_i|^2\right)\mathbf{v}_i -
\frac{a}{N}\mathop{\sum_{j=1}^N}_{i\neq j}(\mathbf{r}_i(t) -
\mathbf{r}_j(t-\tau)) + \boldsymbol{\eta}_i(t),\label{swarm_eq_b}
\end{align}
\end{subequations}
for $i =1,2\ldots,N$. The terms $\mathbf{r}_i$ and
$\mathbf{v}_i$ respectively represent the 2D position and velocity of the
$i$-th agent at time $t$. The strength of the attraction is measured by the coupling
constant {$a>0$}. The
self-propulsion and frictional drag forces on each agent is given by the
term $\left(1 - |\mathbf{v}_i|^2\right)\mathbf{v}_i$. Therefore, in the
absence of coupling, agents tend to move on a straight line with unit speed
$|\mathbf{v}_i| = 1$ as time goes to infinity. The term
$\boldsymbol{\eta}_i(t) = (\eta_i^{(1)}, \eta_i^{(2)})$ is a {2D}
vector of stochastic white noise with intensity equal to $D$ such that $\langle \eta_i^{(\ell)}(t)\rangle=0$ and $\langle \eta_i^{(\ell)}(t)
\eta_j^{(k)}(t') \rangle = 2D\delta(t-t')\delta_{ij}\delta_{\ell k}$ for
$i,j=1,2,\ldots N$ and $\ell, k = 1,2$. {It is the main objective of this work to
identify the possible swarm behaviors for different values of $a$ and $\tau$.}
The coupling between individuals arises from a time delayed, spring-like
potential. Hence, our equations of motion may be considered to be the first
term in a Taylor expansion of other more general time delayed potential
functions about an equilibrium point.
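A minimal numerical sketch of Eqs. \eqref{swarm_eq_a}-\eqref{swarm_eq_b} can be obtained with a forward-Euler scheme and a position-history buffer supplying the delayed neighbour positions $\mathbf{r}_j(t-\tau)$ (the constant pre-history and the Euler-Maruyama noise treatment are our own simplifying assumptions, not specifications from the text):

```python
import numpy as np

def simulate_swarm(N=20, a=1.0, tau=0.5, dt=0.05, T=100.0, D=0.0, seed=0):
    """Forward-Euler sketch of the delayed swarm model: each agent is
    self-propelled and attracted to the tau-delayed positions of the others."""
    rng = np.random.default_rng(seed)
    lag = int(round(tau / dt))
    r = rng.uniform(-0.5, 0.5, (N, 2))          # initial positions
    v = 0.1 * rng.standard_normal((N, 2))       # small random initial velocities
    hist = [r.copy() for _ in range(lag + 1)]   # constant pre-history (assumption)
    for _ in range(int(T / dt)):
        r_del = hist[0]                         # r_j(t - tau) for all agents
        S_del = r_del.sum(axis=0)
        # coupling term -(a/N) * sum_{j != i} (r_i(t) - r_j(t - tau))
        force = -(a / N) * ((N - 1) * r - (S_del - r_del))
        speed2 = np.sum(v * v, axis=1, keepdims=True)
        dv = (1.0 - speed2) * v + force         # self-propulsion + attraction
        if D > 0.0:                             # Euler-Maruyama noise increment
            dv = dv + np.sqrt(2.0 * D / dt) * rng.standard_normal((N, 2))
        r = r + dt * v
        v = v + dt * dv
        hist.append(r.copy())
        hist.pop(0)
    return r, v

r_fin, v_fin = simulate_swarm()
```

For $a\tau < 1$ and zero noise, the agents settle into a coherent state with bounded spread about the center of mass, consistent with the analysis in the following sections.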
\section{Mean Field Approximation}\label{sec:MFA}
We can investigate the stability of the swarm system by deriving a mean field
approximation of the system. The derivation involves the
consideration of agent coordinates relative to the center of mass and the elimination of the noise
terms. The center of mass of the swarming system is given by
\begin{align}
\mathbf{R}(t) = \frac{1}{N} \sum_{i=1}^N\mathbf{r}_i(t).
\end{align}
The position of each individual can be decomposed into
\begin{align}\label{pos_decomp}
\mathbf{r}_i = \mathbf{R} + \delta \mathbf{r}_i, \qquad i =1,2\ldots,N,
\end{align}
where {$\delta \mathbf{r}_i$ is the vector from the center of mass to
particle $i$ and}
\begin{align}\label{linear_dep}
\sum_{i=1}^N\delta\mathbf{r}_i(t) = 0.
\end{align}
We substitute the ansatz given by Eq. \eqref{pos_decomp} into the second
order system that is equivalent to Eqs. \eqref{swarm_eq_a}-\eqref{swarm_eq_b} with $D=0$. After simplification, one obtains
\begin{align}\label{CM1}
\ddot{\mathbf{R}} + \delta\ddot{\mathbf{r}}_i =& \left(1 - |\dot{\mathbf{R}}|^2 -
2\dot{\mathbf{R}}\cdot \delta\dot{\mathbf{r}}_i -
|\delta\dot{\mathbf{r}}_i|^2\right)(\dot{\mathbf{R}} +
\delta\dot{\mathbf{r}_i})\notag\\
& - \frac{a(N-1)}{N}\bigg(\mathbf{R}(t) - \mathbf{R}(t-\tau) +
\delta\mathbf{r}_i(t)\bigg) \notag\\
&- \frac{a}{N}\delta\mathbf{r}_i(t-\tau),
\end{align}
where we used the fact that Eq. \eqref{linear_dep} can be written as
\begin{align}
\delta\mathbf{r}_i(t-\tau) =
-\mathop{\sum\limits_{j=1}^{N}}\limits_{i\ne j} \delta\mathbf{r}_j(t-\tau).
\end{align}
Summing Eq.
\eqref{CM1} over $i$ and using Eq. \eqref{linear_dep}, we find
\begin{align}\label{CM}
\ddot{\mathbf{R}}=& \left(1 - |\dot{\mathbf{R}}|^2 -
\frac{1}{N}\sum_{i=1}^N|\delta\dot{\mathbf{r}}_i|^2\right)\dot{\mathbf{R}}
\notag\\
&- \frac{1}{N}\sum_{i=1}^N\left(2\dot{\mathbf{R}}\cdot \delta\dot{\mathbf{r}}_i +
|\delta\dot{\mathbf{r}}_i|^2\right)\delta\dot{\mathbf{r}_i} \notag\\
& -a\frac{N-1}{N}\left(\mathbf{R}(t) - \mathbf{R}(t-\tau)\right).
\end{align}
By inserting Eq. \eqref{CM} into Eq. \eqref{CM1} it is possible to find the
following equation for $\delta \ddot{\mathbf{r}}_i$:
\begin{align}\label{dri}
\delta\ddot{\mathbf{r}}_i=&
\left(\frac{1}{N}\sum_{j=1}^N|\delta\dot{\mathbf{r}}_j|^2 -
2\dot{\mathbf{R}}\cdot \delta\dot{\mathbf{r}}_i - |\delta\dot{\mathbf{r}}_i|^2
\right)\dot{\mathbf{R}} \notag\\
&+ \left(1 - |\dot{\mathbf{R}}|^2 -
2\dot{\mathbf{R}}\cdot \delta\dot{\mathbf{r}}_i -
|\delta\dot{\mathbf{r}}_i|^2\right)\delta\dot{\mathbf{r}}_i\notag\\
&+\frac{1}{N}\sum_{j=1}^N\left(2\dot{\mathbf{R}}\cdot
\delta\dot{\mathbf{r}}_j + |\delta\dot{\mathbf{r}}_j|^2\right)
\ \delta\dot{\mathbf{r}}_j \notag\\
&- a \frac{N-1}{N} \delta\mathbf{r}_i - \frac{a}{N}\delta\mathbf{r}_i(t-\tau),
\end{align}
for $i =1,2\ldots,N$.
Taken together, Eqs. \eqref{CM} and \eqref{dri} are equivalent to
Eqs. \eqref{swarm_eq_a}-\eqref{swarm_eq_b} and they merely involve a
reconstruction of the original system that is written in terms of particle coordinates
$\mathbf{r}_i$ into this new system that is written in terms of the center of
mass $\mathbf{R}$ and coordinates relative to the center of mass
$\delta\mathbf{r}_i$. One can see that this mapping has transformed the
original $2N$ differential equations into $2N+2$ equations. Due to the
relation given by Eq. \eqref{linear_dep}, only $2N$ of the transformed set of equations are independent. Therefore, there is no inconsistency between the original and transformed equations.
By neglecting the fluctuation terms {$\delta \mathbf{r}_i$} from
Eq. \eqref{CM} {and taking $N\rightarrow \infty$}, we obtain the
following heuristic mean field approximation for the center of mass:
\begin{align}\label{mean_field}
\ddot{\mathbf{R}}=& \left(1 - |\dot{\mathbf{R}}|^2 \right)\dot{\mathbf{R}} -a\left(\mathbf{R}(t) - \mathbf{R}(t-\tau)\right),
\end{align}
where we made the approximation $a\frac{N-1}{N}\approx a$
since we are considering the large system size limit $N\to\infty$. We will address
the validity of neglecting the fluctuation terms in Section \ref{sec:Comp}.
\section{Bifurcations in the Mean Field Equation}\label{sec:Bif}
Having derived a mean field equation, we continue by analyzing the
bifurcation structure. This bifurcation analysis will allow us to better
understand the behavior of the system in different
regions of parameter space. {Letting} $\mathbf{R} = (X, Y)$ and $\dot{\mathbf{R}} = (U, V)$,
{Eq. \eqref{mean_field} may be written in
component form } as
\begin{subequations}
\begin{align}
\dot{X} &= U,\label{CM_components_a}\\
\dot{U} &= (1 - U^2 - V^2)U - a(X - X(t -\tau)),\\
\dot{Y} &= V,\\
\dot{V} &= (1 - U^2 - V^2)V - a(Y - Y(t-\tau)).\label{CM_components_d}
\end{align}
\end{subequations}
Regardless of the value of $a$ and $\tau$,
Eqs. \eqref{CM_components_a}-\eqref{CM_components_d} have translational invariant stationary solutions given by
\begin{align}
X = X_0, \quad U = 0, \quad Y = Y_0, \quad V=0,
\end{align}
where $X_0$ and $Y_0$ are two free parameters. In addition,
Eqs. \eqref{CM_components_a}-\eqref{CM_components_d} also have
a three parameter family of uniformly translating solutions given by
\begin{align}
X = U_0 t + X_0, \quad U = U_0, \quad Y = V_0 t + Y_0, \quad V = V_0,
\end{align}
which requires
\begin{align}
U_0^2 + V_0^2 = 1 - a\tau
\end{align}
and is real-valued only when $a\tau \le 1$. In the two-parameter space $(a, \tau)$,
the hyperbola $a \tau = 1$ is in fact a pitchfork bifurcation curve on which
the uniformly translating states are born from the stationary state $(X_0, 0, Y_0,
0)$. The pitchfork bifurcation curve can be seen in Fig. \ref{Hopf_pitchfork_a_tau}. The other branch of the pitchfork bifurcation is an unphysical solution with
negative speed.
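The predicted speed $\sqrt{1-a\tau}$ of the uniformly translating state can be checked by integrating the mean field equation, Eq. \eqref{mean_field}, with a simple Euler scheme and a history buffer (a sketch under our own assumption of an initial history of uniform motion at unit speed along $x$):

```python
import numpy as np

def mean_field_speed(a=0.5, tau=1.0, dt=0.01, T=200.0):
    """Euler integration of R'' = (1 - |R'|^2) R' - a (R(t) - R(t - tau)),
    with a buffer holding R over the last tau time units."""
    lag = int(round(tau / dt))
    R_hist = [np.array([k * dt, 0.0]) for k in range(-lag, 1)]  # uniform motion
    R = R_hist[-1].copy()
    V = np.array([1.0, 0.0])
    for _ in range(int(T / dt)):
        R_del = R_hist[0]                       # R(t - tau)
        dV = (1.0 - V @ V) * V - a * (R - R_del)
        R = R + dt * V
        V = V + dt * dV
        R_hist.append(R.copy())
        R_hist.pop(0)
    return float(np.linalg.norm(V))

speed = mean_field_speed()                      # here a*tau = 0.5 < 1
```

For $a = 1/2$, $\tau = 1$ the trajectory relaxes to a uniformly translating state with speed close to $\sqrt{1 - a\tau} = \sqrt{0.5} \approx 0.707$, in line with the pitchfork analysis above.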
Linearizing Eqs. \eqref{CM_components_a}-\eqref{CM_components_d} about the stationary state, we obtain the
characteristic equation
\begin{align}
\left( a(1- e^{-\lambda\tau}) - \lambda + \lambda^2 \right)^2 = 0.\label{ceq}
\end{align}
It is sufficient to study the zeros of the function
\begin{align}\label{char_eq}
\mathcal{D}(\lambda) = a(1- e^{-\lambda\tau}) - \lambda + \lambda^2 = 0,
\end{align}
since the eigenvalues [see Eq. \eqref{ceq}] of the system given by Eqs. \eqref{CM_components_a}-\eqref{CM_components_d} are
obtained by duplicating those of Eq. \eqref{char_eq}.
We identify the Hopf bifurcations in the two parameter space $(a,
\tau)$ by letting the eigenvalue be purely imaginary. Our choice
of $\lambda = i \omega$ is substituted into Eq.
\eqref{char_eq}, {and one obtains}
\begin{align}\label{hopf_cond}
a - \omega^2 - i\omega = a e^{-i\omega \tau}.
\end{align}
By taking the modulus of Eq. \eqref{hopf_cond}, one finds
that $a$ at the Hopf point is given by
\begin{align}
a_H^2 = (a_H - \omega^2)^2 + \omega^2.
\end{align}
If we consider the case when $\omega \neq 0$, then
\begin{align}\label{a_H}
a_H = \frac{1 + \omega^2}{2}.
\end{align}
We substitute Eq. \eqref{a_H} into Eq. \eqref{hopf_cond} and take the complex conjugate. This allows us to obtain the following equation
for $\tau$ at the Hopf point that does not involve $a$:
\begin{align}
\frac{1 - \omega^2}{1+\omega^2} + i\frac{2\omega}{1 + \omega^2} = e^{i\omega \tau}.
\end{align}
We isolate $\tau$ by equating the arguments of both sides, being careful to
use the branch of $\tan\theta$ in $(0,\pi)$ since the left hand side of the
equation above is on the upper complex plane for $\omega > 0$. We then
obtain a family of Hopf bifurcation curves parameterized by $\omega$:
\begin{subequations}
\begin{align}
a_H(\omega) &= \frac{1 + \omega^2}{2},\label{hopf_omega_a}\\
\tau_{H_n}(\omega) &=
\frac{1}{\omega}\left(\arctan\left(\frac{2\omega}{1-\omega^2}\right) +
2n\pi\right), \quad n = 0, 1,\ldots\label{hopf_omega_b}
\end{align}
\end{subequations}
The first few members of the family of Hopf bifurcation curves are shown in
Fig. \ref{Hopf_pitchfork_a_tau}. It also is possible to eliminate the
parameter $\omega$ in Eqs. \eqref{hopf_omega_a}-\eqref{hopf_omega_b}. Doing so, one obtains
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[scale=0.45]{Hopf_fold_a_tau.eps} \label{Hopf_pitchfork_a_tau}}
\subfigure{\includegraphics[scale=0.45]{Eigvl_a_tau.eps} \label{Eigvls_a_tau}}
\caption{(a) Hopf (blue) and pitchfork (red) bifurcation curves in
$(a$,$\tau)$ space. (b) A zoom-in of
Fig. \ref{Hopf_pitchfork_a_tau}. Included is the saddle to node
transition curve (dashed black) and a number in each region (with boundaries given by the solid curves) that indicates the number of eigenvalues with a real part
greater than zero.}
\end{center}
\end{figure}
\begin{align}\label{hopf_a}
&\tau_{H_n}(a) =\notag\\
&\frac{1}{\sqrt{2a -1}}\left(\arctan\left(\frac{\sqrt{2a-1}}{1-a}\right) +
2n\pi\right), \quad n = 0, 1,\ldots
\end{align}
In spite of their appearance, the Hopf curves in Eqs. \eqref{hopf_omega_a}-\eqref{hopf_omega_b} and
\eqref{hopf_a} are in fact continuous at $\omega = 1$ and $a=1$,
respectively [with the correct branch of $\tan\theta$ in $(0, \pi)$].
Inspection of Eq. \eqref{hopf_omega_a} shows that the Hopf
frequency depends only on the value of $a$ for all members of the
family. The frequency equals one when $a=1$, and the frequency tends to
infinity as $a$ increases. Interestingly, only the first Hopf curve is
defined at $a=1/2$, where it takes the value $\tau_{H_0}\vert_{a=1/2} = 2$. The point ($a=1/2$, $\tau = 2$), which lies both on the first Hopf curve and on the pitchfork curve, is a Bogdanov-Takens (BT) point with $\omega = 0$ (the
eigenvalues are zero). None of the other Hopf branches
meet the pitchfork bifurcation curve since $\tau\to\infty$ as $a\to 1/2$.
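The Hopf parameterization of Eqs. \eqref{hopf_omega_a}-\eqref{hopf_omega_b} can be verified directly against the characteristic function $\mathcal{D}(\lambda)$; for instance:

```python
import cmath
import math

def D_char(lam, a, tau):
    """Characteristic function D(lambda) = a(1 - e^{-lambda*tau}) - lambda + lambda^2."""
    return a * (1.0 - cmath.exp(-lam * tau)) - lam + lam ** 2

def hopf_point(omega, n=0):
    """(a, tau) on the n-th Hopf curve, parameterized by the frequency omega > 0;
    atan2 selects the required branch of arctan in (0, pi)."""
    a = (1.0 + omega ** 2) / 2.0
    tau = (math.atan2(2.0 * omega, 1.0 - omega ** 2) + 2.0 * n * math.pi) / omega
    return a, tau
```

On each curve, $\mathcal{D}(i\omega)$ vanishes to machine precision; as $\omega \to 0$ the first curve approaches $(a, \tau) = (1/2, 2)$, the BT point, and at $\omega = 1$ it passes continuously through $\tau = \pi/2$.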
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[scale=0.26]{BT_a.eps} \label{BT_a}}
\subfigure{\includegraphics[scale=0.26]{BT_b.eps} \label{BT_b}}
\subfigure{\includegraphics[scale=0.26]{BT_c.eps} \label{BT_c}}
\subfigure{\includegraphics[scale=0.26]{BT_d.eps} \label{BT_d}}
\subfigure{\includegraphics[scale=0.26]{BT_e.eps} \label{BT_e}}
\caption{Real and Imaginary parts of the dominating eigenvalues as one
moves around the Bogdanov-Takens point $(a = 1/2, \tau = 2)$ in
$(a ,\tau)$ parameter space. The eigenvalues shown are associated
with the locations (a) $a = 0.60$, $\tau = 2.0$, (b) $a = 0.48$,
$\tau = 2.09$, (c) $a = 0.40$, $\tau = 2.01$, (d) $a = 0.53$, $\tau = 1.90$, and (e) $a = 0.55$, $\tau = 1.91$. Refer to Fig. \ref{Eigvls_a_tau} to see where each of the $(a, \tau)$ points lies in relation to the bifurcation curves.}\label{BT_fig}
\end{center}
\end{figure}
The pitchfork and Hopf bifurcation curves in the $(a,
\tau)$ parameter space were computed using a numerical continuation
method \cite{Engel}. These
results {(not shown)} are in perfect agreement with our analytical calculations. These numerical continuation studies also allow for the determination
of the number of eigenvalues with real part greater than zero in different regions of the $(a,\tau)$ parameter space. The results are shown in Fig. \ref{Eigvls_a_tau}. In
addition, our numerical continuation analysis revealed node to focus transitions of the steady
state. These transitions occur at points where there are two real and equal
eigenvalues, i.e. where $\mathcal{D}(\lambda) = 0$ and $\mathcal{D}'(\lambda)
= 0$, for real-valued $\lambda$. If $\mathcal{D}'(\lambda) = 0$ then one can show that $e^{-\tau\lambda} =\frac{1-2\lambda}{a\tau}$. Insertion of this relation into $\mathcal{D}(\lambda) = 0$ leads to
\begin{align}\label{node_focus_quadratic}
\lambda^2 - \left(1 - \frac{2}{\tau}\right)\lambda + a - \frac{1}{\tau} = 0,
\end{align}
which has solutions $\lambda = \frac{1}{2}\left[1 - \frac{2}{\tau} \pm \sqrt{ 1
+\frac{4}{\tau^2} - 4a}\right]$. For the roots to be repeated, we
set the discriminant equal to zero and this gives the following curve where the node-focus transitions occur:
\begin{align}\label{node_focus}
\tau = \frac{1}{\sqrt{a - 1/4}}.
\end{align}
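This algebra is simple to check numerically; the following sketch (sample values of $a$ chosen arbitrarily for illustration) confirms that the discriminant vanishes identically on the curve of Eq. \eqref{node_focus} and that the repeated root $\lambda = \frac{1}{2}(1 - 2/\tau)$ changes sign at $\tau = 2$:

```python
from math import sqrt

def node_focus_tau(a):
    # Node-focus transition curve, Eq. (node_focus): tau = 1/sqrt(a - 1/4).
    return 1.0 / sqrt(a - 0.25)

for a in (0.3, 0.4, 0.6, 1.0, 2.0):
    tau = node_focus_tau(a)
    disc = 1.0 + 4.0 / tau**2 - 4.0 * a   # discriminant of the quadratic
    lam = 0.5 * (1.0 - 2.0 / tau)         # the repeated root on the curve
    assert abs(disc) < 1e-12
    # repeated eigenvalue is positive for tau > 2 (a < 1/2 on the curve),
    # negative for tau < 2 (a > 1/2)
    assert (lam > 0) == (tau > 2) == (a < 0.5)
```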
Moreover, by inspecting the solutions to Eq. \eqref{node_focus_quadratic} one finds that the repeated eigenvalues
have positive real parts if $\tau > 2$ and negative real parts if $\tau <
2$. In Figure \ref{Eigvls_a_tau}, we
show the pitchfork and Hopf bifurcation curves overlaid with the node-focus
transition curve given by Eq. \eqref{node_focus}.
As seen in Fig. \ref{Eigvls_a_tau}, the pitchfork and first Hopf
bifurcation curves, together with the node-focus transition curve, split the
area around the BT point into five different regions. The behavior of the
dominating eigenvalues (excluding the one at the origin) in each of these five
regions is shown in Figs. \ref{BT_a}-\ref{BT_e}. Starting at a point directly to
the right of the BT point in $(a, \tau)$ space, there is a pair of
eigenvalues with positive real parts and non-zero imaginary parts
[Fig. \ref{BT_a}]. Moving counter-clockwise, the eigenvalue pair collapses onto
the positive real axis upon crossing the
upper branch of the node-focus transition curve [Fig. \ref{BT_b}]. Continuing in the same direction, we observe two different instances of eigenvalues
crossing the origin: (i) first the smaller of the two purely real and positive
eigenvalues does so as the upper part of the pitchfork
bifurcation curve is crossed [Fig. \ref{BT_c}] and (ii) then the remaining
purely real and positive eigenvalue crosses the origin as the lower part of
the pitchfork bifurcation curve is crossed [Fig. \ref{BT_d}]. Finally, as the
node-focus transition curve is crossed, the two purely
real and negative eigenvalues coincide on the negative real axis and acquire
non-zero imaginary parts [Fig. \ref{BT_e}]. Continuing upwards in parameter
space, the complex pair of eigenvalues crosses the imaginary axis
as the Hopf bifurcation curve is crossed, giving birth to a stable limit cycle.
\section{Comparison of the Mean Field Analysis and the Full Swarm Equations}\label{sec:Comp}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.45]{Regions_a_tau.eps}
\caption{Regions in $(a, \tau)$ space with different dynamical behavior.}\label{a_tau_regions}
\end{center}
\end{figure}
Our analysis of the deterministic mean field equations identified the different dynamical
behaviors that the approximation given by Eq. \eqref{mean_field} exhibits in different regions of
the $(a, \ \tau)$ plane. However, the analysis does not provide any information about how the
swarm agents are distributed about the center of mass. We neglect the stochastic terms in Eqs. \eqref{swarm_eq_a}-\eqref{swarm_eq_b}
and use extensive numerical simulations to identify some of the coherent structures that the swarm adopts
asymptotically in time:
\begin{itemize}
\item[(i)] A translational state, in which all swarm particles have identical
positions and velocities and move uniformly in a straight line. The direction of motion depends on
the initial conditions. This behavior is only possible in region A of
Fig. \ref{a_tau_regions}. {Moreover, the asymptotic convergence to this
state requires that all particles be located in close proximity and with
aligned velocities at the initial time. Hence, the basin of attraction
is extremely small which causes this state to be very sensitive
to perturbations. This is discussed in more detail below.}
\item[(ii)] A ring state, in which the center of mass is stationary.
The swarm agents distribute themselves along the ring with roughly half
of the agents moving clockwise and half of the agents moving counter-clockwise. The final stationary
position of the center of mass and the particular behavior of each
individual in the swarm is dependent on the initial conditions. This behavior is
possible in regions A, B and C of Fig. \ref{a_tau_regions}.
\item[(iii)] A rotational state, in which all swarm agents collapse
to the center of mass and the latter rotates on a circular orbit. The
direction of rotation depends on the initial conditions. This behavior is only possible in region C of Fig. \ref{a_tau_regions}.
\item[(iv)] A degenerate rotational state, in which all swarm particles collapse
to the center of mass and the latter oscillates back and forth on a
line. This behavior is only possible in region C of
Fig. \ref{a_tau_regions}. In addition, it requires that the initial motion of all swarm
particles be constrained to a line and so is sensitive with respect to
perturbations and noise.
\end{itemize}
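These structures can be explored with a few lines of code. The sketch below is an illustrative integration (semi-implicit Euler with a FIFO history buffer) of a deterministic, globally coupled delay model of the form $\ddot{\mathbf{r}}_j = (1 - |\dot{\mathbf{r}}_j|^2)\dot{\mathbf{r}}_j - \frac{a}{N}\sum_{i\neq j}[\mathbf{r}_j(t) - \mathbf{r}_i(t-\tau)]$, a form consistent with the reduced Eq. \eqref{dri_ring} of Appendix \ref{ring}; the integrator, step size and initial conditions are illustrative choices and not those of our production simulations:

```python
import numpy as np
from collections import deque

def simulate_swarm(a=2.0, tau=2.0, N=40, dt=0.01, T=300.0, seed=0):
    # Deterministic delay-coupled swarm,
    #   r_j'' = (1 - |r_j'|^2) r_j' - (a/N) sum_{i != j} [r_j(t) - r_i(t - tau)],
    # integrated with semi-implicit Euler; constant history for t < 0.
    rng = np.random.default_rng(seed)
    lag = int(round(tau / dt))
    r = rng.uniform(0.0, 1.0, size=(N, 2))        # positions in the unit square
    v = 0.01 * rng.standard_normal((N, 2))        # nearly at rest, misaligned
    hist = deque([r.copy() for _ in range(lag + 1)], maxlen=lag + 1)
    for _ in range(int(T / dt)):
        r_lag = hist[0]                           # positions at time t - tau
        drag = 1.0 - np.sum(v**2, axis=1, keepdims=True)
        coup = (a / N) * ((N - 1) * r - (r_lag.sum(axis=0) - r_lag))
        v = v + dt * (drag * v - coup)
        r = r + dt * v
        hist.append(r.copy())
    return r, v

r, v = simulate_swarm()                           # region C, unaligned start
rho = np.linalg.norm(r - r.mean(axis=0), axis=1)  # distances to center of mass
speed = np.linalg.norm(v, axis=1)
```

From such a misaligned start the swarm typically relaxes toward the ring state of item (ii), with individual speeds settling near the self-propelled value of one.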
The above list is not exhaustive, and our simulations have revealed other time-asymptotic
patterns. However, all of these other patterns (including the
translational state and the degenerate rotational state) require extreme
symmetry in the initial conditions and are very sensitive to
perturbations and noise. Our numerical simulations suggest that only the ring and the rotational
states are significantly robust to perturbations and noise.
The full system of equations predicts bistable behavior, since the translating
and ring states are both possible in region A, and the rotating and ring states
in region C [Fig. \ref{a_tau_regions}], depending on the initial conditions. The
linear stability analysis of Section \ref{sec:Bif} shows that the mean field approximation fails to capture this bistable behavior.
{The mean-field bifurcation results obtained here are of practical value
since they provide us with guidelines
for selecting values for $a$ and $\tau$ that will result in a
particular coherent pattern asymptotically in time. In the case of
bistability,} our numerical simulations strongly suggest that the initial alignment of the
agents' velocities is critical in determining the coherent state
adopted. Specifically, to obtain the translating, rotating and degenerate
rotating states asymptotically in time (structures in which the individuals'
velocities are perfectly aligned), one requires a high alignment of the
initial particles' velocities; otherwise, the swarm will adopt the ring
state. However, how high an alignment is needed depends on the specific choice of $(a,\tau)$. Our results indicate that it is easier to obtain aligned states
for larger values of the coupling constant $a$. {Unfortunately, it is not
feasible to obtain analytic basin boundaries in this infinite dimensional
system. In principle, one may approximate such boundaries by performing
prohibitively extensive
numerical simulations where the space of history functions is restricted in
some way. Therefore, the computation of basins of attraction is outside the
scope of this work and is left for future research.}
For the non-degenerate and degenerate rotating states as well as for the
translating state, the approximation we made when neglecting the fluctuation
terms in Eq. \eqref{mean_field} is entirely valid since in the noiseless case all agents collapse to the center of
mass. In the case of the ring structure, these fluctuation terms are
not necessarily small. However, in Eq. \eqref{CM} all fluctuation terms with the
exception of the one containing the factor $\frac{1}{N}\sum_{i=1}^N |\delta
\dot{\mathbf{r}}_i|^2$ approximately cancel out in the long time limit, due to
the symmetry in the distribution of the agents. The fluctuation term that
remains becomes equal to one in the long time limit. This has the effect of
eliminating the self-propulsion of the center of mass and what remains is solely
cubic dissipation.
The following subsections contain a detailed discussion of the spatio-temporal
scales of each coherent structure.
\subsection{The Ring State}
The analysis of Appendix \ref{ring} shows that the radius and angular
frequency of the swarm particles in the ring state are given by
\begin{gather}\label{ring_rho_omega}
\rho_j = \frac{1}{\sqrt{a}}, \qquad \dot{\theta}_j = \pm\sqrt{a},
\end{gather}
so that particles move at unit speed, $\rho_j \dot\theta_j = \pm 1$.
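Equation \eqref{ring_rho_omega} is easy to confirm by direct integration of the reduced single-particle equation $\delta\ddot{\mathbf{r}} = (1-|\delta\dot{\mathbf{r}}|^2)\delta\dot{\mathbf{r}} - a\,\delta\mathbf{r}$, i.e., the $N\to\infty$ limit used in Appendix \ref{ring}. A minimal sketch (our own RK4 integrator; the initial condition and tolerances are illustrative):

```python
from math import sqrt, hypot

def ring_ode_final_state(a=2.0, dt=0.01, T=200.0):
    # RK4 integration of r'' = (1 - |r'|^2) r' - a r, the N -> infinity
    # reduced equation about a stationary center of mass; returns the
    # final radius and speed of the particle.
    def f(s):
        x, y, vx, vy = s
        drag = 1.0 - (vx * vx + vy * vy)
        return (vx, vy, drag * vx - a * x, drag * vy - a * y)

    s = (1.5, 0.0, 0.0, 0.3)   # generic initial condition off the orbit
    for _ in range(int(T / dt)):
        k1 = f(s)
        k2 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
        k3 = f(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
        k4 = f(tuple(si + dt * ki for si, ki in zip(s, k3)))
        s = tuple(si + dt / 6.0 * (a1 + 2 * a2 + 2 * a3 + a4)
                  for si, a1, a2, a3, a4 in zip(s, k1, k2, k3, k4))
    x, y, vx, vy = s
    return hypot(x, y), hypot(vx, vy)

rho, speed = ring_ode_final_state(a=2.0)
assert abs(rho - 1.0 / sqrt(2.0)) < 0.02   # predicted radius 1/sqrt(a)
assert abs(speed - 1.0) < 0.02             # predicted unit speed
```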
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[scale=0.26]{Ring_rho_a.eps} \label{Ring_radius}}
\subfigure{\includegraphics[scale=0.26]{Ring_omega_a.eps} \label{Ring_omega}}
\subfigure{\includegraphics[scale=0.26]{Ring_sim_map_a.eps} \label{Ring_sim_map}}
\caption{Comparison of numerical simulations (red
circular markers) with the analytical expressions (continuous blue curve) given by
Eq. \eqref{ring_rho_omega} for (a) the radius and (b) the frequency of the ring state. (c) For each value of
$a$, the time delay was chosen as $\tau = 1/\sqrt{a-1/4}$ (black
circular markers).}\label{Ring_fig}
\end{center}
\end{figure}
We have numerically computed the radius and angular frequency for different values of $a$ and $\tau$
within the region in which the mean field approximation gives a stable
stationary center of mass (Fig. \ref{Ring_fig}). Figures
\ref{Ring_radius}-\ref{Ring_omega} show that there is excellent agreement
between the numerical simulations and the analytical result given by
Eq. \eqref{ring_rho_omega}. It is worth noting that the condition given by Eq. \eqref{dri_condition} and used to derive Eq. \eqref{ring_rho_omega} is satisfied in the long time limit in our simulations.
\subsection{The Rotating State}
We show in Appendix \ref{rotating} that the circular orbit of the rotating state has radius
$\rho_0$ and frequency $\omega$ that satisfy the following relations:
\begin{subequations}\label{omega_rho_CM_circle}
\begin{align}
\omega^2 =& a \cdot(1 - \cos\omega\tau),\label{omega_CM_circle}\\
\rho_0 =& \frac{1}{|\omega|} \sqrt{1 - a\frac{\sin\omega\tau}{\omega}}\label{rho_CM_circle}.
\end{align}
\end{subequations}
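Nonzero roots of Eq. \eqref{omega_CM_circle} must be found numerically. A minimal sketch (plain bisection; the bracket $[1.7, 2.0]$ was located by inspection for the illustrative values $a = \tau = 2$):

```python
from math import cos, sin, sqrt

def rotating_state(a, tau, lo, hi, tol=1e-12):
    # Solve omega^2 = a (1 - cos(omega tau)) for a nonzero root bracketed
    # by [lo, hi] via bisection, then evaluate the radius rho_0.
    f = lambda w: w * w - a * (1.0 - cos(w * tau))
    assert f(lo) * f(hi) < 0.0        # the bracket must straddle the root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0.0 else (lo, mid)
    w = 0.5 * (lo + hi)
    rho0 = sqrt(1.0 - a * sin(w * tau) / w) / abs(w)
    return w, rho0

w, rho0 = rotating_state(a=2.0, tau=2.0, lo=1.7, hi=2.0)
```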
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.45]{Circle_birth_death.eps}
\caption{In $(a, \tau)$ space, we plot: Hopf (blue) and pitchfork (red)
bifurcation curves, and the curve $a \tau^2=2$ where the first limit cycle
ceases to exist as its radius diverges to infinity (green).}\label{Circle_birth_death}
\end{center}
\end{figure}
Eqs. \eqref{omega_CM_circle}-\eqref{rho_CM_circle} can have as many solutions as desired by
choosing $a$ and $\tau$ large enough. However, a careful analysis reveals
that the solutions to Eqs. \eqref{omega_CM_circle}-\eqref{rho_CM_circle} are
generated exactly along the Hopf curves of our previous mean field analysis and represent the same limit cycles of
that analysis [Fig. \ref{Hopf_pitchfork_a_tau}]. The expressions in
Eqs. \eqref{omega_CM_circle}-\eqref{rho_CM_circle} thus determine the
spatio-temporal scales of these circular orbits beyond the Hopf curves where
they are born. Our analysis also shows that
the circular limit cycle that is created on the first member of the Hopf bifurcation curves persists to the left of the pitchfork bifurcation curve
and then ceases to exist as its radius diverges to infinity on the curve
$a\tau^2=2$ (Fig. \ref{Circle_birth_death}). Moreover, numerical simulations
of the mean field equations reveal that both the translating state and the
rotating state are linearly stable for $(a, \tau)$ pairs inside the wedge between the curve
$a\tau^2=2$ and the pitchfork bifurcation curve $a\tau=1$ above the BT point.
Figures \ref{Circle_radius}-\ref{Circle_sim_map} show the excellent agreement between numerical
simulations and the analytical results given by Eqs. \eqref{omega_CM_circle}-\eqref{rho_CM_circle}, for
different values of $a$ and $\tau$.
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[scale=0.26]{Circle_rho_vs_a.eps} \label{Circle_radius}}
\subfigure{\includegraphics[scale=0.26]{Circle_T_vs_tau_a.eps} \label{Circle_omega}}
\subfigure{\includegraphics[scale=0.26]{Circle_speed_vs_tau_a.eps} \label{Circle_speed}}
\subfigure{\includegraphics[scale=0.26]{Circle_line_sim_map_a.eps} \label{Circle_sim_map}}
\caption{Comparison of numerical simulations (red
circular markers) with the analytical expressions (continuous blue curve) given by Eqs. \eqref{omega_CM_circle}-\eqref{rho_CM_circle} for (a) the
radius, (b) the period, and (c) the speed
of the collapsed circular orbit. (d) For each value of
$a$, the time delay was chosen as $\tau =
\frac{2}{\sqrt{2a
-1}}\arctan\left(\frac{\sqrt{2a-1}}{1-a}\right)$ (black
circular markers) to
assure asymptotic time convergence to the collapsed circular orbit state.}
\label{Circle_orbit}
\end{center}
\end{figure}
Interestingly, in Fig. \ref{Circle_speed} we note that in the
asymptotic time limit the collapsed agents move at a speed greater than
one, the speed at which agents would tend to move in the absence of
coupling. This is explained by noting that the ratio of the time delay to the
period of oscillations is such that the delayed position of the collapsed
agents $\mathbf{R}(t-\tau)$ is ahead of the present position
$\mathbf{R}(t)$. The attraction that an individual particle feels toward the
delayed position of the rest of the swarm forces the whole system to move faster.
\subsection{The Degenerate Rotating State}
A degenerate version of the rotating state is possible when the initial motion
of the swarm is restricted to a line, since in this case it follows from
Eqs. \eqref{swarm_eq_a}-\eqref{swarm_eq_b} that the swarm will remain on such
a line for all times. As we show in Appendix \ref{deg_rotating}, we may assume that the motion of the
collapsed swarm occurs on the $X=Y$
line of the center of mass coordinates and then use a finite Fourier mode
approximation of the ensuing dynamics. An
approximation in terms of just three modes gives
\begin{align}\label{X_CM_line}
X(t)=Y(t) = 2 c_1 \cos\omega t + 2|c_3| \cos(3\omega t + \phi_3),
\end{align}
where $\omega$, $c_1$, $c_3$ and $\phi_3$ are obtained by solving
Eqs. \eqref{13_fourier_a}-\eqref{13_fourier_b} numerically.
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[scale=0.26]{Line_amplitude_vs_tau.eps} \label{Line_amplitude}}
\subfigure{\includegraphics[scale=0.26]{Line_T_vs_tau.eps} \label{Line_omega}}
\subfigure{\includegraphics[scale=0.26]{Line_speed_vs_tau.eps} \label{Line_speed}}
\subfigure{\includegraphics[scale=0.26]{Circle_line_sim_map_a.eps} \label{Line_sim_map}}
\caption{Comparison of numerical simulation (red
circular markers) with the
analytical expressions (continuous blue curve) given by
Eqs. \eqref{13_fourier_a}-\eqref{13_fourier_b} for (a) the amplitude, (b) period, and (c) the
maximum speed of the collapsed straight line orbit. (d) At each value of $a$, the time delay was chosen as $\tau =
\frac{2}{\sqrt{2a
-1}}\arctan\left(\frac{\sqrt{2a-1}}{1-a}\right)$ (black circular
markers) to
ensure asymptotic time convergence to the collapsed-straight line orbit state.}\label{Line_orbit}
\end{center}
\end{figure}
Figures \ref{Line_amplitude}-\ref{Line_sim_map} show a comparison between our
analytical results given by Eqs. \eqref{13_fourier_a}-\eqref{13_fourier_b} and results obtained using numerical simulation for the amplitude, period and maximum speed of
oscillation for different values of $a$ and $\tau$. There is excellent agreement in both amplitude and period between our analysis
and the numerical simulations [Figs. \ref{Line_amplitude}-\ref{Line_omega}]. The agreement for the speed of motion is
very good as well, but the theoretical
estimate is shifted slightly with respect to the results from
simulations [Fig. \ref{Line_speed}]. As in the collapsed circular orbit, we
note that the collapsed set of particles has a maximum speed that exceeds
one, the speed that individual, uncoupled particles acquire in the long-time
limit. As before, this effect arises from the attraction that the current particle position $\mathbf{R}(t)$ feels towards the delayed position $\mathbf{R}(t-\tau)$ when the latter lies in the direction of motion of the collapsed particles.
\section{The Effects of Noise on the Swarm}\label{sec:Noise}
In the absence of noise, the initial alignment
of the swarm particles is critical in determining the asymptotic behavior of
the swarm (Sec.~\ref{sec:Comp}). When noise is introduced, the interplay of coupling strength, time
delay and noise intensity gives rise to very interesting behavior due to
fluctuations in the particles' alignment. Specifically, our studies show that if the coupling strength $a$ and/or the
time delay $\tau$ are below a certain limit, then the presence of noise
promotes swarm transitions from aligned into misaligned coherent states. More
surprising, however, is that if the coupling strength $a$ and/or the
time delay $\tau$ are large enough, then there is a noise intensity
threshold that forces a transition in the swarm from misaligned into aligned
states. In addition, we show that for these high values of $a$ and/or $\tau$,
the system presents an interesting hysteresis phenomenon when the noise
intensity is time dependent.
For the purpose of these studies, we define the alignment of particle $j$ with the rest of the swarm as the cosine of the angle between
the velocity of particle $j$ and the velocity of the swarm as a whole:
\begin{align}
\cos\theta_j = \frac{\dot{\mathbf{r}}_j \cdot \dot{\mathbf{R}}}{|\dot{\mathbf{r}}_j| |\dot{\mathbf{R}}|}.
\end{align}
Therefore the alignment of individual particles ranges from $-1$
to $1$. A good measure of the overall alignment of the swarm is furnished by
the ensemble average of these cosines given as
\begin{align}
\textrm{Mean swarm alignment} = \frac{1}{N}\sum_{j=1}^N\cos\theta_j = \frac{1}{N}\sum_{j=1}^N\frac{\dot{\mathbf{r}}_j \cdot \dot{\mathbf{R}}}{|\dot{\mathbf{r}}_j| |\dot{\mathbf{R}}|}.
\end{align}
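In our numerical studies this quantity is computed directly from the velocity arrays; an illustrative sketch (array names and shapes are our own):

```python
import numpy as np

def mean_swarm_alignment(vel):
    # vel has shape (N, 2); row j is the velocity of particle j. The swarm
    # velocity is that of the center of mass. Note the measure is undefined
    # when the center-of-mass velocity vanishes (e.g., a perfect ring).
    V = vel.mean(axis=0)
    cos_theta = (vel @ V) / (np.linalg.norm(vel, axis=1) * np.linalg.norm(V))
    return float(cos_theta.mean())

aligned = np.tile([1.0, 1.0], (10, 1))        # perfectly aligned swarm
assert abs(mean_swarm_alignment(aligned) - 1.0) < 1e-12

mixed = np.array([[1.0, 0.0], [-0.5, 0.0]])   # one particle against the flow
assert abs(mean_swarm_alignment(mixed)) < 1e-12
```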
We first carry out a numerical simulation with coupling
constant $a=0.5$ and noise standard deviation $\sigma
= 0.05$ (noise intensity $D = 0.00125$). At $t = 50$, a time delay of $\tau =
0.5$ is turned on. These parameters correspond to region A of
Fig. \ref{a_tau_regions}. Initially, we place all particles at the origin and
align their velocities by choosing $\dot{x}_j = 1$ and $\dot{y}_j = 1$ for
all particles. We describe the behavior of the swarm by following the ensemble averages of the particle distances to the center of mass
[Fig. \ref{dr_a_0p5_tau_0p5}] and of the particle
alignment [Fig. \ref{allign_a_0p5_tau_0p5}] as functions of time. Before the time delay is turned on at $t=50$,
the swarm is in a translating state with particles slightly spread out from
the center of mass in a `pancake' shape, as described in \cite{Erdmann05},
with an ensemble alignment close to one. Once the delay is turned on, the
translating state is broken up and the swarm converges to the ring state in
which the mean particle alignment is near zero. The radius of the ring
obtained in this numerical simulation matches the theoretical
result [Eq. \eqref{ring_rho_omega}] that predicts a radius of $\frac{1}{\sqrt{a}} =
\sqrt{2} \approx 1.41$. A completely analogous situation ensues for parameters
in region B of Fig. \ref{a_tau_regions} (results not shown). In addition, in
both cases the swarm will immediately converge to the ring state if the
swarm velocities are not sufficiently aligned at time zero. We thus
conclude that for these choices of $(a, \ \tau)$ pairs, the noise misaligns
the particles' velocities and forces a transition into the ring state.
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[scale=0.26]{dr_a_0p5_tau_0p5.eps} \label{dr_a_0p5_tau_0p5}}
\subfigure{\includegraphics[scale=0.26]{Allignment_a_0p5_tau_0p5.eps} \label{allign_a_0p5_tau_0p5}}
\caption{ Time evolution of the ensemble average of (a) the particle distance to
the center of mass, and (b) the mean particle alignment showing how the particle alignment breaks up
due to the effects of noise. For long times the swarm converges to a ring
state. The parameter values of $a = 0.5$ and
$\tau = 0.5$ are associated with region A of
Fig. \ref{a_tau_regions}. The time delay is
turned on at $t=50$ and the noise standard deviation is $\sigma = 0.05$ ($D = 0.00125$). }\label{dr_allign_A}
\end{center}
\end{figure}
In contrast to the cases discussed above, for parameters in region C of
Fig. \ref{a_tau_regions}, a sufficiently large noise intensity promotes
transitions from misaligned to aligned states. We
show this by comparing the results of a series of simulations for different
values of the noise standard deviation $\sigma$. The simulations are divided
into two cases that differ only in the initial conditions for the swarm
particles. In all simulations, the coupling constant $a=2$ and a time delay
of $\tau = 2$ is turned on at $t = 50$. In the first case, all particles start from the origin with identical velocities $\dot{x}_j = 1$
and $\dot{y}_j = 1$. In the second case, all swarm particles
are initially distributed uniformly on the unit square and
are at rest.
In these simulations, the final state of the swarm may be visualized by plotting the mean swarm
alignment after transients have decayed ($t=300$) as a function of noise
intensity for the first case
[Fig. \ref{asympt_allign}] and the second case [Fig. \ref{asympt_unallign}]. In the first case of simulations, the high initial alignment of
particles' velocities forces the swarm to converge to a compact rotating state
independent of noise intensity. However, the rotational state is destroyed
if the noise standard deviation exceeds $\sigma \approx 0.8$
[Fig. \ref{asympt_allign}]. The situation is more interesting and complex for the second set of
simulations. For low noise intensities ($\sigma \lesssim 0.26$) the low
initial alignment of the particles leads the swarm to converge to a ring
state with near zero mean alignment [Fig. \ref{asympt_unallign}]. A noise
standard deviation just beyond the threshold of $\sigma \approx 0.26$ displays an
interesting effect.
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[scale=0.26]{asympt_allign.eps} \label{asympt_allign}}
\subfigure{\includegraphics[scale=0.26]{asympt_unallign.eps} \label{asympt_unallign}}
\caption{Asymptotic value of the mean particle alignment for (a) particles
starting with perfectly aligned velocities at time zero and
(b) particles distributed uniformly over the unit square and starting from rest, for different values of the noise standard deviation $\sigma$. The
parameter
values of $a = 2$ and $\tau = 2$ (turned on at $t=50$) are
associated with a location in region C of
Fig. \ref{a_tau_regions}. }\label{asympt_vel_proj}
\end{center}
\end{figure}
As the $\sigma \approx 0.26$ threshold is crossed,
the swarm transitions from the ring state into the rotating state with high mean alignment. An examination
of the full simulation data reveals that the transition occurs as
an increasing group of particles gradually becomes aligned and eventually absorbs
all the remaining particles. A sufficient amount of noise is necessary for
this transition, since it allows each particle's velocity vector to probe many
directions until finally enough of them become trapped in a `potential
well' of alignment with other particles. As in the first case of simulations, a noise
standard deviation greater than $\sigma \approx 0.8$ breaks up the rotating state. Figure \ref{dr_allign_a_2_tau_2} clearly shows the transition from the ring to
the compact, rotational state through the time evolution of the ensemble averages of the particle
distances to the center of mass and of the mean particle alignment.
Further studies on the switching behavior between coherent states
of the swarm demonstrate that the system exhibits a hysteresis phenomenon. With the swarm system starting on the ring state with noise standard deviation
of $\sigma = 0.24$, one can force a transition into the rotating state by
increasing the noise to $\sigma = 0.26$. However, even if the noise is lowered
down to $\sigma = 0.02$, the swarm remains in the rotating state with a high
velocity alignment [Figs. \ref{allign_a_2_tau_2_hyst1}-\ref{sigma_a_2_tau_2_hyst1}]. Nevertheless, it is possible to return the swarm to the ring state if, once in
the rotating state, the
noise is raised to a very high level ($\sigma = 1$) for a sufficient amount of
time and then dropped suddenly to a very low value ($\sigma = 0.05$). The high noise levels serve to completely misalign the
particles' velocities and allow them to converge to the ring once the noise
levels are below $\sigma \lesssim 0.26$.
\begin{figure}[h!]
\begin{center}
\subfigure{\includegraphics[scale=0.26]{dr_a_2_tau_2.eps} \label{dr_a_2_tau_2}}
\subfigure{\includegraphics[scale=0.26]{Allignment_a_2_tau_2.eps} \label{allign_a_2_tau_2}}
\caption{ Time evolution of the ensemble average of (a) the particle distance to
the center of mass, and (b) the mean particle alignment showing how the swarm transitions from a
ring state into a compact, rotational state with alignment
close to one. The parameter values of $a = 2$ and $\tau = 2$ (turned on at
$t=50$) and $\sigma = 0.4$ ($D = 0.08$) are associated with region C of
Fig. \ref{a_tau_regions}. Particles are initially distributed uniformly over
the unit square and start from rest.}\label{dr_allign_a_2_tau_2}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\subfigure{\includegraphics[scale=0.26]{Allignment_a_2_tau_2_hyst1.eps} \label{allign_a_2_tau_2_hyst1}}
\subfigure{\includegraphics[scale=0.26]{sigma_a_2_tau_2_hyst1.eps} \label{sigma_a_2_tau_2_hyst1}}
\subfigure{\includegraphics[scale=0.26]{Allignment_a_2_tau_2_hyst2.eps} \label{allign_a_2_tau_2_hyst2}}
\subfigure{\includegraphics[scale=0.26]{sigma_a_2_tau_2_hyst2.eps} \label{sigma_a_2_tau_2_hyst2}}
\caption{Time evolution of (a) mean particle alignment for example 1, (b)
noise standard deviation for example 1, (c) mean particle alignment for
example 2, and (d) noise standard deviation for example 2. The results
show how a time-dependent noise
intensity may be used to force swarm transitions. The parameter values
of $a = 2$ and $\tau = 2$ (turned on at
$t=10$) are associated with region C of
Fig. \ref{a_tau_regions}. Particles are initially distributed uniformly over
the unit square and start from rest.}\label{hyst}
\end{center}
\end{figure}
\section{Conclusions}\label{sec:Conc}
In this work we analyzed the dynamics of a self-propelling swarm where
individuals interact with a
communication time delay in the presence of noise. Using a mean field
approximation in the deterministic case, we analytically obtained the
complete bifurcation picture in the parameter space of coupling strength and
communication time delay. This analysis shows how different combinations of coupling
strength and time delay induce the swarm to adopt different coherent
structures asymptotically in time. Our bifurcation studies demonstrated
the existence of a Bogdanov-Takens point, where the stationary center of mass
solution has a double zero eigenvalue, which is critical in organizing the
dynamics of the swarm.
{The stable patterns that are possible for this system have several
applications for autonomous vehicles. More detailed applications for each
pattern are as follows: (1) the translational state may be used for target tracking and group transport \cite{Morgan05,chung2006}. (2) The ring
state should prove useful in terrain coverage and regional surveillance \cite{Svennebring03,
Vallejo09}. (3) The rotating state may be exploited in obstacle avoidance,
boundary tracking and surveillance \cite{hsieh2005, Morgan05, Vallejo09}.
In addition, we believe all three patterns are applicable to the problem of environmental sensing \cite{lynch2008,lu2011}.}
In numerical experiments with noise, we showed that the interplay of coupling
strength, time delay and noise intensity may give rise to interesting
switching behavior from one coherent structure to another. We found that if the coupling strength $a$ and/or the time delay $\tau$ are
below a certain limit, then the presence of noise induces transitions from
states in which the alignment of the particles' velocities is high into states
with low alignment. More surprising, however, is that if the coupling strength $a$ and/or the
time delay $\tau$ are large enough, then there is a noise intensity
threshold that forces a transition in the swarm from misaligned into aligned
states. In addition, by using a time-dependent noise intensity at these high values of
$a$ and/or $\tau$, we show that the system exhibits hysteresis since the
swarm's transitions are not easily reversible. {We note that analytical
results on the effects of noise on delay-coupled swarms are not easy to obtain. Two
examples relevant to our work are given in \cite{Erdmann05,Strefler08},
where the authors investigate models similar to the one presented here but without time delay.}
Realistic application of the model treated here to the motion of multi-robot
systems requires local repulsion among individuals to be taken into
account. We have simulated the swarm model with the addition of a
repulsive inter-agent potential of exponential form $U_{ij} = c_r
\exp{\left({-\frac{|\mathbf{r}_i - \mathbf{r}_j|}{L_r}}\right)}$. These simulations demonstrate (results not shown) that
the coherent patterns we discussed in this article persist when the
characteristic repulsion length $L_r$ and repulsion strength $c_r$ between
robots are small compared to global attraction parameters. Stronger repulsion can destabilize the coherent structures.
{Recently, systems with non-uniform time delays have received much
attention. For example, the important question of synchronization in networks
communicating with randomly distributed time delays has recently been investigated \cite{Masoller05, Masoller06}. In practical applications, the case of differing (and even time-varying)
time delays between agents may be treated similarly to the case of a single
delay by using a data buffer \cite{Yang10}. The idea is to identify an upper
bound to the time delay ($\tau_\textrm{max}$) between all agent pairs and
then design the agents so that the actuation occurs when the data buffer of
size $\tau_\textrm{max}$ is full.}
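An illustrative sketch of this buffering idea (our own minimal version, not the construction of \cite{Yang10}): each agent pushes incoming samples into a FIFO buffer of length $\tau_\textrm{max}/\Delta t$ and acts only on the oldest entry, so every sample is released with the same effective delay:

```python
from collections import deque

class DelayBuffer:
    # Releases each sample exactly `lag` pushes after it arrives, emulating
    # a uniform delay tau_max = lag * dt regardless of actual travel time.
    def __init__(self, lag, initial):
        self.buf = deque([initial] * lag, maxlen=lag)

    def push_pop(self, sample):
        oldest = self.buf[0]        # the sample from lag steps ago
        self.buf.append(sample)     # maxlen makes the deque drop the oldest
        return oldest

buf = DelayBuffer(lag=3, initial=0)
released = [buf.push_pop(k) for k in range(1, 7)]
assert released == [0, 0, 0, 1, 2, 3]   # every sample delayed by 3 steps
```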
{As part of our ongoing work, we are extending our investigations for the cases
in which: (\emph{i}) the communication time delays vary between different
pairs of agents; and (\emph{ii}) the communication graph is non-globally
coupled. In realistic settings, both of these cases may occur due to the effects
of the spatial distribution of agents such as signal travel times and
imperfect transmission arising, for example, from complex terrain topography
or component malfunction. In the case of communication delays that differ
among different pairs of agents (though constant in time), our preliminary results show patterns
analogous to the ones observed here, but with considerable added complexity. The present investigation lays a good foundation on which to base the study of these more complicated cases.}
In summary, our results aid in understanding the stability of complex coherent
structures in swarming systems with time-delayed communication in the presence
of a noisy environment. Although our analytical and numerical results were
obtained using a model with linear, attractive interactions, our analysis
provides useful insight for the study of models with more general forms of
time-delayed coupling between agents. Our results may prove valuable for
the control of man-made vehicles in which actuation and communication are delayed, as well as for
understanding swarm alignment in biological systems.
\appendices
\section{Analysis of the Ring State}\label{ring}
The swarm ring state is obtained when the center of mass is stationary. For the solution $\mathbf{R}=$const. to satisfy Eq. \eqref{CM} we require
\begin{align}\label{dri_condition}
\sum_{i=1}^N\delta\dot{\mathbf{r}}_i^2\, \delta\dot{\mathbf{r}}_i = 0.
\end{align}
We simplify Eq. \eqref{dri} by taking $\mathbf{R}=$const. and using
Eq. \eqref{dri_condition}{ to obtain}
\begin{align}\label{dri_ring}
\delta\ddot{\mathbf{r}}_j=&
\left(1 -\delta\dot{\mathbf{r}}_j^2\right)\delta\dot{\mathbf{r}}_j - a \delta\mathbf{r}_j - \frac{a}{N}\delta\mathbf{r}_j(t-\tau).
\end{align}
We consider the {large system size} limit $N\rightarrow \infty$ and drop the
delayed term, whose prefactor scales as $1/N$. The resulting equations are simply ODEs, so the analysis
below shows that the ring orbit does not depend on the presence of time delays in the
system. Writing Eq. \eqref{dri_ring} in polar coordinates $\delta x_j = \rho_j
\cos{\theta_j}$ and $\delta y_j =
\rho_j \sin{\theta_j}$, we obtain
\begin{subequations}
\begin{align}
\ddot{\rho}_j =& \left(1 - \dot{\rho}_j^2 - \rho_j^2 \dot{\theta}_j^2 \right)\dot{\rho}_j + \rho_j \dot{\theta}_j^2 - a \rho_j,\label{rho_theta_a}\\
\rho_j \ddot{\theta}_j =& \left(1 - \dot{\rho}_j^2 - \rho_j^2 \dot{\theta}_j^2\right) \rho_j \dot{\theta}_j - 2 \dot{\rho}_j \dot{\theta}_j.\label{rho_theta_b}
\end{align}
\end{subequations}
Equations \eqref{rho_theta_a}-\eqref{rho_theta_b} have the trivial solution $\rho_j = 0$ as well as a ring solution:
\begin{gather}
\rho_j = \frac{1}{\sqrt{a}}, \qquad \dot{\theta}_j = \pm\sqrt{a},
\end{gather}
in which particles move at unit speed, $\rho_j \dot\theta_j = \pm 1$.
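These statements are easy to check numerically. The sketch below (our own, with the arbitrary choice $a=2$) verifies that the ring values annihilate both right-hand sides exactly; a forward-Euler integration started from a perturbed orbit can then be used to confirm that the ring is attracting:

```python
import math

def rhs(rho, rhodot, thetadot, a):
    """Right-hand sides of the polar ring-state equations, solved for
    rho'' and theta''."""
    damp = 1.0 - rhodot ** 2 - rho ** 2 * thetadot ** 2
    rhoddot = damp * rhodot + rho * thetadot ** 2 - a * rho
    thetaddot = (damp * rho * thetadot - 2.0 * rhodot * thetadot) / rho
    return rhoddot, thetaddot

def integrate(rho, rhodot, thetadot, a, dt=1e-3, T=100.0):
    """Forward-Euler integration of the polar equations."""
    for _ in range(int(T / dt)):
        rdd, tdd = rhs(rho, rhodot, thetadot, a)
        rho, rhodot, thetadot = (rho + dt * rhodot,
                                 rhodot + dt * rdd,
                                 thetadot + dt * tdd)
    return rho, rhodot, thetadot

a = 2.0
ring_rho, ring_om = 1.0 / math.sqrt(a), math.sqrt(a)
# the ring values make both accelerations vanish exactly
assert all(abs(v) < 1e-12 for v in rhs(ring_rho, 0.0, ring_om, a))
```

Integrating from, say, $\rho = 1.1/\sqrt{a}$ with $\dot\theta = \sqrt{a}$ relaxes back to the ring values, consistent with the ring being the observed stable pattern.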
\section{Analysis of the Rotating State}\label{rotating}
In the noiseless rotating state, all particles collapse to a point,
$\delta\mathbf{r}_i=0$, and the equation for the center of mass given by
Eq. \eqref{CM} simplifies considerably to
\begin{align}\label{CM_circle}
\ddot{\mathbf{R}}=& \left(1 - \dot{\mathbf{R}}^2\right)\dot{\mathbf{R}} - a\left(\mathbf{R}(t) - \mathbf{R}(t-\tau)\right).
\end{align}
We write $\mathbf{R} = (X, Y)$ and introduce polar coordinates $X = \rho
\cos{\theta}$ and $Y = \rho \sin{\theta}$ to obtain
\begin{subequations}\begin{align}
\ddot{\rho} =& \left(1 - \dot{\rho}^2 - \rho^2 \dot{\theta}^2
\right)\dot{\rho} + \rho \dot{\theta}^2 - a \bigg(\rho - \rho_\tau\cos(\theta - \theta_\tau)\bigg),\label{rho_theta_CM_circle_a}\\
\rho \ddot{\theta} =& \left(1 - \dot{\rho}^2 - \rho^2 \dot{\theta}^2\right)
\rho \dot{\theta} - 2 \dot{\rho} \dot{\theta} - a \rho_\tau\sin(\theta - \theta_\tau),\label{rho_theta_CM_circle_b}
\end{align}
\end{subequations}
where we have written $\rho_\tau \equiv \rho(t-\tau)$ and $\theta_\tau \equiv
\theta(t-\tau)$. Equations \eqref{rho_theta_CM_circle_a}-\eqref{rho_theta_CM_circle_b} have a circular orbit solution, $\rho =
\rho_0$ and $\theta = \omega t + \theta_0$, where
\begin{subequations}
\begin{align}
\omega^2 =& a\left(1 - \cos\omega\tau\right),\label{omega_CM_circle_app_a}\\
\rho_0 =& \frac{1}{|\omega|} \sqrt{1 - a\frac{\sin\omega\tau}{\omega}},\label{rho_CM_circle_app_b}
\end{align}
\end{subequations}
and $\theta_0$ is obtained from the initial conditions. In the main text we discuss the behavior of the solutions to Eqs. \eqref{omega_CM_circle_app_a}-\eqref{rho_CM_circle_app_b}.
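The frequency condition is transcendental in $\omega$, so its nonzero root must be found numerically. A minimal sketch in pure Python follows (the parameter values $a=1$, $\tau=5$ and the bracketing interval are our own choices, checked to contain a sign change):

```python
import math

def freq_residual(om, a, tau):
    """Residual of the frequency condition omega^2 = a (1 - cos(omega*tau))."""
    return om ** 2 - a * (1.0 - math.cos(om * tau))

def bisect(f, lo, hi, tol=1e-12):
    """Plain bisection; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

a, tau = 1.0, 5.0
# the bracket [0.5, 1.2] contains a sign change for these parameters
omega = bisect(lambda om: freq_residual(om, a, tau), 0.5, 1.2)
# corresponding radius of the circular orbit
rho0 = math.sqrt(1.0 - a * math.sin(omega * tau) / omega) / abs(omega)
```

For these values the nonzero root lies near $\omega \approx 0.96$, and the square root in the radius formula is real, so a bona fide circular orbit exists.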
\section{Analysis of the Degenerate Rotating State}\label{deg_rotating}
When the motion of the whole swarm is initially constrained to a line,
Eqs. \eqref{swarm_eq_a}-\eqref{swarm_eq_b} dictate that the swarm will remain on this line for all
times. If the coupling parameter $a$ and/or the time delay $\tau$ are large
enough, the resulting motion is a degenerate form of the rotating solution in which the swarm moves back and forth along a straight line.
In the case without noise all particles collapse to a point, $\delta\mathbf{r}_i=0$, and
the line along which motion occurs is arbitrary; here we use $X = Y$. The
problem reduces to analyzing a single delay equation { given by}
\begin{align}\label{CM_line}
\ddot{X}=& \left(1 - 2\dot{X}^2\right)\dot{X} - a\left(X(t) -
X(t-\tau)\right).
\end{align}
We {find} a solution using Fourier analysis. {We let}
\begin{align}\label{X_fourier}
X(t) = \sum_{n = -\infty}^{\infty} c_n e^{in\omega t},
\end{align}
where the coefficients satisfy $c_n = {c_{-n}}^*$ in order to ensure that
$X(t)$ is a
real quantity. Substituting Eq. \eqref{X_fourier} into Eq. \eqref{CM_line}, we
get for the $n$-th mode
\begin{align}\label{n_fourier}
- n^2 \omega^2 c_n &= i n \omega c_n \notag\\
&+ 2i \omega^3 \sum_{\ell, m \neq 0} c_\ell c_m
c_{n-\ell-m}\ell m (n - \ell - m) \notag\\
&- a c_n (1 - e^{-in\omega \tau}),
\end{align}
for $n = 0,1,2,\dots$. The $n=0$ equation is
\begin{align}\label{0_fourier}
\sum_{\ell, m\neq 0} c_\ell c_m c_{-\ell-m} \ell m (\ell + m) = 0,
\end{align}
which does not involve $c_0$. Unsurprisingly, $c_0$ is undetermined since the position of the center of mass may be translated in space without modifying the dynamics of the system.
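The cubic convolution sums are easy to get wrong, so a direct numerical check of the $n$-th Fourier mode of $\dot{X}^3$ in Eq. \eqref{n_fourier} is worthwhile (our own sketch; the coefficient values are arbitrary test data):

```python
import cmath, math

omega = 1.0
# arbitrary test coefficients for modes 1..3; reality forces c_{-n} = conj(c_n)
c = {1: 0.30 + 0.00j, 2: 0.10 + 0.05j, 3: 0.02 - 0.01j}
for n in (1, 2, 3):
    c[-n] = c[n].conjugate()
modes = sorted(c)

def xdot(t):
    """X'(t) for the truncated Fourier series."""
    return sum(1j * m * omega * c[m] * cmath.exp(1j * m * omega * t)
               for m in modes)

def fourier_coeff(f, n, N=4096):
    """n-th Fourier coefficient of a (2*pi/omega)-periodic function,
    via an N-point discrete Fourier sum (exact for a trigonometric
    polynomial of degree < N/2)."""
    T = 2.0 * math.pi / omega
    return sum(f(k * T / N) * cmath.exp(-1j * n * omega * k * T / N)
               for k in range(N)) / N

def convolution(n):
    """Predicted n-th Fourier coefficient of X'(t)**3: the triple sum
    (i*omega)^3 * sum_{l+m+k=n} l*m*k * c_l*c_m*c_k."""
    total = 0.0 + 0.0j
    for p in modes:
        for q in modes:
            k = n - p - q
            if k in c:
                total += p * q * k * c[p] * c[q] * c[k]
    return (1j * omega) ** 3 * total
```

Comparing \texttt{fourier\_coeff} of $\dot{X}^3$ with \texttt{convolution(n)} for $n=1,2,3$ reproduces the coefficient patterns quoted in the three-mode equations below to machine precision.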
We now approximate the motion of the center of mass by keeping the first three
modes. By appropriately choosing the time origin, we may take $c_1$ to be
purely real and positive. In contrast, $c_2$ and $c_3$ are complex quantities
which we write as $c_i = |c_i| e^{i\phi_i}$, for $i=2,3$. The equations for
the first three modes $n=1,2,3$ become
\begin{subequations}
\begin{align}
-& \omega^2 c_1 = i \omega c_1 \notag\\
&+ 2i \omega^3\left( -3c_1^3 - 36 c_2^2c_3^* -
54c_1|c_3|^2 - 24c_1|c_2|^2 + 9c_1^2c_3 \right)\notag\\
&- a c_1(1 - e^{-i\omega\tau}),\label{123_fourier_a}\\
\notag\\
-& 4\omega^2 c_2 = 2i \omega c_2 \notag\\
&+ 2i \omega^3\left( -108c_2|c_3|^2 - 36
c_1c_2^* c_3 - 24c_2|c_2|^2 - 12c_2c_1^2 \right)\notag\\
&- a c_2(1 - e^{-2i\omega\tau}),\label{123_fourier_b}\\
\notag\\
-& 9\omega^2 c_3 = 3i \omega c_3 \notag\\
&+ 2i \omega^3\left( -18c_3c_1^2 -
72c_3|c_2|^2 - 81c_3|c_3|^2 - 12c_2^2c_1 + c_1^3 \right)\notag\\
&- a c_3(1 - e^{-3i\omega\tau}).\label{123_fourier_c}
\end{align}
\end{subequations}
In addition, the condition from Eq.\eqref{0_fourier} becomes
\begin{align}\label{0_fourier_bis}
6(c_2 c_3^* - c_2^*c_3) - c_1(c_2 - c_2^*)=0.
\end{align}
Separating Eqs. \eqref{123_fourier_a}-\eqref{123_fourier_c} and Eq. \eqref{0_fourier_bis} into real and
imaginary parts yields a system of seven equations (since the real part of
Eq. \eqref{0_fourier_bis} is satisfied automatically) for the six unknowns:
$\omega$, $c_1$, $|c_2|$, $\phi_2$, $|c_3|$ and $\phi_3$. These equations cannot be
satisfied in general. However, if $|c_2| = 0$, then the equation for mode
$n=2$ [Eq. \eqref{123_fourier_b}] and Eq. \eqref{0_fourier_bis} are satisfied automatically,
leaving
four equations:
\begin{subequations}
\begin{align}
- \omega^2 c_1 =& i \omega c_1 + 2i \omega^3\left( -3c_1^3 - 54c_1|c_3|^2 +
9c_1^2c_3 \right) \notag\\
&- a c_1(1 - e^{-i\omega\tau}),\label{13_fourier_a}\\
- 9\omega^2 c_3 =& 3i \omega c_3 + 2i \omega^3\left( -18c_3c_1^2 - 81c_3|c_3|^2
+ c_1^3 \right) \notag\\
&- a c_3(1 - e^{-3i\omega\tau})\label{13_fourier_b}
\end{align}
\end{subequations}
for the four unknowns $\omega$, $c_1$, $|c_3|$ and $\phi_3$. Equations \eqref{13_fourier_a}-\eqref{13_fourier_b}
may be solved numerically and permit one to approximate the motion of the
center of mass in the form
\begin{align}
X(t)=Y(t) = 2 c_1 \cos\omega t + 2|c_3| \cos(3\omega t + \phi_3).
\end{align}
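The quality of this truncation can be probed against a direct integration of the delay equation \eqref{CM_line}. The sketch below is a rough forward-Euler scheme with a constant pre-history (the parameters $a=1$, $\tau=5$, the step size and the initial kick $v_0$ are our own choices):

```python
from collections import deque

def simulate_line(a=1.0, tau=5.0, dt=0.01, T=200.0, v0=0.1):
    """Forward-Euler integration of
    X'' = (1 - 2 X'^2) X' - a (X(t) - X(t - tau)),
    with constant pre-history X(t) = 0 for t <= 0 and a small initial
    velocity v0 that kicks the system off the unstable rest state."""
    lag = int(round(tau / dt))
    hist = deque([0.0] * (lag + 1))   # hist[0] approximates X(t - tau)
    x, v = 0.0, v0
    traj = []
    for step in range(int(T / dt)):
        acc = (1.0 - 2.0 * v * v) * v - a * (x - hist[0])
        x += dt * v
        v += dt * acc
        hist.popleft()
        hist.append(x)
        traj.append((step * dt, x, v))
    return traj
```

Note that for $a\tau > 1$ a uniformly translating solution $X = vt$ would require $v^2 = (1-a\tau)/2 < 0$, so none exists; the simulated trajectory instead settles into the back-and-forth oscillation, whose velocity repeatedly changes sign after the transient.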
The frequency of the straight line orbit of the swarm center of mass is
approximately equal to the frequency of the circular orbit in Eq. \eqref{omega_CM_circle}. In addition, the amplitude of oscillation of the straight line orbit is approximately equal to the radius of the circular orbit of Eq. \eqref{rho_CM_circle} divided by a factor of $\sqrt{6}$.
\section*{Acknowledgments}
The authors gratefully acknowledge the Office of Naval Research for its
support. LMR and IBS are supported by
Award Number R01GM090204 from the National Institute
Of General Medical Sciences. The content is solely the
responsibility of the authors and does not necessarily represent
the official views of the National Institute Of General
Medical Sciences or the National Institutes of Health. EF
is supported by the Naval Research Laboratory (Award No.
N0017310-2-C007). We also extend our thanks to Kevin Lynch and M. Ani Hsieh for reading
early versions of the manuscript.